The Linkielist

Linking ideas with the world

Do those retail apps increase customer engagement and sales in all channels? In the US: Yes.

Researchers from Texas A&M University published new research in the INFORMS journal Marketing Science showing that retailers’ branded mobile apps are highly effective at increasing customer engagement and sales on multiple levels: not just on the retailer’s website, but also in its stores. At the same time, apps increase the rate of returns, although the gain in sales outweighs the rise in returns.

The study, to be published in the September edition of the INFORMS journal Marketing Science, is titled “Mobile App Introduction and Online and Offline Purchases and Product Returns,” and is authored by Unnati Narang and Venkatesh Shankar, both of the Mays Business School at Texas A&M University.

The study authors found that, over the 18 months following an app’s launch, retail app users buy 33 percent more frequently, buy 34 percent more items, and spend 37 percent more than customers who do not use the app.

At the same time, app users return products 35 percent more frequently, and they return 35 percent more items, with a 41 percent increase in the value of returns.

All factors considered, the researchers found that app users spend 36 percent more net of returns.

“Overall, we found that retail app users are significantly more engaged at every level of the retail experience, from making purchases to returning items,” said Narang. “Interestingly, we also found that app users tend to buy a more diverse set of items, including less popular products, than non-app users. This is particularly helpful for long-tail products, such as video games and music.”

“For the retailer, the lesson is that having a retail app will likely increase customer engagement and expand the range of products being sold online and in store,” added Shankar. “We also found that app users who make a purchase within 48 hours of using the app tend to use it when they are physically close to the store of purchase. They are most likely to access the app for loyalty rewards, product details and notifications.”

Source: Do those retail apps increase customer engagement and sales in all channels?

Managers rated as highly emotionally intelligent are more ineffective and unpopular, research shows

Professor Nikos Bozionelos, of the EMLyon Business School, France, and Dr. Sumona Mukhuty, Manchester Metropolitan University, asked staff in the NHS to assess their managers’ emotional intelligence—defined as their level of empathy and their awareness of their own and others’ emotions.

The 309 managers were also assessed on the amount of effort they put into the job, the staff’s overall satisfaction with their manager, and how well they implemented change within the NHS system.

Professor Bozionelos told the British Academy of Management’s annual conference in Birmingham today [Wednesday 4 September 2019] that beyond a certain point managers rated as having high emotional intelligence were also scored as lower for most of the outcomes.

Those managers rated in the top 15 percent for emotional intelligence were evaluated lower than those who scored in the 65th to 85th percentile, both in the amount of effort they put into the job and in how satisfied their subordinates were with them.

The NHS was undergoing fundamental reorganization at the time of the study, and managers rated as most emotionally intelligent were scored as less effective at implementing this change, but scored highly for their continuing involvement in the process.

“Increases in emotional intelligence beyond a moderately high level are detrimental rather than beneficial in terms of leader’s effectiveness,” said Professor Bozionelos.

“Managers who were rated beyond a particular threshold are considered less effective, and their staff are less satisfied with them.

“Too much emotional intelligence is associated with too much empathy, which in turn may make a manager hesitant to apply measures that he or she feels will impose excessive burden or discomfort to subordinates.”

The research contradicted the general assumption that the more emotional intelligence in a manager the better, he said, which had led to “an upsurge in investment in emotional intelligence training programs for leaders.”

“Beyond a particular level, emotional intelligence may not add anything to many aspects of a manager’s performance, and in fact may become detrimental. Simply assuming that the more emotional intelligence the manager has, the better, may be an erroneous way of thinking.”

The researchers took into account a host of factors, such as leaders’ age and biological sex, in order to study the effects of emotional intelligence in isolation.

Source: Managers rated as highly emotionally intelligent are more ineffective and unpopular, research shows

Scientists Say They’ve Found a New Organ in Skin That Processes Pain

Typically, it’s thought that we perceive harmful sensations on our skin entirely through the very sensitive endings of certain nerve cells. These nerve cells aren’t coated by a protective layer of myelin, as other types are. Nerve cells are kept alive by and connected to other cells called glia; outside of the central nervous system, one of the two major types of glia is the Schwann cell.

An illustration of nociceptive Schwann cells
Illustration: Abdo, et al (Science)

The authors of the new study, published Thursday in Science, say they were studying these helper cells near the skin’s surface in the lab when they came across something strange—some of the Schwann cells seemed to form an extensive “mesh-like network” with their nerve cells, differently than how they interact with nerve cells elsewhere. When they ran further experiments with mice, they found evidence that these Schwann cells play a direct, added role in pain perception, or nociception.

One experiment, for instance, involved breeding mice with these cells in their paws that could be activated when the mice were exposed to light. Once the light came on, the mice seemed to behave like they were in pain, such as by licking themselves or guarding their paws. Later experiments found that these cells—since dubbed nociceptive Schwann cells by the team—respond to mechanical pain, like being pricked or hit by something, but not to cold or heat.

Because these cells are spread throughout the skin as an intricately connected system, the authors argue that the system should be considered an organ.

“Our study shows that sensitivity to pain does not occur only in the skin’s nerve [fibers], but also in this recently discovered pain-sensitive organ,” said senior study author Patrik Ernfors, a pain researcher at Sweden’s Karolinska Institute, in a release from the university.

Source: Scientists Say They’ve Found a New Organ in Skin That Processes Pain

How Facebook is Using Machine Learning to Map the World Population

When it comes to knowing where humans around the world actually live, resources come in varying degrees of accuracy and sophistication.

Heavily urbanized and mature economies generally produce a wealth of up-to-date information on population density and granular demographic data. In rural Africa or fast-growing regions in the developing world, tracking methods cannot always keep up, or in some cases may be non-existent.

This is where new maps, produced by researchers at Facebook, come in. Building upon CIESIN’s Gridded Population of the World project, Facebook is using machine learning models on high-resolution satellite imagery to paint a definitive picture of human settlement around the world. Let’s zoom in.

Connecting the Dots

With all other details stripped away, human settlement can form some interesting patterns. One of the most compelling examples is Egypt, where 95% of the population lives along the Nile River. Below, we can clearly see where people live, and where they don’t.

While it is possible to use a tool like Google Earth to view nearly any location on the globe, the problem is analyzing the imagery at scale. This is where machine learning comes into play.

Finding the People in the Petabytes

High-resolution imagery of the entire globe takes up about 1.5 petabytes of storage, making the task of classifying the data extremely daunting. It’s only very recently that technology was up to the task of correctly identifying buildings within all those images.

To get the results we see today, researchers used a process of elimination to discard locations that couldn’t contain a building, then ranked the remaining locations by the likelihood that they contained one.

Facebook identified structures at scale using a process called weakly supervised learning. After training the model using large batches of photos, then checking over the results, Facebook was able to reach a 99.6% labeling accuracy for positive examples.
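The filter-then-rank approach described above can be sketched in a few lines of Python. This is a toy illustration, not Facebook’s actual pipeline: the tile format, the cheap elimination test, and the scoring rule are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a filter-then-rank pipeline for building detection.
# The tile format, elimination test, and scoring rule are illustrative
# assumptions, not Facebook's actual method.

def could_contain_building(tile):
    """Cheap elimination step: discard tiles that are obviously empty
    (e.g. open water or desert), approximated here by low pixel variance."""
    mean = sum(tile) / len(tile)
    variance = sum((p - mean) ** 2 for p in tile) / len(tile)
    return variance > 100.0  # assumed threshold

def building_likelihood(tile):
    """Stand-in for a trained classifier that scores surviving tiles.
    Here: a toy score based on contrast between neighboring pixels."""
    contrast = sum(abs(a - b) for a, b in zip(tile, tile[1:]))
    return min(1.0, contrast / (255.0 * len(tile)))

def rank_candidate_tiles(tiles):
    """Process of elimination, then rank survivors by likelihood."""
    survivors = [(i, t) for i, t in enumerate(tiles) if could_contain_building(t)]
    scored = [(i, building_likelihood(t)) for i, t in survivors]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Toy flattened grayscale "tiles" (pixel values 0-255):
flat_desert = [120] * 16              # uniform: eliminated early
settlement = [30, 220, 40, 210] * 4   # high contrast: survives and is ranked
print(rank_candidate_tiles([flat_desert, settlement]))
```

In a real system the cheap filter matters because it avoids running an expensive classifier over the vast majority of the 1.5 petabytes of imagery that clearly contains no buildings.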

Why it Matters

An accurate picture of where people live can be a matter of life and death.

For humanitarian agencies working in Africa, effectively distributing aid or vaccinating populations is still a challenge due to the lack of reliable maps and population density information. Researchers hope that these detailed maps will be used to save lives and improve living conditions in developing regions.

For example, Malawi is one of the world’s least urbanized countries, so finding its 19 million citizens is no easy task for people doing humanitarian work there. These maps clearly show where people live and allow organizations to create accurate population density estimates for specific areas.

Visit the project page for a full explanation and to access the full database of country maps.

Source: How Facebook is Using Machine Learning to Map the World Population

It Turns Out Bystanders Do Help Strangers in Need

Research dating back to the late 1960s documents how the great majority of people who witness crimes or violent behavior refuse to intervene.

Psychologists dubbed this non-response the “bystander effect”—a phenomenon which has been replicated in scores of subsequent psychological studies. The “bystander effect” holds that the reason people don’t intervene is that we look to one another. The presence of many bystanders diffuses our own sense of personal responsibility, leading people to essentially do nothing and wait for someone else to jump in.

Past studies have used police reports to estimate the effect, but results ranged widely, with intervention occurring in anywhere from 11 percent to 74 percent of incidents. Now, widespread surveillance cameras allow for a new method to assess real-life human interactions. A new study published this year in the American Psychologist finds that this well-established bystander effect may largely be a myth. The study uses footage of more than 200 incidents from surveillance cameras in Amsterdam; Cape Town; and Lancaster, England.

Researchers watched footage and coded the nature of the conflict, the number of direct participants in it, and the number of bystanders. Bystanders were defined as intervening if they attempted a variety of acts, including pacifying gestures, calming touches, blocking contact between parties, consoling victims of aggression, providing practical help to a physically harmed victim, or holding, pushing, or pulling an aggressor away. Each event had an average of 16 bystanders and lasted slightly more than three minutes.

The study finds that in nine out of 10 incidents, at least one bystander intervened, with an average of 3.8 interveners. There was also no significant difference across the three countries and cities, even though they differ greatly in levels of crime and violence.

Instead of more bystanders creating an immobilizing “bystander effect,” the study actually found the more bystanders there were, the more likely it was that at least someone would intervene to help. This is a powerful corrective to the common perception of “stranger danger” and the “unknown other.” It suggests that people are willing to self-police to protect their communities and others. That’s in line with the research of urban criminologist Patrick Sharkey, who finds that stronger neighborhood organizations, not a higher quantity of policing, have fueled the Great Crime Decline.
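The finding that larger crowds make help more likely is actually what a simple independence model predicts: even if each individual’s willingness to act falls as the crowd grows (the classic diffusion-of-responsibility effect), the chance that at least one person steps in can still rise. A back-of-the-envelope sketch, where the per-person probabilities are purely illustrative and not estimates from the study:

```python
# Chance that at least one of n bystanders intervenes, assuming each acts
# independently with probability p: P = 1 - (1 - p)^n.
# The per-person probabilities below are illustrative, not from the study.

def p_anyone_helps(n_bystanders, p_individual):
    return 1 - (1 - p_individual) ** n_bystanders

# Even if individual willingness falls as crowds grow...
for n, p in [(1, 0.40), (4, 0.25), (16, 0.15)]:
    print(f"{n:2d} bystanders, p={p:.2f} each -> "
          f"P(anyone helps) = {p_anyone_helps(n, p):.2f}")
# ...the aggregate chance that someone helps still increases with crowd size.
```

This doesn’t prove the study’s mechanism, but it shows the two claims — individuals feel less responsible, yet victims in crowds are more likely to receive help — are not contradictory.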

Source: How Often Will Bystanders Help Strangers in Need? – CityLab

Are Plants Conscious? Researchers Argue Not, but Agree They Are Intelligent

The remarkable ability of plants to respond to their environment has led some scientists to believe it’s a sign of conscious awareness. A new opinion paper argues against this position, saying plants “neither possess nor require consciousness.”

Many of us take it for granted that plants, which lack a brain or central nervous system, wouldn’t have the capacity for conscious awareness. That’s not to suggest, however, that plants don’t exhibit intelligence. Plants seem to demonstrate a startling array of abilities, such as computation, communication, recognizing overcrowding, and mobilizing defenses, among other clever vegetative tricks.

To explain these apparent behaviors, a subset of scientists known as plant neurobiologists has argued that plants possess a form of consciousness. Most notably, evolutionary ecologist Monica Gagliano has performed experiments that allegedly hint at capacities such as habituation (learning from experience) and classical conditioning (like Pavlov’s salivating dogs). In these experiments, plants apparently “learned” to stop curling their leaves after being dropped repeatedly or to spread their leaves in anticipation of a light source. Armed with this experimental evidence, Gagliano and others have claimed, quite controversially, that because plants can learn and exhibit other forms of intelligence, they must be conscious.

Nonsense, argues a new paper published today in Trends in Plant Science. The lead author of the new paper, biologist Lincoln Taiz from the University of California at Santa Cruz, isn’t denying plant intelligence, but makes a strong case against their being conscious.

Source: Plants Are Definitely Not Conscious, Researchers Argue

Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites (note, there’s lots of them influencing your unconscious to buy!)

Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions. We present automated techniques that enable experts to identify dark patterns on a large set of websites. Using these techniques, we study shopping websites, which often use dark patterns to influence users into making more purchases or disclosing more information than they would otherwise. Analyzing ~53K product pages from ~11K shopping websites, we discover 1,841 dark pattern instances, together representing 15 types and 7 categories. We examine the underlying influence of these dark patterns, documenting their potential harm on user decision-making. We also examine these dark patterns for deceptive practices, and find 183 websites that engage in such practices. Finally, we uncover 22 third-party entities that offer dark patterns as a turnkey solution. Based on our findings, we make recommendations for stakeholders including researchers and regulators to study, mitigate, and minimize the use of these patterns.

Dark patterns [31,47] are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making decisions that, if fully informed and capable of selecting alternatives, they might not make. Such interface design is an increasingly common occurrence on digital platforms including social media [45] and shopping websites [31], mobile apps [5,30], and video games [83]. At best, dark patterns annoy and frustrate users. At worst, dark patterns can mislead and deceive users, e.g., by causing financial loss [1,2], tricking users into giving up vast amounts of personal data [45], or inducing compulsive and addictive behavior in adults [71] and children [20].

While prior work [30,31,37,47] has provided a starting point for describing the types of dark patterns, there is no large-scale evidence documenting the prevalence of dark patterns, or a systematic and descriptive investigation of how the various different types of dark patterns harm users. If we are to develop countermeasures against dark patterns, we first need to examine where, how often, and the technical means by which dark patterns appear, and second, we need to be able to compare and contrast how various dark patterns influence user decision-making. By doing so, we can both inform users about and protect them from such patterns, and, given that many of these patterns are unlawful, aid regulatory agencies in addressing and mitigating their use.

In this paper, we present an automated approach that enables experts to identify dark patterns at scale on the web. Our approach relies on (1) a web crawler, built on top of OpenWPM [24,39], a web privacy measurement platform, to simulate a user browsing experience and identify user interface elements; (2) text clustering to extract recurring user interface designs from the resulting data; and (3) inspecting the resulting clusters for instances of dark patterns.

We also develop a novel taxonomy of dark pattern characteristics so that researchers and regulators can use descriptive and comparative terminology to understand how dark patterns influence user decision-making. While our automated approach generalizes, we focus this study on shopping websites. Dark patterns are especially common on shopping websites, which are used by an overwhelming majority of the American public [75], where they trick users into signing up for recurring subscriptions and making unwanted purchases, resulting in concrete financial loss. We use our web crawler to visit the ~11K most popular shopping websites worldwide, and from the resulting analysis create a large data set of dark patterns and document their prevalence. In doing so, we discover several new instances and variations of previously documented dark patterns [31,47]. We also classify the dark patterns we encounter using our taxonomy of dark pattern characteristics.
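The clustering step — grouping recurring interface text so a human expert inspects one representative per cluster instead of tens of thousands of pages — can be sketched as follows. This is a toy illustration using simple token overlap; the paper’s actual feature set and clustering algorithm differ.

```python
# Toy sketch of the "cluster recurring UI text" step: group similar strings
# so an expert inspects one representative per cluster. The similarity
# measure and threshold are illustrative assumptions, not the paper's method.

def similarity(a, b):
    """Jaccard overlap of lowercase word tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster_ui_strings(strings, threshold=0.5):
    """Greedy single-pass clustering: attach each string to the first
    cluster whose representative is similar enough, else start a new one."""
    clusters = []
    for s in strings:
        for c in clusters:
            if similarity(s, c[0]) >= threshold:
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

ui_texts = [
    "Only 3 left in stock!",
    "Only 2 left in stock!",
    "Hurry, only 5 left in stock!",
    "14 people are viewing this right now",
    "23 people are viewing this right now",
]
for c in cluster_ui_strings(ui_texts):
    print(c[0], "->", len(c), "instances")
```

The example strings above are typical scarcity and social-proof messages; in the real pipeline, a cluster of near-identical strings across many sites is exactly the signal that a templated dark pattern (often supplied by a third party) is in use.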

We have five main findings:

• We discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns and 7 broad categories.

• These 1,841 dark patterns were present on 1,267 of the ~11K shopping websites (~11.2%) in our data set. Shopping websites that were more popular, according to Alexa rankings [9], were more likely to feature dark patterns. This represents a lower bound on the number of dark patterns on these websites, since our automated approach only examined text-based user interfaces on a sample of product pages per website.

• Using our taxonomy of dark pattern characteristics, we classified the dark patterns we discovered on the basis of whether they lead to an asymmetry of choice, are covert in their effect, are deceptive in nature, hide information from users, and restrict choice. We also map the dark patterns in our data set to the cognitive biases they exploit. These biases collectively describe the consumer psychology underpinnings of the dark patterns we identified.

• In total, we uncovered 234 instances of deceptive dark patterns across 183 websites. We highlight the types of dark patterns we discovered that rely on consumer deception.

• We identified 22 third-party entities that provide shopping websites with the ability to create dark patterns on their sites. Two of these entities openly advertised practices that enable deceptive messages.

[…]

We developed a taxonomy of dark pattern characteristics that allows researchers, policy-makers and journalists to have a descriptive, comprehensive, and comparative terminology for understanding the potential harm and impact of dark patterns on user decision-making. Our taxonomy is based upon the literature on online manipulation [33,74,81] and dark patterns highlighted in previous work [31,47], and it consists of the following five dimensions, each of which poses a possible barrier to user decision-making:

• Asymmetric: Does the user interface design impose unequal weights or burdens on the available choices presented to the user in the interface? For instance, a website may present a prominent button to accept cookies on the web but hide the opt-out button in another page.

• Covert: Is the effect of the user interface design choice hidden from users? A website may develop interface design to steer users into making specific purchases without their knowledge. Often, websites achieve this by exploiting users’ cognitive biases, which are deviations from rational behavior justified by some “biased” line of reasoning [50]. In a concrete example, a website may leverage the Decoy Effect [51] cognitive bias, in which an additional choice (the decoy) is introduced to make certain other choices seem more appealing. Users may fail to recognize that the decoy’s presence is merely to influence their decision making, making its effect hidden from users.

• Deceptive: Does the user interface design induce false beliefs either through affirmative misstatements, misleading statements, or omissions? For example, a website may offer a discount to users that appears to be limited-time, but actually repeats when they visit the site again. Users may be aware that the website is trying to offer them a deal or sale; however, they may not realize that the influence is grounded in a false belief, in this case, because the discount is recurring. This false belief affects users’ decision-making, i.e., they may act differently if they knew that this sale is repeated.

• Hides Information: Does the user interface obscure or delay the presentation of necessary information to the user? For example, a website may not disclose, hide, or delay the presentation of information about charges related to a product from users. (Footnote from the paper: the scope of asymmetry is narrowed to refer only to explicit choices in the interface.)

• Restrictive: Does the user interface restrict the set of choices available to users? For instance, a website may only allow users to sign up for an account with existing social media accounts such as Facebook or Google, so it can gather more information about them.

In Section 5, we also draw an explicit connection between each dark pattern we discover and the cognitive biases they exploit. The biases we refer to in our findings are:

(1) Anchoring Effect [77]: The tendency for individuals to rely overly on an initial piece of information (the “anchor”) in future decisions.

(2) Bandwagon Effect [72]: The tendency for individuals to value something more because others seem to value it.

(3) Default Effect [53]: The tendency of individuals to stick with options that are assigned to them by default, due to inertia in the effort required to change the option.

(4) Framing Effect [78]: A phenomenon in which individuals may reach different decisions from the same information depending on how it is presented or “framed”.

(5) Scarcity Bias [62]: The tendency of individuals to place a higher value on things that are scarce.

(6) Sunk Cost Fallacy [28]: The tendency for individuals to continue an action once they have invested resources (e.g., time and money) into it, even if that action would make them worse off.
[…]
We discovered a total of 22 third-party entities, embedded in 1,066 of the 11K shopping websites in our data set, and in 7,769 of the Alexa top million websites. We note that the prevalence figures from the Princeton Web Census Project data should be taken as a lower bound since their crawls are limited to home pages of websites. […] we discovered that many shopping websites only embedded them in their product (and not home) pages, presumably for functionality and performance reasons.

[…]
Many of the third parties advertised practices that appeared to be, and sometimes unambiguously were, manipulative: “[p]lay upon [customers’] fear of missing out by showing shoppers which products are creating a buzz on your website” (Fresh Relevance), “[c]reate a sense of urgency to boost conversions and speed up sales cycles with Price Alert Web Push” (Insider), “[t]ake advantage of impulse purchases or encourage visitors over shipping thresholds” (Qubit). Further, Qubit also advertised Social Proof Activity Notifications that could be tailored to users’ preferences and backgrounds.

In some instances, we found that third parties openly advertised the deceptive capabilities of their products. For example, Boost dedicated a web page, titled “Fake it till you make it”, to describing how it could help create fake orders [12]. Woocommerce Notification, a Woocommerce platform plugin, also advertised that it could create fake social proof messages: “[t]he plugin will create fake orders of the selected products” [23]. Interestingly, certain third parties (Fomo, Proof, and Boost) used Social Proof Activity Messages on their own websites to promote their products.
[…]
These practices are unambiguously unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws [43]), the European Union (under the Unfair Commercial Practices Directive and similar member state laws [40]), and numerous other jurisdictions. We also find practices that are unlawful in a smaller set of jurisdictions. In the European Union, businesses are bound by an array of affirmative disclosure and independent consent requirements in the Consumer Rights Directive [41]. Websites that use the Sneaking dark patterns (Sneak into Basket, Hidden Subscription, and Hidden Costs) on European Union consumers are likely in violation of the Directive. Furthermore, user consent obtained through Trick Questions and Visual Interference dark patterns does not constitute freely given, informed and active consent as required by the General Data Protection Regulation (GDPR) [42]. In fact, the Norwegian Consumer Council filed a GDPR complaint against Google in 2018, arguing that Google used dark patterns to manipulate users into turning on the “Location History” feature on Android, thus enabling constant location tracking [46].

Source: Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites Draft: June 25, 2019 – dark-patterns.pdf

Upgrade your memory with a surgically implanted chip!

In a grainy black-and-white video shot at the Mayo Clinic in Minnesota, a patient sits in a hospital bed, his head wrapped in a bandage. He’s trying to recall 12 words for a memory test but can only conjure three: whale, pit, zoo. After a pause, he gives up, sinking his head into his hands.

In a second video, he recites all 12 words without hesitation. “No kidding, you got all of them!” a researcher says. This time the patient had help, a prosthetic memory aid inserted into his brain.

Over the past five years, the U.S. Defense Advanced Research Projects Agency (Darpa) has invested US$77 million to develop devices intended to restore the memory-generation capacity of people with traumatic brain injuries. Last year two groups conducting tests on humans published compelling results.

The Mayo Clinic device was created by Michael Kahana, a professor of psychology at the University of Pennsylvania, and the medical technology company Medtronic Plc. Connected to the left temporal cortex, it monitors the brain’s electrical activity and forecasts whether a lasting memory will be created. “Just like meteorologists predict the weather by putting sensors in the environment that measure humidity and wind speed and temperature, we put sensors in the brain and measure electrical signals,” Kahana says. If brain activity is suboptimal, the device provides a small zap, undetectable to the patient, to strengthen the signal and increase the chance of memory formation. In two separate studies, researchers found the prototype consistently boosted memory 15 per cent to 18 per cent.
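The closed-loop logic Kahana describes — monitor, predict, and stimulate only when the predicted outcome is poor — can be sketched as a simple control loop. Everything below (the signal feature, the predictor, the threshold) is a schematic assumption for illustration, not the actual device logic.

```python
# Schematic sketch of a closed-loop memory prosthesis: read a signal,
# predict whether memory encoding will succeed, and stimulate only when
# the prediction is poor. The predictor and threshold are illustrative
# assumptions, not the real Medtronic/Kahana system.

def predict_encoding_success(signal):
    """Stand-in for the trained classifier: here, predicted success
    rises with mean signal amplitude (a purely illustrative feature)."""
    amplitude = sum(abs(x) for x in signal) / len(signal)
    return min(1.0, amplitude / 10.0)

def closed_loop_step(signal, threshold=0.5):
    """One monitoring cycle: stimulate only when predicted success is low."""
    p = predict_encoding_success(signal)
    stimulate = p < threshold
    return p, stimulate

weak_signal = [0.5, -1.0, 0.8, -0.7]     # low amplitude -> stimulate
strong_signal = [8.0, -9.5, 7.2, -8.8]   # high amplitude -> leave alone

for name, sig in [("weak", weak_signal), ("strong", strong_signal)]:
    p, zap = closed_loop_step(sig)
    print(f"{name}: predicted success = {p:.2f}, stimulate = {zap}")
```

The key design point, as with the weather analogy in the quote, is that the device intervenes selectively: stimulation is withheld whenever the brain appears to be encoding the memory on its own.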

The second group performing human testing, a team from Wake Forest Baptist Medical Center in Winston-Salem, N.C., aided by colleagues at the University of Southern California, has a more finely tuned method. In a study published last year, their patients showed memory retention improvement of as much as 37 per cent. “We’re looking at questions like, ‘Where are my keys? Where did I park the car? Have I taken my pills?’ ” says Robert Hampson, lead author of the 2018 study.

To form memories, several neurons fire in a highly specific way, transmitting a kind of code. “The code is different for unique memories, and unique individuals,” Hampson says. By surveying a few dozen neurons in the hippocampus, the brain area responsible for memory formation, his team learned to identify patterns indicating correct and incorrect memory formation for each patient and to supply accurate codes when the brain faltered.

In presenting patients with hundreds of pictures, the group could even recognize certain neural firing patterns as particular memories. “We’re able to say, for example, ‘That’s the code for the yellow house with the car in front of it,’ ” says Theodore Berger, a professor of bioengineering at the University of Southern California who helped develop mathematical models for Hampson’s team.

Both groups have tested their devices only on epileptic patients with electrodes already implanted in their brains to monitor seizures; each implant requires clunky external hardware that won’t fit in somebody’s skull. The next steps will be building smaller implants and getting approval from the U.S. Food and Drug Administration to bring the devices to market. A startup called Nia Therapeutics Inc. is already working to commercialize Kahana’s technology.

Justin Sanchez, who just stepped down as director of Darpa’s biological technologies office, says veterans will be the first to use the prosthetics. “We have hundreds of thousands of military personnel with traumatic brain injuries,” he says. The next group will likely be stroke and Alzheimer’s patients. Eventually, perhaps, the general public will have access—though there’s a serious obstacle to mass adoption. “I don’t think any of us are going to be signing up for voluntary brain surgery anytime soon,” Sanchez says. “Only when these technologies become less invasive, or noninvasive, will they become widespread.”

Source: Upgrade your memory with a surgically implanted chip! – BNN Bloomberg

Infographic: How Different Generations Approach Work

How Different Generations Approach Work

The first representatives of Generation Z have started to trickle into the workplace – and like generations before them, they are bringing a different perspective to things.

Did you know that there are now up to five generations working under any given roof, ranging all the way from the Silent Generation (born pre-WWII) to the aforementioned Gen Z?

Let’s see how these generational groups differ in their approaches to communication, career priorities, and company loyalty.

Generational Differences at Work

Today’s infographic comes to us from Raconteur, and it breaks down some key differences in how generational groups are thinking about the workplace.

Let’s dive deeper into the data for each category.

Communication

How people prefer to communicate is one major and obvious difference that manifests itself between generations.

While many in older generations have dabbled in new technologies and trends around communications, it’s less likely that they will internalize those methods as habits. Meanwhile, for younger folks, these newer methods (chat, texting, etc.) are what they grew up with.

Top three communication methods by generation:

  • Baby Boomers:
    40% of communication is in person, 35% by email, and 13% by phone
  • Gen X:
    34% of communication is in person, 34% by email, and 13% by phone
  • Millennials:
    33% of communication is by email, 31% is in person, and 12% by chat
  • Gen Z:
    31% of communication is by chat, 26% is in person, and 16% by email

Motivators

Meanwhile, the generations are divided on what motivates them in the workplace. Boomers rank health insurance as a top decision factor, while younger groups view salary and pursuing a passion as key elements of a successful career.

Three most important work motivators by generation (in order):

  • Baby Boomers:
    Health insurance, a boss worthy of respect, and salary
  • Gen X:
    Salary, job security, and job challenges/excitement
  • Millennials:
    Salary, job challenges/excitement, and ability to pursue passion
  • Gen Z:
    Salary, ability to pursue passion, and job security

Loyalty

Finally, generational groups have varying perspectives on how long they would be willing to stay in any one role.

  • Baby Boomers: 8 years
  • Gen X: 7 years
  • Millennials: 5 years
  • Gen Z: 3 years

Given the above differences, employers will have to think clearly about how to attract and retain talent across a wide range of generations. Further, employers will have to learn what motivates each group, as well as what makes each feel most comfortable in the workplace.

Source: Infographic: How Different Generations Approach Work

Internet Meme Pioneer YTMND Shuts Down

You’re the Man Now Dog, a pioneer in the internet meme space, has shut down.

The online community at YTMND.com allowed users to upload an image or a GIF and pair it with audio for hilarious results. Traffic to the website, however, dried up years ago with the rise of Facebook, Twitter, and YouTube. In 2016, site creator Max Goldberg said YTMND would likely shut down soon due to declining ad revenue and his ill health.

“It seems like the internet has moved on,” Goldberg told Gizmodo at the time.

The site dates back to 2001 when Goldberg paired a looping audio clip of Sean Connery uttering the line “You’re the man now, dog!” with some text and placed it all on a webpage, Yourethemannowdog.com.

In 2004, Goldberg expanded on that with a site that let users pair images with audio, so they could create clips and post them online. The end result was YTMND, which by 2006 was reportedly amassing 4 million visitors a month and 120,000 contributors. By 2012, it had almost a million pages devoted to user-created memes. But it couldn’t compete with the rise of social media and the smartphone.

What prompted Goldberg to finally pull the plug on the site in recent days isn’t clear. He and the site didn’t immediately respond to a request for comment. However, all the pages have been saved on the Internet Archive and its Wayback Machine. So you’ll still be able to enjoy all the site’s content for nostalgia’s sake.

Source: Internet Meme Pioneer YTMND Shuts Down | News & Opinion | PCMag.com

A real real shame

New study shows scientists who selfie garner more public trust

The study builds on seminal work by Princeton University social psychologist Susan Fiske suggesting that scientists have earned Americans’ respect but not their trust. Trust depends on two perceived characteristics of an individual or social group: competence and warmth. Perceptions of competence involve the belief that members of a particular social group are intelligent and have the skills to achieve their goals. Perceptions of warmth involve the belief that the members of this group also have benevolent goals, or that they are friendly, altruistic, honest and share common values with people outside of their group. Together, perceptions of competence and warmth determine all group stereotypes, including stereotypes of scientists.

“Scientists are famously competent—people report we’re smart, curious, lab nerds—but they’re silent about scientists’ more human qualities,” Fiske said.

While perceptions of both the competence and the warmth of members of a social group are important in determining trust and even action, it turns out that perceived warmth is more important. And, as Fiske showed in a study published in 2014 in Proceedings of the National Academy of Sciences, Americans see scientists as competent but only as moderately warm. Scientists’ perceived warmth is on par with that of retail workers, bus drivers and construction workers but far below that of doctors, nurses and teachers.

The researchers of the new PLOS ONE study launched the investigation into perceptions of scientist Instagrammers after being struck by the idea that the competence-versus-warmth stereotype of scientists may not be an insurmountable challenge, given the power of social media to bring scientists and nonscientists together.

[…]

To explore this idea, the team launched a project popularly referred to as ScientistsWhoSelfie, based on the hashtag the researchers introduced to raise awareness about the project in an online crowdfunding campaign that raised more than $10,000. A few dozen scientists around the globe helped to develop a series of images for the project.

The idea was to show research participants images published to one of four different “Scientists of Instagram” rotation-curation accounts and then to ask them questions about their perceptions of the scientists represented in these images as well as of scientists in general. Each participant was shown one of three types of images: a scientific setting or a piece of equipment, such as a microscope, a bioreactor on the lab bench or a plant experiment set up in a greenhouse, with no humans in any of the images but with captions attributing the images to either male or female scientists by name; a smiling male scientist looking at the camera in the same scientific setting; or a smiling female scientist looking at the camera in the same scientific setting.

A total of 1,620 U.S.-representative participants recruited online viewed these images in an online survey. People who saw images including a scientist’s smiling face, or “scientist selfies,” evaluated the scientists in the images and scientists in general as significantly warmer than people who saw control images or images of scientific environments or equipment that did not include a person. This perception of scientists as warm was especially prominent among people who saw images featuring a female scientist’s face, as female scientists in selfies were evaluated as significantly warmer than male scientists in selfies or science-only images. There was also a slight increase in the perceived competence of female scientists in selfies. Competence cues such as lab coats and equipment likely played a role in preserving the perceived competence of scientists in selfies.

“Seeing scientist selfies, but not images of scientific objects posted by scientists online, boosted perceptions that scientists are both competent and warm,” said lead author LSU alumna Paige Jarreau, who is a former LSU science communication specialist and current director of social media and science communication at LifeOmic. “We think this is because people who viewed science images with a scientist’s face in the picture began to see these scientist communicators on Instagram not as belonging to some unfamiliar group of stereotypically socially inept geniuses, but as individuals and even as ‘everyday’ people with ‘normal’ interests—people who, like us, enjoy taking selfies! Female scientists, in particular, when represented in substantial numbers and diversity, may cause viewers to re-evaluate stereotypical perceptions of who a scientist is.”

The team further found that seeing a series of female scientist selfies on Instagram significantly shifted gender-related science stereotypes, namely those that associate STEM fields with being male. However, they also found that people who saw female scientist selfies evaluated these scientists as significantly more attractive than male scientists in selfies. This might help explain female scientists’ boosted warmth evaluations, as physical attractiveness is positively associated with perceived warmth. However, this could also be an indicator that viewers focused more on the physical appearance of female scientists than on that of male scientists. By extension, female scientists could be more unfairly evaluated for defying gender norms in their selfies, such as not smiling or appearing warm. In their PLOS ONE paper, the team writes that this possibility should be investigated further in future research.

Source: New study shows scientists who selfie garner more public trust

The Role of Luck in Life Success Is Far Greater Than We Realized – Scientific American Blog Network

There is a deep underlying assumption, however, that we can learn from them because it’s their personal characteristics, such as talent, skill, mental toughness, hard work, tenacity, optimism, growth mindset, and emotional intelligence, that got them where they are today.

[…]

But is this assumption correct? I have spent my entire career studying the psychological characteristics that predict achievement and creativity. While I have found that a certain number of traits, including passion, perseverance, imagination, intellectual curiosity, and openness to experience, do significantly explain differences in success, I am often intrigued by just how much of the variance is left unexplained.

In recent years, a number of studies and books, including those by risk analyst Nassim Taleb, investment strategist Michael Mauboussin, and economist Robert Frank, have suggested that luck and opportunity may play a far greater role than we ever realized, across a number of fields, including financial trading, business, sports, art, music, literature, and science. Their argument is not that luck is everything; of course talent matters. Instead, the data suggests that we miss out on a really important piece of the success picture if we focus only on personal characteristics in attempting to understand the determinants of success.

[…]

Consider some recent findings:

The importance of the hidden dimension of luck raises an intriguing question: Are the most successful people mostly just the luckiest people in our society? If this were even a little bit true, then this would have some significant implications for how we distribute limited resources, and for the potential for the rich and successful to actually benefit society (versus benefiting themselves by getting even more rich and successful).

[…]

Many meritocratic strategies used to assign honors, funds, or rewards are often based on the past success of the person. Selecting individuals in this way creates a state of affairs in which the rich get richer and the poor get poorer (often referred to as the “Matthew effect”). But is this the most effective strategy for maximizing potential? Which is a more effective funding strategy for maximizing impact to the world: giving large grants to a few previously successful applicants, or a number of smaller grants to many average-successful people? This is a fundamental question about the distribution of resources, which needs to be informed by actual data.

Consider a study conducted by Jean-Michel Fortin and David Currie, who looked at whether larger grants lead to larger discoveries. They found a positive, but only very small relationship between funding and impact (as measured by four indices relating to scientific publications). What’s more, those who received a second grant were not more productive than those who only received a first grant, and impact was generally a decelerating function of funding.

[…]

the best funding strategy of them all was one in which an equal amount of funding was distributed to everyone. Distributing funds at a rate of 1 unit every five years resulted in 60% of the most talented individuals having a greater than average level of success, and distributing funds at a rate of 5 units every five years resulted in 100% of the most talented individuals having an impact! This suggests that if a funding agency or government has more money available, it would be wise to distribute that extra money to everyone, rather than to only a select few.
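The funding comparison above comes from an agent-based simulation. As a rough illustration only, here is a minimal Python sketch of a talent-and-luck model in that spirit; the talent distribution, event probabilities, and funding schedule below are illustrative assumptions, not the study's actual parameters.

```python
import random

def simulate(n_agents=1000, years=40, fund_rate=1.0, seed=0):
    """Toy talent-vs-luck model: talent is normally distributed, luck
    strikes at random, and lucky events pay off only in proportion to
    talent. Funding is distributed equally to everyone every five years.
    All numeric parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    talent = [min(max(rng.gauss(0.6, 0.1), 0.0), 1.0) for _ in range(n_agents)]
    capital = [10.0] * n_agents
    for step in range(years * 2):                 # one step = six months
        if step > 0 and step % 10 == 0:           # every five years: equal funding
            capital = [c + fund_rate for c in capital]
        for i in range(n_agents):
            r = rng.random()
            if r < 0.03:                          # unlucky event: capital halves
                capital[i] /= 2
            elif r < 0.06 and rng.random() < talent[i]:
                capital[i] *= 2                   # lucky event, exploited via talent
    return talent, capital
```

Raising `fund_rate` lifts the capital of talented agents across the board, mirroring the article's point that spreading funds widely gives more of the most talented a chance to succeed.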

[…]

The results of this elucidating simulation, which dovetail with a growing number of studies based on real-world data, strongly suggest that luck and opportunity play an underappreciated role in determining the final level of individual success. As the researchers point out, since rewards and resources are usually given to those who are already highly rewarded, this often causes a lack of opportunities for those who are most talented (i.e., have the greatest potential to actually benefit from the resources), and it doesn’t take into account the important role of luck, which can emerge spontaneously throughout the creative process. The researchers argue that the following factors are all important in giving people more chances of success: a stimulating environment rich in opportunities, a good education, intensive training, and an efficient strategy for the distribution of funds and resources. They argue that at the macro-level of analysis, any policy that can influence these factors will result in greater collective progress and innovation for society (not to mention immense self-actualization of any particular individual).

Source: The Role of Luck in Life Success Is Far Greater Than We Realized – Scientific American Blog Network

Scientists find genetic mutation that makes woman feel no pain

Doctors have identified a new mutation in a woman who is barely able to feel pain or stress after a surgeon who was baffled by her recovery from an operation referred her for genetic testing.

Jo Cameron, 71, has a mutation in a previously unknown gene which scientists believe must play a major role in pain signalling, mood and memory. The discovery has boosted hopes of new treatments for chronic pain which affects millions of people globally.

Cameron, a former teacher who lives in Inverness, has experienced broken limbs, cuts and burns, childbirth and numerous surgical operations with little or no need for pain relief. She sometimes leans on the Aga and knows about it not from the pain, but from the smell.

[…]

But it is not only an inability to sense pain that makes Cameron stand out: she also never panics. When a van driver ran her off the road two years ago, she climbed out of her car, which was on its roof in a ditch, and went to comfort the shaking young driver who cut across her. She only noticed her bruises later. She is relentlessly upbeat, and in stress and depression tests she scored zero.

[…]

In a case report published on Thursday in the British Journal of Anaesthesia, the UCL team describe how they delved into Cameron’s DNA to see what makes her so unusual. They found two notable mutations. Together, they suppress pain and anxiety, while boosting happiness and, apparently, forgetfulness and wound healing.

The first mutation the scientists spotted is common in the general population. It dampens down the activity of a gene called FAAH. The gene makes an enzyme that breaks down anandamide, a chemical in the body that is central to pain sensation, mood and memory. Anandamide works in a similar way to the active ingredients of cannabis. The less it is broken down, the more its analgesic and other effects are felt.

The second mutation was a missing chunk of DNA that mystified scientists at first. Further analysis showed that the “deletion” chopped the front off a nearby, previously unknown gene the scientists named FAAH-OUT. The researchers think this new gene works like a volume control on the FAAH gene. Disable it with a mutation like Cameron has and FAAH falls silent. The upshot is that anandamide, a natural cannabinoid, builds up in the system. Cameron has twice as much anandamide as those in the general population.

Source: Scientists find genetic mutation that makes woman feel no pain | Science | The Guardian

New research indicates we transition between 19 different brain phases when sleeping

A rigorous new study has examined the large-scale brain activity of a number of human subjects while sleeping, presenting one of the most detailed investigations into sleep phases conducted to date. The study suggests that instead of the four traditional sleep stages the brain is generally understood to move through, there are in fact at least 19 identifiable brain patterns it transitions through while sleeping.

Traditionally, scientists have identified four distinct stages our brain transitions through in a general sleep cycle – three non-REM sleep phases (N1-3) that culminate in a REM phase. The four stages have been classically determined and delineated using electroencephalographic (EEG) brainwave recordings.

“This way of dividing sleep into stages is really based on historical conventions, many of which date back to the 1930s,” explains Angus Stevner, one of the researchers on the project from the Center for Music in the Brain at Aarhus University. “We’ve come up with a more precise and detailed description of sleep as a higher number of brain networks which change their communication patterns and dynamic characteristics during sleep.”

The new research set out to more comprehensively record whole-brain activity in a number of subjects by using functional magnetic resonance imaging (fMRI). The team began by studying 57 healthy subjects in an fMRI scanner. Each subject was asked to lie in the scanner for 52 minutes with their eyes closed. At the same time, each subject was tracked using an EEG. This allowed the researchers to compare traditional brainwave sleep cycle data with that from the fMRI.

Due to the limited duration of the fMRI data, no subjects were found to enter REM sleep; however, 18 subjects did completely transition from wakefulness through the three non-REM sleep phases according to the EEG data. Highlighting the complexity of brain activity during our wake-to-sleep cycle, the researchers confidently chronicled 19 different recurring whole-brain network states.

Mapping these whole-brain states onto traditional EEG-tracked sleep phases revealed a number of compelling correlations. Wakefulness, N2 sleep and N3 sleep all could be represented by specific whole brain states. The range of different fMRI-tracked brain states did reduce as subjects fell into deeper sleep phases, with two different fMRI brain states correlating with N2 sleep, and only one with N3. However, N1 sleep as identified by EEG data, the earliest and least clearly defined sleep phase, did not consistently correspond with any fMRI brain state.

The researchers conclude from this data that N1 is actually a much more complex sleep phase than previously understood. This phase, a strange mix of wakefulness and sleep, seemed to encompass a large range of the 19 different whole-brain network states identified in the fMRI data.
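The study's state detection is far more sophisticated than this, but the core idea, grouping similar momentary activity patterns into a small set of recurring whole-brain states, can be illustrated with a plain k-means sketch. The synthetic data, distance metric, and choice of k below are illustrative assumptions, not the paper's method.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Assign each time point's activity pattern (a tuple of numbers) to
    one of k recurring 'states' by iterative centroid refinement."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)

    def nearest(p):
        # index of the closest center by squared Euclidean distance
        return min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[nearest(p)].append(p)
        for c, grp in enumerate(groups):
            if grp:                               # keep the old center if a group is empty
                dim = len(grp[0])
                centers[c] = tuple(sum(p[d] for p in grp) / len(grp) for d in range(dim))
    return [nearest(p) for p in points]
```

In the study's terms, each cluster label would correspond to one recurring brain state, and the sequence of labels over time would show how the brain transitions between states.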

Source: New research indicates we transition between 19 different brain phases when sleeping

Humans Built Complex Societies Before They Invented Moral Gods

The appearance of moralizing gods in religion occurred after—and not before—the emergence of large, complex societies, according to new research. This finding upturns conventional thinking on the matter, in which moralizing gods are typically cited as a prerequisite for social complexity.

Gods who punish people for their anti-social indiscretions appeared in religions after the emergence and expansion of large, complex societies, according to new research published today in Nature. The finding suggests religions with moralizing gods, or prosocial religions, were not a necessary requirement for the evolution of social complexity. It was not until the emergence of diverse, multi-ethnic empires with populations exceeding a million people that moralizing gods began to appear—a change to religious beliefs that likely worked to ensure social cohesion.

Belief in vengeful gods who punish populations for their indiscretions, such as failing to perform a ritual sacrifice or an angry thunderbolt response to a direct insult, is endemic in human history (what the researchers call “broad supernatural punishment”). It’s much rarer for religions, however, to involve deities who enforce moral codes and punish followers for failing to act in a prosocial manner. It’s not entirely clear why prosocial religions emerged, but the “moralizing high gods” hypothesis is often invoked as an explanation. Belief in a moralizing supernatural force, the argument goes, was culturally necessary to foster cooperation among strangers in large, complex societies.

Source: Humans Built Complex Societies Before They Invented Moral Gods

Studies Keep Showing That the Best Way to Stop Piracy Is to Offer Cheaper, Better Alternatives

Study after study continues to show that the best approach to tackling internet piracy is to provide would-be customers with high-quality, low-cost alternatives.

For decades the entertainment industry has waged a scorched-earth assault on internet pirates. Usually this involves either filing mass lawsuits against these users, or in some instances trying to kick them off the internet entirely. These efforts historically have not proven successful.

Throughout that time, data has consistently showcased how treating such users like irredeemable criminals may not be the smartest approach. For one, studies show that pirates are routinely among the biggest purchasers of legitimate content, and when you provide these users access to above-board options, they’ll usually take you up on the proposition.

That idea was again supported by a new study this week out of New Zealand first spotted by TorrentFreak. The study, paid for by telecom operator Vocus Group, surveyed a thousand New Zealanders last December, and found that while half of those polled say they’ve pirated content at some point in their lives, those numbers have dropped as legal streaming alternatives have flourished.

The study found that 11 percent of New Zealand consumers still obtain copyrighted content via illegal streams, and 10 percent download infringing content via BitTorrent or other platforms. But it also found that users are increasingly likely to obtain that same content via over-the-air antennas (75 percent) or legitimate streaming services like Netflix (55 percent).

“In short, the reason people are moving away from piracy is that it’s simply more hassle than it’s worth,” Vocus Group NZ executive Taryn Hamilton said in a statement.

Historically, the entertainment industry has attempted to frame pirates as freeloaders exclusively interested in getting everything for free. In reality, it’s wiser to view them as frustrated potential consumers who’d be happy to pay for content if it was more widely available, Hamilton noted.

“The research confirms something many internet pundits have long instinctively believed to be true: piracy isn’t driven by law-breakers, it’s driven by people who can’t easily or affordably get the content they want,” she said.

But it’s far more than just instinct. Studies from around the world consistently come to the same conclusion, says Annemarie Bridy, a University of Idaho law professor specializing in copyright.

Bridy pointed to a number of international, US, and EU studies that all show that users will quickly flock to above-board options when available, especially given the potential privacy and security risks involved in downloading pirated content from dubious sources.

“This is especially true given that ‘pirate sites’ are now commonly full of malware and other malicious content, making them risky for users,” Bridy said. “It seems like a no-brainer that when you lower barriers to legal content acquisition in the face of rising barriers to illegal content acquisition, users opt for legal content.”

Source: Studies Keep Showing That the Best Way to Stop Piracy Is to Offer Cheaper, Better Alternatives – Motherboard

Incredible Experiment Gives Infrared Vision to Mice—and Humans Could Be Next

By injecting nanoparticles into the eyes of mice, scientists gave them the ability to see near-infrared light—a wavelength not normally visible to rodents (or people). It’s an extraordinary achievement, one made even more extraordinary with the realization that a similar technique could be used in humans.

Of all the remarkable things done to mice over the years, this latest achievement, described today in the science journal Cell, is among the most sci-fi.

A research team, led by Tian Xue from the University of Science and Technology of China and Gang Han from the University of Massachusetts Medical School, modified the vision of mice such that they were able to see near-infrared light (NIR), in addition to retaining their natural ability to see normal light. This was done by injecting special nanoparticles into their eyes, with the effect lasting for around 10 weeks and without any serious side effects.

[…]

Drops of fluid containing the tiny particles were injected directly into their eyes, where, using special anchors, they latched on tightly to photoreceptor cells. Photoreceptor cells—the rods and cones—normally absorb the wavelengths of incoming visible light, which the brain interprets as sight. In the experiment, however, the newly introduced nanoparticles upconverted incoming NIR into a visible wavelength, which the mouse brain was then capable of processing as visual information (in this case, they saw NIR as greenish light). The nanoparticles clung on for nearly two months, allowing the mice to see both NIR and visible light with minimal side effects.

Graphical representation of the process in action. When infrared light (red) reaches a photoreceptor cell (light green circle), the nanoparticles (pink circles) convert the light into visible green light.
Image: Cell

Essentially, the nanoparticles on the photoreceptor cells served as a transducer, or converter, for infrared light. The longer infrared wavelengths were captured in the retina by the nanoparticles, which then relayed them as shorter wavelengths within the visible light range. The rods and cones—which are built to absorb the shorter wavelengths—were thus able to accept this signal, and then send this upconverted information to the visual cortex for processing. Specifically, the injected particles absorbed NIR around 980 nanometers in wavelength and converted it to light in the area of 535 nanometers. For the mice, this translated to seeing the infrared light as the color green. The result was similar to seeing NIR with night-vision goggles, except that the mice were able to retain their normal view of visible light as well.
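A quick back-of-the-envelope check shows why this upconversion is energetically plausible. Upconverting nanoparticles typically absorb two (or more) low-energy NIR photons for every higher-energy visible photon they emit — a general property of such particles rather than a detail stated in the excerpt. Two 980 nm photons carry slightly more energy than one 535 nm photon, with the surplus dissipated inside the particle:

```python
# Planck's constant times the speed of light, expressed in eV·nm
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a given wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

e_nir = photon_energy_ev(980)   # energy per absorbed NIR photon
e_vis = photon_energy_ev(535)   # energy per emitted green photon

# Two NIR photons supply ≈ 2.53 eV, enough for one 535 nm photon (≈ 2.32 eV),
# leaving ≈ 0.21 eV to be lost non-radiatively.
print(round(2 * e_nir, 2), round(e_vis, 2))  # → 2.53 2.32
```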

[…]

Looking ahead, Tian and Gang would like to improve the technique with organic-based nanoparticles composed of FDA-approved compounds, which could result in even brighter infrared vision. They’d also like to tweak the technique to make it more responsive to human biology. Optimistic about where this technology is headed, Tian and Gang have already filed a patent application related to their work.

I can already imagine the television commercials: “Ask your doctor if near-infrared vision is right for you.”

[Cell]

Source: Incredible Experiment Gives Infrared Vision to Mice—and Humans Could Be Next

In small groups, people follow high-performing leaders

Researchers at the NYU Tandon School of Engineering have cracked the code on how leaders arise from small groups of people over time. The work is detailed in a study, “Social information and Spontaneous Emergence of Leaders in Human Groups,” published in The Royal Society Interface.

[…]

To conduct the research, the team convened several groups of five volunteers each to participate in a cognitive test arranged in 10 consecutive rounds. The task involved estimating the number of dots displayed for just half a second on a large screen. In each round, participants were asked to choose one from multiple answers using a custom-made clicker, without verbally communicating with one another. Because the dots were visible for only an instant, group members, lacking the time to count them, had to venture a guess. However, the experiments were structured so that participants could alter their answers based on the answers of others in their group: once all participants had chosen their initial answers, the screen—viewable by all—displayed the current answers of all members along with their past performance in selecting correct responses. Participants then had a 10-second window in which to change their responses based on those of the others in the group.

The researchers, analyzing how participant responses evolved over the course of the experiment, found that individuals did not choose the simple majority rule, as posited by the wisdom of crowds. Rather, they dynamically decided whom to follow in making decisions, based on how well each group member performed over time. Based on their observations, the researchers inferred a dynamic evolution of the network of interaction, in which participants were nodes and the links were the consequences of social influence. For example, the investigators generated a link from one participant to another if the first had changed his or her answer to that of the second. The speed at which the network grew increased over the course of each of the rounds.
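The follow-the-best-performer dynamic described above can be sketched as a toy simulation: agents guess a dot count, see everyone's answers and track records, and sometimes switch to the answer of the member with the most correct guesses so far. The accuracies, switching probability, and scoring below are illustrative assumptions, not the paper's fitted model.

```python
import random

def run_rounds(n_agents=5, n_rounds=10, p_switch=0.5, seed=1):
    """Toy leader-emergence model for a small group guessing dot counts."""
    rng = random.Random(seed)
    skill = [rng.uniform(0.2, 0.9) for _ in range(n_agents)]  # chance of a correct first guess
    score = [0] * n_agents      # past correct answers, visible to the whole group
    followed = [0] * n_agents   # how often each agent's answer was copied
    for _ in range(n_rounds):
        truth = rng.randint(50, 150)
        guesses = [truth if rng.random() < skill[i] else truth + rng.randint(1, 20)
                   for i in range(n_agents)]
        best = max(range(n_agents), key=lambda i: score[i])   # best track record so far
        for i in range(n_agents):
            if i != best and rng.random() < p_switch:         # social influence step
                guesses[i] = guesses[best]
                followed[best] += 1
        for i in range(n_agents):
            if guesses[i] == truth:
                score[i] += 1
    return skill, score, followed
```

Over repeated rounds, copies tend to concentrate on the agents with the best records, so `followed` ends up dominated by high-`skill` members — the emergence of a leader.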

“Individuals used social information more and more over time, and the more accurate the information, the more influence it had over participants’ choices,” said Porfiri. “Therefore, the relationship between participants’ performance and their social influence was reinforced over time, resulting in the emergence of group leaders.”

To discern the influence of social networks within evolving group dynamics, the investigators noted that:

  • Participants were influenced by the responses of others in changing their answers. On average, participants changed answers to ones that nobody had selected only about 5 percent of the time. (There were more than 9 instances over all rounds in which participants changed answers to those of others and only 0.6 in which they changed answers to those no one had selected.)
  • Participants were more likely to be copied by others if their performances were good, even if their answers differed from those of the group majority.

Nakayama, the lead author, explained that the behavior of small groups is strikingly different than that of much larger gatherings of people.

“Where a large crowd would adopt a simple majority rule, with an increase in the accuracy of performance over repeated interactions, individuals rely more on social than personal information and, as a consequence, good performers would emerge as leaders, exerting a stronger influence on others over time,” he said.


Source: In small groups, people follow high-performing leaders

New experimental drug rapidly repairs age-related memory loss and improves mood

A team of Canadian scientists has developed a fascinating new experimental drug that is purported to result in rapid improvements to both mood and memory following extensive animal testing. It’s hoped the drug will move to human trials within the next two years.

Gamma-aminobutyric acid (GABA) is a key neurotransmitter, and when altered it can play a role in the development of everything from psychiatric conditions to cognitive degeneration. Benzodiazepines, such as Xanax or Valium, are a class of drugs well known to function by modulating the brain’s GABA systems.

[…]

In animal tests the drug has been found to be remarkably effective, with old mice displaying rapid improvements in memory tests within an hour of administration, resulting in performance similar to that of young mice. Daily administration of the drug over two months also resulted in actual structural regrowth of brain cells, returning their brains to a state resembling that of a young animal.

“The aged cells regrew to appear the same as young brain cells, showing that our novel molecules can modify the brain in addition to improving symptoms,” says Sibille.

The experimental drug is not some miracle cognitive enhancer however, with no beneficial effects seen when administered to younger mice. So it seems likely the drug’s modulations to the brain’s GABA systems is directly related to normalizing either age- or stress-related disruptions.

[…]

If it proves safe and efficacious it could be a useful preventative tool, administered in short bursts to subjects in their 50s or 60s to slow the onset of age-related dementia and cognitive impairment.

The new study was published in the journal Molecular Neuropsychiatry.

Source: New experimental drug rapidly repairs age-related memory loss and improves mood

Finland basic income trial left people ‘happier but jobless’

Giving jobless people in Finland a basic income for two years did not lead them to find work, researchers said.

From January 2017 until December 2018, 2,000 unemployed Finns got a monthly flat payment of €560 (£490; $634).

The aim was to see if a guaranteed safety net would help people find jobs, and support them if they had to take insecure gig economy work.

While employment levels did not improve, participants said they felt happier and less stressed.

When it launched the pilot scheme back in 2017, Finland became the first European country to test out the idea of an unconditional basic income. It was run by the Social Insurance Institution (Kela), a Finnish government agency, and involved 2,000 randomly-selected people on unemployment benefits.

It immediately attracted international interest – but these results have now raised questions about the effectiveness of such schemes.

[…]

Although it’s enjoying a resurgence in popularity, the idea isn’t new. In fact, it was first described in Sir Thomas More’s Utopia, published in 1516 – a full 503 years ago.

Such schemes are being trialled all over the world. Adults in a village in western Kenya are being given $22 a month for 12 years, until 2028, while the Italian government is working on introducing a “citizens’ income”. The city of Utrecht, in the Netherlands, is also carrying out a basic income study called Weten Wat Werkt – “Know What Works” – until October.

[…]

Did it help unemployed people in Finland find jobs, as the centre-right Finnish government had hoped? No, not really.

Mr Simanainen says that while some individuals found work, they were no more likely to do so than a control group of people who weren’t given the money. They are still trying to work out exactly why this is, for the final report that will be published in 2020.

But for many people, the original goal of getting people into work was flawed to begin with. If instead the aim were to make people generally happier, the scheme would have been considered a triumph.

[…]

Researchers from Kela are now busy analysing all of their results, to figure out what else – if anything – they can tell us about basic income’s uses and shortcomings.

Mr Simanainen says that he doesn’t like to think of the trial as having “failed”.

From his point of view, “this is not a failure or success – it is a fact, and [gives us] new information that we did not have before this experiment”.

Source: Finland basic income trial left people ‘happier but jobless’ – BBC News

Decision making in space – different than on earth

Dr. Elisa Ferre, senior lecturer in psychology, and Maria Gallagher, lead author and Ph.D. student, both from Royal Holloway, investigated how alterations in gravity changed decision making.

Astronauts are given thorough training and the right equipment, but are rarely prepared for how their brains will work millions of miles away from Earth, when making decisions away from the comfort of terrestrial gravity.

The experiment saw participants take part in a Random Number Generation task: sitting upright, with the natural pull of gravity on their bodies, they were asked to shout out a number between one and nine every time they heard a beep. The task was then repeated with the participants lying down, which alters how the body experiences gravity.

When sitting up, participants were able to shout out a suitably random sequence of numbers, but when lying down, and thus no longer aligned with the natural pull of gravity, things started to change.

Maria Gallagher explained: “We found decreased randomness in the sequence of numbers when participants were lying down: participants started to repeat the same numbers they had shouted out before, and the random choices they made almost ceased.”
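As an illustrative sketch only (the study's actual randomness metric is not described in this excerpt), one simple way to quantify "decreased randomness" in a shouted number sequence is the rate of immediate repetitions, which should stay low for genuinely random picks from one to nine. The sequences below are invented for illustration.

```python
def repeat_rate(seq):
    """Fraction of numbers identical to the one shouted just before."""
    if len(seq) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
    return repeats / (len(seq) - 1)

upright = [3, 7, 1, 9, 4, 2, 8, 5, 6, 1]   # hypothetical upright trial
lying   = [4, 4, 4, 2, 2, 7, 7, 7, 3, 3]   # hypothetical lying-down trial

print(repeat_rate(upright))  # prints 0.0 (no immediate repeats)
print(repeat_rate(lying))    # prints ~0.67 (heavy repetition)
```

A real analysis would use a richer measure (e.g. digram frequencies or entropy), but even this crude statistic separates the two patterns the researchers describe.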

Dr. Elisa Ferre said: “With the 50th anniversary of the Apollo Landing, we are getting ever closer to the new space age we have all imagined, which is very exciting.

“However, whilst physical training and incredible equipment are given to astronauts, research into decision-making away from terrestrial gravity is scarce, and our findings show that altered gravity might affect the way in which we make decisions.

“This is incredibly important and we need to fix this.

“With the prospect of people going up into space, whether as trained astronauts or, in the near future, civilian passengers, it can take a few minutes for messages to get from the spacecraft to Houston, so being able to make decisions promptly, concisely and on-the-spot without any outside help is of paramount importance.”

Making the correct decision is vital in high-pressured environments such as space, as remarked upon by Canadian astronaut Chris Hadfield: ‘Most of the time you only really get one try to do most of the critical stuff and the consequences are life or death.’

During spaceflight, astronauts are in an extremely challenging environment in which decisions must be made quickly and efficiently.

To ensure crew well-being and mission success, understanding how cognition is affected by altered gravity is vital.

Read more at: https://phys.org/news/2019-02-decision-space.html#jCp

Source: Decision making in space

Stock market shows greater reaction to forecasts by analysts with favorable surnames

Financial analysts whose surnames are perceived as favourable elicit stronger market reactions to their earnings forecasts, new research from Cass Business School has found.

The researchers found that following the 9/11 terrorist attacks, market reactions weakened for forecasts from analysts with Middle Eastern surnames. They also found that following the French and German governments’ opposition to the US-led Iraq War, the US market reactions weakened for analysts with French or German surnames. This effect was stronger in firms with lower institutional ownership and for analysts with non-American first names.

The researchers measured surname favourability using the US historical immigration records to identify countries of origin associated with a particular surname and the Gallup survey data on Americans’ favourability toward foreign countries.
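The two-step measure described above can be sketched as a pair of lookups: surname to likely country of origin (from immigration records), then country to favourability (from Gallup polling). Everything below, including the surnames, origins, and scores, is invented for illustration and is not the researchers' actual data or code.

```python
# Hypothetical lookup tables standing in for the two data sources described
# in the article: historical immigration records and Gallup favourability polls.
SURNAME_ORIGIN = {
    "Mueller": "Germany",
    "Dubois": "France",
    "Smith": "United Kingdom",
}
COUNTRY_FAVOURABILITY = {
    "Germany": 0.79,
    "France": 0.71,
    "United Kingdom": 0.90,
}

def surname_favourability(surname):
    """Map a surname to a favourability score, or None if unmapped."""
    origin = SURNAME_ORIGIN.get(surname)
    return COUNTRY_FAVOURABILITY.get(origin) if origin else None

print(surname_favourability("Dubois"))  # prints 0.71
```

In the actual study the surname-to-origin mapping would be probabilistic (many surnames map to several countries), but the chained-lookup structure is the same.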

Dr. Jay Jung, assistant professor of accounting at Cass Business School, said surname favourability was not associated with forecast quality, such as accuracy, bias and timeliness; rather, it suggested that investors made biased judgements based on their perception of analysts’ surnames.

“Our finding is consistent with the prediction based on motivated reasoning that people have a natural desire to draw conclusions that they are motivated to reach. If investors have favourable views toward an analyst due to his or her surname, they are motivated to assess the analyst’s forecasts as being more credible or of higher quality because it reduces the unpleasant inconsistency between their attitudes and judgments,” said Dr. Jung.

Dr. Jung said surname favourability did have a complementary effect on analysts’ career outcomes, helping analysts prosper in their profession.

“We found that, conditional on good forecasting performance, having a favourable surname made it more likely for an analyst to be elected as an All-Star analyst and to survive in the profession when his or her brokerage house went out of business or went through an M&A (mergers and acquisitions) process,” he adds.

Dr. Jung said surname favourability also had an impact on price drifts in the stock market.

“The speed at which stock prices reacted to an analyst’s forecasts was faster when the analyst had a favourable surname. We found significantly smaller delayed price responses.”

Dr. Jung said the research demonstrated that investors’ perception of an analyst’s surname not only influences their information processing in capital markets but also affects market efficiency and leads to different labour market consequences for finance professionals.

“It is quite interesting to see how the favourability of a surname, unrelated to the accuracy or quality of an analyst’s forecast, influences investor reaction and price anomalies in the capital market.”


More information: The research paper ‘An Analyst by Any Other Surname: Surname Favorability and Market Reaction to Analyst Forecasts’ is conditionally accepted for publication in the Journal of Accounting and Economics.

Read more at: https://phys.org/news/2019-02-stock-greater-reaction-analysts-favorable.html#jCp

Source: Stock market shows greater reaction to forecasts by analysts with favorable surnames

Visualizing the Crime Rate Perception Gap


The Crime Rate Perception Gap


There’s a persistent belief across America that crime is on the rise.

Since the late 1980s, Gallup has been polling people on their perception of crime in the United States, and consistently, the majority of respondents indicate that they see crime as becoming more prevalent. As well, a recent poll showed that more than two-thirds of Americans feel that today’s youth are less safe from crime and harm than the previous generation.

Even the highest ranking members of the government have been suggesting that the country is in the throes of a crime wave.

We have a crime problem. […] this is a dangerous permanent trend that places the health and safety of the American people at risk.

— Jeff Sessions, Former Attorney General

Is crime actually more prevalent in society? Today’s graphic, amalgamating crime rate data from the FBI, shows a very different reality.

Data vs Perception

In the early ’90s, crime in the U.S. was an undeniable concern – particularly in struggling urban centers. The country’s murder rate was nearly double what it is today, and statistics for all types of crime were through the roof.

Since that era, crime rates in the United States have undergone a remarkably steady decline, but public perception has been slow to catch up. In a 2016 survey, 57% of registered voters said crime in the U.S. had gotten worse since 2008, despite crime rates declining by double-digit percentages during that time period.

There are many theories as to why crime rates took such a dramatic U-turn, and while that matter is still a subject for debate, there’s clear data on who is and isn’t being arrested.

Are Millennials Killing Crime?

Media outlets have accused millennials of killing off everything from department stores to commuting by car, but there’s another behavior this generation is eschewing as well – criminality.

Compared to previous generations, people under the age of 39 are simply being arrested in smaller numbers. In fact, much of the decline in overall crime can be attributed to people in this younger age bracket. In contrast, the arrest rate for older Americans actually rose slightly.

Arrests by Age Group

There’s no telling whether the overall trend will continue.

In fact, the most recent data shows that the murder rate has ticked up ever-so-slightly in recent years, while violent and property crimes continue to be on the decline.

A Global Perspective

Perceptions of increasing criminality are echoed in many other developed economies as well. From Italy to South Korea, the prevailing sentiment is that youth are living in a society that is less safe than in previous generations.

global crime perceptions

As the poll above demonstrates, perception gaps exist in somewhat unexpected places.

In Sweden, where violent crime is actually increasing, 53% of people believe that crime will be worse for today’s youth. Contrast that with Australia, where crime rates have declined in a similar pattern as in the United States – yet, more than two-thirds of Aussie respondents believe that crime will be worse for today’s youth.

One significant counterpoint to this trend is China, where respondents felt that crime was less severe today than in the past.

Source: Visualizing the Crime Rate Perception Gap

Why nonviolent resistance is more successful in effecting change than violent campaigns

Chenoweth and Stephan collected data on all violent and nonviolent campaigns from 1900 to 2006 that resulted in the overthrow of a government or in territorial liberation. They created a data set of 323 mass actions. Chenoweth analyzed nearly 160 variables related to success criteria, participant categories, state capacity, and more. The results turned her earlier paradigm on its head—in the aggregate, nonviolent civil resistance was far more effective in producing change.

[…]

it really boils down to four different things. The first is a large and diverse participation that’s sustained.

The second thing is that [the movement] needs to elicit loyalty shifts among security forces in particular, but also other elites. Security forces are important because they ultimately are the agents of repression, and their actions largely decide how violent the confrontation with—and reaction to—the nonviolent campaign is going to be in the end. But there are other security elites, economic and business elites, state media. There are lots of different pillars that support the status quo, and if they can be disrupted or coerced into noncooperation, then that’s a decisive factor.

The third thing is that the campaigns need to be able to have more than just protests; there needs to be a lot of variation in the methods they use.

The fourth thing is that when campaigns are repressed—which is basically inevitable for those calling for major changes—they don’t either descend into chaos or opt for using violence themselves. If campaigns allow their repression to throw the movement into total disarray or they use it as a pretext to militarize their campaign, then they’re essentially co-signing what the regime wants—for the resisters to play on its own playing field. And they’re probably going to get totally crushed.

[…]

One of the things that isn’t in our book, but that I analyzed later and presented in a TEDx Boulder talk in 2013, is that a surprisingly small proportion of the population guarantees a successful campaign: just 3.5 percent. That sounds like a really small number, but in absolute terms it’s really an impressive number of people. In the U.S., it would be around 11.5 million people today. Could you imagine if 11.5 million people—that’s about three times the size of the 2017 Women’s March—were doing something like mass noncooperation in a sustained way for nine to 18 months? Things would be totally different in this country.

WCIA: Is there anything about our current time that dictates the need for a change in tactics?

CHENOWETH: Mobilizing without a long-term strategy or plan seems to be happening a lot right now, and that’s not what’s worked in the past. However, there’s nothing about the age we’re in that undermines the basic principles of success. I don’t think that the factors that influence success or failure are fundamentally different. Part of the reason I say that is because they’re basically the same things we observed when Gandhi was organizing in India as we do today. There are just some characteristics of our age that complicate things a bit.

Read more at: https://phys.org/news/2019-02-nonviolent-resistance-successful-effecting-violent.html#jCp

Source: Why nonviolent resistance is more successful in effecting change than violent campaigns

Twins get some ‘mystifying’ results when they put 5 DNA ancestry kits to the test

Last spring, Marketplace host Charlsie Agro and her twin sister, Carly, bought home kits from AncestryDNA, MyHeritage, 23andMe, FamilyTreeDNA and Living DNA, and mailed samples of their DNA to each company for analysis.

Despite having virtually identical DNA, the twins did not receive matching results from any of the companies.

In most cases, the results from the same company traced each sister’s ancestry to the same parts of the world — albeit by varying percentages.

But the results from California-based 23andMe seemed to suggest each twin had unique twists in their ancestry composition.

According to 23andMe’s findings, Charlsie has nearly 10 per cent less “broadly European” ancestry than Carly. She also has French and German ancestry (2.6 per cent) that her sister doesn’t share.

The identical twins also apparently have different degrees of Eastern European heritage — 28 per cent for Charlsie compared to 24.7 per cent for Carly. And while Carly’s Eastern European ancestry was linked to Poland, the country was listed as “not detected” in Charlsie’s results.
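The 23andMe discrepancies quoted above can be tallied as simple per-category gaps. The dictionaries below use only the figures reported in the article; Carly's French and German share is taken as zero because the article says she doesn't share that ancestry, and all other categories are omitted.

```python
# Reported 23andMe ancestry percentages for each twin (partial, per the article).
charlsie = {"Eastern European": 28.0, "French & German": 2.6}
carly    = {"Eastern European": 24.7, "French & German": 0.0}

for category in charlsie:
    gap = abs(charlsie[category] - carly[category])
    print(f"{category}: {gap:.1f} percentage points apart")
```

For genuinely identical genomes, every gap should be zero up to measurement noise, which is why even a few percentage points of divergence struck the geneticists quoted here as surprising.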

“The fact that they present different results for you and your sister, I find very mystifying,” said Dr. Mark Gerstein, a computational biologist at Yale University.

[…]

AncestryDNA found the twins have predominantly Eastern European ancestry (38 per cent for Carly and 39 per cent for Charlsie).

But the results from MyHeritage trace the majority of their ancestry to the Balkans (60.6 per cent for Carly and 60.7 per cent for Charlsie).

One of the more surprising findings was in Living DNA’s results, which pointed to a small percentage of ancestry from England for Carly, but Scotland and Ireland for Charlsie.

Another twist came courtesy of FamilyTreeDNA, which assigned 13-14 per cent of the twins’ ancestry to the Middle East — significantly more than the other four companies, two of which found no trace at all.

Paul Maier, chief geneticist at FamilyTreeDNA, acknowledges that identifying genetic distinctions in people from different places is a challenge.

“Finding the boundaries is itself kind of a frontiering science, so I would say that makes it kind of a science and an art,” Maier said in a phone interview.

Source: Twins get some ‘mystifying’ results when they put 5 DNA ancestry kits to the test | CBC News