Stripe to donate $1M to California Yimby


“This may well be the beginning of tech firms deciding that they need to help solve this crisis,” said Brian Hanlon, California Yimby’s founder. “They don’t have a viable business model in California if the housing crisis continues unabated.”

Stripe’s donation could end up being controversial. The so-called Yimby movement — whose platform is to cut back zoning and other regulations so that housing is easier to build — has been criticized by tenants’ groups for its connections to the tech industry and accused of being insufficiently worried about the concerns of poorer renters. Nevertheless, there is little debate among economists that California’s crisis will persist until housing is more plentiful.

“We can sit back and sort of watch this unfold around us and abstain from taking any action or stance because we think that there might be some blowback that might be unpleasant for us,” Mr. Collison said. “But given just how severe the issue is, I really think that would be mistaken.”

California Online

(Please note: We regularly highlight articles on news sites that have limited access for nonsubscribers.)

Photo: Irma Rivera, 31, with her son Jesus Eduardo and daughter Soany, among other members of the migrant caravan camped out in Tijuana, Mexico, near the California border. Credit: Meghan Dhaliwal for The New York Times

• A caravan of Central American migrants that trekked to the California border must now convince an immigration judge that they belong to a social group in order to gain asylum. Some recent cases reflect inconsistencies in this policy. [The New York Times]

• The first death in the nationwide E. coli outbreak linked to romaine lettuce has been reported in California. [The New York Times]

• Costa Mesa became the latest Orange County city to oppose the so-called sanctuary state law that expands protection for undocumented immigrants. [The Los Angeles Times]

• Nearly 40 million people now call the Golden State home, according to a new report. California added about 309,000 new residents in 2017. [NBC Bay Area]

• A State Senate bill could expand marijuana delivery in California, where a patchwork of local laws has created “pot deserts.” And Senator Dianne Feinstein has dropped her opposition to legalized marijuana, saying the federal government should not interfere in the state’s market. [The Sacramento Bee]

• The political consulting firm Cambridge Analytica said it would file for bankruptcy, two months after it became embroiled in a Facebook data-harvesting scandal. [The New York Times]

Photo: Elaine Koyama, 72, who put down a deposit for the Tesla Model 3, viewed the car in a showroom in Los Angeles in January. Credit: Lucy Nicholson/Reuters

• Tesla lost nearly $800 million in the first quarter of the year, the company reported on Wednesday. But Elon Musk, the chief executive, said the electric-car maker would become profitable later in the year if it met its Model 3 production goals. [The New York Times]

• A major housing bill was killed last month after it was opposed by the low-income residents it aimed to help. Here are some of the main reasons. [The Los Angeles Times]

• The city of Oakland had a “mandatory duty” to ensure safety at the Ghost Ship, an Alameda County judge has ruled. A fire killed 36 people at the warehouse in 2016. [The Mercury News]

• The average price of gas in California is predicted to reach $4 a gallon this summer. It is already above $3.50 in every major city in the state. [USA Today]

• Our chief film critics, Manohla Dargis and A.O. Scott, have some suggestions for Ava DuVernay, Reese Witherspoon, Brad Pitt, Netflix and others in the movie industry. [The New York Times]

• Marcia Hafif, a Southern California native who was best known for her monochromatic paintings that experimented with color, has died at 89. [The New York Times]

Photo: Elaine Castillo’s debut novel exposes the social injustices of immigrant life in 1990s Milpitas. Credit: Amaal Said

• “It was a crime to be a Filipino in California,” the poet and labor organizer Carlos Bulosan wrote in 1943. Now Elaine Castillo’s debut novel, “America Is Not the Heart,” traces the Filipino immigrant experience in the state. [The New York Times]

• An Op-Ed contributor argues that a closer relationship between Silicon Valley and the Pentagon is good for industry and for national security. [The New York Times | Op-Ed]

And Finally ...

Photo: At the new Esports Arena in Oakland, the air is filled with game sound effects, hundreds of hands clicking on controllers and the periodic shrieks of “shoutcasters” commenting on game play. Credit: Jason Henry for The New York Times

They want to sit next to each other, elbow to elbow, controller to controller. They want the lighting to be cool, the snacks to be Hot Pockets, and they want a full bar because they are not teenagers anymore.

As professional esports leagues grow, America’s 150 million gamers want to gather.

At the pre-opening party of Oakland’s new Esports Arena, the line stretched down the block in the heart of Jack London Square. Nearly 4,000 people had jammed into the former parking structure, which is now an industrial-looking space equipped with more than a hundred TVs and computers.

“It’s amazing,” said one gamer, who said the space was a nice change from the sweaty back rooms of video stores where he used to play. “There’s so much room.”

Read the full story here.

California Today goes live at 6 a.m. Pacific time weekdays. Tell us what you want to see: CAtoday@nytimes.com.

California Today is edited by Julie Bloom, who grew up in Los Angeles and graduated from U.C. Berkeley.


Twitter urges users to change passwords after computer 'glitch'


(Reuters) - Twitter Inc urged its more than 330 million users to change their passwords after a glitch caused some of the passwords to be stored in plain text on its internal computer system.

FILE PHOTO: The Twitter application is seen on a phone screen August 3, 2017. REUTERS/Thomas White/File Photo

The social network said it had fixed the glitch and that an internal investigation had found no indication passwords were stolen or misused by insiders, but it urged all users to consider changing their passwords “out of an abundance of caution.”

Twitter’s blog post did not say how many passwords were affected. But a person familiar with the company’s response said the number was “substantial” and that the passwords were exposed for “several months.”

Twitter discovered the bug a few weeks ago and has reported it to some regulators, said the person, who was not authorized to discuss the matter.

The disclosure comes as lawmakers and regulators around the world scrutinize the way that companies store and secure consumer data, after a string of security incidents that have come to light at firms including Equifax Inc, Facebook Inc and Uber.

The European Union is due to start enforcing a strict new privacy law, known as the General Data Protection Regulation, that includes steep fees for violating its terms.

The glitch was related to Twitter’s use of a technology known as “hashing” that masks passwords as a user enters them by replacing them with numbers and letters, according to the blog.

A bug caused the passwords to be written on an internal computer log before the hashing process was completed, the blog said.  
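For readers wondering how a password can end up in a log before hashing, the sketch below shows the general shape of that class of bug. It is illustrative only: the function names, the PBKDF2 scheme and the logging setup are assumptions, not a description of Twitter’s actual stack.

```python
import hashlib
import logging
import os

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("auth")

def hash_password(password: str, salt: bytes) -> bytes:
    # One-way key derivation; only this value should ever be persisted.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

def set_password_buggy(user_id: int, password: str) -> bytes:
    # The class of bug described above: the plaintext is written to an
    # internal log *before* the hashing step completes.
    log.debug("password change for user %d: %r", user_id, password)  # plaintext ends up in logs
    return hash_password(password, os.urandom(16))

def set_password_fixed(user_id: int, password: str) -> bytes:
    # Log only non-sensitive metadata; the plaintext never leaves this function.
    log.debug("password change for user %d (length=%d)", user_id, len(password))
    return hash_password(password, os.urandom(16))
```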

“We are very sorry this happened,” the Twitter blog said.

Twitter’s share price was down 1 percent in extended trade at $30.35, after gaining 0.4 percent during the session.

The company advised users to take precautions to ensure that their accounts are safe, including changing passwords and enabling Twitter’s two-factor authentication service to help prevent accounts from being hijacked.

Reporting by Jim Finkle in Toronto; Editing by Susan Thomas

Ask HN: Building a game for AI Research

10 points by keerthiko | 2 hours ago | 1 comment
I'm working on a PC game as a personal project. I am primarily a game designer and developer, but I'm interested in working towards making my game friendly towards AI research. I tried reading TF/PyTorch docs but couldn't find a guide on how to make my game a friendly environment that exposes the necessary surface area for AI research.

I would love pointers from anyone who has experience building API layers for video games for AI (BWAPI, OpenAI's Dota team, DeepMind's SC2 AI, etc.).
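One common shape for such a layer is the Gym-style reset/step loop, with the game exposing observations and accepting actions over a local socket so that Python training code can drive it. A rough sketch under those assumptions follows; the protocol, port and field names are hypothetical, not an existing API.

```python
import json
import socket
from typing import Any, Dict, Tuple

class GameEnv:
    """Gym-style wrapper that drives a running game over a local TCP socket.

    The game process is assumed to listen on `port` and speak a line-delimited
    JSON protocol with 'reset' and 'step' commands; every name here is
    illustrative rather than an existing API.
    """

    def __init__(self, host: str = "127.0.0.1", port: int = 9000):
        self.sock = socket.create_connection((host, port))
        self.reader = self.sock.makefile("r")

    def _call(self, msg: Dict[str, Any]) -> Dict[str, Any]:
        # Send one JSON message per line and block for the game's reply.
        self.sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))
        return json.loads(self.reader.readline())

    def reset(self) -> Dict[str, Any]:
        """Start a new episode and return the initial observation."""
        return self._call({"cmd": "reset"})["observation"]

    def step(self, action: Dict[str, Any]) -> Tuple[Dict[str, Any], float, bool]:
        """Apply an action; return (observation, reward, done)."""
        reply = self._call({"cmd": "step", "action": action})
        return reply["observation"], reply["reward"], reply["done"]
```

A design choice worth considering is letting step() advance the simulation by a fixed tick, so training isn't tied to real time.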



A growing number of philosophers are conducting experiments to test arguments


Conducting thought experiments from the armchair has long been an accepted method in analytic philosophy. What do thought experiments from the armchair look like? Philosophers think about real and imagined scenarios involving knowledge, morality, free will and other matters. They then use those scenarios to elicit their own reactions (‘intuitions’), which serve as fodder for arguments.

One well-known kind of thought experiment is called a ‘Gettier case’. Named after the American philosopher Edmund Gettier, these are scenarios used to question a particular notion of knowledge, and are based on a range of examples he provided in a journal article in 1963. One might plausibly think of knowledge as a belief that is both true and justified (ie, based on evidence). But Gettier suggested some counterexamples to this definition, by telling stories in each of which there’s a true, justified belief that he claimed isn’t a case of knowledge. For example, imagine that at noon you look at a stopped clock that happens to have stopped at noon. Your belief that it’s noon is true, and arguably it’s also justified. The question is: do you thereby know that it’s noon, or do you merely believe it? While this example and others might seem frivolous – other cases involve fake zebras and imitation barns – they are intended to make headway in analysing knowledge.

In the late 1990s, the philosophers Jonathan Weinberg, Shaun Nichols and Stephen Stich started raising questions about this methodology of eliciting ‘our’ intuitions. Their question: who does ‘we’ refer to here? They wondered if philosophers – at least in analytic philosophy, a particularly Western, educated, industrialised, rich and democratic bunch, aka ‘WEIRD’ – might have intuitions that people in other demographic groups wouldn’t share. (They’d been inspired to raise this question by work on cross-cultural differences by psychologists such as Richard Nisbett and Jonathan Haidt.)

Rather than just speculate about whether differences existed, Stich and colleagues decided to do some real-world experiments. In their initial research, they focused on common thought-experiments in epistemology (a subfield of philosophy that studies topics such as justified belief and knowledge). They recruited people of East Asian and Western descent, as well as people of Indian-subcontinent descent, and asked them to read and think about some classic vignettes in epistemology. In their 2001 paper, they claimed that among their most interesting findings was something unexpected: while most people of Western descent in their experiment deemed that particular Gettier cases are not instances of knowledge, people of East Asian and of Indian descent often thought the opposite.

Stich and colleagues argued that this kind of variation in intuitions should cause a big shift in the way that analytic philosophy is practised. Until this point, most philosophers had traditionally thought it was fine to sit in their armchairs and consider their own intuitions. It was the way that philosophy was done. But experimental evidence, they claimed, undermined this traditional practice. If such differences of intuition existed, they wrote: ‘Why should we privilege our intuitions rather than the intuitions of some other group?’ If different groups had different intuitions, it wasn’t enough to say that ‘our’ intuition about justice or knowledge or free will is such-and-such. Rather, the philosopher must at the very least specify whose intuition is relevant, and why that intuition should matter rather than another one.

Over subsequent decades, experimental philosophy (x-phi for short) grew significantly. Some philosophers followed Stich et al’s lead, in testing intuitions of participants who varied in gender, age, native language and other categories. They also looked at variation in intuitions based on irrelevant factors such as the order in which cases are presented. Beyond that, some x-phi practitioners also found significant sources of funding. Stich and his fellow philosopher Edouard Machery, together with the anthropologist H Clark Barrett, received a grant of more than $2.5 million from the John Templeton Foundation to embark on a series of experiments on knowledge, understanding and wisdom across 10 countries, with a goal of better understanding these philosophical concepts as they appear across a large swathe of cultures.

It’s important to note that arguably the biggest factor in x-phi’s growth has been some philosophers heading off in a new direction. According to a recent survey of the field conducted by Joshua Knobe, not too many philosophers kept up experiments on demographic differences with the aim of showing that traditional philosophy is ill-grounded (this came to be known as ‘the negative programme’ in x-phi). Instead, another class of experiments (with a ‘positive programme’) sprang up.

Knobe, a professor of psychology, philosophy and linguistics at Yale University and well-known in the field for his experimental work, which he’s been doing since the early 2000s, describes one kind of ‘positive programme’ as very similar to cognitive science. Conducting experiments uncovers interesting effects, and researchers then hypothesise about mechanisms that might explain these effects. A well-known example of this kind of work is Knobe’s own finding, called the ‘side-effect effect’ or just the ‘Knobe effect’. In a nutshell, this is the finding that people judge a side-effect to be intentionally caused much more often when that side-effect is negative than when it’s positive.

For example, in Knobe’s original experiment, participants were given this vignette:

The vice-president of a company went to the chairman of the board and said: ‘We are thinking of starting a new programme. It will help us increase profits, but it will also harm the environment.’ The chairman of the board answered: ‘I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new programme.’ They started the new programme. Sure enough, the environment was harmed.

Other study participants saw the exact same story, except that the word ‘harmed’ was replaced with the word ‘helped’. The striking result was that, in most cases (82 per cent), participants said that the chairman brought about the harmful side-effect intentionally, but only 33 per cent of participants said that he intentionally brought about the helpful side-effect.

Since then, many philosophers have conducted hundreds of these kinds of experiments. Some of them involve repeating and extending the Knobe effect, and many others venture into new directions to run experiments involving questions about moral responsibility, free will, causation, personal identity and other topics. In their ‘Experimental Philosophy Manifesto’ (2007), Knobe and Nichols described the allure of experimental philosophy’s positive programme by writing: ‘Many find it an exciting new way to approach the basic philosophical concerns that attracted them to philosophy in the first place.’ But while x-phi has expanded over the years, not everyone in philosophy has been a fan.


First, as the positive programme in x-phi shades into psychology and vice versa, some have asked: is experimental philosophy really philosophy? Knobe and some of his colleagues argue that it is. They describe the work as continuous with a long tradition of philosophers trying to understand the human mind, and point to the likes of Aristotle, David Hume and Friedrich Nietzsche as precedents. In their manifesto, Knobe and Nichols write:

It used to be a commonplace that the discipline of philosophy was deeply concerned with questions about the human condition. Philosophers thought about human beings and how their minds worked … On this traditional conception, it wasn’t particularly important to keep philosophy clearly distinct from psychology, history or political science … The new movement of experimental philosophy seeks to return to this traditional vision.

Some philosophers, even those who identify as part of the x-phi movement, disagree with this viewpoint. Machery, a fellow x-phi advocate, argues that even if, historically, philosophers used to engage in a huge range of intellectual endeavours, it doesn’t mean that studying all those things should now count as philosophy. There’s something lost, Machery thinks, if experimental philosophers start to resemble cognitive scientists more and more, and lose their focus on what has been of central interest in philosophy: analysing concepts. (According to Knobe’s recent analysis, only around 10 per cent of x-phi experiments over a period of five years were directly about conceptual analysis, as opposed to revealing new cognitive effects and discussing potential cognitive processes underlying them.) Machery concurs with Stich and other ‘negative programmers’ that trying to analyse concepts from the armchair is a poor method, because of the experimental evidence that judgments vary by demographic group. Instead, he argues in his book Philosophy Within Its Proper Bounds (2017), philosophers should make use of experiments as a way of clarifying and assessing important philosophical ideas.

A second kind of response comes from those who question the usefulness of eliciting intuitions from people outside of philosophy. For example, in his book Relativism and the Foundations of Philosophy (2009), Stephen Hales writes: ‘[I]ntuitions of professional philosophers are much more reliable than either those of inexperienced students or the “folk”.’ This response, dubbed the ‘expertise defence’, is generally made in response to the ‘negative programme’ in x-phi. The philosopher Jennifer Nado characterises the defence as insisting that experimental philosophy’s reliance on the conflicting intuitions of non-philosophers is ‘fundamentally misguided’, since ‘the intuitions of such persons are irrelevant’. There’s often an analogy drawn to other fields: we wouldn’t take the conflicting opinions of non-experts as a challenge to most scientific and mathematical claims. On the other hand, some philosophers – responding to the expertise defence – have questioned that analogy by asking what ‘philosophical expertise’ amounts to, and how we can tell that professional philosophers have it. (In some cases, philosophers have even run experiments on fellow philosophers, claiming that they are susceptible to various kinds of bias in their intuitions.)

A third response to the negative programme in x-phi has been to look more closely at ‘intuitions’ themselves. The British philosopher Timothy Williamson argues that those who attack traditional philosophy should define exactly what they mean by ‘intuition’. If an ‘intuition’ is just ‘how things seem to us’, he argues, then the critique of intuition leads to ‘global skepticism’, the position that we should withhold all kinds of judgments until they are proven to be widely shared. (This extreme conclusion is one that, Williamson takes it, x-phi practitioners would prefer to avoid.) And taking another tack, some philosophers, such as Herman Cappelen in his book Philosophy Without Intuitions (2012), claim that traditional philosophy doesn’t actually rely on intuitions at all (even though many traditional philosophers think that it does).

A final response to x-phi aims not just at the ‘negative programmers’ but at all philosophers conducting experiments. In a particularly pointed version of the critique, Williamson referred to x-phi practitioners as ‘philosophy-hating philosophers’. There’s nothing wrong with philosophers drawing heavily on work from empirical disciplines, he wrote in The New York Times in 2010, as certain subfields of philosophy often have (for example, philosophy of mind, and philosophy of space and time). In that very broad sense, experimental philosophy is both common and valuable. But, he argued, philosophers shouldn’t try to be ‘amateur experimentalists’, undertaking their own experiments and turning the field into ‘imitation psychology’.

There are a couple of strands to this objection. One is that philosophers should leave the actual running of empirical studies to experts in other fields. As Williamson puts it, philosophers have honed their skills in ‘logic, in imagining new possibilities and questions, in organising systematic abstract theories, making distinctions and the like’, and should stick to their forte. Of course, as experimental philosophers such as Knobe point out, philosophers often do collaborate with colleagues from cognitive science and psychology. Experimental philosophers are extremely well-connected with psychologists, he said in conversation, explaining that he and other philosophers have often co-authored papers with those in more traditionally experimental fields (presumably, benefiting from their skill sets along the way).


But there’s also another, related critique of experimental philosophy, which questions the reliability of many experiments, not just those conducted by philosophers. As experimental philosophy was forming over the past decade or two, the ‘reproducibility crisis’ was emerging: a deep skepticism about the reliability of empirical research based on difficulties of replicating results. Researchers have pointed to issues such as insufficient sample size, publication bias (the tendency to publish studies showing positive results while null results are buried in the file drawer), and ‘p-hacking’ (reporting the analyses of data that achieve a sufficiently low p-value), and have suggested that these issues are common in some (or maybe all) fields. The result is that many research conclusions might just be a matter of chance. For a classic overview of this kind of skepticism, see John Ioannidis’s paper ‘Why Most Published Research Findings Are False’ (2005).

Knobe says that experimental philosophers have been well aware of concerns about the reliability of research over the past decade. He and his colleague Christian Mott set up an ‘x-phi replication’ site, tracking attempted replications of well-known x-phi studies. In some instances, there were many ‘replication failures’ piling up when others tried to replicate original studies by imitating them as closely as possible. Strikingly, the original experiments on Gettier intuitions were among those that couldn’t be replicated. A number of replications (posted on the x-phi replication site) failed to find evidence of cultural variation in Gettier intuitions.

Stich, one of the original authors of the 2001 study, decided to collaborate with colleagues in order to conduct a much larger replication of his original study, with more than 500 participants from four countries (more than 10 times the size of the original study). This replication study also failed to find evidence for the cross-cultural differences originally claimed. The authors conceded that this study diminished the evidence for cross-cultural variation of Gettier intuitions. They also, however, emphasised that this didn’t mean that the ‘negative programme’ was unfounded, and pointed to other, more consistent results showing demographic variation, such as variation in semantic intuitions.

In 2017, a group of philosophers decided to take on replication more systematically. Inspired by the reproducibility project in psychology, philosophers from eight countries conducted an x-phi replication project to repeat 40 experiments, sticking as closely as possible to the original experiment. Of the 40 studies, they found that around 70 per cent were successfully replicated (the exact percentage of success varied depending on how success was defined). By comparison, fewer than half of the experiments in the reproducibility project in psychology were successfully replicated. In their paper, the x-phi replication authors explore possible explanations for their higher rate of replication, without settling on any as definitive. Whatever the reason for the higher level of successful replications, they conclude that their findings are ‘good news for experimental philosophy … calling into question the idea of x-phi being mere “amateurish” psychology’.

Where does all of this leave x-phi now? The x-phi replication-project researchers urge colleagues not to see the results as reason to ‘rest on their laurels’. Many x-phi practitioners have grown sensitive to the possibility that some study results might not hold up to repeat examination, and often phrase their claims cautiously, mentioning the need for further testing. The field is also moving to adopt better research practices, such as ensuring adequately powered studies, pre-registering study design, and data-sharing.

There continues to be a demarcation in the field between the ‘negative’ and ‘positive’ approaches, though some philosophers have also carved out something of a middle ground. Kaija Mortensen and Jennifer Nagel, for example, make a case for ‘armchair-friendly experimental philosophy’, describing ways that experiments can help armchair philosophy, without replacing it entirely or changing its original aims. Experiments might, for instance, help to show where philosophers’ intuitions are likely to be distorted by irrelevant factors. Similarly, Weinberg argues that x-phi can give us a better understanding of where bias might arise in our ‘human philosophical instrument’. Rather than seeing x-phi as a subfield of philosophy, he argues, experiments should be part of the wider discipline, a source of evidence that all philosophers can draw on while still retaining their original goals.

Regardless of one’s stance on experimental philosophy, it’s clear that the new method has fuelled an important conversation. Not only has it led to much questioning of reliance on intuitions in philosophy (and what ‘intuition’ means), but it has also brought out explicit discussion about what philosophers aim to achieve. Thus, x-phi fans the flames of ‘metaphilosophy’, in which philosophers scrutinise the underpinnings of their own work. Whether or not one sits in an armchair while doing philosophy, it seems like a good idea not to get too comfortable.


Eltoo: A Simplified Update Mechanism for Lightning and Off-Chain Contracts


A little over a year ago, the three Lightning Network implementation teams joined forces to work on a common specification for the protocol stack. Now that both that specification and our three implementations are becoming stable and usable, it is time to look forward: to further improve the protocol, to add new features, to simplify, and to fix downsides.

One of the core innovations that enabled Lightning in the first place was an off-chain update mechanism to renegotiate a new state and ensure that the old state can not be settled on-chain. Today, we’re excited to release our latest research paper on a new, simplified, update mechanism for layer 2 protocols, called eltoo.

How does eltoo work?

We can imagine off-chain negotiation as a contractual agreement between a number of parties and settlement as presenting the case to a court that will decide the final state — the court in this case being the blockchain. Since all updates happen off-chain, we need a way for the on-chain court to hear all sides of the argument before making a final judgement. In the case of a participant initiating settlement of the contract, we need a mechanism that defers final settlement, to give the counterparty a chance to provide a more recent state. The court must continue to wait for new state, until eventually it decides to settle the last one it heard. Surprisingly most of the requirements to create this blockchain tailor-made for layer 2 protocols are already fulfilled by the Bitcoin blockchain.

Figure 1: An example execution of the eltoo protocol, showing how intermediate states can be skipped by rebinding a later update transaction to an earlier one, or directly to the setup transaction. Only the last settlement transaction can ever be confirmed on the blockchain.

In eltoo every state is represented as a set of two transactions: an update transaction that spends the contract’s output and creates a new output, and a settlement transaction that spends the newly created update output and splits the funds according to the agreed-upon distribution. The outputs have a script that allows a new update transaction to be attached immediately or else a settlement transaction to be attached after a configurable timeout. Should the participants agree on an update before the timeout expires, they will create a new update transaction, spending the previous output and doublespending the corresponding settlement, effectively invalidating it.

The repeated invalidation of prior state to agree on a new state builds a long chain of update transactions that will eventually be terminated by the latest settlement transaction. However, this has a major disadvantage: should we want to settle, we would have to replay the entire chain of updates on the blockchain. At that point we could have simply performed the entire protocol on-chain. The key insight in eltoo is that we can skip intermediate updates, basically connecting the final update transaction to the contract creation. In order to enable this short-circuiting of updates, we propose a new SIGHASH flag, SIGHASH_NOINPUT, which allows a transaction input to be bound to any transaction output with a matching script. Since all output scripts of prior update-transaction outputs match later input scripts, we can bind a later update to any prior update, allowing us to skip any number of intermediate updates. The paper contains the full construction of the protocol, including the details on how the scripts are built.
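To make the bookkeeping concrete, here is a toy Python model of that update chain. It is purely illustrative: it captures only the rebinding idea, not the actual transactions, scripts or signatures described in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class UpdateTx:
    """Stand-in for an eltoo update transaction (illustrative, not real Bitcoin)."""
    state_number: int                # monotonically increasing channel-state version
    balances: Dict[str, int]         # split the matching settlement tx would pay out
    bound_to: Optional[int] = None   # which earlier state this update currently spends

class Channel:
    def __init__(self, funding_balances: Dict[str, int]):
        # State 0 models the output of the on-chain setup transaction.
        self.updates: List[UpdateTx] = [UpdateTx(0, dict(funding_balances))]

    def negotiate_update(self, new_balances: Dict[str, int]) -> UpdateTx:
        """Off-chain: agree on a new state, double-spending the previous settlement."""
        tx = UpdateTx(self.updates[-1].state_number + 1, dict(new_balances))
        self.updates.append(tx)
        return tx

    def settle_on_chain(self) -> UpdateTx:
        """On-chain: bind the latest update directly to the setup output.

        Because a NOINPUT-style signature does not commit to a specific previous
        output, all intermediate updates can simply be skipped.
        """
        latest = self.updates[-1]
        latest.bound_to = 0     # rebind straight to the setup transaction
        return latest           # its settlement tx then pays out latest.balances
```

The point of the toy model is simply that settle_on_chain never touches the intermediate updates; only the latest state and its settlement ever need to reach the chain.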

Improving Lightning

What we presented above is an update mechanism that allows the endpoints of a payment channel to repeatedly adjust their balances and to attach more advanced constructs such as HTLCs to the state.

The main contribution of the original Lightning paper was one such update mechanism, so are we trying to replace Lightning with this proposal? Absolutely not!

Figure 2: A diagram of the various sub-protocols that are part of the Lightning Stack.

The Lightning Network specification is no longer the specification of a single protocol, but rather a full stack of protocols, each fulfilling its own responsibilities. eltoo doesn’t aim to replace the entirety of the Lightning stack; rather it is a drop-in replacement for the original update mechanism that maintains backward compatibility with the other parts of the stack.

eltoo has fundamentally different tradeoffs than the mechanism presented in the original Lightning paper, which we’ll call LN-penalty; while LN-penalty used a penalty system to punish a misbehaving party, eltoo simply enforces the latest agreed-upon state of the off-chain contract. This has important implications for the applicability and safety of the protocols that are built on top of the update mechanism.

Some of this arises from the fact that in eltoo all participants share a common set of transactions, unlike LN-penalty, which requires asymmetry in which participant has access to which transactions, in order to tailor the reaction to the misbehaving party. This change eliminates what we call toxic information in Lightning. Toxic information comes from transactions belonging to outdated states, which if leaked will result in the loss of funds. This happens not only if a party misbehaves, but also if a node forgets about an update (e.g., when being restored from a backup). With eltoo this is no longer possible because only agreed-upon states can be settled (i.e., eltoo is penalty-less).

The data management for the participants is also simplified under the new paradigm: they no longer need to store hash preimages for invalidated states, and they no longer need to store HTLCs that were invalidated, since the settlement transaction to which they were attached can never be committed to the blockchain. All they need to store is the latest update transaction, its corresponding settlement transaction, and potentially the HTLCs that spend from that settlement. Furthermore the settlement is simplified to just binding the latest update transaction to the setup output and letting the timeout expire before broadcasting the settlement transaction.

We can combine the update outputs with SIGHASH_SINGLE to allow the attachment of additional inputs and outputs to the update transaction at the time of settlement. While this might seem like a minor change, it allows the attachment of fees to the update transactions at the time of settlement, freeing us from having to commit to a fixed fee ahead of time. In the current implementations, we would have to agree on, and commit to, a fixed fee potentially months before we attempt to confirm the transactions on-chain, forcing us to predict how the fee-market will evolve; this can result in massive overcommitment, to be on the safe side. With deferred fee selection we no longer have to agree on a fee, and we can even bump fees should they turn out to be insufficient.

Thanks to the use of feature flags, which allow a node to signal support for a new feature when first connecting to a peer, eltoo can be deployed incrementally on top of today’s network. There is no need to spin up a completely new network.

Beyond Lightning

As a generic Layer 2 update mechanism, eltoo can be used for any number of systems beyond Lightning. For example, it allows for the creation of multiparty off-chain contracts that currently could have up to seven participants, and that could have any number of participants in combination with Schnorr signatures.

One such multiparty off-chain contract is the channel factories presented by Burchert et al. as a scalable way to fund any number of payment channels on top of a single on-chain transaction and to rebalance or reallocate them dynamically without ever touching the blockchain.

The road to eltoo

Before we can implement eltoo, we need a minor change to Bitcoin: the introduction of the SIGHASH_NOINPUT flag for signatures. This was first discussed a few months ago in the context of watchtowers to help secure Lightning channels, but was not formally proposed. A formal proposal may now be found in the eltoo paper.

We invite the community to consider our proposal and to participate in its discussion. We hope to arrive at a consensus for the usage of SIGHASH_NOINPUT, so that it can be accepted and included in a future soft fork of Bitcoin Script. Doing so will put us on the road to a more reliable and simpler Lightning Network, incorporating a new update mechanism that can also be used for many other applications.

Why NASA's Next Mars Lander Will Launch from California Instead of Florida

An artist’s rendering of the InSight lander on Mars.
Illustration: NASA/JPL-CALTECH

NASA is set to launch its next Mars lander, InSight, early Saturday morning. But something’s different about this launch. It’s taking place on the West Coast, at Vandenberg Air Force Base in California.

Rockets typically launch from NASA’s John F. Kennedy Space Center in Florida, where the Earth’s rotation can give them a velocity boost spaceward. But the decision to launch InSight from the West Coast—the first interplanetary launch from that coast—wasn’t a scientific one.

“It’s logistics,” Bruce Banerdt, InSight’s Principal Investigator, told Gizmodo. “It saves NASA money.”

The story goes as follows: part of InSight’s concept was to show that NASA could launch a budget interplanetary mission, said Banerdt. The lander itself has the same design as the 2007-2008 Phoenix lander. InSight’s cameras were leftovers sitting in inventory. The robotic arm that will be used to deploy its scientific instruments onto the Martian surface came from a scrapped project from 2000.

But Phoenix was designed to fly on a Delta II rocket, which United Launch Alliance will be retiring this year. InSight is instead launching on the smallest rocket NASA has available, the Atlas V-401, which is larger than the Delta II. That means that the scientists have power to spare.

Rather than just waste that extra power, the InSight team decided to move the launch to Vandenberg on the West Coast. West Coast launches are typically reserved for missions that need the planet to rotate beneath them, like Earth-mapping satellites or reconnaissance missions. The launch site didn’t matter for InSight, so the team chose Vandenberg to take some of the load off Kennedy and allow West Coasters to experience an interplanetary launch.

An InSight mockup used for testing at the NASA Jet Propulsion Laboratory.
Photo: Ryan F. Mandelbaum

Or, as InSight scientist Mark Wallace tweeted in a thread explaining the specifics: “We’re going out of Vandenberg because we can!”

Cleaning and planetary protection processes not normally necessary for West Coast launches had to be added. But otherwise, “It’s a smooth transition to move these procedures from the East Coast,” said Banerdt.

The mission might use budget equipment, but its goals are ambitious. InSight will put a seismometer on the planet’s surface, and its heat transport probe will burrow 16 feet into the Red Planet. InSight’s goal is to record seismic activity—Marsquakes—to help understand how rocky planets evolve, and why our own planet somehow ended up with tectonic plates.

The InSight launch will happen on May 5 at the earliest, and we’ll continue covering it here. But the real fun starts in six months. That’s when InSight will need to safely land on Mars and use its robotic arm to deploy its instruments onto the surface. Said Banerdt: “There are a lot of exciting milestones coming up in this mission, even before the great science gets going.”

Zoopraxiscope

From Wikipedia, the free encyclopedia
Black-and-white picture of a coloured zoopraxiscope disc, circa 1893, by Eadweard Muybridge and Erwin F. Faber
Black-and-white animation of a colored zoopraxiscope disc (without the distortion of projection, hence the elongated forms)

The zoöpraxiscope (initially named zoographiscope and zoogyroscope) is an early device for displaying moving images and is considered an important predecessor of the movie projector. It was conceived by photographic pioneer Eadweard Muybridge in 1879 (and built for him by January 1880 to project his famous chronophotographic pictures in motion and thus prove that these were authentic). Muybridge used the projector in his public lectures from 1880 to 1895. The projector used 16" glass disks onto which Muybridge had an unidentified artist paint the sequences as silhouettes. This technique eliminated the backgrounds and enabled the creation of fanciful combinations and additional imaginary elements. Only one disk used photographic images, of a horse skeleton posed in different positions. A later series of 12" discs, made in 1892–1894, used outlines drawn by Erwin F. Faber that were printed onto the discs photographically, then colored by hand. These colored discs were probably never used in Muybridge's lectures. All images of the known 71 disks, including those of the photographic disk, were rendered in elongated form to compensate for the distortion of the projection. The projector was related to other projecting phenakistiscopes and used some slotted metal shutter discs that were interchangeable for different picture disks or different effects on the screen. The machine was hand-cranked.[1][2]

The device appears to have been one of the primary inspirations for Thomas Edison and William Kennedy Dickson's Kinetoscope, the first commercial film exhibition system.[3] Images from all of the known seventy-one surviving zoopraxiscope discs have been reproduced in the book Eadweard Muybridge: The Kingston Museum Bequest (The Projection Box, 2004).

...it is the first apparatus ever used, or constructed, for synthetically demonstrating movements analytically photographed from life, and in its resulting effects is the prototype of the various instruments which, under a variety of names, are used for a similar purpose at the present day.

— Eadweard Muybridge, Animals in Motion (1899)

As stipulated in Muybridge's will, the original machine and disks in his possession were left to Kingston upon Thames, where they are still kept in the Kingston Museum Muybridge Bequest Collection (except for four discs that are in other collections, including those of the Cinémathèque française and the National Technical Museum in Prague).[1][2]

Muybridge also produced a series of 50 different paper 'Zoopraxiscope discs' (basically phenakistiscopes), again with pictures drawn by Erwin F. Faber. The discs were intended for sale at the 1893 World's Fair at Chicago, but seem to have sold very poorly and are quite rare. The discs were printed in black-and-white, with twelve different discs also produced as chromolithographed versions. Of the coloured versions only four different ones are known to still exist with a total of five or six extant copies.[4]

References

  1. ^ a b "COMPLEAT EADWEARD MUYBRIDGE - ZOOPRAXISCOPE STORY". www.stephenherbert.co.uk. Retrieved 2018-01-20.
  2. ^ a b Hammond, Michael. "Eadweard Muybridge". www.kingston.gov.uk. Retrieved 2018-01-20.
  3. ^ Bierend, Doug (29 May 2014). "See Thomas Edison's Steampunk Version of Oculus Rift". Wired. Condé Nast. Retrieved 4 January 2017.
  4. ^ "COMPLEAT EADWEARD MUYBRIDGE - MUY BLOG 2009". www.stephenherbert.co.uk. Retrieved 2018-01-20.


Yippies vs. Zippies: New book reveals ’70s counterculture feud


BY MARY REINHOLZ | The late Yippie leader Jerry Rubin, a onetime West Villager who morphed into an investment banker and died in 1994 after getting struck by a car jaywalking in Westwood, California, comes back to flamboyant afterlife in Pat Thomas’s coffee-table book biography, “Did It!”

Published last year, Thomas’s book offers plenty of photographs of varied gurus and goblins of the counterculture, and sheds light on little-known internecine conflicts among the young politicized hippies who came under scrutiny by federal agents and undercover police for their opposition to the Vietnam War.

The cover of Pat Thomas’s new coffee-table book on Jerry Rubin. Photo by John Jekabson

The hefty tome, subtitled, “From Yippie to Yuppie: Jerry Rubin, an American Revolutionary,” recounts how a younger anarchist group known as the Zippies surfaced before the 1972 Democratic and Republican National Conventions in Miami. The Zippies engaged in fierce feuding with Rubin and Abbie Hoffman, Rubin’s more-famous fellow Yippie prankster and rival. Rubin reportedly regarded the Zippies’ founder, the late Tom Forcade — who ran the Underground Press Syndicate and started High Times magazine — as a provocateur and cop.

Downtown Manhattan street musician David Peel, who died of a heart attack last year at 73, wanted to stay friends with both factions. He states on Page 150 of the book, “Tom Forcade had had an altercation with Jerry Rubin. Very ugly war between Jerry and Tom — who kept changing Yippies to ‘The Zippies.’ Abbie played along with Jerry and we had the Zippies fighting each other, they had a big breakup because it’s all about powerbrokers and it’s all perpetual jealousy.”

Author Thomas, a music producer, reports that the Zippie faction believed that the Yippie leadership had grown old and had “sold out” by endorsing South Dakota Senator George McGovern for the Democratic presidential nomination against Republican incumbent Richard Nixon. Forcade also regarded Rubin and Hoffman as “burned out” after they were convicted and then acquitted as co-defendants in the notorious Chicago 7 conspiracy trial.

In addition, Forcade had a beef with Abbie Hoffman because the Yippie leader refused to pay him $5,000 for work Forcade had done for Hoffman’s bestselling 1971 paperback, “Steal This Book,” according to writer Rex Weiner. Back then, Weiner was a twentysomething Zippie sympathizer and a founder of the short-lived New York Ace underground newspaper. (Disclosure: This transplanted California writer penned a few pieces for the Ace). Weiner even organized what he called a “Counterculture Court” that tried to resolve the money conflict between Hoffman and Forcade.

Weiner said last weekend that the split between the two factions involved a “larger question” in the youth culture concerning the politics of people opposed to Nixon’s regime — whether to work “within the mainstream or become more radical. We were being targeted by the F.B.I.’s COINTELPRO [surveillance] program,” he said. “No one knew anyone to trust anymore. There was a leaflet being circulated in Miami accusing Tom Forcade of being a police agent. Tom definitely wasn’t a cop.”

Tensions were reportedly high when both radical groups showed up in Miami for the two aforementioned 1972 conventions. The Yippies stayed at the Hotel Albion, a “dumpy” sort of place, while the Zippies camped out in Flamingo Park, staying in tents, recalled Dylan “garbologist” A.J. Weberman in “Did It!”

Sporting press credentials for the Democratic and Republican National Conventions in Miami in 1972, from left, Jerry Rubin, Ed Sanders and Abbie Hoffman.

Weberman, who still considers himself a Zippie, claims in “Did It!” that Forcade and Hoffman got into “physical fights” and that Forcade, back in New York, put out a “contract” on Rubin for calling him a cop.

“Rubin suggested, ‘Well, let’s have a conference in the pizza parlor on Bleecker and MacDougal,’” Weberman is quoted saying in the book. “When he got there, he sat down at this table. And Forcade’s henchman Tim Bloom came in [and] kicked Jerry in the back, and the entire table that was anchored to the wall came loose. After that, Jerry was willing to apologize.”

Author Thomas notes in “Did It!” that Rubin issued a “public apology” to Forcade and other Zippies that appeared in the Oct. 17, 1974, issue of the Village Voice with the headline: “Jerry Rubin: Yip & Karma.” Eventually, according to Weiner, the Zippies took over the Yippie name and banner and started “Yipster Times” at 9 Bleecker St. That address was the home for decades of the Yippie headquarters and of New York Yippie leader Dana Beal until the building was sold to a boxing gym in 2014 in forfeiture proceedings. In the last few years before it became a gym, Beal and Co. remade it into the Yippie Museum and Cafe, featuring live comedy and music performances.

Aron Kay, a former East Villager known as the Yippie Pie Man, told this reporter that he became a Zippie after he arrived in New York from California in 1972, noting he traveled to Miami for the two conventions and slept in Flamingo Park. He confirmed Weberman’s account in “Did It!” and also recalled nasty infighting between the Yippies and the Zippies.

“A lot of name-calling went on,” Kay said in a telephone conversation. “People were calling Tom Forcade a cop. Jerry should have known better.”

Kay, now 68, said both he and Weberman had “tagged along” when Forcade’s sidekick Bloom went to the aforementioned pizza parlor — the Pizza Box — “attacked Jerry” and “put the squeeze on him” to apologize to Forcade.

But Kay said he wasn’t present when Weberman went after Yippie eminence Ed Sanders, author and legendary leader of The Fugs rock band in the East Village. Weberman believed that Sanders had called him a cop on a radio show and, in retaliation, said he “trashed” Sanders’s Land Rover outside his Manhattan apartment.

Weberman described the incident vividly in “Did It!” on Page 150: “Meanwhile, Ed Sanders got on the Alex Bennett Radio show and call[ed] me a cop, so we found out where he was living — and trashed his Land Rover. We put sugar in the gas tank but the f—ing sugar wouldn’t go down. We didn’t have any water so I had to piss in the gas tank to get the sugar to go down.”

Speaking to this reporter, he noted, “I could have been arrested for indecent exposure.”

As to why he would do such a thing, Weberman said it was a crazy time.

“Crazy versus crazy,” he said. “Look what happened to Forcade [who killed himself in 1978] and Hoffman [another suicide] and Jerry, who turned himself into road kill.”

Contacted by e-mail, Sanders said of Weberman: “Never called him a cop. And to my knowledge, no sugar or urine in Land Rover, which worked fine up until late 1979.”

Informed of Sanders’s response, Weberman stood by his story. So did “Did It!” author Thomas, who wrote the 2012 book “Listen, Whitey! The Sights and Sounds of Black Power 1965-1975.”

“I believe A.J,” Thomas said. “These [Yippie] guys are very macho and they don’t want to show any vulnerability.”

A Yippie button from the Democratic and Republican National Conventions in 1972. Photo by David Peller

Thomas said some of the old-time Yips were also complaining about his choice of Rubin for the book instead of Abbie Hoffman. He said the issue arose when satirist Paul Krassner, former editor of The Realist, wrote a review of “Did It!” for the Los Angeles Times in August, and sent it to “Yippies around the country, who said they loved Abbie more. Then they started arguing among themselves.”

Writer Judy Gumbo, widow of Yippie Stew Albert, who was Jerry Rubin’s best friend, said she had heard about the bickering over the book by men in their twilight years and “it turns my stomach. In the Trump era, we don’t need to be bickering among ourselves,” she said, in a conversation from her home in Berkeley.

Gumbo, one of the few original female Yippies, said Rubin and Hoffman had a “Cain and Abel relationship — they loved each other dearly but were ego competitive,” she said. “It was a bromance that had a lot of conflict in it.”

She doesn’t believe Rubin “sold out” to Wall Street, as some of his old cronies think he did.

“It wasn’t as much of a switcheroo as some people think,” she said. “He never was a stockbroker. He was a publicist for what’s been called socially responsible investing.”

Gumbo, who was a fundraiser and vice president of development for the Oregon affiliate of Planned Parenthood when she and Albert lived in Oregon, said she didn’t know about the battles between the Yippies and Zippies. But she went to Miami in 1972 with a women’s group and became aware of paranoia rampant at the Hotel Albion where she and the Yippies stayed, among them Rubin, who was decidedly nervous.

Paranoia was something Gumbo also experienced while living with Albert in the Catskills in 1974. She found an F.B.I. tracking device in their house.

On Sat., March 24, at Unoppressive Non-imperialist Bargain Books, at 34 Carmine St., from 4 p.m. to 5:30 p.m., author Pat Thomas will be in the Village for a book-signing of “Did It! From Yippie to Yuppie: Jerry Rubin, An American Revolutionary.” Paul Krassner will also be live-streamed into the shop to discuss the history of the Yippies and Thomas’s coffee-table book on Rubin. The event is free.

Jim Drougas, the bookstore’s owner, reflected, “I’m a little jealous that we don’t have such a juicy book about Krassner himself or even Abbie or especially Irwin,” referring to his friend the late standup comic “Professor” Irwin Corey. “But it is a nice collection of so much interesting material.”


The Hobo Ethical Code of 1889


Who wants to be a billionaire?

A few years ago, Forbes published author Roberta Chinsky Matuson’s sensible advice to businesspersons seeking to shoot up that golden ladder. These lawful tips espoused such familiar virtues as hard work and community involvement, and as such, were easily adaptable to the rabble: artists, teachers, anyone in the service industry or non-profit sector...

It must pain her that so many billionaires have been behaving so badly of late. Let’s hope so, anyway.

While there’s nothing inherently wrong with aspiring to amass lots of money, the next generation of billionaires are playing fast and loose with their souls if their primary role models are the ones dominating today’s headlines.

Wouldn’t it be grand if they looked instead to the Hobo Ethical Code, a serious standard of behavior established at the Hobo National Convention of 1889?

Given the peripatetic lifestyle of these migratory workers, it was up to the individual to hold him or herself to this knightly standard. Hoboes prided themselves on their self-reliance and honesty, as well as their compassion for their fellow humans.

The environment and the most vulnerable members of our society stand to benefit if tomorrow’s billionaires take it to heart.

The Hobo Ethical Code

1. Decide your own life; don’t let another person run or rule you.

2. When in town, always respect the local law and officials, and try to be a gentleman at all times.

3. Don’t take advantage of someone who is in a vulnerable situation, locals or other hobos.

4. Always try to find work, even if temporary, and always seek out jobs nobody wants. By doing so you not only help a business along, but ensure employment should you return to that town again.

5. When no employment is available, make your own work by using your added talents at crafts.

6. Do not allow yourself to become a stupid drunk and set a bad example for locals’ treatment of other hobos.

7. When jungling in town, respect handouts, do not wear them out, another hobo will be coming along who will need them as badly, if not worse than you.

8. Always respect nature, do not leave garbage where you are jungling.

9. If in a community jungle, always pitch in and help.

10. Try to stay clean, and boil up wherever possible.

11. When traveling, ride your train respectfully, take no personal chances, cause no problems with the operating crew or host railroad, act like an extra crew member.

12. Do not cause problems in a train yard, another hobo will be coming along who will need passage through that yard.

13. Do not allow other hobos to molest children; expose all molesters to authorities…they are the worst garbage to infest any society.

14. Help all runaway children, and try to induce them to return home.

15. Help your fellow hobos whenever and wherever needed, you may need their help someday.

Related Content:

How to Live a Good Life? Watch Philosophy Animations Narrated by Stephen Fry on Aristotle, Ayn Rand, Max Weber & More

The Power of Empathy: A Quick Animated Lesson That Can Make You a Better Person

Rules for Teachers in 1872 & 1915: No Drinking, Smoking, or Trips to Barber Shops and Ice Cream Parlors

Ayun Halliday is an author, illustrator, theater maker and Chief Primatologist of the East Village Inky zine.  Her play Zamboni Godot is opening in New York City in March 2017. Follow her @AyunHalliday.


The Hell that is filename encoding


By far, the worst part of working on beets is dealing with filenames. And since our job is to keep track of your files in the database, we have to deal with them all the time. This post describes the filename problems we discovered in the project’s early days, how we address them now, and some alternatives for the future.

What is a Path?

What would you say a path is? In Python terms, what should the type of the argument to open or os.listdir be?

Let’s say you think it should be text. The OS should tell us what encoding it’s using, and we get to treat its paths as human-readable strings. So the correct type is unicode on Python 2 or str on Python 3.

Here’s the thing, though: on Unixes, paths are fundamentally bytes. The arguments and return types of the standard Posix OS interfaces open(2) and opendir(2) use C char* strings (because we still live in 1969).

This means that your operating system can, and does, lie about its filesystem encoding. As we discovered in the early days of beets, Linuxes everywhere often say their filesystem encoding is one thing and then give you bytes in a completely different encoding. The OS makes no attempt to avoid handing arbitrary bytes to applications. If you just call fn.decode(sys.getfilesystemencoding()) in an attempt to turn your paths into Unicode text, Python will crash sometimes.
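A minimal way to see the crash for yourself (the filename here is made up, but the pattern matches what we kept seeing in bug reports: Latin-1 bytes on a system that claims UTF-8):

```python
import sys

# Bytes a Linux filesystem can happily hand back: Latin-1-encoded "café.mp3",
# even though the OS claims its filesystem encoding is UTF-8.
fn = b"caf\xe9.mp3"

print(sys.getfilesystemencoding())       # typically 'utf-8'
fn.decode(sys.getfilesystemencoding())   # UnicodeDecodeError: invalid continuation byte
```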

So, we must conclude that paths are bytes. But here’s the other thing: on Windows, paths are fundamentally text. The equivalent interfaces on Windows accept and return wide character strings—and on Python, that means unicode objects. So our grand plan to use bytes as the one true path representation is foiled.

It gets worse: to use full-length paths on Windows, you need to prefix them with the four magical characters \\?\. Every time. I know.

This contradiction is the root of all evil. It’s the cause of a huge amount of fiddly boilerplate code in beets, a good number of bug reports, and a lot of sadness during our current port to Python 3.

Our Conventions

In beets, we adhere to a set of coding conventions that, when ruthlessly enforced, avoid potential problems.

First, we need one consistent type to store in our database. We picked bytes. This way, we can consistently represent Unix filesystems, and it requires only a bit of hacking to support Windows too. In particular, beets encodes all filenames using a fixed encoding (UTF-8) on Windows, just so that the path type is always bytes on all platforms.

To make this all work, we use three pervasive little utility functions (sketched after the list below):

  • We use bytestring_path to force all paths to our consistent representation. If you don’t know where a path came from, you can just pass it through bytestring_path to rectify it before proceeding.
  • The opposite function, displayable_path, must be used to format error messages and log output. It does its best to decode the path to human-readable Unicode text, and it’s not allowed to fail—but it’s lossy. The result is only good for human consumption, not for returning back to the OS. Hence the name, which is intentionally not unicode_path.
  • Every argument to an OS function like open or listdir must pass through the third utility: syspath. Think of this as converting from beets’s internal representation to the OS’s own representation. On Unix, this is a no-op: the representations are the same. On Windows, this converts a bytestring path back to Unicode and then adds the ridiculous \\?\ prefix, which avoids problems with long names.
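The real beets implementations handle more edge cases, but assuming UTF-8 as the fixed internal encoding described above, the contract of the three helpers looks roughly like this (a sketch, not the actual beets code):

import os
import sys

WINDOWS = sys.platform == "win32"

def bytestring_path(path):
    # Coerce any incoming path to the internal bytes representation.
    if isinstance(path, bytes):
        return path
    return path.encode("utf-8")

def displayable_path(path):
    # Lossily decode for logs and error messages only; never hand the
    # result back to the OS.
    if isinstance(path, bytes):
        return path.decode("utf-8", errors="replace")
    return path

def syspath(path):
    # Convert the internal representation to whatever the OS expects.
    if not WINDOWS:
        return path  # Unix: bytes in, bytes out.
    text = path.decode("utf-8") if isinstance(path, bytes) else path
    if not text.startswith("\\\\?\\"):
        # The magic prefix lifts the path-length limit; it needs an
        # absolute path.
        text = "\\\\?\\" + os.path.abspath(text)
    return text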

It’s not fun to force everybody to use these utilities everywhere, but it does work. Since we instituted this policy, Unicode-related bugs still happen, but they’re not nearly as pervasive as they were in the project’s early days.

Must It Be This Way?

Although our solution works, I won’t pretend to love it. Here are a few alternatives we might consider for the future.

Python 3’s Surrogate Escapes

Python 3 chose the opposite answer to the root-of-all-evil contradiction: paths are always Unicode strings, not bytes. It invented surrogate escapes to represent bytes that didn’t fit the platform’s purported filesystem encoding. This way, Python 3’s Unicode str can represent arbitrary bytes in filenames. (The first commit to beets happened a bit before Python 3.0 was released, so perhaps the project can be forgiven for not adopting this approach in the first place.)
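For example (a sketch assuming a UTF-8 filesystem encoding, not beets code), os.fsdecode applies the surrogateescape handler so that un-decodable bytes survive a round trip through str:

import os

raw = b"caf\xe9.mp3"  # bytes a Unix filesystem might hand over

name = os.fsdecode(raw)  # the stray 0xE9 becomes the lone surrogate U+DCE9
assert name == "caf\udce9.mp3"

assert os.fsencode(name) == raw  # the exact original bytes come back out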

A few lingering details still worry me about surrogate escapes:

  • Migrating people’s databases containing old bytes paths to surrogate-escaped strings won’t exactly be fun.
  • Might surrogate escapes tie us too much to the Python ecosystem? What happens when you try to send one of these paths to another non-Python tool that interacts with the same filesystem?
  • People in the Python community have misgivings about the current implementation of surrogate escapes. Nick Coghlan summarizes. We’ll need to investigate the nuances ourselves.

Require UTF-8 Everywhere

One day, I believe we will live in a world where everything is UTF-8 all the time. We could hasten that glorious day by requiring that all paths be UTF-8 and either rejecting or fixing any other filenames as they come into beets. For now, though, this seems just a tad user-hostile for a program that works so closely with your files.

Pathlib

We could switch to Python 3’s pathlib module. We’d still need to choose a uniform representation for putting these paths into our database, though, and it’s not clear how well the Python 2 backport works. But we do have a ticket for it.
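For illustration, here's a hypothetical sketch (assuming a Unix system with a UTF-8 filesystem encoding): pathlib paths are text-based, so a bytes column in the database would still need an explicit conversion such as bytes(p), which round-trips via os.fsencode.

from pathlib import Path

p = Path("music") / "caf\udce9.mp3"  # surrogate-escaped text, as above

assert bytes(p) == b"music/caf\xe9.mp3"  # what we might store in the database
assert Path(bytes(p).decode("utf-8", "surrogateescape")) == p  # and read back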

Why Pre-Order Food Then Wait in Line


Thanks to technology, no one has to wait in line anymore. Some people do it anyway.

Every day, Mitchell Burton orders and pays for an Italian B.M.T. sandwich on his Subway mobile app, so the sandwich is waiting at the counter. When he arrives, the 32-year-old Baton Rouge, La., parks and recreation worker frequently heads to the back of the line, to avoid seeming rude to less tech-savvy fellow customers.

Line skippers sometimes “get the stink eye,” he says, because fellow patrons don’t understand that there’s an app to order ahead. “I generally do not want to seem like an ass,” he says.

Various ways to skip lines have gained momentum in recent years, as businesses ranging from retailers to movie theaters have come up with ways for customers to avoid a wait, often with mobile apps and ordering kiosks. Amazon.com Inc. in January opened a convenience store without checkout lines in Seattle, while Starbucks Corp. last year said it was testing a coffee bar in its headquarters that accepts only mobile orders.

In theory, order-ahead technology should appeal to everyone. Restaurants can better manage the flow of customer traffic, and patrons can schedule their takeout orders and bypass cash-register bottlenecks. People downloaded food and drink apps more than 155 million times in the U.S. in 2017, up 35% on the year, according to market-data firm App Annie.

Stephanie Morris Knoder, owner of Pure Organic Kitchen & Juicery on Vashon Island, Wash., launched an ordering app last year, and says nearly all of the customers at her plant-based, gluten-free eatery in the Seattle suburb love it.

Nearly all. One customer told Ms. Knoder that standing in line—and the resulting social interaction—was why she came.

“You can’t keep us from talking to you, we’re right here,” Ms. Knoder says she told the customer. She adds: “We want to cater to people wanting to have that interaction.”

Some line lovers say technology gets in the way of the personal touch. That’s why Al DiSalvatore sometimes puts his phone down and lines up the old fashioned way at coffee shops in Philadelphia. He likes when the baristas remember his name and order—something that reminds him of his time living in smaller cities.

“You like that feeling of someone who knows you,” says Mr. DiSalvatore, who himself recently opened a coffee shop. He decided not to add an app and to instead use traditional loyalty points. “It’s just that sense of community.”

Lining up is part of a gauzy nostalgia for the days before smartphones, which also includes professors banning laptops in class, people stopping at the register to write checks and shoppers skipping online shopping.

It tends to be technophobes who hold the line, says Phong Vu, chief executive of Tapin2, a developer of mobile and self-serve ordering software for festivals, concerts and sporting events. At last month’s Coachella music and arts festival in Indio, Calif., use of a Tapin2-powered app took off among the young crowd after the festival advertised it on its own app, prompting festivalgoers to order merchandise ahead of time, which shipped directly to their homes.

That doesn’t explain Arjun Sethi, a partner at Silicon Valley venture firm Social Capital, who says he has avoided using mobile apps to order. Instead, especially while traveling, he goes out and decides in the moment what to eat. “When I actually stand in line for something, it’s for the experience,” he says. If he does order via app, he typically gets it delivered.

Erik Fairleigh, 38, who works in communications at Amazon, also has a simple reason for sometimes joining the line. “I like to pay in cash,” he says.

When Ray Reddy co-founded order-ahead app company Ritual Technologies Inc. in 2014, some restaurant workers balked at the idea of giving some customers preferential treatment. He pacified them, he says, by explaining the app as creating a “digital line.”

André Treiber, an Austin, Texas-based campaign manager, ran into a problem this week after ordering soup for pickup. The restaurant made the 26-year-old stand in line, then sold his pre-ordered chicken soup with rice to the couple in front of him. He and his wife ended up getting smoothies and a side salad instead.

Typically, though, “it’s a happy experience, and no one has the soup sold out from under them,” Mr. Treiber says.

Ashleigh Azzaria, a 34-year-old Palo Alto, Calif., event designer, typically chooses to wait in line for coffee at Starbucks, even though she has the mobile app installed and skips the line for bigger orders.

“It’s my break,” she says. “It’s my time to just kind of decompress, to not be on the phone.”

Plus, it ensures her order of a signature Double Shot comes out just the way she wants it, since she often has to talk through the process with the barista: brew two shots of espresso, shake it with a little bit of raw sugar and then pour a small amount of milk on top to seep through over ice. “Lots of the time, they don’t know what that means,” she says.

Write to Laura Stevens at laura.stevens@wsj.com

De-Googling my phone


I’ve been a professional Free Software developer in the GNU/Linux area for 14 years now, and a hobbyist developer and user for much longer. For some reason that never extended much to the smartphone world, beyond running LineageOS on my older phones (my current Sony Xperia is still under warranty and I’m fine with the officially supported Android), and various stabs at using the Ubuntu phone (RIP!).

On a few long weekends this year it got a hold of me, and I had a look over the Google fence to see how Free Software is doing on Android and how to reduce my dependency on Google Play Services and Google apps. Less because I would actually severely distrust Google, as they have a lot of business and goodwill to lose if they ever majorly screw up; but more because of simple curiosity and for learning new things. I want to note down my experience here for sharing and discussing.

I started experimenting on my old Nexus 4 by completely blanking it and installing current LineageOS 14.1 without the Google apps. This provides a nice testing ground that is completely free of any proprietary Google stuff. From that I can apply good solutions on my “production” Xperia.

Alternative app stores and free apps

The number one must-have is of course to install the F-Droid app, which opens the door to a surprisingly large and good world of Free Software apps. With that I was already able to replace a good deal of my installed apps with free alternatives:

  • K9 Mail, Telegram, and OsmAnd+ are also available on F-Droid, which I find preferable as they rebuild everything from source and thus the binaries seem a bit more trustworthy.

  • I previously used Opera as a web browser, because it is relatively lightweight (important on my previous phone) and has a really good built-in ad blocker. But these days Firefox is fast and good enough, so I replaced it with Fennec, which is more or less Firefox with some non-free bits removed. After installing uBlock Origin I’ve never looked back.

  • Some apps have fairly straightforward replacements:

  • Some other apps like Google+ and Pocketliga just got replaced with bookmarks to web sites, which are good enough. This includes Google Maps; I usually prefer and use OsmAnd for walking, biking, and general navigation/routing. But for finding commercial things Maps is usually better; however, I use it rarely enough to not get too annoyed by the much higher latency of the web page.

There are also some apps that I want (like WhatsApp, Nextbike, or Quizduell) that don’t have free software alternatives. Uptodown is an alternative app store that does not require Play Services and has a lot of apps like the above or the Deutsche Bahn Navigator (but that requires Play Services by itself, so that doesn’t help that much).

Email

This was also easy. I’ve run my own mail server forever, use mutt on the desktop, Rainloop for the occasional webmail access, and the gorgeous K9 on the phone. I have always loathed the GMail web UI, so I never got into the habit of using GMail much, except as a sacrificial spam address for random web shops. It turns out that GMail is surprisingly simple to use without the GMail app and Play Services - just create an app password and use it in a new K9 mail account. This also simplifies things: there’s no need to deal with two different email apps.

Notes and TODO lists

I’ve used Google Keep for a long time for notes and TODO lists. The Android app and widget, and the web UI have a nice no-nonsense approach and work just fine. Perhaps surprisingly this was the thing that took me longest to replace: there is a plethora of notes/TODO apps around, and I’ve tested quite a few of them.

During that I discovered Org mode, a very powerful and flexible plaintext-based system (a bit like Markdown in spirit, but different syntax and much more powerful). Primarily it is an Emacs extension, but there is a reasonably good vim-orgmode plugin for writing on the desktop (which I much prefer over typing in the Keep web UI!), and Orgzly for Android that has a pleasant UI for maintaining notes, and also a nice widget for putting my TODO list on the starter screen.

Neither of these can sync with other devices, but fortunately this (rather independent) task also has a nice solution, see below.

I’ve happily used these tools for two months now. vim-orgmode fits my usual workflow on the desktop much better, and on the phone the experience is not significantly different (I don’t use the more powerful Org features - yet).

File syncing

While looking for a way to synchronize my Org notes between phone and laptop, I stumbled over the really magnificent Syncthing project. It’s a general-purpose many-to-many file sync solution using modern, standard encryption and authentication technology, and it is readily available on Debian (for my server), Fedora (for my laptop), and F-Droid (for my phone). Installation and configuration through the web and Android UIs are straightforward, and it can just as easily be used to back up your photos or sync music or ebooks (for you Dropbox- or Google Drive-using types - I use neither, though).

The main disadvantage to Keep or other Google apps is that there is no push notification on changes - all instances of Syncthing poll, so there is some delay in propagating updates. Also, Android keeps nagging with a “Syncthing is running in the background” notification, but fortunately it can be toned down enough in Android 8 to not get in the way any more. Power usage is obviously something to watch out for here, but I haven’t noticed any significantly worse battery drain so far.

A few years ago I ran ownCloud for quite some time, mostly to store my contacts and calendar. But it takes a non-trivial amount of setup, maintenance, and resources on the server and has a rather high influx of security issues, so I went back to Google eventually.

Today I discovered Radicale, which is much more appealing: It is a lightweight Python module that provides a CalDAV/CardDAV API for a bunch of standard *.ics (calendar) and *.vcf (contact) files. For the most part its documentation is excellent, and I set up the service and Apache reverse proxying on my server in no time.

The only thing that is not documented well is how to import existing ics/vcf files (exported from Google Calendar/Contacts). Turns out one can just use a standard HTTP/WebDAV PUT request:

curl -u 'martin:myPassWord' -X PUT https://my.server.de/radicale/martin/contacts/ --data-binary @contacts.vcf

(the same thing with exported *.ics for calendar).
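If you’d rather script the calendar import than use curl, the equivalent PUT looks roughly like this (a sketch using the third-party requests library; the collection URL, filename, and credentials are placeholders mirroring the contacts example above, not values from a real setup):

import requests

with open("calendar.ics", "rb") as f:
    resp = requests.put(
        "https://my.server.de/radicale/martin/calendar/",  # hypothetical collection URL
        data=f,
        auth=("martin", "myPassWord"),
    )
resp.raise_for_status()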

On Android I installed DAVdroid, where I just added the Radicale root URL (with user/password), then saw my two contact/calendar sources and enabled them. I disabled the Google account on the phone instead. Yay, once again I own my data. :-)

This does not provide a web UI for the calendar/contacts. I rarely used https://calendar.google.com, and essentially never https://contacts.google.com, so I suppose I won’t actually need to find a replacement. But if so, I might take a look at Roundcube to replace Rainloop, as the former has a plugin for calendars. I don’t have Thunderbird, Evolution, or anything similar installed, as that’s just too heavyweight for the rare cases where I deal with my personal calendar on the laptop.

Remaining items

The one remaining app on my phone that needs Google Play Services is now the Deutsche Bahn Navigator. I like and use that a lot, both for information and for buying tickets. On my next trip I’ll give trainline.eu a go, which looks very promising: same price as https://bahn.de directly, they consider my BahnCard, don’t need Play Services, and can also book trains in other countries.

There are also some warts with other apps (Quizduell and swa Fahrinfo) that occasionally complain about missing Play Services, although they work well enough without. In Nextbike one loses the live map of available bikes, which is quite important; a reasonable alternative is to look that up on their website, which provides the same map. Actually booking a bike and all the other functionality works fine without Play Services.

Update: OpenBikeSharing is a nice replacement for the Nextbike map. It’s free software and uses the free citybik.es API. Thanks to Tim Small for pointing this out!

Conclusion

I’ve learned a lot of new things during the above, so this was a worthwhile experiment beyond mere political and idealistic reasons. However, setting up such a “Google-free” phone requires a significant amount of time, dedication, and skill (not to mention a personal server), definitely far beyond what I’d trust my mother to comprehend. But it’s reassuring to know that, if push comes to shove, it’s definitely still possible to have a useful smartphone that runs mostly FOSS and no proprietary Google stuff!

Over 400 Startups Are Trying to Become the Next Warby Parker


James McKean wants to revolutionize the manual toothbrush. It's January 2018. The 31-year-old MBA candidate at the University of Pennsylvania's Wharton School whirls his laptop around to show me the prototype designs. Bristle, as the product might be called, has a detachable head and a colorful pattern on the handle--like faux wood grain, flowers, or plaid. Customers would pay somewhere around $15 for their first purchase, and then get replacement heads, at $3 or $4 a pop, through a subscription service.

There are a few reasons McKean likes this plan. A Bristle subscription would be more convenient than going to CVS when you need a new toothbrush--you'd order online, set your replacement-head frequency, and forget about it. Also, Bristle brushes are friendlier looking than, say, Oral-B's spaceship-like aesthetic. "To me, brushing your teeth is such an intimate act. You engage with these products by putting them in your mouth," he says. A toothbrush, he adds, is "almost an extension of your individuality."

A former McKinsey consultant and private equity investor from Utah, McKean caught the entrepreneurial bug from watching his clients. We're sitting in a small study room at Wharton's Hunts­man Hall, named after a fellow Utahn, the late industrialist Jon M. Huntsman. When it was established in 1881, Wharton became the world's first business college. Its alumni, in addition to Huntsman, include Elon Musk, Google CEO Sundar Pichai, hedge fund billion­aire Steven Cohen, and Donald Trump.

For most of its history, Wharton's reputation has been built on turning out the world's finest spreadsheet jockeys. But, a few years ago, four students met at Wharton and started a company that would help ignite a startup revolution: Warby Parker. The concept: selling eye­glasses directly to consumers (DTC) online. Few thought the idea would work, but today Warby is valued at $1.75 billion, and its founding story has become a fairy tale at Wharton. Co-founders and co-CEOs Neil Blumenthal and Dave Gilboa give guest lectures at the business school--as does Jeff Raider, the third Warby co-founder, who went on to help hatch Harry's, a DTC razor brand.

Wharton, in turn, has become a sort of incubator of DTC companies in product categories as diverse as lingerie, sofas, and, if McKean gets his way, manual toothbrushes. Wharton is by no means the only place such companies originate, but it is the most fertile ground--a fact that's not lost on venture capitalists. "I've basically pitched a tent outside of Wharton," says Andrew Mitchell, who founded the venture capital firm Brand Foundry to invest in digital-first con­sumer businesses.

The appeal of the DTC movement goes like this: By selling directly to consumers online, you can avoid exorbitant retail markups and therefore afford to offer some combination of better design, qual­ity, service, and lower prices because you've cut out the middleman. By connect­­ing directly with consumers online, you can also better control your messages to them and, in turn, gather data about their purchase behavior, thereby enabling you to build a smarter product engine. If you do this while developing an "authentic" brand--one that stands for something more than selling stuff--you can effectively steal the future out from under giant legacy corporations. There are now an estimated 400-plus DTC startups that have collectively raised some $3 billion in venture capital since 2012.

If Wharton has become the spiritual center of the DTC startup movement, David Bell is its guru. A tall and tousle-haired Kiwi who comes off more like an edgy creative director than a professor, Bell has advised the founders of and invested in most of the DTC startups with Wharton roots. An expert in digital marketing and e-commerce, Bell first got a taste for investing when Jet.com founder Marc Lore (another Wharton alum, now at Walmart) invited him to put early money into his first startup, Diapers.com. When the Warby Parker founders were still in school and conceiving their company, the professor helped them refine its home-try-on program, arguably the key to getting people to purchase glasses online.

Bell sees an almost limitless potential for more companies to challenge the old guard by following the Warby play­book. "If you went to your kitchen, your bed­room, your bathroom, your living room, and you went through all the stuff that was in there, from your toothbrush to your sheets and towels and curtains--you name it--it could all be Warby-ed."

Not all Wharton professors have the same optimism. Kartik Hosanagar, a Wharton professor of technology and digital business, has also put his own money into several student startups, but he worries that the opportunities to build large-scale DTC brands online are limited, because what worked a few years ago may no longer be possible. "I keep complaining that I don't want to hear another pitch from a student that's like 'the Warby Parker of so-and-so,' " he says. "I think there's a reckoning coming for these people. These venture-funded companies trying to scale are going to find out there's just no way to make the numbers work."

"If you went to your kitchen, your bedroom, your bathroom, your living room, and you went through all the stuff that was in there--it could all be Warby-ed."--David Bell, Wharton professor

Over the course of several months, I met with dozens of young entrepreneurs at Wharton and beyond hawking napkins, suitcases, mattresses, and tampons. They all offered to connect me with other companies, which sell razors, bras, strollers, and much more. Two themes emerged. One, nearly every product category will see at least one DTC challenger. And two, largely because of that proliferation, it's harder than ever to build a big, profitable business with the Warby model.

Not All Product Categories Are Created Equal

Perhaps you've heard a story like this one. A guy goes to a department store looking for underwear and finds himself befuddled by the selection. What's the difference between the $30 pair and the $3 pair? Between the Dri-Stretch and the Climalite pairs? Why does he have to be standing in this store, anyway? Light bulb: The underwear business is broken.

The underwear epiphany happened to Jonathan Shokrian, founder of MeUndies, a Los Angeles-based DTC underwear company whose CEO, Bryan Lalezarian, is another Wharton alum (2012). For Jen Rubio, co-founder of the luggage maker Away, it happened when her suitcase broke on a trip and, on trying to replace it, she realized there was a gap in the market between expensive designer suitcases and low-quality cheap ones. A former Warby Parker employee, she saw an opportunity to offer a better suitcase at a better price, and sell it online. She teamed up with another Warby alum, Steph Korey, and has since raised $31 million in venture capital from the likes of Forerunner Ventures, an aggressive DTC investor.

It might be easy to discount these founding legends as trumped-up mythol­ogy, but Jesse Derris believes they repre­sent the first step in building a great new consumer brand. Derris is founder of the public relations agency Derris, which earned its DTC cachet by making Warby famous. Derris has since worked with dozens of other DTC companies to estab­lish their identities, which all share a core narrative. "I believe I'm getting ripped off by X, so I launched a brand to solve the pain point," says Derris. "I sometimes call it a Seinfeld-ism. It's there, everybody's thinking the same thing, but nobody has verbalized it."

Bell, the Wharton marketing professor, has a different characterization of what DTC companies exploit: "Millennialization." Twenty- and 30-something consumers are digital natives with lots of buying power who have no attachment to mall brands and big-box stores. Since these founders are usually Millennials themselves, DTC companies speak the mother tongue--Instagram, experiential marketing, brands as lifestyles. Away's suitcase, says Bell, "is a decent-enough product"--he describes it as a 7 or 8 out of 10--"but the marketing is 10 out of 10. The way it's priced, the way it's distributed, the way it's promoted, the way it's targeted, the way it's positioned--that's really the secret sauce that makes the thing go."

Whether a DTC startup can actually deliver better value than its predecessors, says Warby's Blumenthal, depends on how broken the existing market is. In his case, he learned that the eyewear market was dominated by one giant conglomerate, Luxottica, which makes everything from Ray-Bans to Oakleys. "The market charges too much for glasses, and that was due to a consolidation of power within the industry that had been built up over decades," Blumenthal says, explaining that Warby was able to come in and charge $95 for a $500 product. Harry's and Dollar Shave Club saw a similar opening in the razor industry, where Gillette commanded more than 70 percent of the market worldwide, according to Euromonitor.

But, Blumenthal adds, "there aren't many industries with those dynamics." Take homewares, for example--table­cloths, bedding, flatware. Rachel Cohen and Andres Modak, life partners and co-founders of three-year-old DTC home-goods company Snowe, landed on the idea for their company when they moved to New York City after both graduated from Wharton in 2012. They wanted simple but chic home decor at reasonable prices, but didn't want to buy the same West Elm stuff that all of their friends had.

"Our product is a product you would get at a super high-end home store, but we sell it for a 75 percent lower price," Cohen says. That sounds compelling, but when I pull up Snowe's website, the first product I look at--neutral-colored linen napkins made from a coveted natural fiber called Belgian flax--costs $36 for a set of four. On West Elm's website on the same day, a similar-looking set of four Belgian flax linen napkins costs $18 to $24. When I point out the discrepancy, Modak explains that Snowe's napkins are higher quality. But since Snowe sells only online, it's impo­ssible for me to experience the dif­ference without ordering the product. So how do you get the message across? "It's hard," concedes Modak.

Perhaps, as one industry observer put it to me, Snowe is "a brand in search of a problem that doesn't exist." That is, the home-goods market isn't fundamentally unfair in the way that eyeglasses and razors are, making it all the more difficult for Snowe to get consumers excited about an unclear advantage. Snowe's products might indeed be world class, but they don't fit neatly into the DTC playbook.

Beware the New Middleman

Every few months, a group of startup founders and friends from the DTC and Wharton circles get together for dinner in New York City. They call themselves the Directors Council and, says Bell, who is part of the group, one of the frequent subjects is how to deal with perhaps the most vexing piece of the DTC playbook: finding customers.

Back when the DTC movement was in its infancy, Amy Jain co-founded her fashion jewelry and accessories company, BaubleBar. It was 2011, around the same time that Warby Parker launched, and it was cheap and easy to get attention and win fans. "Social media was just starting. There wasn't a lot of noise," she says. Plus, the basic argument that clever start­ups were cutting out the middleman was a novelty back then. Warby was able to use PR to great effect in its early days and position itself as the friendlier, hipper, and cheaper alternative to Luxottica's many fashion brands. Dollar Shave Club, which launched before Harry's, was able to go viral with a hilari­ous YouTube video that made razor subscriptions seem revolu­tionary.

Today, those same tactics are harder. For Jane Fisher and Jenna Kerner, 2017 graduates of Wharton and the founders of DTC bra company Harper Wilde, launching with a Dollar Shave Club-style humorous video seemed to make a lot of sense. The result--"What if Boxer Shopping Were as Frustrating as Bra Shopping?"--was good enough that a New York Times writer called it "one of the funniest videos I've ever seen." But, like most attempts to go viral, it didn't. Seven months after it launched, the video has been viewed on YouTube fewer than 6,000 times.

When they work, guerrilla tactics can jump-start a company's growth, but at some point, digital-first brands have no choice but to turn to paid search and social-media advertising. The advantages of the dominant online ad plat­forms are clear: They're inexpen­sive to set up, companies can target their desired audiences, and they can get smarter as they learn about which mes­sages and tactics work. The challenge, though, is that "those channels are increas­ingly getting more saturated and more expensive," says Bell.

"The temptation is to give the very best price, but then it becomes, 'Well, shit, we can't stay in business doing this. We have to make some money.' "--Stephen Kuhl, Burrow co-founder

The options for large-scale digital marketing are slim. "It's basically Face­book, Instagram, and Google at this point," says Daniel Gulati, a partner at Comcast Ventures, which has invested in several DTC brands. "They are able to extract more and more from advertisers, because they command so much more of con­sumers' attention than they did only a few years ago." According to a study by marketing-analytics company AdStage, during the first six months of 2017 alone, the average cost per 1,000 ad impressions on Facebook increased 171 percent, and the average cost per click increased 136 percent.

For DTC companies, the problem can be especially acute, because many product categories now have multiple upstarts armed with tens of millions of dollars of venture capital, all targeting roughly the same users and, in the process, driving one another's marketing costs up. It gets worse when the incumbents take notice and start pouring their fortunes into the same ad buckets.

What's more, says Wharton's Hosanagar, advertising on Facebook can simply become less efficient the more you use it. He saw this firsthand when he and his wife started a company called SmartyPal, which sold interactive children's books directly to consumers. As they tried targeting a wider set of people, they saw the cost of acquiring a single paying customer climb from $60 into the hundreds of dollars. It became unsustainable, and ultimately they had no choice but to change business models. "It's now more of a B2B company," he says.

"I think there's a reckoning coming for these people. These venture-funded companies trying to scale are going to find out there's just no way to make the numbers work."--Kartik Hosanagar, Wharton professor

Comcast's Gulati has a phrase for this phenomenon: "CAC is the new rent." In other words, for companies reliant on paid marketing, their digital customer acquisition cost (CAC) is a lot like paying for brick-and-mortar stores in the old model, or selling wholesale. Essentially, this undermines one of the most basic precepts of the DTC movement, that these companies are cutting out the middleman and therefore can afford to charge much less for higher-quality goods.

In fact, Facebook and Google are simply the new middlemen. Instead of paying rent to a landlord or letting a third-party retailer mark up the price of your product, many DTC companies have to pay the internet giants to be their storefront. Add to this the cost of things like shipping, returns, and great customer service, and the cost structure is not necessarily more efficient than it ever was. "The majority of these brands don't grow fast enough to warrant venture capital and many don't work out economically," Gulati says.

Spending Money Is Easy, Making It Is Hard

The key to making the economics of DTC companies work is balancing acquisition costs with a customer's lifetime value--how much the average customer spends on the company's products over the long term. There are generally two ways DTC companies try to do this. Those that offer expensive products that customers aren't likely to purchase frequently (a $295 suitcase, a $1,000 mattress) must be profitable on the first sale, and try to keep customers coming back by rolling out accessories or new product lines. Those that sell inexpensive items (razors, toothbrushes, socks) must try to lock in customers for repeat purchases, which many try to do through subscriptions. The underlying challenge, says Gulati, is that both acquisition costs and a customer's lifetime value are hard to predict. "Retention at the beginning of a company's life is anyone's guess, and startups tend to be overly optimistic about repeat rates at the outset," he says.
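As a purely illustrative back-of-the-envelope calculation (all numbers invented for the example, not drawn from any company in this story), the balance those founders are guessing at looks something like this:

# Invented figures, only to illustrate the lifetime-value-vs-CAC arithmetic.
cac = 60.0               # paid-marketing cost to acquire one customer
margin_per_order = 12.0  # gross profit on a single order
orders_per_year = 4      # assumed repeat-purchase rate
retention_years = 2      # assumed customer lifetime

ltv = margin_per_order * orders_per_year * retention_years  # $96
print(f"LTV ${ltv:.0f} vs CAC ${cac:.0f} -> ratio {ltv / cac:.1f}x")
# If the retention assumptions prove too optimistic, the ratio drops below 1
# and every new customer loses money.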

In many cases, these companies have raised large amounts of venture capital and use it to subsidize their marketing efforts. Davis Smith (Wharton 2011), co-founder of Cotopaxi, a Salt Lake City-based DTC outdoor gear manufacturer, says heavy VC investment demands aggressive marketing to pursue fast growth. "It's like a hamster wheel, and just about everybody is on it," he admits. "There are very few who are not."

Desperation leads to throwing more money at the problem. Many DTC startups have taken to subway advertisements, billboards, direct mail, podcasts, TV and radio spots--essentially all the expensive, traditional ad formats that the digital giants were supposed to replace--despite the old-school ads' inability to target consumers and track campaigns' effectiveness.

"In the beginning, I think you really underestimate how much it costs to get people to buy," says Stephen Kuhl (Wharton 2017), who founded DTC sofa startup Burrow with Kabeer Chopra (Wharton 2017). Burrow ended up having to raise its sofa price from $795 to $850, then to $950, and then to $1,095--all in a year. (The last part of the price tag increase was to improve quality and move manufac­turing to the U.S.) "The temptation is to give the very best price possible," says Kuhl, "but then it becomes, 'Well, shit, we can't stay in business doing this. We have to make some money.' "

The Future Looks Awfully Familiar

One after another, many DTC startups have come to a reali­zation: If CAC is the new rent, then why not pay actual rent? SoHo in Manhattan has become a physical manifestation of this. Within a one-mile radius, you can walk into stores belonging to a dozen DTC brands--Away luggage, Allbirds sneakers, M.Gemi shoes, Untuckit shirts, Everlane fashion, Indochino menswear, Outdoor Voices active­wear, Bonobos men's clothing, and, of course, Warby Parker.

Every one of these stores is another alternative to the Facebook and Google hamster wheel. There's a good reason that hanging out a shingle was the original customer-acquisition strategy, after all: It works.

Take, for example, Away's New York store, on a chic block near the com­pany's office. It's one of four Away stores in expensive locations around the country--NYC, Los Angeles, San Francisco, and Austin. Accented with coffee table books about exotic, Millennial-friendly desti­nations, an espresso bar, and a few suitcases displayed like sculptures on white pedestals, the store could easily be mistaken for the lobby of a mini­malist boutique hotel.

When Away first launched, the founders assumed that traditional retail would never play a part in their future. But, says Korey, the CEO, after they tested a pop-up shop, "our hypothesis turned out to be totally wrong. We had person after person coming in and being like, 'Oh, I've been on your website, but who knows what seven pounds really feels like? Oh, that is light. OK, I'll take the green set.' " Away opened a real store, tried pop-ups in other cities, and discovered that every time it opened a store in a new market, it lifted Web sales in that market. "It's almost like we're opening a profitable billboard," Korey says.

"Ninety percent of these brands are going to fail. But 90 percent of all brands fail. It's cynical to focus on it."--Jesse Derris, founder of Derris public relations

To one frequent DTC investor I spoke with, though, any young DTC company's moving into retail early in its lifecycle is a red flag that it might be overspending on online marketing. "Because if it's working online, why all the retail stores? Why not stay online and scale over time? I could see one or two stores as PR plays, but why take on all the overhead, the cost of the build-out?"

Warby Parker has famously opened locations all over the country--66 of them so far--but there's a key difference. Whereas someone might buy a new suitcase once every five years, Warby has managed to turn eye­glasses into fashion accessories that people buy over and over, to refresh their look. Not only are the stores a bill­board for the brand--to echo Korey--they also help change shopping behavior and frequency. Indeed, Warby brought in more sales from its stores last year than it did from its website.

PR maven Derris says DTC companies are realizing they "are not digital only--they're digital first." That's a major clarification: They can use the internet to get around the traditional barriers of entry, but once they've arrived, it's more like business as usual.

DTC razor brand Harry's is now selling its product through Target, the very middleman these brands claimed to be cutting out. Wholesaling means not only that Harry's gives up a large chunk of its gross margin to a big-box retailer, but also that it can't track those customers and learn from their data.

"It's just pure scale," says Wharton's Bell. "There are only so many people you can reach online, but there's a massive segment of people who are still shopping offline, and you want to be able to address that market. Target is the way to do it." In February, Harry's raised an addi­tional $112 million in venture capital to pursue a sort of new-age Procter & Gamble strategy. It recently invested in DTC hair-loss-prevention company Hims, and Harry's co-founder Raider personally invested in DTC tampon startup Lola--both of whose products you can imagine appearing on Target shelves. (Other DTC brands selling in Target include African American personal-care brand Bevel and mattress company Casper, which reportedly received $75 million in funding from the big-box retailer.)

BaubleBar is going even further. Early on, CEO Jain knew she and her co-founder, Daniella Yacobovsky, had identified a promising category for DTC. Fashion jewelry--like $35 tassel earrings and $45 resin necklaces--is a high-turnover, trend-based product with massive 90 percent margins that retailers are always hungry to sell more of. (Costume jewelry is the fashion equivalent of the gum you purchase impulsively while waiting in line at a 7-Eleven.) Why limit BaubleBar's potential customer base to those who sought out its new brand, when there was a much bigger play--supply fashion jewelry to anyone who wanted to sell it?

So the company has spent the past seven years doing just that--selling DTC through its website and then making private-label and white-label products for other designers and retailers like Target. After taking little VC funding, says Jain, the now-profitable New York City company earns 50 percent of its revenue from those other retail channels. "We don't look at our industry and say, 'We're disrupting a Luxottica.' We are building one because it doesn't exist," says Yacobovsky.

Of course, only a few companies will ever have a chance at the kind of land grab that Harry's and BaubleBar are making. "Ninety percent of these brands are going to fail," says Derris. "But 90 percent of all brands fail. That's what's meant to happen, and it's cynical to focus on it."

The Shakeout

Even as Kirsten Green of Forerunner Ventures continues to fund more DTC startups, she admits that "there are a lot of companies, so many companies, that are venture backed that shouldn't be." In many instances, she says, founders might be better off by taking less venture money, building their companies more conservatively, and ending up owning larger stakes of $50 million or $75 million businesses, than by risking everything to build an unlikely $1 billion company.

MeUndies, for example, has raised about $10 million since it launched in 2011. It became profitable early on and it still is, on sales north of $50 million. "I have friends who start DTC companies," says CEO Lalezarian, "and I often try to talk them out of raising VC for the sake of growth at all costs. You need it to get the business off the ground, but in that phase between launch and profit­ability, it can put you on an unsus­tainable track."

Many startups are already on this unsustainable track, and if Derris is right, what will become of the 90 percent that don't become the next Warby? Some will fold, as the Burrow sofa competitor Greycork did in 2017. Some will merge, especially in crowded categories like mattresses. While Dollar Shave Club sold itself for $1 billion to Unilever, many more will likely follow the path of DTC menswear company Bonobos, which last year sold to Walmart for $310 million--not exactly the dream when you consider it raised more than $127 million in venture capital.

Rather than prompting a retail revolution, perhaps this new gen­er­ation of DTC companies will become something like an innovation pipeline for the old guard they are angling to disrupt. "For legacy companies, it's a cheap way to get a new customer base, marketing insights, e-commerce expertise," posits one DTC founder. "When I think about it from a VC's view, it's a smart strategy. It's almost like a risk-free bet: Here's $1 million, and I know at minimum I'll make $5 million." Says another founder: "We all have to screw up a lot to not have at least a positive outcome."

After nearly a decade helping these startups craft their brand messages, Derris now wants in on that math. The newest arm of his PR business? A venture capital fund that, in part, invests in DTC startups. Similarly, Wharton's Bell has moved to New York to launch Idea Farm, a company that builds and invests in DTC brands, advised by major players in the DTC scene, which include the Warby co-founders.

Idea Farm is now advising a new DTC baby stroller company started by a 2017 Wharton grad. It's also hatching a bath-products company for which it hired another recent alum. In March, Bristle's McKean decided there was more to go after beyond toothbrushes--"oral care." If it pans out as he finishes his MBA, Bell says, Idea Farm might invest in that company, too. "Selling bundles of all the things you might need--the mouthwash, the gum stimulators, the toothpaste," says Bell. "It's quite interesting."

Published on: Apr 19, 2018

Ascii Art Generator

  _    _          _   _         
 | |  | |        | | | |        
 | |__| |   ___  | | | |   ___  
 |  __  |  / _ \ | | | |  / _ \ 
 | |  | | |  __/ | | | | | (_) |
 |_|  |_|  \___| |_| |_|  \___/ 
Built with all the ❤️ in the world by Sarath

Fullstack Academy is looking for developers who love teaching


Fullstack Academy is the premier software development school located in New York City and Chicago. Y Combinator-backed, our school has garnered the attention of Forbes, TechCrunch, and Business Insider, among others. The reputation of our school is built on the professional success of each and every one of our students. Students have gone on to promising careers at top-tier companies, like Google, Facebook, Amazon, and Goldman Sachs. In January 2016, we launched the Grace Hopper Program, which is a need-blind software engineering program for women.

"Fullstack Academy has been a life-changing experience" is something we hear often and the reason why we come to work everyday. As an Instructor, you will be responsible for not only teaching, but also inspiring, mentoring and encouraging. You are someone who deeply understands and loves JavaScript and in general the full-stack of web applications including technologies such as Node.js, React, HTML5, CSS and JavaScript.

Your Responsibilities

  • Teach and mentor the best students in the coding bootcamp world
  • Work alongside the energetic Fullstack Academy staff
  • Guide students through development of stellar projects
  • Facilitate a dynamic and collaborative classroom community

“A life is not important except in the impact it has on other lives.” - Jackie Robinson

Work where your daily goal is to have significant impact on the lives of others - students who travel from all over the world to attend a top-rated program that you will be responsible for building and growing. We have an open, friendly, and fun startup culture. You will work with colleagues who have founded companies and worked at Google, Yahoo!, and Gilt Groupe, among others. Our salaries and benefits are competitive, coupled with generous paid vacation and stock options. We are conveniently located in Lower Manhattan, near all major subway lines. Last, but not least, you will get an immense sense of pride and satisfaction every seven weeks, when our students graduate.


The extraordinary life and death of the world’s oldest known spider


A female Gaius villosus Rainbow. (Courtesy of Leanda Mason)

This is the story of the oldest known spider in the world and the people who knew her. The details are compiled largely from research conducted by Barbara Main and Leanda Mason, who knew her best over nearly half a century.

She was born beneath an acacia tree in one of the few patches of wilderness left in the southwest Australian wheat belt, in an underground burrow lined with her mother’s perfect silk.

Her mother had used the same silk, strong and thick, to seal the burrow’s entrance against the withering heat of the summer of 1974, and against all the flying, prodding things that prowled the North Bungulla Reserve.

She lived like that, in safety and darkness, for the first six months of her life. Then one day in the rainy autumn months, her mother unsealed the tunnel, and she left.

It’s likely that two or three dozen spiderlings left the burrow with her, and that nearly all of them soon died.

The 250-acre North Bungulla Reserve was surrounded completely by farmland and roads, abutting an abandoned gravel pit. Space was scarce under the leaves' protective shade, and competition was fierce. Most of the spider’s siblings would be eaten by birds or lizards or cannibalized by each other, or would bake to death in the sun.

But she was fortunate. She found an unoccupied patch of earth a few feet from her mother’s burrow and began to dig.

She dug an almost perfect circle straight down into the soil, just large enough to fit her body, a small fraction of an inch across. Then she lined the tunnel with silk, as her mother had lined the one in which she hatched.

For as long as she lived, this would be her only home.

She wove a silken door across the burrow's mouth, attached on one side to make a hinge. She dragged hundreds of twigs to the edge of the doorway, one by one, so that they radiated out like fan blades.

Then she went inside, closed the door and waited, likely days or even weeks, for her first real meal.

She was essentially blind but attuned to every vibration in the earth, so when she finally felt something move along the twigs — an ant or small beetle, maybe — she leaped out and pulled it in.

In this way, she caught food when it came to her, and hid from the outside when there was nothing to eat. Scientists called her Gaius villosus — one of dozens of trapdoor spider species that lived in the vanishing wilderness of the Australian wheat belt.

(Courtesy of Leanda Mason.)

After a year in her burrow, in 1975, she would have felt strange, heavy vibrations on the twigs outside her door. This often meant a predator was trampling around or a large forager like a kangaroo.

This time, though, the vibrations were caused by Barbara York Main, who was standing directly over the burrow.

Main had grown up in the wheat belt, she would tell the Australian Broadcasting Corp. years later. Throughout the 20th century, farming and industrialization had destroyed almost all the wilderness in the region — leaving patches like the North Bungulla Reserve as precious sanctuaries for the tiny species that held her fascination.

“I felt an immediate affinity with small things,” Main told the ABC. “I didn’t have that one-on-one relationship with a kangaroo that I could with caterpillars.”

Now she was a zoologist with the University of Western Australia. On that day in 1975, she knelt over the burrow, parted the twigs behind the spider's door and fixed a small metal sign into the soil.

It was engraved, "16.” A few feet away, Main had marked another burrow "1,” and deduced that 1 was 16's likely mother. And 24 and 30 were her likely sisters.

Main spent hours beneath the acacia tree, marking every burrow she could find. She was building a family tree of Gaius villosus, whose hold on the earth seemed so fragile and about which humans knew so little.

“It is also hoped ultimately to assemble complete case histories of several individual nests,” Main told the International Congress of Arachnology in her first report in 1977, according to her written paper. “A life-cycle of perhaps twenty years notwithstanding.”

But 20 years was just a guess. Among other things, she would later tell the ABC, “I wanted to know how long the spiders lived.”

The next years were hard for the spiders. A long, dry summer in 1977 wiped out a third of one year's generation, Main wrote. Hungry quail scratched open nests. A scorpion even invaded one burrow.

Still, 16 survived and grew larger, expanding her burrow every year, until it was as wide as a dime, then a quarter, and larger still.

Gaius villosus was a resilient species, Main wrote when she published her first major paper on the project, in the Bulletin of the British Arachnological Society in 1978. “Although adult nests frequently have their doors and twig-lines torn off (presumably by birds) none appear to have been seriously affected by this. The spiders reattach their doors, sometimes upside down or back-to-front and attach new twig-lines.”

She had tagged and mapped 101 villosus burrows by then, all within a few meters of each other, near the edge of the old gravel pit.

Main was impressed by how long the spiders seemed to live. Based on her estimates from their tunnel diameters and sporadic observations she had made before the study, she believed two matriarchs were at least 16 years old.

“They might be 18 or 20!” she added.

Main mentioned spider 16 only in passing in this early paper. She was only 4 years old and nothing special, yet.

One day toward the end of the 1970s, spider 16 made a rare excursion to her front yard. She wove a sort of welcome mat outside her door — a net laced with pheromones, which Main would compare to a tea doily.

Then she went back down and closed the door — waiting this time not for the skittering of prey, but for the knock of a male caller.

Main wrote in her paper about the very different lives of male and female villosus spiders. The males left their burrows as soon as they were sexually mature — around age 6 or so — to go search for doilies and knock on doors.

The males never returned to their burrows after mating, Main wrote. They wandered until they died, and all died young.

The female, however, went straight back to her burrow after sex. She sealed the door up extra tight with a finely contoured plug, crafted so precisely that it could seal out heat and rainwater, and spent the next year locked inside her home, to lay her eggs and shelter them.

The next autumn, she unsealed the burrow and sent her children off into the world, to live or die as they could.

These matriarchs were the secret to the villosus spider’s success in such an arid environment, Main noted. In their deep bunkers, they could survive droughts and fires on the surface. They could mate every few years, and continually replenish the spider population.

But even Main did not guess how much 16 would overachieve.

She spun, fed, spun and fed, and probably produced many generations of young in between many hardships. A swarm of grasshoppers ravaged the reserve in 1991, according to a paper in Pacific Conservation Biology. An invasive plant species began to spread from the gravel pit. “Biodiversity is being reduced,” the paper said.

Over the decades, spider 16’s mother, siblings and countless cousins and children died. But her family tree kept growing, and each time 16's burrow was checked, the webbing and swirl of twigs looked immaculate as ever.

In 2013, an Australian Broadcasting Corp. reporter named Vicki Laurie became intrigued by reports that an 84-year-old zoologist had been cataloguing a family of spiders for 40 straight years.

So Laurie traveled with Main, out to “an unremarkable bit of scrub” in the wheat belt, and watched her work.

“We spend three hours on our knees as Barbara checks each burrow to see if it’s occupied or not,” Laurie reported. “She observes, with a mild air of concern, how few new trapdoor burrows there are and how unseasonably dry the reserve is.”

When they reached the plaque of spider 16, Laurie was skeptical that it had really been occupied by the same spider for the last four decades. But Main explained that females never left their burrows until they died, and no other spider ever moved in.

Main flipped open the door with a small knife, and through Laurie, 16 was introduced to the world.

“Inside, I can just see the spider, which has pulled a veil of silk lining half across its burrow,” Laurie recalled in her report. “Under my breath, I introduce myself and wish her well.”

It was around this time that an undergraduate student, Leanda Mason, began to accompany Main on her excursions to the reserve.

To Mason, who was studying to be an ecologist, the reserve looked like an Eden in the blight of industrialized southwest Australia — teeming with species that might not be around much longer.

By now, Main had catalogued hundreds of spiders. Generation after generation. But on each excursion, Mason told The Washington Post, they would beeline directly to 16's door, to visit the spider that never seemed to die.

This became a tradition. On 16's 40th birthday, Mason said, she asked if she could give the spider a mealworm.

“Barbara wouldn’t let me,” she said. “It interfered with the study.”

Main's plan of cataloguing a family of spiders had succeeded far beyond her expectations, and she began to look forward to the project's end.

“She was going to finish the study when number 16 died,” Mason said. “She was going to write it up as a big thing.”

Instead, she said, Main’s health declined before the spider’s.

The zoologist retired last year, in her late 80s, and Mason, now studying for her doctorate in ecology at Curtin University, took over the spider study.

On Oct. 31, 2016, she went out to the reserve with a drone, hoping to get a bird’s-eye view of how this small rectangle of bushland was holding up against the roads and fields.

(Courtesy of Leanda Mason)

But like her mentor before her, she went straight to 16 first.

When she arrived at the clearing that day, she noticed that the twigs around the door had lost their meticulous spiral fan shape. They lay scattered in disarray.

Mason looked at the silk door, and saw a tiny hole in the center, as if something had pierced it.

She lifted the door and lowered an endoscope into the burrow, and confirmed what she already suspected. The spider was gone.

A parasitic wasp had likely broken through the seal, and laid its eggs in 16's body.

“She was cut down in her prime,” Mason said. “It took a while to sink in, to be honest.”

On April 19, Mason, Main and Grant Wardell-Johnson co-published a paper in Pacific Conservation Biology, announcing the death of spider 16 at age 43.

She was the oldest spider known to have existed, Mason wrote, eclipsing the previous record set by a 28-year-old tarantula.

“We can be inspired by an ancient mygalomorph spider and the rich biodiversity she embodied,” she wrote, beside a photo of a perfect hole beneath a tree — nearly half a century of work: Main's, hers and the spider's.


Bonobo and chimpanzee gestures overlap extensively in meaning


Abstract

Cross-species comparison of great ape gesturing has so far been limited to the physical form of gestures in the repertoire, without questioning whether gestures share the same meanings. Researchers have recently catalogued the meanings of chimpanzee gestures, but little is known about the gesture meanings of our other closest living relative, the bonobo. The bonobo gestural repertoire overlaps by approximately 90% with that of the chimpanzee, but such overlap might not extend to meanings. Here, we first determine the meanings of bonobo gestures by analysing the outcomes of gesturing that apparently satisfy the signaller. Around half of bonobo gestures have a single meaning, while half are more ambiguous. Moreover, all but 1 gesture type have distinct meanings, achieving a different distribution of intended meanings to the average distribution for all gesture types. We then employ a randomisation procedure in a novel way to test the likelihood that the observed between-species overlap in the assignment of meanings to gestures would arise by chance under a set of different constraints. We compare a matrix of the meanings of bonobo gestures with a matrix for those of chimpanzees against 10,000 randomised iterations of matrices constrained to the original data at 4 different levels. We find that the similarity between the 2 species is much greater than would be expected by chance. Bonobos and chimpanzees share not only the physical form of the gestures but also many gesture meanings.

Author summary

Bonobos and chimpanzees are closely related members of the great ape family, and both species use gestures to communicate. We are able to deduce the meaning of great ape gestures by looking at the ‘Apparently Satisfactory Outcome’ (ASO), which reflects how the recipient of the gesture reacts and whether their reaction satisfies the signaller; satisfaction is shown by the signaller ceasing to produce more gestures. Here, we use ASOs to define the meaning of bonobo gestures, most of which are used to start or stop social interactions such as grooming, travelling, or sex. We then compare the meanings of bonobo gestures with those of chimpanzees and find that many of the gestures share the same meanings. Bonobos and chimpanzees could, in principle, understand one another’s gestures; however, more research is necessary to determine how such gestures and gesture meanings are acquired.

Introduction

In a series of well-known children’s books, Doctor Dolittle was able to talk to nonhuman animals, but in reality, deciphering meaning in nonhuman communication presents a much bigger challenge. First, there is the question of whether animal signals can be said to have ‘meanings’ or merely ‘functions’. Functions are known for many animal signals: for example, various species are able to decode complex information from their conspecifics’ calls on the location or class of food or predators [1–4], level of risk [4,5], and size of predator [6]. However, for meaning, a signal needs to be produced intentionally—the signaller must aim to change the behaviour (first-order intentional) or the mental state (at least second-order intentional) of the recipient [7–9].

Mounting evidence shows that, unlike most nonhuman animals [10], great apes habitually engage in first-order intentional communication: great apes routinely direct their gestures towards a specific recipient; monitor that recipient’s attentional state and choose gestures appropriate to it; wait for the recipient to respond; and, if the recipient does not respond, they persist and elaborate with further gestures [11–17]. These criteria demonstrate that the signaller has a specific outcome in mind and uses gestures to achieve that outcome [18]. It has also been argued that to have meaning, communication needs to be ostensive, drawing attention to the fact that it is being used to communicate [19]. In developmental psychology, eye gaze is taken as an ostensive cue; the audience checking performed by great apes before gesturing serves the same ostensive function [20]. Because great apes deploy gestures intentionally, it is appropriate to go beyond simply describing their function and enquire about the intended meaning that a signaller aims to achieve by gesturing [20]. Although we focus on gestural communication, it should be noted that great apes also appear to deploy some vocal signals intentionally [21–23]. Moreover, we focus on a Gricean approach to meaning, rather than a semantic approach [24,25], given that few great ape gestures appear to be referential (but see [26]).

The second challenge is that gesture meanings must be deduced indirectly. Past studies have tackled the issue of meaning by looking at the context in which gestures occur [16,27], thereby showing that the same gesture may occur in several contexts. We have taken a different approach. By using the reaction that each gesture elicits, but only in cases where the signaller’s behaviour indicates that this reaction was their intended aim, we hope to pin down the signaller’s intended meaning for each specific gesture. The meaning of a gesture can thus be defined by the ‘Apparently Satisfactory Outcome’ (ASO), the reaction of the recipient that satisfies the signaller as shown by cessation of gesturing [28]. This method indicates the individual signaller’s intended meaning in each instance, and across many instances and individuals, one can examine the gesture’s general meaning(s) in a population. Aggregating the meanings captures population-level patterns of meaning but does not imply that meanings are conventionalised, nor does it imply any particular ontogeny for gesture meanings.

With meaning defined by ASOs, wild chimpanzees use their gestures to achieve at least 19 ASOs; that is, their gestures achieve 19 types of behavioural response from the recipient [28]. Each gesture type has a distinct (set of) meaning(s) that is calculated by comparing the distribution of meanings for a gesture type to the distribution of meanings across all gesture types [28]. Using ASOs to define the meaning of gestures is a relatively new approach, so gesture meanings have not yet been defined for our other closest living relative—the bonobo (or ‘bilia’, as the species is locally known) [29]. Our study is the first to investigate meaning in the natural gestural repertoire of wild bonobos.

Once the meanings of bonobo gestures are defined, we can examine the gestural overlap of bonobos and chimpanzees. All species of nonhuman great ape share the majority of their gestural repertoire in terms of the gestures’ physical forms. The overlap for chimpanzees and bonobos is 88%–96% [30]; for chimpanzees and gorillas, 60% [31]; and for chimpanzees and orangutans, 80% [31]. But simply using the same actions does not mean that chimpanzees and bonobos share a communication system (that is, that a chimpanzee and bonobo would in principle be able to understand one another). Only if bonobo and chimpanzee gestures share the same meanings can they be said to share the same system of communication.

Deciding that issue is not straightforward. Ape gestural repertoires are large, with over 70 distinct gestures in the chimpanzee and bonobo catalogues. In captivity, large quantities of gestural data can be collected very quickly, but the majority of it occurs during play [32,33]. Data from the wild are needed to examine the full breadth of meaning expressed in nonplayful ape communication. Previous studies have used traditional analyses of variance or goodness of fit tests, demonstrating that different individuals within a chimpanzee group use the same gesture to achieve the same outcome [28]. However, despite data sets containing thousands of gesture cases, large repertoires and the regular use of only a subset of these gestures [34] limit the number of gesture types that can be examined in this way. Furthermore, those tests are not suited to data sets in which many of the possible outcomes never occur for each signal type, as we would expect in a system of communication in which specific signals are employed for specific outcomes. We have therefore adapted methods from numerical ecology to compare the similarity between the meanings of bonobo and chimpanzee gestures. In doing so, we offer the first analysis that examines whether the overlap in the physical form of bonobo and chimpanzee gestures extends to their meaning.

Results

Bonobo gesture meanings

We analysed 2,321 intentional gesture instances (occasions on which a gesture was used) that successfully achieved an ASO. These instances concerned 33 gesture types (categories of gestures that share the same physical form) [30,31] (S1 Table) and 14 different ASOs: ‘Acquire object/food’, ‘Climb on me’, ‘Climb on you’, ‘Contact’, ‘Follow me’, ‘Initiate grooming’, ‘Mount me’, ‘Move closer’, ‘Reposition’, ‘Initiate copulation’, ‘Initiate genito-genital rubbing (GG-rubbing)’, ‘Travel with me’, ‘Move away’, and ‘Stop behaviour’. The first 12 of these ASOs served to initiate or develop an activity, and the last 2 served to stop an activity. Of the 33 gesture types, 17 had only a single ASO, 6 had 2 ASOs, and 10 had >2 ASOs (Fig 1). The mean number of ASOs per gesture type was 2.27 ± 1.84 (median = 2, range 1–8).

Fig 1. Proportional stacked histogram for ASOs achieved by each gesture type (values from S1 Table).

ASOs are coloured in a gradient adjacent to similar ASOs, and gesture types are arranged adjacent to those with similar profiles. ASO distributions for chimpanzee gestures were reported in [28]. ASO, Apparently Satisfactory Outcome.

https://doi.org/10.1371/journal.pbio.2004825.g001

Next, in accordance with [28], we used a series of ANOVAs to analyse whether the distribution of ASOs for a given gesture type differed from the average distribution of ASOs across all gesture types (Figs 2–5). Fifteen gesture types were suitable for analysis, having been used by at least 3 individuals at least 3 times to achieve an ASO or ASOs (see Materials and methods for more information). If gestures were achieving outcomes at random, we would expect no difference between the distribution of a given gesture type and the average distribution across all gesture types. All but 1 gesture type (Object shake) showed significant deviation from the average distribution. Bonobo gesture types, like chimpanzee gesture types [28], do have distinct (sets of) meanings.

Figs 2–5. All ASOs are given for each gesture type, in descending order from most to least frequent, as a percentage of all instances for all ASOs included in analysis (values in S1 Table, raw data in S1 Data). For bonobos, results for ANOVA are given in square brackets, e.g., [N(n): ANOVA results], with N as number of individuals and n as number of gesture instances (for age and sex of contributing individuals, see S2 Table); a significant effect shows that gesture usage differs from the average distribution of gesture frequencies. For chimpanzees, ANOVA or chi-squared analyses were performed [28]; square brackets contain published results. Underlined ASOs are shared by both chimpanzees and bonobos for that gesture type. This chi-squared analysis was conducted in Hobaiter & Byrne 2014 after checking and finding no effect of signaller identity on gestural meaning. We have included it for comparison but recognise that chi-squared analyses risk pseudoreplication. For analyses, we combined several ASOs from [28]: ‘Initiate copulation’ includes ‘Sexual attention—female’ and ‘Sexual attention—male’; ‘Initiate grooming’ includes ‘initiate grooming’ and ‘direct attention’; ‘Travel with me’ includes ‘Travel with me (adult)’ and ‘Travel with me (young)’. ASO, Apparently Satisfactory Outcome.

https://doi.org/10.1371/journal.pbio.2004825.g002
https://doi.org/10.1371/journal.pbio.2004825.g003
https://doi.org/10.1371/journal.pbio.2004825.g004
https://doi.org/10.1371/journal.pbio.2004825.g005

Comparison of bonobo and chimpanzee gesture meanings

Using a randomisation procedure, we tested the null hypothesis that the similarity between the 2 species (Fig 6) would be the same under a random assignment of gestures to ASOs for each species (see Materials and methods). We compared 4 different methods of matrix permutation (R code in S1 Data), generating gesture-to-ASO assignment matrices with (a) no constraints (least conservative), (b) constraints on the column sums, (c) constraints on the row sums, and (d) constraints on both column and row sums (most conservative), none of which produced a pair of matrices that were more similar than the original data (Fig 7). When constraining the column or row sums, the total number of ASOs a gesture was assigned to (row sum preserve) or gestures an ASO was assigned to (col. sum preserve) in a permutation was constrained to that of the original chimpanzee and bonobo matrices, though the actual assignment is random. For example, under the row sum preserve method, the row “Object Shake” would have exactly 7 1s for any permutation of the chimpanzee matrix and a single 1 for the bonobo matrix, as in the original data (raw data and species matrices in S2 Data). We can be confident that the similarity of the gesture matrices for the 2 species is greater than expected by chance assignment of gestures to ASOs, as defined by the randomisation procedure.

Fig 6. The overlap in gesture-to-ASO assignment between chimpanzees and bonobos (S1 and S2 Data).

White cells correspond to gesture–ASO assignments absent in both species, green cells correspond to gesture–ASO assignments only present in chimpanzees, blue cells correspond to gesture–ASO assignments only present in bonobos, and black cells correspond to gesture–ASO assignments present in both species. ASO, Apparently Satisfactory Outcome.

https://doi.org/10.1371/journal.pbio.2004825.g006

Fig 7. The frequencies of the total number of matches between species gesture matrices achieved by 10,000 iterations of the permutation test using 4 different constraints on matrix permutation (S1 and S2 Data).

From bottom to top: unconstrained randomisation of assignments (No Constraints, grey), preservation of only the number of ASOs assigned to each gesture (Row Sum Pres., yellow), preservation of only the number of gestures assigned to each ASO (Col. Sum Pres., purple), and preservation of the number of gestures assigned to each ASO and the number of ASOs assigned to each gesture (Row & Col. Sum Pres., orange). The total number of matches in the original gesture matrices is given by the red vertical line. ASO, Apparently Satisfactory Outcome.

https://doi.org/10.1371/journal.pbio.2004825.g007

We further explored the randomisation procedure, in order to find the limits of its application to communication, by repeating the randomisation process on subsets of our data. Specifically, we examined the effects of the available number of observed gestures on the results of our analysis by subsetting our original data to include an incrementally increasing number of gestures, from 4 to the maximum 21 available for use. The probability of generating randomised gesture matrices, using any of the 4 different constraint sets described above, that are more similar than the original gesture matrices remains very low (<0.05) as long as at least 8 or more of the gestures are included in the randomisation (Fig 8). The fact that such a strong signal of similarity between the 2 species’ gesture matrices exists with considerably less data lends weight to the robustness of our main result.

Fig 8. The probability of generating randomised gesture matrices that are more similar than those observed when a random subset of available gestures is used (S1 and S2 Data).

Each line corresponds to 1 of the 4 different matrix randomisation constraints: preservation of the number of gestures assigned to each ASO and the number of ASOs assigned to each gesture (Row & Col. Sum Pres., orange), preservation of only the number of ASOs assigned to each gesture (Row Sum Pres., yellow), preservation of only the number of gestures assigned to each ASO (Col. Sum Pres., purple) and unconstrained randomisation of assignments (No Constraints, grey). ASO, Apparently Satisfactory Outcome.

https://doi.org/10.1371/journal.pbio.2004825.g008

Discussion

Bonobos intentionally deploy gestures to achieve at least 14 different intended outcomes—12 that initiate or develop an activity and 2 that stop it. They use gestures to request things (such as food) and to initiate co-locomotion, grooming, and sex. Because the gestures are intentionally produced, meeting widely accepted criteria for intentional communication [18], these outcomes are not only the gestures’ ‘functions’—they are their ‘meanings’ [20,28]. Moreover, bonobo gesture types have distinct (sets of) meanings. Almost all gesture types achieve a different distribution of ASOs to the average distribution, showing distinct aims. Object shake, the 1 exception, may have failed to show a distinctive pattern because of its small sample size and because it is primarily used for sex and grooming—2 behaviours that numerically dominate bonobo gesture instances and thus contribute substantially to the average distribution. Overall, we conclude that bonobos are using gestures to achieve distinct outcomes, as has also been found for chimpanzees [28]. About half of bonobo gestures have only a single meaning, while the others have 2 or more meanings. Words in human language can have a single meaning or polysemous meanings, and this poses no problem for the recipient in deciphering the signaller’s intended meaning. Future research (with an expansive enough dataset) can explore how bonobo recipients appear able to correctly interpret the meaning of apparently ambiguous gesture types, perhaps by including analysis of facial expressions, gesture sequences, or local situational context.

Having catalogued the meanings of bonobo gestures, we then compared them with the meanings of chimpanzee gestures, finding evidence of their similarity. Across 10,000 random permutations of the gesture matrices, we failed to generate a single pair that were more similar than the observed data, implying a negligible probability that such similarity arose by chance. This was the case even under the most conservative constraints, where randomised matrices were generated maintaining the number of assignments in both columns and rows as the original data. Our findings remained robust, with clear similarities found between bonobo and chimpanzee meanings, even with subsets of our main gesture matrices, down to a minimum of just 8 (of the 21 total) gesture types. In the future, researchers would ideally be able to use these methods to compare the meaning of gesture repertoires among a range of primate species, determining whether more closely related species have more similar gesture repertoires.

The bonobo and chimpanzee gestural repertoires—that is, the physical form of the gestures—overlap by 88%–96% [30]. We now know that there is also a large overlap in the intended outcomes achieved by these shared gesture types in bonobos and chimpanzees. Whilst biological inheritance is one possible explanation for this overlap, we recognise that similar gestures and meanings could emerge through another acquisition mechanism, such as ontogenetic ritualization [35] or a version of imitation [36,37] (but see [17]). Bonobos and chimpanzees also experience similar environmental and anatomical constraints that may restrict the available gestures and desired outcomes. More research is needed to explore the precise mechanism behind the overlap of gesture meanings. It is probable that this pattern of gestures and meanings also applied to the last common ancestor we shared with the 2 Pan species. That is, it is likely that the Pan-Homo last common ancestor would have been able to use and understand most of the gestures of modern bonobos and chimpanzees; less likely, but not impossible, the elaborate shared Pan repertoire could have evolved after divergence from the hominin lineage. If we can now discover whether humans also share or understand these great ape gestures, those 2 possibilities can be resolved [38]. Doubtless, gestural communication was an important contributor in the evolution of language [39,40]; but it remains to be seen how gesture as it manifests in nonhuman great apes relates to gesture as it manifests in humans alongside thought and language. Understanding this ‘baseline’ of gestural communication may better enable us to predict those new meanings that development of protolanguage offered to human-specific ancestors, ultimately resulting in the evolution of language.

Materials and methods

Subjects

KEG collected data on 2 neighbouring communities of wild bonobos (E1 group and PE group) at Wamba, Luo Scientific Reserve, Democratic Republic of the Congo (00° 10' N, 22° 30' E). Habituation began for E1 group (n = 39) in 1976 (when it was still part of E group) and for PE group (n = 30) in 2010 (24, 25). At the beginning of this study, in 2014, the total sample size was 63 individuals, with 28 adults, 12 adolescents, 9 juveniles, and 14 infants. In 2015, the total sample size was 64 individuals, with 30 adults, 8 adolescents, 10 juveniles, and 16 infants. Bonobo age groups are divided into infant (<4 years), juvenile (4–7 years), adolescent (8–14 years), and adult (15+ years) [41].

CH collected data on 1 community of wild chimpanzees (Sonso community) at the Budongo Conservation Field Station, Uganda (1° 35’–1° 55’ N, 31° 18’–31° 42’ E). Habituation began for the Sonso community (n = 92) in 1990. At the beginning of this study, in 2007, the total sample size was 81 individuals, with 32 adults, 16 subadults, 15 juveniles, and 18 infants. Chimpanzee age groups are divided into infant (≤4 years), juvenile (5–9 years), subadult (male: 10–15 years, female: 10–14 years), and adult (male: 16+ years, female: 15+ years) [42].

Data collection and video coding

KEG conducted fieldwork from 4 February 2014 to 28 June 2014 and 19 January 2015 to 13 June 2015, following bonobos daily from approximately 05:50 to approximately 12:00, with a weekly schedule of 4 days on and 1 day off. Observation time amounted to 204 days. CH conducted fieldwork from 25 October 2007 to 8 March 2008, 13 April 2008 to 1 January 2009, and 5 May 2009 to 8 August 2009, following chimpanzees daily from approximately 07:30 to approximately 16:30, with a weekly schedule of 3 days on, 1 day off, 3 days on, 2 days off. Observation time amounted to 266 days.

We filmed social interactions using focal behaviour sampling, where the focal behaviour was whenever 2 or more individuals approached within 5 m of each other. We chose this criterion to ensure that any gestures preceding social interactions were recorded. KEG recorded video footage with a Panasonic HDC-SD90 video camera, using the 3-second pre-record feature to increase the likelihood of catching the gestures in time. CH recorded onto MiniDV tape using a Sony Handycam (DCR-HC-55). We imported video footage each day, labelled it, and entered it into a clip directory in FileMaker Pro.

FileMaker Pro was also used for video coding. Each gesture instance (that is, a single gesture) was coded in a separate sheet, with the following information: signaller, recipient, signaller age and sex, recipient age and sex, gesture type, part of sequence, audience checking, response waiting, persistence, recipient response, and ASO.

The signaller is the individual who produces the gesture, and the recipient is the individual who is the target of the gesture. The gesture type is a category comprising physically similar gesture instances, where gesture instances are grouped by body part and action. A complete list of gesture types is described in [20], with additional bonobo gesture types found in [19]. A sequence is a series of gesture instances separated by <1 s and produced by 1 individual. Audience checking, response waiting, and persistence are all criteria for intentionality. For audience checking, we reported whether or not the signaller turned to face the recipient; for response waiting, whether or not they paused for >1 s after gesturing; and for persistence, whether or not they continued to gesture. To be included in analyses, we required that each gesture instance meet at least 1 of these criteria for intentionality.

Recipient response was categorical: No response, ASO, Gesture (if the recipient responded with a gesture), or Unknown. To analyse meaning, we only used gesture instances where the recipient responded with an ASO. For a gesture to be assigned an ASO, we required that the recipient react to the gesture sequence with an ASO (that is, a response that satisfies the signaller shown by cessation of gesturing). ASO was then the specific outcome by the recipient.

KEG coded all the bonobo video footage and, to test interobserver reliability, CH coded 100 gesture instances for several of the aforementioned categories: gesture type, audience checking, persistence, and signaller apparently satisfied. Cohen’s kappa for interobserver reliability of these variables was 0.87 (almost perfect), 0.56 (moderate), 0.70 (substantial), and 0.63 (substantial), respectively. CH coded all of the chimpanzee video footage; interobserver reliability was assessed in their 2011 paper [31], with another experienced coder coding 50 gesture instances for directedness, recipient attentional state, and gesture type (Cohen’s kappa: directedness, κ = 0.69 [substantial]; recipient attentional state, κ = 0.63 [substantial]; gesture type, κ = 0.86 [almost perfect]).

Analysis of bonobo gesture meaning

We recorded 4,256 intentionally produced gesture instances for wild bonobos, but we only analysed the 2,463 gesture instances (including those in sequences) that successfully achieved an ASO. We then excluded gestures used in play (231 instances), because gestures used playfully do not necessarily carry the same meaning they would otherwise, and including them would risk masking their normal meaning. All statistical analyses were conducted in R 3.2.3.

In accordance with Hobaiter & Byrne 2014 [28], we used a series of ANOVAs to analyse whether the specific distribution of ASOs for a gesture type differed from the average distribution (the distribution of ASOs across all gesture instances). For direct comparability, we set the same parameters in our analyses to those used in their previous study [28]. To be included in parametric analyses, we required that each gesture type achieve an ASO at least 3 times (per individual) by at least 3 individuals (we analysed 1,896 gesture instances; 15 gesture types were suitable for this analysis, and 51 individuals contributed data). Then, we converted the number of instances a gesture type achieved any 1 ASO into a proportion of the total number of gesture instances in which an individual used that gesture type. We also calculated the average distribution by converting the number of instances in which all gesture instances achieved each ASO into a proportion of the total number of gesture instances. For values of 0 or 1, we converted them in accordance with Snedecor and Cochran (0 → 1/(4N) and 1 → 1-(1/(4N)), where N is the total number of instances for that gesture type) [43]. Finally, to calculate how the specific distribution deviated from the average distribution, we subtracted the average from the specific distribution. We then conducted the ANOVA with this resulting deviation as the dependent variable, ASO as the independent variable, and signaller identity as a random effect. P values of < 0.05 show that the deviation of the specific from the average distribution is significant.
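
To make the procedure concrete, here is a minimal Python sketch of the deviation computation just described; the toy records and variable names are ours, not the study's data or code, and the subsequent ANOVA on the deviations (ASO as fixed factor, signaller identity as a random effect) is not shown. We apply the 0/1 adjustment using N as the total number of instances for the gesture type, as in the text.

```python
from collections import Counter, defaultdict

# Hypothetical records of the form (signaller, gesture_type, ASO); toy data only.
instances = [
    ("FA", "Reach", "Acquire object/food"),
    ("FA", "Reach", "Acquire object/food"),
    ("MB", "Reach", "Initiate grooming"),
    ("MB", "Touch", "Contact"),
    # ... thousands more in the real data set
]

ASOS = sorted({aso for _, _, aso in instances})
total = len(instances)

# Average distribution: proportion of ALL gesture instances achieving each ASO.
average = {aso: sum(1 for _, _, a in instances if a == aso) / total for aso in ASOS}

# Total instances per gesture type (the N used in the 0/1 adjustment).
n_per_gesture = Counter(g for _, g, _ in instances)

def adjust(p, n):
    """Snedecor & Cochran adjustment: 0 -> 1/(4N), 1 -> 1 - 1/(4N)."""
    if p == 0.0:
        return 1.0 / (4 * n)
    if p == 1.0:
        return 1.0 - 1.0 / (4 * n)
    return p

# Specific distribution per signaller and gesture type, and its deviation
# from the average; these rows would feed the mixed-effects ANOVA.
uses = defaultdict(list)
for signaller, gesture, aso in instances:
    uses[(signaller, gesture)].append(aso)

rows = []
for (signaller, gesture), asos in uses.items():
    counts = Counter(asos)
    for aso in ASOS:
        p = adjust(counts[aso] / len(asos), n_per_gesture[gesture])
        rows.append({"signaller": signaller, "gesture": gesture,
                     "aso": aso, "deviation": p - average[aso]})
```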

Randomisation procedure for bonobo–chimpanzee comparison

The relationships between gestures and ASOs for each species were represented as a matrix in which each row corresponded to a possible gesture and each column corresponded to 1 of the possible ASOs. A ‘1’ in this gesture matrix indicated that the associated gesture was observed to precede the associated ASO in the corresponding species. The criterion for inclusion was that a gesture type must achieve the given ASO at least 2 times and by a minimum of 2 individuals (that is, ape individual A uses it once and ape individual B uses it once). Note that criteria for the previous ANOVA were necessarily strict to meet the requirements for parametric analyses. In comparing the communication of 2 species that differ markedly in social behaviour, it is important not to mistake differences in the frequency of use of signals, which are to be expected, for genuine differences in communication system. To avoid that error, we deliberately adopted a looser criterion so that subtle differences in the bonobo and chimpanzee repertoires could be detected but not confused with spurious differences in usage frequency.

A ‘0’ in the matrix indicated that such an association was not observed. In order to deal with the few cases where we had insufficient data for 1 or the other species, we defined the possible gestures as the intersection of all gestures used (n = 22) and possible ASOs as the intersection of all ASOs observed across both species (m = 11). Thus, the dimensions of the gesture matrices were the same for both species (n × m), and each row and column had at least one ‘1’. We defined the similarity between 2 gesture matrices simply as the sum of all matching corresponding matrix entries, be they 0 or 1.
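
As a concrete illustration of this matrix representation and similarity score, here is a minimal Python sketch; the record format and function names are ours, not the paper's code.

```python
import numpy as np

def build_matrix(records, gestures, asos, min_instances=2, min_individuals=2):
    """records: iterable of (individual, gesture, aso) observations.
    Returns a |gestures| x |asos| 0/1 matrix: 1 if the gesture achieved the
    ASO at least min_instances times and by at least min_individuals."""
    m = np.zeros((len(gestures), len(asos)), dtype=int)
    for i, g in enumerate(gestures):
        for j, a in enumerate(asos):
            users = [ind for ind, gg, aa in records if gg == g and aa == a]
            if len(users) >= min_instances and len(set(users)) >= min_individuals:
                m[i, j] = 1
    return m

def similarity(m1, m2):
    """Number of matching corresponding entries, be they 0 or 1."""
    return int((m1 == m2).sum())
```

Applying build_matrix to each species' records and then similarity to the two resulting matrices gives the observed score that the randomisation test below compares against.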

Using a randomisation procedure, we tested the null hypothesis that the similarity between the 2 species would be the same under a random assignment of gestures to ASOs for each species [44]. To perform the randomisation test, we iteratively generated new gesture matrices for each species by randomly permuting the original gesture matrices and calculating the similarity between the 2 resultant matrices, generating a null distribution for similarity over 10,000 iterations. We used 4 different methods of permutation, each imposing different constraints on the possible matrices that could be generated. The simplest method simply randomly shuffled the values in each matrix without any constraints. The row sum preserve method shuffled the entries in each row of a matrix, thus preserving the number of ASOs assigned to each gesture for each species. The column sum preserve method shuffled the entries in each column, thus preserving the number of gestures allocated to each ASO. Finally, the row and column sum preserve method maintained both the number of gestures allocated to ASOs and ASOs allocated to gestures and was performed using the “tswap” algorithm in the vegan package in R, which implements a swap algorithm to generate new matrices that preserve row and column sums whilst sampling the distribution of possible matrices with equal probability [44]. From the null distribution of similarity values generated by each method, we calculated a corresponding P value as the proportion of iterations of the randomisation procedure in which the resultant similarity score was equal to or exceeded that of the original data.
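
The row-and-column-preserving permutation in the study uses vegan's 'tswap' algorithm in R; the sketch below is an illustrative Python/numpy re-implementation of the four permutation schemes and the P value calculation, not the authors' code. The checkerboard-swap loop is a simplified stand-in for tswap, and all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_all(m):
    """No constraints: shuffle every cell of the matrix."""
    flat = m.flatten()
    rng.shuffle(flat)
    return flat.reshape(m.shape)

def shuffle_rows(m):
    """Row sum preserve: shuffle entries within each row."""
    out = m.copy()
    for row in out:
        rng.shuffle(row)
    return out

def shuffle_cols(m):
    """Column sum preserve: shuffle entries within each column."""
    return shuffle_rows(m.T).T

def checkerboard_swap(m, n_attempts=2000):
    """Row & column sum preserve: attempt swaps of 2x2 'checkerboard'
    submatrices ([[1,0],[0,1]] <-> [[0,1],[1,0]]), which leave every row
    and column sum unchanged. A simplified stand-in for vegan's 'tswap'."""
    out = m.copy()
    n_rows, n_cols = out.shape
    for _ in range(n_attempts):
        r = rng.choice(n_rows, size=2, replace=False)
        c = rng.choice(n_cols, size=2, replace=False)
        sub = out[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            out[np.ix_(r, c)] = sub[::-1]  # flip the 2x2 checkerboard
    return out

def p_value(bonobo, chimp, permute, n_iter=10_000):
    """Proportion of permuted matrix pairs at least as similar as the data."""
    observed = int((bonobo == chimp).sum())
    null = [int((permute(bonobo) == permute(chimp)).sum()) for _ in range(n_iter)]
    return sum(s >= observed for s in null) / n_iter
```

Calling p_value with permute=checkerboard_swap corresponds to the most conservative comparison; swapping in shuffle_all, shuffle_rows or shuffle_cols gives the other three constraint sets.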

We then examined the effects of the available number of observed gestures on the results of our analysis by subsetting our original data to include an incrementally increasing number of gestures, c, from 4 to the maximum 21 available to use. For each gesture count, c, and each iteration, we randomly sampled c gestures (rows) from both species’ gesture matrices to form the data subset for that iteration. Using this subset of the original data, randomised matrices were generated and the resultant similarity compared to that of the nonrandomised, subsetted matrices. A probability value was calculated as the proportion of iterations in which the randomised subset matrices were of equal or greater similarity than their nonrandomised subset counterparts. This probability value allowed us to test the null hypothesis that the observed similarity was the same as would be expected by chance, given that only c randomly selected gestures were observed. When c = 21, the maximum number of gestures available, any subset was the same as the original matrix, so the randomisation test was equivalent to that described in the main text.
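
A hedged sketch of this subsetting step, assuming helpers like those in the previous sketch (the permute argument would be one of the permutation functions above):

```python
import numpy as np

rng = np.random.default_rng(1)

def subset_p_value(bonobo, chimp, permute, c, n_iter=10_000):
    """For each iteration: sample c gestures (rows) from both matrices,
    randomise the subset with `permute`, and check whether the randomised
    pair is at least as similar as the non-randomised subset."""
    n_rows = bonobo.shape[0]
    hits = 0
    for _ in range(n_iter):
        rows = rng.choice(n_rows, size=c, replace=False)
        b_sub, c_sub = bonobo[rows], chimp[rows]
        observed = int((b_sub == c_sub).sum())
        randomised = int((permute(b_sub) == permute(c_sub)).sum())
        if randomised >= observed:
            hits += 1
    return hits / n_iter
```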

It should be noted that this randomisation test does not rule out the possibility of another primate species having a more similar gesture–ASO matrix to chimpanzees or bonobos than they have to each other, though, to the best of our knowledge, no such data currently exist in order to test this. The procedure simply compares the similarity of the 2 species to the similarity of hypothetical gesture matrices generated under the above set of constraints.

Acknowledgments

The authors offer thanks to Tetsuya Sakamaki, Wamba Committee for Bonobo Research, and Centre de Recherche en Écologie et Foresterie for supporting fieldwork at Wamba; to Heungjin Ryu and Nahoko Tokuyama for additional video footage; and to Wamba’s field assistants for making fieldwork possible. We thank all the staff of the Budongo Conservation Field Station. Research permits were kindly granted by the Ministère de la Recherche Scientifique et Technologie, Democratic Republic of the Congo, and by the Uganda National Council for Science and Technology, The Uganda Wildlife Authority, the President’s Office, and the National Forestry Authority, Uganda.

References

1. Seyfarth RM, Cheney DL, Marler P. Monkey responses to three different alarm calls: evidence of predator classification and semantic communication. Science. 1980 Nov 14;210(4471):801–3. pmid:7433999
2. Cäsar C, Zuberbühler K, Young RJ, Byrne RW. Titi monkey call sequences vary with predator location and type. Biol Lett. 2013 Oct 23;9(5):20130535. pmid:24004492
3. Arnold K, Zuberbühler K. Meaningful call combinations in a non-human primate. Curr Biol. 2008 Mar 11;18(5):R202–3. pmid:18334192
4. Manser MB. The acoustic structure of suricates’ alarm calls varies with predator type and the level of response urgency. Proc R Soc B Biol Sci. 2001 Nov 22;268(1483):2315–24. pmid:11703871
5. Schlenker P, Chemla E, Arnold K, Zuberbühler K. Pyow-hack revisited: Two analyses of Putty-nosed monkey alarm calls. Lingua. 2016 Feb 29;171:1–23.
6. Templeton CN, Greene E, Davis K. Allometry of alarm calls: black-capped chickadees encode information about predator size. Science. 2005 Jun 24;308(5730):1934–7. pmid:15976305
7. Grice HP. Utterer’s meaning and intention. Philos Rev. 1969 Apr 1;78(2):147–77.
8. Grice HP. Meaning. Philos Rev. 1957 Aug 1;66(3):377–88.
9. Dennett DC. Intentional systems in cognitive ethology: The “Panglossian paradigm” defended. Behav Brain Sci. 1983 Sep;6(3):343–55.
10. Rendall D, Owren MJ, Ryan MJ. What do animal signals mean? Anim Behav. 2009 Aug 1;78(2):233–40.
11. Leavens DA, Russell JL, Hopkins WD. Intentionality as measured in the persistence and elaboration of communication by chimpanzees (Pan troglodytes). Child Dev. 2005 Jan 1;76(1):291–306. pmid:15693773
12. Tomasello M, Gust D, Frost GT. A longitudinal investigation of gestural communication in young chimpanzees. Primates. 1989 Jan 1;30(1):35–50.
13. Cartmill EA, Byrne RW. Orangutans modify their gestural signaling according to their audience’s comprehension. Curr Biol. 2007 Aug 7;17(15):1345–8. pmid:17683939
14. Bohn M, Call J, Tomasello M. The role of past interactions in great apes’ communication about absent entities. J Comp Psychol. 2016 Nov;130(4):351. pmid:27690504
15. Liebal K, Pika S, Call J, Tomasello M. To move or not to move: How apes adjust to the attentional state of others. Interact Stud. 2004 Jan 1;5(2):199–219.
16. Call J, Tomasello M. The Gestural Communication of Apes and Monkeys. Mahwah, NJ: Lawrence Erlbaum Associates; 2007.
17. Byrne RW, Cartmill E, Genty E, Graham KE, Hobaiter C, Tanner J. Great ape gestures: intentional communication with a rich set of innate signals. Anim Cogn. 2017 Jul 1;20(4):755–69. pmid:28502063
18. Townsend SW, Koski SE, Byrne RW, Slocombe KE, Bickel B, Boeckle M, et al. Exorcising Grice’s ghost: an empirical approach to studying intentional communication in animals. Biol Rev. 2017 Aug 1;92(3):1427–33. pmid:27480784
19. Scott-Phillips TC. Meaning in animal and human communication. Anim Cogn. 2015 May 1;18(3):801–5. pmid:25647173
20. Moore R. Meaning and ostension in great ape gestural communication. Anim Cogn. 2016 Jan 1;19(1):223–31. pmid:26223212
21. Schel AM, Townsend SW, Machanda Z, Zuberbühler K, Slocombe KE. Chimpanzee alarm call production meets key criteria for intentionality. PLoS ONE. 2013;8(10):e76674. pmid:24146908
22. Schel AM, Machanda Z, Townsend SW, Zuberbühler K, Slocombe KE. Chimpanzee food calls are directed at specific individuals. Anim Behav. 2013 Nov 30;86(5):955–65.
23. Crockford C, Wittig RM, Mundry R, Zuberbühler K. Wild chimpanzees inform ignorant group members of danger. Curr Biol. 2012 Jan 24;22(2):142–6. pmid:22209531
24. Zuberbühler K, Cheney DL, Seyfarth RM. Conceptual semantics in a nonhuman primate. J Comp Psychol. 1999 Mar;113(1):33.
25. Seyfarth RM, Cheney DL, Bergman T, Fischer J, Zuberbühler K, Hammerschmidt K. The central importance of information in studies of animal communication. Anim Behav. 2010 Jul 1;80(1):3–8.
26. Hobaiter C, Leavens DA, Byrne RW. Deictic gesturing in wild chimpanzees (Pan troglodytes)? Some possible cases. J Comp Psychol. 2014 Feb;128(1):82–87. pmid:24040760
27. Pollick AS, de Waal FBM. Ape gestures and language evolution. Proc Natl Acad Sci. 2007 May 8;104(19):8184–9. pmid:17470779
28. Hobaiter C, Byrne RW. The meanings of chimpanzee gestures. Curr Biol. 2014 Jul 21;24(14):1596–600. pmid:24998524
29. Kano T, Nishida T. Bilia as an authentic vernacular name for Pan paniscus. Pan Africa News. 1999 June;6(1):1–3.
30. Graham KE, Furuichi T, Byrne RW. The gestural repertoire of the wild bonobo (Pan paniscus): a mutually understood communication system. Anim Cogn. 2017 Mar 1;20(2):171–7. pmid:27632158
31. Hobaiter C, Byrne RW. The gestural repertoire of the wild chimpanzee. Anim Cogn. 2011 Sep 1;14(5):745–67. pmid:21533821
32. Liebal K, Call J, Tomasello M. Use of gesture sequences in chimpanzees. Am J Primatol. 2004 Dec 1;64(4):377–96. pmid:15580580
33. Pika S, Liebal K, Tomasello M. Gestural communication in subadult bonobos (Pan paniscus): repertoire and use. Am J Primatol. 2005 Jan 1;65(1):39–61. pmid:15645456
34. Hobaiter C, Byrne RW. Serial gesturing by wild chimpanzees: its nature and function for communication. Anim Cogn. 2011 Nov 1;14(6):827–38. pmid:21562816
35. Halina M, Rossano F, Tomasello M. The ontogenetic ritualization of bonobo gestures. Anim Cogn. 2013 Jul 1;16(4):653–66. pmid:23370783
36. Byrne RW, Tanner JE. Gestural imitation by a gorilla: evidence and nature of the capacity. Int J Psychol Psychol Ther. 2006;6(2):215–231.
37. Meltzoff AN, Moore MK. Imitation of facial and manual gestures by human neonates. Science. 1977 Oct 7;198(4312):75–8. pmid:17741897
38. Byrne RW, Cochet H. Where have all the (ape) gestures gone? Psychon Bull Rev. 2017 Feb 1;24(1):68–71. pmid:27368621
39. Corballis MC. Language evolution: a changing perspective. Trends Cogn Sci. 2017 Apr 1;21(4):229–36. pmid:28214132
40. Perlman M. Debunking two myths against vocal origins of language: Language is iconic and multimodal to the core. Interact Stud. 2017 Dec 8;18(3):376–401.
41. Hashimoto C. Context and development of sexual behavior of wild bonobos (Pan paniscus) at Wamba, Zaire. Int J Primatol. 1997 Feb 1;18(1):1–21.
42. Reynolds V. The Chimpanzees of the Budongo Forest. Oxford: Oxford University Press; 2006. https://doi.org/10.1093/acprof:oso/9780198515463.001.0001
43. Snedecor GW, Cochran WG. Statistical Methods. 7th ed. Iowa: Iowa State University Press; 1989.
44. Miklós I, Podani J. Randomization of presence-absence matrices: Comments and new algorithms. Ecology. 2004 Jan 1;85(1):86–92.

Extracting Structured Data from Recipes Using Conditional Random Fields


In 1994, a member of the newsroom named Rich Meislin wrote an internal memo about the value of “computer-based services” that The Times could offer its readers. One of the proposed services was RecipeFinder: a database of recipes “searchable by key ingredient” and “type of cuisine.” It took the company almost 20 years, several failed starts and a massive data cleanup effort, but the idea of cooking as a “digital service” (read: web app) is finally a reality.

NYT Cooking launched last fall with over 17,000 recipes that users can search, save, rate and (coming soon!) comment on. The product was designed and built from scratch over the course of a year, but it relies heavily on nearly six years of effort to clean, catalogue and structure our massive recipe archive.

We now have a treasure trove of structured data to play with. As of yesterday, the database contained $17,507$ recipes, $67,578$ steps, $142,533$ tags and $171,244$ ingredients broken down by name, quantity and unit.

In practical terms, this means that if you make Melissa Clark’s pasta with fried lemons and chile flakes recipe, we know how many cups of Parmigiano-Reggiano you need, how long it will take you to cook and how many people you can serve. That finely structured data, while invisible to the end user, has allowed us to quickly iterate on designs, add granular HTML markup to improve our SEO, build a customized search engine and spin up a simple recipe recommendation system. It’s not an exaggeration to say that the development of NYT Cooking would not have been possible without it.

Until recently, the collection and maintenance of this structured data was a completely manual process. For years, overnight contractors have entered recipes, dropdown by dropdown, into a gray and white web form that lives in our content management system (CMS). Since the database breaks down each ingredient by name, unit, quantity and comment, an average recipe requires over 50 fields, and that number can climb above 100 for more complicated recipes.

I long suspected that the manual process of entering recipes into the database could be replaced with an algorithmic solution. The field of Natural Language Processing (NLP) has developed powerful algorithms to solve similar tasks over the past decade. If a computer can identify the part of speech of each word in a sentence, it should be able to identify an ingredient quantity from an ingredient phrase.

For an internal hack week last summer, a colleague and I decided to test our faith in statistical NLP to automatically convert unstructured recipe text into structured data. A few months of on-and-off work later, our recipe parser is now fully integrated into our CMS.

The most challenging aspect of the recipe parsing problem is the task of predicting ingredient components from the ingredient phrases. Recipes display ingredients like “1 tablespoon fresh lemon juice,” but the database stores ingredients broken down by name (“lemon juice”), quantity (“1”), unit (“tablespoon”) and comment (“fresh”). There is no regular expression clever enough to identify these labels from the ingredient phrases.

Example

Ingredient Phrase:   1          tablespoon   fresh     lemon   juice
Ingredient Labels:   QUANTITY   UNIT         COMMENT   NAME    NAME

This type of problem is referred to as a structured prediction problem because we are trying to predict a structure — in this case a sequence of labels — rather than a single label. Structured prediction tasks are difficult because the choice of a particular label for any one word in the phrase can change the model’s prediction of labels for the other words. The added model complexity allows us to learn rich patterns about how words in a sentence interact with the words and labels around them.

We chose to use a discriminative structured prediction model called a linear-chain conditional random field (CRF), which has been successful on similar tasks such as part-of-speech tagging and named entity recognition.

The basic set up of the problem is as follows:

Let $\{x^1, x^2, …, x^N\}$ be the set of ingredient phrases, e.g. {“$½$ cups whole wheat flour”, “pinch of salt”, …} where each $x^i$ is an ordered list of words. Associated with each $x^i$ is a list of tags, $y^i$.

For example, if $x^i = [x_1^i, x_2^i, x_3^i] = [\text{“pinch”}, \text{ “of”}, \text{ “salt”}]$ then $y^i = [y_1^i, y_2^i, y_3^i]= [\text{UNIT}, \text{ UNIT}, \text{ NAME}]$. A tag is either a NAME, UNIT, QUANTITY, COMMENT or OTHER (i.e., none of the above).

The goal is to use data to learn a model that can predict the tag sequence for any ingredient phrase we throw at it, even if the model has never seen that ingredient phrase before. We approach this task by modeling the conditional probability of a sequence of tags given the input, denoted $p(\text{tag sequence} \mid \text{ingredient phrase})$ or using the above notation, $p(y \mid x)$.
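
For concreteness, here is a toy Python representation of labelled training pairs under the notation above; the phrases and tags come from the examples in the text, while the variable names are ours.

```python
# One ingredient phrase x^i as an ordered list of words,
# with its associated tag sequence y^i.
x_i = ["pinch", "of", "salt"]
y_i = ["UNIT", "UNIT", "NAME"]

TAGS = ["NAME", "UNIT", "QUANTITY", "COMMENT", "OTHER"]

# A training set is a list of such (x, y) pairs.
training_data = [
    (x_i, y_i),
    (["1", "tablespoon", "fresh", "lemon", "juice"],
     ["QUANTITY", "UNIT", "COMMENT", "NAME", "NAME"]),
]
```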

The process of learning that probability model is described in detail below, but first imagine that someone handed us the perfect probability model $p(y \mid x)$ that returns the “true” probability of a sequence of labels given an ingredient phrase. We want to use $p(y \mid x)$ to discover (or infer) the most probable label sequence.

Armed with this model, we could predict the best sequence of labels for an ingredient phrase by simply searching over all tag sequences and returning the one that has the highest probability.

For example, suppose our ingredient phrase is “pinch of salt.” Then we need to score all the possible sequences of $3$ tags.

$$
p(\text{UNIT UNIT UNIT} \mid \text{“pinch of salt”}) \\
p(\text{QUANTITY UNIT UNIT} \mid \text{“pinch of salt”})\\
p(\text{UNIT QUANTITY UNIT} \mid \text{“pinch of salt”})\\
p(\text{UNIT UNIT QUANTITY} \mid \text{“pinch of salt”})\\
p(\text{UNIT QUANTITY QUANTITY} \mid \text{“pinch of salt”}) \\
p(\text{QUANTITY QUANTITY QUANTITY} \mid \text{“pinch of salt”}) \\
p(\text{UNIT QUANTITY NAME} \mid \text{“pinch of salt”}) \\
\vdots
$$

While this seems like a simple problem, it can quickly become computationally unpleasant to score all of the $|\text{tags}|^{|\text{words}|}$ sequences**. The beauty of the linear-chain CRF model is that it makes some conditional independence assumptions that allow us to use dynamic programming to efficiently search the space of all possible label sequences. In the end, we are able to find the best tag sequence in a time that is quadratic in the number of tags and linear in the number of words ($|\text{tags}|^2 * |\text{words}|$).
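
To illustrate that dynamic program (this is a sketch, not the production system), here is a minimal Viterbi decoder in Python. It assumes a hypothetical log_potential(prev_tag, tag, words, t) scoring function standing in for log ψ, and returns the highest-scoring tag sequence in time proportional to |tags|^2 * |words|.

```python
def viterbi(words, tags, log_potential):
    """Find the most probable tag sequence under a linear-chain CRF.

    log_potential(prev_tag, tag, words, t) should return log psi for
    assigning `tag` at position t given `prev_tag` at position t-1
    (prev_tag is None at t == 0). Hypothetical interface, for illustration.
    """
    T = len(words)
    # best[t][tag] = best score of any sequence for words[:t+1] ending in tag
    best = [{} for _ in range(T)]
    back = [{} for _ in range(T)]

    for tag in tags:
        best[0][tag] = log_potential(None, tag, words, 0)

    for t in range(1, T):
        for tag in tags:
            scores = {prev: best[t - 1][prev] + log_potential(prev, tag, words, t)
                      for prev in tags}
            back[t][tag] = max(scores, key=scores.get)
            best[t][tag] = scores[back[t][tag]]

    # Trace back from the best final tag to recover the full sequence.
    last = max(best[T - 1], key=best[T - 1].get)
    sequence = [last]
    for t in range(T - 1, 0, -1):
        last = back[t][last]
        sequence.append(last)
    return list(reversed(sequence))
```

Calling viterbi(["pinch", "of", "salt"], TAGS, log_potential) would return a three-tag sequence such as ["UNIT", "UNIT", "NAME"] if the learned potentials favour that labelling.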

So given a model $p(y \mid x)$ that encodes whether a particular tag sequence is a good fit for an ingredient phrase, we can return the best tag sequence. But how do we learn that model?

A linear-chain CRF models this probability in the following way:

$$
\begin{equation}
p( y \mid x ) \propto \prod_{t=1}^T \psi(y_t, y_{t-1}, x)
\end{equation}
$$

where $T$ is the number of words in the ingredient phrase $x$.

Let’s break this equation down in English.

The above equation introduces a “potential” function $\psi$ that takes two consecutive labels, $y_t$ and $y_{t-1}$, and the ingredient phrase, $x$. We construct $\psi$ so that it returns a large, non-negative number if the labels $y_t$ and $y_{t-1}$ are a good match for the $t^{th}$ and ${t-1}^{th}$ words in the sentence respectively, and a small, non-negative number if not. (Remember that probabilities must be non-negative.)

The potential function is a weighted average of simple feature functions, each of which captures a single attribute of the labels and words.

$$
\begin{equation}
\psi(y_t, y_{t-1}, x) = \exp\left( \sum_{k=1}^K w_k f_k(y_t, y_{t-1}, x) \right)
\end{equation}
$$

We often define feature functions to return either 0 or 1. Each feature function, $f_k(y_t, y_{t-1}, x)$, is chosen by the person who creates the model, based on what information might be useful to determine the relationship between words and labels. Some feature functions we used for this problem were:

$$
\begin{align*}
&f_1(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } x_t \text{ is capitalized and }y_t \text{ is NAME} \\
0 \text{ otherwise}
\end{array} \right.\\ \\
&f_2(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } x_t \text{ is “1/2” and } y_t \text{ is QUANTITY} \\
0 \text{ otherwise}
\end{array} \right. \\ \\
&f_3(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } x_t \text{ is “cup” and } y_t \text{ is QUANTITY} \\
0 \text{ otherwise}
\end{array} \right.\\ \\
&f_4(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } x_t \text{ is “flour” and } y_t \text{ is QUANTITY} \\
0 \text{ otherwise}
\end{array} \right.\\ \\
&f_5(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } x_t \text{ is a fraction and } y_t \text{ is QUANTITY} \\
0 \text{ otherwise}
\end{array} \right.\\ \\
&f_6(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } y_t \text{ is QUANTITY and } y_{t-1} \text{ is UNIT} \\
0 \text{ otherwise}
\end{array} \right.\\ \\
&f_7(y_t, y_{t-1}, x) = \left\{
\begin{array}{lr}
1 \text{ if } y_t \text{ is QUANTITY and } y_{t-1} \text{ is NAME} \\
0 \text{ otherwise}
\end{array} \right.\\
\end{align*}
$$

There is a feature function for every word/label pair and for every consecutive label pair, plus some hand-crafted functions. By modeling the conditional probability of labels given words the following way, we have reduced our task of learning $p(y \mid x)$ to the problem of learning “good” weights on each of the feature functions. By good, I mean that we want to learn large positive weights on features that capture highly likely patterns in the data, large negative weights on features that capture highly unlikely patterns in the data and small weights on features that don’t capture any patterns in the data.

For example, $f_2$ describes a likely pattern in the data (“$½$” is likely a quantity), $f_4$ describes an unlikely pattern in the data (the word “flour” is almost never a quantity) and $f_1$ doesn’t capture a common pattern (the ingredient phrases are almost always lowercased). In this case, we want $w_2$ to be a large positive number, $w_4$ to be a large negative number and $w_1$ to be close to 0.
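
To ground the notation, here is a small Python sketch of binary feature functions in the style of $f_1$, $f_2$ and $f_6$, together with the log-potential as a weighted sum of fired features. The weights are made-up values for illustration, not learned ones, and the interface matches the hypothetical log_potential assumed in the Viterbi sketch above.

```python
import math

def f1(prev_tag, tag, words, t):
    """1 if the word is capitalized and the tag is NAME."""
    return 1.0 if words[t][:1].isupper() and tag == "NAME" else 0.0

def f2(prev_tag, tag, words, t):
    """1 if the word is '1/2' and the tag is QUANTITY."""
    return 1.0 if words[t] == "1/2" and tag == "QUANTITY" else 0.0

def f6(prev_tag, tag, words, t):
    """1 if the tag is QUANTITY and the previous tag is UNIT."""
    return 1.0 if tag == "QUANTITY" and prev_tag == "UNIT" else 0.0

FEATURES = [f1, f2, f6]
WEIGHTS = [0.0, 2.5, 0.8]   # made-up weights, purely for illustration

def log_potential(prev_tag, tag, words, t):
    """log psi(y_t, y_{t-1}, x) = sum_k w_k * f_k(y_t, y_{t-1}, x)."""
    return sum(w * f(prev_tag, tag, words, t) for w, f in zip(WEIGHTS, FEATURES))

def potential(prev_tag, tag, words, t):
    """psi itself: exponentiate the weighted feature sum."""
    return math.exp(log_potential(prev_tag, tag, words, t))
```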

Due to properties of the model — chiefly, that the function is convex with respect to the weights — there is one best set of weights and we can find it using an iterative optimization algorithm. We used the CRF++ implementation to do the optimization and inference.

Results

Our model got $89$% sentence-level accuracy when trained on $130,000$ labeled ingredient phrases from the database. The data was too noisy for automatic evaluation, so we evaluated sentence-level accuracy by hand on a test set of $481$ examples.

Below are some examples of where we do well, where we do poorly, and where there is no clear correct answer. Recall that we are trying to predict NAME (NA), UNIT (UN), QUANTITY (QT), COMMENT (CO) and OTHER (OT); in the examples below, each span of words is followed by its label abbreviation in parentheses.

Truth: 1 [QUANTITY] | garlic clove [NAME] | , minced ( optional ) [OTHER]
Guess: 1 [QUANTITY] | garlic [NAME] | clove [UNIT] | , minced ( optional ) [OTHER]

This example is confusing for both our human annotators and the algorithm. We probably want “clove” to be part of the ingredient name instead of the unit, but we see both variations in our training data.

Truth: 2 [QUANTITY] | red onions , peeled and diced [NAME]
Guess: 2 [QUANTITY] | red onions [NAME] | , [OTHER] | peeled and diced [COMMENT]

Here is an example of the CRF correcting a human annotator’s error: “peeled and diced” should be part of the comment.

Truth: 4 [QUANTITY] | tablespoons [UNIT] | melted nonhydrogenated [OTHER] | margarine [NAME] | , melted coconut oil or canola oil [OTHER]
Guess: 4 [QUANTITY] | tablespoons [UNIT] | melted nonhydrogenated [OTHER] | margarine [NAME] | , melted [COMMENT] | coconut oil [NAME] | or canola [OTHER] | oil [NAME]

This ingredient phrase contains multiple ingredient names, which is a situation that is not accounted for in our database schema. We need to rethink the way we label ingredient parts to account for examples like this.

Truth: 1 [QUANTITY] | bunch [UNIT] | scallions [NAME] | , [OTHER] | trimmed and cut into 1/4-inch lengths [COMMENT]
Guess: 1 [QUANTITY] | bunch [UNIT] | scallions [NAME] | , [OTHER] | trimmed and cut into 1/4-inch lengths [COMMENT]

And sometimes everyone gets it right!

Takeaway

Extracting structured data from text is a common problem at The Times, and for 164 years the vast majority of this data wrangling (e.g. cataloging, tagging, associating) has been done manually. But there is an ever-increasing appetite from developers and designers for finely structured data to power our digital products and at some point, we will need to develop algorithmic solutions to help with these tasks. The recipe parser, which combines machine learning with our huge archive of labeled data, takes a first step towards solving this important problem. Email me at erica.greene@nytimes.com if you’d like more details about this project.

** We actually do BIO tagging, so there are $(2\,|\text{tags}|)^{|\text{words}|}$ sequences.
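BIO tagging splits each label into a “beginning” and an “inside” variant, roughly doubling the label set so that multi-word spans stay grouped. A tiny illustration (the exact tag strings are an assumption made for illustration):

```python
# "1/2 cup all-purpose flour": plain labels vs. BIO labels
words      = ["1/2",        "cup",    "all-purpose", "flour"]
plain_tags = ["QUANTITY",   "UNIT",   "NAME",        "NAME"]
bio_tags   = ["B-QUANTITY", "B-UNIT", "B-NAME",      "I-NAME"]
# B- marks the first token of a labeled span, I- marks a continuation,
# which is why the number of possible sequences grows to (2 * |tags|) ^ |words|.
```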

A Criminal Gang Used a Drone Swarm to Obstruct an FBI Hostage Raid

And that’s just one of the ways bad guys are putting drones to use, law enforcement officials say.

DENVER, Colorado — Last winter, on the outskirts of a large U.S. city, an FBI hostage rescue team set up an elevated observation post to assess an unfolding situation. Soon they heard the buzz of small drones — and then the tiny aircraft were all around them, swooping past in a series of “high-speed low passes at the agents in the observation post to flush them,” the head of the agency’s operational technology law unit told attendees of the AUVSI Xponential conference here. Result: “We were then blind,” said Joe Mazel, meaning the group lost situational awareness of the target. “It definitely presented some challenges.”

The incident remains “law enforcement-sensitive,” Mazel said Wednesday, declining to say just where or when it took place. But it shows how criminal groups are using small drones for increasingly elaborate crimes.

Mazel said the suspects had backpacked the drones to the area in anticipation of the FBI’s arrival. Not only did they buzz the hostage rescue team, they also kept a continuous eye on the agents, feeding video to the group’s other members via YouTube. “They had people fly their own drones up and put the footage to YouTube so that the guys who had cellular access could go to the YouTube site and pull down the video,” he said.

Mazel said counter surveillance of law enforcement agents is the fastest-growing way that organized criminals are using drones.

Some criminal organizations have begun to use drones as part of witness intimidation schemes: they continuously surveil police departments and precincts in order to see “who is going in and out of the facility and who might be co-operating with police,” he said.

Drones are also playing a greater role in robberies and the like. Beyond the well-documented incidence of house break-ins, criminal crews are using them to observe bigger target facilities, spot security gaps, and determine patterns of life: where the security guards go and when.

In Australia, criminal groups have begun using drones as part of elaborate smuggling schemes, Mazel said. The gangs will monitor port authority workers. If the workers get close to a shipping container that houses illegal substances or contraband, the gang will call in a fire, theft, or some other false alarm to draw off security forces.

Andrew Scharnweber, associate chief of U.S. Customs and Border Protection, described how criminal networks were using drones to watch Border Patrol officers, identify their gaps in coverage, and exploit them.

“In the Border Patrol, we have struggled with scouts, human scouts that come across the border. They’re stationed on various mountaintops near the border and they would scout … to spot law enforcement and radio down to their counterparts to go around us. That activity has effectively been replaced by drones,” said Scharnweber, who added that cartels are able to move small amounts of high-value narcotics across the border via drones with “little or no fear of arrest.”

Nefarious use of drones is likely to get worse before it gets better, according to several government officials who spoke on the panel. There is no easy or quick technological solution. While the U.S. military has effectively deployed drone-jamming equipment to the front lines in Syria and Iraq, most of these solutions are either unsuitable or have not been tested for use in American cities where they may interfere with cell phone signals and possibly the avionics of other aircraft, said Ahn Duong, the program executive officer at DHS’s homeland security, science and technology directorate.

The most recent version of the FAA reauthorization bill contains two amendments that could help the situation, according to Angela Stubblefield, the FAA’s deputy associate administrator in the office of security and hazardous materials safety. One would make it illegal to “weaponize” consumer drones.

The other — and arguably more important — amendment would require drones that fly beyond their operators’ line of sight to broadcast an identity allowing law enforcement to track and connect them to a real person.

“Remote identification is a huge piece” of cutting down on drone crime, Stubblefield said. “Both from a safety perspective… enabling both air traffic control and other UAS [unmanned aerial systems] to know where another is and enabling beyond line-of-sight operations. It also has an extensive security benefit to it, which is to enable threat discrimination. Remote ID connected to registration would allow you to have information about each UAS, who owns it, operates it, and thus have some idea what its intent is,” said Stubblefield.

But even if both amendments pass as part of the re-authorization, it will be some time before they take effect, so it will be the Wild West in America’s skies a while longer.

Why Silicon Valley can’t fix itself

Big Tech is sorry. After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.

Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.” Justin Rosenstein, an engineer who helped build Facebook’s “like” button and Gchat, regrets having contributed to technology that he now considers psychologically damaging, too. “Everyone is distracted,” Rosenstein says. “All of the time.”

Ever since the internet became widely used by the public in the 1990s, users have heard warnings that it is bad for us. In the early years, many commentators described cyberspace as a parallel universe that could swallow enthusiasts whole. The media fretted about kids talking to strangers and finding porn. A prominent 1998 study from Carnegie Mellon University claimed that spending time online made you lonely, depressed and antisocial.

In the mid-2000s, as the internet moved on to mobile devices, physical and virtual life began to merge. Bullish pundits celebrated the “cognitive surplus” unlocked by crowdsourcing and the tech-savvy campaigns of Barack Obama, the “internet president”. But, alongside these optimistic voices, darker warnings persisted. Nicholas Carr’s The Shallows (2010) argued that search engines were making people stupid, while Eli Pariser’s The Filter Bubble (2011) claimed algorithms made us insular by showing us only what we wanted to see. In Alone Together (2011) and Reclaiming Conversation (2015), Sherry Turkle warned that constant connectivity was making meaningful interaction impossible.

Still, inside the industry, techno-utopianism prevailed. Silicon Valley seemed to assume that the tools they were building were always forces for good – and that anyone who questioned them was a crank or a luddite. In the face of an anti-tech backlash that has surged since the 2016 election, however, this faith appears to be faltering. Prominent people in the industry are beginning to acknowledge that their products may have harmful effects.

Internet anxiety isn’t new. But never before have so many notable figures within the industry seemed so anxious about the world they have made. Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.

It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity. The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.

The hub of the new tech humanism is the Center for Humane Technology in San Francisco. Founded earlier this year, the nonprofit has assembled an impressive roster of advisers, including investor Roger McNamee, Lyft president John Zimmer, and Rosenstein. But its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction. In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.

As suspicion of Silicon Valley grows, the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track. For this, they have been getting a lot of attention. As the backlash against tech has grown, so too has the appeal of techies repenting for their sins. The Center for Humane Technology has been profiled – and praised – by the New York Times, the Atlantic, Wired and others.

But tech humanism’s influence cannot be measured solely by the positive media coverage it has received. The real reason tech humanism matters is that some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”, and Twitter boss Jack Dorsey recently claimed he wants to improve the platform’s “conversational health”.

Tristan Harris, founder of the Center for Humane Technology. Photograph: Robert Gumpert for the Guardian

Even Mark Zuckerberg, famous for encouraging his engineers to “move fast and break things”, seems to be taking a tech humanist turn. In January, he announced that Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.

Zuckerberg’s choice of words is significant: Time Well Spent is the name of the advocacy group that Harris led before co-founding the Center for Humane Technology. In April, Zuckerberg brought the phrase to Capitol Hill. When a photographer snapped a picture of the notes Zuckerberg used while testifying before the Senate, they included a discussion of Facebook’s new emphasis on “time well spent”, under the heading “wellbeing”.

This new concern for “wellbeing” may strike some observers as a welcome development. After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.

But these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires. Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform. Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes. These changes may soothe some of the popular anger directed towards the tech industry, but they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.


The Center for Humane Technology argues that technology must be “aligned” with humanity – and that the best way to accomplish this is through better design. Their website features a section entitled The Way Forward. A familiar evolutionary image shows the silhouettes of several simians, rising from their crouches to become a man, who then turns back to contemplate his history.

“In the future, we will look back at today as a turning point towards humane design,” the header reads. To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”.

There is a good reason why the language of tech humanism is penetrating the upper echelons of the tech industry so easily: this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives. Their success turned the Bay Area tech industry into a global powerhouse – and produced the digitised world that today’s tech humanists now lament.

The story begins in the 1960s, when Silicon Valley was still a handful of electronics firms clustered among fruit orchards. Computers came in the form of mainframes then. These machines were big, expensive and difficult to use. Only corporations, universities and government agencies could afford them, and they were reserved for specialised tasks, such as calculating missile trajectories or credit scores.

Computing was industrial, in other words, not personal, and Silicon Valley remained dependent on a small number of big institutional clients. The practical danger that this dependency posed became clear in the early 1960s, when the US Department of Defense, by far the single biggest buyer of digital components, began cutting back on its purchases. But the fall in military procurement wasn’t the only mid-century crisis around computing.

Computers also had an image problem. The inaccessibility of mainframes made them easy to demonise. In these whirring hulks of digital machinery, many observers saw something inhuman, even evil. To antiwar activists, computers were weapons of the war machine that was killing thousands in Vietnam. To highbrow commentators such as the social critic Lewis Mumford, computers were instruments of a creeping technocracy that threatened to extinguish personal freedom.

But during the course of the 1960s and 70s, a series of experiments in northern California helped solve both problems. These experiments yielded breakthrough innovations like the graphical user interface, the mouse and the microprocessor. Computers became smaller, more usable and more interactive, reducing Silicon Valley’s reliance on a few large customers while giving digital technology a friendlier face.

Apple founder Steve Jobs ‘got the notion of tools for human use’. Photograph: Ted Thai/Polaris / eyevine

The pioneers who led this transformation believed they were making computing more human. They drew deeply from the counterculture of the period, and its fixation on developing “human” modes of living. They wanted their machines to be “extensions of man”, in the words of Marshall McLuhan, and to unlock “human potential” rather than repress it. At the centre of this ecosystem of hobbyists, hackers, hippies and professional engineers was Stewart Brand, famed entrepreneur of the counterculture and founder of the Whole Earth Catalog. In a famous 1972 article for Rolling Stone, Brand called for a new model of computing that “served human interest, not machine”.

Brand’s disciples answered this call by developing the technical innovations that transformed computers into the form we recognise today. They also promoted a new way of thinking about computers – not as impersonal slabs of machinery, but as tools for unleashing “human potential”.

No single figure contributed more to this transformation of computing than Steve Jobs, who was a fan of Brand and a reader of the Whole Earth Catalog. Jobs fulfilled Brand’s vision on a global scale, launching the mass personal computing era with the Macintosh in the mid-80s, and the mass smartphone era with the iPhone two decades later. Brand later acknowledged that Jobs embodied the Whole Earth Catalog ethos. “He got the notion of tools for human use,” Brand told Jobs’ biographer, Walter Isaacson.

Building those “tools for human use” turned out to be great for business. The impulse to humanise computing enabled Silicon Valley to enter every crevice of our lives. From phones to tablets to laptops, we are surrounded by devices that have fulfilled the demands of the counterculture for digital connectivity, interactivity and self-expression. Your iPhone responds to the slightest touch; you can look at photos of anyone you have ever known, and broadcast anything you want to all of them, at any moment.

In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention. To guide us out of that wilderness, tech humanists say we need more humanising. They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.


Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.

It is difficult to imagine human beings without technology. The story of our species began when we began to make tools. Homo habilis, the first members of our genus, left sharpened stones scattered across Africa. Their successors hit rocks against each other to make sparks, and thus fire. With fire you could cook meat and clear land for planting; with ash you could fertilise the soil; with smoke you could make signals. In flickering light, our ancestors painted animals on cave walls. The ancient tragedian Aeschylus recalled this era mythically: Prometheus, in stealing fire from the gods, “founded all the arts of men.”

All of which is to say: humanity and technology are not only entangled, they constantly change together. This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used. The evolutionary scientist Mary Marzke shows that we developed “a unique pattern of muscle architecture and joint surface form and functions” for this purpose.

The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities. For millennia, people have feared that new media were eroding the very powers that they promised to extend. In The Phaedrus, Socrates warned that writing on wax tablets would make people forgetful. If you could jot something down, you wouldn’t have to remember it. In the late middle ages, as a culture of copying manuscripts gave way to printed books, teachers warned that pupils would become careless, since they no longer had to transcribe what their teachers said.

Yet as we lose certain capacities, we gain new ones. People who used to navigate the seas by following stars can now program computers to steer container ships from afar. Your grandmother probably has better handwriting than you do – but you probably type faster.

The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology. Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.

Intentionally or not, this is what tech humanists are doing when they talk about technology as threatening human nature – as if human nature had stayed the same from the paleolithic era until the rollout of the iPhone. Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them. And while the tech humanists may believe they are acting in the common good, they themselves acknowledge they are doing so from above, as elites. “We have a moral responsibility to steer people’s thoughts ethically,” Tristan Harris has declared.

Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes. The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.

This paternalism produces a central irony of tech humanism: the language that they use to describe users is often dehumanising. “Facebook appeals to your lizard brain – primarily fear and anger,” says McNamee. Harris echoes this sentiment: “Imagine you had an input cable,” he has said. “You’re trying to jack it into a human being. Do you want to jack it into their reptilian brain, or do you want to jack it into their more reflective self?”

The Center for Humane Technology’s website offers tips on how to build a more reflective and less reptilian relationship to your smartphone: “going greyscale” by setting your screen to black-and-white, turning off app notifications and charging your device outside your bedroom. It has also announced two major initiatives: a national campaign to raise awareness about technology’s harmful effects on young people’s “digital health and well-being”; and a “Ledger of Harms” – a website that will compile information about the health effects of different technologies in order to guide engineers in building “healthier” products.

These initiatives may help some people reduce their smartphone use – a reasonable personal goal. But there are some humans who may not share this goal, and there need not be anything unhealthy about that. Many people rely on the internet for solace and solidarity, especially those who feel marginalised. The kid with autism may stare at his screen when surrounded by people, because it lets him tolerate being surrounded by people. For him, constant use of technology may not be destructive at all, but in fact life-saving.

Pathologising certain potentially beneficial behaviours as “sick” isn’t the only problem with the Center for Humane Technology’s proposals. They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit. This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.

This may be why their approach is so appealing to the tech industry. There is no reason to doubt the good intentions of tech humanists, who may genuinely want to address the problems fuelling the tech backlash. But they are handing the firms that caused those problems a valuable weapon. Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power. By channelling popular anger at Big Tech into concerns about health and humanity, tech humanism gives corporate giants such as Facebook a way to avoid real democratic control. In a moment of danger, it may even help them protect their profits.


One can easily imagine a version of Facebook that embraces the principles of tech humanism while remaining a profitable and powerful monopoly. In fact, these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.

When Zuckerberg announced that Facebook would prioritise “time well spent” over total time spent, it came a couple weeks before the company released their 2017 Q4 earnings. These reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”.

Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable. In a recent interview, he said: “Over the long term, even if time spent goes down, if people are spending more time on Facebook actually building relationships with people they care about, then that’s going to build a stronger community and build a stronger business, regardless of what Wall Street thinks about it in the near term.”

Sheryl Sandberg has also stressed that the shift will create “more monetisation opportunities”. How? Everyone knows data is the lifeblood of Facebook – but not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”. Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently. Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.

Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact. Advertisers can target the closest friends of the users who already like a product, on the assumption that close friends tend to like the same things.

Facebook CEO Mark Zuckerberg testifies before the US Senate last month. Photograph: Jim Watson/AFP/Getty Images

So when Zuckerberg talks about wanting to increase “meaningful” interactions and building relationships, he is not succumbing to pressure to take better care of his users. Rather, emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform. Rather than spending a lot of time doing things that Facebook doesn’t find valuable – such as watching viral videos – you can spend a bit less time, but spend it doing things that Facebook does find valuable.

In other words, “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics. Shifting to this model not only sidesteps concerns about tech addiction – it also acknowledges certain basic limits to Facebook’s current growth model. There are only so many hours in the day. Facebook can’t keep prioritising total time spent – it has to extract more value from less time.

In many ways, this process recalls an earlier stage in the evolution of capitalism. In the 19th century, factory owners in England discovered they could only make so much money by extending the length of the working day. At some point, workers would die of exhaustion, or they would revolt, or they would push parliament to pass laws that limited their working hours. So industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.

A similar situation confronts Facebook today. They have to make the attention of the user more valuable – and the language and concepts of tech humanism can help them do it. So far, it seems to be working. Despite the reported drop in total time spent, Facebook recently announced huge 2018 Q1 earnings of $11.97bn (£8.7bn), smashing Wall Street estimates by nearly $600m.


Today’s tech humanists come from a tradition with deep roots in Silicon Valley. Like their predecessors, they believe that technology and humanity are distinct, but can be harmonised. This belief guided the generations who built the “humanised” machines that became the basis for the industry’s enormous power. Today it may provide Silicon Valley with a way to protect that power from a growing public backlash – and even deepen it by uncovering new opportunities for profit-making.

Fortunately, there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use. It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.

To say that we’re all cyborgs is not to say that all technologies are good for us, or that we should embrace every new invention. But it does suggest that living well with technology can’t be a matter of making technology more “human”. This goal isn’t just impossible – it’s also dangerous, because it puts us at the mercy of experts who tell us how to be human. It cedes control of our technological future to those who believe they know what’s best for us because they understand the essential truths about our species.

The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power.

Today, that power is wielded by corporations, which own our technology and run it for profit. The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.

There is an alternative. If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right. The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.

Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology – rather than the small group of people who have captured society’s wealth.

What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power. Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources. After all, Silicon Valley wouldn’t exist without billions of dollars of public funding, not to mention the vast quantities of information that we all provide for free. Facebook’s market capitalisation is $500bn with 2.2 billion users – do the math to estimate how much the time you spend on Facebook is worth. You could apply the same logic to Google. There is no escape: whether or not you have an account, both platforms track you around the internet.

In addition to taxing and shrinking tech firms, democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation coming into effect in the European Union later this month. But more robust regulation of Silicon Valley isn’t enough. We also need to pry the ownership of our digital infrastructure away from private firms.

This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run. These democratic digital structures can focus on serving personal and social needs rather than piling up profits for investors. One inspiring example is municipal broadband: a successful experiment in Chattanooga, Tennessee, has shown that publicly owned internet service providers can supply better service at lower cost than private firms. Other models of digital democracy might include a worker-owned Uber, a user-owned Facebook or a socially owned “smart city” of the kind being developed in Barcelona. Alternatively, we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.

More experimentation is needed, but democracy should be our guiding principle. The stakes are high. Never before have so many people been thinking about the problems produced by the tech industry and how to solve them. The tech backlash is an enormous opportunity – and one that may not come again for a long time.

The old techno-utopianism is crumbling. What will replace it? Silicon Valley says it wants to make the world a better place. Fulfilling this promise may require a new kind of disruption.

Main illustration by Lee Martin/Guardian Design

