
Truly intelligent enemies could change the face of gaming


The Monolith Productions team isn't alone in believing that one of gaming's frontiers lies with the unpredictability of AI-controlled enemies and allies. Mitu Khandaker teaches on the topic as assistant arts professor at NYU Game Center -- but as chief creative officer at artificial-intelligence company Spirit AI, she's also working with a team to develop technology for companies to use in their own games.

"What we do is build tools to help developers creatively author story scenarios and author personalities for characters and the kinds of things that characters might say, but then those characters might improvise based on the space that you've authored for them," Khandaker told Engadget. "There's a lot of potential there for players to really have deeper, more meaningful conversations with characters."


Spirit AI's efforts could be summarized as "building technology which will let us make the walking simulator a conversation," according to Khandaker. Think of the squad's idle chatter in Mass Effect, or the casual smalltalk during long car rides in Final Fantasy XV: Pre-written, nonessential dialogue tumbling out of an algorithmic generator that organically delivers exposition and character detail. But what if those AI characters were talking to the player and making up responses on the fly — even if they're enemy grunts with their guns drawn?

Khandaker can imagine creating games where the enemies aren't just tokens or pawns but more fully formed virtual characters. "Instead of just committing violence upon some kind of enemy, maybe [players will be] trying to understand their motives," she said. "Now, in this cultural context, more than ever, a human understanding of the reasons why people make decisions they do is super-important. Even if, on some level, we think decisions people make might be evil, we still need to have the level of understanding because that's how we learn and grow and how we combat evil."

What Shadow of War won't have are human enemies that players can mind control or kill in gruesome ways: Your foes will be Mordor-born Orcs who span the gray-brown gamut and exhibit the violent, traitorous ways of their race. This is intentional.

"One of the challenging things is striking the balance of having a game that's fundamentally pretty gritty and violent, but also making sure that we have this humor in there and this levity to it," de Plater said. "Ultimately, even though it is dealing with some dark themes, there is a cartoony level of violence as well. Orcs represent these caricatures. Everything's turned up to 11 in terms of their personality and their characters and their faults, and the violence of their society and how power-crazed they all are; how backstabbing and cutthroat they are against anyone."

In short, you'll be dispatching and commanding a class of enemy designed to be dynamically interesting yet disposable in a way that shouldn't trigger a player's ethical qualms. Game critic Austin Walker believed that the first game, Shadow of Mordor, failed to justify Talion's anti-Orc kill-and-enslave crusade: "But we're told again and again that these Orcs want to destroy beautiful things. It just doesn't hold up, and this tension extends to every element of their narrative and systemic characterizations. These Orcs have fears, interests, values, rivalry and friendships. Some Orcs are lovingly protective of their bosses or underlings. But they are 'savage creatures' that 'hate beauty,' so go ahead and enslave them," Walker wrote.

At least Shadow of War will strive to explore new and uncomfortable relationships between player and enemy. Even if it never lets players forget Orcs are villains at their core, some will attempt to liberate themselves from any overlord, dark or bright, de Plater said. He didn't specify whether these autonomy-seeking enemies will be a scripted faction in the game. But imagine wandering down the sludgy Mordor foothills only to find a procedurally-generated band of Orcs that avoid conflict and try to run away from you, the bogeyman who's murdered (or recruited) all their friends, as they search for a better life.

Imbuing enemies with relatable traits -- human traits -- is as fascinating as it is discomforting. Since their inception, single-player games have driven a hard wedge between players and enemies by making the latter alien and threatening. Space Invaders and Galaga literally used aliens, while Missile Command tossed unthinking explosives at the vulnerable people populating the player's cities. The dawn of the first-person-shooter genre featured demonic monsters in Doom and Nazis in Wolfenstein 3D, enemies so unrelatable that players didn't think twice when gunning them down.

Spirit AI's clients are using its AI-conversation tech to augment NPC allies, though Khandaker's team is starting to graft it onto enemies. But it's really up to whoever uses Spirit's tools, and whichever studio decides to challenge players with ordinary foes that do more than shoot in their direction.

"I would love to see that as a moral choice that you make. It should be sometimes deeply troubling, depending on your particular game, that somebody is so human and so full of their own motive, doing the things that they're doing, that it's not so easy to dehumanize them," Khandaker said.


"This is why I think it comes down to designing photo-realistic, naturalistic AI really well. If [designers] let you push them around, you're going to maybe transfer that to real people. If, however, they don't — if they push back and they try and do the emotional labor of helping you to understand what it is to interact with someone in a nice, well-considered way — then you can maybe transfer that to your interactions with people," Khandaker said. "I think that through good, well-considered design, we'll get to a point where actually these interactions with characters help us to better understand the motivations that real people have."

Whether AI tech will develop substantially in the next few years and, ultimately, whether improving enemy and ally AI will positively affect the player's experience, is another question. As Compulsion Games' Creative Director Guillaume Provost points out, making smarter enemies doesn't matter much if the player doesn't know what's going on.

"Making AIs that are believable often involve stuff that's not that technical and has a lot more to do with the acting parts that are involved in the AI," Provost said. "So it's not so much the sophistication of the technology behind it as it is the sophistication of expressing what's going on in their heads to the player."


For Provost, that meant tweaking some gameplay in Compulsion Games' latest title, We Happy Few, which was released in Early Access last year. In it, players try to escape an English city whose denizens imbibe drugs en masse to forget their communal crimes -- and punish those who won't do the same. In playtesting, that meant making the hostile NPCs warn the player several times before reacting violently. The team couldn't assume players would pick up on cues, because in a game the player's attention is focused on whatever they're interacting with at the time.

"The truth is, it's not a movie where you sit down and watch people the whole time. You're actively doing stuff. You're running around, you're stealing stuff. The player has a smaller portion of their brain left to understand what the people around them are doing," Provost said.

Which is why developers have to treat player attention as a resource and be smart about what they make intelligent. Provost recalled a story about the grunts in the first Halo, who were programmed to yell "I surrender" and wave their arms around -- but players would gun them down before the little enemies could bark out their lines. For the same reason, Provost doesn't see much use in plugging ever more AI into enemies simply to make them smarter in future games.


Is neuroenhancement by noninvasive brain stimulation a net zero-sum proposition?


In the past several years, the number of studies investigating enhancement of cognitive functions through noninvasive brain stimulation (NBS) has increased considerably. NBS techniques, such as transcranial magnetic stimulation and transcranial current stimulation, seem capable of enhancing cognitive functions in patients and in healthy humans, particularly when combined with other interventions, including pharmacologic, behavioral and cognitive therapies. The "net zero-sum model", based on the assumption that brain resources are subjected to the physical principle of conservation of energy, is one of the theoretical frameworks proposed to account for such enhancement of function and its potential cost. We argue that to guide future neuroenhancement studies, the net zero-sum concept is helpful, but only if its limits are tightly defined.

Nobody lives here: The nearly 5M US Census Blocks with zero population (2014)



A Block is the smallest area unit used by the U.S. Census Bureau for tabulating statistics. As of the 2010 census, the United States consists of 11,078,300 Census Blocks. Of them, 4,871,270 blocks totaling 4.61 million square kilometers were reported to have no population living inside them. Despite having a population of more than 310 million people, 47 percent of the USA remains unoccupied.
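A quick back-of-the-envelope check of that 47 percent figure, assuming a total U.S. area (land plus water) of roughly 9.83 million square kilometers -- a number not given in the post itself:

```python
# Sanity-check the post's "47 percent unoccupied" claim.
total_area_km2 = 9.83e6   # assumed total U.S. area (land + water), not from the post
unpopulated_km2 = 4.61e6  # area of zero-population blocks, from the post

pct = round(100 * unpopulated_km2 / total_area_km2)
print(pct)  # → 47
```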

Green shading indicates unoccupied Census Blocks. A single inhabitant is enough to omit a block from shading.

Update Jan 2015: Prints and canvases of the Nobody Lives Here map are now available.

Update 2014.05.01: I’ve received a couple questions about Canada. Just to be clear, this map is of the United States only. It is based on 2010 data published by the U.S. Census Bureau, which for reasons I hope are apparent, does not include data on our friends in the Great White North. For a similar depiction of Canada, see this map whipped up by Michael Chung.

Map observations

The map tends to highlight two types of areas:

  • places where human habitation is physically restrictive or impossible, and
  • places where human habitation is prohibited by social or legal convention.

Water features such as lakes, rivers, swamps and floodplains are revealed as places where it is hard for people to live. In addition, the mountains and deserts of the West, with their hostility to human survival, remain largely void of permanent population.

Of the places where settlement is prohibited, the most apparent are wilderness protection and recreational areas (such as national and state parks) and military bases. At the national and regional scales, these places appear as large green tracts surrounded by otherwise populated countryside.

At the local level, city and county parks emerge in contrast to their developed urban and suburban surroundings. At this scale, even major roads such as highways and interstates stretch like ribbons across the landscape.

Commercial and industrial areas are also likely to be green on this map. The local shopping mall, an office park, a warehouse district or a factory may have their own Census Blocks. But if people don’t live there, they will be considered “uninhabited”. So it should be noted that just because a block is unoccupied, that does not mean it is undeveloped.

Perhaps the two most notable anomalies on the map occur in Maine and the Dakotas. Northern Maine is conspicuously uninhabited. Despite being one of the earliest regions in North America to be settled by Europeans, the population there remains so low that large portions of the state’s interior have yet to be politically organized.

In the Dakotas, the border between North and South appears to be unexpectedly stark. Geographic phenomena typically do not respect artificial human boundaries. Throughout the rest of the map, state lines are often difficult to distinguish. But in the Dakotas, northern South Dakota is quite distinct from southern North Dakota. This is especially surprising considering that the county-level population density on both sides of the border is about the same at less than 10 people per square mile.

Update: On a more detailed examination of those two states, I’m convinced the contrast here is due to differences in the sizes of the blocks. North Dakota’s blocks are more consistently small (StDev of 3.3) while South Dakota’s are more varied in area (StDev of 9.28). West of the Missouri River, South Dakota’s blocks are substantially larger than those in ND, so a single inhabitant can appear to take up more space. Between the states, this provides a good lesson in how changing the size and shape of a geographic unit can alter perceptions of the landscape.

Finally, the differences between the eastern and western halves of the contiguous 48 states are particularly stark to me. In the east, with its larger population, unpopulated places are more likely to stand out on the map. In the west, the opposite is true. There, population centers stand out against the wilderness.

::

Ultimately, I made this map to show a different side of the United States. Human geographers spend so much time thinking about where people are. I thought I might bring some new insight by showing where they are not, adding contrast and context to the typical displays of the country’s population geography.

I’ve all but scratched the surface of insight available from examining this map. There’s a lot of data here. What trends and patterns do you see?

Errata

  • Due to a cartographic mishap, the Gulf of California was missing from the original version. Though it was quickly fixed, that version keeps popping up across the Internet. It displeases me when I see it, yet I’m amused that people keep reposting it without noticing the error. The geography gods judge those people harshly.
  • Some islands may be missing on the hi-res edition if they were not a part of the waterbody data sets I used.

::

©mapsbynik 2014
Creative Commons Attribution-NonCommercial-ShareAlike
Block geography and population data from U.S. Census Bureau
Water body geography from National Hydrology Dataset and Natural Earth
Made with Tilemill
USGS National Atlas Equal Area Projection

Mathematics of shuffling by smooshing


Persi Diaconis shuffled and cut the deck of cards I’d brought for him, while I promised not to reveal his secrets. “I’m not going to give you the chance,” he retorted. In an empty conference room at the Joint Mathematics Meetings in San Antonio, Texas, this January, he casually tossed the cards into four piles in a seemingly random motion — yet when he checked, each pile magically had an ace on top. “Of course, it’s easy to get confused when there are a lot of cards, so let me just take four,” he said, scooping up the aces. He swiveled the four-card pile in his hands — always keeping it in the same flat plane — and sometimes the aces were faceup, sometimes facedown, even though they couldn’t possibly have flipped over.

Diaconis’ career as a professional magician began more than five decades ago, when he ran away from home at age 14 to go on the road with the sleight-of-hand virtuoso Dai Vernon. But unlike most magicians, Diaconis eventually found his way into academia, lured by an even more powerful siren song: mathematics. At 24, he started taking college classes to try to learn how to calculate the probabilities behind various gambling games. A few years later he was admitted to Harvard University’s graduate statistics program on the strength of a recommendation letter from the famed mathematics writer Martin Gardner that said, more or less, “This kid invented two of the best ten card tricks in the last decade, so you should give him a chance.”

Now a professor of mathematics and statistics at Stanford University, Diaconis has employed his intuition about cards, which he calls “the poetry of magic,” in a wide range of settings. Once, for example, he helped decode messages passed between inmates at a California state prison by using small random “shuffles” to gradually improve a decryption key. He has also analyzed Bose-Einstein condensation — in which a collection of ultra-cold atoms coalesces into a single “superatom” — by envisioning the atoms as rows of cards moving around. This makes them “friendly,” said Diaconis, whose speech still carries the inflections of his native New York City. “We all have our own basic images that we translate things into, and for me cards were where I started.”

In 1992, Diaconis famously proved— along with the mathematician Dave Bayer of Columbia University — that it takes about seven ordinary riffle shuffles to randomize a deck. Over the years, Diaconis and his students and colleagues have successfully analyzed the effectiveness of almost every type of shuffle people use in ordinary life.
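The riffle shuffle that Bayer and Diaconis analyzed is usually modeled as the Gilbert-Shannon-Reeds shuffle: cut the deck binomially, then drop cards from each half with probability proportional to the half's remaining size. A minimal Python sketch of that model (the function name is just for illustration):

```python
import random

def gsr_riffle(deck):
    """One Gilbert-Shannon-Reeds riffle: a binomial cut, then cards drop
    from each packet with probability proportional to the packet's size."""
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) cut point
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        remaining_left = len(left) - i
        remaining_right = len(right) - j
        # drop from the left packet with probability (remaining left / remaining total)
        if random.random() < remaining_left / (remaining_left + remaining_right):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

deck = list(range(52))
for _ in range(7):  # the Bayer-Diaconis result: about seven riffles suffice
    deck = gsr_riffle(deck)
```

Each pass interleaves the two packets without losing or duplicating cards, so after seven passes the deck is still a permutation of the original 52.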

All except one: “smooshing.”

This toddler-level technique involves spreading the cards out on a table, swishing them around with your hands, and then gathering them up. Smooshing is used in poker tournaments and in baccarat games in Monte Carlo, but no one actually knows how long you need to smoosh a deck to randomize it. “Smooshing is a completely different mechanism from the other shuffles, and my usual techniques don’t fit into that,” Diaconis said. The problem has tantalized him for decades.

Now he is on a quest to solve it. He has carried out preliminary experiments suggesting that one minute of ordinary smooshing may be enough for all practical purposes, and he is now analyzing a mathematical model of smooshing in an attempt to prove that assertion.

Diaconis’ previous card-shuffling work has helped to shed light on numerical approximation algorithms known as Markov chain Monte Carlo methods, ubiquitous in scientific simulation, which employ processes akin to shuffling to generate random examples of phenomena that are too hard to calculate completely. Diaconis believes that a mathematical analysis of smooshing will likewise have ramifications that go far beyond card shuffling. “Smooshing is close to a whole raft of practical life problems,” he said. It has more in common with a swirling fluid than with, say, a riffle shuffle; it’s reminiscent, for example, of the mechanics underlying the motion of large garbage patches in the ocean, during which swirling currents stir a large collection of objects.

“The smooshing problem is a way of boiling down the details of mixing to their essence,” said Jean-Luc Thiffeault, a professor of applied mathematics at the University of Wisconsin, Madison, who studies fluid mixing.

Fluid-flow problems are notoriously hard to solve. The most famous such problem, which concerns the Navier-Stokes equations of fluid flow, is so difficult that it has a million-dollar bounty on its head. “The mathematics of any model for spatial mixing is in pretty bad shape,” Diaconis said.

Diaconis hopes that the union of fluid-flow techniques and card-shuffling math might point a way forward. “My kind of math — combinatorics, probability — is at right angles to the kind of math the Navier-Stokes people do,” he said. “If you bring fresh tools in, it might do some good in a bunch of these classical problems.”

Going Random

It might seem that no amount of smooshing can be definitively determined to be enough. After all, no matter how long you’ve smooshed the cards, wouldn’t more smooshing be even better? From a practical standpoint, probably not. Diaconis and Thiffeault both suspect that there is a particular moment in smooshing — a “cutoff,” as mathematicians call it — at which the deck transitions from highly ordered to highly unpredictable. After this point, more smooshing will confer only inconsequentially tiny increments of additional randomness.

The cutoff phenomenon, which occurs in a variety of situations in math and physics, owes its discovery to an earlier shuffling analysis by Diaconis and Mehrdad Shahshahani. In 1981 the pair was trying to understand a simple shuffle in which you just swap two randomly chosen cards. If you do many such shuffles, for a long time the deck will be far from random. But after roughly 100 shuffles it will suddenly transition to nearly random.
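For the random-transposition shuffle, the Diaconis-Shahshahani analysis places the cutoff at about (1/2)·n·log n swaps, which for a 52-card deck works out to roughly 100 -- the figure in the text. A small sketch of the shuffle and its predicted cutoff:

```python
import math
import random

n = 52
cutoff = 0.5 * n * math.log(n)  # Diaconis-Shahshahani cutoff: ~(1/2) n log n
print(round(cutoff))            # ≈ 103 swaps for a 52-card deck

def random_transposition(deck):
    """Swap two positions chosen uniformly at random (they may coincide)."""
    i, j = random.randrange(len(deck)), random.randrange(len(deck))
    deck[i], deck[j] = deck[j], deck[i]

deck = list(range(n))
for _ in range(round(cutoff)):  # past this point, extra swaps add little randomness
    random_transposition(deck)
```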

Since that discovery, the cutoff has been identified in many Markov chain Monte Carlo algorithms, and recently it has even been discovered in the behavior of atomic spins in the Ising model, which describes the process by which materials become permanent magnets. “The idea of the cutoff has been very influential,” said Yuval Peres, a mathematician at Microsoft Research in Redmond, Wash.

All the card shuffling methods that have been successfully analyzed have cutoffs, and Diaconis conjectures that smooshing will too. “I’d bet $100 to $1 that smooshing has a cutoff,” Diaconis said.

10 Smooshing Tests

Diaconis is drawn to problems he can get his hands on. When he got curious about how shaving the side of a die would affect its odds, he didn’t hesitate to toss shaved dice 10,000 times (with help from his students). And when he wondered whether coin tossing is really unbiased, he filmed coin tosses using a special digital camera that could shoot 1,000 frames per second — and discovered, disconcertingly, that coin tosses are slightly biased toward the side of the coin that started out faceup.

So to get a feel for how much smooshing is needed to produce a random deck, Diaconis grabbed a deck and started smooshing. Together with his collaborators, the Stanford biostatistician Marc Coram and Lauren Bandklayder, now a graduate student at Northwestern University, he carried out 100 smooshes each in lengths of 15 seconds, 30 seconds, and one minute.

Next, he had to figure out how random the decks had become. The ideal way to do this would be to check whether each possible deck arrangement appears equally often among the smooshed decks. But this approach is utterly impractical: The number of arrangements of a deck of cards is 52 factorial — the product of the first 52 numbers — which approaches the estimated number of atoms in the Milky Way galaxy. “If everyone had been shuffling decks of cards every second since the start of the Earth, you couldn’t touch 52 factorial,” said Ron Graham, a mathematician at the University of California, San Diego. In fact, any time you shuffle a deck to the point of randomness, you have probably created an arrangement that has never existed before.
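The size of that number is easy to confirm:

```python
import math

# The number of orderings of a 52-card deck: 52 factorial.
arrangements = math.factorial(52)
print(f"52! ≈ {arrangements:.3e}")  # ~8.066e+67, on the order of 10^67
```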

Since a direct experimental test of randomness isn’t feasible, Diaconis and his collaborators subjected their smooshed decks to a battery of 10 statistical tests designed to detect nonrandomness. One test looked at whether the top card of the deck had moved to every possible position equally often in the 100 smooshed decks. Another looked at how often pairs of adjacent cards — the seven and eight of spades, for example — remained adjacent after the shuffle.

Of the 10 tests, Diaconis suspected that smooshing might have the hardest time passing the adjacent-pairs test, since cards that start out together might get swept along together by the hand motions. And indeed, the 15-second smooshes failed the adjacent-pairs test spectacularly, often having as many as 10 pairs still adjacent after the smoosh — more than enough hidden order for a smart gambler to exploit. “If you know that, say, 10 percent of the cards are still going to be next to the cards they were next to before, that’s a tremendous advantage if you’re playing blackjack,” Graham said.
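A sketch of the adjacent-pairs statistic itself is simple to write down; the details here (function name, trial count) are illustrative, not Diaconis's exact test. For a truly random permutation, each of the 51 originally adjacent pairs stays adjacent with probability 2/52, so on average only about two pairs survive:

```python
import random

def adjacent_pairs_kept(before, after):
    """Count pairs adjacent in `before` that remain adjacent
    (in either order) in `after`."""
    pos = {card: k for k, card in enumerate(after)}
    return sum(abs(pos[before[k]] - pos[before[k + 1]]) == 1
               for k in range(len(before) - 1))

deck = list(range(52))
trials = 20000
total = 0
for _ in range(trials):
    shuffled = deck[:]
    random.shuffle(shuffled)  # a (pseudo)uniform random permutation
    total += adjacent_pairs_kept(deck, shuffled)

avg = total / trials
print(avg)  # expected value: 51 * 2/52 = 51/26 ≈ 1.96 surviving pairs
```

Against that baseline of roughly two surviving pairs, the 10 or so pairs left by a 15-second smoosh are a glaring signature of nonrandomness.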

Diaconis expected the 30-second and one-minute smooshes to fail the adjacent-pairs test too, but to his surprise, they aced all 10 tests. “I thought this was a lousy method of shuffling,” he said. “I have new respect for it.”

The experiments don’t prove that 30 seconds is enough smooshing to randomize a deck. They only establish that 30-second smooshes are not as egregiously nonrandom as 15-second smooshes. With a sample size of only 100 smooshes, “you can only detect very strong departures from randomness,” Diaconis said. It seems likely that the cutoff occurs sometime before one minute, since 30-second smooshes already seem to do pretty well. But, he said, “we’d be on more solid ground in discriminating between 30 seconds and one minute if we had 10,000 smooshes.” That’s far more than his group can carry out, so Diaconis is thinking about organizing a “national smoosh” in high-school or junior-high math classes.

Even more than additional data, however, Diaconis wants a proof. After all, ad hoc statistical tests are never a conclusive way to show that a shuffle is random. “It’s perfectly possible that some clever person will say, ‘Why didn’t you try this test?’ and it turns out to all be wrong,” he said. “I want to be able to say, ‘It doesn’t work after a minute and here’s why,’ or ‘It works after a minute and here’s a proof.’”

Theoretical Smooshing

When Diaconis returned to college after a decade as a professional magician, his first three grades in advanced calculus were C, C and D. “I didn’t know you were supposed to study,” he said. His teacher told him that he should write down the proofs and practice them as if they were French verbs. “I said, ‘Oh, you’re allowed to do that?’” Diaconis said. “I thought you were just supposed to see it.”

When it came to smooshing, instead of just trying to “see it,” Diaconis devoured the literature on fluid mixing. “When we started talking about the connections between cards and fluid mixing, he read the whole 200 pages of my Ph.D. thesis,” said Emmanuelle Gouillart, a researcher who studies glass melting at Saint-Gobain, a glass and construction materials company founded in Paris in 1665. “I was really impressed.”

While Diaconis grew more conversant in fluid mechanics, Gouillart benefited from his unique insight into card shuffling. “It turned out that we were studying very similar systems, but with different descriptions and different tools,” Gouillart said. The collaboration led her to develop a better way to measure correlations between neighboring particles in the fluids she studies.

Diaconis, meanwhile, has developed a mathematical model for what he calls “the sound of one hand smooshing.” In his model, the cards are represented by points scattered in a square, and the “hand” is a small disk that moves around the square while rotating the points under it by random angles. (It would be easy, Diaconis noted, to extend this to a two-handed smooshing model, simply by adding a second disk.)

Diaconis has been able to show — not just for a 52-card deck but for any number of points — that if you run this smooshing model forever, the arrangement of points will eventually become random. This might seem obvious, but some shuffling methods fail to randomize a deck no matter how long you shuffle, and Diaconis worried that smooshing might be one of them. After all, he reasoned, some cards might get stuck at the edges of the table, in much the same way that, when you mix cake batter, a little flour inevitably gets stranded at the edges of the bowl and never mixes in. But by drawing on 50 years of mathematics on the behavior of random flows, Diaconis proved that if you smoosh long enough, even cards at the edge will get mixed in.
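The one-hand smooshing model as described -- points scattered in a square, a disk-shaped "hand" that rotates the points beneath it by random angles -- can be sketched in a few lines. The radius, step count and the constraint keeping the disk inside the square are illustrative assumptions, not parameters from Diaconis's model:

```python
import math
import random

def smoosh(points, radius=0.1, steps=5000):
    """One-hand smoosh on the unit square: at each step a disk-shaped
    'hand' lands at a random spot and rotates every point under it by
    a random angle about the disk's center."""
    pts = [list(p) for p in points]
    for _ in range(steps):
        # keep the disk fully inside the square so rotated points stay in bounds
        cx = random.uniform(radius, 1 - radius)
        cy = random.uniform(radius, 1 - radius)
        theta = random.uniform(0, 2 * math.pi)
        c, s = math.cos(theta), math.sin(theta)
        for p in pts:
            dx, dy = p[0] - cx, p[1] - cy
            if dx * dx + dy * dy <= radius * radius:  # point is under the hand
                p[0], p[1] = cx + c * dx - s * dy, cy + s * dx + c * dy
    return [tuple(p) for p in pts]

cards = [(random.random(), random.random()) for _ in range(52)]
mixed = smoosh(cards)
```

Because each rotation moves a point only within the disk, every card stays on the table; the open question is how fast the points' arrangement approaches uniform randomness.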

His theoretical result says that the smooshing model will eventually mix the cards, but doesn’t say how long it will take. The model does provide a framework for relating the size of the deck to the amount of mixing time needed, but pinning down this relationship precisely requires ideas from a mathematical field still in its infancy, called the quantitative theory of differential equations. “Most studies of differential equations focus on what happens if you run the equation for a long time,” Diaconis said. “People are just now starting to study how the equation behaves if you run it for, say, a tenth of a second. So I have some careful work to do.”

Diaconis is optimistic that the work will lead him not just to an answer to the smooshing question, but to deeper discoveries. “The other shuffles have led to very rich mathematical consequences, and maybe this one will too,” he said.

Diaconis shares his magical secrets with only a select inner circle, but he dreams of laying the secrets of smooshing bare. “Smooshing is something that people use thousands of times a day, and mathematicians should be able to say something about it.”

Fifty Years of Minimalism (2015)



Gramophone APRIL 2015 - by Philip Clark

FIFTY YEARS OF MINIMALISM

Philip Clark searches out the roots of minimalism and traces the development of the highly influential musical movement over the past half-century.

The humble and subservient arpeggio, which had always been one of music's essential building blocks as the staging posts around which melodies were constructed and as the filigree of Classical passagework, found itself in the foreground of modern composition during the mid-1960s. It was as if a minor triad could fulfil Andy Warhol's decree that, in New York City, everyone could bask in their fifteen minutes of fame.

As Warhol plastered the walls of downtown galleries with looping images of Campbell's tomato soup cans and the recently deceased Marilyn Monroe, the earliest pieces of Philip Glass and Steve Reich, performed in equivalent gallery spaces, or if not in lofts - no concert hall would have been foolhardy enough to give these arpeggio-fixated reprobates a gig - built apparently comparable structures with sound. Warhol's Marilyn Diptych of 1962 comprised fifty repeats of the same epochal image, bright blond yellow on the left, and on the right a phased disintegration of a black-and-white version that faded towards nothing.

Reich's 1965 piece for tape It's Gonna Rain opened with an equally potent sonic image: the voice of Brother Walter, a black preacher, proclaiming the words 'It's gonna rain!' which smudge into harmonic potash as Reich runs the recording on two tape recorders that are moved out of phase. Warhol's 32 Campbell's Soup Cans was deliberately non-painterly - the same soup can depicted thirty-two times, the desired uniformity of each canvas secured by mechanical screen-printing. In Glass's Music In Contrary Motion (1969), a basic melodic hook was extended with each repetition, scratching a similar structural itch. The aesthetic disjoint between Warhol's view of painting and those schooled in the European grand tradition could not have been starker. And these pioneering pieces of minimalist music kept European tradition at a comparable distance: this music was as far removed from Brahms or Bruckner as Warhol's work was from Rembrandt's The Night Watch.

Analogies drawn between this puzzling new music that, in the mid-1960s, had yet to acquire the envelope term 'minimalism', and the work of Andy Warhol, were one way that commentators of the time attempted to make sense of Glass, Reich and their compadres. When, twenty years later, second-generation minimalist John Adams premiered his opera Nixon In China at Houston Grand Opera, the critic of The New York Times, Donal Henahan, told his readers that 'Mr Adams does for the arpeggio what McDonald's did for the hamburger' - which was not meant to be read as a compliment. Fast food was homogenised and insipid, culinary pornography manufactured for the purposes of instant gastro-satisfaction. By making art out of soup cans, Warhol had abandoned the idealism of abstract expressionist painters like Mark Rothko, Barnett Newman and Willem de Kooning. The aesthetics of advertising had made an unwelcome incursion into the solemnity of the gallery space - and minimalist composition, too, propped up by nakedly tonal chord sequences and arpeggios, had signed a comparable Faustian pact; a dubious sell-out of purist modernist aesthetics.

But Henahan's sniffy attitude towards minimalism was not shared by his New York Times colleague John Rockwell who, in 1983, wrote an utterly joyful account of encountering Music With Changing Parts, as performed by Philip Glass and His Musicians, in a downtown loft space. "The music danced and pulsed with a special life, its motoric rhythms, burbling, highly amplified figurations and mournful sustained notes booming out through the huge black windows and filling up the bleak industrial neighbourhood." Rockwell describes spontaneous dancing in the streets as the music leaked out of those huge windows - "And across the street, silhouetted high up in a window, a lone saxophone player improvised a silent accompaniment like some faded postcard of 1950s Greenwich Village Bohemia. It was a good night to be in New York City."

Five decades on from its first stirrings - and a full fifty years after a nervous Steve Reich supervised the first performance of It's Gonna Rain - when the unfolding story of minimalism can often feel like a done deal, those controversies are worth revisiting. History is written by the victors, and there's no doubt that the minimalist composers consider themselves to have triumphed; where the modernism of Stockhausen, Boulez and Nono had alienated genuine music-lovers, the progress and audience credibility of Western classical music had been rescued by minimalists' determination to show atonality the red card. Perched somewhere between classical respectability and mass popular culture - Glass's 1976 opera Einstein On The Beach cracked the Metropolitan Opera, then he collaborated with Patti Smith, David Bowie and Leonard Cohen, while Reich would get down with the kids from Radiohead - minimalism was ideally placed to deal with the challenges and responsibilities dodged by introverted, self-serving Euro-modernism.

The inner life of music, though, is hopefully more nuanced, and this official, cannily spun history of minimalism represents only a minimal part of the whole story. Minimalism enjoys superb PR. Robert Hurwitz, President of Nonesuch Records, the record label of Glass, Reich and Adams, used the booklet-notes he wrote for his ten-disc anthology The John Adams Earbox to outline how he repositioned the one-time house label of Elliott Carter, Milton Babbitt and George Crumb towards a label that preached the minimalist credo. The problem with Carter's music was "a huge gap between what the music was supposed to be saying, and my gut response to listening to it."

In Robert Maycock's uncomfortably laudatory Glass: A Portrait (Sanctuary: 2002), you wait for the inevitable assault on European Modernism and, by page sixty, you're rolling with the punches. Messiaen's Catholicism, apparently, gave his modernist instincts some soul (and "Gershwin-like directness"), while the dreams of Boulez and Stockhausen "became corrupted". And a regrettable false dichotomy opens up, which has solidified into the dominant narrative. Last year Howard Goodall's BBC television documentary The Story Of Music came to the same conclusion: modernism bad, minimalism good.

Hints that there might be more to minimalism than this mundane pop history came when I interviewed the pianist, composer and improviser Frederic Rzewski in 2002. During the mid-1960s, Rzewski toured with the Italian composer Sylvano Bussotti and minimalism was a key obsession: "except the term 'minimalism' hadn't yet evolved and it was simply another strand of the avant-garde. We performed music by Giuseppe Chiari, who was a master, but never became a cultural icon like Philip Glass. The success of Górecki in the early 1990s opened the door for people to appreciate music by Morton Feldman and Howard Skempton. And yet Chiari remains unfamiliar." Rzewski concluded with the thought that Michael Nyman and Gavin Bryars became rich and famous. But what had happened to Chiari - and Tomas Schmit, Terry Jennings and Eric Andersen? "They were major figures involved in the minimalist movement who have since disappeared from view."

But what hope Chiari when the official minimalist yarn doesn't know quite where to place two composers who were present right from the start as community organisers and catalysts of big ideas - Terry Riley and La Monte Young? Reich's It's Gonna Rain had its first airing, not in New York as is often assumed, but at the San Francisco Tape Music Center at 321 Divisadero on January 27, 1965. Reich has recalled being anxious and depressed about his piece, which he fully expected would be thoroughly disliked before disappearing without trace.

Twelve weeks earlier, on November 4, 1964, Reich had been involved in the premiere of another composition that dealt with repeating modules of melody. Terry Riley's In C feels today like the unruly country cousin of the minimalism that would eventually turn up in New York. Riley handed his musicians fifty-three melodic fragments arranged on a single sheet of paper which they worked through in sequence, repeating each module at will as they zoned inside the unfolding heterophony of sound, letting instinct guide them forwards through the piece. Reich came up with the smart idea of having one musician set the pulse by repeating top C on a keyboard, a role he fulfilled in the first performance. Meanwhile, musicians including the saxophonist Jon Gibson, then a disciple of John Coltrane, and Pauline Oliveros (accordion) and Morton Subotnick (clarinet), who would later become known for their work with electronics, listened and felt their way through Riley's piece - ensemble music functioning in a way utterly alien to Western concert music.

But Riley, eighty this year, had no reason to organise his music after any European model. California born, a student of the Indian vocal master Pandit Pran Nath and an admirer of John Cage and John Coltrane, Riley stood in a necessarily very different relationship to the idea of a notated score, and to the sounds he wanted that score to generate. For Riley, notation was not just about reading. He expected his musicians to internalise his melodic modules to the extent that they could not only hear, but feel them. Essentially, they took ownership from Riley: In C built itself from their sensitive ensemble-listening as a network of overlapping conversations was triggered - questions and answers, no one allowed to dominate the floor or press their point of view too assertively.

Riley's point of compositional departure was his realisation that Indian music and Coltrane's modally anchored jazz, as stylistically distinct as they were, shared one common characteristic: cosmic rhythmic energy flowed over relatively static harmony. His friend La Monte Young - who as a college student had befriended Ornette Coleman and Eric Dolphy - became Riley's sounding board; but Young's own extraordinary path through music was already under way. Having flirted with twelve-tone technique and the conceptual, anti-art message of Fluxus (including works requiring a pianist to push his instrument through a wall and, elsewhere, to step inside the genitals of a whale), in 1960 Young's Composition 1960 Number 7 suggested a future. By instinct Young was a distiller and simplifier who had already created a proto-minimalism with his 1958 Trio For Strings, which slowed down to a crawl, and isolated, corners of twelve-tone rows. But the score of Composition 1960 Number 7 showed a B natural and an F sharp suspended on the stave, with the simple instruction: 'To be held for a very long time'.

And this is the moment, surely, that the minimalist seed was planted. That apparently simple instruction, though, was not as simple as it seemed. The perfect fifth encapsulated the most fundamental of all intervallic relationships - that between the tonic root and the dominant fifth - but how should that interval be tuned and, once sounded, what should you do next? Riley and Young came to a mutual understanding that equal-tempered tuning was an unpleasant and unnecessary evil: just intonation was their tuning system of choice, and much of the music they created during this period - the drones, the leisurely repetitions, the shamanistic intensity of colliding patterns - explores Riley's idea that 'Western music is fast because it's not in tune'. Composition 1960 Number 7 feels like the start of minimalism because all points - from Cage's 4'33", to Coltrane's modes and Indian drones - pass through. It was the brave new sound of tomorrow.

In New York, the city of Charles Mingus, Elliott Carter, Deborah Harry and Bob Dylan - who all wanted to intensify music, to pack musical structure with more event and polarities of emotion - Steve Reich and Philip Glass were peddling this new music that, to those looking in from the outside, fused Wagnerian length with the plainness of Erik Satie. Reich's Four Organs - one chord gradually augmented over a thirty-minute duration - caused a Rite Of Spring-style fracas at Carnegie Hall in 1973, while Glass's first recording of Music In Twelve Parts, made in 1975, is a whole sound world away from the slick and polished later recordings. The ensemble barely keeps to the equal-tempered straight and narrow. Saxophones and keyboards churn and wail; structural points of demarcation are pointier, more brutally cut.

This performance satisfies the same constructivist, cerebral pleasures as Stockhausen's Gruppen or Boulez's Structures. Minimalism, as Rzewski implies, was a modernism too.

And what of those composers Rzewski mentioned? The neglect of Terry Jennings feels especially inexplicable and unjust. A childhood friend of La Monte Young, Jennings wrote pieces that are typically understated, serene as they play coy games with tonality. The music of Howard Skempton and Laurence Crane is much indebted. Giuseppe Chiari, always on the conceptual margins of minimalism, kept faith in his ideas of music existing in a hinterland between sound and speech, vocal inflection being altered by carefully choreographed movements of the body.

At some point, the tendency to strip musical ideas back to minimal means turned into Minimalism: the genre. Reich and Glass, and later Adams, like to be liked, and the backstory of awkward tuning systems and the orgiastic counterpoint of In C was quietly abandoned as minimalism reigned supreme. For me, disillusionment set in when Reich's The Four Sections appeared on Nonesuch in 1990 and the project of transferring Reich's ideas onto an orchestral canvas (the London Symphony Orchestra under Michael Tilson Thomas) made little sense. Drumming and Music For 18 Musicians had reconfigured the relationship between harmony and structure. Tiny motivic ideas were made to swim in big ponds. Suddenly a music existed that questioned the certainties of going to a concert hall to hear perfectly formed twenty-minute pieces. The sound of Reich's ensemble - singers vocalising through microphones, the bebop rhythmic bounce of mallets against marimbas and glockenspiel - was lost within the weight of a mass of instruments designed to carry another sort of music.

Minimalism began to appear where you least expected it - in The Netherlands, where Louis Andriessen's music, derived independently of Reich and Glass, becomes known as 'Dutch minimalism', while 'holy minimalism', the devotional music of Tavener, Górecki and Pärt, becomes a commercial goer. We're all minimalists now.

The pulsating tonal shimmer of John Adams's orchestral music becomes the default soundworld of a whole generation of post-minimalist composers: Joseph Schwantner, Michael Torke, Carter Pann. A minimalist orthodoxy becomes as discernible as those suffocating post-serial tendencies that Reich and Glass are said to have rebelled against in the 1960s. The next generation of minimalist composers have themselves now reached comfortable middle age. Bang On A Can, which began in 1987 when three New York composers - Julia Wolfe, David Lang and Michael Gordon, then in their twenties - presented a marathon concert of new music inside a downtown art gallery that kicked off at two in the afternoon and finished twelve hours later, is the most direct descendant of the pioneering work of Reich and Glass.

Bang On A Can sounds like the name of a fabled rock album that somehow history never got around to recording. Wolfe once told me, "Early minimalist pieces developed over a slow trajectory and I sometimes think we took that idea and condensed it, perhaps re-energising Reich and Glass's ideas about rhythm with the grooves we'd heard on Jimi Hendrix or Earth, Wind and Fire records." But it's often forgotten that, before 'minimalism' became all invasive, 'process music' was the preferred term with which to identify the characteristic traits of Glass and Reich. Bang On A Can composers reinvestigated the processes of minimalism and found something fresh therein.

But minimalism has also become another way of doing music, another set of rules to be followed - a contemporary sound there for the taking. Is there another fifteen minutes of fame left to be plucked from the air? Will the humble and subservient arpeggio live to fight another day? Masterworks produced during the 1960s and '70s were restorative and optimistic - the stuff of sound cloud dreams - and the composerly instinct to investigate process, to put the theoretically rigorous together with the sensuously immediate, is the overriding legacy of the minimalist ideal. The music is wonderful; the lessons just as important.

ESSENTIAL MINIMALISM

Four recordings highlighting the range of minimalism.

PHILIP GLASS: How Now, Strung Out - This was our first sighting of Glass on record; indeed, this was the first-ever all-Glass concert, consisting of two raucous solo instrumental works - another side of minimalism.

STEVE REICH: Four Organs / Phase Patterns - Philip Glass is on organ, and Reich's Four Organs remains, depending on your point of view, his most fascinatingly didactic - or utterly infuriating - score.

TERRY RILEY: In C - Cut in New York in 1968, this first recording of In C remains striking in its exhilarating, trippy rawness.

BANG ON A CAN ALL-STARS: Big Beautiful Dark And Scary - To celebrate the All-Stars' twenty-fifth anniversary, here's music by the post-Reich and Glass generation: Lang, Gordon, Wolfe et al.




“They Basically Reset My Brain”


I was strapped to a stretcher in the back of an ambulance, still in full uniform. Shoulder pads, helmet — everything except for my face mask. The trainers had taken that off while I was still lying on the field in front of 81,000 people at Lambeau. When we got to the hospital, the first room the paramedics took me to was freezing cold and the walls looked all rough and unpainted.

This didn’t feel like no hospital.

It felt like a morgue.

I wanted to yell, “Where are y’all taking me?” But I couldn’t. I was still gasping for air. It had been maybe 30 minutes since I had taken a hit from — You know what? I don’t even remember the guy’s name. I just remember his jersey number: 39.

So it had been maybe 30 minutes since I went across the middle and number 39 from the Browns hit me on the crown of my helmet, and I still couldn’t breathe. I couldn’t move my neck. I couldn’t feel my fingers or toes. I was numb.

For a second I thought, Maybe this IS the morgue … maybe I’m actually dead.

Then a paramedic said something like, “Hang tight, Mr. Finley. This is just a detour. We’re taking you up to the ICU.” 

I guess the media and fans knew which hospital I’d be going to — because Green Bay is such a small town — and some of them beat me there. The paramedics wanted to avoid the crowd at the emergency entrance, so they took me in through the basement.

After they got me up to the ICU, they cut my pants and jersey off with a pair of scissors, and then they brought in this machine — I don’t know what it was, but it was loud — to cut the shell of my helmet and shoulder pads off my body. And then I was just lying there, still stinking from the game, straight nude with a white sheet over me, like a dead body. I had no idea how severe my injury was at the time, but as scared as I was, I just kept thinking …

I’m gonna come back from this.

Four weeks earlier, in Cincinnati, I had suffered a concussion against the Bengals. I was running a route up the seam and when I stretched out for the ball, a safety came down and popped me while another guy rolled up over me from behind. I took a knee to my head, then my head hit the ground.

When I stood up, my body felt like it was on fire and everything looked blurry, like I was underwater. I looked to our sideline, and all I could see was my teammates’ yellow pants. No feet, no jerseys, no heads. Just bright yellow pants. It was like everybody had been decapitated. I tried to walk towards them, but I only made it a few steps before I went back to the ground. The trainers came out and helped me off the field, took me into the locker room, diagnosed me with a concussion and then took my helmet away. I was done for the day.

By the time I started to feel normal again and got to my phone, I had like 30 missed calls from my wife, Courtney. She and my son Kaydon, who was five at the time, had been watching the game at home on TV. I called her back and told her I was O.K., and then she put Kaydon on the phone.

“Daddy,” he said. “I don’t want you to play football anymore.”

I pictured my five-year-old son watching me on TV stumbling around, not even able to walk off the field under my own power. I pictured him crying to his mom, asking if his daddy was gonna be O.K. The whole thing hit me pretty hard.

But I’m a football player. So after the bye, which was the following week, I was back on the field. Nothing was gonna stop me from coming back — not even a plea from my son.

I’m not saying that to sound tough. That’s just how football players are wired. When you’re in the NFL, you sacrifice everything to play the game. Then, when you’re on the ground after a big hit and you can’t move, you think, Why do I do this to myself?

But then you get treatment, your body heals, and you get right back out there.

Right back to sacrificing everything.

Most guys don’t get to decide for themselves when they’re done with the game. The game lets you know when it’s done with you.

Jermichael Finley

I suffered five concussions during my football career. The first came when I was in college at Texas, then I had a couple early in my NFL career that were kind of — I wouldn’t say minor… you just wouldn’t have been able to tell by watching the tape. They kind of went under the radar. I suffered a fourth in 2012 during training camp, and the one against the Bengals in 2013 made five.

But it wasn’t even a concussion that put me in the ICU that day. When it happened, I thought it was just a stinger — one of those hits where you get popped and everything goes numb, or you get a tingling sensation … like that pins-and-needles feeling when one of your limbs falls asleep and the feeling starts coming back.

We call it “getting your bell rung.”

It usually goes away just in time for the next play.

So that’s what I thought had happened. It’s not like the guy blew me up or anything. It wasn’t a huge collision. I just saw number 39 coming and put my head down to protect my knees, and he just caught me on the crown of my helmet.

Immediately after the hit, I was conscious, but I let go of the ball because my hands stopped working. I lay on the ground because my legs went numb.

The official diagnosis was a spinal cord contusion. The hit shocked my spine and left me with a two-centimeter bruise on my spinal cord. A couple of weeks later, I had surgery to fuse together the C-3 and C-4 vertebrae in my neck, and after about six months of rehab, I was cleared by my doctor to resume football activities.

I had a $10 million disability insurance policy in place — thanks to my agent, who talked me into getting one. So if I wasn’t able to play football again because of my neck injury, I could collect $10 million, tax-free. That was more than I would get in guaranteed money if I signed a new contract, especially coming off a serious neck injury.

But the game of football was like an addiction. I worked my butt off to come back, even though I knew that if I signed with a team and stayed for a certain number of days — I think it was around 14 days, or two games’ worth — I would become ineligible to collect the $10 million in insurance money. And if I did that, there was a very real possibility that I could reinjure myself the next day, get cut and wind up with nothing. I was basically one hit away from never walking again.

But all I wanted to do was play football.

I had teams interested, too. The Patriots, Giants, Steelers, Seahawks — I had options. Good options.

I was basically one hit away from never walking again.

The first team I worked out for was the Seahawks, because John Schneider, their general manager, had been in Green Bay with me when I came into the league. He really wanted me, and the Seahawks were ready to offer up a pretty lucrative deal. But I failed my physical. The Seahawks doctors said that my neck hadn’t fully healed yet, so John told me they couldn’t sign me.

Here’s the thing about failing a physical: It follows you around the league. After you fail one, a lot of teams won’t even bring you in because they’ll see it as a waste of time — that if I wasn’t healthy enough for the Seahawks, I wasn’t going to be healthy enough for their team, either.

So I waited — for my neck to heal, and for the phone to ring. I made some visits, but training camp came and went, and I still hadn’t been signed. The weeks went by, and nobody called.

Then Week 7 came around. It was October 19, 2014. A Sunday. I was watching football on TV. Normally, I would be wishing I was out there, thinking I should be out there. But that day, I wasn’t. It had been a whole year since my injury, and something just kind of came over me that day. I was sitting at home, completely content and comfortable, and I said to myself, O.K. I’m done with it. I’m gonna retire. So I made it official.

I eventually collected the $10 million from my insurance policy, basically setting myself and my family up for life, which was a blessing. But I was saying goodbye to something I had worked my whole life for. It’s not how I wanted to go out, but most guys don’t get to decide for themselves when they’re done with the game.

The game lets you know when it’s done with you.

I started this article by talking about concussions and the injury that ended my career mostly because if you followed my career or knew anything about me, that’s probably where my story left off for you — that moment when I was taken off the field on that stretcher and I was no longer available for your fantasy team.

But that was just Part 1.

Part 2 is everything that happened next.

That’s the story I’m really here to tell.

I had already seen my personality start to change after the neck injury, but after I officially retired, it got even worse. I would wake up every morning grumpy and agitated. I became really quiet. Sometimes Courtney would try to talk to me and I would just get irritated. Every now and then I’d snap at her, but most of the time I would just walk away or take a long drive. I wasn’t angry, I was just … awkward. Even with people around town. It was like I forgot how to talk to people.

Courtney would ask, “What’s going on with you?” And I would just walk away. Mostly because I didn’t have an answer. I had no idea, no reason, no why.

It was like I didn’t quite know myself.

I started driving around a lot. It was an easy way for me to be alone, and it helped me avoid those awkward interactions with people.

Then one day, I went out to my truck to go for a drive, and I had to go back inside because I forgot my keys. That started happening a lot. It got to the point where sometimes I’d have to go back inside two or three times because I had forgotten my keys, then my phone, then my wallet. Some days, when I’d go to pick my kids up from school, I’d get halfway to their school and have to turn around because I forgot to put the car seat in the car, even after Courtney had reminded me.

One night, we went out to dinner, and after I paid the bill and walked out of the restaurant, Courtney came up behind me with my wallet in her hand, waving it at me. I guess I had left it on the table.

“This is the third time I’ve had to pick up after you today,” she said. “What’s going on?”

I wasn’t gonna call somebody for help. I was gonna own my shit and work it out myself.

I didn’t have an answer for her.

Over time, I grew more isolated and more distant from Courtney and the kids — from everybody, really — while I tried to sort myself out and deal with whatever the hell was going on with me. I thought a lot about the way I was acting, and I just chalked it all up to being fresh out of the NFL and not being in tune with “real life” yet. I figured I was just in a funk because I missed the game, and everything that came along with it. I’d snap out of it … right? I mean, I loved the game of football, and now it was gone.

I missed the adrenaline rush you get from playing the game and all that, but more than anything, I think I just felt abandoned. I played in the NFL for six years, and every day I was being evaluated. Coaches, trainers, teammates, fans — I was constantly being judged for my performance on Sundays and during the week in the classroom and weight room.

Now, all of a sudden, nobody was paying attention.

I started working out three, sometimes four times a day just to validate myself. But I also craved that outside validation, and it just wasn’t there. Because nobody was watching anymore.

That’s a loneliness that’s tough to describe.

I basically lost my closest friends, too. Sometimes I’d call up one of my old teammates on a Friday night like, “Hey, what are you up to?”

“Uhh … we got a game Sunday, so … film study, game prep … you know.”

Then they’d ask me what I’m up to, and I’d have to figure out a way to sound like I’m busy — like I had something going on.

But I didn’t.

Most of the time, I was just trying to find a way to slow my dadgum brain down. I was used to waking up every morning and having a schedule and a ton of stuff to do. Now, I had nothing. My days were wide open, and I had to try to find a way to fill them.

So I was depressed because I felt like my identity as a football player had been taken away from me, I was lonely because I felt abandoned by the game and my friends, and I had anxiety because my entire future felt like an empty calendar that I had to fill up somehow.

I was 27 years old and I was retired.

I had no idea what to do with myself.

I thought these feelings explained my behavior, my attitude — even my forgetfulness. And the same way I would have never run to a trainer after a big hit and asked to be checked out for a concussion, I wasn’t gonna call somebody for help. I was gonna own my shit and work it out myself.

Suck it up and get back out there, you know?

I honestly believe that if it weren’t for my wife and my kids, 10 years from now, I might have ended up one of those former players who put a bullet in his chest.

Jermichael Finley

It was Courtney who finally gave me my wake-up call. Somebody had told us about a clinic out in California that some former players had been going to. It was like a neurological clinic where guys who were struggling or having some issues were going to get their heads checked out and do tests and get therapy and all that stuff.

I thought, Clinic? I don’t need no clinic. I got this under control.

But I didn’t. And Courtney got pretty frustrated with me when I told her I wasn’t going to go. She pushed me and pushed me, but I wouldn’t budge.

Then, one morning, she started talking to me about how distant I had been. She said to me, “Jermichael … I need you here.” Normally, I would have walked away or gone for a drive, like we had gotten used to me doing. But she had this pain in her face. I can’t really describe it … but it got my attention.

Then, for the first time, she said something that made me snap out of whatever funk I had been in.

“You need to be here for the boys.”

That hit me pretty hard. I don’t know why, but I started thinking about some of the former NFL players I had heard about who were really hurting — you know, the older guys — and I wondered if the stuff I was going through was just the beginning. I wondered if it was going to get worse over time, like it probably did for them. The mood swings. The memory loss. That wasn’t the way I wanted to live, and that wasn’t how I wanted my three boys to see me.

Then I thought about the former players who had committed suicide.

Was that in my future?

That’s when I finally caved.

“I’ll do it,” I said. “I’ll go to the clinic.”

I thought the clinic would be like a hospital or something, or maybe someplace cold and unpainted like the hospital basement the paramedics took me through back in Green Bay. But when I got there, it was totally different. It felt like a resort. The building was really nice and there were palm trees everywhere — you know, it was California. It felt like I was on vacation. And when the staff greeted me at the door, it was like they were so excited to have me there. Everybody made me feel so welcome. They made me feel … safe.

But they also let me know right away that I was there to work.

They interviewed me about all the big hits I had taken in my career — the stingers, the knockouts, the neck injury. Then they hooked me up to this machine that monitors the activity in the brain to figure out which parts of my brain were working and which weren’t working as well as they should. They basically made a map of my brain.

The doctors told me that when you get a concussion — and remember, I had five — it can have long-lasting effects on the way certain areas of the brain work. By looking at the map of my brain, they identified those areas, and then they put me on a program that stimulated them to improve brain function. They basically reset my brain to get the different parts working together again the way they’re supposed to.

Just having that discussion with them was huge for me. I had been thinking that this whole thing was something that would pass over time. That it was temporary. I’d figure it out.

Now, for the first time, somebody was telling me, This isn’t your fault. Something is doing this to you. And we can fix it.

They had all kinds of neuro training exercises and routines they put me through, but a lot of it was centered around meditation and intense emotional therapy sessions. The exercises and therapy were to stimulate the parts of my brain that were running slow, and the meditation was to slow down the parts of my brain that were going a mile a minute.

It all just brought me back to center.

I spent 30 days in that clinic, then I went home to Texas to be with Courtney for the birth of our fourth child — another boy. I sat in a hospital room with a brand-new baby in my arms, and I felt like a brand-new man myself. I was sleeping better. I wasn’t so irritable. I was talking to people again. I wasn’t forgetting my dadgum wallet at the restaurant anymore.

I was back to my old self … maybe even better.

As part of the neuro training, the doctors needed to find something that stimulated me — something I got excited about the way I got excited about football.

We determined that that something was coaching.

So today, I put on football camps and work with kids in the small town of Aledo, Texas, where I live, and I work with my own boys, coaching them up, too. I’m obsessed with my family now. I finally came to the realization that my relationship with the NFL was temporary, but my relationship with my wife and kids is permanent. It’s forever.

And I honestly believe that if it weren’t for my wife and my kids, I never would have gotten any kind of help, and 10 years from now, I might have ended up one of those former players who put a bullet in his chest.

That’s the path I was on.

I think that’s why I want to share my story. I bet there are a lot of guys who are either fresh out of the league or who’ve been out for a while who are experiencing a lot of the same things I did. And if there are, they’re probably thinking they can handle it themselves, like I thought I could. Because football players don’t tell people when they’re hurting. That’s just not how we’re wired. We suck it up and get back out there.

But don’t fall into that trap.

Today, I’m 30 years old and I still meditate daily, and every now and then I go back to the clinic for a mental tune-up, or just to check in. I’m as happy and healthy as I’ve ever been, and it’s all because I was able to swallow my pride and go get help.

We give so much to the game of football. But whether you walk away on your own terms or you’re forced to give it up like I was, don’t let the game take away the rest of your life once it’s gone. There’s no shame in asking for help. In fact, it might be the bravest thing you could do.

The scourge of web analytics

I hate the web.

I’ve been making web apps since 2003, which means that I’ve been doing this for fourteen years now, or it means that I can’t count. So, there are few people more qualified than me to tell you this:

The web is shit.

If you disagree with the above statement, it's probably because you spent more than $1,000, less than two years ago, on the device you're currently reading this on, so websites feel fast to you. There are many factors that make the web shit, but today I'd like to talk about one of them:

Web analytics.

A brief retrospective

The web was created in, like, the nineties, and was initially envisioned as a lightweight system to share text and cat photos. So far we’ve achieved only one of those goals.

As the web started to develop, and more and more people started to join in, more and more people started to produce and consume content. Naturally, human avarice and vanity led to people wanting to know how popular their content was, which, in turn, led to people creating and using those awful counters that showed you how many visitors had been to your website, and reminding you that it was still the nineties.

Those counters were the granddaddy of modern analytics. As the number of people with money to spend on the web grew, so did the desire to make them spend more money, and thus the need to manipulate them more effectively into parting with their gold. A central tenet of psychological manipulation is “know thy victim”, as manipulation gets easier the more data about the target you have, so analytics systems kept becoming more and more sophisticated to extract more and more money from unsuspecting users.

Finally, we are now at the point where the web has somnambulated into being a full-blown application delivery platform, except both the “application” and the “delivery” parts are hacked together with chicken wire and duct tape, and analytics are the cherry on top.

The current state of affairs

The current state of affairs is as encouraging as you’d expect after having read the previous paragraph, i.e. not at all. Almost all analytics software tries to extract as much information about the user as possible, in the hopes that, at some point, an intern will stumble upon a meaningful correlation between a user’s mouse cursor color and a preference for chicken nuggets. This information is retained indefinitely, resulting in a privacy nightmare for users, who end up with their cursor color and complete browsing histories in the hands of unscrupulous third-party vendors.

Have you ever noticed how much Javascript a Twitter button loads? Have you ever wondered why a Twitter button needs to load an entire iframe in order to display what is literally equivalent to an image with a link? It’s because every time you load a page that has a button or element from Twitter, Facebook, YouTube, Google, etc, that company makes a note of which page you visited, when, who you are, what your browser was, where you were when you visited, etc. Facebook literally has a list of sites that you, John Smith, visited, even if you visited them completely outside Facebook, without clicking on any Facebook posts or links at all, just because there was a “Like” button on them.

Yes, Twitter knows all the porn you ever watched, because you were logged into Twitter and YouPorn has a “Tweet” button on its pages that does pretty much exactly what this link does, minus the tracking of your fetishes.

Not only that, but it slows webpages to a crawl. Websites load thousands of lines of code, which takes many seconds to load and takes up a bunch of your memory to compute, just to send your personal information back to the site. Just take a look at how much data The Verge loads just for trackers and spyware, and that’s the rule, not the exception.

Countermeasures

Due to this gross invasion of privacy, users have begun using ad blockers en masse, with some browsers shipping with ad blockers by default. These ad blockers range from incidentally blocking analytics trackers to specifically blocking only privacy-violating trackers, and doing a very good job at it, too.

This arms race means that the more data advertisers want to collect about the users, the less inclined the users are to tolerate it, and the less data the advertisers end up getting. This, of course, makes analytics less accurate, since they underestimate the number of people who visit websites, to the dismay of everyone.

Most blockers (mainly ad blockers) work by preventing connections to various well-known ad-serving domains, but Privacy Badger works by specifically blocking only services that track you (as it can accurately tell which service is tracking you). Users nowadays run a combination of both, as they provide great benefits: They get rid of ads, increase privacy, and make websites much, much faster. Why wouldn’t someone use them?

Solutions

What sort of cranky curmudgeon would I be if I didn’t offer any solutions to the problem, after such a long-winded introduction? The right kind, of course, but I’m going to talk about some solutions anyway.

One good aspect of the situation is that incentives are somewhat aligned. By virtue of using the service, a user gives a minimum amount of information (which page was accessed, where the user came from, where the user is roughly located), and the publisher can use that to perform some analysis. All that's necessary is for the publisher to approach the problem the right way.

Server-side log analysis

A better alternative to client-side analytics (i.e. loading a small (but in reality quite large) piece of code into the user’s webpage to get as much information about the user as possible) is server-side analytics. Server-side analytics only uses analysis of the actions that the user has taken, and does not inject any tracking code into webpages. This makes webpages smaller, faster, lighter, and is not as intrusive to the user’s privacy.

The downside to the publisher is, of course, that some aspects of tracking are less effective. In particular, since server-side analytics do not inject any tracking code, they are less effective at detecting whether a user has just come to the website for the first time or whether they are returning to it. On the other hand, they are more accurate on the number of visitors.
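As a minimal sketch of what server-side analytics means in practice, the snippet below parses webserver log lines in the standard nginx/Apache "combined" format and counts page views per path. The bot filter and field handling are simplifying assumptions for illustration, not how GoAccess or Piwik actually work:

```python
import re
from collections import Counter

# Pattern for the standard "combined" access-log format:
# ip - - [time] "METHOD path proto" status bytes "referrer" "user-agent"
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def page_views(lines):
    """Count successful GET requests per path, skipping obvious bots."""
    views = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # malformed line; a real tool would log and move on
        if m['method'] != 'GET' or not m['status'].startswith('2'):
            continue
        if 'bot' in m['agent'].lower():
            continue  # naive bot filter, purely illustrative
        views[m['path']] += 1
    return views

sample = [
    '1.2.3.4 - - [26/May/2017:10:00:00 +0000] "GET /blog/ HTTP/1.1" 200 5123 "-" "Mozilla/5.0"',
    '1.2.3.5 - - [26/May/2017:10:00:01 +0000] "GET /blog/ HTTP/1.1" 200 5123 "-" "Googlebot/2.1"',
    '1.2.3.6 - - [26/May/2017:10:00:02 +0000] "GET /about/ HTTP/1.1" 200 900 "-" "Mozilla/5.0"',
]
print(page_views(sample))  # Counter({'/blog/': 1, '/about/': 1})
```

The point is that everything here comes from data the server already has; nothing was injected into the user's page to collect it.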

GoAccess

The best (and pretty much only?) such software I’ve found is GoAccess. GoAccess is a fantastic piece of software that analyzes your webserver log entries very quickly and stores them in its database, either generating HTML reports that you can open in a browser, or displaying reports in a curses window in the console. The latter mode is a bit hardcore, but the former is very detailed and very readable.

Of course, the report won’t contain things like screen resolutions, click heatmaps, or scroll patterns, but it’s fast, can give you statistics from before it was installed, and, more importantly, incurs no additional overhead in page load times or performance for the user. It also does all the usual weeding out of bots, detects return visitors, and generally provides a good, if basic, analytics experience, which may not be suitable for publishers with dedicated marketing teams, but is more than adequate for the average website.

Another benefit for the publisher is that, because it analyzes HTTP requests, it cannot be blocked by the user, so its reports are as accurate as possible. Personally, I have seen around a 30% discrepancy between Google Analytics and GoAccess (Google Analytics reports 30% fewer visits than GoAccess), which I assume is due to the high usage of ad blockers among the people that read my website.

If you have a website that you want to use analytics on, you’d ideally add GoAccess to your crontab or logrotate config, and it would read each log as it is being rotated and add them to its database, so you could view accurate historical data for any point in time. Unfortunately, the cron method doesn’t work properly right now (although the logrotate method should), because of an open issue which has GoAccess double-count entries if you run the import twice.
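For illustration, a crontab entry along these lines would simply regenerate the full report nightly from the current log; the paths and flags are assumptions you'd check against your own GoAccess version and webserver setup:

```shell
# Hypothetical crontab entry: rebuild the GoAccess HTML report every
# night at 04:00. Paths and flags are assumptions -- check `man goaccess`
# for your installed version.
0 4 * * * goaccess /var/log/nginx/access.log --log-format=COMBINED \
    -o /var/www/html/report.html
```

Because this re-reads the whole log each run instead of incrementally importing into GoAccess's database, it sidesteps the double-counting issue, at the cost of only covering whatever your log retention window holds.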

Generally, though, GoAccess is a great alternative to client-side tracking, and I switched to it exclusively (until I changed hosts and that became impossible, cough).

Piwik

Piwik isn’t like GoAccess, in that it’s not exclusively server-side. It’s more of an open-source, self-hosted Google Analytics alternative, but it has various modes of operation. It includes a log analyzer, but you can also set it to serve a “tracking pixel”, which is a small image that gathers basically the same information as log analysis, and similarly cannot be easily blocked if served from the domain of the site.

Piwik does also include a full-blown JS tracker, but I don’t recommend that for the reasons stated above. Even that, though, is better than Google Analytics, as the data stays with the publisher and doesn’t get shared with Google, only to stay with them forever and ever.
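As a sketch, a tracking pixel is nothing more than an image request served from your own domain; something like the following, where the domain and `idsite` value are placeholders for your own installation:

```html
<!-- Hypothetical Piwik image tracker: a single GET request to your own
     domain, no JavaScript. "analytics.example.com" and idsite=1 are
     placeholders. -->
<img src="https://analytics.example.com/piwik.php?idsite=1&amp;rec=1"
     style="border:0" alt="" />
```

Since the request goes to the publisher's own domain, domain-based blockers have nothing distinctive to match against, which is why this approach is hard to block without also breaking the site's own images.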

Social button alternatives

I find the social buttons particularly egregious, especially Twitter’s and Google+’s (good thing nobody ever uses the latter, at least), because they do nothing for the user that a simple link wouldn’t. Publishers like them because they make it easy for people to share the content, which makes the latter visible to more people, which means more viewers, which means more ads, which means more money. That’s why they charge a hefty fee for their use, not in currency but in tracking and slowness.

Here you’ll notice my hypocrisy in having social buttons at the bottom of this post, but you may also notice that they aren’t actually social buttons, and yours don’t have to be, either! They’re just static images, each with a link to a URL that will allow you to share this article on each service.

If you want to replace the Twitter, Facebook and Google+ buttons on your site with non-tracking, lightweight alternatives, here’s how you can do that. Just use the following HTML and the result will be almost indistinguishable (except your site will be quite a bit faster).

For the Twitter button, a simple link to the URL below is enough:

<a href="https://twitter.com/intent/tweet?text=Look,+ma!+No+tracking!&url=https%3A//www.stavros.io/&via=stavros" target="new">Twitter button</a>

Same for Google+:

<a href="https://plusone.google.com/_/+1/confirm?hl=en&amp;url=https%3A//www.stavros.io/" target="new">G+ button</a>

And Facebook:

<a href="https://www.facebook.com/sharer/sharer.php?u=https%3A//www.stavros.io/" target="new">Facebook share button</a>

These load almost instantly, use almost no extra memory or CPU and leak no information about your users to any third-party site.

EDIT: User nachtigall on the Hacker News comments thread gave me a great tip: Shariff replaces your social buttons with simple, non-tracking links, much like my code above. I will integrate that into this website as soon as possible, as it’s a fantastic idea. User Raphmedia linked me to SocialSharePrivacy, which only loads the sharing scripts on demand, preventing them from tracking the user unless specifically used.

Epilogue

As you’ve surmised, I’m very disappointed by the direction the web has taken. However, that’s only because I think things don’t have to be this way, and I’m happy to see that things are changing for the better. Ad blockers are forcing publishers to rethink their business models and to reduce the amount of tracking they do, browsers implement features that make it harder to violate users’ privacy, and frontend tooling is getting better and making it easier for developers to create lighter websites.

I think things could be better, though, which is why I’m writing this, and why I’m trying to create an informational site with resources for making the web lighter. There’s no reason why small websites should embed Google Analytics, only to use 3% of its functionality, or use the bloated Disqus comments when there exist lightweight, open-source alternatives like Isso, which I’m using here. I would encourage you to get rid of Google Analytics and use one of the alternatives above, if you can. For event tracking, doing it on the server is much better for everyone, as the user’s experience isn’t affected by it at all, and you usually don’t need any of the extra information anyway.

If you have any recommendations or agree/disagree with what I’ve written, please leave a comment or tweet to me. I’m especially interested to hear if you like and have switched to any of the alternatives above.

Thanks!

Giphy plans to build a real business

Plenty of tech startups dream of building a new consumer brand that's used and recognized by hundreds of millions of people every day.

Few of them get as close as Giphy, the four-year-old GIF search engine that's raised $150 million in funding to date at a $600 million valuation.

Giphy's mission is to evangelize and proliferate the world of GIFs — those micro-videos and animations that replay continuously in an endless loop and have become a standard means for expressing humor, shock or affection in news feeds and message threads across the world.

What started as a simple web crawler for finding GIFs now serves more than two billion of the auto-looping clips every day to more than 150 million daily users. The company recently started experimenting with standalone apps like Giphy Cam, which lets people create their own silly GIFs in seconds.

Giphy isn't profitable yet. In fact, the company doesn't even have a reliable means of generating revenue at this point. But now that GIFs are an ingrained aspect of online behavior, the company is hard at work drafting a blueprint to turn its popular service into a money-making business. 

CEO and cofounder Alex Chung tells Business Insider that his 70-person team is kicking around "over a dozen different business models" that it may implement. Central to the effort is Giphy's move to evolve from being a search engine for GIFs into a hub for what Chung calls "micro-entertainment."

“We are a platform for everything short-form, from communication to entertainment," Chung said during a recent interview at Giphy's newly-opened headquarters in New York City's trendy Chelsea neighborhood. "The future model is going to be jumping between the two.”

More than just the Google for GIFs

Giphy started four years ago as a side project of Chung's while he was a hacker in residence at New York startup incubator Betaworks.

By scraping sites like Tumblr for GIFs, he quickly realized that there were few quality GIFs on the internet, and most of them were low resolution.

“It’s like if Google had indexed the internet and found out there were only a few webpages," he said. “Most of them were pretty much garbage. There was a bunch of not-safe-for-work stuff.”

So he started quickly building a team that could chop down all kinds of content, from TV shows to sports games, into GIFs. Now Giphy licenses content from a wide swath of content providers, including HBO, the NFL, and CBS. Last year, it opened a production studio in Los Angeles to make its own GIFs and GIFs for outside partners.

Giphy's natural habitat remains messaging, as GIFs ricochet across platforms like Apple's iMessage and Slack, the work chat app that has integrated Giphy functionality. Search the word "hungry" in Google and you'll see bland dictionary definitions and famine reports. But "hungry" is one of Giphy's top search terms.

“We kind of branded expression search," said Chung. "No one even thought about searching expression, it wasn’t a thing.”

Slice and dice

After Giphy.com started adding standalone pages for events like New York Fashion Week and shows like "South Park," Chung and his team noticed another behavior. People weren't just coming to Giphy to find a GIF and leave. They were coming to be entertained.

Giphy CEO and co-founder Alex Chung. (Photo: Getty)

Now 50% of the visitors to Giphy's website are coming to just browse and watch GIFs, according to Chung. And people watch more than 4 million hours of GIFs through Giphy every day.

“These are people who are coming to us to just look at entertainment, TV, celebrities," he said. "They’re sitting and watching and spending hours just combing through the site.”

What happens if Google wakes up to GIFs and decides to do a better job of featuring the mini-clips within its search page? Chung said he's not worried. 

"We’re years ahead of everyone and we have the brand and partnerships. We’re the Google here," he said.

The company has also made strides in producing GIFs, saving money and time by developing what is effectively a GIF factory that churns out a steady stream of self-looping clips. Every episode of the popular Netflix series "Gilmore Girls", for example, is pumped into an array of PC rigs which dice and tag the content into thousands of GIFs. Each of these GIFs redirects to Netflix's website when a viewer taps.

Make GIFs first, make money later

One thing Giphy hasn't figured out yet is how to make money. But after raising $72 million in additional venture capital funding last fall, monetization is being talked about more seriously internally.

“It's definitely something that’s become more of a priority at the company," said investor Spencer Lazar, who led General Catalyst's participation in Giphy's Series B, C, and D rounds of funding.

“Anyone with a huge network of engaged users who are searching for things has an opportunity to build a business on top of that," said David Rosenberg, who leads Giphy's business development efforts. "Exactly how we slice it, that’s what we’re thinking about now."

There are the obvious ways Giphy could monetize: ads in search, sponsored GIFs, and licensing deals with content providers like Netflix that agree to have their shows sliced into millions of tiny GIFs. Giphy has already experimented with creating sponsored GIFs — last year it made a GIF ad for the NBC show "Superstore" that was displayed on a giant screen in the World Trade Center. 

“It’s not like we’re allergic to the notion of taking money," assured Rosenberg.

But Giphy is still very much in the try-everything-and-see-what-sticks phase of its growth. Last year it acquired the sticker messaging app Imoji, which it turned into an animated sticker app that lets you place GIFs on top of stickers. A software development kit that's in the works will allow developers of all sizes to quickly integrate Giphy's search engine into their apps.

The Giphy Cam app lets you create goofy GIF captions out of what you say aloud. (Image: Giphy)

When Facebook debuted its new camera interface and augmented reality platform at its annual developer conference last month, Giphy was one of the first outside partners. Its app Giphy Says can create looping GIF thought bubbles with captions based on what you speak into your phone's camera.

None of Giphy's standalone apps have been commercial hits. Giphy Cam, for example, hasn't ranked in the App Store's charts since it debuted in October 2015, according to analytics firm App Annie.

But Giphy maintains that its many experiments are just that: experiments intended to inform the company's overall direction.

“We can test to see if these products are interesting enough to put into the main product," said Chung, referencing the search engine. "And if it’s good, we’ll bake it in.”

“You do not get to build the massive business that we’re going to build without being maniacal about user experience and product for years and years," said Rosenberg. “I think we have a meaningful chance of being the next American tech consumer company that your grandma hears about.”

For Lightspeed Ventures partner Jeremy Liew, Giphy will succeed because of how it's become almost synonymous with the word GIF. Liew said he invested in Giphy for the same reason he invested early in Snapchat: both are about making communication more visual and expressive.

“If you become part of popular culture, you always figure out a way to make money," he said.


Neil Hunt on Netflix and the Story of Netflix Streaming

Neil Hunt, Chief Product Officer of Netflix

Summary

For several months now, I’ve been complaining on Twitter and in a bunch of other places that, for as ubiquitous as Netflix streaming has become—I think it’s one of the most important technology products of the last decade, at least—there’s actually been comparatively little journalism or scholarship about how the product came about. That’s why I was delighted to get acquainted with Neil Hunt, who is the Chief Product Officer at Netflix. Since he’s been at Netflix since 1999, not only is he the perfect person to tell us how Netflix streaming came about (the technical hurdles, the strategic decisions, etc.), but he can also give us the whole history of Netflix, from basically the very beginning.

Transcription

The following is a full transcription of this conversation with Netflix’s Neil Hunt. Scroll to the bottom for full audio download options.

Brian: Neil Hunt, thanks for coming on the Internet History Podcast.

Neil: Brian, thanks for having me on your show.

Brian: I always like to begin with educational background in a really, really general way, but I see from your CV that you went ahead and got a full PhD in Computer Science. So I’m wondering, was your intention…earlier in your career, were you gonna be an academic or a researcher?

Neil: I sort of found myself attracted to that, yeah. Definitely, academia was a good way to go. Actually, I found myself pretty attracted to university life, in general. It was a good way to prolong that, but that’s not the way it played out.

Brian: So the first few jobs when you came out of university, were they generally in, like, research labs and things like that?

Neil: Yeah. I managed to overlap those in a fairly unusual way. I got my BSc at the University of Durham, and then I stayed on there to do a PhD. And once I finished the coursework, my professor ended up signing me to a summer internship at a lab in Palo Alto called Schlumberger Palo Alto Research. And it was very much doing work that was aligned with the computer vision and image processing research that I was doing at…well, had by then become the University of Aberdeen because my professor had moved. And so, I got to spend a summer, which turned into five years at Schlumberger, and I did eventually graduate. It took a little longer than it should’ve done, but I came out with my PhD eventually in, whenever that was, 1999. No, 1991, sorry.

Brian: Nineteen ninety one, but…

Neil: I came out eventually. Would’ve graduated in 1989 when Schlumberger dissolved their lab.

Brian: But when you were at Schlumberger, that’s generally in the what, the mid-’80s, early ’80s?

Neil: Yeah, 1984 for a bunch of years. That’s right, yeah.

Brian: Well, the reason that I ask that…

Neil: Schlumberger, of course, was an oil field services company, so then, in the ’80s, they were getting very deeply into computer and microelectronic stuff and so they bought a bunch of companies. And remember, they bought Fairchild. This was actually the Fairchild AI Lab, and, you know, it became a Schlumberger Palo Alto lab about the day before I joined, in fact.

Brian: And I bring that up because, is it correct that you met Reed Hastings at Schlumberger?

Neil: That’s correct, yeah. He was there in a different position, but we crossed paths from time to time. And that formed a connection that he would come to tap a little later on.

Brian: Right. So moving forward a bit in time, can you tell me about Pure Software, what it was, and how you got involved in what you did there?

Neil: Yeah. After the Schlumberger Lab had dissolved, people were spun out to all kinds of different companies, and I went to join [inaudible] Research Lab. And I was working on productizing some of my research, making a software tool, and Reed contacted me. He had built a prototype of a software checking and validation tool he thought might be useful. And he brought it over and I tried it out, and it was useful. It found a bunch of issues and challenges. And shortly afterwards, he invited me to join his new company and productize that offering. And then that was Pure Software and the tool was Purify, which was a C and C++ error checking tool.

Brian: So to be clear, Reed founded Pure and was the CEO?

Neil: Correct, that’s right.

Brian: So this is now into the early ’90s?

Neil: Yes, it’s 1991. And Pure Software quickly became profitable on that [inaudible] tool Purify, and then we added a variety of additional tools. The software tool space was going through a phase of consolidation at that point, and so there were a bunch of different mergers. Oh, by the way, we became Pure Atria Software, and then we became Rational Software and a big suite of software development tools off of Rational, of which Purify was one, and that was kind of the story there.

At that point, Reed moved on. I moved, temporarily, to Boston, which probably wasn’t my long-term future. And in that intervening year, Reed and a couple of other folk, Marc Randolph and a few folk, started putting Netflix together. And in 1998, he approached me about joining Netflix and moving the engineering effort there. And so we came and talked, and ended up…I joined Netflix in early 1999 when it was still a very nascent business and there were lots of things still to be figured out in the future.

Brian: To the best of your recollection, I know it’s a long time ago, but do you remember what you thought of the idea of Netflix when you first heard about it?

Neil: It clearly had some big ambitions, but at the time, the goal was to start building a business around shipping DVDs through the mail. And DVDs, you remember, were only just introduced, so there was a very small install base of DVD players in the country and the world. And shipping by mail was an unusual proposition, to be sure, so it was quite a speculative venture.

Brian: So when you do sign on, what is your job description, or what do you work on first? What do you join the company to do?

Neil: My job description was VP of Internet Engineering. And the first mission, perhaps, was to solve the year 2000 problem, which was upon us at that time. And even though this was a brand new company, we were able to find a bunch of places where two digit year codes would’ve rolled over backwards and caused all kinds of trouble. In the end, 11 months later, we didn’t have a year 2000 problem, but it was certainly the very first thing dumped on my plate on day one.

Brian: So the website, when you got there, had already been launched live to the public?

Neil: Yes, it had. That’s right. At that point, it was a service offering transactions a la carte, rentals of DVDs for a period of a week or so. And we found it pretty difficult to get consumers to come back and have a second try. You know, shipping delay is a pretty serious impediment when you’re taking a day to four days to ship a disc each way. And so at that stage, the business was not tremendously successful.

Brian: Well, right. I think we need to underline that. Everyone thinks of Netflix as this subscription service, but it was an a la carte rental service at the beginning. Do you remember what the thinking was that evolved into the subscription plan that people would remember?

Neil: Well, even before I joined, we were discussing how we could turn this into a recurring revenue model, which is what subscription is, effectively. And it became quickly apparent that we needed to get to that really quickly. We were able to get customers to come try Netflix, but to get them to come back again was difficult. And so, the key pieces that we needed to build were a model where people pay for a month of service and then they get to use it a number of times during that month. And secondly, the Queue was a super important piece of the puzzle because if you got consumers to build a list of the titles they were interested in, then you could automatically ship the next one when they returned the first one. And that was the key to providing a continuity of service and a value that made people engage.

And so, I’d like to lay just a little claim to credit for calling it the “Queue” which, of course, me as a Brit, that’s a very familiar word. To people outside of Britain who are not computer scientists, queue is a pretty strange word, is it the Kway or the QU or what have you? Unfortunately, my technical terminology leaked through into the consumer space, which I’ve regretted for years and years, but it is kind of a humor [inaudible].

Brian: Yeah. I mean, so the queue is obviously this… moving to subscription allows you to do the Queue, it allows you to collect information about what people want to watch next and what they’re interested in, but it also gives you the great tagline of “No late fees.”

Neil: That’s right, exactly, yes. I think I’ve got it the other way around. The Queue allowed us to go to subscription, not the other way around. Without a queue, we wouldn’t have had a subscription plan, but with a queue, we are able to eliminate late fees, to always have a DVD sitting on top of your TV ready to go, and you know, the rest was history at that point. The business began to grow and accumulate subscribers, and the revenue started [inaudible] looking great. We worked on the…I joined in ’99, we started work on subscription pretty quickly. And by the fall of ’99, we were out with the subscription plan.

Brian: And you also—correct me if I’m wrong—but you also started working really early on things like personalization and algorithms to help recommend movies and things like that, right?

Neil: That’s correct, yeah. You’ve gotta remember, again, that in 1999, the library of titles on DVD was measured in the dozens, not the thousands. But very quickly, it became apparent that giving people a tool to find the next interesting thing to watch was gonna be important for subscriber satisfaction, member satisfaction. But also, the habit was for people to come in and search for the newest release, and we would rush to the store and buy a whole lot of copies of the new release, whatever it was. “Das Boot” was one of them, was one of the first new releases after I joined, the…

Brian: Submarine movie, yeah.

Neil: The submarine movie, correct. That’s right.

Brian: The U-boat movie, yeah, yeah, yeah, yeah.

Neil: Yeah. Which was chiefly distinguished by the fact that they printed the Side 1/Side 2 on the wrong side of the disc, and it really wasn’t clear. So you put the thing in, and then the thing started right in the middle of a really intense [inaudible] charging scene with rivets popping off and explosions and…

Brian: Wait a minute, why do I have a memory of that? Like, did I actually watch that DVD? You literally just rang a bell in my head. I feel like that happened to me. Anyway, sorry, go ahead.

Neil: It probably did. And then about 10 minutes in, you realize, “No, this isn’t the start of the DVD.” And you pull it out, and then it’s, “Yeah, okay. Let’s flip it over [inaudible].” Oh, there was so much broken stuff in those days.

Brian: Well, okay…

Neil: Anyway…

Brian: Yeah, go on.

Neil: Yeah. So we needed to figure out a way that we weren’t shipping the newest copies of the newest titles to our members and then having them come back and sit on the shelf where nobody else ever wanted to watch them. And so, the idea of recommending stuff that they might not immediately have in mind but which they wanted to see, maybe older titles, historic titles, was gonna be an important piece to making the business sustainable. If we could rent each title 10 or 20 times, it would become a sustainable business. If we were buying a title for $20 and renting it as part of a subscription, it was not gonna float for very long.

Brian: So let me just underline this again. So the recommendation engine is obviously something to help users, you know, find movies they might wanna watch, but also on your side of it, it’s a solution to an inventory issue and a profitability issue?

Neil: That’s correct, yeah. We called it the “percent new problem,” that if 80% of what we shipped out of the door was a brand new disc, then it clearly was gonna be very expensive to run this business. And the recommendations was a way to solve that in a win-win way.

Brian: To nudge people into the long-tail.

Neil: Correct.
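The inventory-aware nudge described above can be sketched as a toy scoring rule. This is purely illustrative and not Netflix’s actual recommender; the function, weights, and numbers are invented for the example:

```python
# Illustrative sketch (not Netflix's actual algorithm): blend a user's
# predicted enjoyment of a title with how available that title is in
# inventory, so scarce new releases get nudged down and well-stocked
# deep-catalog titles rise toward the top of the recommendations.
def recommendation_score(predicted_rating, copies_available, copies_total,
                         availability_weight=0.5):
    """Return a score in [0, 1]; all parameter names here are hypothetical."""
    availability = copies_available / copies_total if copies_total else 0.0
    rating = predicted_rating / 5.0          # normalize a 1-5 star prediction
    return (1 - availability_weight) * rating + availability_weight * availability

# A hot new release with every copy checked out scores below a well-liked
# catalog title that is sitting on the shelf.
new_release = recommendation_score(4.8, 0, 100)   # great title, none in stock
catalog     = recommendation_score(4.0, 80, 100)  # older title, mostly in stock
```

Weighting availability at 50% is arbitrary here; the point is only that the same mechanism serves both the member (a good next watch) and the business (the “percent new problem”).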

Brian: So I’m gonna just throw this in here real quick. Just, you know, like we said, you join in ’99, it’s the height of the dot-com bubble, and in 2000, the bubble bursts. You’re only there for about a year, maybe a year and a half, when the bubble bursts. I know you can only speak for yourself, but were you ever concerned that, “Oh, maybe we’re one of these dot-com companies that are gonna go under, we’re not gonna make it?”

Neil: Of course. You know, the world was pretty strange. A little anecdote that probably comes from a few months later, but that’s quite relevant…

Brian: Please.

Neil: I feel like Reed and the management of Netflix had always had in mind building a sustainable business from the beginning and so, you know, hence their focus on solving the new disc shipping problem, even though all of our peer companies, at the time, were raising piles of cheap money and using it to fund growth and gaining eyeballs and market share at almost any cost. And I remember a board meeting where the board members were puzzled as to why we weren’t spending money faster and growing faster, as opposed to trying to solve the problems of making the thing profitable. And within months, that kinda turned on its head, and the fact that we had gotten reasonably close to profitability meant that we were able to raise one more round and eke that out. It was touch and go, but we were able to get through to profitability and growth, in spite of the fact that the market for internet businesses at that point had slipped to extremely negative.

Brian: Well, and you were one of the first, or possibly the first internet company, to IPO after the bubble burst. So yeah, you managed to thread that needle.

Neil: Yeah. As with all of these things, there’s a good measure of luck in that. Luck favors the prepared mind, the prepared business, and we were fortunate that we had approached it with kind of a sound business idea in mind.

Brian: Before we get to what we came here to talk about, the streaming stuff, I wondered if you wanted to say a little bit about the competition with Blockbuster. I mean, this is one of those classic, you know, “Harvard Business School Disruption from the Internet” sort of stories. Blockbuster ignored you guys for a while, enough time for you to get traction. But once Blockbuster came online—again, from your perspective, speaking for yourself—like, how worried were you about competition from them and competition from Walmart and people like that?

Neil: Blockbuster was really the third wave of competition. The first wave was Walmart, and the second wave was Amazon. So we kind of faced a showdown with the world’s biggest retailer, followed by the world’s biggest e-tailer, and then Blockbuster came in as the world’s biggest video renter. But by that time, we had learned, I think, the benefit and virtue of being extremely focused on our customers, on delivering value that they needed and required. And I think what we saw, with Walmart, was that they didn’t really have the same level of focus and attention. And while it spooked the public markets and the stock price was in the toilet for a while, the actual outcome really was never particularly in doubt. We were able to keep growing, and we were able to be successful competing against Walmart. And then Amazon, I think, a little more smartly, decided to compete outside the U.S. and they launched [inaudible] in the UK.

Brian: Right. They never actually launched in the U.S. a DVD rental service.

Neil: That’s correct.

Brian: Okay.

Neil: And then Blockbuster finally woke up to the fact that this was a potential threat to their business, and they proceeded to copy our model pretty much exactly and launched it at a slightly lower price, and we ended up competing on price for a bit. They goaded us into— “Goaded” is perhaps the wrong word, they stimulated us into thinking about tiers of service. So we would offer a 1-disc, a 2-disc, and a 3-disc plan at different price points so that we could have a low price point to compete with. And that was an important outcome.

But at the end of the day, it’s fascinating, we had lots of meetings where we debated all the ways we could respond to the Blockbuster threat. And we charted all kinds of new projects in the niche [inaudible] that we were gonna do. And in the end, it amounted to a great deal of running around and not much outcome. The thing that really mattered was sticking to the core and delivering a great service and a better performance than Blockbuster was able to do.

And so, I think the learning that we took away from that was it was more important to focus on our customers and being great at making a great business than worrying about what the competitors are up to. And, you know, eventually, as history shows, Blockbuster overextended themselves and went bankrupt. And you know, I think, in that case, it was responding a little too late and then not really being able to put enough effort and focus on it to catch up with the lead and the advantage that we had.

So it sounds like a bit of a David and a Goliath story, but in the end of the day, I think it’s probably not that uncommon for the incumbent to fail to notice the challenger until it’s too late to effectively respond. So this is the story that you see repeated from time to time.

Brian: Well, you know, it’s probably not that uncommon, but what I would say is uncommon is that…and this is gonna lead us into streaming… is that it’s around 2007 to 2009 that Blockbuster is really vanquished. And you guys have proven your model, and you’re on top. At that exact same moment is when you start to move into this new business model. So what I would say is rare is, you know, you would expect you guys would maybe rest on your laurels for a bit and say, “We won” and celebrate for a bit, but you’re immediately moving into this new business model.

Neil: Yes, that’s correct. The interesting challenge here, of course, is it’s a pretty different business that serves the needs of a somewhat different customer base that doesn’t necessarily overlap with the first one. And I think we had learned from the Blockbuster experience that focus on your customers, your growth customers, your new customers, is an important thing. And so I think we pivoted pretty hard to really shift the attention and focus to the streaming business. Yes, initially, it was pretty nascent. We had to piggyback it on top of our DVD rental subscription business because we were unable to put together a library of sufficiently compelling content to be a standalone business.

Brian: But you know what, you know what? Let’s…hold on. I apologize for cutting in here. Let’s get into streaming with two questions, first of all. When the company was conceived… You know, Reed has, several times, given this quote that he didn’t call the company DVD By Mail, he called it Netflix. Was the idea always, eventually, to do delivery of video over the internet?

Neil: Sure. And when you look back at broadband plans and pricing back in ’99, 2000, 2001, that seemed like a pretty out-there suggestion. Then as the years went by, it became more and more practical until, by 2007, it was quite feasible. That was about the time that YouTube started out too, doing streaming, consumer-generated, user-generated video…

Brian: Okay, okay. That was gonna be…

Neil: So we had that as a parallel.

Brian: I was gonna say, that was my second question. Was there some moment or catalyst, perhaps YouTube, that was the tipping point that caused you guys to say, “Okay, now is the moment to jump in and try to make streaming happen”?

Neil: Actually, I would say it was concurrent, but no, we had started working on internet delivery actually a couple of years before, three years I think, before YouTube came on the scene, and it was a bit stop and start. And at the time, internet speeds were far too low and video compression was far too poor for a real-time delivery. And so we were looking at something much closer to the DVD model of trickle it down overnight and then have it stored on a disc on a box.

So it became pretty clear that that was not gonna be a very interesting model. I’m not sure whether we were thinking about streaming before YouTube or YouTube helped us to think about that. It’s exactly the same time we gave up on the disc storage model and started to think about real-time delivery and streaming to customers as they were watching.

Brian: And so, you’re thinking of set top boxes? I know, obviously, Roku sort of incubated inside Netflix. Were those some of the early projects that you’re talking about?

Neil: No, it was long before Roku. It was in like, 2003, 2004, we started prototyping a box that… If you remember, TiVo was the hot thing about then, and that was a TV receiver with a disc drive. And we were working with some contract manufacturers who were building similar boxes, you know, with a video processor and a hard disc, and we were cobbling together some software to be able to work with a thing like that. But that was clearly a start in the wrong direction that didn’t go anywhere. And then, in the height of competition with Blockbuster, we cancelled all of those projects in favor of just winning. And then when Blockbuster finally diminished as an existential threat, we were able to get back to it. And at that point, it was like, “Okay, well, we’ll skip those things, go straight to streaming.”

Brian: Okay, so this is around…in 2007, I know, is when the first, I think, streaming products come to market. So when do you think that you start working on it in earnest?

Neil: Oh, it was probably 2006, something like that, we kinda had a model and…

Brian: There were so many other streaming experiments around that time, like MovieBeam, Movielink, you know, Unbox or something, I can’t even…there was dozens of ’em. And iTunes Video Rental starts to come out around the same time. So what is it that makes you guys wanna go into this crowded field that, seemingly, no one had had success with before?

Neil: I don’t recall it being as crowded as you are painting now. It feels to me like a lot of those things were a bit later. There was the ill-fated Enron project to “stream across fiber networks that they would own and deploy”, which, in retrospect, seems awfully quaint. Of course, we all know what happened to Enron. But that was maybe the thing that I recall was being most present in our minds as we thought about streaming. Like, “What is this thing, and how can this possibly work? And the economics are just gonna be completely crazy.” Yeah, anyway. That was a novelty. As far as streaming was concerned, YouTube demonstrated the feasibility of the technology, and there wasn’t a lot that was… When iTunes came along, you’ll remember that it was a download model, it was not a streaming model.

Brian: Right. Oh, that’s true.

Neil: And for the first several years, you pressed a button and you waited several hours while the thing trickled down to your disc on your laptop or your device.

Brian: You know what, you’re right because I’m thinking…

Neil: That’s very far from streaming.

Brian: I’m thinking of the Apple TV model, not the original iTunes model. You’re right about that.

Neil: Yeah. Even the Apple TV model, when the Apple TV first came out, that was a… I think I’m right. It had a disc in it, you pressed a button and you waited and it might take 20 minutes to buffer enough to be able to start watching.

Brian: Okay, so that’s a perfect entree into, what are the issues that you guys are having to deal with, you know, ranging from things like bandwidth, the technical issues of like audiovisual quality and things like that? So, in around 2006, you said, 2007, you’ve decided that you wanna do a streaming product. What are the roadblocks and what are the problems that you guys have to solve?

Neil: Well, the good news is that there were a bunch of off-the-shelf pieces that we could assemble to make this feel more real. We were able to go to Windows Media Player, and that was a Windows app that played media from your machine. And we wrapped that in a wrapper that basically managed the streaming, dynamically assembled a video file on your disk, and then used Windows Media Player to play the content. Windows Media Player also had Windows Media DRM that enabled us to satisfy studio requirements that this not be a vehicle for piracy, a source of stolen copies, that we would protect the content that went out.

And so we were able to piggyback on top of that, and then we were able to leverage Akamai as a CDN. I think the key insight that we had was that we chose to use HTTP to deliver a file, rather than to use RTP or one of these other streaming protocols. And we stayed away from all of the stuff that was being developed at the time in terms of smart streaming protocols, smart streaming engines, where the server was an integral part of delivering the content and where user firewalls and user networks had to be properly configured to listen to a new protocol.

By sticking with HTTP, we were leveraging the basic capabilities that deliver a webpage in a way that would seamlessly work almost everywhere. So we put together those pieces, and I would say the technology that we built had a lot of duct tape and baling wire in it and was pretty tricky, but it was good enough to say, “This is an interesting possibility.”
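The “plain HTTP” idea can be sketched with ordinary byte-range requests: the client just issues regular GETs for successive slices of a file, so anything that can fetch a webpage can fetch video. This is a simplified illustration of the approach, not Netflix’s actual client; the function name and chunk sizes are invented:

```python
# Sketch: instead of a special streaming protocol (RTP etc.), a client
# fetches a video file in slices using standard HTTP "Range" headers.
# Firewalls, proxies, and caches that handle web traffic handle this too.
def byte_ranges(total_size, chunk_size):
    """Yield HTTP Range header values covering a file of total_size bytes."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size) - 1  # Range ends are inclusive
        ranges.append(f"bytes={start}-{end}")
        start = end + 1
    return ranges

# A player loop would send each value as a "Range:" header on an ordinary
# GET and append the response body to a local buffer for playback.
chunks = byte_ranges(1_000_000, 400_000)
# chunks == ["bytes=0-399999", "bytes=400000-799999", "bytes=800000-999999"]
```

The design choice Neil highlights is exactly this: pushing the intelligence into the client and keeping the server a dumb file server, which is also the shape that later adaptive-streaming formats took.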

The flip side of this is that the content team was trying to assemble a library of content to stream, and that was extremely difficult because of the way video licensing works. That essentially, we’re competing with terrestrial broadcasters to acquire the rights to a title, and that can be pretty expensive, especially for a small business with just a handful of customers. And so, it was very difficult to get compelling content, certainly nothing that was new or current, it was all extreme, long-tail, deep catalog stuff at the time.

And it was a good supplement to the DVD rental business. We would be able to provide people with something to watch if they happened to run out of DVDs at home, and they could at least click and watch. They could browse the catalog, pick something interesting, say, “That’s what I wanna see,” click the button, and have it start playing within a few seconds. So it was a great technology demo and a great introduction to some of the challenges of building a content library, but it got us off the ground. And so that…

Brian: Right. So that first product is, you know, on your computer desktop. How soon into that do you start to go out to other platforms? I think it was LG, maybe I’m wrong about that, was maybe the first consumer electronics partnership, but then, you know, things like Xbox, PS3, things like that. So do you start working on that right away because you realize, you know, only so many people are gonna watch on their desktop?

Neil: The chronology here is that, almost as soon as we had launched we went into hardware, and we started to build the thing that became the Roku platform, the Roku box, way back in 2007, and put a lot of effort into that. Part-way through, I think, most of the way through that development effort, we also engaged with the Microsoft Xbox team, and we started working on an app for the Xbox 360. In the end, I think the… If I have it right, I think the Roku box was first out and it ended up being Roku, the standalone company, rather than the box from Netflix.

And the reason for that is that we realized that we were going to want to work with all of the CE manufacturers and other partners. The Xbox project had demonstrated the appeal. And so, we thought it would be a lot easier if we didn’t have our own competing hardware solution in-house. And so, by setting up Roku as a standalone entity and freeing them up to pursue competing streaming opportunities, we paved the path for working with Xbox, working with Sony PlayStation, and as you say, working with LG and Samsung.

Brian: So that’s interesting. I just wanna underline that again. The reason you spin off Roku is because then you can be sort of a neutral partner to all of these other hardware manufacturers. “We’re not doing hardware in-house. We’re just giving you this platform, and you can put it on your hardware.” And so that’s why you divest of Roku?

Neil: That’s essentially right, yeah.

Brian: What’s the big break… Maybe that’s not what I wanna ask, but was Xbox and things like that, those were the first big breakthroughs, even before we get into the era of all these smart TVs that start coming out?

Neil: Oh, for sure. Smart TVs were still way down the road. So we did the Xbox thing. It was an interesting quirk that…there were a bunch of things about the Xbox deal that were a little difficult. One of them was an exclusivity piece. And so, we did start working with Sony for their PlayStation platform, but Microsoft had a year of complete exclusivity, and so we didn’t get Sony until a year later. And then, the next year, they had an exclusivity on a game console app, and so we actually…we built a BD-Java application on a Blu-ray disc. Your listeners may not remember them, but Blu-rays had just about replaced DVDs by kind of the late 2000s.

And the content protection scheme was based on actually having a full-out Java implementation that was able to execute Java code on the Blu-ray disc. And, in fact, it was that BD-Java that was the authoring environment for all of the menuing and setup and control for Blu-ray discs. And we actually, in an amazing effort, we were able to build a complete streaming player using the BD-Java and delivering it on a disc to Sony PlayStation. And so, for the second year, we actually had PlayStation streaming, but not from a [inaudible], but from a piece of code delivered on a plastic disc, which is an interesting irony.

Brian: I think I remember that too, actually. What was…?

Neil: Along the way, we took the software platform that we had developed before Roku was spun out, and we generalized that into an SDK. I should put an asterisk on that: that generalization took years and is still ongoing at some level. But we were able to take that and deliver it to LG and Samsung. But not for smart TVs; it was for Blu-ray players at the time. Smart TVs were still a couple of years in the future, and Blu-ray players had a processor that was fast enough to do streaming. And so, it was a credible place to go build a streaming app, and that’s what we did.

Brian: Well, I mean…

Neil: Another thing you have to…

Brian: Go ahead, sorry.

Neil: Another thing you have to remember here is that the capabilities of the early Roku box and of the LG and Samsung players were somewhat limited. And so we didn’t actually build what I would call a “discovery and selection UI.” We leveraged the queue concept that all of our DVD members were familiar with, and we allowed you to build a list of the titles you might be interested in. And then the UI on the LG Blu-ray player or the Samsung Blu-ray player just basically showed one long list of titles, which were things that you had added to your streaming queue, your streaming list. And so, it was a very, very simply done user interface, but it was enough to get going and begin to make things happen. And it wasn’t until a couple of years later that we started to add a real discovery and selection UI on top of that.

Brian: And then that’s probably when you were into the smart TV era and things like that?

Neil: That’s right. And, you know, I’ll just say, we grew beyond LG and Samsung to pretty much most manufacturers of Blu-ray disc players, and then… The disc player and the TV are actually not that different when you dig inside; it’s a pretty similar chip inside. And as soon as TVs started to add networking in meaningful numbers, we pushed hard to have them add Wi-Fi. There was a lot of skepticism at the time that the Wi-Fi would actually work for streaming, and it often didn’t. It’s hard to imagine today, when Wi-Fi is super robust and generally works pretty well. But early Blu-ray players and smart TVs typically had a wired connection and no Wi-Fi, and you had to run a wire out to your TV, which was an unusual thing, for sure. But yeah, we got past that and eventually got good Wi-Fi in these boxes and made it work well.

Brian: Well, I want to point out for the listeners an interesting parallel here, which I’m sure you’re aware of, but back in the DVD-by-mail days, Netflix grew by… As DVD is a technology that’s being adopted and people are buying DVD players for the first time, one strategy that Netflix used heavily was you get a new DVD player, inside the box comes these coupons to try out Netflix. So it’s almost serendipitous that, again, as this universe of devices starts to come out, as these smart TVs start to come out, you sort of pursue the same strategy of, “You’ve got this new $3,000 smart TV. What are you gonna do with it? Well, look, Netflix is there.”

Neil: That’s right. You’ve kinda rewound the clock by a bunch of years. In fact, it was an innovation from the very earliest attempts to market the DVD service in early 2000, 2001, that we came up with the idea of… I shouldn’t say we; I didn’t have anything to do with marketing at that point. This was Leslie Kilgore and her predecessor, and they came up with the idea of red tickets in the box that were good for three or five DVD rentals.

And then we had the painful problem, when we introduced streaming, of converting that into, you know, “It says three rentals. It actually means a month of free service.” And it took a little bit of hand waving to explain the slight difference in the service we were trying to do, but it was obviously a better deal for consumers. And then, eventually, yes, that became end [SP] streaming, and the red ticket morphed into other things. And in many ways, we continue that same partner marketing plan of including some period of free subscription with the purchase of new devices, and that’s been a hugely effective vehicle.

Brian: You know what, it occurs to me we do need to back up for a second. When you first launched streaming, what is the uptake, like, that you noticed? Like, how quickly do you see existing Netflix users using streaming?

Neil: That’s a great question. I don’t have numbers at my fingertips, but it was pretty slow at first. This was a fringe activity that was relevant to eager consumers, early adopters, who had superior home networking and who had the technical savvy to be able to, you know, download our app or to go buy a Roku box or find a high-end DVD player which happened to have an app on it. And these are the things that took off slowly. And, you know, the content was not strong at first, it took a while to start building a strong content library.

Brian: So the content library needs to catch up, the actual installed base of devices that can do the streaming needs to catch up, so it’s not an overnight thing.

Neil: That’s right. And there was a lot of hard work that Reed and others went through in sort of selling the story on both sides. You know, to the hardware manufacturers, “This is gonna be huge, and you should work with us because it’s gonna be a really big piece.” And to the content owners that, “You should sell us content because we’re gonna be able to deliver it effectively.” And, you know, [inaudible] when we started, and so there was a lot of building the credibility on both sides of the business to make it go. But it happened, and the rest is history.

Brian: Let’s go back to, also, some of the technical details. You know, famously, you guys start using AWS very early on. Just some of the… Once it does start to take off…we’ll fast forward to that. So once it starts to become a thing that, you know, a certain percentage of users are using and then it’s so popular it takes over a certain percentage of all internet traffic… So just the scalability issues, the bandwidth issues, any of those things that you wanna go into or stories you wanna tell about that.

Neil: Well, there was an interesting moment in the summer of, I think, 2008 when it was still primarily a DVD business, and the distinction in that is because, as a DVD business, the value delivered to consumers is through the DVDs they have at home. And if the website is not up, well, they can come back tomorrow and order a new disc for the next day, and it’s not an egregious failure. But at that point, streaming was beginning to become interesting, and we knew it was gonna be the future.

At that point, we had a database hardware failure that corrupted the main database that we ran on, and it was an ugly mess. It took most of the day to get the consumer-facing website back live again, and it took several days to get our logistics system back up and going again. And I’d say a lot of people tore out a lot of hair and lost a lot of sleep during those three days. And it became very clear that, as a streaming business, that kind of downtime was not acceptable, that we needed to have a much higher availability, and that we were gonna need to re-architect our systems to have redundancy and failover and all of the things that are necessary for that kind of improved uptime.
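The redundancy-and-failover goal described above can be illustrated with a minimal sketch: a read path that falls back to a replica instead of failing outright. The classes and the in-memory “databases” here are invented stand-ins, not Netflix’s architecture:

```python
# Minimal failover sketch: reads try the primary store and transparently
# fall back to a replica, so a single failed database no longer takes the
# whole service down. Purely illustrative; stores here are plain dicts.
class FailoverStore:
    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica

    def get(self, key):
        try:
            return self.primary[key]     # primary might be a DB client in reality
        except Exception:
            return self.replica[key]     # serve from the replica instead of failing

class BrokenDB(dict):
    """Simulates the corrupted primary from the 2008 outage."""
    def __getitem__(self, key):
        raise RuntimeError("hardware failure")

store = FailoverStore(BrokenDB(), {"movie:42": "Das Boot"})
title = store.get("movie:42")            # served from the replica
```

Real systems add health checks, write replication, and failback, but the shape is the same: no single store is allowed to be a single point of failure.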

And we could look at building a duplicate data center and putting the Oracle and Java stuff in it, that we had at that point, or we could look to something completely different. Amazon had just launched AWS at the time and so we concluded, after a lot of discussion, that if we’re gonna rebuild for redundancy and failover, we should do it in the newest architecture that’s available and not try to build on our legacy stuff. And so, we chose to start from scratch and move our systems over to AWS piece by piece starting in 2008. And that was an odyssey that ended up taking us about eight years…four years for the majority of it and eight years to clean up the last pieces and unplug our data center.

And, yeah, lots of interesting stories along the way. The key one is that we recognized immediately that to do a fork of our legacy system into AWS was not gonna get us what we wanted, we needed to re-architect with the structure of AWS in mind. And we needed to switch to things like NoSQL databases, instead of trying to use bigger and bigger Oracle installations. And we needed to switch to a micro services architecture, instead of a sort of monolithic client server architecture attached to the Oracle database.

And, in order to do that, we wondered, “Do we try to build a slice of the complete service top to bottom, bring that up in AWS, run it in parallel with the current stuff, and then shift customers over a few at a time? Or instead, do we try to take this feature by feature and implement portions of our system in AWS, while leaving portions back in the legacy architecture?” And it was that latter approach that we chose, and I think it was the right decision. I think it got us a lot more experience with how to work with AWS and how to use the services, rather than having to finish everything in order to get anything going at all. And that was, I think, a really key decision.
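The feature-by-feature approach is essentially what is now often called the strangler-fig migration pattern: a router decides, per feature, whether traffic goes to the legacy data center or to AWS, and features are flipped over one at a time. This toy sketch is illustrative only; the backend labels and feature names are hypothetical:

```python
# Strangler-fig sketch: each feature is served by either the legacy data
# center or AWS; features are migrated individually by flipping the route.
class MigrationRouter:
    def __init__(self):
        self.backend = {}                 # feature -> "legacy" | "aws"

    def route(self, feature):
        # Anything not explicitly migrated still goes to the legacy system.
        return self.backend.get(feature, "legacy")

    def migrate(self, feature):
        self.backend[feature] = "aws"

router = MigrationRouter()
before = router.route("search")           # "legacy": not yet migrated
router.migrate("search")                  # flip this one feature over
after = router.route("search")            # "aws"
```

The advantage, as described above, is incremental learning: each flipped feature builds operational experience with the new platform without waiting for a complete rewrite.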

One of the implications, though, is that we spent months building scaffolding that did real-time replication of our Oracle database in our legacy data center to the databases we were trying to use inside AWS, scaffolding that would copy data backward and forward in various different ways. We called this phase the “Roman Riding” phase. Certainly, we had a lot of challenges and frustrations with keeping that scaffolding robust and stable. But in the end, it enabled us to move page by page and feature by feature from the legacy data center into AWS and gradually build and gain confidence that “This is the right way to go,” and then move faster and faster until it was all done.
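The replication scaffolding can be sketched as replaying a change log from the legacy store into the new one so both stay consistent while traffic moves over. This is a deliberately simplified illustration, not the actual system; the change-log format is invented:

```python
# Sketch of one direction of the replication scaffolding: changes captured
# from the legacy (Oracle-style) store are replayed into the AWS-side
# store, keeping the two consistent during the migration.
def replicate(change_log, target):
    """Apply (op, key, value) changes from the legacy store to the target."""
    for op, key, value in change_log:
        if op == "put":
            target[key] = value           # inserts and updates look the same
        elif op == "delete":
            target.pop(key, None)         # tolerate deletes of unknown keys
    return target

aws_side = replicate(
    [("put", "user:1", {"plan": "3-disc"}),
     ("put", "user:1", {"plan": "streaming"}),   # later update wins
     ("delete", "user:2", None)],
    {"user:2": {"plan": "1-disc"}},              # pre-existing AWS-side state
)
```

The production version would run continuously and in both directions, which is exactly why keeping it robust was, as Neil says, a constant source of challenges.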

So there was a critical moment there right after…actually, it was right before the iPhone 3 launched. iPhone 3, you’ll recall, was the first iPhone that had third-party apps available. And Apple had asked us to build a streaming app for the iPhone 3, and in true Apple form, they had only given us about a month’s notice of what they wanted us to do. And so, we rushed to build the iPhone app. The app was actually pretty easy, but what would it talk to? We needed to build capabilities to serve the user interface and the streaming content to the iPhone. And we debated extensively for a couple of days, and then we decided that, really, the only way forward here was to build this in AWS. We didn’t have time to build a backup first in our old legacy data center, and so we were faced with a big launch at Apple’s WWDC, predicated on our new AWS architecture working well. And it was certainly a bit of a nail-biting moment for us, but it was a success, and it gave us great confidence that AWS was the right way forward. And from then on, everything new we built was AWS-only.

Brian: That was actually gonna be one of my questions because here you guys are originally trying to build for set top devices, for TVs, and things like that. So did the whole mobile viewing and, you know, the app economy, did that all kind of take you by surprise a bit?

Neil: I think it was pretty clear that these are gonna be good devices, but certainly the idea that the iPhone was gonna have third-party apps on it was… well, [inaudible] wasn’t obvious until it was, you know, kind of made public.

Brian: Or was it…?

Neil: I’m trying to remember the timing here. There was a tablet, too, and I think maybe it was the tablet that was… You know, the first iPad was the first device that we got onto, and then the iPhone followed a few months later.

Brian: I mean, because…was it even obvious to you guys that people would want to stream over a tiny screen, the handheld screen?

Neil: It still might not be obvious! There’s some differences in behavior in how people use the small screen versus the big screen. And certainly…

Brian: What are those? What are those?

Neil: Oh, if you’re watching on a TV, you tend to ignore the phone. If you’re watching on the phone, a phone call comes in, you get interrupted, and so, you know, we end up with shorter viewing sessions. I think that somewhat influences the kind of content; the content that people watch on phones is content that’s easy to stop and start. And then, nowadays, of course, we’ve…you know, fast forward many years into the future, we’ve implemented downloading on phones. And so, you know, people are now taking it with them on planes and on trains and on long commutes and in parts of the world where there isn’t a, you know, stable home connection sometimes. So, you know, certainly, the world and the environment has changed a bunch, and the phone is a nice complement to a smart TV; it’s a way to watch content.

Brian: One more technical question and then we’ll try to bring it into the present day a bit. You guys, I feel like, have always been like really at the forefront of pushing…so you got streaming to work in a big way. You were the first ones to really, you know, crack that nut, but you were always pushing like audiovisual standards as well, like, you know, pushing HD and then moving into 4K. And so, was that always a goal as well, where it’s not just enough to get people movies on demand, we have to get them movies that are as high-quality as they could get from any piece of content?

Neil: I would say it was, in many ways, a reaction to YouTube being seen as lower quality video.

Brian: Ah, differentiate, yeah.

Neil: I think that we… We worried about being tarred with the same brush, that, you know, Netflix streaming is an inferior product. So we wanted to get out in front and say, “No, we can deliver the best quality AV. We can beat DVDs. We can beat Blu-ray. We can be first to market with 4K. We can be a leader with high dynamic range.” The big advantage of delivering over the internet is that you don’t have to upgrade an entire infrastructure in order to be able to deliver a new format. And so, we’ve been able to push the lead in a lot of these things. And it’s been kinda fun working with…now, of course, we’re into original content…so working with the big-name producers and directors to put this together in the highest-quality formats available. That’s pretty exciting.

Brian: Well, before we get to the original content era, I have to ask about Qwikster. We don’t have to spend an hour on it or even two minutes, but just whatever you wanna say about the thinking going into that, but maybe more importantly, the lesson that you guys learned from that experience.

Neil: Yeah, that’s a great question. In the Blockbuster fight, we had learned the importance of focusing on new customers and the future. And at that point, we were a business that was delivering DVDs and streaming on the same subscription, and we were charging $8 for the DVDs and $10 for the combined plan. And we were spending probably comparable amounts of money licensing streaming content as we were buying DVDs. And so, clearly, $8 plus $2 didn’t work; we needed to start to collect revenue for the streaming plan that was commensurate with the quality of the content that was available, which by this time was much, much better. And so, we needed to separate it into two plans, let people choose either DVDs or streaming or both, and charge a real amount of money for the streaming.

We were so eager not to make the mistake that most businesses do of ignoring the new business in favor of the legacy business that we were a bit too bold and a bit too eager to make the switchover, and so we kind of spun into it. And we certainly…at this point, [inaudible] bunch of customers with some heavy-handed pricing stuff where we could’ve done somewhat better, I think, by looking to the future customer and being a bit more generous with the pre-existing customer, the customer who got us where we were going. But in the end, I look at the Qwikster episode not as a disaster, but as a critical business transition, a discontinuity that we needed to go through that was gonna be painful no matter how we did it. We didn’t do it perfectly, but we survived, and we did accomplish the mission of ending up with a team that was dedicated and focused on streaming and so was able to grow the streaming business.

And today, dvd.com is a separate division in a separate building. They’re focused on the DVD customers and doing very nicely, and the streaming business is focused on the streaming customers. And because they’re different and non-overlapping, minimally overlapping, I should say, we’re able to build the features and capabilities relevant for each side, and it’s successful. I think, had we not gone through some of the pain of the Qwikster period, we might not even be here today at all.

Brian: So maybe you, you know, modern business theory says you should disrupt yourself, but maybe you guys just did it a little too aggressively.

Neil: Yeah, that’s a pretty fair assessment. We were certainly intent on disrupting ourselves, and we did it rather nicely. We could’ve been more sophisticated about it, but I think the piece that many people miss is that it was a transition that needed to happen. If we hadn’t done it, or if we’d been too timid about it, you know, we probably wouldn’t be where we are today.

Brian: So again, to bring us into the modern-day era: in the DVD-rental-by-mail era, you buy the physical discs and you can rent them out time and again to people. But in the streaming era, it’s a completely different ballgame; you have to acquire the rights to stream content. And so early on, because everyone thinks that this is an experiment, maybe even you guys think it’s an experiment, it’s not super expensive to get these rights. But then, as streaming takes off and, as we talked about with Qwikster, you’re seeing this is the business going forward, that becomes more expensive, and then that’s the reason why you realize you need to go into creating your own content, right?

Neil: Yeah. I’m gonna have to put a big caveat on this: my role was as the technology guy, Chief Product Officer by this point, and the technology really had relatively little to do with the content.

Brian: Well, you know what?

Neil: Ted Sarandos is the architect of the content stuff, so he’s the guy who you should probably do a follow-up episode with and…at some point in the future.

Brian: Well, obviously, we’d love to, but you know what, let me tee it up, though. I actually was doing that to tee up a technical question, which is that, you know, we talked… Early on, you were coming up with recommendations and things like that, and you guys used personalization and data very early on to help the business. But from the technical standpoint, once you get into streaming, you have loads more data about what people are actually watching, about what they’re actually interested in. So maybe talk about that, about how, once you’re in the streaming era, what you can learn about user behavior potentially transforms what Netflix can do.

Neil: Yeah, that’s a great point. The recommendation stuff we started out with in 1999 and through the DVD era was…it was important, but it was kind of the sweetener on top of the system. We didn’t have great feedback as to what people had watched and enjoyed. We knew what we’d sent them, we had hard data on which discs we’d sent, but we didn’t know which discs they’d actually watched and which ones they’d enjoyed. And so, right from 1999, 2000, we built the star bar, the five-star thing that’s become such a popular internet meme, because we needed to collect data, “This is a terrible movie,” “This is a wonderful movie,” and “This is an okay movie,” to be able to feed back. And we collected a lot of that. I think we probably collected 5 million ratings a day, but that’s still relatively small compared to the shipping volume. And if people didn’t give us ratings, then it was very hard to predict accurately for them.

But with streaming, you actually have much better data. You see which titles people really engage with and which ones they watched a few minutes of and then turned off again. And so, that quality enjoyment data is actually probably better than the star data. The star data tends to be viewed as, “Well, this is a high-quality movie, whether or not I enjoyed it,” you know. So maybe it rates production value and sort of perceived value of the story rather than enjoyment value…

Brian: Like, you might give, say, I don’t know, “King’s Speech” five stars because you know it won Best Picture, but maybe you didn’t really like it and maybe you didn’t even finish it. But now, with streaming, you can see, “Oh, they would tell us it’s five stars, but they stopped after 1:20.”

Neil: Exactly. And the two examples from the era that I like to use for that are, you know, “Schindler’s List” and “Hotel Rwanda.” Both really good movies, but not necessarily what you wanna watch tonight. You know, really dark, deep, thought-provoking things, but you know, “Tonight, I just want something that I can lean back and enjoy.” And so yeah, the streaming viewing data sort of gives us that enjoyment factor, which is much more valuable, and it gives us complete coverage too. And so, we’re able to use that pretty effectively to drive suggestions and recommendations.

The search feature on our system drives a small amount of viewing, it’s in the teens percent, I think. And search itself has a little bit of recommendations and suggestions built in. It’s designed to [inaudible] you, if you’ve got a small number of titles presented, then we know, roughly, what you might be aiming for. But everything that’s not search has a strong component of personalization and predicted enjoyment built in. And so, it’s true to say that more than 80% of everything that people watch on Netflix is influenced, to some degree, by the personal information that we’ve brought to bear. And that delivers a lot of value for our members and a lot of value for us, in terms of making Netflix just the key place, the place that people come back to every night.

Brian: Well, and also helps you figure out, now that you are producing your own content, what would be the content to produce that your users would actually want to view.

Neil: Sure, but there’s an assumption there that I wanna challenge. We don’t write stories by data. Stories have to be organic and intact, and so we trust a producer or a team to craft an interesting story. But we are able to take the outlines, the pitch, the casting choices, the various decisions that go around that and plug that in to get an idea of who’s gonna be interested, which helps us to know if this is an economically feasible project. And so, using the data to make funding decisions is a different game than using the data to tune the story, which we don’t do.

Brian: One more question and then I’ll let you go. You’ve been so generous with your time. I saw you give a talk where you discussed how, not only are you…you know, everything’s in the cloud, you’re delivering content streaming. But as part of the production process, as you’re producing content, you’re also using some of this cloud technology and these cloud services to help streamline production as well?

Neil: Yeah. You know, there’s an aside that I have to make here [inaudible] an introduction. Everything’s in the cloud except our CDN. OpenConnect is another important piece of the story that we haven’t talked about at all.

Brian: And we should say, CDN is Content Delivery Network. Sorry, go ahead.

Neil: That’s correct. So to make streaming work effectively, you don’t want to be streaming… Particularly, we’re streaming to 192 and a half countries around the world; the half is Ukraine without Crimea. You gotta get the content close to the subscriber; you don’t want it [inaudible 01:05:35] on the fiber optic pipes under the ocean. You know, at the kinds of volumes we do, that doesn’t work. And so, we have content servers all over the world in thousands of locations, in many cases actually installed into ISPs’ infrastructure, and they write it…they [inaudible] in their system so that there’s a minimum amount of “internet structural backbone” that’s the first [inaudible].

And we put a lot of effort into building very small, very power-efficient, exceedingly effective delivery devices that can stream, you know, most of 100 gigabits a second out of a relatively small footprint. And a stack of these installed in an IX or an ISP’s infrastructure is really key to delivery, and that’s been an important piece of building to the scale we’ve got today. So I certainly wanna mention the team that’s been working hard on that, because that’s an important piece of the story. If you’ll remind me of your real question, I can close out on the right topic…

Brian: That I saw you mention that you’re using things like the cloud and things like that on the production side, so that it’s easier for your actual, you know, original productions to do things more efficiently.

Neil: Got it, right. So yes, we are producing this year 400 original titles, for, I think, 1,000-plus hours of content, and that makes us one of the bigger production studios in the world. And, of course, there’s lots of opportunity for IT to support an operation like making a series or making a movie, all the way from the planning stages, where we’re receiving pitches and scripts, we’re soliciting opinions and getting feedback, and where we’re making decisions and working through legal contracts and planning and scheduling and booking casts and sets and renting equipment and delivering it to the locations. All of this stuff is an opportunity for IT to make it more efficient, more effective. And then into the production phase, where we wanna be able to capture the daily shoots and bring them home so people, maybe in another country, are able to see the daily takes and see what’s going on, through to the editing. We subtitle in 20-something languages, 23 or 25, I think. We [inaudible 01:08:38] languages, and so the localization and the post-production work.

So all of this is an area where, rather than acquire or license the conventional tooling to do this stuff, we’ve chosen to invest in building cloud-based technology. And the vision that I’ve tried to bring to life here, though it’s still a few years from being anything like tangible, is that there’s kind of a dashboard of a production that a director or a producer can carry around on their mobile device. And they can see yesterday’s takes and today’s scripts and the planning for tomorrow, and they can make sure that everything’s lined up, and they can look to see what they did last year in the previous season and what’s going on on the other side. And, you know, it’s the dashboard that brings together the data from all the phases of the current production and all the acquired knowledge from all the other productions that we’ve done, in a way that will provide us, when it’s fully realized, an amazing tool for efficient production of original content. And so, I think that’s a really exciting possibility that we’re investing in heavily at the moment.

Brian: Well, Neil, I believe you are leaving Netflix in July, is it?

Neil: That’s correct.

Brian: So my last question is really simple, and if it’s too personal, you can be vague. But what are your plans for life post-Netflix?

Neil: I’ve been continuously employed since I was 16 or 17 years old, and I’m gonna take a bunch of months, maybe a year or two, to just go through the things that appeal. Travel; I’m a big outdoors person, I love to bike and kayak and climb mountains. And I’m gonna be doing all of those things here in my home in California and around the world, and that’s gonna be fun. I’m expecting, though, that at some point, I’ll wanna get back into building something. And I find, at this point, there are sort of three areas that are particularly appealing to me: the area of precision medicine, so healthcare augmented by technology and big data and machine learning; the area of education, since I think that learning is going through a big transformation, and we need to make it much more effective and affordable and available; and then alternative energy, since I feel like the opportunity to apply technology to solving some of the climate crisis problems is a good one too. So those three areas are particularly appealing to me. I don’t know if I’ll find something that’s compelling and where I can make a difference, but that’s something I’d like to imagine is in my future.

Brian: Yeah, well, good luck if you do one or maybe all three of those things. But Neil Hunt, thank you so much for coming on the show, giving us just a wonderful background history of Netflix, of the streaming product. But also, thank you for sharing a really, really fascinating career.

Neil: Well, thank you for the opportunity. And I would be remiss if I didn’t note that there are about 2,000 people here now at Netflix and [inaudible 00:12:33] who’ve worked with me and around me and on my team and on my care teams over the past 18 years that I’ve been at Netflix. And it’s been an extraordinary experience, and I really am very fortunate to have worked with so many smart and sharp people. And that’s been great, so I thank them all as well. You know who you all are, if you’re listening to this. You did it with me. Thank you.


Why do so few people major in computer science?


In 2005, about 54,000 people in the US earned bachelor’s degrees in computer science. That figure was lower every year afterwards until 2014, when 55,000 people majored in CS. I’m surprised not only that the figure is low; the greater shock is that it was flat for a decade. Given the high wages for developers and the cultural centrality of Silicon Valley, shouldn’t we expect far more people to have majored in computer science?

This is even more surprising when we consider that 1.90 million people graduated with bachelor’s degrees in 2015, which is 31% higher than the 1.44 million graduates in 2005. (Data is via the National Center for Education Statistics, Digest of Education Statistics.) That means that the share of people majoring in computer science has decreased, from 3.76% of all majors in 2005 to 3.14% of all majors in 2015. Meanwhile, other STEM majors have grown over the same period: “engineering” plus “engineering technologies” went from 79,544 to 115,096, a gain of 45%; “mathematics and statistics” from 14,351 to 21,853, a gain of 52%; “physical sciences and science technologies” from 19,104 to 30,038, a gain of 57%; “biological and biomedical sciences” from 65,915 to 109,896, a gain of 67%. “Computer sciences and information technologies”? From 54,111 in 2005 to 59,581 in 2015, a paltry gain of 10%.
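The growth figures above are simple ratios of the 2015 and 2005 counts; a few lines of Python reproduce them from the raw numbers quoted in the text:

```python
# Bachelor's degree counts for 2005 and 2015, as quoted in the text
# (NCES Digest of Education Statistics).
majors = {
    "engineering (incl. technologies)": (79_544, 115_096),
    "mathematics and statistics": (14_351, 21_853),
    "physical sciences and science technologies": (19_104, 30_038),
    "biological and biomedical sciences": (65_915, 109_896),
    "computer sciences and information technologies": (54_111, 59_581),
}

# Percentage growth from 2005 to 2015 for each major.
growth = {
    name: (y2015 - y2005) / y2005 * 100
    for name, (y2005, y2015) in majors.items()
}

for name, pct in sorted(growth.items(), key=lambda kv: -kv[1]):
    print(f"{name}: +{pct:.0f}%")
```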

If you’d like a handy chart, I graphed the growth here, with number of graduates normalized to 2005.

I consider this a puzzle because I think that people who go to college decide on what to major in significantly based on two factors: earning potential and whether a field is seen as high-status. Now let’s establish whether majoring in CS delivers either.

Are wages high? The answer is yes. The Bureau of Labor Statistics has data on software developers. The latest data we have is from May 2016, when the median annual pay for software developers was $106,000; pretty good, considering that the median annual pay for all occupations is $37,000. But what about the lowest decile, which we might consider a proxy for the pay of entry-level jobs that fresh grads can expect to claim? That figure is $64,650, twice the median annual pay for all occupations. We can examine data from a few years back as well. In 2010, median pay for software developers was $87,000; pay at the lowest decile was $54,000. Neither was low, and both have grown since.

Now we can consider whether someone majoring in computer science can expect to join a high-status industry. That’s more difficult to prove rigorously, but I submit the answer is yes. I went to high school during the late aughts, when the financial crisis crushed some of Wall Street’s allure, and Silicon Valley seemed glamorous even then. Google IPO’d in 2004, people my age all wanted to buy iPhones and work for Steve Jobs, and we were all signing up for Facebook. People talked about how cool it would be to intern at these places. One may not expect to end up at Google after college, but that was a great goal to aspire to. Industries like plastics or poultry or trucking don’t have such glittering companies to attract talent.

I tweeted out this puzzle and received a variety of responses. Most of them failed to satisfy. Now I want to run through some common solutions offered to this puzzle, along with some rough-and-dirty arguments on what I find lacking about them.

Note: All data comes from the Digest of Education Statistics, from the Department of Education.

***

1. Computer science is hard. This is a valid observation, but it doesn’t explain behavior on the margin. CS is a difficult subject, but it’s not the only hard major. People who proclaim that CS is so tough have to explain why so many more people have been majoring in math, physics, and engineering; remember, all three majors have seen growth of over 40% between 2005 and 2015, and they’re no cakewalks either. It’s also not obvious that their employment prospects are rosier than those for CS majors (at least for the median student who doesn’t go to a hedge fund). Isn’t it reasonable to expect that people with an aptitude for math, physics, and engineering will also have an aptitude for CS? If so, why is it the only field with low growth?

On the margin, we should expect high wages to attract more people to a discipline, even if it’s hard. Do all the people who are okay with toiling for med school, law school, or PhD programs find the CS bachelor’s degree to be unthinkably daunting?

2. You don’t need a CS degree to be a developer. This is another valid statement that I don’t think explains behaviors on the margin. Yes, I know plenty of developers who didn’t graduate from college or major in CS. Many who didn’t go to school were able to learn on their own, helped along by the varieties of MOOCs and boot camps designed to get them into industry.

It might be true that software development is the field that least requires a bachelor’s degree in the associated major. Still: Shouldn’t we expect some correlation between study and employment here? That is, shouldn’t a CS major be considered a helpful path into the industry? It seems to me that most tech recruiters look on CS majors with favor.

Although there are many ways to become a developer, I’d find it surprising if majoring in CS is a perfectly useless way to enter the profession, and so people shun it in favor of other majors.

3. People aren’t so market-driven when they’re considering majors. I was a philosophy major, and no, I didn’t select it on the basis of its dazzling career prospects. Aren’t most people like me when it comes to selecting majors?

Maybe. It’s hard to tell. Evidence for includes a study published in the Journal of Human Capital, which suggests that people would reconsider their majors if they actually knew what they could earn in their associated industries. That is, they didn’t think hard enough about earning potentials when they were committing to their majors.

We see some evidence against this idea if we look at the tables I’ve been referencing. Two of the majors with the highest rates of growth have been healthcare and law enforcement. The number of people graduating with bachelor’s degrees in “health professions and related programs” more than doubled, from 80,865 in 2005 to 216,228 in 2015. We can find another doubling in “homeland security, law enforcement, and firefighting,” from 30,723 in 2005 to 62,723 in 2015. Haven’t these rents-heavy and government-driven sectors been pretty big growth sectors in the last few years? If so, we can see that people have been responsive to market-driven demand for jobs.

(As a sidenote, if we consider the pipeline of talent to be reflective of expectations of the economy, and if we consider changes in the number of bachelor’s degrees to be a good measure of this pipeline, then we see more evidence for Alex Tabarrok’s view that we’re becoming a healthcare-warfare state rather than an innovation nation.)

In the meantime, I’m happy to point out that the number of people majoring in philosophy slightly declined between 2005 and 2015, from 11,584 to 11,072. It’s another sign that people are somewhat responsive to labor market demands. My view is that all the people who are smart enough to excel as philosophy majors are also smart enough not to actually pursue that major. (I can’t claim to be so original here—Wittgenstein said he saw more philosophy in aerospace engineering than he did in philosophy.)

4. Immigrants are taking all the jobs. I submit two ways to see that not all demand is met by immigrants. First, most immigrants who come to the US to work are on the H1B visa; and that number has been capped at 65,000 every year since 2004. (There are other visa programs available, but the H1B is the main one, and it doesn’t all go to software engineers.) Second, rising wages should be prima facie evidence that there’s a shortage of labor. If immigrants have flooded the market, then we should see that wages have declined; that hasn’t been the case.

To say that immigrants are discouraging people from majoring in CS requires arguing that students are acutely aware of the level of the H1B cap, expect that it will be lifted at some point in the near future, and therefore find it too risky to enter this field because they expect to be competing with foreign workers on home soil sometime. Maybe. But I don’t think that students are so acutely sensitive to this issue.

5. Anti-women culture. Tech companies and CS departments have the reputation of being unfriendly to women. The NCES tables I’m looking at don’t give a breakdown of majors by gender, so we can’t tell if the shares of men and women majoring in CS have differed significantly from previous decades. One thing to note is that the growth in people earning CS majors has been far below the growth of either gender earning bachelor’s degrees.

More women graduate from college than men. (Data referenced in this paragraph comes from this table.) In 1980, each gender saw about 465,000 new grads. Since then, many more women have earned degrees than men; in 2015, 812,669 men earned bachelor’s degrees, while 1,082,265 women did. But since 2005, the growth rate for women earning bachelor’s degrees has not significantly outpaced that of men: 32.5% more men earned bachelor’s degrees over the previous decade, a slightly higher rate than the 31.5% for women. It remains significant that women are sustaining that growth rate over a higher base, but it may no longer be the case that their growth can be much higher than men’s in the future.

What’s important is that the roughly 30% growth rate for both genders is well above the 10% growth for CS majors over this time period. We can’t pick out the breakdown by gender from this dataset, but I’d welcome suggestions on how to find those figures in the comments below.

6. Reactionary faculty. The market for developers isn’t constrained by some guild like the American Medical Association, which caps the number of people who graduate from med schools in the name of quality control.

CS doesn’t have the same kind of guild masters, unless we count faculty as serving this function on their own. It could be that people serving on computer science faculties are contemptuous of those who want high pay and the tech life; instead, they’re looking for theory-obsessed undergraduates who are as interested in, say, Turing and von Neumann as they are. And so, in response to a huge new demand for CS majors, they significantly raise standards, allowing no more than, say, 500 people to graduate where a decade ago 450 did. Rather than catering to the demands of the market, they raise standards so that they’re failing an even higher proportion of students to push them out of their lovely, pure, scholarly field.

I have no firsthand experience. To determine this as a causal explanation, we would have to look into how many more students have been graduating from individual departments relative to the number of people who were weeded out. The latter is difficult to determine, but it may be possible to track if particular departments have raised standards over the last few decades.

7. Anti-nerd culture. Nerds blow, right? Yeah, no doubt. But aren’t the departments of math, physics, and engineering also filled with nerds, who can expect just as much social derision on the basis of their choice? That these fields have seen high growth when CS has not is evidence that people aren’t avoiding all the uncool majors, only the CS one.

8. Skill mismatch and lack of training from startups. This is related to, but slightly different from, my accusation that CS faculty are reactionaries. Perhaps the professors are all too theoretical and would never make it as coders at tech companies. Based on anecdotal evidence, I’ve seen that most startups are hesitant to hire fresh grads; instead, they want people who have had some training outside of college. One also hears that the 10X coders aren’t that eager to train new talent; there isn’t enough incentive for them to.

This is likely a factor, but I don’t think it goes a great length in explaining why so few people commit to majoring in the field. Students see peers getting internships at big tech companies, and they don’t necessarily know that their training is too theoretical. I submit that this realization should not deter them; even if students do realize it, they might patch up their skills by attending a boot camp.

9. Quality gradient. Perhaps students who graduate from one of the top 50 CS departments have an easy time finding a job, but those who graduate from outside that club have a harder time. But this is another one of those explanations that attributes to students a greater degree of sophistication than the average freshman can be observed to possess. Do students have an acute sense of the quality gradient between the best and the rest? Why is the marginal student not drawn to study CS at a top school, and why would a top student not want to study CS at a non-top school, especially if he or she can find boot camps and MOOCs to bolster learning? I would not glance at what students do and immediately conclude that they’re hyperrational creatures. And a look at the growing numbers of visual arts majors offers evidence that many people are not rational about what they should study.

10. Psychological burn from the dotcom bubble. Have people been deeply scarred by the big tech bubble? It burst in 2001; if the CS majors who went through it experienced a long period of difficulty, then it could be the case that they warned younger people off majoring in it. To prove this, we’d have to see if people who graduated after the bubble did have a hard time, and if college students are generally aware of the difficulties experienced by graduates from previous years.

11. No pipeline issues anymore. In 2014, the number of people majoring in CS surpassed the figure in 2005, the previous peak. In 2015, that figure was higher still. And based on anecdotal evidence, it seems like there are many more people taking CS intro classes than ever before. 2014 corresponds to four years after The Social Network movie came out; that did seem to make people more excited about startups, so perhaps tech wasn’t as central then as it seems now.

I like to think of The Social Network as the Liar’s Poker of the tech world: An intended cautionary tale of an industry that instead hugely glamorized it to the wrong people. The Straussian reading of these two works, of course, is that Liar’s Poker and The Social Network had every intention to glamorize their respective industries; the piously-voiced regrets by their creators are absolutely not to be believed.

Even if the pipeline is bursting today, the puzzle is why high wages and the cultural centrality of Silicon Valley have not drawn in more people in the previous decade. Anyone who offers an argument also has to explain why things are different today than in 2005. Perhaps I’ve overstated how cool tech was before 2010.

***

A few last thoughts:

If this post is listed on Hacker News, I invite people to comment there or on this post to offer discussion. In general, I would push on people to explain not just what the problems are in the industry, but how they deter college students from pursuing a major in CS. College freshmen aren’t expected to display hyperrationality on campus or for their future. Why should we look for college students to have a keen appreciation of the exponential gradient between different skill levels, or potential physical problems associated with coding, or the lack of training provided by companies to new grads? Remember, college students make irrational choices in major selection all the time. What deters them from studying this exciting, high-wage profession? Why do they go into math, physics, or engineering in higher numbers instead?

I wonder to what extent faculties are too strict with their standards, unwilling to let just anyone enter the field, especially for those who are jobs-minded. Software errors are usually reversible; CS departments aren’t graduating bridge engineers. If we blame faculty, should people be pushing for a radical relaxation/re-orientation of standards in CS departments?

Let’s go to the top end of talent. Another question I think about now: To what extent are developers affected by power law distributions? Is it the case that the top, say, 25 machine learning engineers in the world are worth as much as the next 300 best machine learning engineers together, who are worth as much as the next best 1500? If this is valid, how should we evaluate the positioning of the largest tech companies?

Perhaps this is a good time to bring up the idea that the tech sector may be smaller than we think. By a generous definition, 20% of the workers in the Bay Area work in tech. Matt Klein at FT Alphaville calculates that the US software sector is big neither in employment nor in value-added terms. Software may be eating the world, but right now it’s either taking small bites, or we’re not able to measure it well.

Finally, a more meditative, grander question from Peter Thiel: “How big is the tech industry? Is it enough to save all Western Civilization? Enough to save the United States? Enough to save the State of California? I think that it’s large enough to bail out the government workers’ unions in the city of San Francisco.”

Thanks to Dave Petersen for offering helpful comments.

Unicode is hard


In the last couple of months, I've been seeing the ú symbol on British receipts. Why?

Receipts with a missing pound sign

1963 - ASCII

In the beginning* was ASCII. A standard way for computers to exchange text. ASCII was originally designed with 7 bits - that means 128 possible symbols. That ought to be enough for everyone, right?

Wrong! ASCII is the American Standard Code for Information Interchange. It contains a $ symbol, but nothing for other currencies. That's a problem because we don't all speak American.

*ASCII has its origins in the telegraph codes of the early 20th Century. They derive from Baudot codes from the 19th Century.

1981 - Extended ASCII

So ASCII gradually morphed into an 8-bit encoding - and that's where the problems began. Symbols 0-127 had already been standardised and accepted. What about symbols 128-255?

Because of the vast range of symbols needed for worldwide communication - and only 256 symbols available in an 8-bit encoding - computers began to rely on "code pages". The idea is simple: the start of a file contains a code saying which language the document is written in. The computer uses that to determine which set of symbols to use.

In 1981, IBM released their first Personal Computer. It used code page 437 for English.

Each human script / alphabet needed its own code page. For example Greek uses 737 and Cyrillic uses 855. This means that the same code can be rendered multiple different ways depending on which encoding is used.

Here's how symbols 162, 163, and 164 are rendered in various code pages.

Encoding                   162  163  164
Code Page 437 (Latin US)    ó    ú    ñ
Code Page 737 (Greek)       λ    μ    ν
Code Page 855 (Cyrillic)    б    Б    ц
Code Page 667 (Polish)      ó    Ó    ñ
Code Page 720 (Arabic)      ت    ث    ج
Code Page 863 (French)      ó    ú    ¨

As you can see, different characters are displayed depending on which encoding you use. If the computer gets the encoding wrong, your text becomes an incomprehensible mix of various languages.

This made everyone who worked with computers very angry.

1983 - DEC

This is silly! You can't have the same code representing different symbols. That's too confusing. So, in 1983, DEC introduced yet another encoding standard - the Multinational Character Set.

On the DEC VT100, the British Keyboard Selection has the £ symbol in position 35 of extended ASCII (35 + 128 = 163). This becomes important later.

Of course, if you sent text from a DEC to an IBM, it would still get garbled unless you knew exactly what encoding was being used.

People got even angrier.

1987

Eventually, ISO published 8859-1 - commonly known as Latin-1.

It takes most of the previous standards and juggles them around a bit, to put them in a somewhat logical order. Here's a snippet of how it compares to code page 437.

Encoding                   162  163  164
Code Page 437 (Latin US)    ó    ú    ñ
ISO-8859-1 (Latin-1)        ¢    £    ¤

8859-1 defines the first 256 symbols and declares that there shall be no deviation from that. Microsoft then immediately deviates with their Windows 1252 encoding.

Everyone hates Microsoft.

1991 - Unicode!

In the early 1990s, Unicode was born out of the earlier Universal Coded Character Set. It attempts to create a standard way to encode all human text. In order to maintain backwards compatibility with existing documents, the first 256 characters of Unicode are identical to ISO 8859-1 (Latin 1).

A new era of peace and prosperity was ushered in. Everyone now uses Unicode. Nation shall speak peace unto Nation!

2017 - Why hasn't this been sorted out yet?

Here's what's happening. I think.

  • The restaurateur uses their till and types up the price list.
  • The till uses Unicode and the £ symbol is stored as number 163.
  • The till connects to a printer.
  • The till sends text to the printer as a series of 8 bit codes.
  • The printer doesn't know which code page to use, so makes a best guess.
  • The printer's manufacturer decided to fall back to the lowest common denominator - code page 437.
  • 163 is translated to ú.
  • The customer gets confused and writes a blog post.
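The failure chain above can be reproduced in a couple of lines of Python, using the codecs that ship with the standard library:

```python
# The till stores the pound sign as Unicode code point 163 (U+00A3)
# and sends it to the printer as the single byte 0xA3 (Latin-1).
payload = "£".encode("latin-1")
assert payload == bytes([163])

# The printer falls back to IBM code page 437, where byte 163 is 'ú'.
printed = payload.decode("cp437")
print(printed)  # → ú
```

The same byte decoded with code page 737 would print μ instead - the byte is fine; only the printer's guess about the code page is wrong.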

Over 30 years later, a modern receipt printer is still using IBM's code page 437! It just refuses to die!

Even today, on modern Windows machines, typing Alt+163 will default to code page 437 and print ú.

As I tap my modern Android phone on the contactless credit card reader, and as bits fly through the air like færies doing my bidding, the whole of our modern world is still underpinned by an ancient and decrepit standard which occasionally pokes its head out of the woodwork just to let us know it is still lurking.

It's turtles all the way down!

Or is it?

Of course, when we look at the full receipt, we notice something weird.

Receipt with both pound signs and u symbols

The £ is printed just fine on some parts of the receipt!

⨈Һ𝘢ʈ ╤ћᘓ 𝔽ᵁʗꗪ❓

Tallest Lego building with 4 pieces?


My daughter asked me: ‘Daddy, what’s the tallest structure we can build with those 4 pieces?’

The only rule is: “The structure needs to stand on its own.”

so here is one simple solution:

but we can do better:

The pink circle is hidden between two pieces, and it adds to the height

and better

The pink circle does not add to the height

Now, stop!!!

Can you think how you would make it even higher?

Well, below is our best solution.

How the failed deal with Waymo left Ford in the lurch


New CEO Jim Hackett, far right, with three key members of his team; from l. to r., Raj Nair, Jim Farley and Marcy Klevorn -- and Executive Chairman Bill Ford.

At the end of 2015, it looked like Ford's then-CEO Mark Fields was going to score a big win: a partnership with Google to develop autonomous vehicles.

The deal would have been a huge boon to Ford. At the time, none of the major automakers had spelled out a serious plan for getting fully self-driving cars on the road. And despite posting solid profits, Ford's shares were falling because investors didn't see anything coming down the line that they felt excited about. An announcement of a Ford-Google pairing could have significantly moved the needle on Ford's share prices.

A look back at the timeline of the Ford-Google talks reveals a moment that became a turning point in Fields' career at Ford. His failure to pull off the deal indirectly left Ford struggling to voice an autonomous vehicle strategy that resonated with stakeholders and led to the emergence of Jim Hackett as his replacement as CEO.

Sources with knowledge of the talks said the deal came undone because Fields' enthusiasm for Wall Street's reaction didn't mesh with Google's desire to lay the groundwork for a quiet, technical partnership in which each side learned from the other before deciding how to move forward.

A spokeswoman for Ford said the company does not comment on rumors or speculation.

Google was comfortable with the Silicon Valley approach of "frenemies" — sometimes partnering with competitors so each side can learn something. That concept seemed anathema to Ford, sources said.

Within weeks of the deal falling apart, Executive Chairman Bill Ford identified Hackett, a member of Ford's board, as someone who had a good relationship with Silicon Valley. Hackett was named head of the new Ford Smart Mobility unit just a few weeks after that and was named CEO a little more than a year later. Here's how it went down.

Early 2015

Google's autonomous program was emerging as a leader in the self-driving realm, but the tech giant didn't have a viable car platform to build on. Google wanted its first partnership to be with a carmaker that had a solid lineup of hybrid electric vehicles and was interested in learning how to integrate autonomous technology into production-car platforms. Google was drawn to Ford because of its reputation and scale, sources said. Google had been retrofitting Lexus RX 450h crossovers with self-driving tech, then it developed a cute car platform that looked like a koala. Those koala cars topped out at 25 mph and were designed primarily for testing, not selling to consumers.

Details on who reached out to whom first or how the talks began are unclear. But in 2015, executives in the Google autonomous car project, then run by Chief Technology Officer and technical lead Chris Urmson, began discussing a partnership with Ford executives.

The deal would not be exclusive: Google was free to collaborate with other automakers. But Ford argued that the partnership had to be big enough to make sense for the automaker and said it needed a large-scale commitment to justify the costs. Only a strategic partnership with long-term commitments would convince Wall Street this deal was beneficial to both sides, Ford said.

By the fall, both companies were getting close to signing an agreement.

Late 2015

In September that year, Google showed the first public signs it was serious about turning the self-driving car project into a business by hiring longtime auto industry insider John Krafcik and naming him CEO of the project. Krafcik, former CEO of Hyundai Motor America and also a former Ford engineer, was tasked with spinning off the unit into a stand-alone company.

In October, Google hit an internal milestone by operating the first unassisted self-driving trip on public roads in Austin, Texas, in one of the company's koala cars. The technology was emerging as the only system that could actually drive itself without human interference.

But the company wasn't looking to draw attention to that milestone. It waited an entire year before telling the world in December 2016 that it had conducted that first drive, rolling the announcement in with the news that it was spinning off the self-driving unit into a company called Waymo.

In early December 2015, Fields came to Silicon Valley to discuss the deal with Google co-founder Sergey Brin. In a region where there are so many electric cars that office workers often argue over charging stations to plug in their Teslas and Nissan Leafs during the workday, Fields showed up at Google with an army of staffers in a fleet of Lincoln Navigators. Sources said Fields and his team were armed with a plan to make a big splash out of the partnership news, and much of the discussion centered around making an impression on Wall Street.

The culture clash between the two companies was becoming evident. Google began to reconsider the deal.

On Dec. 21, news leaked that Ford and Google were in discussions. The news made a big splash, with Road & Track declaring: "This is a big deal." Ford shares went up 2 percent the following day, and Google shares fell less than 1 percent.

Early 2016

The news stories about the Ford-Google deal contained an error: They said the companies were set to make the announcement at tech trade show CES in Las Vegas in early January. That had never been in the plan, but it left Ford dealing with an awkward situation. Ford announced at the show that it would triple its own driverless fleet, partner with Amazon to connect Alexa to its cars and use drones with F-150s.

The response was underwhelming.

Later that month, Google's team reached out to Ford's team to say they were backing out. Fields and Brin had a short text conversation that explained little.

A source inside Ford said the cancellation came within days of a planned presentation to the Ford board. Fields was incensed. Google's cavalier approach to ending the deal — the source said Brin didn't view the cancellation as "a big deal" and didn't understand Fields' frustration — left Ford executives stunned.

In February, Bill Ford organized a tour of Silicon Valley businesses with the entire Ford Motor board, which included Hackett. It was on that trip, he said at the press conference announcing Hackett's new role last week, that he saw that Hackett was already familiar with and ingrained in Silicon Valley culture.

"Every one of them were walking right up to Jim, and they gave him a hug and said, 'I didn't know you were on this board,'" Ford said. "The leaders out there said, 'My gosh, he's one of the real original thinkers that we know, and you guys are really lucky to have him on your board.'"

On March 11, Hackett stepped down from the board and was named head of Ford's Smart Mobility program, a new subsidiary focusing on new technology. The unit has gone from 12 employees to 600 in just a year.

Meanwhile, in May 2016, Google and Fiat Chrysler announced a partnership to develop 100 Chrysler Pacifica hybrid minivans. The deal has since been expanded to 500 vehicles.

It is unclear how much of a hand Hackett has had in the mobility deals Ford has inked since he joined the company as an executive. In the past year, Ford announced investments in lidar maker Velodyne and 3D-mapping company Civil Maps. It also acquired San Francisco-based Chariot, a ride-hailing company, and Israeli machine-learning developer SAIPS. And in February, the automaker announced a $1 billion investment in Argo AI, a Pittsburgh company developing artificial intelligence for self-driving cars.

Bill Ford made it clear at the press conference last week that he expects Hackett to get the automaker operating more quickly.

"The speed at which the world is moving really requires us to make decisions at a faster pace," he said. "I don't think we missed any opportunities per se, but if we're going to win in this new world, we have to move fast and trust people to move fast."

Ford, Google timeline


  • Early 2015: Ford and Google enter talks to partner on autonomous technology.

  • September 2015: John Krafcik is named CEO of the Google X self-driving car project.

  • Early December 2015: Ford CEO Mark Fields travels to Google's headquarters in Mountain View, Calif., to discuss the deal.

  • Dec. 21, 2015: News breaks that Ford and Google are in discussions.

  • January 2016: Google backs out of the talks.

  • February 2016: Executive Chairman Bill Ford brings the Ford Motor board to tour Silicon Valley companies; witnesses the warm welcome Jim Hackett receives from many people.

  • March 2016: Hackett steps down from the board to lead Ford Smart Mobility, an independent, Silicon Valley-based subsidiary focusing on mobility investments.

  • May 22, 2017: Hackett is named Ford CEO.


Thoughts on Tokens


By Balaji S. Srinivasan and Naval Ravikant

The exponential rise of non-Bitcoin tokens prior to the coming correction. Data from coinmarketcap.com/charts

In 2014, we wrote that “Bitcoin is more than money, and more than a protocol. It’s a model and platform for true crowdfunding — open, distributed, and liquid all the way.”

That new model is here, and it’s based on the idea of an appcoin or token: a scarce digital asset based on underlying technology inspired by Bitcoin. While indisputably frothy, as of this writing the token sector sits at a combined market cap in the tens of billions. These new “fat protocols” may eventually create and capture more value than the last generation of Internet companies.

Here we discuss many concepts related to tokens, beginning with the basics for folks new to the space and then moving to advanced ideas.

The most important takeaway is that tokens are not equity, but are more similar to paid API keys. Nevertheless, they may represent a >1000X improvement in the time-to-liquidity and a >100X improvement in the size of the buyer base relative to traditional means for US technology financing — like a Kickstarter on steroids. This in turn opens up the space for funding new kinds of projects previously off-limits to venture capital, including open source protocols and projects with fast 2X return potential.

But let’s start with the basics first. Why now?

1. Tokens are possible because of four years of digital currency infrastructure

The last time the public at large heard much about digital currency was in late 2013 to early 2014, when the Bitcoin price last touched its then all-time high of $1,242. Since then, several things happened:

In 2013, the legality of digital currency was still in question, with many predicting death and others going so far as to call Bitcoin “evil”. Those kneejerk headlines eventually gave way to Satoshi billboards in Davos and the Economist putting the technology behind Bitcoin on its cover.

By 2017, every major country has a digital currency exchange and every major financial institution has a team working on blockchains. The maturation of infrastructure and societal acceptance for digital currencies has set the stage for the next phase: internet-based crowdfunding of novel Bitcoin-like tokens for new applications.

2. Tokens vary in their underlying blockchains and codebases

To first order, a token is a digital asset that can be transferred (not simply copied) between two parties over the internet without requiring the consent of any other party. Bitcoin is the original token, with bitcoin transfers and issuances of new bitcoin recorded in the Bitcoin blockchain. Other tokens also have transfers and changes to their monetary base recorded in their own blockchains.

One key concept is that a token’s codebase is different from its blockchain database. As an offline analogy, imagine if the US banking infrastructure was repurposed to manage Australian dollars: both are “dollars” and have a shared cultural origin, but a completely different monetary base. In the same way, two tokens may use similar codebases (monetary policies) but have different blockchain databases (monetary bases).

The success of Bitcoin inspired several different kinds of tokens:

  • Tokens based on new chains and forked Bitcoin code. These were the first tokens. Some of these tokens, like Dogecoin, simply changed parameters in the Bitcoin codebase. Others like ZCash, Dash, and Monero innovated on privacy-preserving features. Still others like Litecoin also began as simple tweaks to Bitcoin’s code, but eventually became test grounds for new features. All of these tokens initiated their own blockchains, completely separate from the Bitcoin blockchain.
  • Tokens based on new chains and new code. The next step was the creation of tokens based on wholly new codebases, of which the most prominent example is Ethereum. Ethereum is Bitcoin-inspired but has its own blockchain and was engineered from the ground up to be more programmable. Though this comes with an increased attack surface, it also comes with new capabilities.
  • Tokens based on forked chains and forked code. The most important example here is Ethereum Classic, which was based on a hard fork of the Ethereum blockchain that occurred after a security issue was used to exploit a large smart contract. That sounds technical, but essentially what happened is that a crisis caused the Ethereum community to split 90/10 with two different go-forward monetary policies for each group. A real world example would be if all the citizens of the US who disagreed with the 2008 bailouts exchanged their dollars for “classic dollars” and adopted a different Fed.
  • Tokens issued on top of the Ethereum blockchain. Examples include Golem and Gnosis, both based on ERC20 tokens issued on top of Ethereum.

In general, it is technically challenging to launch wholly new tokens on new codebases, but much easier to launch new tokens through Bitcoin forks or Ethereum-based ERC20 tokens.

The latter deserves particular mention, as Ethereum makes it so simple to issue these tokens that they are the first example in the Ethereum tutorial! Nevertheless, the ease with which Ethereum-based tokens can be created does not mean they are inherently useless. Often these tokens are a sort of public IOU intended for redemption in a future new chain, or some other digital good.

3. Token buyers are buying private keys

When a new token is created, it is often pre-mined, sold in a crowdsale/token launch, or both. Here, “pre-mining” refers to allocating a portion of the tokens for the token creators and related parties. A “crowdsale” refers to a Kickstarter-style crowdfunding in which internet users at large have the opportunity to purchase tokens.

Given that tokens are digital, what do token buyers actually buy? The essence of what they buy is a private key. For Bitcoin, this looks something like this:

5Kb8kLf9zgWQnogidDA76MzPL6TsZZY36hWXMssSzNydYXYB9KF

For Ethereum, it looks something like this:

3a1076bf45ab87712ad64ccb3b10217737f7faacbf2872e88fdd9a537d8fe266

You can think of a private key as being similar to a password. Just like your private password grants you access to the email stored on a centralized cloud database like Gmail, your private key grants you access to the digital token stored on a decentralized blockchain database like Ethereum or Bitcoin.

There is one major difference, however: unlike a password, neither you nor anyone else can reset your private key if you lose it. If you have the private key, you have possession of your tokens. If you do not, you have lost access.
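Under the hood, a private key of the Ethereum style shown above is nothing more than a 256-bit number; the hex string is just one conventional way of writing it down. A minimal sketch in Python, using the example key from this post:

```python
# The example Ethereum-style private key from above, as a hex string.
key_hex = "3a1076bf45ab87712ad64ccb3b10217737f7faacbf2872e88fdd9a537d8fe266"

# A private key is just a (very large) integer...
key = int(key_hex, 16)
assert 0 < key < 2**256  # ...one that fits in 256 bits

# Losing the hex string means losing the number. There is no central
# party that can recover or reset it for you.
print(key.bit_length())  # at most 256
```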

4. Tokens are analogous to paid API keys

The best existing analogy for tokens may be the concept of a paid API key. For example, when you buy an API key from Amazon Web Services for dollars, you can redeem that API key for time on Amazon’s cloud. The purchase of a token like ether is similar, in that you can redeem ETH for compute time on the decentralized Ethereum compute network.

This redemption value gives tokens inherent utility.

Tokens are similar to API keys in another respect: if someone gains access to your Amazon API keys, they can bill your Amazon account. Similarly, if someone sees the private keys for your tokens, they can take your digital currency. Unlike traditional API keys, though, tokens can be transferred to other parties without the consent of the API key issuer.

So, tokens are inherently useful. And tokens are tradeable. As such, tokens have a price.

5. Tokens are a new model for technology, not just startups

Because tokens have a price, they can be issued and sold en masse at the inception of a new protocol to fund its development, similar to the way startups have used Kickstarter to fund product development.

The money is typically received in digital currency form and goes to the organization issuing the tokens, which can be a traditional company or an open source project funded entirely through a blockchain.

In the same way that boosting sales is an alternative to raising money, token launches can be an alternative to traditional equity-based financings — and can provide a way to fund previously unfundable shared infrastructure, like open source. A word of caution, though: read these three posts and consult a good lawyer before embarking on a token launch!

6. Tokens are a non-dilutive alternative to traditional financing

Tokens aren’t equity, because they have intrinsic use and because they are non-dilutive to the company’s capitalization table. A token sale is more similar to a Kickstarter sale of paid API keys than equity crowdfunding.

However, when considered as an alternative to classic equity financing, token sales yield a >100X increase in the available base of buyers and a >1000X improvement in the time to liquidity over traditional methods for startup finance. The three reasons why: a 30X increase in US buyers, a 20–25X increase in international buyers, and a 1000X improvement in time-to-liquidity.

7. Tokens can be bought by any American (>30X increase in buyers)

A token launch differs from an equity sale — the latter is regulated by the 1934 Act, while the former is more similar to a sale of API keys.

While equities can only be sold in the US to so-called “accredited investors” (the 3% of adults with >$1 million in net worth), the US could not restrict the sale of API keys to accredited investors alone without crippling its IT industry. Thus, if tokens (like API keys) can be sold to 100% of the American population, it would represent an increase of 33x in the available US buyer base relative to a traditional equity financing for a US startup.
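The arithmetic behind that multiplier is simple. Assuming, as the post states, that roughly 3% of US adults qualify as accredited investors:

```python
# If only ~3% of US adults are accredited investors, opening a sale
# to 100% of the adult population multiplies the available US buyer
# base by roughly 1 / 0.03.
accredited_share = 0.03
increase = 1 / accredited_share
print(round(increase))  # ≈ 33
```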

Do note, however: some people might want to issue a token and explicitly advertise it as a way to share in the profits of their efforts as a company. For example, the issuer might want to entitle token holders to corporate dividends or voting rights, or denominate the company’s ownership stock in tokens. In these cases, we really are talking about tokenized equity (namely, a securities issuance), which is very different from the appcoin examples we’ve discussed. Don’t issue tokenized equity unless you want to be limited to accredited investors under US securities laws. The critical distinction is whether the token is simply a useful and tradable digital item like a paid API key. Again: read these three posts and consult a good lawyer before embarking on a token launch!

8. Tokens can be sold internationally over the internet (~20–25X increase in buyers)

Token launches are typically international affairs, with digital currency transfers coming in from all over the world. New bank accounts receiving thousands of wires from all over the world in minutes for millions of dollars would likely be frozen, but a token sale paid in digital currency is always open for business. Given that the US is only ~4–5% of world population, the international availability provides another factor of 20–25X in the available buyer base.

9. Tokens have a liquidity premium (>1000X improvement in time-to-liquidity)

A token has a price immediately upon its sale, and that price floats freely in a global 24/7 market. This is quite different from equity. While it can take 10 years for equity to become liquid in an exit, you can in theory sell a token within 10 minutes — though founders can and should cryptographically lock up tokens to discourage short-term speculation.

Whether or not you choose to sell or use your tokens, the ratio between 10 years and 10 minutes to get the option of liquidity is up to a 500,000X speedup in time, though of course any appreciation in value is likely to be larger and more sustainable over a 10 year window.

This huge liquidity premium alone would cause tokens to predominate whenever they are legally and technically feasible, because the time to liquidity enters inversely in the exponent of the compound annual growth rate. Fast liquidity permits reinvestment in new tokens permits faster growth.
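The point about time-to-liquidity entering the exponent can be made concrete. Assuming, purely for illustration, a fixed 2X return per deal, the annualized growth rate depends dramatically on how often capital can be recycled:

```python
# A hypothetical 2X return per deal, compounded at different holding
# periods: CAGR = multiple ** (1 / years_per_deal).
for years_per_deal in (10, 1, 0.25):
    cagr = 2 ** (1 / years_per_deal)
    print(f"{years_per_deal:>5} years/deal -> {cagr:.2f}x per year")
```

A 2X deal held for 10 years compounds at about 1.07x per year; the same return recycled quarterly compounds at 16x per year. That is the sense in which faster liquidity permits faster growth.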

10. Tokens will decentralize the process of funding technology

Because token launches can occur in any country, the importance of coming to the United States in general or Silicon Valley / Wall Street in particular to raise financing will diminish. Silicon Valley will likely remain the world’s leading technology capital, but it will not be necessary to physically travel to the United States as it was for a previous generation of technologists.

11. Tokens enable a new business model: better-than-free

Large technology companies like Google and Facebook offer extremely valuable free products. Despite this, they have sometimes come under fire for making billions of dollars while early adopters only receive the free service.

After the early kinks are worked out, the token launch model will provide a technically feasible way for tech companies (and open source projects in general) to spread the wealth and align their userbase behind their success. This is a better-than-free business model, where users make money for being early adopters. Kik is the first example of this, but expect to see more.

12. Token buyers will be to investors what bloggers/tweeters are to journalists

Tokens will break down the barrier between professional investors and token buyers in the same way that the internet brought down the barrier between professional journalists and tweeters and bloggers.

This will have several implications:

  • The internet allowed anyone to become an amateur journalist. Now, millions of people will become amateur investors.
  • As with journalism, some of these amateurs will do extremely well, and will use their token-buying track-record to break into professional leagues.
  • Just like it eventually became a professional requirement for journalists to use Twitter, investors of every size from seed funds to hedge funds will get into token buying.
  • New tools analogous to Blogger and Twitter will be developed that make it easy for people to use, buy, sell, and discuss tokens with others.

We don’t yet have a term for this, but perhaps it will be “commercial media” by analogy to “social media”.

13. Tokens further increase the primacy of the technologist over the traditional executive

Since the rise of Bill Gates in the late 70s, there has been a trend towards ever more tech-savvy senior executives. This trend is going to accelerate with token sales, as folks who are even more predisposed to the pure computer science end of the spectrum end up founding valuable protocols. Many successful token founders will have skillsets more similar to open source developers than traditional executives.

14. Tokens mean instant custody without intermediaries

Because token buyers need only hold private keys to guarantee custody, it changes our notion of property rights. For tokens, the final arbiter of who possesses what property is not a national court system but an international blockchain. While there will be many contentious edge cases to work through, over time blockchains will provide “rule-of-law-as-a-service” as an international, programmable complement to the Delaware Chancery Court.

15. Tokens may be generalizable to every tech company through paid logins

Can the token model be extended beyond pure protocols like Bitcoin, Ethereum, or ZCash? It’s not hard to imagine selling tokens as tickets — for access to logins, to car-rides, to future products. Or distributing them as rewards to the authors who power social networks and the drivers who power ride-sharing networks. Eventually, tokens can be extended to hardware as well: every time someone buys a slot in line for a Tesla Model 3 or re-sells a ticket, they’re exchanging a primitive token. But the model will need to work for protocols first before being generalized.

Conclusion

The token space is very early, and is likely to experience a dramatic correction over the next few weeks. To deal with the coming profusion of tokens we will need review sites like Coinlist, portfolio management tools like Prism, exchanges like GDAX, and many other pieces of supporting technical and legal infrastructure.

But the world has changed. Tokens represent a 1000X improvement over the status quo, and those don’t come around very often.

PS: If you thought this post was interesting, go join the list at 21.co/digital-currency/join. You’ll get paid bitcoin by token developers to hear about several upcoming launches.

V8, Advanced JavaScript, and the Next Performance Frontier [video]


Published May 18, 2017

This talk will help developers write performant JavaScript, use new language constructs (ES2015+, async/await, etc.), and learn about the latest developments in modern benchmarking. We'll also demo DevTools asynchronous debugging features and new JavaScript code coverage tools.

Watch more Chrome and Web talks at I/O '17 here: https://goo.gl/Q1bFGY
See all the talks from Google I/O '17 here: https://goo.gl/D0D4VE



An empirical study on the correctness of formally verified distributed systems


An empirical study on the correctness of formally verified distributed systems Fonseca et al., EuroSys’17

“Is your distributed system bug free?”

“I formally verified it!”

“Yes, but is your distributed system bug free?”

There’s a really important discussion running through this paper – what does it take to write bug-free systems software? I have a real soft-spot for serious attempts to build software that actually works. Formally verified systems, and figuring out how to make formal verification accessible and composable are very important building blocks at the most rigorous end of the spectrum.

Fonseca et al. examine three state-of-the-art formally verified implementations of distributed systems: IronFleet, Chapar (certified causally consistent distributed key-value stores), and Verdi. Does all that hard work on formal verification guarantee that they actually work in practice? No.

Through code review and testing, we found a total of 16 bugs, many of which produce serious consequences, including crashing servers, returning incorrect results to clients, and invalidating verification guarantees.

The interesting part here is the kinds of bugs they found, and why those bugs were able to exist despite the verification. Before you go all “see I told you formal verification wasn’t worth it” on me, the authors also look at distributed systems that were not formally verified, and the situation there is even worse. We have to be a little careful with our comparisons here though.

To find bugs in unverified (i.e., almost all) distributed systems, the authors sample bugs over a one year period, from the issue trackers of a number of systems:

These unverified systems are not research prototypes; they implement numerous and complex features, have been tested by innumerable users, and were built by large teams.

The unverified systems all contained protocol bugs, whereas none of the formally verified systems did. (Still, I’ve never met a user who, upon having a system crash on them and generate incorrect results, said “Oh well, at least it wasn’t a protocol bug” 😉 ).

Now, why do I say we have to be a little careful with our comparisons? The clue is in the previous quote – the unverified systems chosen “have been tested by innumerable users.” I.e., they’ve been used in the wild by lots of different people in lots of different environments, giving plenty of occasion for all sorts of weird conditions to occur and trip the software up. The formally verified ones have not been battle tested in the same way. And that’s interesting, because when you look at the bugs found in the formally verified systems, they relate to assumptions about the way the environment the system interacts with behaves. Assumptions that turn out not to hold all the time.

Bugs in formally verified systems! How can that be?

The bugs found by the team fall into three categories. By far the biggest group of bugs relate to assumptions about the behaviour of components that the formally verified system interacts with. These bugs manifest in the interface (or shim layer) between the verified and non-verified components.

These interface components typically consist of only a few hundred lines of source code, which represent a tiny fraction of the entire TCB (e.g., the OS and verifier). However, they capture important assumptions made by developers about the system; their correctness is vital to the assurances provided by verification and to the correct functioning of the system.

Two of the sixteen found bugs were in the specification of the systems analyzed: “incomplete or incorrect specification can prevent correct verification.” The team also found bugs in the verification tools themselves – causing the verifier to falsely report that a program passes verification checks for example! All of these verifier bugs were caused by functions that were not part of the core components of the verifier.

Let’s come back to those misplaced assumptions though. What’s most interesting about them, is that many of these assumptions (with the benefit of hindsight!) feel like things the designers should obviously have known about and thought about. For example:

And these are developers trying their very best to produce a formally verified and correct system. Which I think just goes to show how hard it is to keep on top of the mass of detail involved in doing so.

There were also a few bugs found which would always be tough to discover, such as subtle gotchas lurking in the libraries used by the system.

In total, 5 of 11 shim layer bugs related to communication:

Surprisingly, we concluded that extending verification efforts to provide strong formal guarantees on communication logic would prevent half of the bugs found in the shim layer, thereby significantly increasing the reliability of these systems. In particular, this result calls for composable, verified RPC libraries.

How can we build real-world “bug-free” distributed systems?

After discovering these gaps left by formal verification, the authors developed a toolchain called “PK,” which is able to catch 13 of the 16 bugs found. This includes:

  • Building in integrity checks for messages, and abstract state machines
  • Testing for liveness using timeout mechanisms
  • A file system and network fuzzer
  • Using negative testing by actively introducing bugs into the implementation and confirming that the specification can detect them during verification.
  • Proving additional specification properties (to help find specification bugs). “Proving properties about the specification or reusing specifications are two important ways to increase the confidence that they are correct.”
  • Implementing chaos-monkey style test cases for the verifier itself. “We believe the routine application to verifiers of general testing techniques (e.g., sanity checks, test-suites, and static analyzers) and the adoption of fail-safe designs should become established practices.”
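The first item on the list — integrity checks on messages — is easy to picture concretely. Here is a minimal, hypothetical Python sketch (not the paper's actual PK code) of guarding a wire message with a digest so a receiver in the shim layer can detect corruption before handing the message to the verified core:

```python
import hashlib

# Hypothetical sketch: seal a payload with a SHA-256 digest so the
# receiving shim layer can detect corruption on the wire.

def seal(payload: bytes) -> bytes:
    # Prepend a digest of the payload to the wire message.
    return hashlib.sha256(payload).digest() + payload

def unseal(message: bytes) -> bytes:
    # Recompute the digest; reject the message if it does not match.
    digest, payload = message[:32], message[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("integrity check failed: corrupted message")
    return payload
```

A check like this would not have caught the specification bugs, but it is exactly the kind of cheap runtime guard that patrols the boundary between the verified protocol logic and the unverified environment.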

Also of interest in this category are Jepsen, Lineage-driven fault injection, Redundancy does not imply fault tolerance: analysis of distributed storage reactions to single errors and corruptions, and Uncovering bugs in distributed storage systems during testing (not in production!).

The answer is not to throw away attempts at formal verification (“we did not find any protocol-level bugs in any of the verified prototypes analyzed, despite such bugs being common even in mature unverified distributed systems“). Formal verification can bring real benefits to real systems (see e.g., Use of formal methods at Amazon Web Services). I was also delighted to see that Microsoft’s Cosmos DB team also made strong use of formal reasoning with TLA+:

“When we started out in 2010, we wanted to build a system – a lasting system. This was the database of the future for Microsoft… we try to apply as much rigor to our engineering team as we possibly can. TLA+ has been wonderful in getting that level of rigor in a team of engineers to set the bar high for quality.” – CosmosDB interview with Dharma Shukla on TechCrunch

Instead we must recognise that even formal verification can leave gaps and hidden assumptions that need to be teased out and tested, using the full battery of testing techniques at our disposal. Building distributed systems is hard. But knowing that shouldn’t make us shy away from trying to do the right thing, instead it should make us redouble our efforts in our quest for correctness.

We conclude that verification, while beneficial, posits assumptions that must be tested, possibly with testing toolchains similar to the PK toolchain we developed.

Here are a few other related papers we’ve covered previously in The Morning Paper that I haven’t already worked into the prose above:

Show HN: Legion, an as-simple-as-possible blockchain server written in Haskell


README.md

An as-simple-as-possible blockchain server inspired by naivechain, but written in Haskell. Spinning up several Legion nodes creates a peer-to-peer network that synchronizes the blockchain across the network.
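The naivechain idea that Legion implements in Haskell is small enough to sketch in a few lines. This is a hypothetical Python illustration of the core data structure — not Legion's code: each block commits to its predecessor's hash, so a peer can validate a candidate chain received from the network before adopting it.

```python
import hashlib

# Hypothetical sketch of a naivechain-style hash-linked chain.

def block_hash(index, prev_hash, body):
    # A block's hash commits to its position, its predecessor, and its body.
    return hashlib.sha256(f"{index}|{prev_hash}|{body}".encode()).hexdigest()

GENESIS = {"index": 0, "prev_hash": "0", "body": "genesis",
           "hash": block_hash(0, "0", "genesis")}

def make_block(chain, body):
    # Append a new block that links to the current tip of the chain.
    prev = chain[-1]
    index = prev["index"] + 1
    return {"index": index, "prev_hash": prev["hash"], "body": body,
            "hash": block_hash(index, prev["hash"], body)}

def valid_chain(chain):
    # Every block must link to its predecessor and match its own hash.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
        if cur["hash"] != block_hash(cur["index"], cur["prev_hash"], cur["body"]):
            return False
    return True
```

Tampering with any block's body invalidates that block's hash and, transitively, every link after it — which is why peers can safely sync by fetching and checking the longest valid chain.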

Prereqs: To compile from source, you'll need stack.

Alternatively, you can get a precompiled pre-release binary. Note: if you download the binary from github, you'll need to mark it executable by running:

$ chmod +x legion-exe

Usage:

$ stack exec legion-exe [http port] [p2p port] [optional: `seedhost:seedP2PPort`]

Examples:

$ stack exec legion-exe 8001 9001

By default, legion will log what it's doing to standard out. In another terminal window:

$ stack exec legion-exe 8002 9002 localhost:9001

Alternatively, you can grab the binaries from the GitHub releases and run them directly rather than via stack exec.

The third argument tells the node where a seed node can be found to bootstrap its connection to the peer-to-peer network. The current state of the (valid) blockchain will be fetched from all servers, and each node will automatically keep itself updated and post its own updates to others.

The two nodes are now synced, and you can view the current chain from either node at http://localhost:$httpPort/chain, e.g. http://localhost:8001/chain

Add a new block to the blockchain via a POST request to /block:

$ curl -H "Content-Type: application/json" -X POST -d '{"blockBody": "this is the data for the next block"}' http://localhost:8001/block

Extracting Chinese Hard Subs from a Video, Part 1


I’ve been watching the Chinese TV show 他来了,请闭眼 (Love Me If You Dare). It’s a good show, kinda reminiscent of the BBC series Sherlock, likewise a crime drama centered around an eccentric crime-solving protagonist and a sympathetic sidekick. You should check it out if you’re into Chinese film or are learning Chinese and want something interesting to watch.

I wanted to get a transcript of the episode’s dialog so I could study the unfamiliar vocabulary. Unfortunately, the video files I have only have hard subtitles, i.e. the subtitles are images composited directly into the video stream. After an hour spent scouring both the English- and Chinese-language webs, I couldn’t find any soft subs (e.g. in SRT format) for the show.

So I thought it’d be interesting to try to convert the hard subs in the video files to text. For example, here’s a frame of the video:

car scene

From this frame, we want to extract the text “怎么去这么远的地方”. To approach this, we’re going to use the Tesseract library and the PyOCR binding for it.

We could just try throwing Tesseract at it and see what comes out:

import pyocr
from PIL import Image

LANG = 'chi_sim'
tool = pyocr.get_available_tools()[0]
print(tool.image_to_string(Image.open('car_scene.png'), lang=LANG))

Running it:

$ python snippet_1.py
$

Hmm, so that didn’t work. What’s happening?

Tesseract requires that you clean your input image before you do OCR. Our input image is full of irrelevant background features but Tesseract expects clean black text on a white background (or white on black).

To remove the background image and get just the subtitles, we turn to OpenCV. The easiest part is cropping the image. We keep a larger left/right border because some frames have more text:

import cv2

TEXT_TOP = 621
TEXT_BOTTOM = 684
TEXT_LEFT = 250
TEXT_RIGHT = 1030

img = cv2.imread('car_scene.png')
cropped = img[TEXT_TOP:TEXT_BOTTOM, TEXT_LEFT:TEXT_RIGHT]
cv2.imshow('cropped', cropped)
cv2.waitKey(10000)

The result:

car scene cropped

Now we want to isolate the text. The text is white, so we can mask out all the areas in the image that aren’t white:

white_region = cv2.inRange(cropped, (200, 200, 200), (255, 255, 255))

This uses the OpenCV inRange function. inRange returns a value of 255 (pure white in an 8-bit grayscale context) for pixels where the red, blue, and green components are all between 200 and 255, and 0 (black) for pixels that are outside this range. This is called thresholding. Here’s what we get:

car scene white region

A lot better! Let’s run Tesseract again:

extracted_text = tool.image_to_string(Image.fromarray(white_region), lang=LANG)
print(extracted_text)

And Tesseract returns (drumroll…):

′…′二′′′'′ 怎么去逯么远的地方 '/′

Now we’re getting somewhere! Several areas in the background are white, so when we pass those through to Tesseract it interprets them as assorted punctuation. Let’s strip out these non-Chinese characters using the built-in Python unicodedata library:

import unicodedata

chinese_text = []
for c in extracted_text:
    if unicodedata.category(c) == 'Lo':
        chinese_text.append(c)
chinese_text = ''.join(chinese_text)
print(chinese_text)

The 'Lo' here is one of the General Categories that Unicode assigns to characters and stands for “Letter, other”. It’s good for extracting East Asian characters. From this code we get:

二怎么去逯么远的地方

There are two mistakes here: a spurious 二 character on the front, and a mismatched character in the middle (that 逯 should be 这). Still, not bad!

That’s all for now, but in Part 2 (and maybe Part 3?) of this post series I’ll discuss how we can use some more advanced techniques to perfect the above example and also handle cases where extracting the text isn’t so straightforward. If you can’t wait until then, the code is on GitHub.

If you have any comments about this post, join the discussion on Hacker News, and if you enjoyed it, please upvote on HN!

Wikipedia’s Switch to HTTPS Has Successfully Fought Government Censorship


"Knowledge is power," as the old saying goes, so it's no surprise that Wikipedia—one of the largest repositories of general knowledge ever created—is a frequent target of government censorship around the world. In Turkey, Wikipedia articles about female genitals have been banned; Russia has censored articles about weed; in the UK, articles about German metal bands have been blocked; in China, the entire site has been banned on multiple occasions.

Determining how to prevent these acts of censorship has long been a priority for the non-profit Wikimedia Foundation, and thanks to new research from the Harvard Center for Internet and Society, the foundation seems to have found a solution: encryption.

In 2011, Wikipedia added support for Hyper Text Transfer Protocol Secure (HTTPS), the encrypted version of its predecessor, HTTP. Both protocols are used to transfer data from a website's server to the browser on your computer, but when you connect to a website using HTTPS, your browser first asks the web server to identify itself. The server responds with its certificate, which contains its public key; the browser uses that key to encrypt a freshly generated session key and sends it back to the server, which decrypts it with its private key. All data sent between the browser and server for the remainder of the session is then encrypted with that session key.
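The client side of that exchange can be sketched with Python's standard ssl module. This is an illustrative sketch under stated assumptions — not Wikipedia's or any browser's actual code, and the hostname in the commented-out connection is just an example:

```python
import ssl

# A default client context performs the checks described above: the
# server must present a certificate chaining to a trusted CA, and the
# certificate must match the hostname the client asked for. The session
# keys are then negotiated automatically when a socket is wrapped.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # server must prove its identity
assert ctx.check_hostname                    # cert must match the hostname

# Wrapping a TCP socket would then run the key exchange, e.g.:
# with socket.create_connection(("en.wikipedia.org", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="en.wikipedia.org") as tls:
#         ...  # all bytes sent on `tls` are now encrypted
```

An observer on the network sees the hostname (via DNS and the handshake) but not the encrypted request path — which is exactly the property the article describes next.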

"The decision to shift to HTTPS has been a good one in terms of ensuring accessibility to knowledge."

In short, HTTPS prevents governments and others from seeing the specific page users are visiting. For example, a government could tell that a user is browsing Wikipedia, but couldn't tell that the user is specifically reading the page about Tiananmen Square.

The researchers saw a sharp drop in traffic to the Chinese language Wikipedia around May 19, 2015, indicating a censorship event. This did in fact turn out to be the case—the site had been blocked in anticipation of the upcoming anniversary of the Tiananmen Square massacre. Image: Harvard

Up until 2015, Wikipedia offered its service using both HTTP and HTTPS, which meant that when countries like Pakistan or Iran blocked certain articles on the HTTP version of Wikipedia, the full version would still be available using HTTPS. But in June 2015, Wikipedia decided to axe HTTP access and only offer access to its site with HTTPS. The thinking was that this would force the hand of restrictive governments when it came to censorship — due to how this protocol works, governments could no longer block individual Wikipedia entries. It was an all or nothing deal.

Critics of this plan argued that this move would just result in more total censorship of Wikipedia and that access to some information was better than no information at all. But Wikipedia stayed the course, at least partly because its co-founder Jimmy Wales is a strong advocate for encryption. Now, new research from Harvard shows that Wales' intuition was correct—full encryption did actually result in a decrease in censorship incidents around the world.

The Harvard researchers began by deploying an algorithm which detected unusual changes in Wikipedia's global server traffic for a year beginning in May 2015. This data was then combined with a historical analysis of the daily request histories for some 1.7 million articles in 286 different languages from 2011 to 2016 in order to determine possible censorship events. At the end of their year-long data collection, the Harvard researchers also did a client-side analysis, where they would try to access various Wikipedia articles in a variety of languages as they would be seen by a resident in a particular country.

Read More: Jimmy Wales to China After Blocking Wikipedia: I Can Outwait You

After a painstakingly long process of manual analysis of potential censorship events, the researchers found that, globally, Wikipedia's switch to HTTPS had a positive effect on the number of censorship events, by comparing server traffic from before and after the switch in June of 2015.

Although countries like China, Thailand and Uzbekistan were still censoring part or all of Wikipedia by the time the researchers wrapped up their study, they remained optimistic: "this initial data suggests the decision to shift to HTTPS has been a good one in terms of ensuring accessibility to knowledge."

Show HN: Early-stage Yahoo Pipes spiritual successor


You enter data by pointing a block to a URL and an RSS (or Atom) feed. This feed then moves through your pipe, block by block, and each block can manipulate it. A simple example is combining multiple RSS feeds into one and filtering that combined data stream for a keyword.
