
Join Sourceress (YC S17) as tech lead to build AI to get people jobs that matter


People hate their jobs (70% of Americans according to Gallup) and it’s hard for them to leave because they don’t know what else is out there. We want to bring this down to 0%.

We’re tearing down the old system of resumes and job descriptions, replacing it with AI models that rigorously capture the nuances of open positions and find the ideal candidates for each company. As we grow, we’ll be able to model every job — which means we can then tell every candidate exactly what opportunities are available, based on their skills and interests, at any point in their career. 

If we can reduce friction to finding higher impact work, we’ll help people be more productive, feel more fulfilled, and ultimately accelerate human progress.


Extreme event attribution is an expanding subfield of climate science


As floodwaters from the swollen River Thames crept closer to the walls of Myles Allen's south Oxford home in the United Kingdom, he was thinking about climate change—and whether scientists could figure out if it was affecting the rising water outside.

It was January 2003, and as Allen—a climate expert at the University of Oxford—monitored the rising waters from the safety of his house, a voice on the radio was telling him that it couldn't be done. Sure, the flood was the type of event likely to be made more frequent by global warming, the representative of the United Kingdom's Met Office said on the show. But ascertaining anything more concrete was out of reach.

At the time, the Thames River Basin had seen some of its greatest rainfall in decades, and by early January, the flow in some parts of the river was the highest it had been since 1947.

But the radio voice added that it would be "impossible to attribute this particular event [floods in southern England] to past emissions of greenhouse gases," said Allen in a commentary published in Nature shortly thereafter.

In 2003, that was the predominant view in the scientific community: While climate change surely had a significant effect on the weather, there was no way to determine its exact influence on any individual event. There were just too many other factors affecting the weather, including all sorts of natural climate variations.

But Allen wasn't so sure.

"At the time, everybody was saying, 'Well, you can't attribute a single event to climate change,'" he said in an interview with E&E News. "And this prompted me to ask, 'Why not?'"

So he drafted his commentary as the floodwaters inched closer to his kitchen door. He wrote that it might not always be impossible to attribute extreme weather events to climate change—just "simply impossible at present, given our current state of understanding of the climate system." And if researchers were ever able to make that breakthrough, he mused, the science could potentially influence the public's ability to blame greenhouse gas emitters for the damages caused by climate-related events.

His hunch held true. Nearly 15 years later, extreme event attribution not only is possible, but is one of the most rapidly expanding subfields of climate science.

"The public stance of the scientific community about individual event attribution in the year 2000 is that it's not something that science does," said Noah Diffenbaugh, a Stanford University climate scientist and attribution expert. "And so to go from that to now, that you'll find a paper every week ... that's why we say there's been an explosion of research. It's gone from zero to 60, basically."

Over the last few years, dozens of studies have investigated the influence of climate change on events ranging from the Russian heat wave of 2010 to the California drought, evaluating the extent to which global warming has made them more severe or more likely to occur.

The Bulletin of the American Meteorological Society now issues a special report each year assessing the impact of climate change on the previous year's extreme events. Interest in the field has grown so much that the National Academy of Sciences released an in-depth report last year evaluating the current state of the science and providing recommendations for its improvement.

And as the science continues to mature, it may have ramifications for society. Legal experts suggest that attribution studies could play a major role in lawsuits brought by citizens against companies, industries or even governments. They could help reshape climate adaptation policies throughout a country or even the world. And perhaps more immediately, the young field of research could be capturing the public's attention in ways that long-term projections for the future cannot.

"I think the public and many policymakers don't really take those 100-year forecasts very seriously," said Allen, who is now one of the world's leading experts in attribution science. "They are much more seriously interested in the question of what is happening now and why—which boils down to attribution."

The birth of a new field

Less than two years after his flood-inspired commentary, Allen was publishing another paper in Nature — this time, one that would help turn extreme event attribution from a scientific impossibility to a burgeoning new field.

In 2004, he, Oxford colleague Daithi Stone and Peter Stott of the Met Office co-authored a report that is widely regarded as the world's first extreme event attribution study. The paper, which examined the contribution of climate change to a severe European heat wave in 2003—an event that may have caused tens of thousands of deaths across the continent—concluded that "it is very likely that human influence has at least doubled the risk of a heat wave exceeding this threshold magnitude."

Before this point, climate change attribution science had existed in other forms for several decades, according to Diffenbaugh. Until 2004, much of the work had focused on investigating the relationship between human activity and long-term changes in climate elements like temperature and precipitation. More recently, scientists had been attempting to understand how these changes in long-term averages might affect weather patterns in general.

The breakthrough paper took the existing science a step further. Using a climate model, the researchers compared simulations accounting for climate change with scenarios in which human-caused global warming did not exist. They found that the influence of climate change roughly doubled the risk of an individual heat wave. The key to the breakthrough was framing the question in the right way—not asking whether climate change "caused" the event, but how much it might have affected the risk of it occurring at all.

Despite a reluctance to attempt this type of research, the response from other scientists was "not particularly controversial," according to Allen. Instead, he said, "much of the reaction was more along the lines of that it was kind of obvious."

Since then, interest in extreme event attribution has continued to grow—slowly at first, and now with increasing momentum. According to last year's National Academy of Sciences report, "An indication of the developing interest in event attribution is highlighted by the fact that in 4 years (2012-2015), the number of papers increased from 6 to 32."

According to Friederike Otto, an attribution expert at the University of Oxford, the progression of technology—namely, the improvement of climate models—is driving the recent surge.

"Extremes are, by definition, rare," she told E&E News. So in order to actually get one to crop up in a simulation, models need to accurately represent the physical factors that help these extremes occur, and researchers need to be able to run them over and over again. The development and improvement of climate ensembles—large groups of slightly different climate models—have improved scientists' ability to simulate weather events under different conditions.

"I think it's not so much that the philosophy changed, but that the technology has changed," Otto said.

Asking the right questions

In 2010, a record-breaking heat wave swept through Russia, driving temperatures in some places above 100 degrees Fahrenheit. According to some estimates, the extreme temperatures contributed to the deaths of more than 50,000 people.

Two separate studies attempted to quantify the influence of climate change on that event and appeared to come to very different conclusions, inspiring a confusing series of headlines in the news. One research paper, published in Geophysical Research Letters, suggested that the heat wave was mainly the product of natural climate variations, while the other, in Proceedings of the National Academy of Sciences, claimed that human-caused climate change was a major factor.

"That, of course, sounded as if they were contradictory," said Otto, the Oxford attribution expert. For a brief time, scientists were bemused—the two sets of findings had to be at odds with one another.

But in a separate paper, published in 2012 in Geophysical Research Letters, Otto, Allen and several other colleagues demonstrated that the two studies were actually investigating two different questions—and their conclusions were compatible.

The first study, they found, explored the extent to which climate change had affected the heat wave's magnitude, or severity, and concluded that natural climate variations were mainly accountable. The second had investigated global warming's influence on the heat wave's overall probability of occurring. It's possible for climate change to have a significant effect on one factor, but not the other, for the same event, Otto and her colleagues pointed out.

Today, scientists still generally agree that it's impossible to attribute any individual weather phenomenon solely to climate change. Storms, fires, droughts and other events are influenced by a variety of complex factors. And they're all acting at once, including both natural components of the climate system and sometimes unrelated human activities. For instance, a wildfire may be made more likely by hot, dry weather conditions, and by human land-use practices.

But what scientists can do is investigate the extent to which climate change has influenced a given event. Generally, researchers do this with the help of climate models, which allow them to run simulations accounting for the influence of climate change alongside simulations that assume that climate change did not exist. Then they compare the outcomes. The focus is typically on highly unusual or even unprecedented events where the influence of human-caused climate change, as opposed to natural climate variability, is likely to be clearer.

Certain types of events lend themselves to analysis better than others. For instance, researchers have high confidence when investigating heat waves, droughts or heavy precipitation. But they have less confidence when it comes to hurricanes and other more complex phenomena.

Still, scientists are investigating all kinds of weather events. The special issue of the Bulletin of the American Meteorological Society issued last month included about two dozen papers on a variety of extreme events from 2016, ranging from snowstorm Jonas to the heat-induced bleaching of the Great Barrier Reef.

It also contained some surprises: Three papers, for the first time in the Bulletin's history, suggested that the studied events not only were influenced by climate change but could not have occurred without it (Climatewire, Dec. 14, 2017). The studies determined that the record-breaking global temperatures in 2016 (the hottest year ever recorded), extreme heat in Asia and an unusually warm "Blob" of water off the coast of Alaska would all have been impossible in a world where human-caused climate change did not exist.

Scientists have cautioned that the findings don't necessarily overturn the existing narrative that no single event can be attributed to climate change. Even events that would not have been possible without warming are still influenced by the Earth's natural climate and weather systems. But the research does make it clear that the planet has reached a new threshold in which climate change has become not only a component of extreme weather events but an essential factor for some.

As scientists continue to investigate the weather and climate events that reflect the changing planet, the two questions asked by the Russian heat wave studies—one focusing on probability, and the other on magnitude—have emerged as two main approaches used in attribution studies. The probability approach is perhaps most significant from a policy perspective, Otto suggested, because it helps identify the types of events that might become more common in the future and where they may occur.

The second method, sometimes called the "anatomy of an extreme event," advances scientists' understanding of the components that cause these events, and how changes to the climate system may affect them.

Both approaches are strengthening the body of evidence that climate change can influence the kinds of damaging weather events formerly thought of as "natural" disasters. As a result, some experts now believe that extreme event attribution could be the cutting edge not only of climate science but of climate litigation, as well.

New frontiers

In the aftermath of Hurricane Katrina, which devastated the Mississippi and Louisiana shorelines in 2005, residents of the U.S. Gulf Coast felt that Mother Nature wasn't the only one to blame for the damage. A month after the storm struck, a group of citizens filed a lawsuit, Comer v. Murphy Oil, against a group of oil and energy companies for releasing greenhouse gas emissions. The plaintiffs said the emissions had contributed to climate change, which intensified the hurricane's effects.

After an unusual series of legal maneuverings, including dismissals, appeals, reversals and recusals, the case was ultimately dismissed (Greenwire, June 1, 2010). It never went to trial. But the message was clear: The public is paying attention to the links between climate change and harmful extreme weather.

Allen, the Oxford scientist, had hinted at such litigation two years before Hurricane Katrina occurred. In his Nature commentary, he mused about the possibility of massive class-action lawsuits—carrying the potential for "up to six billion plaintiffs" around the world—attempting to hold greenhouse gas emitters liable for damages.

Attribution studies could help to apportion the blame, he noted. For instance, he wrote, "If, at a given confidence level, past greenhouse-gas emissions have increased the risk of a flood tenfold, and that flood occurs, then we can attribute, at that confidence level, 90% of any damage to those past emissions."

More recently, other legal experts have suggested that the fossil fuel industry isn't the only player at risk of being sued over climate-related damages. In the future, attribution studies could become evidence in cases against governments or private companies for failing to protect property, or public infrastructure, against extreme weather in a warming world.

In a recent paper published in August in the journal Nature Geoscience, legal experts from the United States and the United Kingdom argued, "Improvements in attribution science are affirming the foreseeability of certain climatic events and patterns in specific locations, and in identifying increasing risks of consequential impacts on property, physical assets and people." As a result, they wrote, attribution studies may inspire an increase in climate change litigation in the future.

Citizens have already sued other parties for failing to protect them from natural disasters, even if they don't specifically blame climate change. In another post-Katrina case, for instance, a class-action case claimed that the U.S. Army Corps of Engineers, and other parties involved in the planning and construction of Louisiana's levees, should be held accountable for the levees' failures. In that case, a settlement was reached—and as with the case brought against the energy companies, it's unclear how the lawsuit might have panned out had it gone to court.

Now, more than 12 years after Katrina, with a growing stack of studies pointing to the link between climate change and damaging weather events, experts are warning that these types of lawsuits may become more commonplace. The idea is that if decisionmakers are aware that climate change can make certain events more frequent or more severe, they may be held legally responsible for failing to prepare for the worst.

As a general rule, extreme event attribution studies don't predict the likelihood of a future event. They focus on how climate change has affected events that have already happened. Even the National Academy of Sciences report warned, "Attribution studies of individual events should not be used to draw general conclusions about the impact of climate change on extreme events as a whole."

But from a legal standpoint, the studies' implications could be broader. Lindene Patton, a member of the legal team at the Earth & Water Group and one of the authors of the Nature Geoscience paper, noted that as more and more attribution studies are released, they could accumulate a body of evidence suggesting that events occurring today are being influenced by climate change—meaning governments and businesses should be prepared for similar events in the future.

"When the science changes, when a body of knowledge to which a responsible professional is expected to keep up with and understand and pay attention to—when that changes, it changes what they have to do to protect people," she told E&E News. "It changes the standard of care."

Michael Burger, executive director of Columbia Law School's Sabin Center for Climate Change Law, cautioned that the field likely has some serious maturing to do before it becomes a major tool for climate litigation. There's no standardized method for conducting all attribution studies, he noted. Different research groups tend to use different models, ask different questions or use different criteria for selecting the events they investigate, making individual analyses difficult to compare.

"In court, expert testimony doesn't need to reflect a consensus view," Burger said. "It does need to be based, however, on established methodologies. And here, it's not clear that we're at the point yet where we have those established methodologies."

But that point may be coming. Some scientists hope to eventually launch a kind of standardized extreme event attribution service, similar to a weather forecasting service, that would release immediate analyses—with the same uniform methods used for each one—for every extreme event that occurs.

It's still unclear what such a service might look like, but one could imagine receiving an email or smartphone notification each time an extreme heat wave or flood rolls through, explaining its connection to climate change.

The Met Office is already working on such a project, although it's in early stages. The European Prototype demonstrator for the Harmonisation and Evaluation of Methodologies for attribution of extreme weather Events, or EUPHEME, is an ongoing project designed to "build the bridge between science and an operational service," according to Nikos Christidis, one of the scientists involved.

In the meantime, though, individual studies are expected to keep rolling out. This last summer alone, a wave of unusual events across the world—from Hurricane Harvey in the United States to devastating floods in Southeast Asia—sparked renewed interest in the link between extreme weather and climate change.

Scientists have already tackled some of them. Two separate studies published in December both found that climate change had influenced Harvey's record-breaking rainfall (Climatewire, Dec. 14, 2017).

As for the 2003 flood that sparked Allen's interest, its exact connection to global warming may remain in question. But scientists have investigated several similar events since then—Allen, himself, co-authored a paper published in Nature Climate Change concluding that the wet January that caused the south England floods of 2014 was made 43 percent more likely by climate change.

Future floods are less likely to go uninvestigated. According to Christidis, the Met Office scientist, extreme event attribution is not only a matter of scientific advancement but a public obligation.

"Every time we have a high-impact, catastrophic, perhaps extreme event happening, people are invariably asking the question, 'Is this climate change?'" Christidis said.

"The whole science of event attribution developed so that we can provide scientifically robust answers to these questions. If we the experts don't do this, then there will be people who are not qualified who will go and fill in the gaps. So this is the very important challenge that we are called to face."

Reprinted from Climatewire with permission from E&E News. E&E provides daily coverage of essential energy and environmental news at www.eenews.net.

“Ready Player One” is the worst thing nerd culture ever produced


This is not a series about games you haven’t heard of.  This is a series about games EVERYONE has heard of.  Games that everyone has an opinion on, regardless of whether they’ve played them or not.  Games whose actual qualities have been buried in a narrative, whether good or bad.  Games that everyone always makes the exact same comments about.  Games that are in desperate need of…a Second Opinion.

I work for a gaming website. I write about games. When we don’t write about games, we have a harder time sharing our content. For example: we were told we couldn’t share Welcome To Warcraft, a show about why WoW has some of the worst-written lore of all time, on sites like N4G because the moderators there weren’t convinced it was videogame-related enough. This led to that show doing extremely poorly and eventually getting cancelled.

I’m telling you this because I need you to understand that bashing Ready Player One is not an especially savvy move on my part. Far from being “clickbait,” this video is likely to make us much less money than it would if I had simply stuck to videogames. But with the release of last week’s trailer for the film and the subsequent disembark of the Hype Train, I felt this video was important. Because, you see, I don’t just dislike Ready Player One. I don’t just think it was a good idea that could have been done better in the hands of a competent writer or that it just lacked depth or any of the other “negative takes” written by cowards in a cowardly industry. I fucking despise Ready Player One, and consider it easily one of the worst books ever written, a piece of art that fails on every possible level, and which represents the absolute worst of nerd culture.

This isn’t a Second Opinion. This is a reckoning.

This will come as a surprise to absolutely no one, but I’m extremely passionate about the area of entertainment we collectively refer to as “nerd culture.” I like comic books, I like superheroes, I like Edgar Wright films – and most of all, I like videogames. What’s more, I’ve been privileged enough to make decent money here and there writing for and about these things, along with making videos, hosting podcasts, et cetera. And while I love my work, it does come with some unfortunate side effects, and chief among those side effects is the fact that people expect me to have read and enjoyed Ready Player One.

I have read Ready Player One – in fact, I’ve done so twice. But – and I’d like to officially nominate the following sentence for “Understatement Of The Year 2017” – I have not enjoyed it.

For those who live in blissful ignorance, Ready Player One is a 2011 novel written by Ernest Cline, who looks exactly like what you think he looks like. Actually, though, “written by” is a pretty generous attribution. Let me try again.

Ready Player One is a 2011 novel that lifts its setting, premise, and most of its story beats from 1992’s Snow Crash, removes all of the self-awareness, badass action, and philosophical musings on the nature of the relationship between language and technology, replaces them with painfully awkward 80s references, and changes the main character from a samurai pizza deliveryman and freelance hacker to the asshole kid in your friend group who claimed he “didn’t need showers,” vomited onto the page by Ernest Cline. Its bestseller success and Cline’s subsequent 7-figure sale of the screenplay to Steven Spielberg is as close as we can get to objective proof that the meritocracy isn’t working.

Of course, the stuff that rips off Snow Crash – that is to say, the actual plot – is a distant second priority to what the book’s really about: references. This is the part that everyone has already made fun of, but it’s usually from the perspective that “reference humor is always terrible.” I disagree with this premise – there’s actually a great movie that’s based on references, and it’s called Scott Pilgrim VS The World. There’s two reasons the references in Scott Pilgrim work. The first is that they actually serve a point – Scott Pilgrim is a film about how relationships are often harder than we think and how, rather than being the reward at the end of a successful adventure, love is an adventure, one which takes constant work to get better in the same way you have to grind through difficult challenges in a videogame. Its references aren’t just there to get you to go, “Ha! A thing I recognize!”, but to use a thing that you recognize to contextualize the point it’s trying to make. A joke about an extra life isn’t just a one-off joke – it’s a metaphor for how we often fail in life, find ourselves at a point so low we feel like we might as well be dead, and have to pick ourselves back up and keep fighting for love, work, self-respect, or whatever’s important in our lives. The 1-UP just serves as a funny way to get that fairly dark and complex idea across.

But the second reason these references work is that they’re often short and sweet. In the director’s own words, they were written to be so short that you could overlook them, which gives fans of “nerd culture” something to look for when they re-watch the film and means that people who don’t recognize extremely specific Legend of Zelda sound effects can still enjoy the movie without realizing they’ve missed anything.

Compare that to this shit. “I watched every episode of The Greatest American Hero, Airwolf, the A-Team, Knight Rider, Misfits of Science, and The Muppet Show. What about The Simpsons, you ask? I knew more about Springfield than I knew about my own city. Star Trek? Oh, I did my homework. TOS, TNG, DS9. Even Voyager and Enterprise. I watched them all in chronological order. The movies, too. Phasers locked on target…I learned the name of every last goddamn Gobot and Transformer. Land of the Lost. Thundarr the Barbarian, He-Man, Schoolhouse Rock! G.I. Joe – I knew them all. Because knowing is half the battle!”

The section I’m quoting goes on for ten pages. If I pitched an article to my editor that was just me saying the names of stuff I liked for ten pages, I’d be fired. Apparently if I pitched the same thing to Random House, I’d get a huge check and a lucrative movie deal.

And honestly? That’s one of the better uses of references in the book, because at least he just says the name of the thing and moves on. If he had written the aforementioned moment in Scott Pilgrim, Scott would have turned to the camera after grabbing the 1-UP and explained “1-UP is a videogame term that refers to an extra ‘life’ gained by the player to allow continuous play before game over. The term was first used in Super Mario Bros, a 1985 videogame developed and published by Nintendo.” Then he’d probably offer some unsolicited opinion, something like “Super Mario Bros was totally great, a masterpiece-“ hang on, that’s too complicated a word – “Super Mario Bros was totally great. It was great! It kicked ass!” And I know some of you think I’m going too far here, so let me hit you with another actual quote: “It’s fucking lame, is what it is! The swords look like they were made out of tinfoil. And the soundtrack is epically lame. Full of synthesizers and shit. By the motherfucking Alan Parsons Project! Lame-o-rama. Beyond lame. Highlander II lame.”

By this point, many of you will have probably realized the truth that lies at the heart of a book that can only be described as “Highlander III: The Sorcerer lame.” In Scott Pilgrim (or Wreck-it Ralph, or Who Framed Roger Rabbit, or Alice in Wonderland, or anything else that uses references and reference humor well) the references are used as a cultural shorthand that somehow means something in the context of the story. In Ready Player One, the references exist for one reason, and one reason only: to let you know how smart Ernest Cline is. Here’s a scene where the main character walks into a bar, hears a song playing, and has to recite the song’s name, recording artist, and release date so that everyone else in the bar knows he knows a lot about music. That’s the entirety of Ready Player One, extended over several hundred pages of torture.

In fact, let’s talk more about the main character. In Ready Player One, nobody on the planet matters except for Wade Watts – every character exists to serve him, whether it’s his perfect magical girlfriend, the gigantic corporation that owns a million holding companies and whose sole interest is tracking down one guy, or the character who transparently exists just to prove he’s not racist (more on that later.) And why is Wade so amazing? What makes him the savior of the universe? He has in-depth knowledge of and appreciation for the media that was popular when Ernest Cline was a kid.

The novel’s main deviation from the world of Snow Crash – besides sucking – is that not only is 80s culture still the dominant culture in 2044, it’s the only culture. By Wade’s own admission, no one has created new movies, television, or videogames in the past half-century because everyone’s been too busy trying to find the clues for the Easter Egg hunt (I could bother explaining this plotline, but I won’t, because it’s just a contrivance to explain why anyone in 2044 would still give a shit about Ladyhawke.) This is a horrifying idea. It’s okay to like or even prefer older media – lord knows I talk about Doom enough to bear out that sentiment – but the day the human race stops producing new art, new stories, new songs, and new ideas, that’s the day we might as well lie down and just wait for the rising sea levels to drown us all. If the best we as a species have to offer is Count Duckula, then it’s time to throw in the existential towel.

(Of course, Count Duckula isn’t mentioned in Ready Player One, because it came out in England, and therefore wasn’t a part of Ernest Cline’s childhood.)

How much does the book love Wade (who’s somehow bullied for liking nerd shit even though he lives in a world where the nerd shit he likes is practically holy writ, because why have a consistent setting when you could have an author-insert)? Not only does the book describe in detail from pages 183 to 194 the several days he spends fucking a sexbot and then raving about how important and great masturbation is. Not only does he a few pages later go on a tangent about how “geeks have a harder time getting laid than anyone” and go on a long, defensive rant about why it’s not Erne-WADE’S fault he’s still a virgin. Despite this, not only does the book’s only female character (until more than 300 pages in), a startlingly gorgeous girl who he constantly calls ugly and accuses of being a man, exist as a disgustingly transparent nerd sexual fantasy (she’s interested in all the same things Wade’s interested in, but nowhere near as good at the challenges as he is despite having the same or often better training.)

But towards the end of the novel, Wade is directly responsible for genocide, and is presented as the victim.

What, you think I’m joking?

“Genocide” may be a bit harsh, but he directly causes the deaths of the thousands of people in the Stacks, the place where he grew up, the place which includes his family and close friends. And once – only once – does he show any remorse, a single paragraph that’s completely forgotten. Worse, the reaction of the rest of the characters isn’t “how horrible” or “what a tragedy,” but simply telling Wade: “Thank god you weren’t there when it happened.” That’s another direct quote. In Ready Player One, nothing matters unless it directly affects Wade.

Later on, he’ll slip into depression – not because he committed a war crime, but because Girl Character has broken up with him for no reason BECAUSE WOMEN RIGHT. While a good writer might use this as an excuse to cause Wade to become introspective and experience a character arc that makes him less of a jerk, Ernest solves it in a single page when Wade purchases an exercise bike and works out until he’s not sad anymore.  No, I’m still not kidding.  I’m never kidding about how bad this book is.

I want to get back to this in a moment, but first, we need to talk about what some people in the comments are inevitably going to refer to as “SJW bullshit,” if they haven’t already done so because I mentioned that a female character was poorly-written. I guess in some ways, Ready Player One is actually a model of equality, because every character is poorly-written. Other than the girl, Art3mis, the most important secondary characters are Daito and Shoto. They’re Japanese stereotypes so embarrassingly written that in the movie I expect them to be played by two Scarlett Johanssons with scotch tape pulling their eyes back.

On page 154, during a team meeting, one of them says that “the Sixers [the villains of the book] have no honor.” In the same scene, two pages later, the OTHER character says “the Sixers have no honor,” because Ernest Cline couldn’t think of a second thing that a Japanese person would say. Writing good characters aside, I may never get over the fact that he uses the same piece of dialogue twice in a single scene. Didn’t this embarrassment to the word “literature” have an editor? Or were they not able to get all the way through this shitty thing either?

There’s plenty more examples of misogyny, racism, and even homophobia in the novel, though few quite as blatant as what we’ve already talked about. And here’s the thing: maybe you think diversity in storytelling isn’t an important issue. Maybe you think that stereotypes aren’t harmful. Or maybe you just think that these sorts of issues don’t necessarily ruin a piece of media for you – after all, Duck Soup is one of my favorite films, despite a particularly infamous racist joke towards the end that I will be the first to admit should never have been part of the movie.

The point is, no matter how you feel about the diversity issue, we can still all join hands and laugh at the terrible way Ernest Cline tries to address it in Ready Player One.

So, there’s only one major character who we haven’t talked about yet, and that’s Aech. Aech is Wade’s best friend, and therefore spends most of the book dutifully divided between doting sycophant, exposition machine, and punchline setup. Aech is a bland, poorly-written, personality-free archetype of exactly the kind you’ve come to expect from this book until page 318, when she and Wade meet in person. Yep, I said “she” – in the ultimate expression of “I can’t be racist, I have a black friend” – it turns out that Aech, who introduced herself to Wade as a man and who had a male avatar in the Metaverse-I-mean-OASIS, is both black AND female AND gay. After taking note of her (quote) “large bosom” – which is honest-to-god his first reaction to this revelation – Wade realizes that “None of [how I felt about Aech] could be changed by anything as inconsequential as her gender, or skin color, or sexual orientation.” I need you to understand this: to pre-emptively dodge criticism of his work, Cline literally introduces a black, gay, fat woman in the last pages of the book just so that his author-insert character can have a moment of introspection where he realizes how not-racist he is.

I could honestly go on for days about how badly-handled this is. Instead of actually making a character for whom being black, gay, or female are character traits, or actually looking at how being those things would affect people in a dystopian future, or diving into why such a person might want to adopt a different online persona (something which is a fascinating issue even in real life) Ernest says “none of that stuff matters! I, a white straight dude, never have to think about race or gender or sexuality, so no one else should either!” He barely even gives Aech any time to speak, devoting 90% of that chapter to Wade’s personal feelings about the matter.

Let me be clear: I’m not saying that Ernest Cline is racist, sexist, or homophobic – it’s really not my place to make any judgement of that type. What I am saying is that he is a terrible writer who writes terribly. He cannot conceive of anything outside of his own extremely insular experiences and doesn’t even put in the bare minimum of effort to give any character agency outside of his beloved protagonist. In fact, Wade immediately goes back to referring to Aech as “he” and “him” in dialogue and in narration, with no reason given other than “I just felt like calling him what I’d always called him,” which would not only be a pretty icky thing to do to a real human being, but is also utterly fucking bizarre. Why even introduce those traits if you’re not going to make them part of the story? You shot yourself in the foot with Chekhov’s gun and then put it back on the mantelpiece! It really feels like you inserted the two-page reveal in the middle of an already finished book after someone pointed out it didn’t pass the Bechdel Test (and still doesn’t, by the way.)

The fact that this thing actually got published – nay, became something of a cultural phenomenon – is absolutely hysterical.

I could go on and on about the bad writing in this book – about how it doesn’t follow the rule of “One Big Lie” by introducing three different unbelievable premises (the end of the world, the existence of a massive virtual reality, the nonsensical treasure hunt that can only be solved by knowing the most 80s trivia), or how Wade goes from being a fat neckbeard to a muscley Adonis after one month on an exercise bike – or hell, I could just read you some more amazing romantic dialogue like “The female of the species has always found me repellent” or “It was working for me. In a big way. In a word: hot” or “You can’t stop me from E-mailing you.” But as awful as the bad writing, stolen plot, and paper-thin characters are, it’s time to talk about what I really hate about Ready Player One.

As I mentioned before, I’m very much a part of “nerd culture.” I’ve been a computer science major in a school full of dudes, I’ve worked for multiple gaming websites, and I make talk-into-a-camera videos on the Internet. And to me, Ready Player One is an important book because it is a distillation of everything that is wrong with said culture.

See, when Fandom – which used to be a singular, all-encompassing term for “stuff nerds like” – got started, it was an extremely niche thing. Often, yes, people did get bullied for liking videogames or comics, for memorizing Star Wars facts in the same way other folks memorized football statistics. And then that stopped being the case. Star Wars: Episode VII is one of the highest-grossing films of all time, followed by Jurassic World and The Avengers. Game of Thrones is the most popular show on TV, and the most popular shows on Netflix are based on the freaking Defenders. Let that sink in: we live in an age where the average person on the street has almost certainly at least heard of Jessica Jones and Luke Cage. And on my home turf, videogames, Grand Theft Auto V sold faster than any piece of entertainment in history up to that point. It wasn’t the fastest-selling game. It wasn’t even the fastest-selling game or film. It was the fastest-selling anything. Period.

Congratulations, folks – the geek has inherited the Earth.

But along the way, the history of bullying caused some of us to develop some pretty bad habits. Like the gatekeeping that’s keeping people out of comic book shops even as Marvel is more popular than ever. Or the insular gaming communities – I’m talking to you, Doomworld – who attack all newcomers who don’t have the “100% correct” opinions on everything (possibly because a new game just came out and introduced a load of new fans to the series.) Or, yes, in extreme cases, attacks on women and minorities who seem like they’re “invading” nerds’ safe spaces. I myself, as one of the few public Jews writing about games, have to delete an anti-Semitic comment from the YouTube channel at a rate of about once a week, which is absolute child’s play compared to what many of my colleagues have to deal with on a daily basis thanks to a very small but very vocal contingent of geeks who are terrified of losing their toys in the wake of a very different world than the one they grew up in.

This is obvious, but I’ll say it anyway: liking videogames, comics, cartoons, obscure films, or anything else that would be considered part of Fandom does not make you a bad person. It doesn’t make you depraved or a mass murderer or whatever the hell Jack Thompson thought we all were in the 90s and 00s. But it also doesn’t make you a good person. You become a good person by being a good friend, helping the less fortunate, donating to worthy causes, et cetera. Playing games may be something you do – it may even be something you do a lot, or something that you consider a major part of your life and a huge influence on how you view the world (as it certainly is in my case.) But it’s not who you are.

Wade Watts, the main character of Ready Player One, is an asshole of the highest caliber. He’s a self-obsessed manchild who is never actually forced to change for the better or even to confront the cost of his actions. His obsession with 80s culture borders on the unhealthy, and he lives in a world where that unhealthy obsession has choked out the vital human need to create. And the book considers these his good traits. This is his superpower. And that’s concerning. It’s concerning to me that this is somebody’s perfect world.

If you want a geek hero, look at Peter Parker. He likes Star Wars and obsesses over superheroes. He’s a nerd. He gets bullied for being a nerd. But his fondness for LEGOs isn’t what makes him a hero – that would be his heroism. His goodness. The fact that he’ll go out of his way to help an old lady cross the street. He knows what it’s like to get picked on, and instead of picking on others in turn, he chooses to stand up for the little guy no matter how hard it is. Peter Parker is what geek culture needs to strive to be every day. When we write an article or a videogame or a book, we should think “Would Peter Parker write this? Would he agree with what we’re saying?”

And conversely, I propose we should also ask “would Wade Watts like this?” And if the answer is yes, you should delete your draft, burn your script, drown the thing in white-out and start over. And it’s this test, more than anything else, that Ready Player One so catastrophically fails. Yes, it’s boring, poorly-written, and literally contains a ten-page list of titles of things the author likes. But it also fails the basic test of humanity, creating a character and a world so repugnant that I feel more than justified in saying it represents the absolute worst of nerd culture.

Robust Client-Side JavaScript – A Developer’s Guide


The JavaScript programming language is an essential tool of web developers today. Websites ship more and more JavaScript to the browser to be more interactive. But the more complex client-side JavaScript gets, the more error-prone and fragile the user experience becomes. Why do we need to talk about robust JavaScript, and how do we achieve it?

Introduction

Characteristics of JavaScript

In the trinity of front-end web technologies – HTML, CSS and JavaScript – the latter is different from the others. HTML and CSS are declarative languages for the special purpose of structuring a text document and expressing style rules, respectively. Both HTML and CSS are designed in a way that allows browsers to process the code in a forgiving, fault-tolerant way. These design features are necessary to allow for backward and forward compatibility.

JavaScript however is a fully fledged programming language that happens to run in the context of a web page. JavaScript has only a few fail-safe and compatibility mechanisms built in. Whereas JavaScript’s power is unlimited, HTML and CSS have the least power that is necessary to serve their special purpose.

There are thousands of ways HTML, CSS and JavaScript might fail, and it happens every day, on almost every website. But when HTML or CSS fail, the impact is rather limited. A web page may still be usable in spite of several HTML and CSS errors. In contrast, a single JavaScript error may render the whole website unusable. Sometimes there are ways to recover, but the user might not be aware of them.

In this guide, we will investigate why JavaScript might fail and how to prevent or handle these errors in a graceful way that ensures a working website.

The browser as a runtime environment

Writing client-side JavaScript for the web differs from programming for other platforms. There is not one well-defined runtime environment a developer may count on. There is not one hardware architecture or device type. There is not a single vendor that defines and builds the runtime, the compiler and the tools.

The web is an open, vendor-independent, heterogeneous publishing platform. It is held together by several technical standards of varying quality. New standards appear frequently, old standards are amended or deprecated. Different standardization organizations follow different rules of procedure.

This has led to the following situation:

  • There is technical behavior that is standardized and that major browsers agree on. For example, the basic HTML elements are well-supported.
  • There is technical behavior that is standardized and that major browsers do not agree on. For example, browsers may have bugs in their implementation or simply not support newer standards yet.
  • There is technical behavior that is not standardized and that major browsers agree on. Standards may omit some details, leaving them for implementors to decide. Still, browser vendors copy the detailed behavior from other browsers for consistency.
  • There is technical behavior that is not standardized and that major browsers do not agree on. Typically new web technologies are born as proprietary experiments before entering a standardization process. Some technologies are never widely adopted and fall into oblivion.

There are numerous relevant browsers in numerous versions running on different operating systems on devices with different hardware abilities, internet connectivity, etc. The fact that the web client is not under their control maddens developers from other domains. They see the web as the most hostile software runtime environment. They understand the diversity of web clients as a weakness.

Proponents of the web counter that this heterogeneity and inconsistency is in fact a strength of the web. The web is open, it is everywhere, it has a low access threshold. The web is adaptive and keeps on absorbing new technologies and fields of applications. No other software environment so far has demonstrated this degree of flexibility.

Front-end developers benefit from a web that keeps on evolving and innovating. JavaScript developers in particular may quickly adopt new language features as soon as they are specified and implemented by some browsers. In this guide, we will explore how to use new features in a backwards-compatible way.

It is true that client-side JavaScript programming is a minefield. But there is a simple, Socratic principle that will light our way: Do not take anything for granted. Do not count on anything. Question your beliefs. If you know that you know nothing about the client that runs your JavaScript code, you can turn unfounded assumptions into justified knowledge.

Assumptions are necessary and inevitable in JavaScript, but we need to own these assumptions. Every JavaScript program makes a lot of assumptions about its runtime environment. While a low entry barrier is certainly desirable, the program still needs to fulfill its task; the requirements it places on the client should be in a well-balanced relation to the features it provides.

JavaScript standards

There is no single technical specification that defines JavaScript, but a whole bunch of specifications.

The ECMAScript specification defines the core of the language: the basic language features, the syntax, the execution and the standard library. A new version of ECMAScript is published every year, and ECMAScript 2017, Edition 8, also called ECMAScript 8, is the latest at the time of writing.

With ECMAScript alone, you cannot do anything useful. For example, there is no way to read or output any data. ECMAScript does not define the so-called host environment in which a program is executed. It allows several possible host environments. An HTML document in the browser is one possible host environment. Node.js is another popular one.

The host environment we are interested in here is primarily defined in the HTML specification. It not only defines HTML as a markup language, it also defines how JavaScript is executed in the context of an HTML document. It defines how JavaScript can access and alter the document. For this purpose, it relies on yet another specification: the Document Object Model (DOM).

The HTML and DOM specifications define the main objects that client-side JavaScript is dealing with: nodes, elements and events. Fundamental objects include window, window.alert(), document, document.body, document.getElementById() and document.createElement().
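As a minimal sketch of how these objects work together – assuming, purely for illustration, that the HTML contains an element with the id "page-title" – a script might look up an existing element, create a new one and attach it to the document:

  // Look up an existing element (assumed to exist in the HTML).
  var heading = document.getElementById('page-title');
  if (heading) {
    heading.textContent = 'Hello from client-side JavaScript';
  }

  // Create a new element and attach it to the end of the document body.
  var paragraph = document.createElement('p');
  paragraph.textContent = 'This paragraph was created by a script.';
  document.body.appendChild(paragraph);

  // window.alert() opens a blocking dialog – handy for quick experiments,
  // rarely a good idea in production code.
  // window.alert('Paragraph added');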

There are a lot of other specifications that add more APIs to the browser’s JavaScript environment. The web platform: Browser technologies gives an overview.

The global object window

The most important ECMAScript object is the global object. In the browser, the global object is window. It is not only the top-most object representing the current browsing instance, it also forms the top-most scope for names defined by the developer.

These names are called “bindings” in ECMAScript terminology. They include, among others, global variables like var fooVariable = 1; and function declarations like function fooFunction() {}. When such code is executed in the global scope, properties on the global object window are created: window.fooVariable and window.fooFunction. Declarations made with let, const and class (like class FooClass {}) also create global bindings, but they do not become properties of window.

Understanding scope is crucial since all scripts running on a web page share the same global scope. A script needs to be careful not to conflict with built-in window properties – there are hundreds – and with properties created by other scripts.
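A small sketch of this behavior, assuming the code runs as a classic (non-module) script in the global scope; the names are the placeholders from above:

  var fooVariable = 1;        // creates window.fooVariable
  function fooFunction() {}   // creates window.fooFunction
  class FooClass {}           // creates a global binding, but no window.FooClass

  console.log(window.fooVariable);                 // 1
  console.log(window.fooFunction === fooFunction); // true
  console.log('FooClass' in window);               // false

  // Wrapping code in an immediately invoked function expression (IIFE)
  // keeps helper names out of the shared global scope.
  (function () {
    var helper = 42; // local to the function, not created on window
  })();
  console.log(window.helper); // undefined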

How JavaScript is executed

JavaScript is typically embedded into an HTML document either directly with a <script> … </script> element, or it is referenced externally with <script src="…"></script>. Scripts may load other scripts dynamically.

The HTML specification has a lengthy definition on how scripts are loaded and executed. The gist is that normal scripts are downloaded in parallel but are executed one after another in the order they are referenced in the HTML. Such synchronous scripts block the parsing of the HTML code since they may insert new code into the HTML stream using document.write().

Nowadays this is a performance anti-pattern. Scripts should be loaded asynchronously using <script src="…" defer></script> or <script src="…" async></script>. And document.write() should be avoided altogether. This allows the HTML parser to do its job without being interrupted by JavaScript.
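When a script has to be added from JavaScript itself, creating a script element – instead of calling document.write() – keeps the parser unblocked. A minimal sketch, with a placeholder URL:

  // Load a non-critical script without blocking the HTML parser.
  var script = document.createElement('script');
  script.src = '/assets/widget.js'; // placeholder path
  script.async = true;              // dynamically inserted scripts do not block parsing
  script.onerror = function () {
    // The rest of the page keeps working even if this script fails to load.
  };
  document.head.appendChild(script);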

Mind that the JavaScript engine is still single-threaded, so only one script or function is executed at a given time. (Web workers are an exception to this rule.) JavaScript also runs on the browser tab’s main thread, which means that, in the worst case, a long-running script freezes the whole page.
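One common way to keep the main thread responsive is to split long-running work into small chunks and yield back to the browser in between. A sketch with illustrative names:

  // Process a large array in small chunks so rendering and user input
  // are not blocked for the entire duration of the work.
  function processInChunks(items, processItem, chunkSize) {
    var index = 0;
    (function next() {
      var end = Math.min(index + chunkSize, items.length);
      for (; index < end; index++) {
        processItem(items[index]);
      }
      if (index < items.length) {
        setTimeout(next, 0); // yield to the browser, continue later
      }
    })();
  }

  processInChunks([1, 2, 3, 4, 5, 6], function (n) { console.log(n); }, 2);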

Achieving Robustness

What does robustness mean? In everyday language, a thing is considered robust when it is made of solid, strong material that resists applied force. You can use it for a long time, you can drop it by accident, you can even hit it with a hammer or throw it around, but it does not break.

According to this definition, a piece of hard metal may be robust, but so may an elastic bouncy ball. Materials science looks for materials that combine strength with properties such as ductility and elasticity, which allow them to absorb applied force without breaking.

A structure can also be robust. Think of a lattice tower built from trusses: it is huge and strong, yet light and modular.

Similarly, in computer science, a robust program performs well not only under ordinary conditions but also under unusual conditions that stress its designers’ assumptions. (The Linux Information Project). The program does not stop execution when errors occur. It does not fail when the input data or user input is invalid or bogus.

So robustness is all about making informed assumptions. What happens when the developer’s assumptions are not met? Let us have a look at several concepts of robustness.

Graceful Degradation

In the context of web development, Graceful Degradation means building a full-featured website first, then adding fallbacks for clients that lack certain capabilities.

A website starts with a large, fixed set of features and, consequently, a large set of requirements. The client may not meet a requirement, so a feature depending on it may not be available. If a requirement is not met, the site does not break, but handles the situation gracefully. For example, it falls back to a simpler version.

For JavaScript, Graceful Degradation could mean implementing the site using the latest JavaScript language features and APIs, with every usage guarded by a capability check. If the browser does not have a required capability, a simpler implementation is activated, as in the sketch below.
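For example – a sketch that assumes images marked with a data-src attribute, purely for illustration – the full experience might lazy-load images with IntersectionObserver, while the fallback simply loads every image right away:

  if ('IntersectionObserver' in window) {
    // Full experience: load images only when they scroll into view.
    var observer = new IntersectionObserver(function (entries) {
      entries.forEach(function (entry) {
        if (entry.isIntersecting) {
          entry.target.src = entry.target.getAttribute('data-src');
          observer.unobserve(entry.target);
        }
      });
    });
    var lazyImages = document.querySelectorAll('img[data-src]');
    for (var i = 0; i < lazyImages.length; i++) {
      observer.observe(lazyImages[i]);
    }
  } else {
    // Fallback: load every image immediately.
    var allImages = document.querySelectorAll('img[data-src]');
    for (var j = 0; j < allImages.length; j++) {
      allImages[j].src = allImages[j].getAttribute('data-src');
    }
  }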

Progressive Enhancement

Progressive Enhancement is similar to Graceful Degradation, but turns the process around.

A website still has a set of desired features and a set of client requirements. But the implementation starts with a minimal set of features. A first version of the site has a low entry barrier because it only uses well-established technologies.

Then, a second version is built that enhances the first version by adding new features. The enhanced version checks whether the client supports certain web technologies, then uses them safely. If the client meets the requirements for a feature, that feature is activated.

This process is repeated, creating a third, fourth, fifth version and so on. That is why it is called Progressive Enhancement. In theory, the website can be enhanced endlessly while staying robust and accessible to devices and browsers with restricted capabilities.
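A sketch of one such enhancement step, assuming a regular HTML form with the id "contact-form" (an illustrative name): the baseline form submits and reloads the page without any JavaScript, and the enhancement sends it in the background only if the browser supports fetch and FormData:

  var form = document.getElementById('contact-form');
  if (form && window.fetch && window.FormData) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      fetch(form.action, { method: 'POST', body: new FormData(form) })
        .then(function (response) {
          if (!response.ok) { throw new Error('HTTP ' + response.status); }
          form.innerHTML = '<p>Thank you, your message was sent.</p>';
        })
        .catch(function () {
          // If the enhanced path fails, fall back to a normal submission.
          form.submit();
        });
    });
  }

Without JavaScript – or in a browser that lacks fetch – the form still works; the enhancement is purely additive.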

Graceful Degradation vs. Progressive Enhancement

Graceful Degradation and Progressive Enhancement are two implementations of the same idea, but with a different twist.

Graceful Degradation aims for the full experience using bleeding-edge technologies – the moonshot. Building a perfect site takes a lot of time and resources. When such a page is built, it typically only works in one browser on the newest devices.

Then sophisticated fallbacks need to be added after the fact. This turns out to be difficult and tedious. For certain new browser features, developing equivalent fallbacks is virtually impossible. But more importantly, adding fallbacks is often neglected. When the budget is almost exhausted, web developers tend to add “this site requires browser X” signs, excluding many users, instead of writing proper fallbacks.

Progressive Enhancement in contrast follows the “minimal viable product” school of product development. The goal is to publish a working website quickly. This first version is not the most user-friendly, and certainly not the shiniest among its competitors. But the site works smoothly on every device. What it does, it does reliably.

Enhancements that make use of the latest browser features can now be added safely and deployed quickly. There is no stage during development in which the site only works for a small fraction of users.

It is widely agreed that Progressive Enhancement offers more benefits, but when applied to an individual website both methods should be considered and even mixed.

If you are planning a “moonshot” that relies on bleeding-edge technology in its core experience, like video conferencing or augmented reality, Graceful Degradation may help you to build a more inclusive site.

If you are planning a service with a rock-solid base and demanding extras, like realtime data analysis and visualization, Progressive Enhancement may help you to build high without losing accessibility.

When applied to JavaScript programming, both Graceful Degradation and Progressive Enhancement raise a lot of practical questions. How is a fallback applied? How well does it integrate with the rest of the code? To what extent is it possible to build on an existing version and enhance it? Is it not sometimes necessary to make a clear cut? You need to find answers that are specific to your project.

Both Graceful Degradation and Progressive Enhancement rely on checking the client’s capabilities. The crucial technique we are going to discuss later is called feature detection.

Fault tolerance

Another concept of robustness is fault tolerance. A technical system is considered fault-tolerant when the whole system continues to operate even if sub-systems fail.

A system consists of critical sub-systems and non-critical sub-systems. Critical sub-systems provide infrastructure and orchestrate the other parts. If they fail, typically the whole system fails. In contrast, non-critical sub-systems may recover from an error. Or they shut down in a controlled way and report the shutdown to allow for backup systems to take over.

While Graceful Degradation and Progressive Enhancement are native principles of web technologies, fault tolerance is not. It is probably the hardest yet most beneficial technique for achieving robustness.

In particular, fault tolerance is hard to implement in JavaScript. Used without caution, JavaScript is the opposite of fault-tolerant. Usually, if one operation fails, if one exception occurs, the whole call stack or the whole program blows up.

Implementing fault tolerance in JavaScript means dividing the code into independent, sandboxed sub-systems. Only few of them are critical. Most of them should be non-critical. If the latter fail with an error, the error needs to be caught and handled. Other sub-systems and the system as a whole should not be affected.

JavaScript does not support the definition of native sandboxes yet, but we can employ existing techniques like try…catch to achieve the desired effect.
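
A minimal sketch of this idea – with hypothetical renderMainContent(), initializeNewsletterWidget() and reportError() functions – might look like this:

// Critical sub-system: if this fails, the page fails.
renderMainContent();

// Non-critical sub-system, isolated in its own sandbox.
// If it throws, the error is caught and reported; the rest of the page keeps working.
try {
  initializeNewsletterWidget();
} catch (error) {
  reportError(error);
}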

Postel’s Law

John Postel was a computer scientist who helped design the core technologies of the internet. He edited the technical specifications of fundamental internet protocols, called Request for Comments (RFC).

In RFC 760, published in January 1980, Postel first described the Internet Protocol (IPv4). There is a precise description of how implementations should behave:

The implementation of a protocol must be robust. Each implementation must expect to interoperate with others created by different individuals. While the goal of this specification is to be explicit about the protocol there is the possibility of differing interpretations. In general, an implementation should be conservative in its sending behavior, and liberal in its receiving behavior. That is, it should be careful to send well-formed datagrams, but should accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).

In RFC 761, also published in January 1980, Postel described the Transmission Control Protocol (TCP) as used by the United States Department of Defense:

TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others.

Today this principle is often called Postel’s Law. While the original context was very specific – processing packets on a wide-area computer network – today it is applied to all programs that read, parse and process user input, file formats or other structured data.

For example, the liberal, fault-tolerant HTML 5 parser definition along with the conservative HTML 5 syntax definition is an application of Postel’s Law.

Personally, I do not think Postel’s Law should be seen as a general principle of robustness. I agree to some extent that a program should accept data that it can interpret (e.g. not object to technical errors where the meaning is still clear). But this rule requires careful interpretation.

In this guide, I do not argue that every program should be liberal in what it accepts. I find it more important that every program is explicit about what it accepts, is outspoken about technical errors and has a well-defined error handling.

How JavaScript might fail

Web crawlers without JavaScript support

Adding JavaScript to a website assumes that the client downloads and executes the code. This is not the case for a lot of automated web clients. Most robots and web crawlers speak HTTP, HTML and probably some CSS, but usually not JavaScript.

Some try to forage the JavaScript code for URLs or other valuable information. Some try to analyze the JavaScript to find malware or security vulnerabilities. Some even try to execute JavaScript in a fake browser environment.

What these robots have in common is that they are not interested in JavaScript per se. JavaScript typically makes a web page interactive, but a robot aims to analyze the page without simulating user interaction.

A search engine for example needs to evaluate if a page is valuable with regard to a query. So a search engine crawler is interested in text content, semantic markup, hyperlinks and probably media files.

Such a crawler wants simple code that it can parse quickly to find valuable data. Like HTML code. Executing arbitrary JavaScript is complex, slow and a potential security risk. Some crawlers might do it anyhow, but just as a way to find text content, semantic markup, hyperlinks, etc.

If a site cares for a decent search engine ranking, it should make it easy for crawlers to find meaningful, unique, structured text content. HTML is the best technology to present such content. This means the relevant content should be accessible without JavaScript, just by looking at the HTML returned by the server. All content should be reachable by plain hyperlinks, like <a href="…">…</a>.

For complex interactivity and content that cannot or should not be read by robots, it is fine to require JavaScript.

Disabled JavaScript execution

While robots avoid running JavaScript, humans typically use a browser that runs JavaScript. Almost all browsers today have the capability to run JavaScript. But the user or their administrator may allow only JavaScript of certain origins or may have disabled JavaScript execution completely.

There are good security reasons for disabling the execution of arbitrary JavaScript. Since JavaScript is a fully fledged programming language, processing it is more complex and error-prone than any other format on the web. The browser exposes several critical APIs to JavaScript code. In consequence, JavaScript is the most frequent attack vector for browser exploits.

JavaScript is also used to invade the user’s privacy. Especially the advertisement industry gathers and combines information that is obtained using JavaScript across different sites. JavaScript APIs allow reading details about the machine’s hardware and software as well as saving data on the machine. These features are abused to create a unique “fingerprint” and an extensive profile of the user: visited sites, search terms, purchase history, interests; also age, gender, location, marital status, profession, income, ethnicity, political views, etc.

To protect the users, ad and privacy blockers as well as corporate web proxies may ignore the JavaScript from certain hosts or limit the access to certain JavaScript APIs. Some security proxies even change the author’s JavaScript code.

JavaScript authors need to learn how blockers and web proxies work. They typically match the URL with a whitelist or blacklist. Make sure the host (example.org) serving the JavaScript is not on a blacklist. In a corporate intranet with a whitelist, make sure the host is on the whitelist. Also avoid suspicious patterns in the URL path that could trigger the blocking, like ad.js.

Since ads and privacy-invading scripts are typically loaded from third-party servers, blockers tend to allow JavaScript from the same domain and likely block JavaScript from a different domain. Make sure your scripts are placed on the same domain, a custom domain for assets, or a well-known, trusted content delivery network (see next chapter).

Network and loading errors

In the age of mobile web access, a flaky internet connection is the norm. The connections from the client to a server are interrupted frequently. Sometimes the browser re-establishes the connections to send requests again. Sometimes the user needs to reload the page manually so all parts are fully loaded.

Network interruptions affect JavaScript more negatively than other formats on the web. HTML, CSS, images and videos can be loaded and processed incrementally. If half of the HTML code has been transmitted and the connection drops, the browser can still render half of the page. Image formats like JPEG and PNG have progressive modes so the user gets to see a low-resolution preview after 10-20% of the file has been transmitted.

For JavaScript, it is all or nothing. To execute the JavaScript, the full script needs to be transmitted.

JavaScript authors can do little against connectivity loss. But they can prepare for the case by shipping fewer and smaller scripts, and by making JavaScript optional for key content.

One way to improve the loading performance of scripts is to host JavaScript on content delivery networks (CDN). These are arrays of well-connected servers distributed around the globe optimized for caching and serving static assets like CSS, JavaScript and media files. When the browser requests an asset, the request is automatically routed to the nearest CDN server.

For example, if a user in Indonesia visits a site hosted in Europe, the network latency slows down the transfer. With a CDN server in Indonesia, the assets can be served more quickly, lowering the risk of connection interruption.

Apart from network connectivity problems, an HTTP request for a script can fail for other obvious reasons: 404 Not found, 500 Server error, etc. This seems trivial but these types of errors are probably the most common. Monitor the server log to catch these errors. Use tools to find broken links and check the output of web crawlers like the Google search robot.

Parsing errors

The parser is the part of the browser’s JavaScript engine that reads the JavaScript source code sequentially to build an in-memory representation of the syntax. To the machine, JavaScript code is just a stream of characters; the engine needs to transform it into a usable data structure in order to execute it later.

What is syntax again? It is the set of rules in a language that allow us to form a meaningful and correct sentence.

For example, if you read the sentence “The dog wags its tail”, you may think of a friendly Golden Retriever. A linguist does the same, but involuntarily starts to dissect the sentence, breaking it up into pieces and their relation.

The sentence is made of a noun phrase and a verb phrase. The noun phrase, “the dog”, consists of a determiner and a noun. The verb phrase, “wags its tail”, consists of a verb and a noun phrase again. The verb, “wags”, has the third person singular present form. And so on.

For JavaScript, it is quite similar, yet less familiar since JavaScript is not a natural language, but an artificial computer language. If you write window.alert('Hello World!');, the parser generates an Abstract Syntax Tree (AST) that may look like this:

Program
  ExpressionStatement
    CallExpression
      callee: MemberExpression
        object: Identifier "window"
        property: Identifier "alert"
      arguments:
        Literal
          value: "Hello World!"

We will not go into detail here, but let us describe the structure of the program window.alert('Hello World!'); in our own words:

There is an expression (think of a mathematical term) with a call of a function. To obtain this function, we need to look up the name window. We assume the value is an object and get its property named alert. We treat this value as the function being called. There is one function argument, a string literal containing Hello World!.

To execute a script, the JavaScript engine needs such a high-level format, not the low-level code consisting of letters, dots, braces, brackets, semicolons, etc.

If you make a slip of the tongue, a gentle listener will probably ask: “Pardon me, what did you mean by ‘alert Hello World’?” The JavaScript parser is not that polite. It has a draconian, unforgiving error handling. If it encounters a character that is not expected in a certain place, it immediately aborts parsing the current script and throws a SyntaxError. So one misplaced character, one slip of the pen can ruin your script.

The most frequent syntax error is probably due to typos in hand-written code. Fortunately, these errors are easy to prevent by using an editor with syntax checking or a linter.

Even with these safeguards in place, syntax errors occur. There are several ECMAScript versions with different syntaxes. For example, if you use class declarations from ECMAScript 6 (2015) or async/await from ECMAScript 8 (2017), older browsers are not able to parse your script.

The standard solution is to compile newer ECMAScript syntax into an older equivalent syntax that is widely supported, usually ECMAScript 3 or 5.
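
For example – a simplified sketch, not the exact output of any particular compiler – an ES2015 arrow function could be translated into an equivalent ES5 function expression:

// ES2015 source:
//   const double = (x) => x * 2;
//
// ES5 output a compiler might produce (simplified):
var double = function (x) {
  return x * 2;
};
double(21); // 42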

Exceptions

You may have heard of exceptions in the context of JavaScript, but what are they?

“Exception” does not mean exception to any rule here. An exception is an exceptional error, a fatal error. A program error that the JavaScript engine cannot handle on its own. If such an error occurs, the program is aborted. More specifically, the current function call stack is aborted. It is still possible to call the same function or other functions later.

There are several causes for exceptions, and we have already encountered one: the SyntaxError occurs during parsing, before your code is even executed. Let us look at two common exceptions that may happen when the code is run: the ReferenceError and the TypeError.

Reference errors

A ReferenceError is thrown when the program references a name – an identifier in ECMAScript terminology – that cannot be resolved to a value.

First, let us look at successful references:

var name = 'Kitty';
window.alert('Hello ' + name);

We have two references here, window.alert and name. To resolve them to values, the JavaScript engine first looks for the identifiers window and name in the scope chain.

window is a global identifier, a property of the global object, as we have learned. After having resolved window to an object, the JavaScript engine looks for a property alert on this object.

name is a local or global variable, depending on the context.

Now, let us look at erroneous references:

window.alert(frobnicateFoo);

The identifier frobnicateFoo cannot be found in the scope chain. So the JavaScript engine throws a ReferenceError: “frobnicateFoo is not defined”.

So ReferenceErrors happen when the code uses an identifier that cannot be found in the current scope and all parent scopes. This may be due to a typo. Linters can catch these bugs easily.

Another possible cause is the developer assuming that the browser supports a certain API. The developer assumes a global identifier is provided and uses it without caution. Here are several examples that assume the availability of certain browser APIs:

var object = JSON.parse(string);
localStorage.setItem('name', 'Kitty');
var promise = new Promise(function (resolve, reject) { /* … */ });
fetch('/something').then(function (response) { /* … */ }, function (error) { /* … */ });

JSON is available in 98.14% of the browsers, localStorage in 95.31%, Promise in 89.04%, fetch in 77.81%.

We can avoid such careless use of APIs by using feature detection. In particular, we need to check for the names we intend to use.

Writing good feature checks requires thorough knowledge of the API being used. We will go into details later in its own chapter. This is how we can guard the API uses above:

if (typeof JSON === 'object' && typeof JSON.parse === 'function') { /* Call JSON.parse() */ }
if (typeof localStorage === 'object' && typeof localStorage.setItem === 'function') { /* Call localStorage.setItem() */ }
if (typeof Promise === 'function') { /* Call new Promise() */ }
if (typeof fetch === 'function') { /* Call fetch() */ }

These guards are only the first step. They check whether the API objects exist and have a certain type, like function. They do not check whether the browser has full and correct support of the API. They do not check whether the APIs can be used in the current context.

For example, security and privacy preferences may limit the usage of APIs like localStorage or fetch. Each API defines its own way how to deal with failure, like throwing an exception or returning a value denoting an error.

Type errors

A TypeError is thrown when a program tries to perform an operation with a value whose type is not suitable for this operation. In other words, when you try to do something with a value that you cannot do with the value.

For example, functions can be called with the call operator (…). All other values, like strings, numbers or plain objects cannot be called. All these examples fail with a TypeError because the value on the left side of the braces is not a function:

"a string"();
5();
({})();
undefined();
null();

This seems obvious. Why would you try to call a number as if it was a function? You would not do that on purpose when writing the code, but it happens in production. Let us look at this example:
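
window.frobnicateFoo();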

Here, we have a reference to a property frobnicateFoo on the object window. Resolving window yields the global object. But there is no property frobnicateFoo on this very object. If you get the value of a non-existing property, JavaScript does not throw an exception, it simply returns undefined. So after resolving window.frobnicateFoo, the code is equivalent to undefined();.

Such TypeErrors are both common and hard to debug since they may have highly different causes.

In the example above, the cause is the use of a certain function without checking its existence beforehand. frobnicateFoo might be a user-defined function or a part of a browser API. If the function call fails because the function does not exist, the script defining the function was not loaded correctly or the browser does not support the API.

Here is another example of a similar “undefined is not a function” TypeError.

var myLibrary = {
  start() { /* … */ }
};
myLibrary.statr();

The problem here is a simple typo. myLibrary.start is a function, but myLibrary.statr returns undefined.

These errors can be avoided by manual and automated testing as well as static code analysis. An IDE, for example, understands that the code defines an object myLibrary with the single property start. When it encounters myLibrary.statr, it shows a warning because it does not recognize the property statr.

There are several other cases where TypeErrors are thrown. For example, when you try to redefine the value of a constant:
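
const a = 1;
a = 2;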

As William Shakespeare famously wrote in his sonnets about JavaScript, an immutable binding is not an immutable binding …

Which alters when it alteration finds,
Or bends with the remover to remove:
O, no! it is an ever-fixed mark,
That looks on tempests and is never shaken

The nature of a constant is that its value cannot be changed later, so using the assignment operator = with the constant a on the left side throws a TypeError: “invalid assignment to const "a"”.

Similarly, a TypeError is thrown when you try to add a property to an object that does not allow the addition of properties:

const MyLibrary = {
  start() {}
};
Object.seal(MyLibrary);
MyLibrary.newProperty = 1;

In Strict Mode, this code throws a TypeError “can’t define property "newProperty": Object is not extensible”. Without the strict mode, the new property is silently ignored.

The same goes for overwriting properties which are read-only:

const MyLibrary = {
  start() {}
};
Object.freeze(MyLibrary);
MyLibrary.start = () => {};

In strict mode, this code throws a TypeError “"start" is read-only”. Without the strict mode, the assignment is silently ignored.

Again, these errors can only be avoided by manual and automated testing.

Security errors

There is no common error type for security errors in ECMAScript. Browser APIs throw several types of errors when API access is disallowed. Some APIs wrap the error in a Promise that is accessible in the rejection handler. Here are some examples:

  • You try to use localStorage to save data persistently, but the user has disabled data saving for the site. Merely accessing the property window.localStorage throws a SecurityError.

  • You try to read the current location using navigator.geolocation.getCurrentPosition(), but the user declines. The error callback is called with a PositionError with the code 1 (PERMISSION_DENIED).

  • You try to fetch a URL from a different domain using fetch(), but the remote server does not allow it via CORS. The returned promise is rejected with a TypeError.

  • You ask for the permission to show notifications using Notification.requestPermission(), but the user declines. The returned promise is resolved with the string “denied” (yes, you read correctly).

  • You try to access the device’s camera using navigator.mediaDevices.getUserMedia(), but the user declines. The returned promise is rejected with a NotAllowedError.

As you can see, handling security errors requires a careful study of a particular API documentation.

How to prevent failure

After this short glance at the different types of JavaScript errors, we got an idea of the problem and mentioned some possible solutions. Now let us go into detail about the techniques that prevent JavaScript from failing and handle errors gracefully.

Failing fast

Every computer program may have logic bugs: A case is not considered, the state is changed incorrectly, data is transformed wrongly, input is not handled. These bugs can have several consequences in JavaScript:

In the best case the script fails with an exception. You may wonder, why is that the best case? Because an exception is visible and easy to report. The line of code that threw an exception is likely not the root cause, but the cause is somewhere in the call stack. An exception is a good starting point for debugging.

In the worst case the application continues to run despite the error, but some parts of the interface are broken. Sometimes the user gets stuck. Sometimes data gets lost or corrupted permanently.

JavaScript code should fail fast (PDF) to make errors visible. Failing early with an exception, even with a user-facing error, is better than failing silently with undefined, puzzling behavior.

Unfortunately, JavaScript does not follow the principle of failing fast. JavaScript is a weakly typed language that goes to great lengths not to fail with an error. Most importantly, JavaScript performs implicit type conversion.

Let us look at a simple, contrived example:

function sum(a, b) {
  return a + b;
}

This function expects two numbers and returns their sum. The implicit assumption is that both arguments, a and b, are numbers. If one of them is not, the result probably will not be a number either. Whether the function works correctly depends on correct input types.

The problem is, the + operator is a dangerous beast. Its purpose is to add two numbers, but also to concatenate two strings. If the operands are not two numbers or two strings, implicit type conversion is performed.
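
For example, passing unexpected types does not raise an error but produces surprising results:

sum(1, 2);    // 3 – both operands are numbers, so + adds them
sum('1', 2);  // '12' – one operand is a string, so + concatenates
sum(1, null); // 1 – null is implicitly converted to the number 0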

These rules are specified in ECMAScript, but you should try to avoid ambiguous implicit type conversion. Here is an improved version of the function:

function sum(a, b) {
  if (!(typeof a === 'number' && !isNaN(a) &&
        typeof b === 'number' && !isNaN(b))) {
    throw new TypeError(
      'sum(): Both arguments must be numbers. Got: "' + a + '" and "' + b + '"'
    );
  }
  return a + b;
}

The key to failing fast is to make your assumptions explicit with assertions.

The function above uses typeof to assert the types of a and b. It throws an exception if they are not numbers or if they are NaN. We are going to explain these techniques later in detail.

This example shows that assertions make small errors visible before they grow into big errors. The problem is, NaN is a dangerous beast. NaN is a special value that means “not a number”, but in fact it is a number you can calculate with.

NaN is contagious. All calculations involving NaN fail silently, yielding NaN: 5 + NaN makes NaN, Math.sqrt(NaN) produces NaN. All comparisons with NaN yield false: 5 > NaN is false, 5 < NaN is also false. 5 === NaN is false, NaN === NaN is also false.

If a NaN slips into your logic, it is carried through the rest of the program until the user sees a “NaN” appearing in the interface. It is hard to find the cause of a NaN since the place where it appears can be far from the place that caused it. Typically, the cause of a NaN is an implicit type conversion. My advice is to raise the alarm as soon as you see a NaN.

You need to decide how to implement assertions. If you throw an exception, like in the example above, make sure to catch it in a global error handler and report it to an error logging service. If you follow Postel’s Law instead, at least output a warning on the console and report the error.

If the user’s task is affected, you should show a useful error message that something went wrong and that the incident has been reported. Also suggest workarounds, if applicable.

Feature detection

Feature detection is a fundamental technique in an ever-changing web. As web authors, we want to use the newest browser features to provide a rich experience to the users and to make our life easier.

Feature detection first checks whether a browser supports a certain web technology, then uses the technology safely. In the context of JavaScript, most feature detections are object and value checks, as well as function calls. Before looking at them in detail in the next chapter, let us learn about the basics of feature detection.

When writing client-side JavaScript, you need to define a baseline of requirements. You need to take some basic features for granted, like ECMAScript 3 and W3C DOM Level 2 support. If you use other JavaScript features, you should first learn about the browser support.

Can I Use is an essential resource that documents browser support of web technologies. For example, according to Can I Use, the Fetch API is available in the browsers of 77.81% of the users worldwide. Can I Use also allows you to import usage data for a certain country in order to see stats for the target market.

The Can I Use data for Fetch shows that it is a fairly new API that almost all latest browsers support, but not the older browser generations. So Fetch should be used with a feature detection, ideally with a fallback or polyfill.

Another essential site is the Web API documentation of the Mozilla Developer Network (MDN). Here you will find a reference of all major JavaScript APIs, along with browser compatibility information and links to the original specifications.

If you are looking for ECMAScript core features, the ECMAScript compatibility tables by kangax are the place to go.

As we have learned before, writing good feature detection requires thorough knowledge of the particular JavaScript API you would like to use. Fortunately, people have developed and collected feature checks for the relevant APIs so you do not have to wade through the specifications and come up with proper checks yourself.

Modernizr is a comprehensive feature detection library. You can select browser features you would like to use and build your own minimal library. Modernizr then provides a global object Modernizr with boolean properties. For example, Modernizr.fetch has the value true if the browser supports the Fetch API, or false if it does not. This allows you to write:

if (Modernizr.fetch) {
  /* Call fetch() */
}

If you do not want to use Modernizr but look for bulletproof feature detection code, look into Modernizr’s repository of detects. For detecting the Fetch API, the Modernizr detect simply checks 'fetch' in window.

Types of checks

Writing feature detections in JavaScript means checking for names and values defined by the host environment.

There are three levels of checks:

  1. Existence check: Does a name exist?

    • Either: Does an identifier exist in the scope chain? Ultimately, does an identifier exist in the global scope?
    • Or: Does a property exist on a certain object?
  2. Type check: After resolving the name to a value, does the value have the expected type?
  3. Value check: Does the value equal the expected value?

This is a cascade of checks you can perform. From top to bottom, the checks get more specific. Typically, we need to check the existence and the type of a value in order to use it safely. Sometimes checking the value is necessary as well.

Conditional statements and truthy values

The key to robust JavaScript is asking “if” a lot. During the concept phase, ask “what if”. In the code, ask if to handle different cases differently.

The if statement, or conditional statement, consists of a condition, a code block and an optional second code block.

if (condition) {
  // …
} else {
  // …
}

When an if statement is evaluated, first the condition expression is evaluated. The result of the expression is then converted into a boolean value, true or false. If this result is true, the first code block is executed, otherwise the second block, if given.

Most likely, this is not new to you. The reason I am revisiting it is the conversion into boolean. It means you can use a condition expression that does not necessarily evaluate to a boolean value. Other types, like Undefined, Null, String or Object are possible. For example, it is possible to write if ("Hello!") {…}.

If you rely on the implicit conversion, you should learn the conversion rules. ECMAScript defines an internal function ToBoolean for this purpose. In our code, we can use the public Boolean() function to convert a value into boolean. This delegates to the internal ToBoolean function.

To illustrate the conversion, imagine that

if (condition) {
  // …
} else {
  // …
}

is a short version of

if (Boolean(condition) === true) {
  // …
} else {
  // …
}

Values are called truthy when ToBoolean converts them into true. Values are called falsy when ToBoolean converts them into false.

The way ToBoolean works is simple, but with a twist. Let us quote the ECMAScript specification which is quite readable for once:

ToBoolean Conversions

  • Undefined: return false.
  • Null: return false.
  • Boolean: return the argument.
  • Number: if the argument is +0, -0, or NaN, return false; otherwise return true.
  • String: if the argument is the empty String (its length is zero), return false; otherwise return true.
  • Symbol: return true.
  • Object: return true.

As you can see, most types have a clear boolean counterpart. All objects, including functions, dates, regular expressions and errors, are truthy. The two types denoting emptyness, undefined and null, are falsy.

For numbers and strings though, it is complicated. Numbers are truthy except for zeros and NaN. Strings are truthy except for empty strings.

This ECMAScript design decision is controversial. On the one hand, it is a source of errors, since some developers expect that all numbers and all strings are truthy. On the other hand, it allows writing simple value checks like if (value) {…} for non-empty strings and usable non-zero numbers.

Usually, a value check aims to distinguish usable and valid values from unusable and invalid values. In most cases, truthy values are usable and falsy values are unusable. But keep in mind the exceptions for numbers and strings.
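
A few illustrative conversions:

Boolean('');    // false – the empty string
Boolean('0');   // true – a non-empty string, even though it looks like zero
Boolean(0);     // false – the number zero
Boolean(NaN);   // false
Boolean([]);    // true – all objects are truthy, even empty arrays
Boolean(null);  // false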

If you choose not to use implicit type conversion, make sure the if condition directly evaluates to boolean. For example, use comparison operators like ===, !==, > and <=. These always produce boolean values.

Existence checks

Does an identifier exist in the scope chain? Ultimately, does an identifier exist in the global scope?

Let us assume we would like to detect the Fetch API that specifies a global function fetch. Let us try this:

if (fetch) {
  fetch(/* … */);
} else {
  // …
}

This works in browsers that do support fetch, but throws an exception in browsers that do not. Especially, it throws a ReferenceError.

This renders the whole check useless. This is exactly what we are trying to avoid with the check.

We cannot just use an identifier that cannot be resolved. There are several ways to work around this problem:

  1. We know that fetch is a property of the global object window. So we can use the in operator to check whether the property exists without checking its type:

    if ('fetch' in window) {
      fetch(/* … */);
    } else {
      // …
    }

    This existence check is in fact an object property check.

  2. If present, fetch is a property of window with the type Function. Knowing this, we access the property using the familiar dot notation, object.property:

    if (window.fetch) {
      fetch(/* … */);
    } else {
      // …
    }

    This existence check is in fact a value check. We are relying on the ToBoolean conversion here. A function is truthy.

  3. Alternatively, use the typeof operator. typeof does not throw an error in case the identifier cannot be resolved, it merely returns the string 'undefined'.

    if (typeof fetch === 'function') {
      fetch(/* … */);
    } else {
      // …
    }

    This existence check is in fact a type check (see next chapter).

Type checks with typeof

typeof is an operator that takes one value as operand. The operator is placed before a value, for example typeof 'Hello'. As the name suggests, typeof returns the type of a value as a string. typeof 'Hello' evaluates to 'string' since the value 'Hello' is a string.

typeof has a behavior that makes it useful for feature detection: You can place an identifier after typeof, like typeof fetch. typeof does not throw an error in case the identifier cannot be resolved, it simply returns the string 'undefined'.

The problem is, typeof is a dangerous beast. typeof does not return what you probably expect. This operator is one of the biggest design flaws of ECMAScript. It is deceiving and in older browsers simply incorrect.

First of all, let us learn about the type system of ECMAScript. There are seven main types: Undefined, Null, Boolean, Number, String, Symbol and Object. The first six are called primitive types.

The seventh type, Object, has all sorts of subtypes: Function, Array, RegExp, Date, Error, Map, Set; Window, Document, Element, Node, Event, Image and much more. Values of these types are complex, made up of values of primitive types.

You might expect that typeof deals with the seven main types by returning 'undefined' for Undefined, 'null' for Null, 'object' for Object and so on. Unfortunately not.

Let us paraphrase the ECMAScript specification to see what typeof really returns:

typeof Operator Results

  • Undefined: "undefined"
  • Null: "object"
  • Boolean: "boolean"
  • Number: "number"
  • String: "string"
  • Symbol: "symbol"
  • Object that is ordinary and not callable (not a function): "object"
  • Object that is standard exotic and not callable (not a function): "object"
  • Object that is ordinary and callable (a function): "function"
  • Object that is non-standard exotic and not callable (not a function): implementation-defined, but not 'undefined', 'boolean', 'function', 'number', 'symbol', or 'string'

The specification adds that implementations are discouraged from defining new typeof result values for non-standard exotic objects. If possible 'object' should be used for such objects.

The first oddity is that typeof null returns 'object', which does not make any sense. It is a dangerous pitfall.

The second oddity is the special detection of functions. A function typically has the type Object, but typeof returns 'function' instead of 'object'. This exception turns out to be highly useful: typeof is the easiest way to detect a function. Unfortunately, there are no other exceptions for common object types. For arrays, dates and regular expressions, typeof still returns 'object'.
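
A few illustrative results:

typeof undefined;     // 'undefined'
typeof null;          // 'object' – the famous pitfall
typeof 'Hello';       // 'string'
typeof window.alert;  // 'function'
typeof [1, 2, 3];     // 'object' – no special result for arrays
typeof new Date();    // 'object' – no special result for dates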

The third oddity is the distinction between ordinary, standard exotic and non-standard exotic objects. Let us try to understand this distinction without going too much into detail.

An ordinary object comes with a default behavior that all objects share. An exotic object overrides and redefines the default behavior. Exotic objects are either standard (specified in ECMAScript) or non-standard (not specified in ECMAScript). For example, an array is a standard exotic object.

In the past, browsers have provided objects that fall into the “non-standard exotic” category. The typeof operator in Internet Explorer misidentified these objects as 'unknown'. Also Internet Explorer misidentified ordinary, callable objects (functions) as 'object'.

Newer browsers adhere to the specification, but the historical pitfalls remain. Since the result of typeof used to be unreliable, people have used typeof mostly for existence checks instead of explicit type checks.

Let us look at the Fetch API example again:

if (typeof fetch === 'function') {
  fetch(/* … */);
} else {
  // …
}

This check uses typeof to assert the Function type. This is more explicit: Since we are going to call fetch, we assert it is a function.

if (typeof fetch !== 'undefined') {
  fetch(/* … */);
} else {
  // …
}

This check uses typeof to assert fetch is defined and has an arbitrary type except Undefined. This is implicit: We assert fetch exists, then assume it is a function defined by the Fetch API.

Both are useful feature checks. Personally, I follow the rule “explicit is better than implicit”.

Type checks with instanceof

Besides typeof, there are several other ways to check the type of a value. One of them is the instanceof operator.

Simply speaking, instanceof returns whether an object is an instance of a given class. For example, value instanceof Date returns true if the value is a Date object. instanceof expects the object on the left side and a class on the right side.

More precisely, instanceof returns whether an object inherits from the prototype property of a given constructor. To understand this, we quickly need to revisit ECMAScript’s object model.

ECMAScript is a language based on prototypal inheritance. Every object has a prototype reference that may point to another object. If a property cannot be found on the object, the JavaScript engine follows the prototype reference and looks for the property on the prototype.

This principle is quite simple. Imagine someone asking you a question, but you do not know the answer. You still try to be helpful: “I’m sorry, I do not know the answer myself, but I know someone who is an expert on this topic!” So the other person walks to the expert and repeats the question.

Since a prototype is a simple object, it can have its own prototype again. This way, a prototype chain is formed: objects referencing other objects, like a → b → c. The engine walks up the prototype chain to find a property. When you retrieve a property on a and it cannot be found, b is searched, then c.

How does instanceof fit in here? Let us investigate what happens when value instanceof Date is evaluated. instanceof expects a constructor on the right side, Date in the example. First the engine gets the prototype property of the constructor, Date.prototype. This is the prototype of all date objects. Then it takes the value on the left side, value, and walks up its prototype chain. If Date.prototype is found in the chain, the operator returns true, otherwise false.

In consequence, value instanceof Date checks whether the value inherits from Date.prototype using prototypal inheritance.

The instanceof operator is only applicable to the type Object and subtypes like Function, Array, RegExp, Date, etc. instanceof always returns false for primitive types.
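
For example:

new Date() instanceof Date;    // true
new Date() instanceof Object;  // true – Object.prototype is also in the prototype chain
'a string' instanceof String;  // false – primitive strings are not instances
5 instanceof Number;           // false – primitive numbers are not instances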

Another drawback limits the usefulness of instanceof: It does not work across windows, like frames, iframes and popup windows.

Every browser window has its own set of host objects and therefore constructor functions. For example, Array in one window is a different object than Array in another window. This sounds logical, but it causes problems when two windows exchange JavaScript objects.

Assume there is one HTML document embedding another HTML document in an iframe. A script in the iframe document calls a function in the parent document, passing an array of numbers: parent.reportFigures([ 63, 843, 13 ]).

The function reportFigures now wants to check if the argument is an array. Typically, value instanceof Array would be a good fit. But in this scenario, it is a foreign array that does not inherit from Array.prototype in the parent window. value instanceof Array would return false – a false negative.

The standard way to solve this particular problem is to use a type check function provided by ECMAScript: Array.isArray(). Unfortunately, equivalents for other types like Date and RegExp do not exist.

Duck typing

As a weakly typed language, JavaScript performs implicit type conversion so developers do not need to think much about types. The concept behind this is called duck typing: “If it walks like a duck and quacks like a duck, it is a duck.”

typeof and instanceof check what a value is and where it comes from. As we have seen, both operators have serious limitations.

In contrast, duck typing checks what a value does and provides. After all, you are not interested in the type of a value, you are interested in what you can do with the value.

For example, a function that expects a date may check the input with instanceof Date:

function getNextDay(date) {
  if (!(date instanceof Date)) {
    throw new TypeError('getNextDay: expected a date');
  }
  const nextDay = new Date();
  nextDay.setTime(date.getTime());
  nextDay.setDate(nextDay.getDate() + 1);
  return nextDay;
}

Duck typing would ask instead: What does the function do with the value? Then check whether the value fulfills the needs, and be done with it.

The example function above calls the method getTime on the value. Why not accept all objects that have a getTime method?

if (!(date && typeof date.getTime === 'function')) {
  throw new TypeError('getNextDay: expected a date');
}

If the value walks and talks like a date, it is a date – for this purpose.

This check is not as strict as instanceof, and that is an advantage. A function that does not assert types but object capabilities is more flexible.

For example, JavaScript has several types that do not inherit from Array.prototype but walk and talk like arrays: Arguments, HTMLCollection and NodeList. A function that uses duck typing is able to support all array-like types.
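
A sketch of such a duck-typed helper – a hypothetical first() function that only relies on a length property and index access:

// Works with arrays, but also with Arguments, HTMLCollection and NodeList,
// because it only relies on .length and index access.
function first(arrayLike) {
  if (!(arrayLike && typeof arrayLike.length === 'number')) {
    throw new TypeError('first: expected an array-like value');
  }
  return arrayLike.length > 0 ? arrayLike[0] : undefined;
}

first([1, 2, 3]);                      // 1
first(document.querySelectorAll('p')); // the first <p> element, if any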

Value checks

Compared to existence and type checks, value checks are less relevant for feature detection, but they are still important for writing robust application logic.

We’ve learned that putting a value in an if condition makes a truthy test. When being converted to boolean, is the value true?

The truthy test is simple and effective to determine if a value is usable, but it comes with several limitations we’ve already visited. For a lot of feature checks, the truthy test suffices. See the Fetch API example again:

if (window.fetch) {
  fetch(/* … */);
} else {
  // …
}

When detecting features, testing for a specific value is rare. Most feature detection looks for the existence of objects and functions. There is no specific value to compare them to.

In normal application logic though, testing for specific values is common. Such value checks make use of JavaScript’s comparison operators: <, >, <=, >=, ==, !=, === and !==.

For example, you may want to check the length of an array or a string:

if (array.length > 0) { /* … */ }
if (string.length > 0) { /* … */ }

Or if an array contains a given value:

if (array.indexOf(value) !== -1) { /* … */ }

Unfortunately, the comparison operators in JavaScript are dangerous beasts. The relational operators like <, >, <= and >= are overloaded with behavior so they work both for numbers and strings. They may implicitly convert the operands into numbers.

The equality operators == and != are even more complex. If the types of the operands do not match, they perform an implicit type conversion. We will not go into the details of ECMAScript’s equality comparison algorithm. For the sake of robustness, it is best practice to avoid these two operators altogether.

Fortunately, the strict equality operators === and !== exist. They do not perform implicit type conversion. Hence they are easier to describe:

The === operator first checks if the types of the operands match. If they do not, return false. This means you have to do manual type conversion if you want to compare values of different types.

Then the operator checks if both operands are of the type Object. If they are, check if both are the same object. If yes return true, else return false. So two objects are considered unequal unless they are identical. There is no deep comparison of object properties.

Otherwise, both operands must be of primitive types. The values are compared directly. If they match, return true, else return false.

These rules are not trivial and you still have to learn and remember them. The strict equality operators force you to think about types again. They make implicit logic explicit.
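
Some illustrative comparisons:

1 === 1;           // true
1 === '1';         // false – different types, no implicit conversion
'abc' === 'abc';   // true – primitive values are compared by value
({}) === {};       // false – two distinct objects are never strictly equal
NaN === NaN;       // false – NaN does not equal anything, not even itself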

Handling exceptions with try…catch

JavaScript APIs have different ways to report failure. The simplest way is a function that returns a falsy or empty value. For example, document.querySelector('.peanutButter') returns null if no element with the selector could be found in the document. Similarly, document.querySelectorAll('.peanutButter') returns an empty list if no element with the selector could be found.

In addition to return values, APIs may throw exceptions. For example, document.querySelector('!"§$%') does not return null, but throws a SyntaxError: '!"§$%' is not a valid selector.

You may have guessed that !"§$% is not a valid CSS selector, but browsers throw the same type of error when they do not recognize the selector. For example, older browsers like Internet Explorer 8 do support querySelector, but do not support CSS Selectors Level 3. And most recent browsers do not support the CSS Selectors Level 4 Working Draft yet.

So a program that calls querySelector would need to check both:

  1. Is the return value an element and not null?
  2. Does the function throw an exception?

We’ve learned how to check the return value:

const selector = 'a selector that might be unknown or invalid';
const element = document.querySelector(selector);
if (element !== null) {
  // … Do something with element …
} else {
  // … Error handling …
}

But how do we check whether the function threw an exception?

The try { … } catch (error) { … } statement wraps a piece of code and adds exception handling.

const selector = 'a selector that might be unknown or invalid';
let element;
try {
  element = document.querySelector(selector);
} catch (error) {
  console.error(error);
  // … Report the error to a logging service …
}
if (element) {
  // … Do something with element …
} else {
  // No match. This might indicate an error as well.
  // Report the error to a logging service.
}

The try…catch statement consists of two main parts. The first part is the try block delimited by curly braces { … }. This block contains the code that may throw an exception. The second part after the keyword catch consists of a variable name in parentheses, (error), and another code block in curly braces { … }. This block is executed when an exception is thrown in the try block. It can access the error object using the name in the parentheses, error in the example.

Normally, an exception stops the execution of the current function call stack. When the source of the exception is wrapped in try…catch, only the execution of the try block is stopped. Using error handling, the program is able to recover from the exception. The example above catches exceptions caused by querySelector and reports them.

After executing the catch (…) {…} block, the JavaScript engine continues to run the code after the try…catch statement. There is an if statement with a truthy test on element. This covers two cases: querySelector returned null or threw an exception. If querySelector returned null, element is falsy. If querySelector threw an error, the assignment element = … never happened. Therefore element is undefined, also falsy.

try…catch is particularly useful when it wraps a small piece of code that is likely to throw an error. Wrap an API call in try…catch when the API specification states that the call may throw exceptions.

try…catch is often misused by placing a large amount of code in the try { … } block. Often the catch (…) {…} block is left empty. This is not error handling, it is error suppression. It may be necessary in some cases, but try…catch is most useful for catching specific exceptions.

Programmatic exceptions

We’ve touched on programmatic exceptions briefly in the “fail fast” assertions. Let us have a deeper look at them.

Typically we take great efforts to avoid or catch exceptions during JavaScript runtime. Why should we deliberately cause exceptions?

Exceptions aren’t inherently good or bad. They are simply messages stating that something went wrong during execution. These messages can be very helpful given someone is listening to them and takes action.

In the querySelector example above, querySelector uses two ways to send messages to the caller: A return value or an exception. In our own code, we can use the same pattern.

Here is our “fail fast” example again, a function that returns a number or throws a TypeError if it cannot produce a number:

function sum(a, b) {
  if (!(typeof a === 'number' && !isNaN(a) &&
        typeof b === 'number' && !isNaN(b))) {
    throw new TypeError(
      'sum(): Both arguments must be numbers. Got: "' + a + '" and "' + b + '"'
    );
  }
  return a + b;
}

The throw statement allows you to throw an exception programmatically. It expects an arbitrary value after the throw keyword, but Error objects and their subtypes are most useful.

The example creates a new TypeError instance. Every Error should have a meaningful message describing the problem. The message is a string that is passed to the Error constructor.

First, a programmatic exception is a message to the developer calling the code. The sum function says: “This function needs two numbers in order to work correctly! It does not deal with other types. For reliability, this function does not perform implicit type conversion. Please fix your code to make sure only numbers are passed, before this small error grows to a big one.”

This message is only effective if it reaches the developer. When the exception is thrown in production, then it should be reported and logged so the developer gets the message as soon as possible.

Second, a programmatic exception is a message to the calling code, similar to the return value of the function. We’ve seen this in the querySelector example above. The caller should catch the exception and handle it appropriately. For this purpose, the error object holds a type, a message, the source code position it originates from, a stack trace and possibly more information on the incident.
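
For example, a caller might handle the exception like this (price and quantity are made-up example values):

const price = 9.99;
const quantity = 3;
let total;
try {
  total = sum(price, quantity); // throws a TypeError if either value is not a number
} catch (error) {
  console.error(error);
  // … Report the error to a logging service and inform the user …
}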

The Strict Mode

ECMAScript 5, released in 2009, started to deprecate error-prone programming practices. But it could not just change code semantics from one day to the next. This would have broken most existing code.

In order to maintain backwards compatibility, ECMAScript 5 introduces the Strict Mode as an opt-in feature. In Strict Mode, common pitfalls are removed from the language or throw visible exceptions. Previously, several programming mistakes and bogus code were ignored silently. The Strict Mode turns these mistakes into visible errors – see failing fast.

Enable the Strict Mode by placing a marker at the beginning of a script:

'use strict';
window.alert('This code is evaluated in Strict Mode! Be careful!');

Or at the beginning of a function:

function strictFunction() {
  'use strict';
  window.alert('This function is evaluated in Strict Mode! Be careful!');
}

Syntax-wise, 'use strict'; is simply an expression statement with a string literal. This code does not do anything when evaluated. It is a meaningful marker for browsers that support ECMAScript 5, and innocuous code for browsers that do not.

Enabling the Strict Mode for a script or a function is contagious. All code syntactically nested also switches to Strict Mode. For example:

window.alert('Non-strict mode!');

function strictFunction() {
  'use strict';
  window.alert('Strict Mode!');
  nestedFunction();

  function nestedFunction() {
    window.alert('Strict Mode as well!');
  }
}

The Strict Mode changes a lot of small things that you can read about elsewhere. A big thing is the handling of variable assignments in functions.

Consider this function:

function sloppyFunction() {
  name = 'Alice';
  window.alert(name);
}

In non-strict mode, the assignment to name implicitly creates a global variable, window.name. Coincidentally, window.name already exists and has a special meaning.

name is not supposed to be a global variable here, but a local variable in the scope of sloppyFunction. We forgot to add var, let or const before the assignment.

In Strict Mode, this mistake does not go unnoticed. It leads to a ReferenceError: “assignment to undeclared variable name”.

Here is the fixed code that is also valid in Strict Mode:

function strictFunction() {
  'use strict';
  var name = 'Alice';
  window.alert(name);
}

Today, the Strict Mode should be used everywhere unless there are particular reasons against it.

Newer ECMAScript versions make the Strict Mode the default when using new features. For example, ECMAScript 6 module code is always evaluated in Strict Mode. Code inside of ECMAScript 6 classes is also Strict Mode per default.

Most likely, if you are using modules or classes, you are already using the Strict Mode. If not, I highly recommend using the 'use strict'; marker in your scripts to enable the Strict Mode.

Abstraction libraries

jQuery, Underscore, Lodash and Moment.js are probably the most used client-side JavaScript libraries. They all emerged for two main reasons:

  1. The JavaScript APIs available in the browser were unhandy, clumsy and lacked essential features or expressiveness.
  2. The web lacked technical standards that browser vendors agreed upon. Or old browsers that lacked support for essential web standards still dominated the market.

A main goal of client-side JavaScript libraries is to even out differences between browsers. Back in the beginnings of the web, these differences were enormous. Today, most JavaScript APIs are well-specified and browser vendors care for interoperability. Still, small differences remain. Even after browsers have fixed bugs in their implementations, old browser versions do not simply vanish into thin air but delight us for years.

Every year or so, someone writes an article titled “You do not need jQuery” or “You do not need Lodash”. These articles point out that the native APIs have been improved since or old browsers that prevented the usage of native APIs have died out. That is right, but they often miss the other main goal of libraries.

Libraries provide a concise and consistent API that is an abstraction of several inconsistent browser APIs. For example, using jQuery for traversing and manipulating the DOM, handling events and animation is still more pleasant than using the respective native APIs. This is because jQuery provides an unbeaten abstraction: A list type containing DOM nodes with powerful map, reduce and filter operations. Also, jQuery still deals with browser inconsistencies and tries to level them.

For the sake of robustness, use well-tested, rock-solid libraries. The time, resources and brain power that went into the creation and maintenance of such libraries do not compare to your own solutions.

Polyfills

Polyfills are an important tool for writing robust, cross-browser JavaScript. A polyfill is a script that fixes holes in the browser’s web standard support in order to create a level playing field for other scripts. It implements a particular JavaScript API in case the browser does not support it natively yet.

Polyfills are like libraries, but instead of defining their own API, they implement an established or emerging web standard. The benefit for the developer is that after loading the polyfill, all browsers provide the same feature with the same standard API.

For example, some browsers do not support the Fetch API. A polyfill for the Fetch API implements the Fetch specification using older existing techniques like XMLHttpRequest. Then it fills the browser’s holes.

A polyfill for the Fetch API may have the following structure:

if (!window.fetch) {
  window.fetch = function() {
    /* … Polyfill code … */
  };
}

If the browser does not support the Fetch API, window.fetch is undefined, and the code above fills the hole by defining window.fetch.

It is worth noting that not all APIs can be fully polyfilled. Some APIs include new and special behavior that cannot be implemented by standard ECMAScript means.

For example, if the browser does not provide access to live audio and video streams from the device, no JavaScript polyfill can implement this feature. In such cases, you need to use Graceful Degradation or Progressive Enhancement to come up with an alternative.
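
A minimal sketch of such a Graceful Degradation, assuming a page with a video element and a hypothetical .camera-fallback element that offers an alternative like a file upload:

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  // The API is available: show the live camera stream.
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function(stream) {
      document.querySelector('video').srcObject = stream;
    })
    .catch(function(error) {
      console.error('Could not access the camera', error);
    });
} else {
  // The API is missing and cannot be polyfilled: degrade gracefully.
  document.querySelector('.camera-fallback').hidden = false;
}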

Linters

A linter is a program that checks code for potential errors and compatibility issues. Some linters also enforce style guide rules, like code formatting and naming conventions. Typically a linter has a command line interface, but it can also be integrated in most editors and build tools.

When developing JavaScript, a linter is an essential tool for writing robust code. If you take one thing away from this guide, let it be the use of a linter. It will point out most issues that are described here and much more.

The most flexible and powerful JavaScript linter is ESLint. It is written in JavaScript and runs on Node.js.

ESLint consists of an ECMAScript parser and numerous rules that examine the Abstract Syntax Tree (AST). The rules search for pitfalls in your code, for deprecated language idioms, for inconsistencies and code style violations.

When a rule finds a violation, it outputs a warning or error you can see on the command line or in your editor. Some rules, especially stylistic rules, can automatically fix the problem by changing the source file.

In addition to the built-in rules, ESLint is fully extensible via plugins. A lot of libraries and ecosystems have created ESLint plugins that check for their respective best practices. For example, there are rules for writing React.js code and rules that check React/JSX code for accessibility issues.

Since more and more markup and style logic on the web is expressed in JavaScript, it is crucial to check the JavaScript code for well-established best practices from these domains as well.

Before ESLint existed, JavaScript best practices were described in books, blog posts, talks and project style guides. But not all of them could be checked and enforced automatically. ESLint became a tool for documenting best practices as well as checking them.

ESLint continues to shape the way people write JavaScript. Large projects and corporations are sharing their ESLint configurations. For example, the Airbnb style guide and the “Standard” style are popular style guides based on ESLint.

ESLint is a safe and easy way to explore different programming paradigms possible with JavaScript. With ESLint, it is possible to impose strict rules on your JavaScript usage. For example, ESLint rules for functional programming disallow all JavaScript features that contradict the concepts of pure functional programming.

Especially for beginners, the ESLint ecosystem may be confusing: hundreds of rules with configuration options, hundreds of plugins and conflicting guidelines. There are few things people quarrel about more than the “right” programming style.

Fortunately, ESLint and most ESLint plugins come with a recommended configuration. Start with this configuration to get an impression of how ESLint works, then adapt your ESLint configuration to reflect your or your team’s preferences.
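
For illustration, here is a minimal sketch of an ESLint configuration file (.eslintrc.js) that starts from the recommended rules. The chosen environments and rules are assumptions and will differ per project:

// .eslintrc.js: a minimal sketch for a browser project with ECMAScript 6 code.
module.exports = {
  root: true,
  extends: 'eslint:recommended',
  env: {
    browser: true,
    es6: true
  },
  rules: {
    // Prefer === and !== over == and !=
    eqeqeq: 'error',
    // Warn about console.log() calls left in the code
    'no-console': 'warn'
  }
};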

The Babel compiler

Every year, a new ECMAScript version is released. Some versions introduce new syntax. For example, ECMAScript 6 (released 2015) introduced a bunch of new syntax features. Here is a small selection:

let a = 1;
const b = 0b1100101;
const c = `Hello ${a}`;
const func = (x) => x * 2;
class Cat {}

The ECMAScript syntax is not forward-compatible. When new syntax is added, engines that do not support the extension cannot parse the code. They throw a SyntaxError and do not execute the code.

There are still browsers around that do not support the ECMAScript 6 syntax. Does this mean we cannot use ECMAScript 6 until all these browsers become extinct?

We can use the newest language features today, but we need to translate the code to an older version of ECMAScript before shipping it to the browsers.

The Babel compiler makes this possible. It turns new syntax into an older, more compatible syntax. For example, Babel may translate ECMAScript 6 syntax to equivalent ECMAScript 5 code. A compiler that is in fact a translator is called a transpiler, because nerds love blending words.
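
To illustrate, here is a rough sketch of such a translation; the exact output depends on the Babel version and configuration. The ECMAScript 6 input:

const double = (x) => x * 2;

may be turned into ECMAScript 5 code along these lines:

var double = function(x) {
  return x * 2;
};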

When using Babel, there is a mandatory compilation step between hand-written code and the code delivered to browsers. Babel provides a command line tool for translating JavaScript files. It also integrates well with popular build tools like Grunt, Gulp, Browserify and Webpack.

Babel is not just a tool to transform code written in a newer ECMAScript version into an older version. Babel is a plugin-based parser and translation framework that may support arbitrary syntax extensions.

As you can imagine, this is both powerful and dangerous. On the one hand, people use Babel to prototype and test new ECMAScript language proposals. On the other hand, people use Babel to add syntax that most likely will not be standardized, like JSX.

This leads to a situation where large codebases are not valid ECMAScript but full of syntax extensions. Some of them are on the standards track, some are not. Such code can only be parsed by Babel with certain plugins. Like in the biblical story about the Tower of Babel, language confusion prevents people from working together.

The safest approach is to write interoperable code conforming to a released ECMAScript specification and compile it to an older version using Babel.

A core assumption of compiling new syntax into old syntax is that a fully equivalent old syntax exists at all. This is not always the case. Some new ECMAScript 6 features cannot be fully translated into ECMAScript 5. Babel does its best to reproduce the semantics, but keep in mind that some detailed behavior cannot be reproduced.

These differences are not noticeable if you ship the same ECMAScript 5 code to all browsers. But in the future it makes sense to ship a smaller build with ECMAScript 6 to the browsers that support it.

Babel primarily deals with syntax extensions, not with extensions to the standard library, the ECMAScript core objects. For example, if you write:

const pElements = Array.from(document.querySelectorAll('p'));

Babel will translate it to:

'use strict';
var pElements = Array.from(document.querySelectorAll('p'));

Still, this will not work in a browser that only supports ECMAScript 5 because Array.from() was first specified in ECMAScript 6. Babel does not translate the function call. To use new ECMAScript objects and methods, you can use a polyfill. The Babel project provides a polyfill based on core-js.
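
For example, with a module bundler in place, a common setup loads the polyfill once before any other application code. A sketch, assuming the pre-Babel-7 package name babel-polyfill:

// Entry point of the application bundle. Loading the core-js based polyfill
// first makes Array.from() and friends available in ECMAScript 5 browsers.
import 'babel-polyfill';

const pElements = Array.from(document.querySelectorAll('p'));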

Languages that compile to JavaScript

From a language design point of view, JavaScript has severe shortcomings and pitfalls. The technical term is “footgun”: A technology that makes it easy to shoot yourself in the foot.

JavaScript is weakly and dynamically typed. It borrows ideas from multiple paradigms and fuses them into one language: imperative programming, object-oriented programming and functional programming.

Some people hold this imprecision and inconsistency responsible for many JavaScript pitfalls. That is true to some degree. JavaScript was not originally designed for writing large web applications or user interfaces. Recent ECMAScript standards introduced more strictness and consistency to improve programming in the large.

When it is so hard to write robust JavaScript, why not use another programming language?

The browsers only have a JavaScript engine built in, so we cannot just run, say, PHP in the browser. But other languages can be translated into JavaScript, like an Arabic text can be translated into English.

Typically, programming languages are compiled into machine code for a specific processor architecture or into bytecode for a virtual machine. It is also possible to compile them into another language, like JavaScript. As we’ve learned already, such a compiler-translator is called a transpiler.

Transpilers make it possible to write front-end code in an arbitrary language. Someone has to develop the transpiler, of course. This opens up tremendous possibilities. We can use strictly typed languages, purely functional languages, or languages designed for the purpose of building user interfaces.

In fact, there are numerous languages that compile to JavaScript. They have different levels of familiarity with JavaScript:

  • Some languages are strict subsets of JavaScript, meaning they resemble JavaScript in all points but remove some problematic aspects.
  • Some languages are strict supersets of JavaScript, meaning they resemble JavaScript in all points and add additional features.
  • Some languages have a different, incompatible syntax that resembles JavaScript.
  • Some languages have a different, incompatible syntax that does not resemble JavaScript.

Let us have a look at a small selection of languages that compile to JavaScript.

CoffeeScript

CoffeeScript was one of the first widely-used languages that compile to JavaScript. Its syntax is very similar to JavaScript’s, and CoffeeScript’s motto is “It is just JavaScript”. It mostly provides “syntactic sugar” that makes writing common JavaScript idioms easier.

In JavaScript, curly braces { … } are used to delimit functions and blocks. In CoffeeScript, whitespace like line breaks and spaces is used for that purpose. The mapping from CoffeeScript to JavaScript is direct. The compiled JavaScript code closely resembles the CoffeeScript source.

Here is the sum function in CoffeeScript:
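
sum = (a, b) ->
  a + b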

When CoffeeScript version 1.0 was released in 2010, it made JavaScript programming more robust since it eliminated several common pitfalls. The JavaScript produced by the CoffeeScript compiler implemented best practices and was less error-prone.

CoffeeScript’s language design and its brevity influenced the work on the ECMAScript standard. ECMAScript 6 and Babel address several language shortcomings that existed when CoffeeScript was created. So today CoffeeScript is used less than it was several years ago, but it is still an influential language.

TypeScript

TypeScript is an ambitious effort by Microsoft to create a language that compiles to JavaScript by extending standard ECMAScript.

As the name suggests, TypeScript adds static typing to JavaScript. It comes with well-known ways to define types, like classes, interfaces, unions and generics.

TypeScript is a strict superset of ECMAScript. All valid ECMAScript code is also valid TypeScript, but TypeScript code with type annotations is usually not valid ECMAScript.

This design decision makes learning and adopting TypeScript easier. You do not have to forget everything you know about JavaScript and learn a new language, you focus on learning additional TypeScript features.

This is how the sum function with explicit type annotations may look in TypeScript:

function sum(a: number, b: number): number {
  return a + b;
}

Do you see the type information added to the parameters a and b as well as the return value?

In plain JavaScript, we need to add type assertions to make sure that sum is only called with two numbers. In TypeScript, the code simply does not compile when sum is called somewhere with non-numbers. By adding type information, the TypeScript compiler can analyze the code and check if the actual type of a value matches the expected type.

Using a language with strong, static typing like TypeScript has these main benefits:

  • With proper typings in place, the compiler catches a certain class of bugs early. It is harder to write code that fails for simple reasons. Runtime errors like TypeError and ReferenceError are almost eliminated.
  • Static typing forces you to handle cases that are logically possible, even though they are rare in practice. Without type checking, someone has to write automated tests for the edge cases, otherwise the errors are not caught.
  • Static typing makes you think twice about the structure of your data, about object modeling and API design. In plain JavaScript code, it is easy to create, mix and mutate complex objects. This makes it hard to see which properties are available and which types they have. In TypeScript, each function has a well-defined signature. The structure of all objects passed around in the code is described by classes or interfaces.
  • Strong typing means there is no implicit type conversion. Explicit code is simpler code.
  • Editors with strong TypeScript support, like Visual Studio Code, make programming a pleasure. They have productivity features known from fully fledged IDEs. Writing, navigating and refactoring code is much easier since the editor understands the structure of the program and knows all names and types.

But what are the downsides?

  • Although TypeScript is a superset of ECMAScript, learning TypeScript thoroughly takes a lot of effort. Especially for people who have not worked with statically typed languages before, the type system is fundamentally new and hard to grasp.
  • Turning JavaScript into a type-safe language is not easy. The TypeScript compiler knows the semantics of all ECMAScript operators and built-in types. In addition, there are type definitions for browser APIs and libraries. Since the code still runs in loosely-typed JavaScript land, the type definitions do not always match the reality.
  • TypeScript may give a false sense of safety. TypeScript aims for type safety at compile time, given that all code has correct type definitions. After the translation to JavaScript, all type information is discarded. Dynamic code can still create errors during runtime. So runtime checks are still necessary and valuable.
  • Like other compile-to-JavaScript languages, writing TypeScript requires setting up the compiler. To enjoy all benefits, you need to use a specific editor and linter.

In conclusion, TypeScript is a valuable tool to make JavaScript programming more robust.

ClojureScript

ClojureScript is a compile-to-JavaScript language derived from Clojure, an independent, well-established language. It embraces functional programming with optional type safety. It has a Lisp-like syntax that follows a “code is data” philosophy. Clojure code is typically compiled to bytecode running on the Java virtual machine.

Clojure and ClojureScript bear little resemblance to JavaScript, and the ties to the JavaScript ecosystem are loose. Both the unfamiliar Lisp-like syntax and the functional programming style may put off JavaScript developers.

Here is how the contrived sum function looks in ClojureScript:

(defn sum [a, b]
  (+ a b))

This is how to call the function and output the result using JavaScript’s premium debugging tool, console.log():

(js/console.log (sum 1 2))

The core philosophy of Clojure is that it aims to be simple in the first place and easy in the second place. Let us unravel that profound sentence.

“Simple” is the opposite of complex. “Simple” means having only one purpose, doing one thing. “Simple” means unambiguous and logically clear.

“Easy” means familiar, easy to reach. “Easy” is the opposite of “hard”.

Clojure tries to use “simple” concepts to build large applications. In other words, Clojure tries everything not to be a “footgun”.

JavaScript in contrast is a melting pot of conflated features. New syntax and semantics are added with each version. JavaScript is “easy”, because its idioms are familiar to developers from different backgrounds.

ClojureScript and functional programming in general keep influencing the way people write JavaScript and the way ECMAScript is advanced. In particular, two basic concepts: pure functions and immutable values.

In short, a function is pure when it is free of side effects. Such a function takes some input values as arguments and computes a new value. The new value becomes the return value.

A pure function always produces the same output given the same input. There is no internal state. The function does not change its input values. It does not do anything besides computing the return value. It does not “change the world”. So it is always safe to call a pure function: It may take some computation time, but it does not change the state of your application.

It turned out that breaking down the logic of an application into pure functions makes the whole application more robust. Pure functions are “simple”, they do only one thing. They are easy to reason about and easy to test automatically. You can simply pass different input values and check the return value.

Here is an example of an impure function in JavaScript:

const myCats = [];

function adoptCat(cat) {
  // Mutate the outer value myCats
  myCats.push(cat);
  // Mutate an input value
  cat.owner = 'Alice';
  // Output as a side effect, no return value
  console.log('Cat adopted!', cat);
}

adoptCat({ name: 'Cheshire Cat' });

Here is an example of a pure function in JavaScript:

function adoptCat(cats, cat) {
  // Make a copy of the cat object in order to augment it
  const adoptedCat = {
    name: cat.name,
    owner: 'Alice'
  };
  // Make a copy of the cats array in order to augment it
  return cats.concat(adoptedCat);
}

const myOldCats = [];
const myNewCats = adoptCat(myOldCats, { name: 'Cheshire Cat' });
console.log('Cat adopted!', myNewCats[myNewCats.length - 1]);

The rule that pure functions do not mutate their input values is enforced in functional programming languages like Clojure. Variable bindings as well as values are typically immutable: You can read them to create new values, but you cannot change them in-place.

Elm

Elm is a functional programming language that compiles to JavaScript. In contrast to Clojure/ClojureScript, it was specifically designed as a compile-to-JavaScript language. It is not a general-purpose programming language but a toolkit for developing client-side web applications.

The syntax of Elm does not resemble JavaScript, but it is quite approachable. Here is how the familiar sum function looks in Elm with explicit type annotations:

sum : Float -> Float -> Float
sum x y =
  x + y

This code calls sum and outputs the result to the HTML document:

main =
  text (
    toString (sum 1 2)
  )

Elm’s main goal is to prevent runtime exceptions. In this guide, we’ve learned how hard that is in JavaScript when done manually. Elm’s idea is to free the developer from this burden. If the program may throw exceptions during runtime, it simply should not compile. The Elm compiler is known for its strictness as well as for friendly error messages that help you make the code compile.

In Elm, operations still may fail. Like other functional languages, Elm has built-in types for wrapping uncertain values (Maybe: Nothing & Just) and operations that produce a value or an error (Result: Ok & Err). When working with such values, the success and error cases must be handled explicitly.

Elm is designed with static types from the ground up. Static typing feels natural since Elm has strong type inference. It deduces the types of values so you do not have to add type annotations in many places.

TypeScript has type inference as well, but TypeScript imposes static typing on a dynamic language and ecosystem. Elm makes a clear cut.

The real novelty of Elm is the “Elm Architecture”. As mentioned earlier, Elm is not a general-purpose language, but designed for building user interfaces running in the browser.

Typically, such interfaces are built using patterns like Model View Controller (MVC) or Model View ViewModel (MVVM). These patterns originate from object-oriented languages that mix logic and mutable state. They are not applicable to functional languages with immutable values.

In Elm, a dynamic user interface consists of three parts:

  • Model – A type describing the state of the application, as well as the initial value. This is where all work data is stored. There is no logic here.
  • Update – A pure function that takes a message and the existing model, processes the message and returns a new model. This is where all state changes happen, but in an immutable way.
  • View – A pure function that takes the model and returns the description of an HTML element tree. This is similar to a declarative HTML template. The view may embed information from the model in the HTML, for rendering data, and may register messages as event handlers, for adding interactivity.

The update cycle in Elm looks like this:

  1. On user input, a message is sent.
  2. The update function is called automatically. It may react to the message and may return a new, updated state.
  3. The view function is called automatically with the updated state. It generates a new HTML element tree.
  4. Using a technique called Virtual DOM diffing, the actual DOM is updated.

This concept is radically simple and radically different from classical UI patterns. It was later dubbed uni-directional data flow to contrast it with bi-directional model-view binding.

In addition to synchronous model updates, messages can have asynchronous effects. The update function can return commands that trigger new messages eventually, like sending HTTP requests. The application may declare subscriptions for listening to input streams like WebSockets.

Elm exists in a niche, but its brilliant concepts have been widely adopted in the larger JavaScript ecosystem. Elm influenced React, Flux, Redux and NgRx as well as several side-effect solutions for Redux.

Even if you do not choose to write Elm over plain JavaScript, there is much to learn from Elm regarding robust programming.

Error logging

Despite all precautions, and even with extensive testing in place, errors will happen in production when diverse users with diverse browsers and devices are using your site.

In particular, JavaScript exceptions will happen in production. We’ve learned that exceptions are helpful messages about problems in your code or your larger infrastructure – as long as you receive these messages and act upon them.

Therefore, sending information about exceptions to you, the developer, is vital for every site that relies on JavaScript.

The standard approach is to monitor all exceptions on a page and to handle them in a central handler, for example using window.onerror. Then gather a bunch of context information and send an incident report to a log server. That server stores all reports, makes them accessible using an interface and probably sends an email to the developer.

Here is a simple global error reporter:

window.onerror = function(message, file, line, column, error) {
  var errorToReport = {
    type: error ? error.type : '',
    message: message,
    file: file,
    line: line,
    column: column,
    stack: error ? error.stack : '',
    userAgent: navigator.userAgent,
    href: location.href
  };
  var url = '/error-reporting?error=' + JSON.stringify(errorToReport);
  var image = new Image();
  image.src = url;
};

This code sends a report to the address /error-reporting using a GET request.

The example above is not enough. It is not that easy to compile a meaningful, cross-browser report from an exception. Tools like TraceKit and StackTrace.js help to extract meaning from exceptions.

There are several hosted services that provide such error reporting scripts and the server-side monitoring and data processing.

Manual testing

Once a feature of a website is implemented, it needs to be tested manually. The first tester is typically the developer, switching between code editor and browser, adding logic and making the necessary input to test the logic. Before committing the code, there is probably a final test.

These ad hoc tests do not scale. They are not exhaustive, not documented, not repeatable and they do not catch regressions. A regression is when a change in one part breaks another part. For example, improving one feature may accidentally break another feature.

More importantly, the developer perspective is not as meaningful as the user perspective.

There are several effective testing approaches in different development phases. To improve the robustness of client-side JavaScript, I find it most important to add frequent manual testing to the product development cycle.

Typically, designing an application feature produces a user story that describes who wants what and why: “As a new user, I want to browse restaurants nearby so I can find a cozy place for dinner”.

The next step may be to design the user interaction that achieves the goal stated in the user story. For example, the home page contains a search form and asks for the user’s location. On form submission, a list of restaurants is shown.

Every feature has a list of steps the user needs to perform. The feature and step descriptions are poured into a test plan.

A dedicated tester now executes all tasks that a real user needs to be able to do. The tester verifies that all features are working as described and verifies that the application reacts to input as expected. If the tester encounters a mismatch or finds anything suspicious, they report a bug.

Whenever the feature set changes or the code changes, the test plan needs to be revised and all tests need to be repeated in order to yield meaningful results.

For websites, the tester needs to execute the tasks with different browsers, devices and internet connections to catch all possible errors.

Manual testing with step-by-step instructions is probably the most time-consuming and expensive type of testing, but it is highly beneficial. Alongside real user testing, manual testing can quickly find errors caused by client-side JavaScript.

Automated testing

Automated testing plays a crucial role in writing robust applications. Especially when writing JavaScript code, a simple automated test already catches a lot of common bugs.

There are plenty of resources on automated testing in general and testing JavaScript in particular. In this guide, I will focus on how automated testing contributes to robust JavaScript.

In contrast to manual testing, automated testing verifies that the software meets the requirements using automated means. This typically includes writing test code or another formal proof. Once the automated test is set up, it can be executed repeatedly without human interference.

Unit tests

A unit test is an automated test with the smallest possible scope that looks into the application code. Almost always, a unit test is code written in the same language as the implementation.

In JavaScript, the smallest reusable unit of code is a function. Other possible units are an object, a class or a module. For example, a unit test for a JavaScript function is some JavaScript code that calls the function.

For simplicity, let us write a unit test that deals with a function. But how do we write and execute a test?

There are numerous ways to write and run unit tests in JavaScript. Popular testing frameworks include Jasmine and Mocha. They may be combined with assertion libraries like Chai and Unexpected. Unit tests are typically executed using test runners like Jest, Ava and Karma.

In my experience, all these libraries allow you to write unit tests that make JavaScript more robust. It is mostly a matter of style and taste which one to use. For the purpose of this guide, I will use the widely accepted Jasmine testing framework.

First of all, we need a function to test. Let us start with the simple, flawed sum function:

function sum(a, b) {
  return a + b;
}

What would a unit test for this function look like, and how does it make the code more robust?

In Jasmine, a single unit test is called a test suite. It describes the unit under test. The suite consists of specifications, or specs for short. Each spec sets up the necessary environment, pokes the unit under test and finally makes some expectations, also called assertions. If all expectations are met, the spec passes, otherwise the spec fails.

Here is a simple Jasmine test suite for the sum function:

describe('sum', function() {
  it('adds two numbers', function() {
    expect(sum(1, 3)).toBe(4);
  });
});

describe('…', function() {…}); declares a test suite. The first argument, 'sum' in the example, is a description of the unit under test. The function passed to describe may contain several specs.

A spec is declared with it('…', function() {…}). The first argument, 'adds two numbers' in the example, is the human-readable requirement. The function passed to it contains the actual test code.

The example above tests the sum function. Each spec needs to call the function with some arguments and make assertions about the return value. The example calls sum(1, 3) and expects the result to be 4 using Jasmine’s expect and toBe functions. As you can see, Jasmine code tries to be human-readable.

If you do not understand the details of the code above, that is fine. It is rather important to understand the structure: A unit test describes the behavior of a piece of code and thereby documents the requirements. The unit test tells you whether the implementation meets the specifications.

A specification consists of a human-readable text and an executable proof. A spec allows you to describe and verify how the code behaves in particular cases.

Unit testing makes you think about these cases in the first place, then write them down and define behavior. Does the function return the correct result when valid input is given? How does the function behave when invalid input is given? Handling these cases makes the implementation more robust.

As we’ve seen before, the simple sum function does not behave well when invalid input is given. Let us specify how the function should behave in this case. We want sum to throw an exception in case one argument is not a number.

Here is the respective spec:

it('throws an error if one argument is not a number', function() {
  expect(function() { sum('1', 3); }).toThrow();
});

This spec fails when being tested against the implementation function sum(a, b) { return a + b; }. It is common practice to write a failing spec first. Test-driven development advises to first define the cases, specify the behavior and then write as little code as necessary to make the test pass.

Let us do that! Here is the sum function that makes the test pass:

function sum(a, b) {
  if (typeof a !== 'number') {
    throw new TypeError(
      'sum(): Both arguments must be numbers. Got: "' + a + '" and "' + b + '"'
    );
  }
  return a + b;
}

Wait, is something missing here? The code only checks the a argument. Should we not add a check for b as well?

Yes, but first we write a failing spec:

it('throws an error if one argument is not a number', function() {
  expect(function() { sum('1', 3); }).toThrow();
  expect(function() { sum(1, '3'); }).toThrow();
  expect(function() { sum({}, null); }).toThrow();
});

Now let us change the implementation so the test passes:

function sum(a, b) {
  if (!(typeof a === 'number' && typeof b === 'number')) {
    throw new TypeError(
      'sum(): Both arguments must be numbers. Got: "' + a + '" and "' + b + '"'
    );
  }
  return a + b;
}

One thing is still missing: The handling of NaN values. Let us add a failing spec:

it('throws an error if one argument is NaN', function() {
  expect(function() { sum(NaN, 3); }).toThrow();
  expect(function() { sum(1, NaN); }).toThrow();
  expect(function() { sum(NaN, NaN); }).toThrow();
});

Finally, this is the implementation that conforms to all specifications:

function sum(a, b) {
  if (!(
    typeof a === 'number' && !isNaN(a) &&
    typeof b === 'number' && !isNaN(b)
  )) {
    throw new TypeError(
      'sum(): Both arguments must be numbers. Got: "' + a + '" and "' + b + '"'
    );
  }
  return a + b;
}

Writing unit tests has several benefits. For one, it leads to a programming style that creates small units of code that are easy to test in isolation and have a well-defined interface. Moreover, it makes you think about robust code. By writing specs for ordinary as well as unusual conditions, by running the specs in several browsers, you put the assumptions baked into your code to the test.

In commercial web applications that make heavy use of JavaScript, every line of JavaScript code should be covered by unit tests. Tools like Istanbul measure the test coverage.

100% test coverage means there is no logic that is not executed by unit tests. It does not necessarily mean that the logic is correct. Writing meaningful specs that reflect the actual conditions in production requires a lot of experience.

If you do not use unit testing yet, start small by writing specs for your core parts. It gives you a feeling for what testable code looks like and how to test different cases effectively.

Integration tests

As we’ve learned, unit testing tries to focus on a small reusable unit of code and to put it through its paces. A unit test assures that a unit works well in isolation.

Such a test is precise, but it is hard to isolate a unit from the rest. For example, if a function under test calls a second function, the unit test needs to remove and replace this dependency in order to focus on the function under test. A common technique for this is dependency injection.
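
To illustrate dependency injection, here is a small sketch in the style of the Jasmine specs above. The function under test and the fake dependency are made-up examples:

// The unit under test receives its dependency as a parameter
// instead of calling a hard-coded global function.
function loadGreeting(userName, httpGet) {
  return httpGet('/greeting?user=' + encodeURIComponent(userName));
}

describe('loadGreeting', function() {
  it('requests the greeting for the given user', function() {
    var requestedUrls = [];
    // A fake replaces the real HTTP function in the test.
    function fakeHttpGet(url) {
      requestedUrls.push(url);
      return 'Hello, Alice!';
    }
    loadGreeting('Alice', fakeHttpGet);
    expect(requestedUrls[0]).toBe('/greeting?user=Alice');
  });
});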

Unit tests are necessary, but not sufficient. An application is a complex combination of units. Having 100% test coverage, having passing unit tests says little about the application as a whole.

This is where integration tests come in. An integration test describes and verifies the behavior of several connected units. The integration test does not need to know the internals; it runs against the public interface.

For example, if a function under test calls a second function, the integration test simply lets it be. The test knows that it integrates all dependencies. Such a test has a larger impact and covers a lot of code. But it is hard to set up the different cases and test side effects thoroughly.

When testing JavaScript, the difference between unit tests and integration tests is subtle. Most things we’ve learned about unit tests also apply to integration tests. For example, integration tests may use the same tools, such as Jasmine. In practice, unit tests and integration tests are mixed in order to test a codebase precisely and extensively.

Acceptance tests

Both unit and integration tests consist of code that checks various internal parts of the application code. Again, these tests say little about the application as a whole. The crucial question is whether the application works for the user. Is a user able to complete their tasks?

A certain class of JavaScript bugs only occurs when the code runs on the target website in a real browser. These bugs are not caught by unit or integration tests running in a cleanroom environment that bears little resemblance to the production environment.

JavaScript is error-prone because it depends on other front-end and back-end technologies. A script typically reads and changes the HTML DOM, changes CSS styles, makes HTTP requests and controls media. So when the script runs in production, it needs to work together with the HTML, CSS, other JavaScript code, server APIs and media content.

We need an automated test that checks the website as a whole. This is called acceptance testing or end-to-end testing in the web context.

An acceptance test does not test parts of the application individually, like the front-end, back-end code or database, but the full stack. It ensures that all technologies come together to provide the desired user experience.

In particular, an acceptance test simulates a user by remotely controlling a browser. Such a test mimics the input of a user and checks the output of the website. Every test consists of step-by-step instructions like these:

  1. Go to the website http://carols.example.org
  2. Wait until the page is fully loaded.
  3. Expect that the top-level heading reads “Christmas Carols”.
  4. Focus the search field by clicking on it.
  5. Enter the text “hark”.
  6. Submit the form by pressing enter.
  7. Wait until the next page is fully loaded.
  8. Expect that the top-level heading reads “Hark! The Herald Angels Sing”.
  9. Expect that the first paragraph contains “Peace on earth and mercy mild”.

An acceptance test expresses these instructions as code. Since the test interacts with the website through a browser, it can be written in any language. It does not need to be JavaScript or whatever language is used in the back-end.

Of course, you can write acceptance tests in JavaScript and run them with Node.js. Popular libraries include Nightwatch.js and WebdriverIO.
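
For illustration, a rough sketch of how the first steps above might look in a Nightwatch.js test. The selectors are assumptions about the example page, and the exact API calls depend on the Nightwatch version:

module.exports = {
  'finds a carol by searching': function(browser) {
    browser
      .url('http://carols.example.org')
      .waitForElementVisible('body')
      .assert.containsText('h1', 'Christmas Carols')
      .setValue('input[type="search"]', 'hark')
      .click('button[type="submit"]')
      .waitForElementVisible('h1')
      .assert.containsText('h1', 'Hark! The Herald Angels Sing')
      .end();
  }
};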

The technology that makes remote control of the browser possible is called WebDriver. Today, all big browsers implement the WebDriver protocol. A popular server for orchestrating browsers is Selenium.

Like all types of testing, acceptance tests should run on different devices and browsers for meaningful results. Commercial services like Sauce Labs and BrowserStack make it possible to run WebDriver tests against numerous devices and browsers.

Writing less JavaScript

The role of a front-end developer is to improve the user experience with the available technologies. The developer needs to assess which interactions can and should be improved with client-side JavaScript.

JavaScript makes it possible to build more user-friendly interfaces than HTML and CSS alone could provide. It is the best technology to build excellent interactivity in the browser.

Still, JavaScript is the most brittle of all front-end web technologies. An important skill of a front-end developer is to know when not to solve a problem with client-side JavaScript. It is always more robust to solve a problem further down in the stack.

If all techniques and tools did not help you to write robust JavaScript, consider reducing the code complexity and the amount of code. In the last resort, reduce the usage of client-side JavaScript. Find simpler solutions that rely on HTML, CSS and server-side logic alone.

About

Author: Mathias Schäfer (molily)

Twitter: @molily

Please send feedback and corrections to zapperlott@gmail.com.

License: Creative Commons Attribution-ShareAlike (CC BY-SA 4.0)

Published on .


Norwegian Student Takes Secret Street Photos In The 1890s

Carl Størmer (1872-1957) enjoyed a hobby that was very, very unusual at the time. He walked around Oslo, Norway in the 1890s with his spy camera and secretly took everyday pictures of people. The subjects in Størmer's pictures appear in their natural state, a stark departure from the grave, rigid posing that dominated photography during those years.


Carl got his C.P. Stirn Concealed Vest Spy Camera in 1893 when he was studying mathematics at the Royal Frederick University (now, University of Oslo). "It was a round flat canister hidden under the vest with the lens sticking out through a buttonhole," he told St. Hallvard Journal in 1942. "Under my clothes I had a string down through a hole in my trouser pocket, and when I pulled the string the camera took a photo."

Norway's first paparazzi usually photographed people at the exact time they were greeting him on the street. "I strolled down Carl Johan, found me a victim, greeted, got a gentle smile and pulled. Six images at a time and then I went home to switch [the] plate." In total, Størmer took about 500 secret images.

His candid photos aside, Størmer was also fascinated with science. He was a mathematician and physicist, known both for his work in number theory and studying the Northern Lights (Aurora Borealis).

Amazon Will Buy Target This Year, Gene Munster Predicts

Amazon.com Inc.’s shakeup of the retail landscape may not be over, according to one well-known technology analyst.

The Internet giant will acquire Target Corp., Loup Venture co-founder Gene Munster wrote in a report highlighting eight predictions for the technology industry in 2018. Amazon made waves in retailing last year with its $13.7 billion purchase of Whole Foods Market Inc.

“Target is the ideal offline partner for Amazon for two reasons, shared demographic and manageable but comprehensive store count,” Munster wrote, noting both companies focus on mothers and families. “Getting the timing on this is difficult, but seeing the value of the combination is easy.”

Market-share numbers suggest a deal would be approved by regulators, and Wal-Mart Stores Inc. would still have a larger share than an Amazon-Target combination, Munster said. He estimated a take-out valuation of $41 billion, or a 15 percent premium to Target’s current value. Target shares rose as much as 3.1 percent as of 10:36 a.m. in New York.

Munster, 46, co-founded Loup Ventures, a venture capital firm focused on virtual reality and artificial intelligence, in early 2017. Before that, he’d worked for 21 years as an analyst at Piper Jaffray Cos., where he was known for his accuracy in predicting Apple Inc.’s financial potential.

Predicting Amazon’s next deal has become a common theme for analysts. In November, DA Davidson analyst Tom Forte wrote that Lululemon Athletica Inc. may be attractive to the online retailer, while Citigroup analyst Paul Lejuez recently catalogued a host of potential targets, including Abercrombie & Fitch Co., Bed Bath & Beyond Inc. and Advance Auto Parts Inc.

Still, Amazon may not just be interested in retail deals. Last month, CFRA bank analyst Ken Leon wrote that he foresees the Internet company buying a small- or mid-sized bank in 2018.

— With assistance by Paul Jarvis

Computer Science I [pdf]

Show HN: Mask R-CNN Neural Network for Mapping Sport Fields in OpenStreetMap

README.md

This project uses the Mask R-CNN algorithm to detect features in satellite images. The goal is to test the Mask R-CNN neural network algorithm and improve OpenStreetMap by adding high quality baseball, soccer, tennis, football, and basketball fields to the map.

Mask R-CNN was published in March 2017 by Facebook AI Research (FAIR).

This paper claims state-of-the-art performance for detecting instance segmentation masks. The paper is an exciting result because "solving" the instance segmentation mask problem will benefit numerous practical applications outside of Facebook and OpenStreetMap.

Using Mask R-CNN successfully on a new data set would be a good indication that the algorithm is generic enough to be applicable to many problems. However, the number of publicly available data sets with enough images to train this algorithm is limited because collecting and annotating data for 50,000+ images is expensive and time consuming.

Microsoft's Bing satellite tiles, combined with the OpenStreetMap data, are a good source of segmentation mask data. The opportunity to work with a cutting-edge AI algorithm while doing my favorite hobby (OSM) was too much to pass up.

Sample Images

Mask R-CNN finding baseball, basketball, and tennis fields in Bing images.

(Sample images: OSM Mask R-CNN samples 1, 2 and 3)

Mask R-CNN Implementation

At this time (end of 2017), Facebook AI Research has not yet released their implementation. Matterport, Inc. has graciously released a very nice Python implementation of Mask R-CNN on GitHub using Keras and TensorFlow. This project is based on Matterport's work.

Why Sports Fields

Sport fields are a good fit for the Mask R-CNN algorithm.

  • They are visible in the satellite images regardless of the tree cover, unlike, say, buildings.
  • They are a "blob" shape and not a line shape, like roads.
  • If successful, they are easy to conflate and import back into OSM, because they are isolated features.

Training with OSM

The stretch goal for this project is to train a neural network at human level performance and to completely map the sports fields in Massachusetts in OSM. Unfortunately the existing data in OSM is not of high enough quality to train any algorithm to human level performance. The plan is to iteratively train, feed corrections back to OSM, and re-train, bootstrapping the algorithm and OSM together. Hopefully a virtuous circle between OSM and the algorithm will form until the algorithm is as good as a human mapper.

Workflow

The training workflow is in the trainall.py, which calls the following scripts in sequence.

  1. getdatafromosm.py uses overpass to download the data for the sports fields.
  2. gettilesfrombing.py uses the OSM data to download the required Bing tiles. The script downloads the data slowly, please expect around 2 days to run the first time.
  3. maketrainingimages.py collects the OSM data, and the Bing tiles into a set of training images and masks. Expect 12 hours to run each time.
  4. train.py actually runs training for the Mask R-CNN algorithm. Expect that this will take 4 days to run on a single GTX 1080 with 8GB of memory.

Convert Results to OSM File

  1. createosmanomaly.py runs the neural network over the training image set and suggests changes to OSM.

    This script converts the neural network output masks into the candidate OSM ways. It does this by fitting perfect rectangles to tennis and basketball mask boundaries. For baseball fields, the OSM ways are fitted 90-degree wedges and the simplified mask boundary. The mask fitting is a nonlinear optimization problem and it is performed with a simplex optimizer using a robust Huber cost function. The simplex optimizer was used because I was too lazy to code a partial derivative function. The boundary being fit is not a Gaussian process, therefore the Huber cost function is a better choice than a standard least-squares cost function. The unknown rotation of the features causes the fitting optimization to be highly non-convex. In English, the optimization gets stuck in local valleys if it is started far away from the optimal solution. This is handled by simply seeding the optimizer at several rotations and emitting all the high quality fits. A human using the reviewosmanomaly.py script sorts out which rotation is the right one. Hopefully, as the neural network performance on baseball fields improves, the alternate rotations can be removed.

    In order to hit the stretch goal, the training data from OSM will need to be pristine. The script will need to be extended to identify incorrectly tagged fields and fields that are poorly traced. For now, it simply identifies fields that are missing from OSM.

  2. The reviewosmanomaly.py is run next to visually approve or reject the changes suggested in the anomaly directory.

    Note this is the only script that requires user interaction. The script clusters together suggestions from createosmanomaly.py and presents a gallery of options. The user visually inspects the image gallery and approves or rejects changes suggested by createosmanomaly.py. The images shown are of the final way geometry over the Bing satellite images.

  3. The createfinalosm.py creates the final .osm files from the anomaly review done by reviewosmanomaly.py. It breaks up the files so that the final OSM file size is under the 10,000 element limit of the OSM API.

Phase 1 - Notes

Phase 1 of the project is training the neural network directly off of the unimproved OSM data, and importing missing fields from the training images back into OSM. About 2,800 missing fields were identified and will soon be imported back into OSM.

For tennis and basketball courts the performance is quite good. The masks are rectangles with few false positives. Like a human mapper it has no problem handling clusters of tennis and basketball courts, rotations, occlusions from trees, and different colored pavement. It is close, but not quite at human performance. After the missing fields are imported into OSM, hopefully it will reach human level performance.

The good news/bad news is the baseball fields. They are much more challenging and interesting than the tennis and basketball courts. First off, they have a large variation in scale. A baseball field for very small children is 5x to 6x smaller than a full sized field for adults. The primary feature to identify a baseball field is the infield diamond, but the infield is only a small part of the actual full baseball field. To map a baseball field, the large featureless grassy outfield must be included. The outfields have to be extrapolated out from the infield. In cases where there is an outfield fence, the neural network does quite well at terminating the outfield at the fence. But most baseball fields don't have an outfield fence or even a painted line. The outfields stretch out until they "bump" into something else, a tree line, a road, or another field, while maintaining their wedge shape. Complicating the situation is that, like the neural network, the OSM human mappers are also confused about how to map the outfields without a fence! About 10% of the mapped baseball fields are just the infields.

The phase 1 neural network had no trouble identifying the infields, but it was struggling with baseball outfields without fences. Of the 2,800 identified fields, only the baseball fields with excellent outfields were included. Many missing baseball fields had to be skipped because of poor outfield performance. Hopefully the additional high quality outfield data imported into OSM will improve its performance in this challenging area in the next phase.

Problem with Baseball Outfields

Configuration

  • Ubuntu 17.10
  • A Bing key: create a secrets.py file and add bingKey = "your key"
  • Create a Python 3.6 virtual environment
  • In the virtual environment, run "pip install -r requirements.txt"
  • TensorFlow 1.3+
  • Keras 2.0.8+.

How Atlassian Built a $10B Growth Engine

“We’ve had a lot of smart people who wouldn’t join the company or give us money or advise us because [our business] made no sense to them.” — Mike Cannon-Brookes, Atlassian co-founder

When Atlassian was founded in 2002, the founders had a choice to make.

They could jump through the standard hoops and do the things that most SaaS companies were doing—build out a sales team, knock down investors’ doors, and try to turn an idea into millions of dollars in funding.

But, Atlassian didn’t jump through the standard (and expected) hoops. Instead, they chose an unconventional path that would ultimately help them build a $10 billion business.

Atlassian still doesn’t have an enterprise sales team 15 years after the company’s founding. But their biggest—and most unusual—lever for growth has been to consistently acquire other products throughout the company’s history and integrate them into the existing product suite. This has helped Atlassian grow into a family of products that can spread organically through enterprise organizations.

How exactly has Atlassian created a growth engine around acquisitions and integrations to build their behemoth global business? Let’s dive deeper into how the company:

  • Developed a loyal market by building a best-in-class project management tool for engineering teams
  • Strategically expanded their product offerings through acquisitions to broaden their customer base to teams around the dev teams
  • Doubled down on freemium distribution and horizontal use-cases in their recent acquisitions to make the top of their funnel even wider across teams

So many of Atlassian’s strategies were unique for their time, but have since become common practice for SaaS companies. Let’s take a look at how several of these practices were developed out of Atlassian’s specific needs throughout the company’s lifetime, and how each one helped shape the company’s success.

2002-2010: Self-funded and freemium

In 2002—the tail end of a nuclear winter for tech—being a Silicon Valley entrepreneur was tough. But being an entrepreneur in Sydney, Australia was much harder. There wasn’t a large tech community, and there weren’t local VCs that founders could go to for investments. Atlassian co-founders Mike Cannon-Brookes and Scott Farquhar put it this way: there was no IPO preschool like there was in Silicon Valley.

So with an idea for a new developer tool and no money, they realized that building a successful company meant two things:

  • They had to create really useful tools quickly so they could win over the market
  • They had to find a way to sell them without paying for a sales team

Since they were developers themselves, the co-founders saw the need for developer-specific tools around tracking issues and collaborating with one another. They built these functions into their first two tools—Jira and Confluence.

No one had built project management or collaboration tools yet for developers, and the co-founders knew from their own work that other developers would want these tools. All they had to do was get them to try it. They decided to use a freemium plan to allow people to test out the tools without risk, and realize for themselves how useful they were.

This model allowed a lot of people to start using the tools really quickly—and as they onboarded more customers and grew revenue without sales overhead, they were able to start acquiring other companies and adding to their developer offerings very early on in their company’s lifetime.

Let’s take a closer look at how they built—and acquired—these initial products in the early years to win over the developer market.

2002: Cannon-Brookes and Farquhar both studied computer science and met in college. They knew they wanted to start their own company, and they began by creating a third-party support service. On the side, they built their own issue tracker because they were fed up with using email or personal productivity tools to track their developer work. Doing developer work is messy and they needed a concrete place where they could log issues and work collaboratively. Soon, they realized what they built had the potential to be really useful for other developers—and they decided to pivot from a service company to a product company. They took out $10,000 in credit card debt to start Atlassian, and launched that first flagship product, Jira.

Jira featured a simple interface and provided developers with a single place to manage bugs, plan features, and track tasks. Jira also featured version-history, file attachments, and a search function for issues—everything a developer needed to manage software projects in one place. This hadn’t been possible before with other tools.

Because of Jira’s comprehensiveness and complexity, the product came with a steep learning curve. But this was actually a blessing in Atlassian’s specific market. The thing that made the product challenging to learn was what developers loved most about it—that it did everything needed for issue tracking, and they could customize it to work precisely the way they needed it to for their specific teams and projects.

2004: Jira was bringing in revenue, but even in these first years, Atlassian was on the lookout for other potential revenue streams. Wiki technology was gaining traction in the developer market—the Atlassian co-founders took this as their opportunity to provide simple wiki functions to teams with the requirements of enterprise knowledge management systems. They called this new dev team collaboration platform Confluence. It was for the kinds of teams that would also use Jira, and was meant to provide more value by making wikis easy to create, edit, link, search, and organize.

Confluence also integrated really well with Jira, which developer teams were already starting to love. It worked seamlessly with another really specific, useful product, which motivated teams to give Confluence a try.

The decision to build a second product after the initial success of Jira was risky because many early-stage companies focus all of their attention on a single product. Dividing resources could have meant that both products would fail. But the team had faith in how much users liked using Jira, and recognized that there were still other helpful tools Atlassian could provide. The multi-product strategy paid off. As Cannon-Brookes said: “We had two rocket engines driving us along, not just one.”

2005: Just three years after its founding, Atlassian was profitable without having taken any venture capital. This was because they charged enterprise prices and didn’t have to spend money paying sales people: they sold the product by providing an option for a 30-day free trial on their website, and then gave trial users the option to get on a paid plan. This allowed developers to try their products out, realize how useful they were, and then recommend the products to their team members and developer friends.

2007: At this time, both Jira and Confluence sales were growing, and this validated that the developer market was a space with a lot of opportunity. But here’s where Atlassian made a really interesting, unique move. The typical way for enterprise developer tool companies to expand is to build more products. Since Atlassian had capital, they decided that instead of spending time building more of their own tools, they’d buy ones that were already successful.

This led them to look at the company Cenqua, which made three developer tools—Fisheye, Crucible, and Clover. These tools filled the gaps in Atlassian’s product offerings. Cannon-Brookes noted that some of the functionality of these tools, like Crucible’s code review, was incredibly valuable to developers. He said that “on a scale of one to ten, the strategic fit [of the Cenqua tools] is a ten.”

They integrated these products into their offerings by allowing the services to continue uninterrupted, but moved all of the products’ information and documentation over to the Atlassian site. All of Cenqua’s development and executive staff moved over to join Atlassian. By 2008, the Cenqua tools were listed alongside Atlassian’s other product offerings on Atlassian’s website as part of a cohesive product suite.

Making such an early acquisition was unusual, especially given the company’s lack of venture funding. But Atlassian’s freemium sales model had helped them quickly generate a lot of revenue. When they weighed the pros and cons of the time and money it would take to build out their own version of those tools, the company saw the acquisition of Cenqua as the best use of their resources at the time.

The freemium sales and distribution model, as well as the early acquisitions, created many revenue streams that led to over $50M in ARR by 2010. At this point the company was eight years old and had already been profitable for five years.

It was clear that Atlassian didn’t need venture money to survive—they’d built an engine that could survive on its own. But in their long-term plans for the company, they had their own unique reasons for wanting to work with investors in the next stage of growth.

2010-2015: Integrating acquisitions and spreading to other teams

“We want to build a 50-year company. Going public is one step on that journey. There are very few long-term companies that are private.” – Scott Farquhar

Unlike a lot of companies, Atlassian didn’t raise money because they needed cash—they’d already built a model for a healthy business by winning over the developer market with useful tools.

The early success of Jira, Confluence, and the Cenqua products encouraged the team and proved their freemium distribution model could work. They realized that by building a suite of products developers needed, they could become indispensable to customers and retain them long-term. Given the success of the first acquisition, the company decided the best path forward was to become exceptional at acquiring useful pre-existing products and folding them into the Atlassian suite.

Here’s what Atlassian did during this time to acquire the right products, integrate them well, and continue expanding their user base to users that were tangential to dev teams.

2010: Atlassian raised $60 million in secondary funding from Accel Partners, eight years after starting their company. They planned to use the money from the investment to add to their war chest for acquisitions and growth. The team stated that capital would go toward M&A with other enterprise tools. These additional tools would help them provide even more functionality to enterprise developer teams, and start to widen into other verticals.

At this point, Atlassian had over 20,000 customers worldwide, including Facebook and Adobe, and were feeling the need to offer an even more robust and comprehensive set of developer tools. So in an effort to start doing “what Adobe does for designers, except for technical teams,” Atlassian looked at how they could help developers manage their projects at other stages in their pipeline.

This led them to acquire Bitbucket, a hosted service for code collaboration, for an undisclosed amount. Bitbucket helped developers share and collaborate on a decentralized software repository. As one report noted, thanks to the Bitbucket acquisition, developers now had “a place to dump and host their code, and a place to track their project issues and bugs within Atlassian.” It was a perfect product fit to fill a gap in Atlassian’s offerings. Atlassian offered the product right alongside all of the others, and created new pricing tiers—including a freemium plan—to fit seamlessly into Atlassian’s existing freemium distribution model.

2012: Atlassian acquired the hosted private chat service Hipchat, and announced a plan to integrate the chat feature into its suite of products. This was a brilliant move that showed Atlassian was ahead of its time—Slack had not blown up yet and a real-time communication tool wasn’t an obvious acquisition. However, Atlassian used Hipchat on their own team and knew how useful it was, so they wanted to provide the same integrated functionality to their users, too.

HipChat was growing quickly at the time and had over 1,200 customers of its own, including Groupon and HubSpot. The product was helping entire organizations communicate. Pete Curley, the CEO and founder of HipChat, said that Atlassian was the perfect environment to continue scaling HipChat’s services quickly, noting “the no-friction business model.” Acquiring HipChat was the perfect way for Atlassian to plug another hole in their product offerings and get more non-developer teams to start using Atlassian products. As Cannon-Brookes put it, HipChat is "perfect for product teams but fantastic for any team."


2013: Atlassian released a service desk offering on top of Jira that targeted the IT market. The new features included a customer-centered interface, an SLA engine, customizable team queues, and real-time reports and analytics.

At the time, Atlassian president Jay Simons said that this addition was an organic extension based on the needs of Jira customers. About 40% of Jira users had already extended Jira to service desk and help desk use cases and had asked Atlassian to build the service. The service desk helped extend Atlassian’s offerings to IT departments and continue growing bookings, which at this time were over $100 million a year.

2015: Atlassian’s Git services were fast growing—Eric Wittman, the general manager of Atlassian’s developer tools, noted that Bitbucket’s customer growth the year before was around 80%, and that 1 in 3 Fortune 500 companies used Bitbucket. To align themselves with this kind of growth and present a cohesive brand, Atlassian combined all of their Git-based services under the Bitbucket brand, and added features that supported larger distributed teams and projects.

Over these years, Atlassian filled a gap in team SaaS tools by providing a broad vision for how teams could collaborate. While a lot of companies make acquisitions, most don’t execute well on integration. Combining two separate companies can be a nightmare, with clashing brands, personalities, and even code bases. Atlassian not only developed the skill of making smart acquisitions—they’ve also mastered the treacherous integration process from people to code.

This strategy was all about lock-in. With such tightly integrated solutions, the product superiority of one tool wasn’t necessarily the selling point. Instead, it was the comprehensive function of the whole suite of products. It didn’t matter if a company preferred Github’s functions to Bitbucket’s—if a team was already hooked into Jira and Confluence, they were going to use Bitbucket because it made their workflow more convenient and efficient.

This strategy fed directly into Atlassian’s low-cost distribution model. They charged very little for the initial products and sold a low number of seats. Each product had its own pricing model, so teams could add products a la carte. Most of them started with a free trial.

Jira pricing in 2015.

Pretty soon their flywheel would kick into effect as more team members were onboarded, more teams and users got pulled in, and teams realized that trying to work without the Atlassian tools would be difficult.

The goal was for the sum of the parts to create a sticky, all-consuming whole. And in terms of acquisition efficiency, it worked. Atlassian spent between 12 and 21% of their yearly revenue on CAC from 2012-2015, compared to the SaaS industry median of 50-100%.

Adding products helped Atlassian find new ways to hook users into the Jira suite—like with Bitbucket for developers and Hipchat for non-developer teams. This led to steep user and revenue growth.


The first few Atlassian products, like Jira and Confluence, brought in revenue at a steady trickle. The acquisitions allowed Atlassian to compound growth with each additional product. Right before Atlassian’s IPO, they announced $320 million in annual sales, which was up 60% from the year before.

Atlassian nailed their distribution flywheel and acquisition/integration machine. They were primed to widen the top of their funnel and try to win over even more teams in enterprise organizations.

2015-Present: Expanding to competitive and lucrative markets

To a lot of SaaS companies, Atlassian looks like the “end goal.” They’ve grown into a giant, public, global company with a complex and integrated suite of products for many different verticals.

But Atlassian understands that success is a continuum. They’re burning brightly, but unless they can keep their growth going and maintain a stronghold over their markets, they’ll flare out. The goal is no longer to just build out a best-in-class suite of tools for developer teams. Rather, it’s getting an entire company—and all of the teams within it—using relevant products in the Atlassian suite.

While Atlassian works to offer products to many different teams within an organization, the company still needs to find a way to stay relevant to small teams. Many of their moneymaking products are getting unnecessarily complex for small teams. Instead of wasting time building lightweight versions, they’re using their successful acquisition and integration strategy to add lightweight products to the Atlassian suite.

Let’s take a closer look at exactly how their acquisitions and integrations over the past few years have helped them widen their funnel and expand into more lucrative and competitive markets.

2015: Atlassian held its IPO in December and started trading shares with a market cap of nearly $5.8 billion.

The company’s plan after their IPO was to continue expanding sales aggressively by investing in research and development. At the time, they invested over 40% of their revenue into R&D and wanted to keep up their 30% YoY growth for the suite of products. This in turn drove more revenue, which they could put back into R&D and acquisitions to continue fueling their flywheel.

2016: To build itself out into an even more ubiquitous tool provider and help companies maintain their software, Atlassian acquired Statuspage, which allows businesses to keep users updated about the status of their online services. Atlassian already had a relationship with Statuspage: Statuspage was an early HipChat customer, and Atlassian already hosted its own status pages with Statuspage.

Atlassian’s president, Jay Simons, said that he thought the product filled a natural need within Atlassian’s offerings, especially alongside Jira’s issue-tracking services. It was a complementary product for users already using Jira and other products within the suite—but it was also a way to attract different users with a broader use case.

2017: Recently, Atlassian took a big step to target smaller teams by acquiring the lightweight project management tool Trello. I’ve already talked about Atlassian’s acquisition of Trello from the perspective of Trello’s potential—but the acquisition is also a really important recent step in Atlassian’s journey. Trello is a much simpler project management tool than Jira, and the simple Kanban board covers much broader use cases.


Atlassian needed a simple product that could fill the small-company gap in their distribution strategy as Jira moved upmarket and got more complex.

Trello is use-case agnostic, and is a good simple alternative to developer-specific Jira. The addition of Trello directly widens the funnel through which Atlassian can pull new, broader bases of users into their suite—and then cross-sell and upsell them to different products as their needs grow.

Later in the year, Atlassian made another huge product move by pivoting Hipchat’s services into an Atlassian-branded product called Stride. It’s a Slack competitor for team-wide messaging, which means it has some predictable features like text-based messaging and video and audio conferencing. It also has some unique additions like Focus Mode, an “away” setting while you’re at work, and Actions and Decisions, which show highlights from conversations that happened when you were away. According to Atlassian, the goal is to make teams more productive in ways that competitive products don’t do.


Atlassian’s recent product decisions, specifically around Trello and Stride, show that they are still finding new ways to target different teams within companies of all sizes. Both Trello and Stride are freemium products—freemium was arguably Trello’s Achilles’ heel as a standalone business, but the free plans will be key for acquiring smaller teams that can’t afford to pay for enterprise products with enterprise functionality.

None of Atlassian’s most recent product acquisitions—Trello, Stride, or Statuspage—have market-specific use cases. They’re all springboards into virtually any team within an organization of any size. And they all have potential for horizontal adoption across teams once there’s buy-in with one.

These recent moves show that Atlassian is being very smart about growing product offerings for more widespread use cases in more competitive and lucrative markets. They’re turning acquisitions into additional product offerings—but at the same time, they’re clearing out potential disruptors and protecting the lock-in they’ve already established.

Where Atlassian can go from here

Atlassian’s pattern of acquisition—and their success with product additions—is very atypical. Looking forward, they have all of the pieces in place to continue edging out into even more additional markets through acquisitions.

With its acquisition/integration machine in place, here are some specific ways Atlassian can expand:

  • Use Trello to land in new companies: The acquisition of Trello is all about finding a way to land in new areas within organizations. One of the biggest opportunities for Atlassian with Trello is to gain support among product managers on teams that aren’t yet using Jira or other Atlassian tools. Because Trello is so simple, there is much less friction, making it easier for teams to adopt the product. But it has a lot of powerful integrations with Atlassian tools that simplify workflows across teams. This adds another point of entry for teams into the Atlassian suite. If Atlassian can onboard PMs to Trello, they may be able to get those they collaborate with, like dev teams, using Trello as well. Exposure to Trello will make these dev teams aware of integrated products like Jira, and may motivate them to use the more developer-specific tools if they aren’t already.
  • Use Stride to connect all elements of team collaboration: Messenger tools are important layers in any company’s ability to collaborate—and they have the potential to reach every single team in a company. Atlassian knows that it can’t give away this precious and far-reaching potential to Slack, which has integrations that allow users to plug in workplace tools. Stride is a defensive move, but it has potential to be really useful to Atlassian users and pull more users throughout a company into the Atlassian suite. Less back and forth and manual transfer of information would make work much more productive and painless, so Atlassian has to make sure their integrations and marketing around Stride tap into that.
  • Build out designer tools: Atlassian provides users with Atlaskit, which is the company’s official UI library. It contains all of the tools needed to build in Atlassian’s design style. But Atlassian can expand to build out more agnostic front-end developer tools that can be used for UI design and UX testing. They have the potential to compete with Invision and build a better integrated tool for designers. They could kickstart the effort by purchasing a full stack UX design platform like UXPin.

The Atlassian team bought themselves a lot of time to make careful decisions by building a healthy business that could stand on its own legs, become profitable, and go public. Moving forward, if they continue to make calculated decisions, they could be the main provider of the most helpful workplace tools for SaaS companies of all sizes.

3 key lessons from Atlassian’s growth

The co-founders know that Atlassian is different from other SaaS companies. A lot of their decisions have been unconventional and specific to their own constraints. They’ve told other companies: don’t try to copy us.

The bigger point for those growing their own businesses is that you shouldn’t and can’t copy any of Atlassian’s decisions because each company has to find ways to succeed based on their specific goals and challenges. But you can learn from the way that Atlassian approached their problems and how they looked for opportunities. These skills are vital to success and growth—without them, you’re dead in the water.

These are the key lessons that anyone building a company should take away from Atlassian:

1. Focus on capital efficiency early on

Many SaaS companies try to bootstrap their way to profitability. What made Atlassian’s journey so unusual—and so successful—is that they pioneered new ways of doing it.

For one, Atlassian used a freemium distribution model starting in 2002 because they couldn’t afford a sales team. There weren’t a lot of enterprise software companies doing this at the time. But this made sales and marketing spend from the outset very low.

Another important step towards capital efficiency was aggressively adding more products very early on. This was risky because many companies gain their footing by focusing on a single product. Atlassian started building out a suite very early, which bolstered revenue.

There are two main mechanics that contribute to better capital efficiency: lower CAC and higher revenue. Atlassian found specific ways to lower their CAC and grow their revenue that worked for them, and you can do the same by looking at your unique constraints and considering the following:

  • You can lower CAC by targeting more effective marketing channels, onboarding freemium users more effectively to paid accounts, and focusing on inbound marketing through content and free trials.
  • You can grow revenue by increasing your prices, adding additional revenue streams from new products, and cross-selling users to other products and add-ons.

Capital efficiency was vital to Atlassian’s early growth and success, and it’s a good mentality and practice for any SaaS company.

2. Own a particular customer segment

Atlassian put down roots in developer teams. From there, it had a point of entry within enterprise organizations to expand to other teams. Having a stronghold over developer teams who loved using Jira and wouldn’t want to switch was essential to onboarding other teams to auxiliary Atlassian tools.

Salesforce has done the same thing in the sales market. Even though they’re now expanding out into almost every market in enterprise businesses, they started with just a CRM and got lock-in within sales teams. Winning over one department is key to playing the long game.

When you’re just starting out, it’s hard to win over broad use cases because it’s difficult to hone in on one specific value proposition and get anyone’s attention. Instead, focus on one department within an organization and evangelize your product to that specific group of people. There are some specific steps you can take to grow this evangelism:

  • Offer workshops and networking conferences with early groups of users in one specific market
  • Talk to early customers within your target market about what specific problems your product solves, and make sure that your marketing reflects these pain points and solutions
  • When you start building add-ons and new features, make sure they’re specifically related to your target market before expanding out to other segments.

3. Choose the style of business you want to master

There are many ways to build a great company. Some choose to focus more on product innovation. Others, like Atlassian, put more emphasis on strategic acquisitions. A company like Salesforce chose to focus on delivery method and industry disruption. Some, like Basecamp, focus on single product simplicity.

There’s no one-size-fits-all path, but there is a common theme among all of these successful companies. They all developed a concept to master early on, and they stuck with it and followed through in all of their decisions.

Deciding what is more important for your company to master early on is challenging but it gives you sharp focus. Two important factors will shape this early decision:

  • What is your initial vision for your company and product?
  • What do you want to accomplish with your company in 5, 10, and 20 years?

Buck convention and build your own growth engine

Building a long-term company is challenging because you’re going to face so many different constraints at different points in your business. The same factors won’t necessarily drive your growth at every stage. But if you know how you want to grow and have a strategy around what steps you’ll take to do it, you can develop a framework for making decisions in many different circumstances.

Atlassian’s strategy—acquisition, integration, and organic distribution—was particularly special because not many other companies have done it (and done it well). Yours may look different from other companies’, too. The goal is to develop a framework that helps you win with the cards you’re dealt.

Physicists Uncover Geometric ‘Theory Space’


In the 1960s, the charismatic physicist Geoffrey Chew espoused a radical vision of the universe, and with it, a new way of doing physics. Theorists of the era were struggling to find order in an unruly zoo of newfound particles. They wanted to know which ones were the fundamental building blocks of nature and which were composites. But Chew, a professor at the University of California, Berkeley, argued against such a distinction. “Nature is as it is because this is the only possible nature consistent with itself,” he wrote at the time. He believed he could deduce nature’s laws solely from the demand that they be self-consistent.

Scientists since Democritus had taken a reductionist approach to understanding the universe, viewing everything in it as being built from some kind of fundamental stuff that cannot be further explained. But Chew’s vision of a self-determining universe required that all particles be equally composite and fundamental. He conjectured that each particle is composed of other particles, and those others are held together by exchanging the first particle in a process that conveys a force. Thus, particles’ properties are generated by self-consistent feedback loops. Particles, Chew said, “pull themselves up by their own bootstraps.”

Chew’s approach, known as the bootstrap philosophy, the bootstrap method, or simply “the bootstrap,” came without an operating manual. The point was to apply whatever general principles and consistency conditions were at hand to infer what the properties of particles (and therefore all of nature) simply had to be. An early triumph in which Chew’s students used the bootstrap to predict the mass of the rho meson — a particle made of pions that are held together by exchanging rho mesons — won many converts.

But the rho meson turned out to be something of a special case, and the bootstrap method soon lost momentum. A competing theory cast particles such as protons and neutrons as composites of fundamental particles called quarks. This theory of quark interactions, called quantum chromodynamics, better matched experimental data and soon became one of the three pillars of the reigning Standard Model of particle physics.

But the properties of individual quarks seemed arbitrary, and in another universe they might have been different. Physicists were forced to recognize that the set of particles that happen to populate the universe do not reflect the only possible consistent theory of nature. Rather, an endless variety of possible particles can be imagined interacting in any number of spatial dimensions, each situation described by its own “quantum field theory.”

The bootstrap languished for decades at the bottom of the physics toolkit. But recently the field has been re-energized as physicists have discovered novel bootstrap techniques that appear to solve many problems. While consistency conditions still aren’t much help for sorting out complicated nuclear particle dynamics, the bootstrap is proving to be a powerful tool for understanding more symmetric, perfect theories that, according to experts, serve as “signposts” or “building blocks” in the space of all possible quantum field theories.

As the new generation of bootstrappers explores this abstract theory space, they seem to be verifying the vision that Chew, now 92 and long retired, laid out half a century ago — but they’re doing it in an unexpected way. Their findings indicate that the set of all quantum field theories forms a unique mathematical structure, one that does indeed pull itself up by its own bootstraps, which means it can be understood on its own terms.

As physicists use the bootstrap to explore the geometry of this theory space, they are pinpointing the roots of “universality,” a remarkable phenomenon in which identical behaviors emerge in materials as different as magnets and water. They are also discovering general features of quantum gravity theories, with apparent implications for the quantum origin of gravity in our own universe and the origin of space-time itself. As leading practitioners David Poland of Yale University and David Simmons-Duffin of the Institute for Advanced Study in Princeton, New Jersey, wrote in a recent article, “It is an exciting time to be bootstrapping.”

Bespoke Bootstrap

The bootstrap is technically a method for computing “correlation functions” — formulas that encode the relationships between the particles described by a quantum field theory. Consider a chunk of iron. The correlation functions of this system express the likelihood that iron atoms will be magnetically oriented in the same direction, as a function of the distances between them. The two-point correlation function gives you the likelihood that any two atoms will be aligned, the three-point correlation function encodes correlations between any three atoms, and so on. These functions tell you essentially everything about the iron chunk. But they involve infinitely many terms riddled with unknown exponents and coefficients. They are, in general, onerous to compute. The bootstrap approach is to try to constrain what the terms of the functions can possibly be in hopes of solving for the unknown variables. Most of the time, this doesn’t get you far. But in special cases, as the theoretical physicist Alexander Polyakov began to figure out in 1970, the bootstrap takes you all the way.

Polyakov, then at the Landau Institute for Theoretical Physics in Russia, was drawn to these special cases by the mystery of universality. As condensed matter physicists were just discovering, when materials that are completely different at the microscopic level are tuned to the critical points at which they undergo phase transitions, they suddenly exhibit the same behaviors and can be described by the exact same handful of numbers. Heat iron to the critical temperature where it ceases to be magnetized, for instance, and the correlations between its atoms are defined by the same “critical exponents” that characterize water at the critical point where its liquid and vapor phases meet. These critical exponents are clearly independent of either material’s microscopic details, arising instead from something that both systems, and others in their “universality class,” have in common. Polyakov and other researchers wanted to find the universal laws connecting these systems. “And the goal, the holy grail of all that, was these numbers,” he said: Researchers wished to be able to calculate the critical exponents from scratch.

What materials at critical points have in common, Polyakov realized, is their symmetries: the set of geometric transformations that leave these systems unchanged. He conjectured that critical materials respect a group of symmetries called “conformal symmetries,” including, most importantly, scale symmetry. Zoom in or out on, say, iron at its critical point, and you always see the same pattern: Patches of atoms oriented with north pointing up are surrounded by patches of atoms pointing downward; these in turn are inside larger patches of up-facing atoms, and so on at all scales of magnification. Scale symmetry means there are no absolute notions of “near” and “far” in conformal systems; if you flip one of the iron atoms, the effect is felt everywhere. “The whole thing organizes as some very strongly correlated medium,” Polyakov explained.

The world at large is obviously not conformal. The existence of quarks and other elementary particles “breaks” scale symmetry by introducing fundamental mass and distance scales into nature, against which other masses and lengths can be measured. Consequently, planets, composed of hordes of particles, are much heavier and bigger than we are, and we are much larger than atoms, which are giants next to quarks. Symmetry-breaking makes nature hierarchical and injects arbitrary variables into its correlation functions — the qualities that sapped Chew’s bootstrap method of its power.

But conformal systems, described by “conformal field theories” (CFTs), are uniform all the way up and down, and this, Polyakov discovered, makes them highly amenable to a bootstrap approach. In a magnet at its critical point, for instance, scale symmetry constrains the two-point correlation function by requiring that it must stay the same when you rescale the distance between the two points. Another conformal symmetry says the three-point function must not change when you invert the three distances involved. In a landmark 1983 paper known simply as “BPZ,” Alexander Belavin, Polyakov and Alexander Zamolodchikov showed that there are an infinite number of conformal symmetries in two spatial dimensions that could be used to constrain the correlation functions of two-dimensional conformal field theories. The authors exploited these symmetries to solve for the critical exponents of a famous CFT called the 2-D Ising model — essentially the theory of a flat magnet. The “conformal bootstrap,” BPZ’s bespoke procedure for exploiting conformal symmetries, shot to fame.
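(For readers who want to see what "constraining the correlation functions" means concretely: in standard CFT notation, which the article itself doesn't spell out, conformal symmetry pins down the two- and three-point functions of scalar operators up to a few constants, with each operator carrying a scaling dimension Δ:)

\langle \mathcal{O}(x_1)\,\mathcal{O}(x_2) \rangle = \frac{C}{|x_1 - x_2|^{2\Delta}},
\qquad
\langle \mathcal{O}_1(x_1)\,\mathcal{O}_2(x_2)\,\mathcal{O}_3(x_3) \rangle
  = \frac{C_{123}}{|x_{12}|^{\Delta_1+\Delta_2-\Delta_3}\,|x_{23}|^{\Delta_2+\Delta_3-\Delta_1}\,|x_{13}|^{\Delta_3+\Delta_1-\Delta_2}},
\qquad x_{ij} = x_i - x_j.

These power-law forms are the only ones compatible with the rescaling and inversion constraints described above; the bootstrap's job is then to fix the remaining constants and dimensions.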

Far fewer conformal symmetries exist in three dimensions or higher, however. Polyakov could write down a “bootstrap equation” for 3-D CFTs — essentially, an equation saying that one way of writing the four-point correlation function of, say, a real magnet must equal another — but the equation was too difficult to solve.

“I basically started doing other things,” said Polyakov, who went on to make seminal contributions to string theory and is now a professor at Princeton University. The conformal bootstrap, like the original bootstrap more than a decade earlier, fell into disuse. The lull lasted until 2008, when a group of researchers discovered a powerful trick for approximating solutions to Polyakov’s bootstrap equation for CFTs with three or more dimensions. “Frankly, I didn’t expect this, and I thought originally that there is some mistake there,” Polyakov said. “It seemed to me that the information put into the equations is too little to get such results.”

Surprise Kinks

In 2008, the Large Hadron Collider was about to begin searching for the Higgs boson, an elementary particle whose associated field imbues other particles with mass. Theorists Riccardo Rattazzi in Switzerland, Vyacheslav Rychkov in Italy and their collaborators wanted to see whether there might be a conformal field theory that is responsible for the mass-giving instead of the Higgs. They wrote down a bootstrap equation that such a theory would have to satisfy. Because this was a four-dimensional conformal field theory, describing a hypothetical quantum field in a universe with four space-time dimensions, the bootstrap equation was too complex to solve. But the researchers found a way to put bounds on the possible properties of that theory. In the end, they concluded that no such CFT existed (and indeed, the LHC found the Higgs boson in 2012). But their new bootstrap trick opened up a gold mine.

Their trick was to translate the constraints on the bootstrap equation into a geometry problem. Imagine the four points of the four-point correlation function (which encodes virtually everything about a CFT) as corners of a rectangle; the bootstrap equation says that if you perturb a conformal system at corners one and two and measure the effects at corners three and four, or you tickle the system at one and three and measure at two and four, the same correlation function holds in both cases. Both ways of writing the function involve infinite series of terms; their equivalence means that the first infinite series minus the second equals zero. To find out which terms satisfy this constraint, Rattazzi, Rychkov and company called upon another consistency condition called “unitarity,” which demands that all the terms in the equation must have positive coefficients. This enabled them to treat the terms as vectors, or little arrows that extend in an infinite number of directions from a central point. And if a plane could be found such that, in a finite subset of dimensions, all the vectors point to one side of the plane, then there’s an imbalance; this particular set of terms cannot sum to zero, and does not represent a solution to the bootstrap equation.

Physicists developed algorithms that allowed them to search for such planes and bound the space of viable CFTs to extremely high accuracy. The simplest version of the procedure generates “exclusion plots” where two curves meet at a point known as a “kink.” The plots rule out CFTs with critical exponents that lie outside the area bounded by the curves.

Surprising features of these plots have emerged. In 2012, researchers used Rattazzi and Rychkov’s trick to home in on the values of the critical exponents of the 3-D Ising model, a notoriously complex CFT that is in the same universality class as real magnets, water, liquid mixtures and many other materials at their critical points. By 2016, Poland and Simmons-Duffin had calculated the two main critical exponents of the theory out to six decimal places. But even more striking than this level of precision is where the 3-D Ising model lands in the space of all possible 3-D CFTs. Its critical exponents could have landed anywhere in the allowed region on the 3-D CFT exclusion plot, but unexpectedly, the values land exactly at the kink in the plot. Critical exponents corresponding to other well-known universality classes lie at kinks in other exclusion plots. Somehow, generic calculations were pinpointing important theories that show up in the real world.

The discovery was so unexpected that Polyakov initially didn’t believe it. His suspicion, shared by others, was that “maybe this happens because there is some hidden symmetry that we didn’t find yet.”

“Everyone is excited because these kinks are unexpected and interesting, and they tell you where interesting theories live,” said Nima Arkani-Hamed, a professor of physics at the Institute for Advanced Study. “It could be reflecting a polyhedral structure of the space of allowed conformal field theories, with interesting theories living not in the interior or some random place, but living at the corners.” Other researchers agreed that this is what the plots suggest. Arkani-Hamed speculates that the polyhedron is related to, or might even encompass, the “amplituhedron,” a geometric object that he and a collaborator discovered in 2013 that encodes the probabilities of different particle collision outcomes — specific examples of correlation functions.

Researchers are pushing in all directions. Some are applying the bootstrap to get a handle on an especially symmetric “superconformal” field theory known as the (2,0) theory, which plays a role in string theory and is conjectured to exist in six dimensions. But Simmons-Duffin explained that the effort to explore CFTs will take physicists beyond these special theories. More general quantum field theories like quantum chromodynamics can be derived by starting with a CFT and “flowing” its properties using a mathematical procedure called the renormalization group. “CFTs are kind of like signposts in the landscape of quantum field theories, and renormalization-group flows are like the roads,” Simmons-Duffin said. “So you’ve got to first understand the signposts, and then you can try to describe the roads between them, and in that way you can kind of make a map of the space of theories.”

Tom Hartman, a bootstrapper at Cornell University, said mapping out the space of quantum field theories is the “grand goal of the bootstrap program.” The CFT plots, he said, “are some very fuzzy version of that ultimate map.”

Uncovering the polyhedral structure representing all possible quantum field theories would, in a sense, unify quark interactions, magnets and all observed and imagined phenomena in a single, inevitable structure — a sort of 21st-century version of Geoffrey Chew’s “only possible nature consistent with itself.” But as Hartman, Simmons-Duffin and scores of other researchers around the world pursue this abstraction, they are also using the bootstrap to exploit a direct connection between CFTs and the theories many physicists care about most. “Exploring possible conformal field theories is also exploring possible theories of quantum gravity,” Hartman said.

Bootstrapping Quantum Gravity

The conformal bootstrap is turning out to be a power tool for quantum gravity research. In a 1997 paper that is now one of the most highly cited in physics history, the Argentinian-American theorist Juan Maldacena demonstrated a mathematical equivalence between a CFT and a gravitational space-time environment with at least one extra spatial dimension. Maldacena’s duality, called the “AdS/CFT correspondence,” tied the CFT to a corresponding “anti-de Sitter space,” which, with its extra dimension, pops out of the conformal system like a hologram. AdS space has a fish-eye geometry different from the geometry of space-time in our own universe, and yet gravity there works in much the same way as it does here. Both geometries, for instance, give rise to black holes — paradoxical objects that are so dense that nothing inside them can escape their gravity.

Existing theories do not apply inside black holes; if you try to combine quantum theory there with Albert Einstein’s theory of gravity (which casts gravity as curves in the space-time fabric), paradoxes arise. One major question is how black holes manage to preserve quantum information, even as Einstein’s theory says they evaporate. Solving this paradox requires physicists to find a quantum theory of gravity — a more fundamental conceptualization from which the space-time picture emerges at low energies, such as outside black holes. “The amazing thing about AdS/CFT is, it gives a working example of quantum gravity where everything is well-defined and all we have to do is study it and find answers to these paradoxes,” Simmons-Duffin said.

If the AdS/CFT correspondence provides theoretical physicists with a microscope onto quantum gravity theories, the conformal bootstrap has allowed them to switch on the microscope light. In 2009, theorists used the bootstrap to find evidence that every CFT meeting certain conditions has an approximate dual gravitational theory in AdS space. They’ve since been working out a precise dictionary to translate between critical exponents and other properties of CFTs and equivalent features of the AdS-space hologram.

Over the past year, bootstrappers like Hartman and Jared Kaplan of Johns Hopkins University have made quick progress in understanding how black holes work in these fish-eye universes, and in particular, how information gets preserved during black hole evaporation. This could significantly impact the understanding of the quantum nature of gravity and space-time in our own universe. “If I have some small black hole, it doesn’t care whether it’s in AdS space; it’s small compared to the size of the curvature,” Kaplan explained. “So if you can resolve these conceptual issues in AdS space, then it seems very plausible that the same resolution applies in cosmology.”

It’s far from clear whether our own universe holographically emerges from a conformal field theory in the way that AdS universes do, or if this is even the right way to think about it. The hope is that, by bootstrapping their way around the unifying geometric structure of possible physical realities, physicists will get a better sense of where our universe fits in the grand scheme of things — and what that grand scheme is. Polyakov is buoyed by the recent discoveries about the geometry of the theory space. “There are a lot of miracles happening,” he said. “And probably, we will know why.”

Correction: On February 24, this article was changed to clarify that heating iron to its critical point would cause it to lose magnetization. In addition, the two main exponents of the 3-D Ising model have been calculated out to six decimal places, not their “millionth,” as the article originally stated.

This article was reprinted on Wired.com.

Stimulus: A modest JavaScript framework for the HTML you already have



A modest JavaScript framework for the HTML you already have

Stimulus is a JavaScript framework with modest ambitions. It doesn't seek to take over your entire front-end—in fact, it's not concerned with rendering HTML at all. Instead, it's designed to augment your HTML with just enough behavior to make it shine. Stimulus pairs beautifully with Turbolinks to provide a complete solution for fast, compelling applications with a minimal amount of effort.

How does it work? Sprinkle your HTML with magic controller, target, and action attributes:

<div data-controller="hello">
  <input data-target="hello.name" type="text">
  <button data-action="click->hello#greet">Greet</button>
</div>

Then write a compatible controller. Stimulus brings it to life automatically:

// hello_controller.js
import { Controller } from "stimulus"

export default class extends Controller {
  greet() {
    console.log(`Hello, ${this.name}!`)
  }

  get name() {
    return this.targets.find("name").value
  }
}

Stimulus continuously watches the page, kicking in as soon as magic attributes appear or disappear. It works with any update to the DOM, regardless of whether it comes from a full page load, a Turbolinks page change, or an Ajax request. Stimulus manages the whole lifecycle.

You can write your first controller in five minutes by following along in The Stimulus Handbook.

You can read more about why we created this new framework in The Origin of Stimulus.

Installing Stimulus

Stimulus integrates with the webpack asset packager to automatically load controller files from a folder in your app.
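As a rough sketch of that setup (the entry-point path and the controllers folder name here are assumptions about your project layout, not something this README prescribes):

// src/index.js
import { Application } from "stimulus"
import { definitionsFromContext } from "stimulus/webpack-helpers"

// Start a Stimulus application and register every controller that
// webpack's require.context finds in the ./controllers folder.
const application = Application.start()
const context = require.context("./controllers", true, /\.js$/)
application.load(definitionsFromContext(context))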

You can use Stimulus with other asset packaging systems, too. And if you prefer no build step at all, just drop a <script> tag on the page and get right down to business.

See the Installation Guide for detailed instructions.

Contributing Back

Stimulus is MIT-licensed open source software from Basecamp, the creators of Ruby on Rails.

Have a question about Stimulus? Find a bug? Think the documentation could use some improvement? Head over to our issue tracker and we'll do our best to help. We love pull requests, too!

We expect all Stimulus contributors to abide by the terms of our Code of Conduct.


© 2018 Basecamp, LLC.

Mapzen Shutdown


Unfortunately, we have some sad news. Mapzen will cease operations at the end of January 2018. Our hosted APIs and all related support and services will turn off on February 1, 2018. You will not be charged for API usage in December/January. We know this is an inconvenience and have provided a migration guide to similar services for our developer community. Our goal is to help as much as possible to ensure continuity in the services that you have built with us.

Fortunately, the core products of Mapzen are built entirely on open software and data. As a result, there are options to run Mapzen services yourself or to switch to other service providers.

Thanks for being with us over the past four years. We’re grateful that the open work we’ve done can continue outside of Mapzen and while we know this is a sad day, we’re optimistic about what’s next.

Preview image: Air Afrique Map of the World by M. Bourie from the David Rumsey Map Collection

Linux page table isolation is not needed on AMD processors

Subject: [PATCH] x86/cpu, x86/pti: Do not enable PTI on AMD processors
AMD processors are not subject to the types of attacks that the kernel
page table isolation feature protects against. The AMD microarchitecture
does not allow memory references, including speculative references, that
access higher privileged data when running in a lesser privileged mode
when that access would result in a page fault.

Disable page table isolation by default on AMD processors by not setting
the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI
is set.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
arch/x86/kernel/cpu/common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index c47de4e..7d9e3b0 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -923,8 +923,8 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)

setup_force_cpu_cap(X86_FEATURE_ALWAYS);

- /* Assume for now that ALL x86 CPUs are insecure */
- setup_force_cpu_bug(X86_BUG_CPU_INSECURE);
+ if (c->x86_vendor != X86_VENDOR_AMD)
+ setup_force_cpu_bug(X86_BUG_CPU_INSECURE);

fpu__init_system(c);

Ask HN: Who is hiring? (January 2018)

Please lead with the location of the position and include the keywords REMOTE, INTERNS and/or VISA when the corresponding sort of candidate is welcome. When remote work is not an option, include ONSITE. If it isn't a household name, please explain what your company does.

Submitters: please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company please.

Readers: please only email submitters if you personally are interested in the job—no recruiters or sales calls.

You can also use kristopolous' console script to search the thread: https://news.ycombinator.com/item?id=10313519.

Note for this month: please don't go through these posts and downvote them in bulk. Users who do that eventually lose downvoting rights.



Show HN: Sapper.js – towards a better web app framework


Taking the next-plus-one step

Quickstart for the impatient: the Sapper docs, and the starter template

If you had to list the characteristics of the perfect Node.js web application framework, you'd probably come up with something like this:

  1. It should do server-side rendering, for fast initial loads and no caveats around SEO
  2. As a corollary, your app's codebase should be universal — write once for server and client
  3. The client-side app should hydrate the server-rendered HTML, attaching event listeners (and so on) to existing elements rather than re-rendering them
  4. Navigating to subsequent pages should be instantaneous
  5. Offline, and other Progressive Web App characteristics, must be supported out of the box
  6. Only the JavaScript and CSS required for the first page should load initially. That means the framework should do automatic code-splitting at the route level, and support dynamic import(...) for more granular manual control
  7. No compromise on performance
  8. First-rate developer experience, with hot module reloading and all the trimmings
  9. The resulting codebase should be easy to grok and maintain
  10. It should be possible to understand and customise every aspect of the system — no webpack configs locked up in the framework, and as little hidden 'plumbing' as possible
  11. Learning the entire framework in under an hour should be easy, and not just for experienced developers

Next.js is close to this ideal. If you haven't encountered it yet, I strongly recommend going through the tutorials at learnnextjs.com. Next introduced a brilliant idea: all the pages of your app are files in a your-project/pages directory, and each of those files is just a React component.

Everything else flows from that breakthrough design decision. Finding the code responsible for a given page is easy, because you can just look at the filesystem rather than playing 'guess the component name'. Project structure bikeshedding is a thing of the past. And the combination of SSR (server-side rendering) and code-splitting — something the React Router team gave up on, declaring 'Godspeed those who attempt the server-rendered, code-split apps' — is trivial.

But it's not perfect. As churlish as it might be to list the flaws in something so, so good, there are some:

  • Next uses something called 'route masking' to create nice URLs (e.g. /blog/hello-world instead of /post?slug=hello-world). This undermines the guarantee about directory structure corresponding to app structure, and forces you to maintain configuration that translates between the two forms
  • All your routes are assumed to be universal 'pages'. But it's very common to need routes that only render on the server, such as a 301 redirect or an API endpoint that serves the data for your pages, and Next doesn't have a great solution for this. You can add logic to your server.js file to handle these cases, but it feels at odds with the declarative approach taken for pages
  • To use the client-side router, links can't be standard <a> tags. Instead, you have to use framework-specific <Link> components, which is impossible in the markdown content for a blog post such as this one, for example

The real problem, though, is that all that goodness comes for a price. The simplest possible Next app — a single 'hello world' page that renders some static text — involves 66kb of gzipped JavaScript. Unzipped, it's 204kb, which is a non-trivial amount of code for a mobile device to parse at a time when performance is a critical factor determining whether or not your users will stick around. And that's the baseline.

We can do better!

The compiler-as-framework paradigm shift

Svelte introduced a radical idea: what if your UI framework wasn't a framework at all, but a compiler that turned your components into standalone JavaScript modules? Instead of using a library like React or Vue, which knows nothing about your app and must therefore be a one-size-fits-all solution, we can ship highly-optimised vanilla JavaScript. Just the code your app needs, and without the memory and performance overhead of solutions based on a virtual DOM.

The JavaScript world is moving towards this model. Stencil, a Svelte-inspired framework from the Ionic team, compiles to web components. Glimmer doesn't compile to standalone JavaScript (the pros and cons of which deserve a separate blog post), but the team is doing some fascinating research around compiling templates to bytecode. (React is getting in on the action, though their current research focuses on optimising your JSX app code, which is arguably more similar to the ahead-of-time optimisations that Angular, Ractive and Vue have been doing for a few years.)

What happens if we use the new model as a starting point?

Introducing Sapper

Sapper is the answer to that question. Sapper is a Next.js-style framework that aims to meet the eleven criteria at the top of this article while dramatically reducing the amount of code that gets sent to the browser. It's implemented as Express-compatible middleware, meaning it's easy to understand and customise.
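As a sketch of what that wiring can look like in practice (the sapper() middleware factory call below is an assumption for illustration rather than a quote from the docs; the starter template has the canonical setup):

// server.js
const express = require('express');
const serve = require('serve-static');
const sapper = require('sapper'); // assumed middleware factory; see the starter template for the real wiring

const app = express();

app.use(serve('assets')); // static files
app.use(sapper());        // everything else is handled by Sapper's routes

app.listen(process.env.PORT || 3000);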

The same 'hello world' app that took 204kb with React and Next weighs just 7kb with Sapper. That number is likely to fall further in the future as we explore the space of optimisation possibilities, such as not shipping any JavaScript at all for pages that aren't interactive, beyond the tiny Sapper runtime that handles client-side routing.

What about a more 'real world' example? Conveniently, the RealWorld project, which challenges frameworks to develop an implementation of a Medium clone, gives us a way to find out. The Sapper implementation takes 39.6kb (11.8kb zipped) to render an interactive homepage.

The entire app costs 132.7kb (39.9kb zipped), which is significantly smaller than the reference React/Redux implementation at 327kb (85.7kb), but even if it was as large it would feel faster because of code-splitting. And that's a crucial point. We're told we need to code-split our apps, but if your app uses a traditional framework like React or Vue then there's a hard lower bound on the size of your initial code-split chunk — the framework itself, which is likely to be a significant portion of your total app size. With the Svelte approach, that's no longer the case.

But size is only part of the story. Svelte apps are also extremely performant and memory-efficient, and the framework includes powerful features that you would sacrifice if you chose a 'minimal' or 'simple' UI library.

Trade-offs

The biggest drawback for many developers evaluating Sapper would be 'but I like React, and I already know how to use it', which is fair.

If you're in that camp, I'd invite you to at least try alternative frameworks. You might be pleasantly surprised! The Sapper RealWorld implementation totals 1,201 lines of source code, compared to 2,377 for the reference implementation, because you're able to express concepts very concisely using Svelte's template syntax (which takes all of five minutes to master). You get scoped CSS, with unused style removal and minification built-in, and you can use preprocessors like LESS if you want. You no longer need to use Babel. SSR is ridiculously fast, because it's just string concatenation. And we recently introduced svelte/store, a tiny global store that synchronises state across your component hierarchy with zero boilerplate. The worst that can happen is that you'll end up feeling vindicated!

But there are trade-offs nonetheless. Some people have a pathological aversion to any form of 'template language', and maybe that applies to you. JSX proponents will clobber you with the 'it's just JavaScript' mantra, and therein lies React's greatest strength, which is that it is infinitely flexible. That flexibility comes with its own set of trade-offs, but we're not here to discuss those.

And then there's ecosystem. The universe around React in particular — the devtools, editor integrations, ancillary libraries, tutorials, StackOverflow answers, hell, even job opportunities — is unrivalled. While it's true that citing 'ecosystem' as the main reason to choose a tool is a sign that you're stuck on a local maximum, apt to be marooned by the rising waters of progress, it's still a major point in favour of incumbents.

Roadmap

We're not at version 1.0.0 yet, and a few things may change before we get there. Once we do (soon!), there are a lot of exciting possibilities.

I believe the next frontier of web performance is 'whole-app optimisation'. Currently, Svelte's compiler operates at the component level, but a compiler that understood the boundaries between those components could generate even more efficient code. The React team's Prepack research is predicated on a similar idea, and the Glimmer team is doing some interesting work in this space. Svelte and Sapper are well positioned to take advantage of these ideas.

Speaking of Glimmer, the idea of compiling components to bytecode is one that we'll probably steal in 2018. A framework like Sapper could conceivably determine which compilation mode to use based on the characteristics of your app. It could even serve JavaScript for the initial route for the fastest possible startup time, then lazily serve a bytecode interpreter for subsequent routes, resulting in the optimal combination of startup size and total app size.

Mostly, though, we want the direction of Sapper to be determined by its users. If you're the kind of developer who enjoys life on the bleeding edge and would like to help shape the future of how we build web apps, please join us on GitHub and Gitter.


After Equifax breach, anger but no action in Congress


The massive Equifax data breach, which compromised the identities of more than 145 million Americans, prompted a telling response from Congress: It did nothing.

Some industry leaders and lawmakers thought September’s revelation of the massive intrusion — which took place months after the credit reporting agency failed to act on a warning from the Homeland Security Department — might be the long-envisioned incident that prompted Congress to finally fix the country’s confusing and ineffectual data security laws.


Instead, the aftermath of the breach played out like a familiar script: white-hot, bipartisan outrage, followed by hearings and a flurry of proposals that went nowhere. As is often the case, Congress gradually shifted to other priorities — this time the most sweeping tax code overhaul in a generation, and another mad scramble to fund the federal government.

“It’s very frustrating,” said Rep. Jan Schakowsky of Illinois, the top Democrat on the House Energy and Commerce consumer protection subcommittee, who introduced legislation in the wake of the Equifax incident.

“Every time another shoe falls, I think, ‘Ah, this is it. This will get us galvanized and pull together and march in the same direction.’ Hasn’t happened yet,” said Sen. Tom Carper (D-Del.), a member of a broader Senate working group that has tinkered for years to come up with data breach legislation.

Every time lawmakers punt on the issue, critics say, they are leaving Americans more exposed to ruinous identity theft scams — and allowing companies to evade responsibility. With no sign that mammoth data breaches like the one at Equifax are abating, the situation is only growing more dire, according to cyberspecialists.

In the meantime, companies and consumers are left to navigate 48 different state-level standards that govern how companies must protect sensitive data and respond to data breaches. Companies say the varying rules are costly and time-consuming, while cyberspecialists and privacy hawks argue they do little to keep Americans’ data safe.

But while industry groups, security experts, privacy advocates and lawmakers of both parties agree that Congress must do something to unify these laws, no one has been able to agree on what that “something” should be.

On Capitol Hill, lawmakers have struggled to navigate an issue that touches several committees, while tussling over how strongly a federal law should preempt state regulations — Democrats worry a weak federal standard might supplant robust state laws, but Republicans don’t want to give too much power to federal regulators.

In the private sector, industries like banking — which already have strict, sector-specific data security rules on the books — have pushed to apply their regulations to broad swaths of the economy. But other industries, such as retailers, believe such a move would impose unnecessary standards on smaller businesses that don’t collect as much sensitive data.

Lawmakers told POLITICO that similar forces were at play post-Equifax.

Carper’s working group is effectively “on hold” for now, he told POLITICO, falling victim to jurisdictional issues. The group features Republican leaders like Senate Commerce Chairman John Thune of South Dakota and Judiciary Chairman Chuck Grassley of Iowa, as well as senior Democrats like Intelligence ranking member Mark Warner of Virginia and Dianne Feinstein of California, ranking member on the Judiciary panel.

And long-running industry battles came roaring back, Warner said. While lawmakers mostly got the retail and banking industries on board, telecommunications firms — which are already subject to industry-specific privacy rules — became a sticking point.

“I think one of the problems was telecom,” said Warner, himself a former telecom executive.

“All industries have to be covered” by federal data breach laws, Warner added. “But then, how they’re covered could be tailored. … What you can’t do is start coming along and carving out. I think that still remains to be an issue.”

The lack of legislative response has industry groups and lawmakers, including Thune, uttering a familiar refrain: Wait until next year.

In a statement, Thune said that as much as he favors “an effective and coordinated approach on data security issues across industries, the reality is that our legislative progress has been much more incremental this year.

“There hasn’t been — and still isn’t — consensus among major stakeholders on data breach and data security legislation,” he added. “There isn’t a panacea for cybersecurity and the absolute worst thing we could do is pass an ineffective mandate that leads Americans to take our guard down.”

Those working on the issue also expressed cautious optimism about 2018, despite the fact that Congress has been bullish about “next year” for the last half-decade, to no avail.

“I think there is a political dynamic and clearly a policy interest in doing something to stop these breaches, by deterring them and helping people who are harmed by them,” said Sen. Richard Blumenthal (D-Conn.), who is backing legislation that would let prosecutors potentially seek jail time for companies that cover up data breaches. Ride-hailing giant Uber was recently caught mounting such a cover-up, when it disclosed it had paid $100,000 to keep hackers silent about a 2016 digital theft that compromised 57 million customers’ information.

“Certainly, every member here has had constituents that have been victims of these breaches, be it Target or Equifax or whoever, and you would think that they would be interested in moving ahead,” added Schakowsky, whose bill would set federal digital security benchmarks and require prompt notification and ongoing assistance to breach victims.

House Energy and Commerce Chairman Greg Walden told POLITICO he was working on “a more consumer-first policy” that he plans to unveil after additional hearings in 2018.

The measure would create penalties in a way that “actually inures to the benefit of the consumer,” instead of “just another penalty to the government [where] the government gets paid,” said the Oregon Republican.

Industry representatives who track the negotiations said some progress is occurring on Capitol Hill, despite the lack of concrete steps in the past four months.

Lawmakers are trying to produce a bill that can actually move, said Jason Kratovil, vice president of government affairs for payments at the Financial Services Roundtable, which has backed proposals from a House Financial Services subpanel in the past.

“I think there’s a lot of energy being spent trying to get this one right and work toward a legislative outcome that isn’t just a product of one committee and one committee’s jurisdiction, but instead is something that is going to have a lot of interest and a lot of support from many different stakeholders,” he told POLITICO.

A retail industry representative told POLITICO that 2018 might be different because industries are increasingly not using data security as a proxy battle for other policy fights.

“In this case, we think we can find some common ground,” said the individual, who requested anonymity to discuss behind-the-scenes negotiations. “I don’t think that was there before.”

The overriding factor pressuring lawmakers and industry groups to take action will be the gobsmacking way companies have mishandled their data breaches — a trend that shows no signs of ending in 2018.

But if the Equifax breach — which featured basic security failures, allegations of insider trading and possible attempts to prevent consumers from suing the company — didn’t do the trick, some aren’t sure what will.

“When you’ve got 145 million people, you would think, but …” Schakowsky said, trailing off and throwing up her hands.

Pepper's Cone: An Inexpensive Do-It-Yourself 3D Display


Our 3D display consists of (a) an iPad, a thin hollow plastic cone, and a rotatable base. The nickel at the base provides stability. (b-c) As the user rotates the display, the system renders a perspective-correct image for their point of view that gives a convincing impression of a 3D object suspended inside the cone. This provides a very simple way of interactively examining a 3D scene for a fraction of the cost of alternative volumetric or light field displays and doesn't require the use of special glasses. (d) In addition, the system can be extended to produce correct binocular cues by incorporating stereoscopic rendering and glasses.

Chisel: Constructing Hardware in a Scala Embedded Language

import chisel3._

class GCD extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(32.W))
    val b = Input(UInt(32.W))
    val e = Input(Bool())
    val z = Output(UInt(32.W))
    val v = Output(Bool())
  })
  val x = Reg(UInt(32.W))
  val y = Reg(UInt(32.W))
  when (x > y) { x := x -% y } .otherwise { y := y -% x }
  when (io.e) { x := io.a; y := io.b }
  io.z := x
  io.v := y === 0.U
}

import chisel3._

class MaxN(val n: Int, val w: Int) extends Module {
  private def Max2(x: UInt, y: UInt) = Mux(x > y, x, y)
  val io = IO(new Bundle {
    val ins = Input(Vec(n, UInt(w.W)))
    val out = Output(UInt(w.W))
  })
  io.out := io.ins.reduceLeft(Max2)
}

import chisel3._
import scala.collection.mutable.ArrayBuffer

/** Four-by-four multiply using a look-up table. */
class Mul extends Module {
  val io = IO(new Bundle {
    val x = Input(UInt(4.W))
    val y = Input(UInt(4.W))
    val z = Output(UInt(8.W))
  })
  val muls = new ArrayBuffer[UInt]()

  // Calculate io.z = io.x * io.y by building/filling out muls
  for (i <- 0 until 16)
    for (j <- 0 until 16)
      muls += (i * j).U(8.W)

  val tbl = Vec(muls)
  io.z := tbl((io.x << 4.U) | io.y)
}

import chisel3._
import chisel3.util._

// A n-bit adder with carry in and carry out
class Adder(val n: Int) extends Module {
  val io = IO(new Bundle {
    val A    = Input(UInt(n.W))
    val B    = Input(UInt(n.W))
    val Cin  = Input(UInt(1.W))
    val Sum  = Output(UInt(n.W))
    val Cout = Output(UInt(1.W))
  })
  // create a vector of FullAdders
  val FAs   = Vec.fill(n)(Module(new FullAdder()).io)
  val carry = Wire(Vec(n + 1, UInt(1.W)))
  val sum   = Wire(Vec(n, Bool()))

  // first carry is the top level carry in
  carry(0) := io.Cin

  // wire up the ports of the full adders
  for (i <- 0 until n) {
    FAs(i).a   := io.A(i)
    FAs(i).b   := io.B(i)
    FAs(i).cin := carry(i)
    carry(i + 1) := FAs(i).cout
    sum(i) := FAs(i).sum.toBool()
  }

  io.Sum  := Cat(sum.reverse)
  io.Cout := carry(n)
}

Fix All Conflicts: Easy-To-use CUI for Fixing Git Conflicts


README.md

Easy-to-use CUI for fixing git conflicts

I never really liked any of the mergetools out there so I made a simple program that does simple things… in a simple fashion.

👷 Installation

Execute:

$ go get github.com/mkchoi212/fac

Or using Homebrew 🍺

brew tap mkchoi212/fac https://github.com/mkchoi212/fac.git

Using

fac operates much like git add -p. It has a prompt input at the bottom of the screen where the user inputs various commands.

The commands have been preset to the following specifications

w - show more lines up
s - show more lines down
a - use local version
d - use incoming version

j - scroll down
k - scroll up

v - [v]iew orientation 
n - [n]ext conflict
p - [p]revious conflict 

h | ? - [h]elp
q | Ctrl+c - [q]uit

[wasd] >> [INPUT HERE]

The movement controls have been derived from both the world of gamers (WASD) and VIM users (HJKL).

Contributing

This is an open source project, so feel free to contribute.

👮 License

See License

The Legacy of the Mississippi Delta Chinese


The Mississippi Delta is known for sharecroppers, cotton fields and blues music, but it has also been a hub to Chinese immigrants over the last century. (Elissa Nadworny/NPR)

Think of the Mississippi Delta. Maybe you imagine cotton fields, sharecroppers and blues music.

It's been all that. But for more than a century, the Delta has also been a magnet for immigrants. I was intrigued to learn about one immigrant group in particular: the Delta Chinese.

To find out more, I traveled to Greenville, Miss., a small city along the Mississippi River. I meet Raymond Wong in Greenville's Chinese cemetery, right across a quiet road from an African-American cemetery. Wong's family has long been part of a thriving — but separate — Chinese community.

"We were in-between," Wong explains, "right in between the blacks and the whites. We're not black, we're not white. So that by itself gives you some isolation."

Raymond Wong (top) visits the gravesite where his parents are buried in Greenville, Miss. Wong's parents are buried in the Chinese cemetery, right across from an African-American cemetery. His family has long been part of a thriving — but separate — Chinese community. (Elissa Nadworny/NPR)

We walk in the shade of a huge magnolia tree that stretches out over the gravestones. They're carved with Chinese characters, and bear the names of the Chinese families whose history here goes back decades: Quong, Yu, Jung, Fu.... and Wong.

Raymond Wong leads me to the graves of his parents, Pon Chu Lum Wong and Suey (Henry) Heong Wong. His father immigrated to the Mississippi Delta from Canton, or Guangdong, province when he was 15 in the 1930s; his mother arrived several years later.

Like most of the Delta Chinese, they were merchants. Virtually all the Chinese families of that generation opened and ran grocery stores.

The number of Chinese merchants and grocers steadily grew throughout the Mississippi Delta in the late 1930s and early 1940s. (Marion Post Wolcott/Library of Congress)

The first wave of Chinese immigrants came to the Delta soon after the Civil War, and the pace picked up by the early 1900s. The Chinese originally came to work picking cotton, but they quickly soured on farming. They started opening grocery stores, mostly in the African-American communities where they lived.

Greenville, in particular, was known for the dozens of Chinese groceries open here in the heyday: as many as 50 stores in a city of some 40,000 people. "I was raised in a grocery store," Wong says, and he means it literally.

Inside the Mississippi Delta Chinese Heritage Museum on the campus of Delta State University in Cleveland, Miss. (Elissa Nadworny/NPR)

The Wong family lived — all six of them — in a couple of rooms at the back of their store.

"Everyone else I know grew up in grocery stores," Wong says. "I'm sure as soon as we could count money we had to work in the front."

The stores stocked meat, fresh vegetables, canned goods, laundry soap, washtubs, anything you might need. Nothing Chinese about them, except the owners. "On my block itself, we had at least four grocery stores," Wong recalls. "I'm talking about a small block, too."

Kim Ma grocery store used to be owned by Raymond Wong's (pictured) parents. He grew up there — literally. His family lived in the back of the store. (Elissa Nadworny/NPR)

In 1968, Wong's father opened a Chinese restaurant called How Joy in Greenville, one of the first in the Mississippi interior. Raymond Wong says it was a gamble. At the time, he says, "nobody knew what Chinese food is." But the restaurant flourished for 40 years. Raymond worked there, too, serving How Joy steak, butterfly shrimp, chow mein and chop suey. "We had all that kind of stuff," Wong recalls. "Give the people what they want!"

Wong remembers hearing ethnic slurs as he grew up, which he got used to ignoring. But the family felt more pernicious discrimination, too. Wong remembers a time of big excitement when he was young: The family was finally going to get to move out of the cramped grocery store. His father had found a house he wanted to buy, in a white neighborhood.

Then suddenly, that conversation stopped. There would be no deal. Later, his father told him that the white residents had made it quite clear they didn't want Chinese in their neighborhood.

"When people found out that we were moving," Wong says, "they started throwing bottles ... in the driveway. Glass everywhere. And we knew it had to be directed at us. Father told me he didn't want to subject us [to that]. Somebody might get hurt."

Many of the Chinese groceries in Greenville, Miss., have long since closed, but several are still in operation. (Elissa Nadworny/NPR)

The family ended up building a house directly behind the grocery store. "But at least it was a house!" Wong says, laughing. "We'd never lived in a house!"

We hop in the car to see what's left of Greenville's Chinese grocery tradition.

As we drive, Wong points to one battered building after another: "There was a Chinese grocery store right here. Right here was another grocery store, right on this corner."

Most have long since closed, but the store Raymond's family ran is still going, under different ownership. It's now the Kim Ma grocery store, run by Cindy and Danny Ma, selling chips, soda, beer and cigarillos to a steady stream of customers.

Cindy Ma tells me business is slow, as a lot of people have moved away from Greenville. Still, with this business, the Mas have managed to put their two sons through college and graduate school. One son is in medical school in Jackson, Miss.; the other is studying accounting at Ole Miss in Oxford.

Cindy Ma (top) and her husband now run the store once owned by Raymond Wong's parents in Greenville, Miss. (Elissa Nadworny/NPR)

That's been the story of many Delta Chinese: Work hard. Send your kids to college. Watch them move away.

We hear that same narrative 70 miles north of Greenville, in Clarksdale, Miss. We've come to the home of Gilroy and Sally Chow, who greet us enthusiastically at the front door. "Come on in!" Gilroy says. "It's your lucky day! This is comfort food day!"

Every week, the Chows get together with friends and relatives to try to recreate the dishes they remember their mothers making when they were young. They're attempting to summon up flavors they fear are getting lost. Sally Chow wonders out loud, "Why didn't I ask mom how she did that?"

Sally teaches special education and has a cake-baking business with her sister-in-law, Alice. Gilroy is a former industrial engineer who worked for NASA for seven years.

United in their Chinese heritage, the Chows are divided by their passionate school loyalties. Sally and the Chows' daughter Lisa went to the University of Mississippi, or Ole Miss. Gilroy and their son Bradley went to Mississippi State.

The family even has a football changing-of-the-flag ritual. Whoever wins the Egg Bowl each year gets to fly their school's flag on top, right in front of the Chows' house. They all march out front, and the loser has to sing the fight song of the other school.

This year, MSU's flag is on top: "Very sad," Sally notes.

"When the Rebels win," she says (that's her school, Ole Miss), "we come out the front and we play 'From Dixie with Love' very slow, and we sway."

Sally and Gilroy Chow in their home in Clarksdale, Miss. (Elissa Nadworny/NPR)

But let's get back to dinner. As the group gets busy chopping and sauteing in the kitchen, Gilroy heads outside and starts tossing fried rice in a gigantic wok nestled into a super-hot, custom burner stand.

He tosses in some cubed ham: "This is what makes it Southern fried rice!" he says.

Before long, an impressive feast is laid out before us: beef with cauliflower. Whole fish garnished with fried ginger. Spare ribs with carrots and potatoes. Roast pork with a honey-hoisin glaze, and much more. The flavors of their youth.

After Gilroy says grace, we settle in around the dining room table.

All of the eight people at dinner are the first generation born in America. Like Raymond Wong, they are all the children of grocery store owners.

And they all grew up speaking Chinese at home. Sally's brother Sammy tells us, "When I first started school I had a difficult time in the first grade, because I couldn't speak English."

But now few around this table can speak Chinese.

Everyone here describes a common experience. When they travel, jaws drop as soon as people realize they're Chinese. And from Mississippi.

"They ask you, 'What are you doing there!'", says Frieda Quon, who has the thickest, most syrupy southern drawl of all the group. "I guess they just have this idea that it is black and white."

(Top left) Gilroy Chow makes fried rice in the family's wok in the driveway of their home in Clarksdale, Miss. (Top right) All the meat served was cut up "bite sized" so that guests wouldn't need a knife at the table, explains Gilroy. (Bottom) From left: Sally Chow, Gilroy and Alice Chow serve dinner. (Elissa Nadworny/NPR)

"The Chinese face with a Southern accent throws people off," Jean Maskas chimes in. "I was at my daughter's school, and we'd taken some friends out to eat, and they all said, 'I just can't get used to talking to your mother! It's like an identity theft!'" The others chuckle knowingly.

Quon says the more she's traveled, the more she's come to realize how unique this Mississippi Chinese community is.

"We are all connected," she says. "The other states are not like that, truly. We knew Chinese from Memphis to Vicksburg."

As outsiders, they stuck together.

They all remember driving for miles to dances that would draw Chinese young people from all over. "They were infamous!" someone says, drawing big laughs around the table.

Their children's generation doesn't have that. They're more assimilated, more accepted.

And their future? It's probably not in the Delta:

Retired pharmacist Sammy Chow remembers the question his son asked when he was still in high school: "'Dad, do you want me to take over the drugstore when you retire?'"

Sammy's response was immediate: "I said, 'No. I want you to do better than me.'" His son, Matthew, is now a dentist in Clinton, Miss.

"I think all of this generation realize that the opportunities are not here," adds his sister, Sally.

Fields near Greenville, Miss., a small city along the Mississippi River where Chinese immigrants have come for more than a century. (Elissa Nadworny/NPR)

Gilroy Chow figures that the Chinese population in the Mississippi Delta has shrunk from 2,500 at its peak in the mid 1970s to about 500 now.

"In these small towns, the population is dwindling," says Sandra Chow, Sammy's wife. "For these children who've been educated ... a lot of them want what's in the big cities. Lots of things to do, and things for their families to grow up doing."

"But," she concludes, "I don't think it bothers any of us. We're happy that our children are doing well and enjoying life, and experiencing a lot of things that we didn't get to experience because of being in small towns."

Dinner over and dishes washed, there's an important challenge still to come. The group is trying to master making steamed buns: working late into the night to keep the old traditions alive.

The "Our Land" series is produced by Elissa Nadworny.
