
Reddit is adding native, auto-playing video ads


We’re rolling out native video ads to all platforms starting next week. Some background on Reddit video: we first launched Reddit video hosting in August 2017, and since then it’s become core to the Reddit experience. About a year after we launched native video, it’s now the dominant way people view and post video content (Reddit users have a natural affinity for organic native video), and that presents a huge opportunity for advertisers. By the numbers:

  • More than 2x video views, growing 23% each month since the start of 2018.
  • We’re now averaging more than 5 million minutes of views per day. This doesn’t account for spikes in viral content, which we’ve seen jump to ~100M in a day.
  • Since launching, videos uploaded via our native player receive twice as many views as YouTube videos on Reddit.
  • Native video has taken off in a variety of communities and now accounts for as much as 20% of content in major communities such as r/oddlysatisfying, r/aww and r/FortniteBR.

So what does this mean for advertisers?

There’s a massive opportunity with video ads on Reddit, which we’ve been testing, refining and improving alongside several key partners across the gaming, entertainment and auto industries. In fact, we’ve seen engagement and performance lifts up to 5x when compared to non-video campaigns.

Today, we’re introducing a new video ad product with improved engagement, targeting, performance-based bidding and measurement capabilities. The native video ads product is a big step in our effort to optimize the redesign for both users and advertisers. It marries the best of the redesign for advertisers (card view) with the updates our users are most excited about (video hosting).

Specifically, video ads are now:

  • Available on a cost per view (CPV) basis
    • Previously, Reddit Ads were purchased on a CPM basis
  • Available for video-only campaigns, which means advertisers can now target card view only in the redesign and mobile apps
  • Fully native across platforms so you can leverage standard Reddit engagement characteristics (upvotes, downvotes and more!)
  • Autoplaying in-feed
  • Displayed in the redesign’s expanded card view format, which makes them better looking and more engaging

Video ads will start rolling out today, Tuesday, June 12, and will be available to all advertisers over the course of the summer. To learn more about our advertising offerings, read case studies, and get started, check out our home page for Reddit Advertising.


Show HN: Fo: An experimental language which adds generics on top of Go



Fo is an experimental language which adds functional programming features to Go. The name is short for "Functional Go".


Background

Go already supports many features that functional programmers might want: closures, first-class functions, errors as values, etc. The main feature (and in fact only feature for now) that Fo adds is type polymorphism via generics. Generics encourage functional programming techniques by making it possible to write flexible higher-order functions and type-agnostic data structures.

At this time, Fo should be thought of primarily as an experiment or proof of concept. It shows what Go looks like and feels like with some new language features and allows us to explore how those features interact and what you can build with them. The syntax and implementation are not finalized. The language is far from appropriate for use in real applications, though that will hopefully change in the future.

Playground

If you want to give Fo a try without installing anything, you can visit The Fo Playground.

Installation

The Fo compiler is written in Go, so you can install it like any other Go program:

go get -u github.com/albrow/fo

Command Line Usage

For now, the CLI for Fo is extremely simple and only works on one file at a time. There is only one command, run, and it works like this:

fo run <filename>

<filename> should be a source file ending in .fo which contains a main function.

Examples

You can see some example programs showing off various features of the language in the examples directory of this repository.

Language Features

In terms of syntax and semantics, Fo is a superset of Go. That means that any valid Go program is also a valid Fo program (provided you change the file extension from ".go" to ".fo").

Generic Named Types

Declaration

Fo extends the Go grammar for type declarations to allow the declaration of generic types. Generic types expect one or more "type parameters", which are placeholders for arbitrary types to be specified later.

The extended grammar looks like this (some definitions omitted/simplified):

TypeDecl   = "type" identifier [ TypeParams ] Type .
TypeParams = "[" identifier { "," identifier } "]" .

In other words, type parameters should follow the type name and are surrounded by square brackets. Multiple type parameters are separated by a comma (e.g., type A[T, U, V]).

Here's the syntax for a generic Box which can hold a value of any arbitrary type:

type Box[T] struct {
	v T
}

Type parameters are scoped to the type definition and can be used in place of any type. The following are all examples of valid type declarations:

type A[T] []T
type B[T, U] map[T]U
type C[T, U] func(T) U
type D[T] struct {
	a T
	b A[T]
}

In general, any named type can be made generic. The only exception is that Fo does not currently allow generic interface types.

Usage

When using a generic type, you must supply a number of "type arguments", which are specific types that will take the place of the type parameters in the type definition. The combination of a generic type name and its corresponding type arguments is called a "type argument expression". The grammar looks like this (some definitions omitted/simplified):

TypeArgExpr = Type TypeArgs .
TypeArgs    = "[" Type { "," Type } "]" .

Like type parameters, type arguments follow the type name and are surrounded by square brackets, and multiple type arguments are separated by a comma (e.g., A[string, int, bool]). In general, type argument expressions can be used anywhere you would normally use a type.

Here's how we would use the Box type we declared above to initialize a Box which holds a string value:

x := Box[string]{v: "foo"}

Fo does not currently support inference of type arguments, so they must always be specified.

Generic Functions

Declaration

The syntax for declaring a generic function is similar to named types. The grammar looks like this:

FunctionDecl = "func" FunctionName [ TypeParams ] Signature [ FunctionBody ] .
TypeParams   = "[" identifier { "," identifier } "]" .

As you might expect, type parameters follow the function name. Both the function signature and body can make use of the given type parameters, and the type parameters will be replaced with type arguments when the function is used.

Here's how you would declare a MapSlice function which applies a given function f to each element of list and returns the results.

func MapSlice[T](f func(T) T, list []T) []T {
	result := make([]T, len(list))
	for i, val := range list {
		result[i] = f(val)
	}
	return result
}

Usage

Just like generic named types, to use a generic function, you must supply the type arguments. If you actually want to call the function, just add the function arguments after the type arguments. The grammar looks like this (some definitions omitted/simplified):

CallExpr = FunctionName ( TypeArgs ) Arguments .
TypeArgs = "[" Type { "," Type } "]" .

Here's how you would call the MapSlice function we defined above:

func incr(n int) int {
	return n + 1
}

// ...

MapSlice[int](incr, []int{1, 2, 3})

Generic Methods

Declaration

Fo supports a special syntax for methods with a generic receiver type. You can optionally include the type parameters of the receiver type, and those type parameters can be used in the function signature and body. Whenever a generic method of this form is called, the type arguments of the receiver type are passed through to the method definition.

The grammar for methods with generic receiver types looks like this (some definitions omitted/simplified):

MethodDecl = "func" Receiver MethodName [ TypeParams ] Signature [ FunctionBody ] .
Receiver   = "(" [ ReceiverName ] Type [ TypeParams ] ")" .
TypeParams = "[" identifier { "," identifier } "]" .

Here's how we would define a method on the Box type defined above which makes use of receiver type parameters:

type Box[T] struct {
	v T
}

func (b Box[T]) Val() T {
	return b.v
}

You can also omit the type parameters of the receiver type if they are not needed. For example, here's how we would define a String method which does not depend on the type parameters of the Box:

func (b Box) String() string {
	return fmt.Sprint(b.v)
}

A method with a generic receiver can define additional type parameters, just like a function. Here's an example of a method on Box which requires additional type parameters.

func (b Box[T]) Map[U](f func(T) U) Box[U] {
	return Box[U]{
		v: f(b.v),
	}
}

Usage

When calling methods with a generic receiver type, you do not need to specify the type arguments of the receiver. Here's an example of calling the Val method we defined above:

x := Box[string]{v: "foo"}
x.Val()

However, if the method declaration includes additional type parameters, you still need to specify them. Here's how we would call the Map function defined above to convert a Box[int] to a Box[string].

y := Box[int]{v: 42}
z := y.Map[string](strconv.Itoa)

Why Threads Are a Bad Idea (1995) [pdf]

Judge Rules AT&T Can Acquire Time Warner


WASHINGTON—A federal judge ruled Tuesday that AT&T Inc. can proceed with its planned acquisition of Time Warner Inc., rejecting the Justice Department’s allegations that the deal would suppress competition in the pay-TV industry.

U.S. District Judge Richard Leon announced his decision in a packed courtroom, ruling that antitrust enforcers...

AT&T Cleared by Judge to Buy Time Warner, Create Media Giant


AT&T Inc. was cleared by a judge to take over Time Warner Inc. in an $85 billion deal that the mobile-phone giant says will fuel its evolution into a media powerhouse that can go head-to-head with Netflix Inc. and Amazon.com Inc.

U.S. District Judge Richard Leon on Tuesday rejected the Justice Department’s request for an order blocking the Time Warner acquisition, saying the government failed to make its case that the combination would lead to higher prices for pay-TV subscribers. The judge put no conditions on the deal.

AT&T slipped 1.6 percent to $33.80 in extended trading at 4:46 p.m. in New York. Time Warner jumped 4.7 percent.

After nearly two years, AT&T is on the cusp of completing its acquisition of Time Warner, a deal it struck in a bid to become an entertainment giant that can feed Time Warner programming like HBO and CNN to its 119 million mobile, internet and video customers.

"We think the evidence throughout the trial was quite clear and we’re very pleased that the court saw it the same way," said Daniel Petrocelli, AT&T’s lawyer.

Under an agreement with the government, AT&T can close the deal in six days, unless the Justice Department appeals the decision and wins an emergency stay. Leon said he wouldn’t grant a stay if the government requested one.


Leon’s decision is a blow to Makan Delrahim, the head of the department’s antitrust division, who brought in a new enforcement approach with the case. The government’s November lawsuit was also the first major merger challenge under President Donald Trump, who railed against the tie-up when it was announced during the 2016 campaign. He vowed that his administration would oppose it, and as president, he has relentlessly attacked CNN for its news coverage.

Antitrust Theory

His criticism prompted speculation that the lawsuit was politically motivated. Still, the Justice Department’s case laid out a traditional antitrust theory: that combining two companies in different parts of a supply chain can give the merged company the ability to harm rivals.

The suit stunned investors and antitrust lawyers because it broke with years of past practice for reviewing such deals, known as vertical mergers. Rather than negotiating an agreement that imposes conditions on how AT&T can conduct business, Delrahim demanded AT&T sell businesses to address threats to competition, which the company refused to do.

After Delrahim took over the division, he announced that the department would require asset sales to remedy harm to competition from vertical deals. Leon’s ruling raises the question of whether Delrahim can successfully maintain that stance.

The Justice Department claimed that AT&T’s acquisition of Time Warner would give the No. 1 pay-TV provider increased bargaining leverage over rivals like Dish Network Corp. that pay for Time Warner programming.

Harder Bargain

Because of AT&T’s ownership of DirecTV, it can drive a harder bargain with other distributors that want Time Warner content, the government’s lawyers argued during the trial. If negotiations break down and rivals can’t secure that programming, their customers could switch to DirecTV, the lawyers said. That leverage would allow AT&T to raise prices for Time Warner content, with those costs being passed on to consumers, according to the Justice Department.

The government’s case hinged on an economic model produced by Carl Shapiro, an economist at the University of California at Berkeley, who predicted an annual price increase to consumers of at least $285 million. AT&T attacked that projection as baseless, repeatedly poking holes in the various inputs Shapiro used to calculate the estimate.

The judge indicated during the trial that he wasn’t buying Shapiro’s projection. After his testimony, Leon said he was "confused." Further explanation from Shapiro didn’t help.

"I’m not sure I got it, but it’s too late and too hot to belabor the point any further," the judge said.

— With assistance by Greg Stohr, Tom Schoenberg, and Jeran Wittenstein


FreeBSD gets pNFS support

Log Message:
Merge the pNFS server code from projects/pnfs-planb-server into head.

This code merge adds a pNFS service to the NFSv4.1 server. Although it is
a large commit it should not affect behaviour for a non-pNFS NFS server.
Some documentation on how this works can be found at:
http://people.freebsd.org/~rmacklem/pnfs-planb-setup.txt
and will hopefully be turned into a proper document soon.
This is a merge of the kernel code. Userland and man page changes will
come soon, once the dust settles on this merge.
It has passed a "make universe", so I hope it will not cause build problems.
It also adds NFSv4.1 server support for the "current stateid".

Here is a brief overview of the pNFS service:
A pNFS service separates the Read/Write operations from all the other NFSv4.1
Metadata operations. It is hoped that this separation allows a pNFS service
to be configured that exceeds the limits of a single NFS server for either
storage capacity and/or I/O bandwidth.
It is possible to configure mirroring within the data servers (DSs) so that
the data storage file for an MDS file will be mirrored on two or more of
the DSs.
When this is used, failure of a DS will not stop the pNFS service and a
failed DS can be recovered once repaired while the pNFS service continues
to operate.  Although two way mirroring would be the norm, it is possible
to set a mirroring level of up to four or the number of DSs, whichever is
less.
The Metadata server will always be a single point of failure,
just as a single NFS server is.

A Plan B pNFS service consists of a single MetaData Server (MDS) and K
Data Servers (DS), all of which are recent FreeBSD systems.
Clients will mount the MDS as they would a single NFS server.
When files are created, the MDS creates a file tree identical to what a
single NFS server creates, except that all the regular (VREG) files will
be empty. As such, if you look at the exported tree on the MDS directly
on the MDS server (not via an NFS mount), the files will all be of size 0.
Each of these files will also have two extended attributes in the system
attribute name space:
pnfsd.dsfile - This extended attribute stores the information that
    the MDS needs to find the data storage file(s) on DS(s) for this file.
pnfsd.dsattr - This extended attribute stores the Size, AccessTime, ModifyTime
    and Change attributes for the file, so that the MDS doesn't need to
    acquire the attributes from the DS for every Getattr operation.
For each regular (VREG) file, the MDS creates a data storage file on one
(or more if mirroring is enabled) of the DSs in one of the "dsNN"
subdirectories.  The name of this file is the file handle
of the file on the MDS in hexadecimal so that the name is unique.
The DSs use subdirectories named "ds0" to "dsN" so that no one directory
gets too large. The value of "N" is set via the sysctl vfs.nfsd.dsdirsize
on the MDS, with the default being 20.
For production servers that will store a lot of files, this value should
probably be much larger.
It can be increased when the "nfsd" daemon is not running on the MDS,
once the "dsK" directories are created.

For pNFS aware NFSv4.1 clients, the FreeBSD server will return two pieces
of information to the client that allows it to do I/O directly to the DS.
DeviceInfo - This is relatively static information that defines what a DS
             is. The critical bits of information returned by the FreeBSD
             server is the IP address of the DS and, for the Flexible
             File layout, that NFSv4.1 is to be used and that it is
             "tightly coupled".
             There is a "deviceid" which identifies the DeviceInfo.
Layout     - This is per file and can be recalled by the server when it
             is no longer valid. For the FreeBSD server, there is support
             for two types of layout, called File and Flexible File layout.
             Both allow the client to do I/O on the DS via NFSv4.1 I/O
             operations. The Flexible File layout is a more recent variant
             that allows specification of mirrors, where the client is
             expected to do writes to all mirrors to maintain them in a
             consistent state. The Flexible File layout also allows the
             client to report I/O errors for a DS back to the MDS.
             The Flexible File layout supports two variants referred to as
             "tightly coupled" vs "loosely coupled". The FreeBSD server always
             uses the "tightly coupled" variant where the client uses the
             same credentials to do I/O on the DS as it would on the MDS.
             For the "loosely coupled" variant, the layout specifies a
             synthetic user/group that the client uses to do I/O on the DS.
             The FreeBSD server does not do striping and always returns
             layouts for the entire file. The critical information in a layout
             is Read vs Read/Write and DeviceID(s) that identify which
             DS(s) the data is stored on.

At this time, the MDS generates File Layout layouts to NFSv4.1 clients
that know how to do pNFS for the non-mirrored DS case unless the sysctl
vfs.nfsd.default_flexfile is set non-zero, in which case Flexible File
layouts are generated.
The mirrored DS configuration always generates Flexible File layouts.
For NFS clients that do not support NFSv4.1 pNFS, all I/O operations
are done against the MDS which acts as a proxy for the appropriate DS(s).
When the MDS receives an I/O RPC, it will do the RPC on the DS as a proxy.
If the DS is on the same machine, the MDS/DS will do the RPC on the DS as
a proxy and so on, until the machine runs out of some resource, such as
session slots or mbufs.
As such, DSs must be separate systems from the MDS.

Tested by:	james.rose@framestore.com
Relnotes:	yes

AT&T to Acquire Time Warner

  • New company with complementary strengths to lead the next wave of innovation in converging media and communications industry.
    -Combination unlike any other — the world’s best premium content with the networks to deliver it to every screen, however customers want it
    -The future of video is mobile and the future of mobile is video
    -Time Warner is a global leader in creating premium content, has the largest film/TV studio in world and an unrivaled library of entertainment
    -AT&T has unmatched direct-to-customer distribution across TV, mobile and broadband in the U.S., mobile in Mexico and TV in Latin America
  • Combined company positioned to create new customer choices — from content creation and distribution to a mobile-first experience that’s personal and social
    -Goal is to give customers unmatched choice, quality, value and experiences that will define the future of media and communications
    -Customer insights across TV, mobile and broadband will allow new company to: offer more relevant and valuable addressable advertising; innovate with ad-supported content models; better inform content creation; and make OTT and TV Everywhere products smarter and more personalized
  • Acquisition provides significant financial benefits
    -Accretive to AT&T in the first year after close on adjusted EPS & free cash flow per share basis
    -Improves AT&T’s dividend coverage
    -Improves AT&T’s revenue and earnings growth profile
    -Diversifies AT&T’s revenue mix and lowers capital intensity
    -Committed to strong balance sheet and maintaining investment-grade credit metrics
  • Delivers significant benefits for customers
    -Stronger competitive alternative to cable & other video providers
    -Provides better value, more choices, enhanced customer experience for over-the-top and mobile viewing 
    -More innovation with ad-supported models that shift more cost of content creation from customers to advertisers

DALLAS and NEW YORK CITY, Oct. 22, 2016 — AT&T Inc. (NYSE:T) and Time Warner Inc. (NYSE:TWX) today announced they have entered into a definitive agreement under which AT&T will acquire Time Warner in a stock-and-cash transaction valued at $107.50 per share. The agreement has been approved unanimously by the boards of directors of both companies.

The deal combines Time Warner's vast library of content and ability to create new premium content that connects with audiences around the world, with AT&T's extensive customer relationships, world’s largest pay TV subscriber base and leading scale in TV, mobile and broadband distribution.

“This is a perfect match of two companies with complementary strengths who can bring a fresh approach to how the media and communications industry works for customers, content creators, distributors and advertisers,” said Randall Stephenson, AT&T chairman and CEO. “Premium content always wins. It has been true on the big screen, the TV screen and now it’s proving true on the mobile screen. We’ll have the world’s best premium content with the networks to deliver it to every screen. A big customer pain point is paying for content once but not being able to access it on any device, anywhere. Our goal is to solve that.  We intend to give customers unmatched choice, quality, value and experiences that will define the future of media and communications.

“With great content, you can build truly differentiated video services, whether it’s traditional TV, OTT or mobile. Our TV, mobile and broadband distribution and direct customer relationships provide unique insights from which we can offer addressable advertising and better tailor content,” Stephenson said. “It’s an integrated approach and we believe it’s the model that wins over time.

“Time Warner’s leadership, creative talent and content are second to none. Combine that with 100 million plus customers who subscribe to our TV, mobile and broadband services – and you have something really special,” said Stephenson. “It’s a great fit, and it creates immediate and long-term value for our shareholders.”

Time Warner Chairman and CEO Jeff Bewkes said, “This is a great day for Time Warner and its shareholders. Combining with AT&T dramatically accelerates our ability to deliver our great brands and premium content to consumers on a multiplatform basis and to capitalize on the tremendous opportunities created by the growing demand for video content. That’s been one of our most important strategic priorities and we’re already making great progress — both in partnership with our distributors, and on our own by connecting directly with consumers.  Joining forces with AT&T will allow us to innovate even more quickly and create more value for consumers along with all our distribution and marketing partners, and allow us to build on a track record of creative and financial excellence that is second to none in our industry. In fact, when we announce our 3Q earnings, we will report revenue and operating income growth at each of our divisions, as well as double-digit earnings growth.  

Bewkes continued, “This is a natural fit between two companies with great legacies of innovation that have shaped the modern media and communications landscape, and my senior management team and I are looking forward to working closely with Randall and our new colleagues as we begin to capture the tremendous opportunities this creates to make our content even more powerful, engaging and valuable for global audiences.”

Time Warner is a global leader in media and entertainment with a great portfolio of content creation and aggregation, plus iconic brands across video programming and TV/film production. Each of Time Warner’s three divisions is an industry leader: HBO, which consists of domestic premium pay television and streaming services (HBO Now, HBO Go), as well as international premium & basic pay television and streaming services; Warner Bros. Entertainment, which consists of television, feature film, home video and videogame production and distribution. Warner Bros. film franchises include Harry Potter & DC Comics, and its produced TV series include Big Bang Theory and Gotham; Turner consists of U.S. and international basic cable networks, including TNT, TBS, CNN and Cartoon Network/Adult Swim. Also, Turner has the rights to the NBA, March Madness and MLB. Time Warner also has invested in OTT and digital media properties such as Hulu, Bleacher Report, CNN.com and Fandango.

Customer Benefits

The new company will deliver what customers want — enhanced access to premium content on all their devices, new choices for mobile and streaming video services and a stronger competitive alternative to cable TV companies.

With a mobile network that covers more than 315 million people in the United States, the combined company will strive to become the first U.S. mobile provider to compete nationwide with cable companies in the provision of bundled mobile broadband and video. It will disrupt the traditional entertainment model and push the boundaries on mobile content availability for the benefit of customers. And it will deliver more innovation with new forms of original content built for mobile and social, which builds on Time Warner’s HBO Now and the upcoming launch of AT&T’s OTT offering DIRECTV NOW.

Owning content will help AT&T innovate on new advertising options, which, combined with subscriptions, will help pay for the cost of content creation. This two-sided business model — advertising- and subscription-based — gives customers the largest amount of premium content at the best value.

Summary Terms of Transaction    

Time Warner shareholders will receive $107.50 per share under the terms of the merger, comprised of $53.75 per share in cash and $53.75 per share in AT&T stock. The stock portion will be subject to a collar such that Time Warner shareholders will receive 1.437 AT&T shares if AT&T’s average stock price is below $37.411 at closing and 1.3 AT&T shares if AT&T’s average stock price is above $41.349 at closing.

This purchase price implies a total equity value of $85.4 billion and a total transaction value of
$108.7 billion, including Time Warner’s net debt. Post-transaction, Time Warner shareholders will own between 14.4% and 15.7% of AT&T shares on a fully-diluted basis based on the number of AT&T shares outstanding today. 

The cash portion of the purchase price will be financed with new debt and cash on AT&T’s balance sheet. AT&T has an 18-month commitment for an unsecured bridge term facility for $40 billion.

Transaction Will Result in Significant Financial Benefits

AT&T expects the deal to be accretive in the first year after close on both an adjusted EPS and free cash flow per share basis.

AT&T expects $1 billion in annual run rate cost synergies within 3 years of the deal closing. The expected cost synergies are primarily driven by corporate and procurement expenditures. In addition, over time, AT&T expects to achieve incremental revenue opportunities that neither company could obtain on a standalone basis.

Given the structure of this transaction, which includes AT&T stock consideration as part of the deal, AT&T expects to continue to maintain a strong balance sheet following the transaction close and is committed to maintaining strong investment-grade credit metrics.

By the end of the first year after close, AT&T expects net debt to adjusted EBITDA to be in the 2.5x range.

Additionally, AT&T expects the deal to improve its dividend coverage and enhance its revenue and earnings growth profile.

Time Warner provides AT&T with significant diversification benefits:

  • Diversified revenue mix — Time Warner will represent about 15% of the combined company’s revenues, offering diversification from content and from outside the United States, including Latin America, where Time Warner owns a majority stake in HBO Latin America, an OTT service available in 24 countries, and AT&T is the leading pay TV distributor.
  • Lower capital intensity — Time Warner’s business requires little in capital expenditures, which helps balance the higher capital intensity of AT&T’s existing business. 
  • Regulation — Time Warner’s business is lightly regulated compared to much of AT&T’s existing operations.

The merger is subject to approval by Time Warner Inc. shareholders and review by the U.S. Department of Justice.  AT&T and Time Warner are currently determining which FCC licenses, if any, will be transferred to AT&T in connection with the transaction. To the extent that one or more licenses are to be transferred, those transfers are subject to FCC review. The transaction is expected to close before year-end 2017.

Conference Call/Webcast

On Monday, October 24, at 8:30 am ET, AT&T and Time Warner will host a webcast presentation to discuss the transaction and AT&T’s 3Q earnings. Links to the webcast and accompanying documents will be available on both AT&T’s and Time Warner’s Investor Relations websites. AT&T has cancelled its previously scheduled call to discuss earnings, which had been set for Tuesday, October 25.

About AT&T

AT&T Inc. (NYSE:T) helps millions around the globe connect with leading entertainment, mobile, high-speed Internet and voice services. We’re the world’s largest provider of pay TV. We have TV customers in the U.S. and 11 Latin American countries. We offer the best global coverage of any U.S. mobile provider*. And we help businesses worldwide serve their customers better with our mobility and highly secure cloud solutions.

About Time Warner Inc.

Time Warner Inc. (NYSE:TWX), a global leader in media and entertainment with businesses in television networks and film and TV entertainment, uses its industry-leading operating scale and brands to create, package and deliver high-quality content worldwide on a multi-platform basis.

Cautionary Language Concerning Forward-Looking Statements

Information set forth in this communication, including financial estimates and statements as to the expected timing, completion and effects of the proposed merger between AT&T and Time Warner, constitute forward-looking statements within the meaning of the safe harbor provisions of the Private Securities Litigation Reform Act of 1995 and the rules, regulations and releases of the Securities and Exchange Commission.  These forward-looking statements are subject to risks and uncertainties, and actual results might differ materially from those discussed in, or implied by, the forward-looking statements. Such forward-looking statements include, but are not limited to, statements about the benefits of the merger, including future financial and operating results, the combined company’s plans, objectives, expectations and intentions, and other statements that are not historical facts. Such statements are based upon the current beliefs and expectations of the management of AT&T and Time Warner and are subject to significant risks and uncertainties outside of our control.

Among the risks and uncertainties that could cause actual results to differ from those described in the forward-looking statements are the following: (1) the occurrence of any event, change or other circumstances that could give rise to the termination of the merger agreement, (2) the risk that Time Warner stockholders may not adopt the merger agreement, (3) the risk that the necessary regulatory approvals may not be obtained or may be obtained subject to conditions that are not anticipated, (4) risks that any of the closing conditions to the proposed merger may not be satisfied in a timely manner, (5) risks related to disruption of management time from ongoing business operations due to the proposed merger, (6) failure to realize the benefits expected from the proposed merger and (7) the effect of the announcement of the proposed merger on the ability of Time Warner and AT&T to retain customers and retain and hire key personnel and maintain relationships with their suppliers, and on their operating results and businesses generally. Discussions of additional risks and uncertainties are and will be contained in AT&T’s and Time Warner’s filings with the Securities and Exchange Commission. Neither AT&T nor Time Warner is under any obligation, and each expressly disclaim any obligation, to update, alter, or otherwise revise any forward-looking statements, whether written or oral, that may be made from time to time, whether as a result of new information, future events, or otherwise.  Persons reading this communication are cautioned not to place undue reliance on these forward-looking statements which speak only as of the date hereof.

No Offer or Solicitation

This communication does not constitute an offer to sell or the solicitation of an offer to buy any securities or a solicitation of any vote or approval, nor shall there be any sale of securities in any jurisdiction in which such offer, solicitation or sale would be unlawful prior to registration or qualification under the securities laws of any such jurisdiction.  No offer of securities shall be made except by means of a prospectus meeting the requirements of Section 10 of the Securities Act of 1933, as amended. 

Additional Information and Where to Find It

In connection with the proposed merger, AT&T has filed a registration statement on Form S-4, containing a proxy statement/prospectus with the Securities and Exchange Commission (“SEC”).  AT&T and Time Warner have made the proxy statement/prospectus available to their respective stockholders and AT&T and Time Warner will file other documents regarding the proposed merger with the SEC.  This communication is not intended to be, and is not, a substitute for such filings or for any other document that AT&T or Time Warner may file with the SEC in connection with the proposed merger.  STOCKHOLDERS OF TIME WARNER ARE URGED TO READ ALL RELEVANT DOCUMENTS FILED WITH THE SEC, INCLUDING THE REGISTRATION STATEMENT AND THE PROXY STATEMENT/PROSPECTUS, CAREFULLY BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION ABOUT AT&T, TIME WARNER AND THE PROPOSED MERGER.  Investors and security holders are able to obtain copies of the proxy statement/prospectus as well as other filings containing information about AT&T and Time Warner, without charge, at the SEC’s website, http://www.sec.gov.  Copies of documents filed with the SEC by AT&T will be made available free of charge on AT&T’s investor relations website at http://phx.corporate-ir.net/phoenix.zhtml?c=113088&p=irol-sec. Copies of documents filed with the SEC by Time Warner will be made available free of charge on Time Warner’s investor relations website at http://ir.timewarner.com/phoenix.zhtml?c=70972&p=irol-sec.

Participants in Solicitation

AT&T, Time Warner and certain of their respective directors and executive officers and other members of management and employees may be deemed to be participants in the solicitation of proxies from the holders of Time Warner common stock in respect to the proposed merger. Information about the directors and executive officers of AT&T is set forth in the proxy statement for AT&T’s 2016 Annual Meeting of Stockholders, which was filed with the SEC on March 11, 2016. Information about the directors and executive officers of Time Warner is set forth in the proxy statement for Time Warner’s 2016 Annual Meeting of Stockholders, which was filed with the SEC on May 19, 2016. Investors may obtain additional information regarding the interest of such participants by reading the proxy statement/prospectus regarding the proposed merger and other relevant materials filed with the SEC.  These documents will be available free of charge from the sources indicated above.

Vice Media: A Company Built on a Bluff


The Document Base URL Element


The HTML <base> element specifies the base URL to use for all relative URLs contained within a document. There can be only one <base> element in a document. 

The base URL of a document can be queried from a script using document.baseURI.
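
As a minimal sketch (assuming a page whose <base> element points at http://www.example.com/page.html, as in the Examples section below), a script can read the effective base URL and resolve relative URLs against it:

// Read the document's effective base URL; it reflects the <base href>,
// if one is present, and otherwise falls back to the document's own URL.
const base: string = document.baseURI;

// Resolve a relative reference the same way the browser would for links,
// images and other relative URLs in the document.
const resolved: string = new URL("images/logo.png", base).href;

console.log(base);     // "http://www.example.com/page.html" (assumed <base href>)
console.log(resolved); // "http://www.example.com/images/logo.png"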

Attributes

This element's attributes include the global attributes.

href
The base URL to be used throughout the document for relative URL addresses. If this attribute is specified, this element must come before any other elements with attributes whose values are URLs. Absolute and relative URLs are allowed.
target
A name or keyword indicating the default location to display the result when hyperlinks or forms cause navigation, for elements that do not have an explicit target reference. It is a name of, or keyword for, a browsing context (for example: tab, window, or inline frame). The following keywords have special meanings:
  • _self: Load the result into the same browsing context as the current one. This value is the default if the attribute is not specified.
  • _blank: Load the result into a new unnamed browsing context.
  • _parent: Load the result into the parent browsing context of the current one. If there is no parent, this option behaves the same way as _self.
  • _top: Load the result into the top-level browsing context (that is, the browsing context that is an ancestor of the current one, and has no parent). If there is no parent, this option behaves the same way as _self.

Usage notes

If multiple <base> elements are specified, only the first href and first target value are used; all others are ignored.

Examples

<base href="http://www.example.com/page.html">
<base target="_blank" href="http://www.example.com/page.html">

Browser compatibility

The <base> element, along with its href and target attributes, has basic support in all major desktop browsers (Chrome, Edge, Firefox, Internet Explorer, Opera, Safari) and mobile browsers (Android webview, Chrome for Android, Edge mobile, Firefox for Android, Opera Android, iOS Safari, Samsung Internet). See note 1 below for an Internet Explorer caveat.

1. Before Internet Explorer 7, <base> can be positioned anywhere in the document and the nearest value of <base> is used.

Hint

  • An in-page anchor link, e.g. <a href="#anchor">anchor</a>, is resolved using the base URL as a reference and triggers an HTTP request to the base URL.

    Example:

    The base url:
    <base href="http://www.example.com/">

    The anchor:
    <a href="#anchor">anchor</a>

    Refers to:
    http://www.example.com/#anchor

OpenGraph

OpenGraph meta-tags do not acknowledge base-url and should always have full URLs. For example:

<meta property='og:image' content='http://example.com/thumbnail.jpg'>

The rise and fall of Sugru


Crowdfunding a startup is a risky business. Last month FormFormForm Ltd (FFF) — the British firm behind a moldable material called Sugru — disappointed investors after it sold out to German adhesives specialists Tesa for a knock-down price of £7.6 million. That’s almost £25.4 million less than it told investors it was worth when they backed it on Crowdcube.

That means the investors are losing 90 per cent of their money – and needless to say, they are pretty bummed.

“I’m very disappointed as the funding literature indicated high growth plans,” says a Crowdcube backer who wishes to remain anonymous. “The marketing material was clearly inadequate and did not highlight the risks and threats especially with the bank funding. Sugru should have advised us months ago.”

This investor lost 91 per cent of his £500 investment. He says he would have invested more had it not been his first time investing on Crowdcube. Asked if he’d use the platform to back startups again, he shakes his head: "Most likely not”.

So why did Sugru come unstuck?

Founded in 2004 by Irish inventor Jane ni Dhulchaointigh, along with James Carrigan and Roger Ashby, FFF is a London-based startup. It launched Sugru, its main product, to online consumers in 2009, and built a web community of over two million makers and DIY enthusiasts around it.

FFF says that its patented material is ideal for making cables last longer, fixing toys, piecing together ceramics, and a million and one other things. The product, which looks and feels a lot like Play Doh, was ranked as one of the best inventions in the world by Time in 2010. FFF raised millions of pounds on Crowdcube to help it expand Sugru (derived from the Irish language word "súgradh" for "play") into stores around the world.

But Sugru’s Crowdcube investors didn’t quite get the return they were hoping for. Many of them found out at the end of last month that they’d lost up to 90 per cent of their initial investment as a result of the German adhesive manufacturing giant Tesa acquisition, and dozens have been expressing their outrage on social media, with some claiming they were misled by the company.

“The challenge of explaining the vast potential of mouldable glue and how it can be relevant to any given consumer has proved to be a lot harder than anyone imagined,” said ni Dhulchaointigh.

Pressure mounts: Clydesdale Bank withdraws £2m

Prior to the second Crowdcube funding round in March 2017, FFF valued itself at £33 million and Crowdcube backers were keen to secure a chunk of equity in what looked like a fast-growing business.

But FFF found itself in financial difficulty at the end of last year after Clydesdale and Yorkshire Bank pulled £2 million in debt funding. The funding, which FFF said was “secured” in its second Crowdcube campaign, was withdrawn in November 2017.

FFF received the first half of a £4 million funding package from Clydesdale and Yorkshire Bank in November 2016 but the bank got cold feet after Sugru failed to take off in the way that FFF wanted it to.

Despite some significant commercial achievements (thriving online sales, a growing presence on Amazon, getting onto shelves in Target), building mainstream awareness for Sugru took much longer and needed much more investment than FFF had planned. “Driving people into store was far more costly than we anticipated,” said ni Dhulchaointigh. “Yes, we were able to fuel spikes in our growth, launching in big store chains, but the critical challenge was driving enough people into store to maintain the ongoing sales needed.”

FFF said it had little choice but to sell the company after Clydesdale and Yorkshire Bank withdrew the second round of debt funding. The Tesa acquisition, which went through on 24 May following Tesa’s initial offer in March, saves FFF and its remaining employees but values shares at 9p each and leaves most of its backers significantly out of pocket.

FFF takes more than £5m from investors

FFF raised £3.39 million on Crowdcube in May 2015 and a further £1.6 million in March 2017. According to The Irish Times, almost half of those who backed FFF on Crowdcube invested less than £100. However, some investors backed the company with larger amounts. The average investment was between £1,000 and £2,000, but some reportedly pledged in the £10,000, £20,000 and £50,000 brackets, and one pledged £1 million, according to Crowdfund Insider.

When it raised its second round of funding on Crowdcube, FFF said they were targeting 50 per cent year on year growth for the next three years, aiming to triple revenues from £4.6 million in 2016 to £13.8 million by 2019. It failed to mention any risks associated with the loan from Clydesdale and Yorkshire Bank.

FFF claims that the 2017 Crowdcube campaign was run with a scaled back business plan in the context of slower than anticipated growth after the 2015 campaign. FFF’s Crowdcube backers have been questioning why the company didn’t inform them that it may have to sell at a knock-down price when it went live with the second crowdfunding campaign. Some believe that FFF was aware of the risks associated with the Clydesdale and Yorkshire Bank loan when it went live with the second campaign.

Sugru’s sticky history


July 2015
FFF raises £3.39 million on Crowdcube and launches Sugru nationwide with Target in the US. It achieves 53 per cent YoY growth.
January 2016
FFF invests in expanding a lab and manufacturing space in London, but it scales back in May 2016 when it cuts ten staff along with overall budgets for the year.
June 2016
In a bid to improve its margins, FFF starts to move Sugru manufacturing operations to Mexico.
August 2016
Sugru launches in South Africa with retailer Massmart. In October 2016, Sugru launches with DIY giant Leroy Merlin in France.
November 2016
FFF secures the first half of £4 million funding package from Clydesdale and Yorkshire Bank.
December 2016
Company growth slows to just 30 per cent YoY. In the same month, FFF makes further redundancies as part of a plan to cut £1 million in marketing, project and overhead costs.
March 2017
A second crowdfunding campaign raises £1.6 million.
June 2017
Sugru launches in Australia and New Zealand. In September 2017 a new family-friendly version of Sugru launches online. A month later, Sugru becomes available in Canada.
November 2017
Clydesdale Bank withdraws £2 million in funding.
December 2017
FFF reveals that growth has slowed to just 20 per cent YoY.

Another Crowdcube backer who wishes to remain anonymous says that even though they accept that businesses fail, “what gets me is that in the last fundraiser little over a year ago they were spouting all sorts about growth, and having secured funding, which clearly wasn’t true. This suggests that they were peddling mistruths simply to secure another £1 million, knowing full well their business was failing”.

One person WIRED spoke to plans to ask the Financial Conduct Authority to investigate why FFF failed to indicate the risks in its second crowdfunding campaign. When WIRED approached the FCA, a spokesperson said they “never confirm or deny if we are or aren’t investigating a firm”.

When asked why Clydesdale Bank decided to pull its funding and when FFF found out that this might happen, FFF said that the bank continued to support the company into the second half of 2017, “but with growing concern as to our ability to fund our losses until we reached break-even”. Then in October 2017 the FFF management were notified that the bank had decided not to extend the second tranche of the loan. “This was mainly due to the fact we had a material breach of our covenants with the bank in September 2017,” FFF explained in a statement.

“Their risk assessment processes concluded we were vulnerable unless we found either a very substantial investor or a buyer for the business,” Sugru’s parent company said. Clydesdale Bank did, apparently, agree to provide further funding in December 2017 – “but with the condition that this was matched by existing shareholders, and with the understanding that the sale of the company would make meaningful progress by end of January 2018,” FFF added.

The finger has also been pointed at Crowdcube, with some questioning whether it did all the necessary due diligence. But a Crowdcube spokesperson said that “FormFormForm’s pitch was fair, clear and not misleading; we take our role as a regulated platform extremely seriously, and we are confident that our due diligence process, as outlined in our publicly available charter, was adhered to.”

A groundbreaking product?

Startups that raise money from the crowd often go out of business and this isn’t the first time crowdfunding investors have been left upset. Drone company Zano raised $3 million only to go bankrupt, for example, and 3D printing company Pirate3D failed after raising $1.5 million on Kickstarter.

In the case of FFF, not everyone appears to be as disappointed with the exit as those who are expressing their outrage on Twitter.

Sugru was also backed by investment firms such as Medra Capital, Antipodean Ventures, and LocalGlobe, and their comments have been much more subdued. Andrew McPhee, a product lead at Snap (Snapchat’s parent company) and a Sugru investor through Antipodean Ventures, says that “as entrepreneurs, the Antipodean founders have experienced every twist and turn of the startup rollercoaster, and we know just how much Sugru wanted to win big for everyone who backed them and believed in them.

“Jane and her team went after a bold bet, in a difficult space,” says McPhee. “They managed to put a groundbreaking product into market and scaled it significantly across the world. We think this should be celebrated.”

Elsewhere, Robin Klein, a venture capital investor and cofounder at LocalGlobe in London, wrote on 18 May in a blog post on Medium that LocalGlobe lost “most” of its investment. “Sugru struggled to realise its commercial ambitions and grow the business in line with targets,” he wrote. “To build mainstream awareness for Sugru, along with mass retail distribution, takes much longer, and requires more investment than any of us imagined.

“Venture capitalists are famous for trumpeting their successful investments (and we’ll continue to do so) – sometimes we need to talk about our failures – especially when they are successes in other ways.”

Nobel Laureates Paul D. Boyer and Jens C. Skou Die at 99


Two laureates who shared the 1997 Nobel Prize in Chemistry, Jens C. Skou and Paul D. Boyer, died less than one week apart, on May 28 and June 2, respectively. Both men were 99.


Credit: Associated Press

Jens Skou in 1997.

Skou discovered the first molecular pump, the ion-transporting enzyme Na+,K+ adenosine triphosphatase. And Boyer determined the mechanism of action of the enzyme adenosine triphosphate synthase. Both discoveries involved adenosine triphosphate (ATP), the universal energy currency of cells.


Credit: Associated Press

Paul Boyer in 1997, after winning the Nobel Prize.

Skou was born in 1918 in Lemvig, Denmark, to a wealthy family of timber and coal merchants. After graduating from high school in 1937, he studied medicine on the advice of a colleague. He earned his M.D. at the University of Copenhagen in 1944, while Germany was still occupying Denmark during World War II. After an internship, he joined Aarhus University in 1947 to study the mechanism of action of local anesthetics. He married Ellen M. Nielsen in 1948 and earned a Ph.D. in physiology at Aarhus University in 1954.

Skou remained at Aarhus for the rest of his career. In 1957, while continuing to study the mechanisms of anesthetic drugs, he discovered Na+,K+ adenosine triphosphatase in crab nerve cell membranes. The enzyme helps maintain proper salt balance across neuronal cell membranes, setting up voltage differences that cause nerve impulses and muscle contractions.

From then on, “my scientific interest shifted from the effect of local anesthetics to active transport of cations,” Skou wrote in his Nobel biography. He retired in 1988 and later shared half of the 1997 Nobel Prize for his 1957 discovery.

During his career, Skou was a strong advocate of non-targeted funding, support for basic research that scientists receive routinely, without having to file laborious applications. “His tireless struggle to tell politicians and the outside world about the importance of non-targeted funding for research has had a huge impact on the research environment,” said Lars Bo-Nielsen, dean of health at Aarhus University. “He has been a cornerstone and a beacon for research.”

Skou is survived by his wife Ellen, their two daughters and sons-in-law, four grandchildren, and one great-grandchild.

Boyer, who shared the 1997 chemistry prize for determining the mechanism of action of ATP synthase, died a few days after Skou. Boyer “had outstanding character,” says David Eisenberg, whom Boyer recruited to the University of California, Los Angeles, in 1968. “Everyone liked him and admired him. He was a modest and thoughtful leader.”

Boyer was born in 1918 in Provo, Utah. “You have proven yourself a most outstanding student,” Provo High School chemistry teacher Rees Bench wrote in Boyer’s yearbook, foreshadowing the Nobel Committee’s judgment. Boyer earned a B.S. degree in chemistry from Brigham Young University in 1939 and married Lyda Whicker that same year.

After earning a Ph.D. in biochemistry at the University of Wisconsin in 1943, Boyer participated in a war-related research project at Stanford University to stabilize serum albumin for battlefield treatment of shock. He then returned to the University of Wisconsin and built a home nearby largely by himself, serving as contractor, plumber, electrician, and carpenter.

Boyer was chair of the Biochemistry Section of the American Chemical Society from 1959 to 1960. In 1963, he moved to UCLA, where he remained for the rest of his career. He was founding director of UCLA’s Molecular Biology Institute, to which he recruited Eisenberg and others. He managed the construction of the institute’s building, which was later named in his honor. And from 1969 to 1970, he served as president of the American Society of Biological Chemists, forerunner of the American Society for Biochemistry & Molecular Biology.

In the 1970s, Boyer developed a set of mechanistic proposals that described how the enzyme ATP synthase converts adenosine diphosphate and inorganic phosphate into ATP. For this work, Boyer shared the other half of the 1997 Nobel Prize with the researcher who experimentally verified his ATP synthase mechanism, John E. Walker of the Medical Research Council Laboratory of Molecular Biology. Boyer donated a majority of his prize money to fund chemistry postdoctoral fellows at UCLA and two other institutions.

Eisenberg recounts that after securing key funding for the Molecular Biology Institute building, in 1973, Lyda and institute faculty members brought music, champagne, flowers, and a 20-foot sign to the airplane door at Los Angeles International Airport to celebrate Boyer’s return from Washington, D.C. But Boyer didn’t disembark and instead showed up behind the group. He had taken an earlier flight. “It was typical of Paul to be in a hurry,” Eisenberg says.

Boyer is survived by his wife Lyda, two daughters, eight grandchildren, and six great-grandchildren. A son, Douglas, died in 2001.

RemoteStorage – An open protocol for per-user storage on the Web


Own your data

Everything in one place – your place. Use a storage account with a provider you trust, or set up your own storage server. Move house whenever you want. It's your data.

Stay in sync

remoteStorage-enabled apps automatically sync your data across all of your devices, from desktop to tablet to smartphone, and maybe even your TV.

Compatibility & choice

Use the same data across different apps. Create a to-do list in one app, and track the time on your tasks in another one. Say goodbye to app-specific data silos.

Go offline

Most remoteStorage-enabled apps come with first-class offline support. Use your apps offline on the go, and automatically sync when you're back online.

Unhosted Architecture

remoteStorage is the first (and currently only) open standard to enable truly unhosted web apps. That means users are in full control of their precious data and where it is stored, while app developers are freed of the burden of hosting, maintaining and protecting a central database.

Traditional Web Apps

Traditional hosted web stack, for example LAMP/.Net/RoR/Django/etc.

Developer hosts app and data,
user controls device.

No-Backend Web Apps

100% client-side app plus CouchDB, Hoodie, Firebase, Parse, Kinto, etc.

Developer provides app and data,
user controls device.

Unhosted Web Apps

100% client-side app plus remoteStorage, Google Drive, Dropbox, etc.

Developer provides app only,
user controls device and data.

remoteStorage Protocol

remoteStorage is a creative combination of existing protocols and standards. It aims to re-use existing technologies as much as possible, adding just a small layer of standardization on top to facilitate its usage for per-user storage with simple permissions and offline-capable data sync.

Discovery: WebFinger

In order for apps to know where to ask for permission and, later, where to sync user data, users give them a user address, much like an e-mail or Jabber/XMPP address. With that address, apps retrieve storage information for the username on that domain/host.

Check out a live example for a 5apps user.
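As a rough illustration, here is what that discovery step might look like in a few lines of TypeScript. The user address, the `includes("remotestorage")` rel check and the returned fields are simplifications; consult the WebFinger and remoteStorage specs for the exact link relations your provider advertises.

```typescript
// Hypothetical sketch: look up the remoteStorage endpoint for a user address.
async function discoverStorage(userAddress: string) {
  const [, host] = userAddress.split("@");
  const url =
    `https://${host}/.well-known/webfinger?resource=acct:${encodeURIComponent(userAddress)}`;

  const res = await fetch(url);
  if (!res.ok) throw new Error(`WebFinger lookup failed: ${res.status}`);
  const jrd = await res.json();

  // The remoteStorage link advertises the storage base URL and the OAuth endpoint.
  const link = (jrd.links ?? []).find((l: any) =>
    typeof l.rel === "string" && l.rel.includes("remotestorage"));
  return link; // { href: storage base URL, properties: { ...auth endpoint... } }
}
```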

Authorization: OAuth 2.0

User data is scoped by so-called categories, which are essentially base directories, for which you can give apps read-only or read/write permission. Apps will use OAuth scopes to ask for access to one or more categories.

In the example screenshot, Litewrite is asking for read/write access to the "documents" category, using the OAuth scope documents:rw. If you allow access, the app will retrieve a bearer token, with which it can read and write to your storage, until you revoke that access on your server.
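A minimal sketch of constructing that authorization request, assuming the auth endpoint was taken from the WebFinger record and that the provider implements the standard OAuth implicit grant; the app origin and redirect URI below are placeholders.

```typescript
// Sketch: send the user to the provider's OAuth dialog asking for read/write
// access to the "documents" category. authEndpoint comes from the WebFinger record.
function buildAuthUrl(authEndpoint: string): string {
  const params = new URLSearchParams({
    client_id: "https://litewrite.example",      // hypothetical app origin
    redirect_uri: "https://litewrite.example/",  // where the bearer token is delivered
    response_type: "token",                      // implicit grant: token in the URL fragment
    scope: "documents:rw",                       // read/write on the "documents" category
  });
  return `${authEndpoint}?${params.toString()}`;
}
```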

Data Storage & Sync: HTTP REST

remoteStorage defines a simple key/value store for apps to save and retrieve data. The basic operations are GET/PUT/DELETE requests for specific files/documents.

In addition to that – and the only special feature aside from plain HTTP – there are directory listings, formatted as JSON-LD. They contain both the content type and size, as well as ETags, which can be used to implement sync mechanisms. The files and listings themselves also carry ETag headers for sync/caching and conditional requests.
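A hedged sketch of those basic operations, assuming `storageBase` and `token` came out of the discovery and authorization steps above; the header names are standard HTTP, but check the remoteStorage spec for the exact conditional-request semantics your server supports.

```typescript
// Sketch: write a document and list a directory on a remoteStorage server.
async function putDocument(storageBase: string, token: string) {
  const res = await fetch(`${storageBase}/documents/notes/todo.txt`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "text/plain",
      "If-None-Match": "*",          // conditional request: create only, don't overwrite
    },
    body: "buy milk",
  });
  return res.headers.get("ETag");    // ETags drive sync and caching
}

async function listDirectory(storageBase: string, token: string) {
  // Directory listings are JSON-LD documents carrying ETags, sizes and content types.
  const res = await fetch(`${storageBase}/documents/notes/`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}
```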

The Strange Case of the Missing Joyce Scholar


I asked him that afternoon, one more time, about the perfect “Ulysses.” It always seems so close. Back in the 1960s, again in the 1980s. What happened to his work in Boston? Why can’t we just publish the thing? A few errors — how hard can it be?

He told me a story, a parable, really. “There are the gauchos and the gauleiters,” he explained. It’s a mixed metaphor, but one that nicely captures his view of the world and of Joyce scholars too. Gauchos, I knew, were Argentine cowboys, but gauleiters (pronounced gow-lieders), I learned, were municipal bureaucrats in the early Nazi government; in other words, menacing apparatchiks.

Across the great landscape of understanding are the gauchos, at once both rugged and audacious. “They roam the pampas,” he told me, taking care of the vast terrain by knowing its vastness intimately. Meanwhile back at the edge of the pampas, in civilization, are the gauleiters. They are everywhere, they are busy, they are overwhelming. The gauchos are few — iconoclasts like himself, or the occasional Joyce fanatic like Jorn Barger, a polymath who in the earliest days of the internet wrote a lot of brilliant Joyce analysis on his weblog (a word he also coined). But, Kidd said, it doesn’t matter. In the end, the victory always goes to the gauleiters because of their peevish concern for “administrative efficiency.”

When I pressed him on real-world specifics, the manuscripts, the work that must have been on disks somewhere, he recalled that, yes, he had assembled a draft of an edition with a complete introduction. One of Kidd’s editors at Norton, Julia Reidhead, confirmed that both existed but said that one delay after another — “an infinite loop of revision” — ran into the legal wall of new copyright extensions, and so Norton “stopped the project.” One Joyce scholar remembers reading the introduction but no longer has a copy, and Kidd doesn’t have one either. Instead, we are left with bizarre relics of what could have been. Early on in the Joyce wars, in fact, Arion Press issued a new edition of “Ulysses” that included some of the preliminary Kidd edits. The book was luxurious, with prints by Robert Motherwell, and only 175 of them were printed. I found one for sale on Amazon. The seller wanted $25,678.75.

In the years after Kidd’s disappearance, an uncanny thing happened. The very book Kidd had tried to shame into disrepute was embraced by the world of scholarship. In 1993, the “Gabler Edition” of “Ulysses,” a bright red tome, appeared on the bookshelves. There are various printings of this book now, and many have no dot at all at the end of the Bloom chapter. No period of any size, which Gabler has said is a printing error — making this nondot an error miscorrected so many times that it is now perfectly invisible.

Gabler’s book thrives because it now has its own captive audience: academics. “Scholars have quietly gone back to Gabler,” said Robert Spoo, a former editor of The James Joyce Quarterly. “By not publishing his own edition, Kidd never completed the argument against Gabler,” he said, adding that the Gabler edition “has one great advantage, you can cite it by line numbers; that is very handy for scholars.” That whole “ ’80s and ’90s thing,” as Spoo called it, receded long ago. “Scholars have made peace with the Gabler.”

In that stretch when the original edition fell out of copyright in the mid-1990s, a lot of editors rushed to publish their own editions. Some have dots, some don’t. Some with “love,” some not. Some editors reversed a selection of Gabler’s changes, some didn’t. Other editions have gone off the rails, as the Joyce scholar Sam Slote told me: One “Ulysses,” currently available online, has a long, weird riff inserted on Page 160, announcing that you will now be reading “The Secret Confessions of a Conservative,” where the anonymous writer explains that his pro-life, pro-death-penalty positions are so consistent that “if an embryo or fetus commits murder, then he should be aborted.”

Out in the distant pampas, meanwhile, the perfect edition remains always close at hand and just out of reach. “I am almosting it,” Stephen Dedalus muttered early on in the novel. The thing is, on Amazon alone, there are nearly a dozen slightly different versions of the novel “as James Joyce wrote it.” None of them are absolutely perfect, but each of them, nevertheless, is “Ulysses.” It’s almost too pat an ending for an author who was asked about all those errors nearly a century ago. “These are not misprints,” he said, “but beauties of my style hitherto undreamt of.”

The Trouble with D3


Recently there were a couple of threads on Twitter discussing the difficulties associated with learning d3.js. I’ve also seen this come up in many similar conversations I’ve had at meetups, conferences, workshops, mailing list threads and slack chats. While I agree that many of the difficulties are real, the threads highlight a common misconception that needs to be cleared up if we want to help people getting into data visualization.

The original thread that spurred quite a bit of discussion and some salient points.
A thread with lots of excellent hard-won perspective.

The misconception at the heart of these threads is that d3 and data visualization are the same thing. That to do data visualization one must learn and use all of d3, and to use d3 one must learn all of data visualization. Instead, I like to emphasize that d3 is a toolkit for communicating complex data-driven concepts on the web.

What I want to get across here is how we can get a more holistic view of d3’s role in web-based data visualization. Let’s use a metaphor inspired by Miles McCrocklin where data visualization is likened to building furniture. All kinds of people might get into building furniture, for all kinds of reasons, especially when they see the beautiful things other people are making:

The Eames chair is considered a masterfully designed chair
There are many aspirational data visualizations made with d3

People see the impressive output and naturally desire the ability to make it themselves, they ask how it is done and often hear “it was made with d3.” This is the start of the problem, because when someone hears that it was made with d3, they think “oh, I should go learn d3”. They go over to the documentation and see something like this:

d3’s API

Many of these tools seem baffling; they require knowledge about woodworking and processes we’ve never thought about before, or never even knew we might need to think about. We feel overwhelmed and discouraged, and the path to something that seemed within reach suddenly looks long and treacherous.

This is where I believe we can change things for the better: rather than changing the toolset, we can guide people along paths better suited to their goals. Let’s examine a few common situations where people find themselves wanting to do interactive data visualization and how we might plot a better course for each.

The designer

Our designer is already comfortable communicating ideas visually; they know how to break down complex problems and map them to relatable concepts. They have a suite of tools that enhance their ability to express what’s in their mind. They often are not very familiar with programming, though perhaps they have some experience with basic HTML and CSS for putting together static web pages. They’ve seen what people can make with d3 and are driven to be able to do the same. When they try to understand what looks like a very small amount of code in a bl.ock, they get very confused.

What part of this is JavaScript? What part is specific to d3? What is an asynchronous request? What is this DOM I keep hearing about?

For these folks, d3 offers great power and flexibility, but first they must learn some foundational technical skills to operate in this environment. I often recommend Scott Murray’s excellent d3 tutorial (and book) which covers basic HTML, CSS and JavaScript concepts. I also recommend experimenting with exporting SVG from design tools like Illustrator and Sketch and imbuing them with interactivity and data magic in the browser.

When starting out, I often encourage designers not to focus on the enter/update/exit pattern, reusability or performance concerns. It’s much more helpful to focus on getting the desired output, once you have something close there are lots of friendly folks that can help you make it more performant or robust.

The analyst

Our analyst is already comfortable working with data, writing queries and calling powerful functions with complex APIs. They have a workflow in a powerful environment like R Studio or Jupyter Notebooks. Most likely they come to d3 because they want to publish their analysis in some way. While the analyst is typically more comfortable programming than the designer, they are likely not familiar with the idiosyncrasies of programming in a web browser environment.

What is the difference between SVG and Canvas? What is the JavaScript equivalent to Pandas/Tidy? Why can’t I draw a line chart with an SVG line? What is this “d” attribute on a path?

For these folks I also recommend a primer in web development to familiarize themselves with concepts like the DOM. Again, my favorite starting point is Scott Murray’s d3 tutorial (and book). I would also recommend a crash course in JavaScript and JSON, exporting data from their normal environment as JSON for visualizing in the browser.

When starting out, I often encourage analysts to ignore a lot of d3’s utility functions, as they are probably more familiar with the powerful functions in their own environments. Instead, I think it’s best to focus on exporting the data into an easy-to-consume JSON or CSV format that matches existing examples.

The software engineer

Our software engineer is an interesting case, because although they have a lot of the foundational skills and knowledge around web development, some of d3’s tools require a foreign way of thinking. In our metaphor, the engineer doesn’t just care about making furniture, they are working on the entire building. There are frameworks and infrastructure that the furniture has to fit inside.

What is this enter/update/exit business? Why are you messing with my DOM? Transitions… How do I unit test those?

Many developers will already be intimately familiar with the DOM and JavaScript, so my advice is to actually try and ignore the parts of d3 which focus on the DOM. Instead, become familiar with some key utilities for data visualizations like d3-scale. D3 is broken up into many smaller modules so it’s pretty easy to cherry-pick the functionality you want to use.
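For example, d3-scale on its own is just a data-to-pixels mapping, with no DOM involvement at all:

```typescript
import { scaleLinear } from "d3-scale";

// Map a data domain onto a pixel range; nothing here touches the DOM.
const x = scaleLinear()
  .domain([0, 100])   // data values
  .range([0, 600]);   // pixels

console.log(x(50));         // 300
console.log(x.invert(450)); // 75, handy for turning mouse positions back into data
```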

I also emphasize separating the layout of data from the visualization, so using a module like d3-hierarchy you can generate a data structure with d3 and then render it into the DOM using your framework of choice.
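A small sketch of that separation: compute a treemap layout as plain data with d3-hierarchy, then hand the resulting rectangles to whatever rendering layer you prefer. The data and sizes here are made up for illustration.

```typescript
import { hierarchy, treemap } from "d3-hierarchy";

// Toy dataset: a root with three weighted leaves.
const data = {
  name: "root",
  children: [
    { name: "a", value: 40 },
    { name: "b", value: 25 },
    { name: "c", value: 35 },
  ],
};

const root: any = hierarchy<any>(data)
  .sum(d => d.value ?? 0)                        // how big each leaf should be
  .sort((a, b) => (b.value ?? 0) - (a.value ?? 0));

treemap<any>().size([600, 400]).padding(2)(root);

// Each leaf now carries x0/y0/x1/y1 coordinates; render them with React, Vue,
// or any other view layer instead of letting d3 manipulate the DOM.
const rects = root.leaves().map((d: any) => ({
  name: d.data.name,
  x: d.x0,
  y: d.y0,
  width: d.x1 - d.x0,
  height: d.y1 - d.y0,
}));
```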

Silver bullets

These situations are loose archetypes, many people will fall somewhere between them and that’s perfectly fine too. The idea is to separate out the goals and constraints so that we can better guide the diverse folks entering our community.

I personally think of web standards as the lowest common denominator for global communication. The graphics APIs are not ideal, but if you want to instantly distribute your data-driven experience to billions of people, I think it’s reasonable to pay the price of a relatively steep learning curve. The underlying concepts of 2D graphics, visual design, user experience design, information architecture and programming all transfer directly to many other endeavors besides data visualization.

But sometimes a chair is just a place to sit; we don’t have the time or money to care that much, and IKEA will do just fine! In those cases there are plenty of charting libraries that only need a little bit of configuration to get going.

Sometimes this is the only tool you need.

Elijah Meeks has made a great map of the d3 API that breaks down the toolbox into useful categories in his recent article.

from Elijah’s D3 is not a Data Visualization Library article.

I’ve also attempted to map out the d3 learning landscape in my article The Hitchhiker’s Guide to D3, which gives some links and starting points for what I believe are some of the more essential concepts.

The journey isn’t easy, but it can certainly be an adventure!

A while back I interviewed a handful of data visualizers who learned d3 in the process of expressing themselves and the datasets they cared about. The common theme was that they had started with goals. They learned what they needed from d3 along the way to achieving those goals.

So grab a map and plot your own course through the vast world of Data Visualization. You can find some trails others have blazed with Blockbuilder search, try out JavaScript’s very own notebook environment, Observable, and join over 3,000 like-minded chair makers, I mean data visualizers, on the d3 Slack channel.

Good luck, I look forward to seeing your visualizations!

I’d like to thank Erik Hazzard, Kerry Rodden, Zan Armstrong, Yannick Assogba, Adam Pearce and Nadieh Bremer for their feedback on this article.

A Child’s Garden of Inter-Service Authentication Schemes


Modern applications tend to be composed from relationships between smaller applications. Secure modern applications thus need a way to express and enforce security policies that span multiple services. This is the “server-to-server” (S2S) authentication and authorization problem (for simplicity, I’ll mash both concepts into the term “auth” for most of this post).

Designers today have a lot of options for S2S auth, but there isn’t much clarity about what the options are or why you’d select any of them. Bad decisions sometimes result. What follows is a stab at clearing the question up.

Cast Of Characters

Alice and Bob are services on a production VPC. Alice wants to make a request of Bob. How can we design a system that allows this to happen?

Here’s, I think, a pretty comprehensive overview of available S2S schemes. I’ve done my best to describe the “what’s” and minimize the “why’s”, beyond just explaining the motivation for each scheme. Importantly, these are all things that reasonable teams use for S2S auth.

Nothing At All

Far and away the most popular S2S scheme is “no auth at all”. Internet users can’t reach internal services. There’s little perceived need to protect a service whose only clients are already trusted.

Bearer Token

Bearer tokens rule everything around us. Give Alice a small blob of data, such that when Bob sees that data presented, he assumes he’s talking to Alice. Cookies are bearer tokens. Most API keys are bearer tokens. OAuth is an elaborate scheme for generating and relaying bearer tokens. SAML assertions are delivered in bearer tokens.

The canonical bearer token is a random string, generated from a secure RNG, that is at least 16 characters long (that is: we generally consider 128 bits a reasonable common security denominator). But part of the point of a bearer token is that the holder doesn’t care what it is, so Alice’s bearer token could also encode data that Bob could recover. This is common in client-server designs and less common in S2S designs.
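A minimal sketch of minting and checking such a token in Node-flavored TypeScript; the hex encoding and the idea of comparing against a stored value are arbitrary choices here, not part of any standard.

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Mint an opaque bearer token with roughly 128 bits of entropy.
function mintToken(): string {
  return randomBytes(16).toString("hex"); // 32 hex characters
}

// Compare tokens in constant time to avoid leaking prefixes via timing.
function tokensMatch(presented: string, expected: string): boolean {
  const a = Buffer.from(presented);
  const b = Buffer.from(expected);
  return a.length === b.length && timingSafeEqual(a, b);
}
```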

A few words about passwords

S2S passwords are disappointingly common. You see them in a lot of over-the-Internet APIs (ie, for S2S relationships that span companies). A password is basically a bearer token that you can memorize and quickly type. Computers are, in 2018, actually pretty good at memorizing and typing, and so you should use real secrets, rather than passwords, in S2S applications.

HMAC(timestamp)

The problem with bearer tokens is that anybody who has them can use them. And they’re routinely transmitted. They could get captured off the wire, or logged by a proxy. This keeps smart ops people up at night, and motivates a lot of “innovation”.

You can keep the simplicity of bearer tokens while avoiding the capture-in-flight problem by replacing the tokens with shared secrets, and using those secrets to authenticate a timestamp rather than sending them over the wire. A valid HMAC proves ownership of the shared secret without revealing it. You’d then proceed as with bearer tokens.
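A sketch of the idea, assuming a pre-shared secret and an arbitrary freshness window; a real deployment also needs replay protection and a clock-skew policy beyond what is shown here.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Alice proves possession of the shared secret by MACing the current time.
function makeProof(secret: Buffer, now = Date.now()): { ts: number; mac: string } {
  const mac = createHmac("sha256", secret).update(String(now)).digest("hex");
  return { ts: now, mac };
}

// Bob recomputes the MAC and rejects stale or far-future timestamps.
function verifyProof(secret: Buffer, proof: { ts: number; mac: string },
                     maxSkewMs = 60_000): boolean {
  if (Math.abs(Date.now() - proof.ts) > maxSkewMs) return false;
  const expected = createHmac("sha256", secret).update(String(proof.ts)).digest("hex");
  if (proof.mac.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(proof.mac));
}
```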

A few words about TOTP

TOTP is basically HMAC(timestamp) stripped down to make it easy for humans to briefly memorize and type. As with passwords, you shouldn’t see TOTP in S2S applications.

A few words about PAKEs

PAKEs are a sort of inexplicably popular cryptographic construction for securely proving knowledge of a password and, from that proof, deriving an ephemeral shared secret. SRP is a PAKE. People go out of their way to find applications for PAKEs. The thing to understand about them is that they’re fundamentally a way to extract cryptographic strength from passwords. Since this isn’t a problem computers have, PAKEs don’t make sense for S2S auth.

Encrypted Tokens

HMAC(timestamp) is stateful; it works because there’s pairwise knowledge of secrets and the metadata associated with them. Usually, this is fine. But sometimes it’s hard to get all the parties to share metadata.

Instead of making that metadata implicit to the protocol, you can store it directly in the credential: include it alongside the timestamp and HMAC or encrypt it. This is how Rails cookie storage works; it’s also the dominant use case for JWTs. AWS-style request “signing” is another example (using HMAC and forgoing encryption).

By themselves, encrypted tokens make more sense in client-server settings than they do for S2S. Unlike client-server, where a server can just use the same secret for all the clients, S2S tokens still require some kind of pairwise state-keeping.
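A minimal sketch of such a self-contained token, using an AEAD cipher from Node’s crypto module; key distribution, key identifiers and claim validation are deliberately out of scope.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Seal arbitrary claims into an encrypted, authenticated token (AES-256-GCM).
function sealToken(key: Buffer, claims: object): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([
    cipher.update(JSON.stringify({ ...claims, iat: Date.now() })),
    cipher.final(),
  ]);
  // Layout: iv || ciphertext || auth tag, base64url-encoded.
  return Buffer.concat([iv, body, cipher.getAuthTag()]).toString("base64url");
}

// Open a token, verifying the auth tag before trusting any of the metadata.
function openToken(key: Buffer, token: string): object {
  const raw = Buffer.from(token, "base64url");
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(raw.length - 16);
  const body = raw.subarray(12, raw.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return JSON.parse(
    Buffer.concat([decipher.update(body), decipher.final()]).toString()
  );
}
```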

Macaroons

You can’t easily design a system where Alice takes her encrypted token, reduces its security scope (for instance, from read-write to read-only), and then passes it to Dave to use on her behalf. No matter how “sophisticated” we make the encoding and transmission mechanisms, encrypted tokens still basically express bearer logic.

Macaroons are an interesting (and criminally underused) construction that directly provides both delegation and attenuation. They’re a kind of token from which you can derive more restricted tokens (that’s the “attenuation”), and, if you want, pass that token to someone else to use without them being able to exceed the authorization you gave them. Macaroons accomplish this by chaining HMAC; the HMAC of a macaroon is the HMAC secret for its derived attenuated macaroons.
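A toy illustration of that chaining trick (not a real macaroon library, and the caveat strings are made up): each caveat is folded into a running HMAC, so a holder can only ever narrow the token, never widen it or strip caveats.

```typescript
import { createHmac } from "node:crypto";

const hmac = (key: Buffer | string, msg: string) =>
  createHmac("sha256", key).update(msg).digest();

// Placeholder root key, known only to the minting service.
const rootKey = "root-secret-known-only-to-the-minting-service";

// Mint: identifier only, full authority.
let sig = hmac(rootKey, "user = alice");

// Attenuate: anyone holding the macaroon can add caveats without the root key,
// because the current signature becomes the HMAC key for the next caveat.
sig = hmac(sig, "scope = read-only");
sig = hmac(sig, "expires < 2018-07-01T00:00:00Z");

// Verify: the service replays the chain from the root key, compares signatures,
// and then checks that every caveat actually holds for the incoming request.
```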

By adding encryption along with HMAC, Macaroons also express “third-party” conditions. Alice can get Charles to attest that Alice is a member of the super-awesome-best-friends-club, and include that in the Macaroon she delivers to Bob. If Bob also trusts Charles, Bob can safely learn whether Alice is in the club. Macaroons can flexibly express whole trees of these kinds of relationships, capturing identity, revocation, and… actually, revocation and identity are the only two big wins I can think of for this feature.

Asymmetric Tokens

You can swap the symmetric constructions used in tokens for asymmetric cryptography and get some additional properties.

Using signatures instead of HMACs, you get non-repudiability: Bob can verify Alice’s token, but can’t necessarily mint a new Alice token himself.

More importantly, you can eliminate pairwise configuration. Bob and Alice can trust Charles, who doesn’t even need to be online all the time, and from that trust derive mutual authentication.
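A sketch of the signature side of this, using Ed25519 from Node’s crypto module; the claim format is invented for illustration.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Charles signs a statement about Alice; Bob verifies it with Charles's public
// key and never needs a pairwise secret with Alice.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const claim = Buffer.from(JSON.stringify({ sub: "alice", exp: Date.now() + 60_000 }));
const signature = sign(null, claim, privateKey);      // Ed25519 takes no digest name

const ok = verify(null, claim, publicKey, signature); // true if untampered
```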

The trade-offs for these capabilities are speed and complexity. Asymmetric cryptography is much slower and much more error-prone than symmetric cryptography.

Mutual TLS

Rather than designing a new asymmetric token format, every service can have a certificate. When Alice connects to Bob, Bob can check a whitelist of valid certificate fingerprints, and whether Alice’s name on her client certificate is allowed. Or, you could set up a simple CA, and Bob could trust any certificate signed by the CA. Things can get more complex; you might take advantage of X.509 and directly encode claims in certs (beyond just names).
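A hedged sketch of what Bob’s side might look like with Node’s tls module; the certificate paths and the allowlisted client name are placeholders, and a real deployment would hang much more policy off the peer certificate.

```typescript
import { readFileSync } from "node:fs";
import { createServer } from "node:tls";

// Bob requires a client certificate signed by the internal CA, then checks the
// peer's name against an allowlist.
const allowedClients = new Set(["alice.internal.example"]);

const server = createServer(
  {
    key: readFileSync("/etc/certs/bob.key"),
    cert: readFileSync("/etc/certs/bob.crt"),
    ca: readFileSync("/etc/certs/internal-ca.crt"),
    requestCert: true,          // ask the client for a certificate...
    rejectUnauthorized: true,   // ...and refuse the handshake if the CA check fails
  },
  socket => {
    const peer = socket.getPeerCertificate();
    if (!allowedClients.has(peer.subject?.CN ?? "")) {
      socket.destroy();         // authenticated by the CA, but not authorized here
      return;
    }
    socket.write("hello alice\n");
  }
);

server.listen(8443);
```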

A few words about SPIFFE

If you’re a Kubernetes person this scheme is also sometimes called SPIFFE.

A few words about Tokbind

If you’re a participant in the IETF TLS Working Group, you can combine bearer tokens and MTLS using tokbind. Think of tokbind as a sort of “TLS cookie”. It’s derived from the client and server certificate and survives multiple TLS connections. You can use a tokbind secret to sign a bearer token, resulting in a bearer token that is confined to a particular MTLS relationship that can’t be used in any other context.

Magic Headers

Instead of building an explicit application-layer S2S scheme, you can punt the problem to your infrastructure. Ensure all requests are routed through one or more trusted, stateful proxies. Have the proxies set headers on the forwarded requests. Have the services trust the headers.

This accomplishes the same things a complicated Mutual TLS scheme does without requiring slow, error-prone public-key encryption. The trade-off is that your policy is directly coupled to your network infrastructure.

Kerberos

You can try to get the benefits of magic headers and encrypted tokens at the same time using something like Kerberos, where there’s a magic server trusted by all parties, but bound by cryptography rather than network configuration. Services need to be introduced to the Kerberos server, but not to each other; mutual trust of the Kerberos server, and authorization logic that lives on that Kerberos server, resolves all auth questions. Notably, no asymmetric cryptography is needed to make this work.

Themes

What are the things we might want to achieve from an S2S scheme? Here’s a list. It’s incomplete. Understand that it’s probably not reasonable to expect all of these things from a single scheme.

Minimalism

This goal is less obvious than it seems. People adopt complicated auth schemes without clear rationales. It’s easy to lose security by doing this; every feature you add to an application – especially security features – adds attack surface. From an application security perspective, “do the simplest thing you can get away with” has a lot of merit. If you understand and keep careful track of your threat model, “nothing at all” can be a security-maximizing option. Certainly, minimalism motivates a lot of bearer token deployments.

The opposite of minimalism is complexity. A reasonable way to think about the tradeoffs in S2S design is to think of complexity as a currency you have to spend. If you introduce new complexity, what are you getting for it?

Claims

Authentication and authorization are two different things: who are you, and what are you allowed to do? Of the two problems, authorization is the harder one. An auth scheme can handle authorization, or assist authorization, or punt on it altogether.

Opaque bearer token schemes usually just convey identity. An encrypted token, on the other hand, might bind claims: statements that limit the scope of what the token enables, or metadata about the identity of the requestor.

Schemes that don’t bind claims can make sense if authorization logic between services is straightforward, or if there’s already a trusted system (for instance, a service discovery layer) that expresses authorization. Schemes that do bind claims can be problematic if the claims carried in a credential can be abused, or targeted by application flaws. On the other hand, an S2S scheme that supports claims can do useful things like propagating on-behalf-of requestor identities or supporting distributed tracing.

Confinement

The big problem with HTTP cookies is that once they’ve captured one, an attacker can abuse it however they see fit. You can do better than that by adding mitigations or caveats to credentials. They might be valid only for a short period of time, or valid only for a specific IP address (especially powerful when combined with short expiry), or, as in the case of Tokbind, valid only on a particular MTLS relationship.

Statelessness

Statelessness means Bob doesn’t have to remember much (or, ideally, anything) about Alice. This is an immensely popular motivator for some S2S schemes. It’s perceived as eliminating a potential performance bottleneck, and as simplifying deployment.

The tricky thing about statelessness is that it often doesn’t make sense to minimize state, only to eliminate it. If pairwise statefulness creeps back into the application for some other reason (for instance, Bob has to remember anything at all about Alice), stateless S2S auth can spend a lot of complexity for no real gain.

Pairwise Configuration

Pairwise configuration is the bête noire of S2S operational requirements. An application secret that has to be generated once for each of several peers and that anybody might ever store in code is part of a scheme in which secrets are never, ever rotated. In a relatively common set of circumstances, pairwise config means that new services can only be introduced during maintenance windows.

Still, if you have a relatively small and stable set of services (or if all instances of a particular service might simply share a credential), it can make sense to move complexity out of the application design and into the operational requirements. Also it makes sense if you have an ops team and you never have to drink with them.

I kid, really, because if you can get away with it, not spending complexity to eliminate pairwise configuration can make sense. Also, many of the ways S2S schemes manage to eliminate pairwise configurations involve introducing yet another service, which has a sort of constant factor cost that can swamp the variable cost.

Delegation and Attenuation

People deploy a lot of pointless delegation. Application providers might use OAuth for their client-server login, for instance, even though no third-party applications exist. The flip side of this is that if you actually need delegation, you really want to have it expressed carefully in your protocol. The thing you don’t want to do is ever share a bearer token.

Delegation can show up in internal S2S designs as a building block. For instance, a Macaroon design might have a central identity issuance server that grants all-powerful tokens to systems that in turn filter them for specific requestors.

Some delegation schemes have implied or out-of-band attenuation. For instance, you might not be able to look at an OAuth token and know what its restrictions are. These systems are rough in practice; from an operational security perspective, your starting point probably needs to be that any lost token is game-over for its owner.

A problem with writing about attenuation is that Macaroons express it so well that it’s hard to write about its value without lapsing into the case for Macaroons.

Flexibility

If you use JSON as your credential format, and you later build a feature that allows a credential to express not just Alice’s name but also whether she’s an admin, you can add that feature without changing the credential format. Later, attackers can add the feature where they turn any user into an admin, and you can then add the feature that breaks that attack. JSON is just features all the way down.

I’m only mostly serious. If you’re doing something more complicated than a bearer token, you’re going to choose an extensible mechanism. If not, I already made the case for minimalism.

Coupling

All things being equal, coupling is bad. If your S2S scheme is expressed by network controls and unprotected headers, it’s tightly coupled to the network deployment, which can’t change without updating the security scheme. But if your network configuration doesn’t change often, that limitation might save you a lot of complexity.

Revocation

People talk about this problem a lot. Stateless schemes have revocation problems: the whole point of a stateless scheme is for Bob not to have to remember anything about Alice (other than perhaps some configuration that says Alice is allowed to make requests, but not Dave, and this gets complicated really quickly and can quickly call into question the value of statelessness but let’s not go there). At any rate: a stateless bearer token will eventually be compromised, and you can’t just let it get used over and over again to steal data.

The two mainstream answers to this problem are short expiry and revocation lists.

Short expiry addresses revocation if: (a) you have a dedicated auth server and the channel to that server is somehow more secure than the channel between Alice and Bob; (b) the auth server relies on a long-lived secret that never appears on the less-secure channel; and (c) the auth server issues an access secret that is transmitted on the less-secure channel, but lives only for a few minutes. These schemes are called “refresh tokens”. Refresh tends to find its way into a lot of designs where this fact pattern doesn’t hold. Security design is full of wooden headphones and coconut phones.

Revocation lists (and, usually, some attendant revocation service) are a sort of all-purpose solution to this problem; you just blacklist revoked tokens, for at least as long as the lifetime of the token. This obviously introduces state, but it’s a specific kind of state that doesn’t (you hope) grow as quickly as your service does. If it’s the only state you have to keep, it’s nice to have the flexibility of putting it wherever you want.

Rigidity

It is hard to screw up a random bearer token. Alice stores the token and supplies it on requests. Bob uses the token to look up an entry in a database. There aren’t a lot of questions.

It is extraordinarily easy to screw up JWT. JWT is a JSON format where you have to parse and interpret a JSON document to figure out how to decrypt and authenticate a JSON document. It has revived bugs we thought long dead, like “repurposing asymmetric public keys as symmetric private keys”.

Problems with rigidity creep up a lot in distributed security. The first draft of this post said that MTLS was rigid; you’re either speaking TLS with a client cert or you’re not. But that ignores how hard X.509 validation is. If you’re not careful, an attacker can just ask Comodo for a free email certificate and use it to access your services. Worse still, MTLS can “fail open” in a way that TLS sort of doesn’t: if a service forgets to check for client certificates, TLS will still get negotiated, and you might not notice until an attacker does.

Long story short: bearer tokens are rigid. JWT is a kind of evil pudding. Don’t use JWT.

Universality

A nice attribute of widely deployed MTLS is that it can mitigate SSRF bugs (the very bad bug where an attacker coerces one of your services into making an arbitrary HTTP request, probably targeting your internal services, on their behalf). If the normal HTTP-request-generating code doesn’t add a client certificate, and every internal service needs to see one to honor a request, you’ve limited the SSRF attacker’s options a lot.

On the other hand, we forget that a lot of our internal services consist of code that we didn’t write. The best example of this is Redis, which for years proudly waved the banner of “if you can talk to it, you already own the whole application”.

It’s helpful if we can reasonably expect an auth control to span all the systems we use, from Postgres to our custom revocation server. That might be a realistic goal with Kerberos, or with network controls and magic headers; with tunnels or proxies, it’s even something you can do with MTLS – this is a reason MTLS is such a big deal for Kubernetes, where it’s reasonable for the infrastructure to provide every container with an MTLS-enabled Envoy proxy. On the other hand it’s unlikely to be something you can achieve with Macaroons or evil puddings.

Performance and Complexity

If you want performance and simplicity, you probably avoid asymmetric crypto, unless your request frequency is (and will remain) quite low. Similarly, you’d probably want to avoid dedicated auth servers, especially if Bob needs to be in constant contact with them for Alice to make requests to him; this is a reason people tend to migrate away from Kerberos.

Our Thoughts

Do the simplest thing that makes sense for your application right now. A true fact we can relate from something like a decade of consulting work on these problems: intricate S2S auth schemes are not the norm; if there’s a norm, it’s “nothing at all except for ELBs”. If you need something, but you have to ask whether that something oughtn’t just be bearer tokens, then just use bearer tokens.

Unfortunately, if there’s a second norm, it’s adopting complicated auth mechanisms independently or, worse, in combination, and then succumbing to vulnerabilities.

Macaroons are inexplicably underused. They’re the Velvet Underground of authentication mechanisms, hugely influential but with little radio airplay. Unlike the Velvets, Macaroons aren’t overrated. They work well for client-server auth and for S2S auth. They’re very flexible but have reassuring format rigidity, and they elegantly take advantage of just a couple of simple crypto operations. There are libraries for all the mainstream languages. You will have a hard time coming up with a scenario where we’d try to talk you out of using them.

JWT is a standard that tries to do too much and ends up doing everything haphazardly. Our loathing of JWT motivated this post, but this post isn’t about JWT; we’ll write more about it in the future.

If your inter-service auth problem really decomposes to inter-container (or, without containers, inter-instance) auth, MTLS starts to make sense. The container-container MTLS story usually involves containers including a proxy, like Envoy, that mediates access. If you’re not connecting containers, or have ad-hoc components, MTLS can really start to take on a CORBA feel: random sidecar processes (here stunnel, there Envoy, and this one app that tries to do everything itself). It can be a pain to configure properly, and this is a place you need to get configurations right.

If you can do MTLS in such a way that there is exactly one way all your applications use it (probably: a single proxy that all your applications install), consider MTLS. Otherwise, be cautious about it.

Beyond that, we don’t want to be too much more prescriptive. Rather, we’d just urge you to think about what you’re actually getting from an S2S auth scheme before adopting it.

(But really, you should just use Macaroons.)


How the gig economy is making life harder for North American workers


“The full-time job—to which we’ve attached all of the rules about treating workers fairly—is dissolving, and the community of workers who are treated as second-class citizens, who aren’t protected by the same laws or entitled to the same benefits as other workers, is growing. That is a big, scary problem.” That is how Sarah Kessler concludes her engaging new book, Gigged, about today’s economy. The deputy editor of the news website Quartz at Work, Kessler approaches her topic with even-handedness and rigour, and Uber emerges as a particular villain.

After its pricing model was leaked to the press, Uber’s internal calculations for drivers’ wages minus expenses—much less than Uber was boasting in its recruitment advertising—became public. “On average, it estimated they were making $10.75 per hour in the Houston area, $8.77 per hour in Detroit, and $13.17 in Denver, which was slightly less than Walmart’s average full-time hourly rate in 2016… Unlike a minimum-wage job, driving for Uber came without any paid breaks or benefits like health insurance. What it paid could change any time.”

Kessler cites the insightful work of Seattle team David Rolf, a union leader, and Nick Hanauer, an entrepreneur and venture capitalist. “If our captains of industry are so certain that certainty is necessary for industry,” they conclude in an article for the online magazine Evonomics, citing a common argument against changing business regulations and adding new benefit programs, “then it surely must be true that their customer base, the American middle class, needs some of that certainty as well.” From New York, Kessler spoke with Maclean’s about race, the problem with Big Tech, why being classified as independent contractors makes it much harder for labour to organize, and what’s next.

READ ALSO: Inside the fight for the future of grocery delivery in Canada

Q: Why did you write Gigged?

A: The way people work is changing in a major way. A large portion of job growth is in temporary and independent work, which doesn’t come with the benefits, labour protections or predictability of traditional employment. As I was reporting on this trend, I was listening to a lot of conversations between Silicon Valley entrepreneurs, politicians and union leaders. Nobody was talking to workers who had actually participated in this shift, which is what I wanted to know about. What did this change mean for their experience—what does it mean for our lives? I hope the way I told the story—by following five people who are working in different parts of the gig economy—gives people a nuanced understanding of what the shift away from traditional jobs means for the economy, policy and people’s lives, but does so in a way that is personal and interesting to read.

Q: What changed your perspective while researching it?

A: I started reporting on the gig economy in 2011, when I was working at a tech blog. At the time, I was interviewing two or three entrepreneurs every day, and I started to notice a trend in start-ups that were pitching a way to end unemployment with their apps—all you did was press a button, and work came to you. One of them was Uber—I didn’t think it would work out. As a 22-year-old, the whole idea of the “gig economy” sounded pretty good to me. Then, a few years later, when I was working for Fast Company Magazine, I decided to try to make a living in the gig economy for a month. Because I had every advantage in the world—a college degree, professional-level (I hope) writing skills, white skin—I thought this would be easy. But I failed miserably. Finding work was difficult. When I did, it was unpredictable and required me to be available instantly. I kept agreeing to do work for cheaper and cheaper pay with the hope of competing with other workers. I realized that the narrative was much more complicated than the Silicon Valley story, and I wanted to see how the story looked from different perspectives.

Q: Gigged captures Uber’s bad behaviour with numerous arresting points. “Other drivers, who had signed up to finance their cars through Uber, felt that Uber had made it impossible to quit. As the company [significantly and suddenly] lowered rates, a bigger and bigger portion of their per-mile earnings was automatically deducted from their checks to pay for the car they’d acquired to do the job.” Uber lied about how much their drivers would make, right?

A: The way Uber advertised its jobs, at least early on, was as an opportunity to become an entrepreneur. They published press releases saying that drivers made $90,000 per year, even while internally calculating that—after expenses—drivers took home an amount that looked much more like the minimum wage.

READ ALSO: How one Ontario town used Uber to solve its public transit crisis

Q: And to add to the confusion, as Gigged notes, Uber changed fares frequently and without warning. “In Kansas City, where Abe lived, one seasonal price cut, in January 2015, lowered the price of a 19-mile trip from the airport to downtown from about $38 to $22.80. The company made similar price reductions in 47 other cities around the same time.” The case seems much stronger for Uber and Deliveroo and Grubhub workers, etc., being employees rather than contractors?

A: It’s about 20 per cent to 30 per cent less expensive to hire an independent worker than an employee, so there’s a huge incentive to classify everyone as an independent worker. But we don’t have a clear-cut system for deciding which category a worker should be placed in—there’s a lot of grey territory, and that makes it easier to cheat. When it comes to apps like Uber, Deliveroo and Grubhub, different courts have made different decisions about whether their workers should be employees. Meanwhile, technology makes it easier to toe the line—for instance, if something like training makes it more likely for workers to be declared employees, there’s a way to have the app guide workers without technically providing training.

Q: Being classified as independent contractors (rather than employees) also makes it much harder for labour to organize, doesn’t it?

A: Yes. In the book, I followed a few efforts to organize workers in the gig economy. The traditional tools don’t work. If you’re going to plan an Uber strike, as one person who I followed did, how do you find your fellow drivers to tell them a strike is on? How do you know if they’re striking? At one point, someone was driving around with a megaphone yelling, “There’s a strike.” On top of that, freelancers don’t have a federally protected right to unionize. They’re not protected from retaliation and can be considered to be “colluding” if they negotiate pay.

Q: Is the gig economy essentially—with exceptions like Airbnb and the likes of your NYC programmer Curtis Larson on Gigster—about exploiting cheap marginalized labour? Robert Reich, formerly Clinton’s secretary of labor, calls it a sweatshop. You write: “In New York City, where the living wage for a family of four is $46,000 in a year, a group that said it represents 50,000 ride-hail drivers told The New York Times that more than one-fifth of its members earned less than $30,000 in a year, before expenses.” Supposedly liberal media leaders like Thomas Friedman credulously said the gig economy is “surely part of the answer” to America’s economic malaise. 

A: Yes, early on there was a lot of buy-in to the pitch that the gig economy would solve economic problems. I don’t think that it’s wrong to say the gig economy does solve problems for some people. Curtis, the programmer I followed in my book, for instance, used it to find clients instantly after he quit his job. It gave him immediate flexibility and independence. The problem is that Silicon Valley entrepreneurs act like this is the story for everybody. It’s not. If you can’t save a year’s worth of living expenses, like Curtis did, and you’re living paycheque to paycheque, it’s a big deal that you don’t know how much you’re going to make next week.

READ MORE: How the weekend has disappeared and why we need to take it back

Q: After Obama’s soaring campaign oratory about a fairer American economy, we saw the rise of the unfair aspects of the gig economy under his watch. Was that disappointing?

A: The app-based gig economy is part of a decades-long trend in companies trying to employ as few people as possible. It’s just the mobile and digital tech-fuelled version. Under the Obama administration, the Labor Department issued guidance that made it more difficult to classify workers as independent contractors. It was intended to cut down on employers cheating their workers out of benefits and labour protections by calling them freelancers. The Trump administration rescinded that guidance.

Q: When you investigated being a cleaner for a Dickensian New York gig economy cleaning joint, most of the workers were African Americans. Is there a racial aspect to the gig economy?

A: African Americans face discrimination in the workplace. You would expect this to carry over to the gig economy and online platforms, and the evidence suggests it does. Take a recent study on discrimination on Airbnb, for instance. When guests had African American-sounding names, hosts were less likely to accept them than they were guests with identical profiles but white-sounding names.

Q: “If poor people knew how rich rich people are, there would be riots in the streets,” Chris Rock put it. What’s next?

A: According to one estimate, between 2005 and 2015, almost all jobs created in the U.S. were in categories like temp work and freelancing. This trend is a huge problem for countries that have attached all of their social insurance to the traditional job. Things like health insurance, retirement-savings programs, unemployment insurance, worker’s compensation and regular work help people feel safe and secure. Security helps people buy things and take risks like starting businesses—and it’s worth figuring out how to protect it even as work is changing.


Summit Supercomputer Up and Running, Claims First Exascale Application


The Department of Energy’s 200-petaflop Summit supercomputer is now in operation at Oak Ridge National Laboratory (ORNL).  The new system is being touted as “the most powerful and smartest machine in the world.”

And unless the Chinese pull off some sort of surprise this month, the new system will vault the US back into first place on the TOP500 list when the new rankings are announced in a couple of weeks. Although the DOE has not revealed Summit’s Linpack result as of yet, the system’s 200-plus-petaflop peak number will surely be enough to outrun the 93-petaflop Linpack mark of the current TOP500 champ, China’s Sunway TaihuLight.

Even though the general specifications for Summit have been known for some time, it’s worth recapping them here: The IBM-built system comprises 4,608 nodes, each one housing two Power9 CPUs and six NVIDIA Tesla V100 GPUs. The nodes are hooked together with a Mellanox dual-rail EDR InfiniBand network, delivering 200 Gbps to each server.

Assuming all those nodes are fully equipped, the GPUs alone will provide 215 peak petaflops at double precision. Also, since each V100 delivers 125 teraflops of mixed-precision Tensor Core operations, the system’s peak rating for deep learning performance is something on the order of 3.3 exaflops.
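As a rough back-of-the-envelope check, using the commonly quoted per-V100 peaks of about 7.8 double-precision teraflops and 125 Tensor Core teraflops (the published 3.3-exaflop figure sits a bit below the raw product, so presumably some derating applies):

```latex
% Back-of-the-envelope peaks, assuming 4,608 nodes x 6 fully equipped V100s
4{,}608 \times 6 \times 7.8\,\mathrm{TF} \approx 215\,\mathrm{PF} \quad \text{(double precision)}
\qquad
4{,}608 \times 6 \times 125\,\mathrm{TF} \approx 3.5\,\mathrm{EF} \quad \text{(Tensor Core, mixed precision)}
```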

Those exaflops are not just theoretical either. According to ORNL director Thomas Zacharia, even before the machine was fully built, researchers had run a comparative genomics code at 1.88 exaflops using the Tensor Core capability of the GPUs. The application was rummaging through genomes looking for patterns indicative of certain conditions. “This is the first time anyone has broken the exascale barrier,” noted Zacharia.

Of course, Summit will also support the standard array of science codes the DOE is most interested in, especially those having to do with things like fusion energy, alternative energy sources, material science, climate studies, computational chemistry, and cosmology. But since this is an open science system available to all sorts of research that frankly has nothing to do with energy, Summit will also be used for healthcare applications in areas such as drug discovery, cancer studies, addiction, and research into other types of diseases. In fact, at the press conference announcing the system’s launch, Zacharia expressed his desire for Oak Ridge to be “the CERN for healthcare data analytics.”

The analytics aspect dovetails nicely with Summit’s deep learning propensities, inasmuch as the former is really just a superset of the latter. When the DOE first contracted for the system back in 2014, the agency probably only had a rough idea of what they would be getting AI-wise.  Although IBM had been touting its data-centric approach to supercomputing prior to pitching its Power9-GPU platform to the DOE, the AI/machine learning application space was in its early stages. Because NVIDIA made the decision to integrate the specialized Tensor Cores into the V100, Summit ended up being an AI behemoth, as well as a powerhouse HPC machine.

As a result, the system is likely to be engaged in a lot of cutting-edge AI research, in addition to its HPC duties. For the time being, Summit will only be open to select projects as it goes through its acceptance process. In 2019, the system will become more widely available, including its use in the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

At that point, Summit’s predecessor, the Titan supercomputer, is likely to be decommissioned. Summit has about eight times the performance of Titan, with five times better energy efficiency. When Oak Ridge installed Titan in 2012, it was the most powerful system in the world, and it remained the fastest supercomputer in the US (well, now the second-fastest). Titan has NVIDIA GPUs too, but these are K20X graphics processors whose machine learning capacity is limited to four single-precision teraflops per device. Fortunately, all the GPU-enabled HPC codes developed for Titan should port over to Summit pretty easily and should be able to take advantage of the much greater computational horsepower of the V100.

For IBM, Summit represents a great opportunity to showcase its Power9-GPU AC922 server to other potential HPC and enterprise customers. At this point, the company’s principal success with its Power9 servers has been with systems sold to enterprise and cloud clients, but generally without GPU accelerators. IBM’s only other big win for its Power9/GPU product is the identically configured Sierra supercomputer being installed at Lawrence Livermore National Lab. The company seems to think its biggest opportunity with its V100-equipped server is with enterprise customers looking to use GPUs for database acceleration or developing deep learning applications in-house.

Summit will also fulfill another important role – that of a development platform for exascale science applications. As the last petascale system at Oak Ridge, the 200-petaflop machine will be a stepping stone for a bunch of HPC codes moving to exascale machinery over the next few years. And now with Summit up and running, that doesn’t seem like such a far-off prospect. “After all, it’s just 5X from where we are,” laughed Zacharia.

Top image: Summit supercomputer; Bottom image: Interior view of node. Credit: ORNL

WAZE for Drones: Expanding the National Airspace


Sitting in New York City, looking up at the clear June skies, I wonder if I am staring at an endangered phenomenon. According to many in the Unmanned Aircraft Systems (UAS) industry, skylines across the country soon will be filled with flying cars, quadcopter deliveries, emergency drones, and other robo-flyers. Moving one step closer to this mechanically-induced hazy future, General Electric (GE) announced last week the launch of AiRXOS, a “next generation unmanned traffic” management system.

Managing the National Airspace is already a political football, with the Trump Administration proposing to privatize the air-control division of the Federal Aviation Administration (FAA), taking its controller workforce of 15,000 off the government’s books. The White House argues that this would enable the FAA to modernize and adopt “NextGen” technologies to speed commercial air travel. While this budgetary line item is debated in the halls of Congress, one certainty inside the FAA is that the National Airspace (NAS) will have to expand to make room for an increased amount of commercial and recreational traffic, the majority of which will be unmanned.

Ken Stewart, the General Manager of AiRXOS, boasts, “We’re addressing the complexity of integrating unmanned vehicles into the national airspace. When you’re thinking about getting a package delivered to your home by drone, there are some things that need to be solved before we can get to that point.” The first step for the new division of GE is to pilot the system in a geographically-controlled airspace. To accomplish this task, DriveOhio’s UAS Center invested millions in the GE startup. Accordingly, the first test deployment of AiRXOS will be conducted over a 35 mile stretch of Ohio’s Interstate 33 by placing sensors along the road to detect and report on air traffic. GE states that this trial will lay the foundation for the UAS industry. As Alan Caslavka, president of Avionics at GE Aviation, explains, “AiRXOS is addressing the rapid changes in autonomous vehicle technology, advanced operations, and in the regulatory environment. We’re excited for AiRXOS to help set the standard for autonomous and manned aerial vehicles to share the sky safely.”

Stewart whimsically calls his new air traffic control platform WAZE for drones. Like the popular navigation app, AiRXOS provides drone operators with real-time flight-planning data to automatically avoid obstacles, other aircraft, and route around inclement weather. The company also plans to integrate with the FAA to streamline regulatory communications with the agency. Stewart explains that this will speed up authorizations as today, “It’s difficult to get [requests] approved because the FAA hasn’t collected enough data to make a decision about whether something is safe or not.”

NASA is a key partner in counseling the FAA on integrating commercial UAS into the NAS. Charged with removing the “technical and regulatory barriers that are limiting the ability for civil UAS to fly in the NAS” is Davis Hackenberg of NASA’s Armstrong Flight Research Center. Last year, we invited Hackenberg to present his UAS vision to RobotLabNYC. Hackenberg shared with the packed audience NASA’s multi-layered approach to parsing the skies for a wide range of aircraft, including high-altitude long-endurance flights, commercial airliners, small recreational craft, quadcopter inspections, drone deliveries and urban aerial transportation. Recently the FAA instituted a new regulation mandating that all aircraft be equipped with Automatic Dependent Surveillance-Broadcast (ADS-B) systems by January 1, 2020. The FAA calls such equipment “foundational NextGen technology that transforms aircraft surveillance using satellite-based positioning,” essentially communicating human-piloted craft to computers on the ground and, quite possibly, in the sky. Many believe this is a critical step towards delivering on the long-awaited promise of the commercial UAS industry with autonomous beyond-visual-line-of-sight flights.

I followed up this week with Hackenberg about the news of AiRXOS and the new FAA guidelines. He explained, “For aircraft operating in an ADS-B environment, testing the cooperative exchange of information on position and altitude (and potentially intent) still needs to be accomplished in order to validate the accuracy and reliability necessary for a risk-based safety case.” Hackenberg went on to describe how ADS-B might not help low-altitude missions: “For aircraft operating in an environment where many aircraft are not transmitting position and altitude (non-transponder equipped aircraft), developing low cost/weight/power solutions for DAA [Detect and Avoid] and C2 [Command and Control Systems] is critical to ensure that the unmanned aircraft can remain a safe distance from all traffic. Finally, the very low altitude environment (package delivery and air taxi) will need significant technology development for similar DAA/C2 solutions, as well as certified much more (e.g. vehicles to deal with hazardous weather conditions).” The Deputy Project Manager then shared with me his view of the future: “In the next five years, there will be significant advancements in the introduction of drone deliveries. The skies will not be ‘darkened,’ but there will likely be semi-routine service to many areas of the country, particularly major cities. I also believe there will be at least a few major cities with air taxi service using optionally piloted vehicles within the 10-year horizon. Having the pilot onboard in the initial phase may be a critical stepping-stone to gathering sufficient data to justify future safety cases. And then hopefully soon enough there will be several cities with fully autonomous taxi service.”

Last month, Uber ambitiously declared at its Elevate Summit that its aerial ride-hail program would begin shuttling humans by 2023. Uber plans to deploy electric vertical take-off and landing (eVTOL) vehicles throughout major metropolitan areas. “Ultimately, where we want to go is about urban mobility and urban transport, and being a solution for the cities in which we operate,” says Uber CEO Dara Khosrowshahi. Uber has been cited by many civil planners as a primary cause of increased urban congestion. Its eVTOL plan, called uberAIR, is aimed at alleviating terrestrial vehicle traffic by offsetting commutes with autonomous air taxis centrally located on rooftops throughout city centers.

One of Uber’s first test locations for uberAIR will be Dallas-Fort Worth, Texas. Tom Prevot, Uber’s Director of Engineering for Airspace Systems, describes the company’s effort to design a “Dynamic Skylane Network” of virtual lanes for its eVTOLs to travel along: “We’re designing our flight paths essentially to stay out of the scheduled air carriers’ flight paths initially. We do want to test some of these concepts of maybe flying in lanes and flying close to each other but in a very safe environment, initially.” To accomplish these objectives, Prevot’s group signed a Space Act Agreement with NASA to determine the requirements for its aerial ride-share network. Using Uber’s data, NASA is already simulating small-passenger flights around the Dallas-Fort Worth area to identify potential risks to an already crowded airspace.

After the Elevate conference, media reports hyped the imminent arrival of flying taxis. Rodney Brooks (considered by many the godfather of robotics) responded with a tweet: “Headline says ‘prototype’, story says ‘concept’. This is a big difference, and symptomatic of stupid media hype. Really!!!” Dan Elwell, the FAA’s Acting Administrator, was far more subdued in his opinion of how quickly the technology will arrive: “Well, we’ll see…”

Editor’s Note: This week we will explore regulating unmanned systems further with Democratic Presidential Candidate Andrew Yang and New York State Assemblyman Clyde Vanel at the RobotLab forum on “The Politics Of Automation” in New York City. 




Things I Learned in the Gulag


For fifteen years the writer Varlam Shalamov was imprisoned in the Gulag for participating in “counter-revolutionary Trotskyist activities.” He endured six of those years enslaved in the gold mines of Kolyma, one of the coldest and most hostile places on earth. While he was awaiting sentencing, one of his short stories was published in a journal called Literary Contemporary. He was released in 1951, and from 1954 to 1973 he worked on Kolyma Stories, a masterpiece of Soviet dissident writing that has been newly translated into English and published by New York Review Books Classics this week. Shalamov claimed not to have learned anything in Kolyma, except how to wheel a loaded barrow. But one of his fragmentary writings, dated 1961, tells us more.

1. The extreme fragility of human culture, civilization. A man becomes a beast in three weeks, given heavy labor, cold, hunger, and beatings.

2. The main means for depraving the soul is the cold. Presumably in Central Asian camps people held out longer, for it was warmer there.

3. I realized that friendship, comradeship, would never arise in really difficult, life-threatening conditions. Friendship arises in difficult but bearable conditions (in the hospital, but not at the pit face).

4. I realized that the feeling a man preserves longest is anger. There is only enough flesh on a hungry man for anger: everything else leaves him indifferent.

5. I realized that Stalin’s “victories” were due to his killing the innocent—an organization a tenth the size would have swept Stalin away in two days.

6. I realized that humans were human because they were physically stronger and clung to life more than any other animal: no horse can survive work in the Far North.

7. I saw that the only group of people able to preserve a minimum of humanity in conditions of starvation and abuse were the religious believers, the sectarians (almost all of them), and most priests.

8. Party workers and the military are the first to fall apart and do so most easily.

9. I saw what a weighty argument for the intellectual is the most ordinary slap in the face.

10. Ordinary people distinguish their bosses by how hard their bosses hit them, how enthusiastically their bosses beat them.

11. Beatings are almost totally effective as an argument (method number three).

12. I discovered from experts the truth about how mysterious show trials are set up.

13. I understood why prisoners hear political news (arrests, et cetera) before the outside world does.

14. I found out that the prison (and camp) “grapevine” is never just a “grapevine.”

15. I realized that one can live on anger.

16. I realized that one can live on indifference.

17. I understood why people do not live on hope—there isn’t any hope. Nor can they survive by means of free will—what free will is there? They live by instinct, a feeling of self-preservation, on the same basis as a tree, a stone, an animal.

18. I am proud to have decided right at the beginning, in 1937, that I would never be a foreman if my freedom could lead to another man’s death, if my freedom had to serve the bosses by oppressing other people, prisoners like myself.

19. Both my physical and my spiritual strength turned out to be stronger than I thought in this great test, and I am proud that I never sold anyone, never sent anyone to their death or to another sentence, and never denounced anyone.

20. I am proud that I never wrote an official request until 1955.

21. I saw the so-called Beria amnesty where it took place, and it was a sight worth seeing.

22. I saw that women are more decent and self-sacrificing than men: in Kolyma there were no cases of a husband following his wife. But wives would come, many of them (Faina Rabinovich, Krivoshei’s wife).

23. I saw amazing northern families (free-contract workers and former prisoners) with letters “to legitimate husbands and wives,” et cetera.

24. I saw “the first Rockefellers,” the underworld millionaires. I heard their confessions.

25. I saw men doing penal servitude, as well as numerous people of “contingents” D, B, et cetera, “Berlag.”

26. I realized that you can achieve a great deal—time in the hospital, a transfer—but only by risking your life, taking beatings, enduring solitary confinement in ice.

27. I saw solitary confinement in ice, hacked out of a rock, and spent a night in it myself.

28. The passion for power, to be able to kill at will, is great—from top bosses to the rank-and-file guards (Seroshapka and similar men).

29. Russians’ uncontrollable urge to denounce and complain.

30. I discovered that the world should be divided not into good and bad people but into cowards and non-cowards. Ninety-five percent of cowards are capable of the vilest things, lethal things, at the mildest threat.

31. I am convinced that the camps—all of them—are a negative school; you can’t even spend an hour in one without being depraved. The camps never gave, and never could give, anyone anything positive. The camps act by depraving everyone, prisoners and free-contract workers alike.

32. Every province had its own camps, at every construction site. Millions, tens of millions of prisoners.

33. Repressions affected not just the top layer but every layer of society—in any village, at any factory, in any family there were either relatives or friends who were repressed.

34. I consider the best period of my life the months I spent in a cell in Butyrki prison, where I managed to strengthen the spirit of the weak, and where everyone spoke freely.

35. I learned to “plan” my life one day ahead, no more.

36. I realized that the thieves were not human.

37. I realized that there were no criminals in the camps, that the people next to you (and who would be next to you tomorrow) were within the boundaries of the law and had not trespassed them.

38. I realized what a terrible thing is the self-esteem of a boy or a youth: it’s better to steal than to ask. That self-esteem and boastfulness are what make boys sink to the bottom.

39. In my life women have not played a major part: the camp is the reason.

40. Knowing people is useless, for I am unable to change my attitude toward any scoundrel.

41. The people whom everyone—guards, fellow prisoners—hates are the last in the ranks, those who lag behind, those who are sick, weak, those who can’t run when the temperature is below zero.

42. I understood what power is and what a man with a rifle is.

43. I understood that the scales had been displaced and that this displacement was what was most typical of the camps.

44. I understood that moving from the condition of a prisoner to the condition of a free man is very difficult, almost impossible without a long period of amortization.

45. I understood that a writer has to be a foreigner in the questions he is dealing with, and if he knows his material well, he will write in such a way that nobody will understand him.

From Kolyma Stories by Varlam Shalamov. Translation and introduction copyright © 2018 by Donald Rayfield. Courtesy of NYRB Classics

Beware the ‘Buyback Economy’


