
Evolved Policy Gradients


We're releasing an experimental metalearning approach called Evolved Policy Gradients, a method that evolves the loss function of learning agents, which can enable fast training on novel tasks. Agents trained with EPG can succeed at basic tasks at test time that were outside their training regime, like learning to navigate to an object on a different side of the room from where it was placed during training.

EPG trains agents to have a prior notion of what constitutes making progress on a novel task. Rather than encoding prior knowledge through a learned policy network, EPG encodes it as a learned loss function. Agents are then able to use this loss function, defined as a temporal-convolutional neural network, to learn quickly on a novel task. We've shown that EPG can generalize to out-of-distribution test-time tasks, exhibiting behavior qualitatively different from other popular metalearning algorithms. In tests, we've also found that EPG can train agents faster than PPO, an off-the-shelf policy gradient method. EPG is related to previous work on evolving reward functions for RL agents, but generalizes this idea to evolving a complete loss function, which means that the loss function has to effectively learn an RL algorithm internally.

The above video demonstrates how our method (left) teaches a robot how to reach various targets without resetting the environment, in comparison with PPO (right). Top-left text specifies the number of learning updates so far. Note that this video demonstrates the complete learning process in real-time.

The intuition behind EPG comes from something we are all familiar with: trying to pick up a new skill and experiencing the alternating frustration and joy involved in that process. Suppose you are just starting out learning to play the violin. Even without instruction, you will immediately have a feel for what to try, and, listening to the sounds you produce, you will have a sense of whether or not you are making progress – that's because you effectively have access to very well shaped internal reward functions, derived from prior experience on other motor tasks, and through the course of biological evolution. In contrast, most reinforcement learning (RL) agents approach each new task without using prior knowledge. Instead they rely entirely on external reward signals to guide their initial behavior. Coming from such a blank slate, it is no surprise that current RL agents take far longer than humans to learn simple skills. EPG takes a step toward agents that are not blank slates but instead know what it means to make progress on a new task, by having experienced making progress on similar tasks in the past.

EPG consists of two optimization loops. In the inner loop, an agent learns, from scratch, to solve a particular task sampled from a family of tasks. The family of tasks might be "move gripper to target location [x, y]" and one particular task in this family could be "move gripper to position [50, 100]". The inner loop uses stochastic gradient descent (SGD) to optimize the agent’s policy against a loss function proposed by the outer loop. The outer loop evaluates the returns achieved after inner-loop learning and adjusts the parameters of the loss function, using Evolution Strategies (ES), to propose a new loss that will lead to higher returns.
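To make the two-loop structure concrete, here is a minimal, self-contained sketch in Python. It is not OpenAI's released implementation: the 1-D task family, the linear policy, and the tiny parametrized loss (standing in for the paper's temporal-convolutional loss network) are illustrative assumptions. Only the overall structure mirrors the description above: an ES outer loop over loss parameters wrapping a gradient-descent inner loop that is scored by the final true return.

# Minimal structural sketch of Evolved Policy Gradients (EPG), not the released code.
# Hypothetical setup: a 1-D "move to target" task family, a linear policy, and a
# small parametrized loss instead of the paper's temporal-convolutional loss network.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # One task from the family "move to position x".
    return rng.uniform(-1.0, 1.0)

def rollout(policy_w, target, steps=16):
    # Tiny environment: the state is the current position, the action moves it.
    pos, states, actions, rewards = 0.0, [], [], []
    for _ in range(steps):
        a = policy_w[0] * pos + policy_w[1]          # linear policy
        pos += 0.1 * a
        states.append(pos); actions.append(a)
        rewards.append(-abs(pos - target))           # dense task reward
    return np.array(states), np.array(actions), np.array(rewards)

def evolved_loss(loss_theta, states, actions, rewards):
    # Learned loss: a weighted combination of simple trajectory statistics.
    feats = np.array([np.mean(actions**2), np.mean(states**2), -np.mean(rewards)])
    return float(loss_theta @ feats)

def inner_loop(loss_theta, target, lr=0.05, iters=20):
    # Gradient descent (finite-difference gradients, for a dependency-free sketch)
    # on the policy against the *evolved* loss, never the true reward.
    w = np.zeros(2)
    for _ in range(iters):
        grad = np.zeros_like(w)
        for i in range(len(w)):
            for sign in (+1, -1):
                w_pert = w.copy(); w_pert[i] += sign * 1e-3
                grad[i] += sign * evolved_loss(loss_theta, *rollout(w_pert, target))
            grad[i] /= 2e-3
        w -= lr * grad
    return np.sum(rollout(w, target)[2])             # final true return

# Outer loop: Evolution Strategies on the loss parameters, scored by the
# true return achieved *after* inner-loop learning.
theta, sigma, pop = np.ones(3), 0.1, 16
for generation in range(10):
    eps = rng.normal(size=(pop, theta.size))
    returns = np.array([inner_loop(theta + sigma * e, sample_task()) for e in eps])
    theta += 0.01 / (pop * sigma) * eps.T @ (returns - returns.mean())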

Having a learned loss offers several advantages compared to current RL methods: using ES to evolve the loss function allows us to optimize the true objective (final trained policy performance) rather than short-term returns, and EPG improves on standard RL algorithms by allowing the loss function to be adaptive to the environment and agent history.

The above video demonstrates how our method (left) teaches a robot how to hop in the backwards direction, in comparison with PPO (right). EPG results in exploratory behavior where the agent first tries out walking forwards before realizing that backwards gives higher rewards. Top-left text specifies the number of learning updates so far. Note that this video demonstrates the complete learning process in real-time.

There has been a flurry of recent work on metalearning policies, and it's worth asking why learn a loss function as opposed to directly learning a policy? Learning recurrent policies tends to overfit the task at hand, while learning policy initializations has limited expressivity when it comes to exploration. Our motivation is that we expect loss functions to be the kind of object that may generalize very well across substantially different tasks. This is certainly true of hand-engineered loss functions: a well-designed RL loss function, such as that in PPO, can be very generically applicable, finding use in problems ranging from playing Atari games to controlling robots.

To test the generalization ability of EPG, we conducted a simple experiment. We evolved the EPG loss to be effective at getting "ants" to walk to randomly located targets on the right half of an arena. Then, we froze the loss, and gave the ants a new target, this time on the left half of the arena. Surprisingly, the ants learned to walk to the left! Here is how their learning curves looked (red lines on graphs):

This result is exciting to us because it demonstrates generalization to a task outside the training distribution. This kind of generalization can be quite hard to achieve. We compared EPG to an alternative metalearning algorithm, called RL2, which tries to directly learn a policy that can adapt to novel tasks. In our experiment, RL2 was indeed successful at getting agents to walk to targets on the right half of the screen. However, when given a test time target on the left half of the screen, it qualitatively failed, and just kept walking to the right. In a sense, it "overfit" to the set of tasks on which it was trained (i.e. walking to the right).

The above video demonstrates how our method (left) teaches an ant robot how to walk and reach a target (green circle) from scratch, in comparison with RL2 (right). Top-left text specifies the number of learning updates so far. Note that this video demonstrates the complete learning process at 3X real-time speed.

As with all metalearning approaches, our method still has many limitations. Right now, we can train an EPG loss to be effective for one small family of tasks at a time, e.g., getting an ant to walk left and right. However, the EPG loss for this family of tasks is unlikely to be at all effective on a wildly different kind of task, like playing Space Invaders. In contrast, standard RL losses do have this level of generality -- the same loss function can be used to learn a huge variety of skills. EPG gains on performance by losing on generality. There is a long road ahead toward metalearning methods that both outperform standard RL methods and have the same level of generality.


The End of My VC Career


Stefan Glaenzer, the prominent European VC and former chairman of Last.fm and founder of Ricardo.de, has quit his role as Partner at Passion Capital. He co-founded the London-based early-stage firm seven years ago with partners Eileen Burbidge and Robert Dighero.

The decision to resign, which the firm’s staff and Limited Partners were informed of last Thursday, is linked to Glaenzer’s arrest and subsequent conviction in 2012 when he pleaded guilty to sexually assaulting a woman on the London Underground Tube network. He claimed to be high on cannabis at the time and was given a suspended prison sentence and a fine, banned from using the Tube for 18 months, and placed on the U.K.’s sex offender registry.

Passion Capital is in the midst of fundraising and Glaenzer’s conviction has become an obstacle to some LPs backing a third fund. This contrasts with 2015 when the London VC firm successfully raised £45 million for fund two, including £17.5 million coming from the U.K. taxpayer via the British Business Bank. In 2012, following Glaenzer’s sexual assault conviction, existing LPs and Passion Capital partners also unanimously voted that he should remain in his role at the firm.

In an interview offered to TechCrunch — which at first I was hesitant to accept until it became clear there was a legitimate news angle — I sat down with Glaenzer to discuss the events that led to his resignation and put questions to him that have persisted over the years within the London investment and technology startup community and have become ever louder following high-profile cases of alleged sexual harassment in Silicon Valley and the wider #metoo movement.

They include why he wasn’t fired from his job at the time of the sexual assault conviction, why he didn’t resign earlier, and how Passion Capital and its investors dealt internally with the incident. I also wanted to understand what changed in 2018. The only red line was that he didn’t want to talk about how it impacted his private life and family.

German-born Glaenzer — a multimillionaire twice over through the sale of Ricardo.de to QXL in 2000 and Last.fm to CBS in 2007 — says Thursday 16th of November 2017 was the day he “instinctively knew” his VC career was over. He and Passion’s two other partners, Burbidge and Dighero, were meeting with an institutional investor who had been lined up as a cornerstone LP in fund three. Quite far along in the due diligence process and with the outcome looking positive, the conference room had been booked for 2.5 hours in preparation for an intense final round of negotiations. Thirty minutes in, however, the meeting was over. The operational team had passed the deal to the investment firm’s compliance department and Glaenzer had turned from key person to “headline risk”.

“It was clear, we banked on them as our cornerstone, everything was positive, and after four or five months they said no and we knew we needed to restart,” he says. “I knew that this chapter was over”.

What that “headline risk” is was never explicitly stated, says Glaenzer, who didn’t think to ask, but it almost certainly means the reputational damage that could be inflicted on any investor associated with Passion Capital if Glaenzer remained involved and his conviction were to resurface in the media. Optics matter more than ever in 2018.

That is precisely what happened two months prior to the investor meeting when Bloomberg news ran a story asking: ‘Will Britain Keep Investing in a Sex Offender’s Venture Fund?’. The article placed Glaenzer’s conviction in the context of a wider debate about the role LPs should play in policing bad behaviour by VCs, even if his conviction was for something that happened outside of work.

“In the end the institution made the right call,” says Glaenzer. “I think, luckily, in some societies we have made sure that compliance has a big function. Over the last ten years this has become more ingrained”.

But if it was the right call not to invest in Glaenzer in 2018, shouldn’t the same call have been made in either 2012 or, later, in 2015? He says the sentiment has changed a lot since then and that, more broadly, the ecosystem is “stunningly different” today.

“I think all participants agreed on the view there’s a difference between what happens in private and what happens in business.

“There wasn’t this thinking or discussion about it. It was just, with these conditions — they were concerned about drug use or another incident, and we clearly defined consequences for this — people accepted”.

(Glaenzer declined to specify what those conditions were as he says they were private matters, although one was that he undergo regular drug testing for two years.)

He says that everybody legally involved in Passion Capital’s first fund voted that he should remain a Partner. “There was not a single against vote,” he says.

But why didn’t he just resign at the time of the incident?

“In 2002, when I was on my break doing nothing, I watched 62 out of the 64 games in the World Cup in Japan and South Korea. Germany had a terrible team, it was a disaster, other than [goalkeeper] Oli Kahn, who brought us into the final. And this man made a mistake in the 66th minute and we lost the game. And we or rather he didn’t win the trophy. He said after the game, ‘and continue’. You have to accept that you made a mistake and you have to take the consequences. Don’t run away. And that is my fundamental belief”.

I suggest that by remaining in his position he took very few consequences, and that in almost any other walk of life a person with less privilege would automatically lose their job after being convicted of sexual assault.

“I’m struggling to find a correlation between having done a private mistake, where we all agreed this was not business related, this was in no way using power or money,” says Glaenzer. “It was a personal mistake which I on the spot acknowledged and accepted and apologised [for]. And I said from day one to my partners and the CFE [now the BBB], it is not my decision, I want to carry on doing this, but I will of course accept any decision. If people have a different opinion, I do understand”.

Glaenzer is almost certain that Passion Capital would not have survived had he quit in 2012 and says that doing so would have let his partners and investors down. With two multimillion dollar exits behind him and regarded as a dot-com poster child back in Germany, he was indisputably the biggest draw for Passion Capital’s original LPs.

“Do you run away or do you accept… and continue what you promised to your partners and to your investors? I went to families, I went to people and said, you know what, this is what I want to do, there’s going to be money, we are aiming for [and] have our own expectations of what sort of return a small venture fund should deliver, and then run away? No. I can understand why people think differently, of course. But I personally, in my value system, I can not.”

That’s not to say there weren’t business consequences for Passion Capital and for Glaenzer’s ability to carry out his job, consequences he says he “100 percent” underestimated. “I was not even thinking about business consequences. It was more about the private…” he says.

The fund was suspended for five weeks after the incident, as per the LP agreement, so that a decision about his future could be voted on. His conviction and details of the sexual assault were widely reported in the British media and he says the perception of him understandably changed amongst some people in the tech industry. This resulted in a halt to public appearances and networking and he says he initially saw a 70-80 percent reduction in unsolicited pitches. Passion also lost at least one deal due to Glaenzer’s conviction.

“With every deal there was this awkward situation,” he says. “We always disclosed this to our founders before we signed the deal, and that is, on many levels, a very awkward situation. For founders and [for] us”.

From the outside, at least, I say that it feels as though Passion Capital quickly underwent a re-branding post-incident that saw partner Burbidge replace Glaenzer as the more visible face of the VC firm, which otherwise has always made a virtue of its openness, pushing initiatives like its ‘Plain English Term Sheet’ and making its investment terms public.

“It was a 180 degree change,” says Glaenzer. A change, nonetheless, that he says would have happened over time anyway.

“We used our respective strengths. The respective strength of Eileen [Burbidge] has been [there] from day one, even though I was probably doing more of the visible media. She was organising every single thing; she should become the face of the company… It was very, very clear because she is way more talented than I will ever be. It was known”.

So what’s next for Glaenzer? He gives little away but says he has spent the last few months quietly working on a couple of MVPs, including one idea he has fallen in love with. “My fundamental goal is I don’t want to have my kids being solely educated from American media and digital platforms,” he says.

More than anything Glaenzer says he is ready to embrace change: admitting that he had become increasingly unhappy working in early-stage venture and was now very clearly a burden on Passion, he doesn’t dispute that a simple version of this story is that the events of 2012 have finally caught up with him.

On several occasions during the interview Glaenzer quotes a passage from the poem “Steps” by the German poet Hermann Hesse, which he’s handwritten across several sheets of plain white paper, revealing each line one page at a time.

He says he used the same poem to explain his resignation to members of the Passion team last week and also when he quit Ricardo.de in 2000.

“‘A magic dwells in each beginning, protecting us, telling us how to live’,” he reads. “It’s a fundamental belief that this magic is in new beginnings.”

Another Scam ICO? Savedroid Founder Exits with $50M to Chill on a Beach


Savedroid's owner seems to be chilling on a beach after raising funds, shutting down the ICO site and taking off.


In what is either a joke in very bad taste or another ICO exit scam, the founder of the Savedroid ICO tweeted ‘Over and out’, with a picture of himself at the airport and then chilling on a beach.

It is believed that the ICO raised around 40 million euros, or $50 million, via the token sale, on the promise of building a smart, AI-managed application that would automatically invest user funds into profitable ICO portfolios. There were also claims of a cryptocurrency credit card, but it seems all of that is gone now, with the official site displaying a South Park meme.

If this is indeed an exit scam, what is alarming is that the founder, Dr. Yassin Hankir, had made several public appearances and participated in events, giving talks and answering questions.

There do seem to be some suspicious pointers: first, the lack of following and engagement on the ICO’s subreddit, and then posts such as this one from the savedroid_support account:

“Smart people got their SVD tokens also bought more!! and they are waiting for it to height, because they search and ask the cryptocurrency geniuses While the others complain this is the link of savedroid support : https://savedroid.support this is the link to buy SVD tokens : https://savedroid.support/product/buy/

Visiting the link above, the token buying page is still online, but the content on it is hardly professional.

However, these are naturally observations in hindsight, and given how the majority of the crypto space is big on speculative investments, it is rather easy for scam ICOs to make a killing, and Savedroid’s founder seems to have made it big.

Improved fraud prevention with Radar 2.0


We launched Radar in 2016 to help protect our users from fraud. We’ve blocked billions of dollars in fraud across the Stripe network for companies of all sizes—from startups like Slice and WeSwap to larger companies like Fitbit and OpenTable. Since launch, we’ve continuously invested in our suite of fraud prevention tools, and today, we’re excited to launch the result of those efforts.

The next generation of machine learning

As of today, we’ve rebuilt almost every component of our fraud detection stack to dramatically improve performance. In early testing, the upgraded machine learning models helped reduce fraud by over 25% compared to previous models, without increasing the false positive rate.

  • Hundreds of new signals for improved accuracy: We’ve added new signals to better distinguish fraudsters from legitimate customers, including certain data from buyer patterns that are highly predictive of fraud. Some signals now use new, high-throughput data infrastructure to process hundreds of billions of historical events. (Even if a card is new to your business, there’s an 89% chance it’s been seen before on the Stripe network.)
  • Nightly model training: Fraud evolves and changes rapidly. Radar can now adapt even faster by training and evaluating new machine learning models daily.
  • Algorithmic changes for better recall and precision: We’ve optimized our machine learning algorithms in hundreds of ways—from boosting the performance of our decision trees to tweaking the minutiae of how we handle class imbalance, missing values, and more.
  • Custom models for your business to maximize performance: Radar is constantly evaluating how to balance patterns from across the Stripe network with patterns that are unique to your business. Radar now trains and evaluates multiple models daily and determines which one achieves the best performance for you.

“Radar cut our fraud rates by over 70% without any configuration, saving our pizzerias thousands of dollars every month and allowing us to focus on delivering the best local pizza experience possible.”
— Finn Borge, Product Manager at Slice

Introducing Radar for Fraud Teams

While most Stripe businesses can rely entirely on Radar’s automated fraud protection, it makes sense for some companies with more fraud risk to invest more deeply. We’ve now made that as easy and powerful as possible on Stripe. We’ve honed features to be even more useful to fraud professionals, built new features, and packaged them all together in a new bundle called Radar for Fraud Teams.

Update: Optimized reviews to spot fraud faster

When reviewing payments, we now show relevant info for faster and more accurate reviews. You can see data related to the device used, compare the geolocated IP address and the credit card address, or see whether the purchase pattern is anomalous compared to typical legitimate payments for your business.

Update: Related payments for more accurate reviews

We now help you evaluate payments holistically rather than in isolation by surfacing previous related payments your business has processed that match certain attributes like email address, IP address, or card number.

Update: Custom rules with real-time feedback

When you create a rule, Stripe will use your historical data to show how that rule would have impacted real transactions your business has seen. We’ve added dozens of new properties you can use in rules to give your teams even more fine-grained levers.
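As an illustration of how such a backtest works in principle (this is not Stripe's rule engine, rule syntax, or API; the charge fields and the example rule below are hypothetical), a rule can be treated as a predicate over historical charges and scored by how many fraudulent and legitimate payments it would have blocked:

# Hypothetical backtest of a custom fraud rule against historical charges.
# Illustrative sketch only, not Stripe's Radar rule engine or API.
from dataclasses import dataclass

@dataclass
class Charge:
    amount: int          # in cents
    country: str         # card country
    risk_score: int      # 0-100, as surfaced by the fraud system
    was_fraud: bool      # known outcome from disputes/refunds

def rule(charge: Charge) -> bool:
    # Example rule: block high-value charges that also carry an elevated score.
    return charge.amount > 50_000 and charge.risk_score > 65

def backtest(charges):
    blocked = [c for c in charges if rule(c)]
    caught = sum(c.was_fraud for c in blocked)
    return {"would_block": len(blocked),
            "fraud_caught": caught,
            "good_payments_lost": len(blocked) - caught}

history = [Charge(80_000, "US", 80, True), Charge(60_000, "DE", 70, False),
           Charge(20_000, "US", 90, True), Charge(75_000, "GB", 40, False)]
print(backtest(history))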

New: Custom risk thresholds

Radar for Fraud Teams surfaces a numerical risk score (0–100) for every payment. Depending on your business’s appetite for fraud, you can tweak the threshold at which to block payments to maximize revenue.
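Picking that threshold amounts to a trade-off between fraud losses avoided and legitimate revenue turned away. The sketch below, with made-up numbers and a made-up profit margin, shows one way to frame the sweep; it is not how Radar computes or applies its score:

# Sweep candidate block thresholds over historical (risk_score, amount, was_fraud)
# records and pick the one with the best net revenue. All numbers are invented.
def net_revenue(history, threshold, margin=0.05):
    total = 0.0
    for score, amount, was_fraud in history:
        if score >= threshold:
            continue                      # blocked: no revenue, but no fraud loss
        total += -amount if was_fraud else margin * amount
    return total

history = [(85, 120.0, True), (30, 40.0, False), (70, 60.0, False),
           (95, 200.0, True), (55, 80.0, False)]
best = max(range(0, 101, 5), key=lambda t: net_revenue(history, t))
print("best threshold:", best)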

New: Block and allow lists

Fraud teams now have an easy way to create and maintain lists of attributes—card numbers, emails, IP addresses, and more—that you want to consistently block or allow.

New: Rich analytics on fraud performance

Get a snapshot that helps focus your fraud team. The new overview highlights dispute trends, the effectiveness of reviewing flagged payments, and the impact of rules you’ve written for your business.

Radar for Fraud Teams has already made fraud management easier and more effective for beta users at Watsi, Fitbit, Restocks, Patreon, and more.

“The related payments feature helped our fraud team quickly spot a nuanced fraud ring and avoid significant potential loss. It’s been a great asset in our fraud detection arsenal.”
— Alison Cleggett, Head of Risk and Compliance at WeSwap

Get early access

We’re gradually rolling out the upgraded machine learning models to all users over the next few weeks. If you’d like, you can also activate the models today. Activating early requires that your Stripe integration follows a few basic best practices. Most users already follow these best practices and won’t need to make any changes to get early access—you can check your integration by logging in to the Dashboard.

Radar for Fraud Teams is also available starting today as an optional add-on. If you’re already using any of the features included in Radar for Fraud Teams (like rules or reviews), there are no changes to your pricing—the machine learning updates and all new Radar for Fraud Teams features are included at no extra cost for your account.

We’re constantly updating and improving Radar to help Stripe businesses fight fraud. If you have any questions or feedback, we’d love to hear from you!

Fight fraud with the strength of the Stripe network. Explore Radar

Design and Implementation of a 256-Core BrainFuck Computer [pdf]

Facebook to ask everyone to accept being tracked so they can keep using it


Facebook is going to ask all of its users whether they want to be tracked as they use Facebook.

But the company won’t give them a way to refuse tracking outright, it said. Instead, it will ask all of its users to explicitly opt in to being tracked so that they can keep using Facebook, and tell them to limit that collection using its settings if they don't want it.

The site has warned, though, that it will never be possible to turn off all ad tracking, just as it isn't possible on many other websites. People will always have the option to leave entirely, a senior member of staff said.

New privacy laws come into effect from the European Union this May, which apply to all companies who collect data on people within Europe. Many parts of those new regulations seem to pose challenges for Facebook’s business, including new rules about what information can be harvested about users.

The company said that it will now give people the chance to opt in to targeted marketing and to having their data collected for ads. Those who don't want to do that will be able to turn some of the settings off, but the site has made very clear that this will not apply to all of them.

The EU law known as the General Data Protection Regulation (GDPR), which takes effect next month, promises the biggest shakeup in online privacy since the birth of the internet. Companies face fines if they collect or use personal information without permission.

Facebook deputy chief privacy officer Rob Sherman said the social network would begin seeking Europeans’ permission this week for a variety of ways Facebook uses their data, but he said that opting out of targeted marketing altogether would not be possible.

“Facebook is an advertising-supported service,” Sherman said in a briefing with reporters at Facebook’s headquarters.

Facebook users will be able to limit the kinds of data that advertisers use to target their pitches, he added, but “all ads on Facebook are targeted to some extent, and that’s true for offline advertising, as well”.

Facebook, the world’s largest social media network, will use what are known as “permission screens” – pages filled with text that require pressing a button to advance – to notify and obtain approval.

The screens will show up on the Facebook website and smartphone app in Europe this week and globally in the coming months, Sherman said.

The screens will not give Facebook users the option to hit “decline.” Instead, they will guide users to either “accept and continue” or “manage data setting,” according to copies the company showed reporters on Tuesday.

“People can choose to not be on Facebook if they want,” Sherman said.

Regulators, investors and privacy advocates are closely watching how Facebook plans to comply with the EU law, not only because Facebook has been embroiled in a privacy scandal but also because other companies may follow its lead in trying to limit the impact of opt-outs.

Last month, Facebook disclosed that the personal information of millions of users, mostly in the United States, had wrongly ended up in the hands of political consultancy Cambridge Analytica, leading to US congressional hearings and worldwide scrutiny of Facebook’s commitment to privacy.

Facebook chief financial officer David Wehner warned in February the company could see a drop-off in usage due to the GDPR.

Additional reporting by agencies


New tools for open source maintainers


Whether you want to make repository conversations more productive or keep your code safe from accidental pull requests, our new maintainer tools are for you. Minimized comments, retired namespaces for popular projects, and new pull request requirements are just a few of the ways we’re making it easier for maintainers to grow healthy open source communities on GitHub. Here’s some more information about how they work:

Developers use comments in issues and pull requests to have conversations about the software they’re building on GitHub, but not all of the comments are equally constructive. Sometimes contributors share comments that are off-topic, misleading, or offensive.

While maintainers can edit or delete disruptive comments, they may not feel comfortable doing this, and it doesn’t allow the comment author to learn from their mistake. As part of our tiered moderation tools available to project owners, maintainers can now click in the top-right corner to minimize and hide comments—in addition to editing, deleting, or reporting them.


Minimized comments will be hidden by default along with a reason for why they were minimized, giving more space to the comments that advance the conversation. Developers who view the project can choose to temporarily expand minimized comments by clicking “Show comment”.

Learn more about minimized comments

Many package managers allow developers to identify packages by the maintainer’s login and the project name, for example: Microsoft/TypeScript or swagger-api/swagger-codegen. This is an efficient way to describe a dependency, but sometimes maintainers delete or rename their accounts, allowing developers to intentionally or unknowingly create projects with the same name.

To prevent developers from pulling down potentially unsafe packages, we now retire the namespace of any open source project that had more than 100 clones in the week leading up to the owner’s account being renamed or deleted. Developers will still be able to sign up using the login of renamed or deleted accounts, but they will not be able to create repositories with the names of retired namespaces.
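As a rough sketch of that rule (the function and inputs below are hypothetical, not GitHub's internal model), retirement can be thought of as a simple predicate over the previous week's clone count:

# Sketch of the namespace-retirement rule described above, with hypothetical inputs:
# the clone count for the 7 days before an owner account is renamed or deleted.
RETIREMENT_CLONE_THRESHOLD = 100

def should_retire(namespace: str, clones_last_week: int) -> bool:
    # True means new repositories may not reuse this owner/name pair.
    return clones_last_week > RETIREMENT_CLONE_THRESHOLD

print(should_retire("swagger-api/swagger-codegen", clones_last_week=3500))  # True
print(should_retire("someuser/toy-project", clones_last_week=12))           # False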

Accidental and “drive-through” pull request prevention

Popular open source projects receive lots of pull requests. While most of them are constructive, occasionally project owners receive a pull request from a collaborator who suggests changes from a stale branch or another collaborator’s fork.

Since the author can’t always respond to feedback on the proposed changes, these pull requests create noise for maintainers and do little to push the project forward.

To minimize noise, we no longer allow pull requests from contributors unaffiliated with the project or the changes proposed. Specifically, pull requests will be restricted if:

  • There’s no explanation of changes in the body of the commit, and
  • The author is not a bot account, and
  • The author is not the owner or a member of the owning organization, and
  • The author doesn’t have push access to the head and the source branches

This should not affect automated workflows, private repositories, or repositories on GitHub Enterprise.
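A minimal sketch of how the four conditions listed above combine; the field names are illustrative, not GitHub's internal data model:

# Hypothetical sketch of the restriction logic described above.
from dataclasses import dataclass

@dataclass
class PullRequest:
    body_explains_changes: bool        # non-empty explanation of the changes
    author_is_bot: bool
    author_is_owner_or_org_member: bool
    author_has_push_access: bool       # to both the head and source branches

def is_restricted(pr: PullRequest) -> bool:
    # All four conditions from the list above must hold for the PR to be restricted.
    return (not pr.body_explains_changes
            and not pr.author_is_bot
            and not pr.author_is_owner_or_org_member
            and not pr.author_has_push_access)

drive_by = PullRequest(False, False, False, False)
maintainer = PullRequest(False, False, True, True)
print(is_restricted(drive_by), is_restricted(maintainer))  # True False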

Learn more

If you have questions about how these tools make it easier for you to grow welcoming communities around your project, check out this guide on building open source communities or get in touch with us.

A look at VDO, the new Linux compression layer


Did you ever feel like you have too much storage?

Probably not - there is no such thing as ‘too much storage’. For a long time, we have used userland tools like gzip and rar for compression. Now with Virtual Data Optimizer (VDO), all required pieces for a transparent compression/deduplication layer are available in the just-released Red Hat Enterprise Linux 7.5. With this technology, it is possible to trade CPU/RAM resources for disk space. VDO becoming available is one of the results of Red Hat acquiring Permabit Technology Corporation in 2017. The code is available in the source RPMs, and upstream projects are getting established.


 

Regarding use cases, VDO can, for example, be used under local filesystems, iSCSI or Ceph. File servers can use it as a base for local filesystems and hand out NFS, CIFS or Gluster services. Remember dozens of Linux systems sharing read-only NFS root file systems to save storage? You could now give each of these systems its own individual image via iSCSI, stored on a common file system with a VDO backend, and have VDO deduplicate and compress the common parts of these images.

When reading about VDO appearing in the Red Hat Enterprise Linux 7.5 beta, I wondered about the following:

  • How to set up and configure VDO?

  • How is VDO influencing read/write performance?

  • How much storage space can I save for my use cases?

Let’s find out!

 

How to set up and configure VDO?

The authoritative resource regarding VDO is the Storage Administration Guide. With Red Hat Enterprise Linux 7.5 repos available, it can be installed using:

[root@rhel7u5a ~]# yum install vdo kmod-kvdo

 

When configuring VDO, various things have to be taken into consideration. The smallest system I used here is a KVM guest with 2GB of RAM. While sufficient for first tests, for production one should obey the sizing recommendations in the Storage Administration Guide. They depend on the size of the storage below your VDO devices. Attention should also be paid to the order of layers: placing VDO below encryption, for example, makes no sense.

Assuming a disk available as /dev/sdc, we can use the following command to create a VDO device on top. For a 10GB disk, depending on the workload, one could decide to have VDO offer 100GB to the upper layers:

[root@rhel7u5a ~]# vdo create --name=vdoasync --device=/dev/sdc \
    --vdoLogicalSize=100G --writePolicy=async
Creating VDO vdoasync
Starting VDO vdoasync
Starting compression on VDO vdoasync
VDO instance 0 volume is ready at /dev/mapper/vdoasync
[root@rhel7u5a ~]#

After creating a filesystem using ‘mkfs.xfs -K /dev/mapper/vdoasync’, the device can be mounted to /mnt/vdoasync.

VDO supports three write modes.

  • The ‘sync’ mode, where writes to the VDO device are acknowledged when the underlying storage has written the data permanently.

  • The ‘async’ mode, where writes are acknowledged before being written to persistent storage. In this mode, VDO still honours flush requests from the layers above, so even in async mode it can safely deal with your data - equivalent to other devices with volatile write-back caches. This is the right mode if your storage itself reports writes as ‘done’ when they are not yet guaranteed to be persistent.

  • The ‘auto’ mode, now the default, which selects async or sync write policy based on the capabilities of the underlying storage.

Sync mode commits the data to media before trying to either identify duplicates or pack the compressed version with other compressed blocks on media. This means sync mode will always be slower on sequential writes, even when reads are much faster than writes. However, sync mode introduces less latency, so random I/O can be much faster. Sync mode should never be used in production unless the underlying storage is protected against power loss (typically using batteries or capacitors) and designed for this use case.

Using ‘auto’ to select the mode is recommended for users. The Storage Admin Guide has further details regarding the write modes.

 

How is VDO influencing read/write performance?

So if we put the VDO layer between a file system and a block device, how will it influence the I/O performance for the filesystem?

The following data was collected using spinning disks as backend, on a system with 32 GB RAM and 12 cores at 2.4GHz.

Column ‘deploy to file system’ shows the time it took to copy a directory with ~5GB of data from RAM (/dev/shm) to the filesystem. Column ‘copy on file system’ shows the time required to make a single copy of the 5GB directory which was deployed in the first step. The time was averaged over multiple runs, always after removing the data and emptying the file system caches. Publicly available texts from Project Gutenberg, which compress quite nicely, were used as data.

filesystem backend                   deploy to file system   copy on file system
XFS on top of a normal LVM volume    28 sec                  35 sec
XFS on VDO device, async mode        55 sec                  58 sec
XFS on VDO device, sync mode         71 sec                  92 sec

Deployment to VDO backend is slower than to plain LVM backend.

I was initially wondering whether copies on top of a VDO-backed volume would be faster than copies on top of an LVM volume - they are not. When data on a VDO-backed file system is copied, for example with ‘cp’, the copy is by definition a duplicate. After VDO identifies it as a candidate for duplicate data, it does a read comparison to be sure.

When using an SSD or NVDIMM as backend, extra tuning should be done. VDO is designed with many parallel I/O requests in mind. In my tests here I did not optimize for parallelization; I just used single instances of ‘rsync’, ‘tar’ or ‘cp’. Even for these, VDO can break up an application’s requests to write big files into many small requests - if the underlying media is, for example, a high-speed NVMe SSD, this can help performance.

Something good to know for testing: I noticed GNU tar reading data incredibly fast when writing to /dev/null. Turns out that ‘tar’ is detecting this situation and not reading at all. So ‘tar cf /dev/null /usr’ is not doing what you probably expect, but ‘tar cf - /usr|cat >/dev/null’ is.

 

How much storage space can I save for my use cases?

This depends of course on how compressible your data is - creating a test setup and storing your data directly on VDO is a good way to find out. You can also decide, to some degree, how many CPU/memory resources you want to invest and tune for your use case: deduplication and compression can be enabled separately for VDO volumes.

VDO reserves 3-4GB of space for itself: using a 30GB block device as the VDO backend, you can use around 26GB to store your real data, that is, not what the filesystem above reports, but what VDO actually needs to store after deduplication/compression. The high overhead here is the result of the relatively small size of the device. On larger (multi-TB) devices, VDO is designed to incur no more than 2% overhead.

VDO devices can, and should, use thin provisioning: this way the system reports more available space to applications than the backend actually has, letting you benefit from compression and deduplication. Like LVM volumes, VDO devices can be grown on the fly after initial creation.

The best data for monitoring the actual fill state comes from ‘vdostats --verbose’, for example for a 30GB volume:

[root@rhel7u5a ~]# vdostats --verbose /dev/mapper/vdoasync |grep -B6 'saving percent'
  physical blocks                  : 7864320
  logical blocks                   : 78643200
  1K-blocks                        : 31457280
  1K-blocks used                   : 17023768
  1K-blocks available              : 14433512
  used percent                     : 54
  saving percent                   : 5

So this volume is 54% full. It is a thin-provisioned volume with a 30GB backend, i.e. 7,864,320 blocks of 4KB each. Ten times that is presented to the upper layers, so 300GB, which appears here as ‘logical blocks’.

As we are dealing with compression/deduplication here, having a 30GB backend does not mean that you can only store 30GB: if your data deduplicates and compresses nicely, you can store much more. The vdostats output above is from a volume with 13GB of data. After making a copy of that data on the volume, ‘df’ shows 26GB of data on the filesystem. Looking at ‘vdostats’ again, we can nicely see dedup in action:

[root@rhel7u5a ~]# vdostats --verbose /dev/mapper/vdoasync |grep -B6 'saving percent'
  physical blocks                  : 7864320
  logical blocks                   : 78643200
  1K-blocks                        : 31457280
  1K-blocks used                   : 17140524
  1K-blocks available              : 14316756
  used percent                     : 54
  saving percent                   : 52

Thanks to dedup, the copy of these 13GB occupies just ~120MB on the VDO layer!
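The arithmetic behind these numbers can be checked directly from the two vdostats outputs above (a quick sketch; the block size is VDO's 4KiB default):

# Check the dedup numbers from the two vdostats outputs above (units: 1K-blocks).
physical_kb = 7864320 * 4                 # 7,864,320 blocks of 4 KiB = 30 GiB backend
logical_kb  = 78643200 * 4                # 300 GiB presented to the upper layers
used_before, used_after = 17023768, 17140524

extra_kb = used_after - used_before       # space the copy of the 13GB actually consumed
print(f"backend size : {physical_kb / 2**20:.0f} GiB")
print(f"logical size : {logical_kb / 2**20:.0f} GiB")
print(f"copy consumed: {extra_kb / 1024:.0f} MiB on the VDO layer")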

Let’s copy data to VDO backed devices (separating sync and async mode) and to devices with a plain block device backend. After an initial copy of a directory with 5GB data to the mountpoint, three copies of that directory on the device were created.

[Graph: blocks used over time while copying data to a plain LVM backend and to VDO backends in sync and async mode]

From the earlier tests we know that the initial deployment to the plain backend finishes first, and this can also be seen here. The violet and green lines illustrate the blocks used by our data on the VDO devices; async and sync mode are similar in this respect. Both VDO volumes also start out reporting ‘0 bytes occupied’ via the ‘df’ command, but we see that right from the start some blocks are used internally. For the VDO backends, the initial copy takes ~50 seconds, then the copies on top of VDO start. Due to deduplication, almost no additional blocks are taken into use in that phase, but we see how ‘used bytes’, i.e. what the filesystem layer reports, keeps growing.

This graph can look different depending on the data being copied, RAM, CPU performance and the backend used.

 

Final thoughts

To date, the design focus for VDO has been to serve as primary storage. It is especially designed for high performance in environments with random I/O, such as VDO used as shared storage with multiple tasks doing I/O on top. Use cases like running multiple VMs on a single VDO volume in particular let VDO shine.

We have seen that VDO is quite easy to use. I have not done any tuning here, but I learned many things about benchmarking. ‘man vdo’ has details about many tuning options, like read caches and so on. The Block Deduplication and Compression with VDO video from DevConf 2018 is good; more details are also in the article Understanding the concepts behind Virtual Data Optimizer (VDO) in RHEL 7.5 Beta by Nikhil Chawla.

Christian Horn is a Red Hat TAM based in Tokyo, working with partners and customers. Having worked as a Linux engineer/architect in Germany since 2001, and later as a Red Hat TAM in Munich and Tokyo, virtualization, operations and performance tuning are among the recurring topics of his daily work. In his spare time, Christian works on improving his Japanese, cycles to Tokyo’s green spots and enjoys onsen, Japanese baths. More posts from Christian are here.

A Red Hat Technical Account Manager (TAM) is a specialized product expert who works collaboratively with IT organizations to strategically plan for successful deployments and help realize optimal performance and growth. The TAM is part of Red Hat’s world-class Customer Experience and Engagement organization and provides proactive advice and guidance to help you identify and address potential problems before they occur. Should a problem arise, your TAM will own the issue and engage the best resources to resolve it as quickly as possible with minimal disruption to your business.

Connect with TAMs at a Red Hat Convergence event near you! Red Hat Convergence is a free, invitation-only event offering technical users an opportunity to deepen their Red Hat product knowledge and discover new ways to apply open source technology to meet their business goals. These events travel to cities around the world to provide you with a convenient, local one-day experience to learn and connect with Red Hat experts and industry peers.

Open source is collaborative curiosity. Join us at Red Hat Summit, May 8-10, in San Francisco to connect with TAMs and other Red Hat experts in person! Register now for only US$1,100 using code CEE18.


Cloud SQL for PostgreSQL now generally available


Among open-source relational databases, PostgreSQL is one of the most popular—and the most sought-after by Google Cloud Platform (GCP) users. Today, we’re thrilled to announce that PostgreSQL is now generally available and fully supported for all customers on our Cloud SQL fully-managed database service.

Backed by Google’s 24x7 SRE team, high availability with automatic failover, and our SLA, Cloud SQL for PostgreSQL is ready for the demands of your production workloads. It’s built on the strength and reliability of Google Cloud’s infrastructure, scales to support critical workloads and automates all of your backups, replication, patches and updates while ensuring greater than 99.95% availability anywhere in the world. Cloud SQL lets you focus on your application, not your IT operations.

While Cloud SQL for PostgreSQL was in beta, we added high availability and replication, higher performance instances with up to 416GB of RAM, and support for 19 additional extensions. It was also added to the Google Cloud Business Associate Agreement (BAA), making it available to HIPAA-covered customers.

Cloud SQL for PostgreSQL runs standard PostgreSQL to maintain compatibility. And when we make improvements to PostgreSQL, we make them available for everyone by contributing to the open source community.

Throughout beta, thousands of customers from a variety of industries, including commercial real estate, satellite imagery, and online retail, deployed workloads on Cloud SQL for PostgreSQL. Here’s how one customer is using Cloud SQL for PostgreSQL to decentralize their data management and scale their business.

How OneMarket decentralizes data management with Cloud SQL


OneMarket is reshaping the way the world shops. Through the power of data, technology, and cross-industry collaboration, OneMarket’s goal is to create better end-to-end retail experiences for consumers.

Built out of Westfield Labs and Westfield Retail Solutions, OneMarket unites retailers, brands, venues and partners to facilitate collaboration on data insights and implement new technologies, such as natural language processing, artificial intelligence and augmented reality at scale.

To build the platform for a network of retailers, venues and technology partners, OneMarket selected GCP, citing its global locations and managed services such as Kubernetes Engine and Cloud SQL.

"I want to focus on business problems. My team uses managed services, like Cloud SQL for PostgreSQL, so we can focus on shipping better quality code and improve our time to market. If we had to worry about servers and systems, we would be spending a lot more time on important, but somewhat insignificant management tasks. As our CTO says, we don’t want to build the plumbing, we want to build the house." 
— Peter McInerney, Senior Director of Technical Operations at OneMarket 

The OneMarket team employs a microservices architecture to develop, deploy and update parts of their platform quickly and safely. Each microservice is backed by an independent storage service. Cloud SQL for PostgreSQL instances back many of the platform’s 15 microservices, decentralizing data management and ensuring that each service is independently scalable.
 "I sometimes reflect on where we were with Westfield Digital in 2008 and 2009. The team was constantly in the datacenter to maintain servers and manage failed disks. Now, it is so easy to scale." 
— Peter McInerney 

Because the team was able to focus on data models rather than database management, developing the OneMarket platform proceeded smoothly and is now in production, reliably processing transactions for its global customers. Using BigQuery and Cloud SQL for PostgreSQL, OneMarket analyzes data and provides insights into consumer behavior and intent to retailers around the world.

Peter’s advice for companies evaluating cloud solutions like Cloud SQL for PostgreSQL: “You just have to give it a go. Pick a non-critical service and get it running in the cloud to begin building confidence.”

Getting started with Cloud SQL for PostgreSQL 


Connecting to a Google Cloud SQL database is the same as connecting to a PostgreSQL database—you use standard connectors and standard tools such as pg_dump to migrate data. If you need assistance, our partner ecosystem can help you get acquainted with Cloud SQL for PostgreSQL. To streamline data transfer, reach out to Google Cloud partners Alooma, Informatica, Segment, Stitch, Talend and Xplenty. For help with visualizing analytics data, try ChartIO, iCharts, Looker, Metabase, and Zoomdata.
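For example, connecting from Python with a standard driver looks the same as for any self-hosted PostgreSQL server. The host, database name, and credentials below are placeholders; in practice you would typically connect through the Cloud SQL Proxy or the instance's IP:

# Connecting to Cloud SQL for PostgreSQL with a standard driver (psycopg2).
# Host, database name and credentials are placeholders for illustration.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",        # e.g. a Cloud SQL Proxy listening locally
    port=5432,
    dbname="mydb",
    user="postgres",
    password="example-password",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()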

Sign up for a $300 credit to try Cloud SQL and the rest of GCP. You can start with inexpensive micro instances for testing and development, and scale them up to serve performance-intensive applications when you’re ready.

Cloud SQL for PostgreSQL reaching general availability is a huge milestone and the best is still to come. Let us know what other features and capabilities you need with our Issue Tracker and by joining the Cloud SQL discussion group. We’re glad you’re along for the ride, and look forward to your feedback!

Reading Aloud to Young Children Has Benefits for Behavior and Attention


It’s a truism in child development that the very young learn through relationships and back-and-forth interactions, including the interactions that occur when parents read to their children. A new study provides evidence of just how sustained an impact reading and playing with young children can have, shaping their social and emotional development in ways that go far beyond helping them learn language and early literacy skills. The parent-child-book moment even has the potential to help curb problem behaviors like aggression, hyperactivity and difficulty with attention, a new study has found.

“We think of reading in lots of different ways, but I don’t know that we think of reading this way,” said Dr. Alan Mendelsohn, an associate professor of pediatrics at New York University School of Medicine, who is the principal investigator of the study, “Reading Aloud, Play and Social-Emotional Development,” published in the journal Pediatrics.

The researchers, many of whom are my friends and colleagues, showed that an intervention, based in pediatric primary care, to promote parents reading aloud and playing with their young children could have a sustained impact on children’s behavior. (I am among those the authors thanked in the study acknowledgments, and I should acknowledge in return that I am not only a fervent believer in the importance of reading aloud to young children, but also the national medical director of Reach Out and Read, a related intervention, which works through pediatric checkups to promote parents reading with young children.)

This study involved 675 families with children from birth to 5; it was a randomized trial in which 225 families received the intervention, called the Video Interaction Project, and the other families served as controls. The V.I.P. model was originally developed in 1998, and has been studied extensively by this research group.

Participating families received books and toys when they visited the pediatric clinic. They met briefly with a parenting coach working with the program to talk about their child’s development, what the parents had noticed, and what they might expect developmentally, and then they were videotaped playing and reading with their child for about five minutes (or a little longer in the part of the study which continued into the preschool years). Immediately after, they watched the videotape with the study interventionist, who helped point out the child’s responses.

“They get to see themselves on videotape and it can be very eye-opening how their child reacts to them when they do different things,” said Adriana Weisleder, one of the authors of the study, who is an assistant professor in the Department of Communication Sciences and Disorders at Northwestern University. “We try to highlight the positive things in that interaction — maybe they feel a little silly, and then we show them on the tape how much their kid loves it when they do these things, how fun it is — it can be very motivating.”

“Positive parenting activities make the difference for children,” said Dr. Benard Dreyer, a professor of pediatrics at New York University School of Medicine and past president of the American Academy of Pediatrics, who was the senior author on the study. He noted that the critical period for child development starts at birth, which is also a time when there are many pediatric visits. “This is a great time for us to reach parents and help them improve their parenting skills, which is what they want to do.”

The Video Interaction Project started as an infant-toddler program, working with low-income urban families in New York during clinic visits from birth to 3 years of age. Previously published data from a randomized controlled trial funded by the National Institute of Child Health and Human Development showed that the 3-year-olds who had received the intervention had improved behavior — that is, they were significantly less likely to be aggressive or hyperactive than the 3-year-olds in the control group.

This new study looked at those children a year and a half later — much closer to school entry — and found that the effects on behavior persisted. The children whose families had participated in the intervention when they were younger were still less likely to manifest those behavior problems — aggression, hyperactivity, difficulty with attention — that can so often make it hard for children to do well and learn and prosper when they get to school.

Some children were enrolled in a second stage of the project, and the books and toys and videotaping continued as they visited the clinic from age 3 to 5; they showed additional “dose-response” effects; more exposure to the “positive parenting” promotion meant stronger positive impacts on the children’s behavior.

“The reduction in hyperactivity is a reduction in meeting clinical levels of hyperactivity,” Dr. Mendelsohn said. “We may be helping some children so they don’t need to have certain kinds of evaluations.” Children who grow up in poverty are at much higher risk of behavior problems in school, so reducing the risk of those attention and behavior problems is one important strategy for reducing educational disparities, as is improving children’s language skills, another source of school problems for poor children.

But all parents should appreciate the ways that reading and playing can shape cognitive as well as social and emotional development, and the power of parental attention to help children flourish. Dr. Weisleder said that in reading and playing, children can encounter situations a little more challenging than what they usually come across in everyday life, and adults can help them think about how to manage those situations.

“Maybe engaging in more reading and play both directly reduces kids’ behavior problems because they’re happier and also makes parents enjoy their child more and view that relationship more positively,” she said.

Reading aloud and playing imaginative games may offer special social and emotional opportunities, Dr. Mendelsohn said. “We think when parents read with their children more, when they play with their children more, the children have an opportunity to think about characters, to think about the feelings of those characters,” he said. “They learn to use words to describe feelings that are otherwise difficult and this enables them to better control their behavior when they have challenging feelings like anger or sadness.”

“The key take-home message to me is that when parents read and play with their children when their children are very young — we’re talking about birth to 3 year olds — it has really large impacts on their children’s behavior,” Dr. Mendelsohn said. And this is not just about families at risk. “All families need to know when they read, when they play with their children, they’re helping them learn to control their own behavior,” he said, so that they will come to school able to manage the business of paying attention and learning.

California Opens Investigation into Tesla Workplace Conditions


California’s Division of Occupational Safety and Health has opened an investigation into Tesla Inc. following a report about worker protections at the company’s lone auto plant in Fremont, California.

The state agency “takes seriously reports of workplace hazards and allegations of employers’ underreporting recordable work-related injuries and illnesses” and “currently has an open inspection at Tesla,” said Erika Monterroza, a spokeswoman for the state’s industrial relations department.

California requires employers to maintain what are called Log 300 records of injuries and illnesses. Monterroza said that while the state doesn’t disclose details of open inspections, they typically include a review of employers’ Log 300 records and checks to ensure that serious injuries are reported within eight hours as required by law.

A story this week by the Center for Investigative Reporting’s Reveal alleged that Tesla failed to report serious injuries on legally mandated reports in order to make its numbers appear better than they actually were. The website cited former members of Tesla’s environment, health and safety team as saying that Chief Executive Officer Elon Musk’s personal preferences were often invoked as a reason not to address potential hazards.

Tesla pushed back against the story in a lengthy blog post on Monday, calling it “an ideologically motivated attack by an extremist organization working directly with union supporters to create a calculated disinformation campaign against Tesla.” The United Auto Workers union has been trying to organize Fremont workers for more than a year.

California’s Division of Occupational Safety and Health, known as Cal/OSHA, opened its inspection late Tuesday. Investigations can be triggered for a number of reasons, including internal complaints from employees. The agency declined to say what triggered its latest probe, which could take as long as six months to complete.

A Tesla spokesman didn’t immediately comment.

Cal/OSHA’s regulations define a serious injury or illness as one that requires employee hospitalization for more than 24 hours for a matter other than medical observation, or one in which part of the body is lost or permanent disfigurement occurs.

A ‘thrilling’ mission to get the Swedish to change overnight


The Economics of Change

“Thrilling” is the word repeatedly used by Jan Ramqvist to describe how he felt about participating in a nationwide mission to get all Swedish motorists and cyclists to change the habits of a lifetime and begin driving on the right-hand side of the road for the first time.

“Everyone was talking about it, but we really didn’t know how it would work out,” explains the 77-year-old, who was just 26 and a newly qualified traffic engineer working in the city of Malmö when the potentially catastrophic changeover took place on 3 September 1967.

The day was officially known as Högertrafikomläggningen (right-hand traffic diversion) or simply Dagen H (H-Day). Its mission was to put Sweden on the same path as the rest of its continental European neighbours, most of which had long followed the global trend to drive cars on the right.

As well as hoping to boost the country’s international reputation, the Swedish government had grown increasingly concerned about safety, with the number of registered vehicles on the roads shooting up from 862,992 a decade earlier to a figure of 1,976,248 recorded by Statistics Sweden at the time of H-Day. Sweden’s population was around 7.8 million.


Despite driving on the left, many Swedes already owned cars with the steering wheel on the left-hand side of the vehicle, since many bought from abroad and major Swedish car manufacturers such as Volvo had chosen to follow the trend. However, there were concerns that this was a factor in rising numbers of fatal road traffic accidents, up from 595 in 1950 to 1,313 in 1966, alongside an increased frequency of collisions around Sweden’s borders with Denmark, Norway and Finland.

“The market for cars in Sweden was not so big and so we tended to buy left-driven cars,” explains Lars Magnusson, a professor in economic history at Uppsala University. “But that meant that you would be sitting on the opposite side to what made sense… and you were looking down into the ditch!”

‘Incredibly hard’

In the run-up to H-Day, each local municipality had to deal with issues ranging from repainting road markings to relocating bus stops and traffic lights, and redesigning intersections, bicycle lanes and one-way streets.

Several cities including Stockholm, Malmö and Helsingborg also used the change to implement more wide-ranging transport changes, such as closing tram lines to allow for more bus routes. Hundreds of new buses were purchased by municipalities around the country, and around 8,000 older buses were reconfigured to provide doors on both sides. The total cost of amending public transportation came in at 301,457,972 Swedish kronor.

Some 360,000 street signs had to be switched nationwide, which largely took place on a single day before the move to right-hand driving, with council workers joined by the military and working late into the night to ensure the task got done before H-Day formally revved into gear on Sunday morning. All but essential traffic was banned from the roads.


“I worked incredibly hard on the night itself,” remembers Ramqvist, who shared the responsibility for ensuring around 3,000 signs in Malmö were moved correctly.

“My boss was very proud because we were one of the first (municipalities) to ring Stockholm and tell the head of the commission that we were finished,” he says, recalling a charged and celebratory atmosphere. “We found ourselves eating cake and drinking coffee in the middle of the night!”

Others remember the stress of the project more vividly. 

“The most challenging thing was the shortage of time, no vacation at all, too many hours a day for months, I almost killed myself,” says Arthur Olin, now 82, who was working as a traffic consultant in the city of Helsingborg and says he spent a full year knee-deep in logistical planning.

The stress caused him to “hit the wall” a year later. “I had to go to Africa for two weeks just to cut all connections to the job – doctor’s sharp instructions!”

A new era

But as Dagen H finally dawned, the hard work all appeared to pay off. Swedes began cautiously driving on the right-hand side of roads around the country at precisely 5am on 3 September 1967, following a radio countdown.

Olof Palme, the Swedish Minister of Communication (who later became Prime Minister), went on air to say that the move represented “a very large change in our daily existence, our everyday life”.

“I dare say that never before has a country invested so much personal labour, and money, to achieve uniform international traffic rules,” he announced.


In total, the project cost 628 million kronor, just 5% over the government’s estimated budget two years earlier, and the equivalent of around 2.6 billion kronor ($316m) today.

But economic historian Lars Magnusson argues that this figure is actually relatively small, given the scale of the plan, which was the biggest infrastructure project Sweden had ever seen.

As a point of comparison, he refers to the total 2017 budget given to the Swedish Transport Administration (the government agency responsible for transport planning) for roads and railways – some 25 billion kronor ($2.97bn).

“[Dagen H] was a fairly cheap transfer in a sense – it was not a very big sum even at that time,” he explains.

This, he says, was partly due to Swedish officials living up to their global reputation for efficiency and careful planning, alongside the logistics of the era.

“The road system was not so developed as it is today and so the costs in infrastructure were not extremely high and it was also because we already had the left-hand drive cars.”


Crisis averted

In safety terms, the project was declared a success almost immediately. As Swedes began their working week on the day after H-Day, 157 minor traffic accidents were reported around the country, slightly fewer than the average for a typical Monday. Nobody died.

Peter Kronborg, a Stockholm-based traffic consultant and author of a book about Dagen H, Håll dig till höger Svensson (Keep to the right, Svensson), was 10 years old on the day of the switch and recalls excitedly riding his bicycle on the right-hand side of the road for the first time, as well as a buzz around global media gathering in the Swedish capital to report on the day’s events.


“It was the most important thing to happen in Sweden in 1967,” he says. “The journalists – especially the guys from BBC – they were waiting for this bloodbath – a huge number of accidents. They were a little disappointed. At least that’s what I read!”

A total of 1,077 people died and 21,001 were injured in 1967, the year of Dagen H, down from 1,313 and 23,618 respectively in 1965, a decline largely attributed to the extra caution Swedes took after the switchover and to the state’s nationwide campaign. It took another three years before accident and fatality rates returned to their original levels, during which time car ownership continued to increase rapidly across the country.


Driving school

The investment in the planning and logistics needed to prepare the roads clearly helped to avoid confusion among drivers. But a large part of the government’s budget for Dagen H was also spent on communication initiatives designed to educate the Swedish public and get them behind the change. On paper, it didn’t look easy: in a public referendum in 1955, 83% of voters had actually been against the switch.

The information campaign – costing around 43 million kronor (out of the total 628,349,774 kronor spent) – included television, radio and newspaper advertisements, and talks in schools. Dagen H had its own logo, emblazoned on billboards, buses and milk cartons.


There was even a song contest to select a theme tune to accompany the switch, with the track Håll dig till höger Svensson (the title of Peter Kronborg’s book) chosen in a national vote and reaching number five on the Swedish hit parade. Meanwhile, public service television booked global celebrities to appear on its most popular shows, designed to attract large audiences who would then be informed about Dagen H during the same programmes.

“The politicians realised that it wasn’t enough to have an information programme, they needed a propaganda campaign!” laughs Kronborg. “The ambition was not to reach 99% of everyone but to reach 100%.”

Meanwhile Lars Magnusson adds that a more general “culture of conformism” and trust in authorities prevalent in Sweden at the time helped enable the shift in public opinion.

“The media was at that time less critical and they were reporting what the experts told them and if the experts said that this would not be very costly and it would benefit everybody, well, the media would accept that and I suppose the public would accept it as well.”

Magnusson believes that, as well as being important for Sweden’s global reputation as part of the Nordic nation’s wider efforts to be seen as a major player in Europe, the switch may also have had longer-term costs and benefits, such as increased trade and transportation from other parts of the continent. However, this broader economic impact is, he argues, “difficult to estimate”, since the changeover occurred “during a period where the economy was growing a lot - and GDP - each year, so it is difficult to distinguish the possible benefits on trade and transport”.

Future lessons

So could Sweden pull off anything like Dagen H today?

Recently ranked top in Europe in Bloomberg’s global innovation ranking, with a transport infrastructure quality above the EU average and one of the region’s strongest digital economies, the Nordic nation would certainly have a head start should it decide to embark on a similarly disruptive transportation project.

But the prevailing sentiment among those who’ve studied Dagen H closely is that today’s political, economic and media climates would present numerous fresh challenges beyond those that existed at the time of Dagen H.

Peter Kronborg’s key argument is that ministers and public authorities would struggle to shift public opinion and shape a new consensus so dramatically. He says that “everything about Swedish society became a little more individualistic” just a year after Dagen H, in the wake of student radicalism and counter-culturism across Europe, and he believes that today the Swedish public would be outraged if politicians went ahead with a project so vehemently opposed in a referendum.

Meanwhile he suggests our current media diet of YouTube and Netflix amid the demise of “prime time television” would make it “much more complicated” for politicians and campaigners to reach the entire population, whereas at the time of Dagen H there was only one television channel and one radio channel and “everybody watched and listened to them”.

From an economic perspective, Lars Magnusson estimates that the financial cost of implementing Dagen H today would be greatly increased due to Sweden’s road networks and infrastructure being “much more developed” than 50 years ago.


“It’s difficult to give an exact estimate, but I would say it would go up at least 10 times. This is my guess,” he says.

Even Sweden’s current transportation strategists are sceptical that an equivalent to Dagen H could be implemented anywhere near as smoothly today as it was in 1967.

“My personal belief is that it would be very difficult,” says Mattias Lundberg, head of traffic planning for the city of Stockholm.

“In those days, a few – normally men – could really gain a lot of power in order to influence things on a very broad scale. Today society is so much more diverse.”

But he notes that Dagen H was a monumental event, one still occasionally talked about in his office, and that it helped to encourage an ongoing focus on road safety in both public and political discourse.

In 1997 Sweden started what became a multinational project to work towards zero fatalities on public highways, known as Vision Zero. The country currently has one of the world’s lowest road death rates: 270 people died in 2016, compared with 1,313 in 1966, the year before Dagen H.

Yet these days a large proportion of the work done by Lundberg’s team is about planning for a future where far fewer Swedes get behind the wheel at all.

The Swedish capital’s current strategy has a sustainability focus that prioritises walking, cycling and public transportation. The nation’s first self-driving buses launched in Stockholm in January and officials are also looking at what will perhaps be the most major shift in travel since Dagen H: the arrival of autonomous cars.

Lundberg argues that any dramatic changes are still “quite a long time away” though and – unlike Dagen H – they will most definitely follow a period of heavy public consultation.


What Dockless Bikes and Scooters Are Exposing


Machine Learning’s ‘Amazing’ Ability to Predict Chaos


Half a century ago, the pioneers of chaos theory discovered that the “butterfly effect” makes long-term prediction impossible. Even the smallest perturbation to a complex system (like the weather, the economy or just about anything else) can touch off a concatenation of events that leads to a dramatically divergent future. Unable to pin down the state of these systems precisely enough to predict how they’ll play out, we live under a veil of uncertainty.

But now the robots are here to help.

In a series of results reported in the journals Physical Review Letters and Chaos, scientists have used machine learning — the same computational technique behind recent successes in artificial intelligence — to predict the future evolution of chaotic systems out to stunningly distant horizons. The approach is being lauded by outside experts as groundbreaking and likely to find wide application.

“I find it really amazing how far into the future they predict” a system’s chaotic evolution, said Herbert Jaeger, a professor of computational science at Jacobs University in Bremen, Germany.

The findings come from veteran chaos theorist Edward Ott and four collaborators at the University of Maryland. They employed a machine-learning algorithm called reservoir computing to “learn” the dynamics of an archetypal chaotic system called the Kuramoto-Sivashinsky equation. The evolving solution to this equation behaves like a flame front, flickering as it advances through a combustible medium. The equation also describes drift waves in plasmas and other phenomena, and serves as “a test bed for studying turbulence and spatiotemporal chaos,” said Jaideep Pathak, Ott’s graduate student and the lead author of the new papers.

After training itself on data from the past evolution of the Kuramoto-Sivashinsky equation, the researchers’ reservoir computer could then closely predict how the flamelike system would continue to evolve out to eight “Lyapunov times” into the future, eight times further ahead than previous methods allowed, loosely speaking. The Lyapunov time represents how long it takes for two almost-identical states of a chaotic system to exponentially diverge. As such, it typically sets the horizon of predictability.
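
To make the Lyapunov-time idea concrete, here is a minimal Python sketch, not taken from the researchers' work: it follows two almost-identical trajectories of a simple chaotic system (the logistic map, standing in for something like the Kuramoto-Sivashinsky equation) and measures how fast they diverge. One common convention takes the Lyapunov time to be the reciprocal of that divergence rate; the map, the step count and the initial separation are arbitrary choices for the example.

    import math

    def logistic(x, r=4.0):
        # One step of the logistic map, a standard toy chaotic system.
        return r * x * (1.0 - x)

    x, y = 0.4, 0.4 + 1e-10      # two almost-identical initial states
    steps = 25
    log_sep = []                 # log of the separation after each step
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        log_sep.append(math.log(max(abs(x - y), 1e-300)))

    # The average growth rate of the log-separation approximates the largest
    # Lyapunov exponent; its reciprocal is the Lyapunov time (in map steps).
    lyap = (log_sep[-1] - log_sep[0]) / (steps - 1)
    print("Lyapunov exponent ~", round(lyap, 3), " Lyapunov time ~", round(1.0 / lyap, 2))

For the logistic map at r = 4 the exponent comes out near 0.69, meaning nearby states separate by roughly a factor of two per step, so useful prediction reaches only a handful of steps unless the initial state is known extremely precisely.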

“This is really very good,” Holger Kantz, a chaos theorist at the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany, said of the eight-Lyapunov-time prediction. “The machine-learning technique is almost as good as knowing the truth, so to say.”

The algorithm knows nothing about the Kuramoto-Sivashinsky equation itself; it only sees data recorded about the evolving solution to the equation. This makes the machine-learning approach powerful; in many cases, the equations describing a chaotic system aren’t known, crippling dynamicists’ efforts to model and predict them. Ott and company’s results suggest you don’t need the equations — only data. “This paper suggests that one day we might be able perhaps to predict weather by machine-learning algorithms and not by sophisticated models of the atmosphere,” Kantz said.

Besides weather forecasting, experts say the machine-learning technique could help with monitoring cardiac arrhythmias for signs of impending heart attacks and monitoring neuronal firing patterns in the brain for signs of neuron spikes. More speculatively, it might also help with predicting rogue waves, which endanger ships, and possibly even earthquakes.

Ott particularly hopes the new tools will prove useful for giving advance warning of solar storms, like the one that erupted across 35,000 miles of the sun’s surface in 1859. That magnetic outburst created aurora borealis visible all around the Earth and blew out some telegraph systems, while generating enough voltage to allow other lines to operate with their power switched off. If such a solar storm lashed the planet unexpectedly today, experts say it would severely damage Earth’s electronic infrastructure. “If you knew the storm was coming, you could just turn off the power and turn it back on later,” Ott said.

He, Pathak and their colleagues Brian Hunt, Michelle Girvan and Zhixin Lu (who is now at the University of Pennsylvania) achieved their results by synthesizing existing tools. Six or seven years ago, when the powerful algorithm known as “deep learning” was starting to master AI tasks like image and speech recognition, they started reading up on machine learning and thinking of clever ways to apply it to chaos. They learned of a handful of promising results predating the deep-learning revolution. Most importantly, in the early 2000s, Jaeger and fellow German chaos theorist Harald Haas made use of a network of randomly connected artificial neurons — which form the “reservoir” in reservoir computing — to learn the dynamics of three chaotically coevolving variables. After training on the three series of numbers, the network could predict the future values of the three variables out to an impressively distant horizon. However, when there were more than a few interacting variables, the computations became impossibly unwieldy. Ott and his colleagues needed a more efficient scheme to make reservoir computing relevant for large chaotic systems, which have huge numbers of interrelated variables. Every position along the front of an advancing flame, for example, has velocity components in three spatial directions to keep track of.

It took years to strike upon the straightforward solution. “What we exploited was the locality of the interactions” in spatially extended chaotic systems, Pathak said. Locality means variables in one place are influenced by variables at nearby places but not by places far away. “By using that,” Pathak explained, “we can essentially break up the problem into chunks.” That is, you can parallelize the problem, using one reservoir of neurons to learn about one patch of a system, another reservoir to learn about the next patch, and so on, with slight overlaps of neighboring domains to account for their interactions.

Parallelization allows the reservoir computing approach to handle chaotic systems of almost any size, as long as proportionate computer resources are dedicated to the task.
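
As a rough illustration of that locality trick (a sketch only; the chunk size, overlap and toy field below are arbitrary choices, not the paper's), a spatially extended system can be covered with overlapping patches, each of which would be handed to its own reservoir:

    import numpy as np

    def split_into_patches(field, n_patches=8, halo=2):
        # Cover a 1-D spatial field with overlapping patches; the small
        # "halo" on each side captures interactions with neighbouring patches.
        n = field.shape[0]
        size = n // n_patches
        patches = []
        for i in range(n_patches):
            lo = max(0, i * size - halo)
            hi = min(n, (i + 1) * size + halo)
            patches.append(field[lo:hi])
        return patches

    field = np.sin(np.linspace(0, 4 * np.pi, 64))   # stand-in for one spatial snapshot
    patches = split_into_patches(field)
    print([len(p) for p in patches])                # [10, 12, 12, 12, 12, 12, 12, 10]

Each patch's reservoir then only has to learn the dynamics of a few nearby variables, which is what keeps the computation manageable as the system grows.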

Ott explained reservoir computing as a three-step procedure. Say you want to use it to predict the evolution of a spreading fire. First, you measure the height of the flame at five different points along the flame front, continuing to measure the height at these points as the flickering flame advances over a period of time. You feed these data streams into randomly chosen artificial neurons in the reservoir. The input data triggers the neurons to fire, triggering connected neurons in turn and sending a cascade of signals throughout the network.

The second step is to make the neural network learn the dynamics of the evolving flame front from the input data. To do this, as you feed data in, you also monitor the signal strengths of several randomly chosen neurons in the reservoir. Weighting and combining these signals in five different ways produces five numbers as outputs. The goal is to adjust the weights of the various signals that go into calculating the outputs until those outputs consistently match the next set of inputs — the five new heights measured a moment later along the flame front. “What you want is that the output should be the input at a slightly later time,” Ott explained.

To learn the correct weights, the algorithm simply compares each set of outputs, or predicted flame heights at each of the five points, to the next set of inputs, or actual flame heights, increasing or decreasing the weights of the various signals each time in whichever way would have made their combinations give the correct values for the five outputs. From one time-step to the next, as the weights are tuned, the predictions gradually improve, until the algorithm is consistently able to predict the flame’s state one time-step later.
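
Steps one and two can be captured in a small echo-state-network sketch. This is a hedged illustration rather than the Maryland group's code: the reservoir size, the random weight scaling, the toy "flame height" signals and the closed-form ridge-regression fit below are all assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_res = 5, 300                  # five measured "heights", 300 reservoir neurons

    # Step 1: fixed random input weights and a fixed, stably scaled reservoir.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # keep the signal cascade from blowing up

    def drive(inputs):
        # Feed a (T, n_in) sequence of measurements through the reservoir
        # and record the neuron activations triggered at each time step.
        r = np.zeros(n_res)
        states = []
        for u in inputs:
            r = np.tanh(W @ r + W_in @ u)
            states.append(r.copy())
        return np.array(states)

    # Toy training data standing in for measured flame heights over time.
    t = np.linspace(0, 40, 2000)
    data = np.stack([np.sin(t + k) for k in range(n_in)], axis=1)

    states = drive(data[:-1])             # reservoir states after each input
    targets = data[1:]                    # the measurements one step later

    # Step 2: weight and combine the reservoir signals so that each state
    # reproduces the *next* set of inputs (a ridge-regression fit).
    ridge = 1e-6
    W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ targets).T

The article describes nudging the weights up or down from one time step to the next; the one-shot ridge fit above is just a compact stand-in for that same adjustment.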

“In the third step, you actually do the prediction,” Ott said. The reservoir, having learned the system’s dynamics, can reveal how it will evolve. The network essentially asks itself what will happen. Outputs are fed back in as the new inputs, whose outputs are fed back in as inputs, and so on, making a projection of how the heights at the five positions on the flame front will evolve. Other reservoirs working in parallel predict the evolution of height elsewhere in the flame.
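
Continuing the sketch above, step three closes the loop: each prediction is fed back in as the next input, so the reservoir free-runs into the future.

    def predict(r0, u0, n_steps):
        # Free-run the trained reservoir from its last state r0 and the last
        # observed measurement u0, feeding each prediction back in as input.
        r, u = r0.copy(), u0.copy()
        outputs = []
        for _ in range(n_steps):
            r = np.tanh(W @ r + W_in @ u)
            u = W_out @ r                 # the prediction becomes the next input
            outputs.append(u)
        return np.array(outputs)

    future = predict(states[-1], data[-1], n_steps=200)   # projected "flame heights"

In the parallel scheme described earlier, each patch's reservoir would presumably run a loop like this on its own chunk of the field, sharing only the overlapping edge values with its neighbours.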

In a plot in their PRL paper, which appeared in January, the researchers show that their predicted flamelike solution to the Kuramoto-Sivashinsky equation exactly matches the true solution out to eight Lyapunov times before chaos finally wins, and the actual and predicted states of the system diverge.

The usual approach to predicting a chaotic system is to measure its conditions at one moment as accurately as possible, use these data to calibrate a physical model, and then evolve the model forward. As a ballpark estimate, you’d have to measure a typical system’s initial conditions 100,000,000 times more accurately to predict its future evolution eight times further ahead.

That’s why machine learning is “a very useful and powerful approach,” said Ulrich Parlitz of the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, who, like Jaeger, also applied machine learning to low-dimensional chaotic systems in the early 2000s. “I think it’s not only working in the example they present but is universal in some sense and can be applied to many processes and systems.” In a paper soon to be published in Chaos, Parlitz and a collaborator applied reservoir computing to predict the dynamics of “excitable media,” such as cardiac tissue. Parlitz suspects that deep learning, while more complicated and computationally intensive than reservoir computing, will also work well for tackling chaos, as will other machine-learning algorithms. Recently, researchers at the Massachusetts Institute of Technology and ETH Zurich achieved results similar to the Maryland team’s using a “long short-term memory” neural network, which has recurrent loops that enable it to store temporary information for a long time.

Since the work in their PRL paper, Ott, Pathak, Girvan, Lu and other collaborators have come closer to a practical implementation of their prediction technique. In new research accepted for publication in Chaos, they showed that improved predictions of chaotic systems like the Kuramoto-Sivashinsky equation become possible by hybridizing the data-driven, machine-learning approach and traditional model-based prediction. Ott sees this as a more likely avenue for improving weather prediction and similar efforts, since we don’t always have complete high-resolution data or perfect physical models. “What we should do is use the good knowledge that we have where we have it,” he said, “and if we have ignorance we should use the machine learning to fill in the gaps where the ignorance resides.” The reservoir’s predictions can essentially calibrate the models; in the case of the Kuramoto-Sivashinsky equation, accurate predictions are extended out to 12 Lyapunov times.

The duration of a Lyapunov time varies for different systems, from milliseconds to millions of years. (It’s a few days in the case of the weather.) The shorter it is, the touchier or more prone to the butterfly effect a system is, with similar states departing more rapidly for disparate futures. Chaotic systems are everywhere in nature, going haywire more or less quickly. Yet strangely, chaos itself is hard to pin down. “It’s a term that most people in dynamical systems use, but they kind of hold their noses while using it,” said Amie Wilkinson, a professor of mathematics at the University of Chicago. “You feel a bit cheesy for saying something is chaotic,” she said, because it grabs people’s attention while having no agreed-upon mathematical definition or necessary and sufficient conditions. “There is no easy concept,” Kantz agreed. In some cases, tuning a single parameter of a system can make it go from chaotic to stable or vice versa.

Wilkinson and Kantz both define chaos in terms of stretching and folding, much like the repeated stretching and folding of dough in the making of puff pastries. Each patch of dough stretches horizontally under the rolling pin, separating exponentially quickly in two spatial directions. Then the dough is folded and flattened, compressing nearby patches in the vertical direction. The weather, wildfires, the stormy surface of the sun and all other chaotic systems act just this way, Kantz said. “In order to have this exponential divergence of trajectories you need this stretching, and in order not to run away to infinity you need some folding,” where folding comes from nonlinear relationships between variables in the systems.

The stretching and compressing in the different dimensions correspond to a system’s positive and negative “Lyapunov exponents,” respectively. In another recent paper in Chaos, the Maryland team reported that their reservoir computer could successfully learn the values of these characterizing exponents from data about a system’s evolution. Exactly why reservoir computing is so good at learning the dynamics of chaotic systems is not yet well understood, beyond the idea that the computer tunes its own formulas in response to data until the formulas replicate the system’s dynamics. The technique works so well, in fact, that Ott and some of the other Maryland researchers now intend to use chaos theory as a way to better understand the internal machinations of neural networks.

The Quest for the Next Billion-Dollar Color


Subramanian is 64 and short, with a slight paunch and a dark mustache that curls down the sides of his mouth. Raised in Chennai, on the southeastern coast of India, he developed a fascination with the makeup of objects by examining beautiful seashells that had washed ashore. “How does nature make these things?” he would ask himself. It wasn’t until much later that he began asking how the shells got their colors.

Technically speaking, colors are the visual sensates of light as it’s bent or scattered or reflected off the atomic makeup of an object. Modern computers can display about 16.8 million of them, far more than people can see or printers can reproduce. To transform a digital or imagined color into something tangible requires a pigment. “Yes, you have this fabulous blue,” says Laurie Pressman, vice president of the Pantone Color Institute, which assists companies with color strategies for branding or products. “But wait, can I actually create the blue in velvet, silk, cotton, rayon, or coated paper stock?

“It’s not just the color,” she adds. “It’s the chemical composition of the color. And can that composition actually be realized in the material I’m going to apply it to?”


This limitation restricts the pool of pigments available to the garment, construction, tech, and other industries. A single one, titanium dioxide, accounts for almost two-thirds of the pigments produced globally; valued at about $13.2 billion, it’s responsible for the crisp whiteness of traffic lines, toothpaste, and powdered doughnuts. Getting other colors has historically meant incorporating dangerous inorganic elements or compounds, such as lead, cobalt, or even cyanide. In recent years, health and environmental regulations have created a heavy push toward more benign organic pigments, leading researchers to discover plenty of blacks, yellows, greens. Blue is a different story.

YInMn blue powder.


Subramanian entered the annals of pigment lore even though he wasn’t looking for a pigment or even mixing ingredients thought capable of making a distinctive color. He and his co-investigators were after electronics—specifically a multiferroic, a material that’s both electrically and magnetically polarized, which is useful for computing. The yttrium began as pale white, the indium oxide black, and the manganese a bilious yellow. One of Subramanian’s postdoctoral students, Andrew Smith, ground them to gray, placed the blend in a small dish, and stuck it in a furnace heated to 2,200F. Twelve hours later, out of the oven came a deep, vibrant, intoxicating blue. It was so radiant, so fantastic, it appeared almost extraterrestrial—the ripest Venusian blueberry, cleaned, polished, and glowing from within.

“What the heck happened?” asked Subramanian when he saw it.

“I did exactly what you told me to do!” Smith said.

“Are you sure you made the right one?”

“Yes.”

“Let’s try it again.”

Subramanian knew a little something about discovery. After getting his doctorate in chemistry at the Indian Institute of Technology at Madras, he’d spent three decades researching solid-state materials chemistry at DuPont Co., essentially studying the composition of anything that wasn’t a liquid. He had 54 patents to his name, mostly involving superconductors, thermoelectric materials, and other esoterica compelling only to a narrow band of chemists concerned with electronics. Nothing colorful. But Subramanian could tell something was up with this.

He called some colleagues at the University of California at Santa Barbara. “You’ve got to see this to believe it,” he said. They didn’t share his extracurricular fascination.

Different concentrations of manganese lead to different saturations and densities of the color.


“I’d never seen a color like that in my life,” he recalls. “I’ve made so many oxides. Superconductors are always black or brown or sometimes yellow. Never made this.” It was as though he’d crossbred tomatoes with onions and sprouted a cantaloupe. “I was always worried, is this true? Am I dreaming?”

Blue is one of nature’s most abundant tones, but it’s proved hard for human hands to create. When the ancient Egyptians tried to replicate the deep, oceanic tone of ultramarine to adorn tombs, papyrus, and art, they wound up with something more like turquoise. During the Renaissance, ultramarine could be costlier than gold, because the lapis lazuli from which it derives was mined in remote Afghanistan. (Michelangelo nevertheless scored some for the Sistine Chapel ceiling.) The first modern synthetic pigment, Prussian blue, or ferric ferrocyanide, wasn’t discovered until the early 18th century, by a German chemist trying to make red. Since then, many common blues (cerulean, midnight, aquamarine, smalt) have contained traces of cobalt, a suspected carcinogen.

Subramanian and Smith began testing their compound by dunking it into acid; they were pleased to find it didn’t dissolve. YInMn also proved to be inert, unfading, and nontoxic. It was more durable than ultramarine and Prussian blue, safer than cobalt blue, lighter than phthalocyanine blue, darker than Victoria blue. It was remarkably heat-reflecting, potentially allowing whatever object it coated to remain cool under the sun. Subramanian started keeping two wooden birdhouses positioned beneath a pair of heated lamps on a table in his office. One of the roofs was painted with equal parts black chromium oxide and cobalt blue; the other was black mixed with YInMn blue. The YInMn house stayed around 55 degrees cooler than its counterpart.

Subramanian wrote a paper describing his blue’s properties, eventually publishing it in the Journal of the American Chemical Society, and filed for a patent (No. 8,282,728, issued in October 2012 to Subramanian, Smith, and a colleague). Word that he’d fathered some sort of new blue generated media attention—and corporate suitors in turn. Subramanian was surprised by the interest and quickly applied for more government funding. “I thought everything was known about this,” he says. “Who was going to give me money to do research on pigments?”

Mixing together yttrium, indium, and manganese to make YInMn.


There was more at stake than he initially grasped. The research company Ceresana estimates that pigments are a $30 billion industry, headlined by major chemical companies such as Lanxess, BASF, Venator (a spinoff from Huntsman), and Chemours (a spinoff from DuPont). High-performance pigments—the most colorful, stable, and durable ones—are a rapidly growing market segment, accounting for almost a sixth of the total value in 2016, according to Smithers Rapra Ltd. Demand is rising as lead-based pigments are phased out and emerging markets put high-performance ones in industrial and building coatings.

A safe, durable, environmentally friendly blue ought to be enormously lucrative. It’s overwhelmingly America’s favorite color, according to Pressman of the Pantone Color Institute. “Blue is that concept of hope, promise, dependability, stability, calm, and cool,” she says. “We think of it as a color of constancy and truth. It’s one of the most approachable colors, the color that’s the most comfortable.” Blue is central to the brand imaging of Ikea, Ford, Walmart, and Facebook. It’s on our refrigerator shelves, our walls, our clothes. Two-thirds of Major League Baseball teams feature blue on their uniforms. Blue is everywhere.

The companies calling Subramanian had plenty of ideas for YInMn. HP wanted to know if the pigment could be converted to an ink. Chanel was interested in it for cosmetics. Merck wondered about skin care. Nike was curious whether it could be used in sneaker leather to keep feet cool. Subcontractors to companies working on self-driving cars thought YInMn’s reflective properties might improve the vehicles’ sensors.

Pigment sellers were interested, too. Shepherd Color Co. sent representatives to Oregon State within a week of the paper’s publication, then spent two years testing YInMn for environmental resilience, regulatory fitness, and cost. The next step was licensing. The patent belonged to Subramanian, but Oregon State was entitled to split the royalties because the discovery had occurred in a university-owned laboratory. Shepherd won the exclusive license in 2015 and began preparing to produce half-ton batches for what it decided was the most viable market: industrial coatings for sidings and roofs. (The company declined to disclose the terms of the deal.) Last September, eight years after Subramanian’s discovery, the U.S. Environmental Protection Agency finally approved YInMn for commercial sale in industrial coatings and plastics. Shepherd swiftly went to market.


A natural next step would be for Shepherd to submit an application to be listed on the EPA’s Toxic Substances Control Act inventory, which would approve it for all applications—potentially including some of the ones in which Nike et al were interested. But Shepherd has yet to apply. So far, YInMn’s only other forays into the market have been from Crayola LLC, whose first new crayon in a decade, Bluetiful, was purportedly “inspired” by YInMn—Shepherd wouldn’t comment on whether the company is paying royalties—and Derivan, the Australian paint maker, which has transformed the pigment into an acrylic that’s being offered to artists at a handful of retailers, on a sample basis.

The early market for Shepherd has been limited somewhat by its high price, a function of the cost of indium, a metal primarily used in the clear, thin, conductive layer of smartphone touchscreens. For this purpose it needs to be exceptionally pure, which, coupled with high demand, meant it was selling for $720 per kilogram at the end of 2017. (The figure for manganese was $1.74.) As a result, Shepherd lists YInMn blue at $1,000 per kilogram, by far its most expensive pigment. Ryan, Shepherd’s marketing manager, jokes that unless an indium meteor crashes into southwestern Ohio, the price will remain high.

That doesn’t mean it can’t generate a lot of money. Geoffrey Peake, R&D manager at Shepherd, says YInMn and others in its class, complex inorganic colored pigments, are the company’s most durable offerings. As paint coatings, they can come with a warranty of up to 50 years—well worth the investment for metal roofing or skyscraper facades. Other applications, and lower prices, will have to wait until researchers at Shepherd or Oregon State can replace the indium without dulling the blue.

The slow pace of testing and regulatory approval, plus attorney fees and other licensing expenses, has meant that, almost nine years after his discovery, Subramanian still hasn’t seen any royalties. Still, YInMn has rejuvenated his career and given it new direction. “If we can create a beautiful red pigment, which is stable and nontoxic, it’s going to be a big hit,” he says. “That’s what I’m hoping.”


Login with Facebook data hijacked by JavaScript trackers


Facebook confirms to TechCrunch that it’s investigating a security research report that shows Facebook user data can be grabbed by third-party JavaScript trackers embedded on websites using Login With Facebook. The exploit lets these trackers gather a user’s data including name, email address, age range, gender, locale, and profile photo, depending on what users originally provided to the website. It’s unclear what these trackers do with the data, but many of their parent companies, including Tealium, AudienceStream, Lytics, and ProPS, sell publisher monetization services based on collected user data.

The abusive scripts were found on 434 of the top 1 million websites including freelancer site Fiverr.com, camera seller B&H Photo And Video, and cloud database provider MongoDB. That’s according to Steven Englehardt and his colleagues at Freedom To Tinker, which is hosted by Princeton’s Center For Information Technology Policy.

Meanwhile, concert site BandsInTown was found to be passing Login With Facebook user data to embedded scripts on sites that install its Amplified advertising product. An invisible BandsInTown iframe would load on these sites, pulling in user data that was then accessible to embedded scripts. That let any malicious site using BandsInTown learn the identity of visitors. BandsInTown has now fixed this vulnerability.

TechCrunch is still awaiting a formal statement from Facebook beyond “We will look into this and get back to you.” After TechCrunch brought the issue to MongoDB’s attention this morning, the company investigated and provided this statement: “We were unaware that a third-party technology was using a tracking script that collects parts of Facebook user data. We have identified the source of the script and shut it down.” Fiverr and BandsInTown did not respond before press time.

The discovery of these data security flaws comes at a vulnerable time for Facebook. The company is trying to recover from the Cambridge Analytica scandal, CEO Mark Zuckerberg just testified before Congress, and today it unveiled privacy updates to comply with Europe’s GDPR law. But Facebook’s recent API changes designed to safeguard user data didn’t prevent these exploits. And the situation shines more light on the little-understood ways Facebook users are tracked around the Internet, not just on its site.

“When a user grants a website access to their social media profile, they are not only trusting that website, but also third parties embedded on that site,” writes Englehardt. His chart shows what some trackers are pulling from users. Freedom To Tinker warned OnAudience about another security issue recently, leading it to stop collecting user info.

Facebook could have identified these trackers and prevented these exploits with sufficient API auditing. It’s currently ramping up API auditing as it hunts down other developers that might have improperly shared, sold, or used data, as happened when Dr. Aleksandr Kogan’s app’s user data ended up in the hands of Cambridge Analytica. Facebook could also change its systems to prevent developers from taking an app-specific user ID and employing it to discover that person’s permanent overarching Facebook user ID.

Revelations like this are likely to beckon a bigger data backlash. Over the years, the public has become complacent about the ways their data is exploited without consent around the web. While it’s Facebook in the hot seat, other tech giants like Google rely on user data and operate developer platforms that can be tough to police. And news publishers, desperate to earn enough from ads to survive, often fall in with sketchy ad networks and trackers.

Zuckerberg makes an easy target because the Facebook founder is still the CEO, allowing critics and regulators to blame him for the social network’s failings. But any company playing fast and loose with user data should be sweating.

PostgreSQL website – new design now live


PostgreSQL is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance.

There is a wealth of information to be found describing how to install and use PostgreSQL through the official documentation. The PostgreSQL community provides many helpful places to become familiar with the technology, discover how it works, and find career opportunities. Reach out to the community here.

Android Intern at Tovala (YC W16)


As a Director of Strategy and Operations on the Tovala team, you will be charged with scaling operations as well as finding and implementing efficiencies and optimizations across our entire operation. You’ll be responsible for devising strategies and executing on making our process more efficient, higher quality and cheaper. You will focus on our food operation (everything from procuring the food to packaging to shipping), and support our oven operation, particularly as it relates to getting ovens to our customers.

About You

  • You have experience devising and implementing strategies that yield efficiencies and optimize processes.
  • You have experience managing people.
  • You believe in taking a data-centric approach to decision making.
  • You’re a team player willing to wear multiple hats and get your hands dirty.
  • You’ve worked at a small-to-medium sized company or in a small-group work environment.
  • You want to work in an environment that takes a people-first approach; values humility, integrity and ambition; and is focused on creating a culture that keeps people happy and challenged.
  • You understand and believe in the importance of delicious, healthy food.
  • You have an MBA.
  • Management consulting and/or operations experience is preferred.
  • You live in Chicago or are willing to move.

About the Role

We have built a product that is truly changing the way people eat at home. It’s convenient, it’s delicious and it’s healthy. We’ve only just started optimizing our operation, but intend to grow our strategy/ops team and leverage our engineering talent to build a world-class, state of the art food operation.

You would report directly to our COO and work with her to devise projects to make our process better, cheaper and faster. You would spearhead those projects, manage the people involved, and ensure that we are constantly measuring and tracking the effectiveness of each project. You’d work with industry leaders that have already figured a lot of this out to see what we can learn from others.

Our ultimate goal for our food operation is to have one of the most streamlined, automated processes in the world. We believe we can achieve that because we are as much a technology company as we are a food company – and we plan to use our engineers to bring a unique perspective to our food operation.

You would be tasked with working with our engineers to leverage their skills and expertise to help us build tools (both digital and physical) and create processes to make everything work better. This will run the gamut from how/where we source our products to leveraging different shipping methods/technologies to replicating processes across multiple, nationwide production facilities.

Find out more and apply

Jeff Bezos' annual shareholders letter


To our shareowners:

The American Customer Satisfaction Index recently announced the results of its annual survey, and for the 8th year in a row customers ranked Amazon #1. The United Kingdom has a similar index, The U.K. Customer Satisfaction Index, put out by the Institute of Customer Service. For the 5th time in a row Amazon U.K. ranked #1 in that survey. Amazon was also just named the #1 business on LinkedIn’s 2018 Top Companies list, which ranks the most sought after places to work for professionals in the United States. And just a few weeks ago, Harris Poll released its annual Reputation Quotient, which surveys over 25,000 consumers on a broad range of topics from workplace environment to social responsibility to products and services, and for the 3rd year in a row Amazon ranked #1.

Congratulations and thank you to the now over 560,000 Amazonians who come to work every day with unrelenting customer obsession, ingenuity, and commitment to operational excellence. And on behalf of Amazonians everywhere, I want to extend a huge thank you to customers. It’s incredibly energizing for us to see your responses to these surveys.

One thing I love about customers is that they are divinely discontent. Their expectations are never static – they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before – in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.

How do you stay ahead of ever-rising customer expectations? There’s no single way to do it – it’s a combination of many things. But high standards (widely deployed and at all levels of detail) are certainly a big part of it. We’ve had some successes over the years in our quest to meet the high expectations of customers. We’ve also had billions of dollars’ worth of failures along the way. With those experiences as backdrop, I’d like to share with you the essentials of what we’ve learned (so far) about high standards inside an organization.

Intrinsic or Teachable?

First, there’s a foundational question: are high standards intrinsic or teachable? If you take me on your basketball team, you can teach me many things, but you can’t teach me to be taller. Do we first and foremost need to select for “high standards” people? If so, this letter would need to be mostly about hiring practices, but I don’t think so. I believe high standards are teachable. In fact, people are pretty good at learning high standards simply through exposure. High standards are contagious. Bring a new person onto a high standards team, and they’ll quickly adapt. The opposite is also true. If low standards prevail, those too will quickly spread. And though exposure works well to teach high standards, I believe you can accelerate that rate of learning by articulating a few core principles of high standards, which I hope to share in this letter.

Universal or Domain Specific?

Another important question is whether high standards are universal or domain specific. In other words, if you have high standards in one area, do you automatically have high standards elsewhere? I believe high standards are domain specific, and that you have to learn high standards separately in every arena of interest. When I started Amazon, I had high standards on inventing, on customer care, and (thankfully) on hiring. But I didn’t have high standards on operational process: how to keep fixed problems fixed, how to eliminate defects at the root, how to inspect processes, and much more. I had to learn and develop high standards on all of that (my colleagues were my tutors).


Understanding this point is important because it keeps you humble. You can consider yourself a person of high standards in general and still have debilitating blind spots. There can be whole arenas of endeavor where you may not even know that your standards are low or non-existent, and certainly not world class. It’s critical to be open to that likelihood.

Recognition and Scope

What do you need to achieve high standards in a particular domain area? First, you have to be able to recognize what good looks like in that domain. Second, you must have realistic expectations for how hard it should be (how much work it will take) to achieve that result – the scope.

Let me give you two examples. One is a sort of toy illustration but it makes the point clearly, and another is a real one that comes up at Amazon all the time.

Perfect Handstands

A close friend recently decided to learn to do a perfect free-standing handstand. No leaning against a wall. Not for just a few seconds. Instagram good. She decided to start her journey by taking a handstand workshop at her yoga studio. She then practiced for a while but wasn’t getting the results she wanted. So, she hired a handstand coach. Yes, I know what you’re thinking, but evidently this is an actual thing that exists. In the very first lesson, the coach gave her some wonderful advice. “Most people,” he said, “think that if they work hard, they should be able to master a handstand in about two weeks. The reality is that it takes about six months of daily practice. If you think you should be able to do it in two weeks, you’re just going to end up quitting.” Unrealistic beliefs on scope – often hidden and undiscussed – kill high standards. To achieve high standards yourself or as part of a team, you need to form and proactively communicate realistic beliefs about how hard something is going to be – something this coach understood well.

Six-Page Narratives

We don’t do PowerPoint (or any other slide-oriented) presentations at Amazon. Instead, we write narratively structured six-page memos. We silently read one at the beginning of each meeting in a kind of “study hall.” Not surprisingly, the quality of these memos varies widely. Some have the clarity of angels singing. They are brilliant and thoughtful and set up the meeting for high-quality discussion. Sometimes they come in at the other end of the spectrum.

In the handstand example, it’s pretty straightforward to recognize high standards. It wouldn’t be difficult to lay out in detail the requirements of a well-executed handstand, and then you’re either doing it or you’re not. The writing example is very different. The difference between a great memo and an average one is much squishier. It would be extremely hard to write down the detailed requirements that make up a great memo. Nevertheless, I find that much of the time, readers react to great memos very similarly. They know it when they see it. The standard is there, and it is real, even if it’s not easily describable.

Here’s what we’ve figured out. Often, when a memo isn’t great, it’s not the writer’s inability to recognize the high standard, but instead a wrong expectation on scope: they mistakenly believe a high-standards, six-page memo can be written in one or two days or even a few hours, when really it might take a week or more! They’re trying to perfect a handstand in just two weeks, and we’re not coaching them right. The great memos are written and re-written, shared with colleagues who are asked to improve the work, set aside for a couple of days, and then edited again with a fresh mind. They simply can’t be done in a day or two. The key point here is that you can improve results through the simple act of teaching scope – that a great memo probably should take a week or more.

Skill

Beyond recognizing the standard and having realistic expectations on scope, how about skill? Surely to write a world class memo, you have to be an extremely skilled writer? Is it another required element? In my view, not so much, at least not for the individual in the context of teams. The football coach doesn’t need to be able to throw, and a film director doesn’t need to be able to act. But they both do need to recognize high standards for those things and teach realistic expectations on scope. Even in the example of writing a six-page memo, that’s teamwork. Someone on the team needs to have the skill, but it doesn’t have to be you. (As a side note, by tradition at Amazon, authors’ names never appear on the memos – the memo is from the whole team.)

Benefits of High Standards

Building a culture of high standards is well worth the effort, and there are many benefits. Naturally and most obviously, you’re going to build better products and services for customers – this would be reason enough! Perhaps a little less obvious: people are drawn to high standards – they help with recruiting and retention. More subtle: a culture of high standards is protective of all the “invisible” but crucial work that goes on in every company. I’m talking about the work that no one sees. The work that gets done when no one is watching. In a high standards culture, doing that work well is its own reward – it’s part of what it means to be a professional.

And finally, high standards are fun! Once you’ve tasted high standards, there’s no going back.

So, the four elements of high standards as we see it: they are teachable, they are domain specific, you must recognize them, and you must explicitly coach realistic scope. For us, these work at all levels of detail. Everything from writing memos to whole new, clean-sheet business initiatives. We hope they help you too.

Insist on the Highest Standards

Leaders have relentlessly high standards – many people may think these standards are unreasonably high.

-- from the Amazon Leadership Principles

Recent Milestones

The high standards our leaders strive for have served us well. And while I certainly can’t do a handstand myself, I’m extremely proud to share some of the milestones we hit last year, each of which represents the fruition of many years of collective effort. We take none of them for granted.

 • Prime– 13 years post-launch, we have exceeded 100 million paid Prime members globally. In 2017 Amazon shipped more than five billion items with Prime worldwide, and more new members joined Prime than in any previous year – both worldwide and in the U.S. Members in the U.S. now receive unlimited free two-day shipping on over 100 million different items. We expanded Prime to Mexico, Singapore, the Netherlands, and Luxembourg, and introduced Business Prime Shipping in the U.S. and Germany. We keep making Prime shipping faster as well, with Prime Free Same-Day and Prime Free One-Day delivery now in more than 8,000 cities and towns. Prime Now is available in more than 50 cities worldwide across nine countries. Prime Day 2017 was our biggest global shopping event ever (until surpassed by Cyber Monday), with more new Prime members joining Prime than any other day in our history.
 • AWS– It’s exciting to see Amazon Web Services, a $20 billion revenue run rate business, accelerate its already healthy growth. AWS has also accelerated its pace of innovation – especially in new areas such as machine learning and artificial intelligence, Internet of Things, and serverless computing. In 2017, AWS announced more than 1,400 significant services and features, including Amazon SageMaker, which radically changes the accessibility and ease of use for everyday developers to build sophisticated machine learning models. Tens of thousands of customers are also using a broad range of AWS machine learning services, with active users increasing more than 250 percent in the last year, spurred by the broad adoption of Amazon SageMaker. And in November, we held our sixth re:Invent conference with more than 40,000 attendees and over 60,000 streaming participants.
 • Marketplace– In 2017, for the first time in our history, more than half of the units sold on Amazon worldwide were from our third-party sellers, including small and medium-sized businesses (SMBs). Over 300,000 U.S.-based SMBs started selling on Amazon in 2017, and Fulfillment by Amazon shipped billions of items for SMBs worldwide. Customers ordered more than 40 million items from SMBs worldwide during Prime Day 2017, growing their sales by more than 60 percent over Prime Day 2016. Our Global Selling program (enabling SMBs to sell products across national borders) grew by over 50% in 2017 and cross-border ecommerce by SMBs now represents more than 25% of total third-party sales.

 • Alexa– Customer embrace of Alexa continues, with Alexa-enabled devices among the best-selling items across all of Amazon. We’re seeing extremely strong adoption by other companies and developers that want to create their own experiences with Alexa. There are now more than 30,000 skills for Alexa from outside developers, and customers can control more than 4,000 smart home devices from 1,200 unique brands with Alexa. The foundations of Alexa continue to get smarter every day too. We’ve developed and implemented an on-device fingerprinting technique, which keeps your device from waking up when it hears an Alexa commercial on TV. (This technology ensured that our Alexa Super Bowl commercial didn’t wake up millions of devices.) Far-field speech recognition (already very good) has improved by 15% over the last year; and in the U.S., U.K., and Germany, we’ve improved Alexa’s spoken language understanding by more than 25% over the last 12 months through enhancements in Alexa’s machine learning components and the use of semi-supervised learning techniques. (These semi-supervised learning techniques reduced the amount of labeled data needed to achieve the same accuracy improvement by 40 times!) Finally, we’ve dramatically reduced the amount of time required to teach Alexa new languages by using machine translation and transfer learning techniques, which allows us to serve customers in more countries (like India and Japan).
 • Amazon devices– 2017 was our best year yet for hardware sales. Customers bought tens of millions of Echo devices, and Echo Dot and Fire TV Stick with Alexa were the best-selling products across all of Amazon – across all categories and all manufacturers. Customers bought twice as many Fire TV Sticks and Kids Edition Fire Tablets this holiday season versus last year. 2017 marked the release of our all-new Echo with an improved design, better sound, and a lower price; Echo Plus with a built-in smart home hub; and Echo Spot, which is compact and beautiful with a circular screen. We released our next generation Fire TV, featuring 4K Ultra HD and HDR; and the Fire HD 10 Tablet, with 1080p Full HD display. And we celebrated the 10th anniversary of Kindle by releasing the all-new Kindle Oasis, our most advanced reader ever. It’s waterproof – take it in the bathtub – with a bigger 7” high-resolution 300 ppi display and has built-in audio so you can also listen to your books with Audible.
 • Prime Video – Prime Video continues to drive Prime member adoption and retention. In the last year we made Prime Video even better for customers by adding new, award-winning Prime Originals to the service, like The Marvelous Mrs. Maisel, winner of two Critics’ Choice Awards and two Golden Globes, and the Oscar-nominated movie The Big Sick. We’ve expanded our slate of programming across the globe, launching new seasons of Bosch and Sneaky Pete from the U.S., The Grand Tour from the U.K., and You Are Wanted from Germany, while adding new Sentosha shows from Japan, along with Breathe and the award-winning Inside Edge from India. Also this year, we expanded our Prime Channels offerings, adding CBS All Access in the U.S. and launching Channels in the U.K. and Germany. We debuted NFL Thursday Night Football on Prime Video, with more than 18 million total viewers over 11 games. In 2017, Prime Video Direct secured subscription video rights for more than 3,000 feature films and committed over $18 million in royalties to independent filmmakers and other rights holders. Looking forward, we’re also excited about our upcoming Prime Original series pipeline, which includes Tom Clancy’s Jack Ryan starring John Krasinski; King Lear, starring Anthony Hopkins and Emma Thompson; The Romanoffs, executive produced by Matt Weiner; Carnival Row starring Orlando Bloom and Cara Delevingne; Good Omens starring Jon Hamm; and Homecoming, executive produced by Sam Esmail and starring Julia Roberts in her first television series. We acquired the global television rights for a multi-season production of The Lord of the Rings, as well as Cortés, a miniseries based on the epic saga of Hernán Cortés from executive producer Steven Spielberg, starring Javier Bardem, and we look forward to beginning work on those shows this year.
 • Amazon Music– Amazon Music continues to grow fast and now has tens of millions of paid customers. Amazon Music Unlimited, our on-demand, ad-free offering, expanded to more than 30 new countries in 2017, and membership has more than doubled over the past six months.
 • Fashion– Amazon has become the destination for tens of millions of customers to shop for fashion. In 2017, we introduced our first fashion-oriented Prime benefit, Prime Wardrobe – a new service that brings the fitting room directly to the homes of Prime members so they can try on the latest styles before they buy. We introduced Nike and UGG on Amazon along with new celebrity collections by Drew Barrymore and Dwyane Wade, as well as dozens of new private brands, like Goodthreads and Core10. We’re also continuing to enable thousands of designers and artists to offer their exclusive designs and prints on demand through Merch by Amazon. We finished 2017 with the launch of our interactive shopping experience with Calvin Klein, including pop-up shops, on-site product customization, and fitting rooms with Alexa-controlled lighting, music, and more.

 • Whole Foods– When we closed our acquisition of Whole Foods Market last year, we announced our commitment to making high-quality, natural and organic food available for everyone, then immediately lowered prices on a selection of best-selling grocery staples, including avocados, organic brown eggs, and responsibly-farmed salmon. We followed this with a second round of price reductions in November, and our Prime member exclusive promotion broke Whole Foods’ all-time record for turkeys sold during the Thanksgiving season. In February, we introduced free two-hour delivery on orders over $35 for Prime members in select cities, followed by additional cities in March and April, and plan continued expansion across the U.S. throughout this year. We also expanded the benefits of the Amazon Prime Rewards Visa Card, enabling Prime members to get 5% back when shopping at Whole Foods Market. Beyond that, customers can purchase Whole Foods’ private label products like 365 Everyday Value on Amazon, purchase Echo and other Amazon devices in over a hundred Whole Foods stores, and pick up or return Amazon packages at Amazon Lockers in hundreds of Whole Foods stores. We’ve also begun the technical work needed to recognize Prime members at the point of sale and look forward to offering more Prime benefits to Whole Foods shoppers once that work is completed.
 • Amazon Go– Amazon Go, a new kind of store with no checkout required, opened to the public in January in Seattle. Since opening, we’ve been thrilled to hear many customers refer to their shopping experience as “magical.” What makes the magic possible is a custom-built combination of computer vision, sensor fusion, and deep learning, which come together to create Just Walk Out shopping. With JWO, customers are able to grab their favorite breakfast, lunch, dinner, snack, and grocery essentials more conveniently than ever before. Some of our top-selling items are not surprising – caffeinated beverages and water are popular – but our customers also love the Chicken Banh Mi sandwich, chocolate chip cookies, cut fruit, gummy bears, and our Amazon Meal Kits.
 • Treasure Truck– Treasure Truck expanded from a single truck in Seattle to a fleet of 35 trucks across 25 U.S. cities and 12 U.K. cities. Our bubble-blowing, music-pumping trucks fulfilled hundreds of thousands of orders, from porterhouse steaks to the latest Nintendo releases. Throughout the year, Treasure Truck also partnered with local communities to lift spirits and help those in need, including donating and delivering hundreds of car seats, thousands of toys, tens of thousands of socks, and many other essentials to community members needing relief, from those displaced by Hurricane Harvey, to the homeless, to kids needing holiday cheer.
 • India– Amazon.in is the fastest growing marketplace in India, and the most visited site on both desktop and mobile, according to comScore and SimilarWeb. The Amazon.in mobile shopping app was also the most downloaded shopping app in India in 2017, according to App Annie. Prime added more members in India in its first year than any previous geography in Amazon’s history. Prime selection in India now includes more than 40 million local products from third-party sellers, and Prime Video is investing in India original video content in a big way, including two recent premieres and over a dozen new shows in production.
 • Sustainability– We are committed to minimizing carbon emissions by optimizing our transportation network, improving product packaging, and enhancing energy efficiency in our operations, and we have a long-term goal to power our global infrastructure using 100% renewable energy. We recently launched Amazon Wind Farm Texas, our largest wind farm yet, which generates more than 1,000,000 megawatt hours of clean energy annually from over 100 turbines. We have plans to host solar energy systems at 50 fulfillment centers by 2020, and have launched 24 wind and solar projects across the U.S. with more than 29 additional projects to come. Together, Amazon’s renewable energy projects now produce enough clean energy to power over 330,000 homes annually. In 2017 we celebrated the 10-year anniversary of Frustration-Free Packaging, the first of a suite of sustainable packaging initiatives that have eliminated more than 244,000 tons of packaging materials over the past 10 years. In addition, in 2017 alone our programs significantly reduced packaging waste, eliminating the equivalent of 305 million shipping boxes. And across the world, Amazon is contracting with our service providers to launch our first low-pollution last-mile fleet. Already today, a portion of our European delivery fleet is comprised of low-pollution electric and natural gas vans and cars, and we have over 40 electric scooters and e-cargo bikes that complete local urban deliveries.

 • Empowering Small Business– Millions of small and medium-sized businesses worldwide now sell their products through Amazon to reach new customers around the globe. SMBs selling on Amazon come from every state in the U.S., and from more than 130 different countries around the world. More than 140,000 SMBs surpassed $100,000 in sales on Amazon in 2017, and over a thousand independent authors surpassed $100,000 in royalties in 2017 through Kindle Direct Publishing.
 • Investment & Job Creation– Since 2011, we have invested over $150 billion worldwide in our fulfillment networks, transportation capabilities, and technology infrastructure, including AWS data centers. Amazon has created over 1.7 million direct and indirect jobs around the world. In 2017 alone, we directly created more than 130,000 new Amazon jobs, not including acquisitions, bringing our global employee base to over 560,000. Our new jobs cover a wide range of professions, from artificial intelligence scientists to packaging specialists to fulfillment center associates. In addition to these direct hires, we estimate that Amazon Marketplace has created 900,000 more jobs worldwide, and that Amazon’s investments have created an additional 260,000 jobs in areas like construction, logistics, and other professional services.
 • Career Choice– One employee program we’re particularly proud of is Amazon Career Choice. For hourly associates with more than one year of tenure, we pre-pay 95% of tuition, fees, and textbooks (up to $12,000) for certificates and associate degrees in high-demand occupations such as aircraft mechanics, computer-aided design, machine tool technologies, medical lab technologies, and nursing. We fund education in areas that are in high demand and do so regardless of whether those skills are relevant to a career at Amazon. Globally more than 16,000 associates (including more than 12,000 in the U.S.) have joined Career Choice since the program launched in 2012. Career Choice is live in ten countries and expanding to South Africa, Costa Rica, and Slovakia later this year. Commercial truck driving, healthcare, and information technology are the program’s most popular fields of study. We’ve built 39 Career Choice classrooms so far, and we locate them behind glass walls in high traffic areas inside our fulfillment centers so associates can be inspired by seeing their peers pursue new skills.

The credit for these milestones is deserved by many. Amazon is 560,000 employees. It’s also 2 million sellers, hundreds of thousands of authors, millions of AWS developers, and hundreds of millions of divinely discontent customers around the world who push to make us better each and every day.

Path Ahead

This year marks the 20th anniversary of our first shareholder letter, and our core values and approach remain unchanged. We continue to aspire to be Earth’s most customer-centric company, and we recognize this to be no small or easy challenge. We know there is much we can do better, and we find tremendous energy in the many challenges and opportunities that lie ahead.

A huge thank you to each and every customer for allowing us to serve you, to our shareowners for your support, and to Amazonians everywhere for your ingenuity, your passion, and your high standards.

As always, I attach a copy of our original 1997 letter. It remains Day 1.

Sincerely,


Jeffrey P. Bezos

Founder and Chief Executive Officer

Amazon.com, Inc.



 

1997 LETTER TO SHAREHOLDERS

(Reprinted from the 1997 Annual Report)

 

To our shareholders:

 

Amazon.com passed many milestones in 1997: by year-end, we had served more than 1.5 million customers, yielding 838% revenue growth to $147.8 million, and extended our market leadership despite aggressive competitive entry.

 

But this is Day 1 for the Internet and, if we execute well, for Amazon.com. Today, online commerce saves customers money and precious time. Tomorrow, through personalization, online commerce will accelerate the very process of discovery. Amazon.com uses the Internet to create real value for its customers and, by doing so, hopes to create an enduring franchise, even in established and large markets.

 

We have a window of opportunity as larger players marshal the resources to pursue the online opportunity and as customers, new to purchasing online, are receptive to forming new relationships. The competitive landscape has continued to evolve at a fast pace. Many large players have moved online with credible offerings and have devoted substantial energy and resources to building awareness, traffic, and sales. Our goal is to move quickly to solidify and extend our current position while we begin to pursue the online commerce opportunities in other areas. We see substantial opportunity in the large markets we are targeting. This strategy is not without risk: it requires serious investment and crisp execution against established franchise leaders.

 

It’s All About the Long Term

 

We believe that a fundamental measure of our success will be the shareholder value we create over the long term. This value will be a direct result of our ability to extend and solidify our current market leadership position. The stronger our market leadership, the more powerful our economic model. Market leadership can translate directly to higher revenue, higher profitability, greater capital velocity, and correspondingly stronger returns on invested capital.

 

Our decisions have consistently reflected this focus. We first measure ourselves in terms of the metrics most indicative of our market leadership: customer and revenue growth, the degree to which our customers continue to purchase from us on a repeat basis, and the strength of our brand. We have invested and will continue to invest aggressively to expand and leverage our customer base, brand, and infrastructure as we move to establish an enduring franchise.

 

Because of our emphasis on the long term, we may make decisions and weigh tradeoffs differently than some companies. Accordingly, we want to share with you our fundamental management and decision-making approach so that you, our shareholders, may confirm that it is consistent with your investment philosophy:

 

 • We will continue to focus relentlessly on our customers.
 • We will continue to make investment decisions in light of long-term market leadership considerations rather than short-term profitability considerations or short-term Wall Street reactions.
 • We will continue to measure our programs and the effectiveness of our investments analytically, to jettison those that do not provide acceptable returns, and to step up our investment in those that work best. We will continue to learn from both our successes and our failures.
 • We will make bold rather than timid investment decisions where we see a sufficient probability of gaining market leadership advantages. Some of these investments will pay off, others will not, and we will have learned another valuable lesson in either case.
 • When forced to choose between optimizing the appearance of our GAAP accounting and maximizing the present value of future cash flows, we’ll take the cash flows.
 • We will share our strategic thought processes with you when we make bold choices (to the extent competitive pressures allow), so that you may evaluate for yourselves whether we are making rational long-term leadership investments.
 • We will work hard to spend wisely and maintain our lean culture. We understand the importance of continually reinforcing a cost-conscious culture, particularly in a business incurring net losses.
 • We will balance our focus on growth with emphasis on long-term profitability and capital management. At this stage, we choose to prioritize growth because we believe that scale is central to achieving the potential of our business model.
 • We will continue to focus on hiring and retaining versatile and talented employees, and continue to weight their compensation to stock options rather than cash. We know our success will be largely affected by our ability to attract and retain a motivated employee base, each of whom must think like, and therefore must actually be, an owner.

 

We aren’t so bold as to claim that the above is the “right” investment philosophy, but it’s ours, and we would be remiss if we weren’t clear in the approach we have taken and will continue to take.

 

With this foundation, we would like to turn to a review of our business focus, our progress in 1997, and our outlook for the future.

 

Obsess Over Customers

 

From the beginning, our focus has been on offering our customers compelling value. We realized that the Web was, and still is, the World Wide Wait. Therefore, we set out to offer customers something they simply could not get any other way, and began serving them with books. We brought them much more selection than was possible in a physical store (our store would now occupy 6 football fields), and presented it in a useful, easy-to-search, and easy-to-browse format in a store open 365 days a year, 24 hours a day. We maintained a dogged focus on improving the shopping experience, and in 1997 substantially enhanced our store. We now offer customers gift certificates, 1-Click℠ shopping, and vastly more reviews, content, browsing options, and recommendation features. We dramatically lowered prices, further increasing customer value. Word of mouth remains the most powerful customer acquisition tool we have, and we are grateful for the trust our customers have placed in us. Repeat purchases and word of mouth have combined to make Amazon.com the market leader in online bookselling.

 

By many measures, Amazon.com came a long way in 1997:

 

 • Sales grew from $15.7 million in 1996 to $147.8 million – an 838% increase.
 • Cumulative customer accounts grew from 180,000 to 1,510,000 – a 738% increase.
 • The percentage of orders from repeat customers grew from over 46% in the fourth quarter of 1996 to over 58% in the same period in 1997.
 • In terms of audience reach, per Media Metrix, our Web site went from a rank of 90th to within the top 20.
 • We established long-term relationships with many important strategic partners, including America Online, Yahoo!, Excite, Netscape, GeoCities, AltaVista, @Home, and Prodigy.


Infrastructure

 

During 1997, we worked hard to expand our business infrastructure to support these greatly increased traffic, sales, and service levels:

 

 • Amazon.com’s employee base grew from 158 to 614, and we significantly strengthened our management team.
 • Distribution center capacity grew from 50,000 to 285,000 square feet, including a 70% expansion of our Seattle facilities and the launch of our second distribution center in Delaware in November.
 • Inventories rose to over 200,000 titles at year-end, enabling us to improve availability for our customers.
 • Our cash and investment balances at year-end were $125 million, thanks to our initial public offering in May 1997 and our $75 million loan, affording us substantial strategic flexibility.

 

Our Employees

 

The past year’s success is the product of a talented, smart, hard-working group, and I take great pride in being a part of this team. Setting the bar high in our approach to hiring has been, and will continue to be, the single most important element of Amazon.com’s success.

 

It’s not easy to work here (when I interview people I tell them, “You can work long, hard, or smart, but at Amazon.com you can’t choose two out of three”), but we are working to build something important, something that matters to our customers, something that we can all tell our grandchildren about. Such things aren’t meant to be easy. We are incredibly fortunate to have this group of dedicated employees whose sacrifices and passion build Amazon.com.

 

Goals for 1998

 

We are still in the early stages of learning how to bring new value to our customers through Internet commerce and merchandising. Our goal remains to continue to solidify and extend our brand and customer base. This requires sustained investment in systems and infrastructure to support outstanding customer convenience, selection, and service while we grow. We are planning to add music to our product offering, and over time we believe that other products may be prudent investments. We also believe there are significant opportunities to better serve our customers overseas, such as reducing delivery times and better tailoring the customer experience. To be certain, a big part of the challenge for us will lie not in finding new ways to expand our business, but in prioritizing our investments.

 

We now know vastly more about online commerce than when Amazon.com was founded, but we still have so much to learn. Though we are optimistic, we must remain vigilant and maintain a sense of urgency. The challenges and hurdles we will face to make our long-term vision for Amazon.com a reality are several: aggressive, capable, well-funded competition; considerable growth challenges and execution risk; the risks of product and geographic expansion; and the need for large continuing investments to meet an expanding market opportunity. However, as we’ve long said, online bookselling, and online commerce in general, should prove to be a very large market, and it’s likely that a number of companies will see significant benefit. We feel good about what we’ve done, and even more excited about what we want to do.

 

1997 was indeed an incredible year. We at Amazon.com are grateful to our customers for their business and trust, to each other for our hard work, and to our shareholders for their support and encouragement.


 

Jeffrey P. Bezos

Founder and Chief Executive Officer

Amazon.com, Inc.

The ASUS Tinker Board is a compelling upgrade from a Raspberry Pi 3 B+

I've had a long history playing around with Raspberry Pis and other Single Board Computers (SBCs); from building a cluster of Raspberry Pis to run Drupal, to building a distributed home temperature monitoring system with Raspberry Pis, I've spent a good deal of time testing the limits of an SBC, and also finding ways to use their strengths to my advantage.

[Image: ASUS Tinker Board SBC]

ASUS sent me a Tinker Board late last year, and unfortunately, due to health reasons, I had to delay working on a review of this nice little SBC until now. In the meantime, the Raspberry Pi Foundation released the Pi model 3 B+, which ups the ante and negates a few of the advantages the more expensive Tinker Board had over the older model 3 B (not +). I just posted a comprehensive review of the Pi model 3 B+, and am now posting this review of the Tinker Board to compare and contrast it with the latest Pi offering.

Raspberry Pi model 3 B+ and ASUS Tinker Board overview comparison

Here's a really quick overview of how the two models stack up:

            | Raspberry Pi model 3 B+                 | ASUS Tinker Board
    CPU     | Cortex-A53 Quad Core @ 1.4 GHz          | Cortex-A17 Quad Core @ 1.8 GHz
    RAM     | 1 GB LPDDR2 (900 MHz)                   | 2 GB LPDDR3 (dual channel)
    GPU     | Broadcom VideoCore IV @ 400 MHz         | Mali-T764 @ 600 MHz
    Network | 10/100/1000 Mbps (~230 Mbps real world) | 10/100/1000 Mbps
    GPIO    | 40-pin header (not color coded)         | 40-pin header (color coded)
    Price   | $35                                     | $49.99

I'm leaving out a lot of other specs that don't affect my day-to-day usage of an SBC, but what really stands out to me—and what may still make the Tinker Board worth the extra $15—is the slightly higher-clocked CPU and GPU, the faster onboard networking (which isn't crippled by being shared with the USB 2.0 bus like on the Pi), and double the RAM. The color-coded GPIO pins are a nice bonus, too, as I spend less time fumbling around counting pins on the Pi's unmarked header when experimenting with GPIO projects.

My initial impressions of the Tinker Board hardware are very good; the hardware is almost a perfect match to the layout of the Pi model 2 B, 3 B, and 3 B+—enough so that the Tinker Board fit in all my Pi cases. Note that the CPU seems to stand off ever so slightly higher than the Pi's, so some cases with active cooling (e.g. a fan) might not fit the Tinker Board perfectly (one of my cases wouldn't clamp down all the way).

But as with many other SBCs, the software support is where the rubber meets the road, so let's dive in and see how things stack up!

Getting started with the Tinker Board

The process for getting the OS onto a microSD card is exactly the same as the Raspberry Pi's Raspbian or most other SBCs (this is written for a Mac, but it's similar if you use Linux):

  1. Download and expand the TinkerOS disk image.
  2. Insert a microSD card (I recommend the Samsung Evo+ 32GB), and unmount it: diskutil unmountDisk /dev/disk2 (use diskutil list to find which disk it is; could be /dev/disk3 or something else).
  3. Write the disk image to the card: pv YYYYMMDD-tinker-board-linaro-stretch-alip-vX.X.img | sudo dd of=/dev/rdisk2 bs=1m (I use pv to track progress while the card is written).
  4. Eject the new NO NAME disk that appears after the image is written.
  5. Insert the microSD card into the Asus Tinker Board, and boot it up!

I am using the Debian-based OS (based on Debian 9 / Stretch), since that's the closest to Raspbian and has the best support and usability for general SBC usage and testing.
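
To make those steps repeatable, here's a minimal shell sketch of the same flow on macOS. The disk identifier and image filename are placeholders (check diskutil list for your card and substitute the image you actually downloaded); treat it as a sketch rather than a finished script, and double-check the disk before running, because dd will happily overwrite whatever you point it at.

    #!/bin/bash
    # Flash TinkerOS to a microSD card on macOS (destructive -- verify DISK first!)
    DISK=/dev/disk2                                            # placeholder: find yours with `diskutil list`
    IMAGE=YYYYMMDD-tinker-board-linaro-stretch-alip-vX.X.img   # placeholder: the image you downloaded

    # Unmount (but don't eject) the card so dd can write to the raw device.
    diskutil unmountDisk "$DISK"

    # Write the image; pv shows progress, and the rdisk device plus bs=1m keep the copy fast.
    pv "$IMAGE" | sudo dd of="${DISK/disk/rdisk}" bs=1m

    # Eject the card once the write completes, then move it to the Tinker Board.
    diskutil eject "$DISK"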

The Debian flavor of TinkerOS is as close to bare-bones Debian as you can get, and it boots right into the desktop (no need for a login) when you turn on the Tinker Board. If you plug in an Ethernet cable, it will automatically grab an IP address via DHCP, just like the Pi, and it has SSH enabled out of the box, so you can ssh into it with ssh linaro@[tinker-board-ip-here] right away (I used Fing, via sudo fing, to find the Tinker Board's IP address from my Mac, so I could do everything headless).
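
If you don't have Fing handy, any LAN scanner will do the same job. As one hedged alternative (assuming nmap is installed and your LAN uses a 192.168.1.0/24 subnet, both assumptions on my part), a ping scan will show the board's new lease so you can SSH straight in:

    # Ping-scan the local subnet and look for the Tinker Board's new DHCP lease.
    nmap -sn 192.168.1.0/24

    # Then connect with the default TinkerOS user.
    ssh linaro@192.168.1.20    # placeholder IP: use the address nmap reported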

Networking

After my experience with the ODROID-C2 and Orange Pi—both of which clobbered the Raspberry Pi's wired networking performance—I was excited to see if the Tinker Board lived up to its promise of true 1 Gbps networking:

[Chart: ASUS Tinker Board and Raspberry Pi model 3 B+ benchmarks - onboard LAN file copy speeds]

For file copies, which involve both networking and reading/writing on the microSD card, the Tinker Board does a respectable job, especially for writes, where the Tinker Board's much-improved microSD controller bandwidth shines, allowing speeds up to 3x faster than the Raspberry Pi's.

[Chart: ASUS Tinker Board and Raspberry Pi model 3 B+ benchmarks - iperf onboard LAN speeds]

In terms of raw network performance, as measured by iperf, there's no competition. The ASUS Tinker Board's dedicated gigabit LAN lets it saturate a gigabit Ethernet connection, and because that bandwidth isn't shared with the USB or microSD bus, all other operations can run at full speed as well.
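
If you want to reproduce the raw-throughput comparison yourself, an iperf client/server pair is the standard approach. A rough sketch, assuming iperf 2 is installed on both the Tinker Board and another wired machine (the server IP below is a placeholder):

    # On the other wired machine, start an iperf server:
    iperf -s

    # On the Tinker Board, run a 30-second TCP throughput test against it:
    iperf -c 192.168.1.10 -t 30    # placeholder IP: use your server's address

    # Swap the roles (server on the Tinker Board, client on the other machine)
    # to measure throughput in the opposite direction.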

For more benchmark details, see Networking Benchmarks on the Raspberry Pi Dramble website.

microSD

One performance metric that trips people up the first time they use an SBC is the slow performance of the main drive—the microSD card that runs the OS and impacts the performance of almost everything else on the system. MicroSD cards are not known for great performance in general computing tasks (which involve lots of small file reads and writes); they're designed primarily for media recording devices writing very large, multi-gigabyte files as fast as possible. But the Raspberry Pi cripples microSD performance even further, as you usually can't sustain a large file read or write at more than 20 MB/second. Some of the microSD cards I've tested can hit 60 or more MB/second when I'm using them in my Mac with a USB 3.0 UHS-II SD card reader! So how does the ASUS Tinker Board compare?

[Chart: ASUS Tinker Board and Raspberry Pi model 3 B+ benchmarks - microSD card performance]

The microSD controller in the Tinker Board is clearly operating at a much higher data rate than the one in the Pi; even with the SD overclock enabled on the Pi, it gets nowhere near the performance of the Tinker Board. The hdparm raw read throughput looks impressive, but isn't really a practical aid in measuring how the two devices would feel in real world scenarios. However, the 4k read and write benchmarks track much more realistic scenarios (e.g. when you're booting the SBC, or opening an app, compiling something, browsing the web, etc.), and even here, the ASUS Tinker Board is up to 35% faster than the Pi 3 B+!
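
For anyone who wants to run similar tests, numbers like these can be approximated with stock tools. This is a hedged sketch rather than the exact benchmark suite used here; the /dev/mmcblk0 device name and iozone availability are assumptions, so adjust for your board and install iozone3 (or a similar tool) first:

    # Raw buffered read throughput from the microSD card (run on the board itself).
    sudo hdparm -t /dev/mmcblk0        # assumes the microSD shows up as mmcblk0

    # Large sequential write, forcing data to actually hit the card.
    dd if=/dev/zero of=~/testfile bs=1M count=512 conv=fdatasync

    # 4K random read/write, closer to "real world" OS and app workloads.
    iozone -e -I -a -s 100M -r 4k -i 0 -i 1 -i 2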

The microSD card I tested with is the fastest one I have benchmarked for SBC use (and I have benchmarked them all!), Samsung's Evo+ 32GB microSD card. Highly recommended!

For more benchmark details, see microSD Card Benchmarks on the Raspberry Pi Dramble website.

Power Consumption

One thing most SBCs do very well is conserve power; the mobile System-on-a-Chip (SoC) on which these computers are based is usually optimized for conserving power, especially when not doing intensive operations, because these chips are designed to run primarily on battery power. Comparing the power consumption across different activities on the Pi and Tinker Board:

[Chart: ASUS Tinker Board and Raspberry Pi model 3 B+ benchmarks - power consumption]

The idle power consumption is neck and neck, which bodes well for the Tinker Board; as we'll see later, the higher CPU frequency and CPU performance gains more than justify the increased power consumption under load, but it's very good to see the idle power consumption match (within a margin of error) the Pi 3 B+'s. This means the Tinker Board does a good job of sipping power normally, but can quickly ramp up to tackle a CPU-intensive task, with slightly better power efficiency (all things considered) than the Pi.

For more benchmark details, see Power Consumption Benchmarks on the Raspberry Pi Dramble website.

The importance of a good power supply

Just as with Raspberry Pis, the Tinker Board needs a good, high-output power supply to run consistently at full capacity. Under heavy load it can pull more than 2A at peak, so your best bet is a dedicated power supply like the NorthPada Tinker Board 5V 3A power supply.

CPU and Memory Performance

Let's see how the Pi and Tinker Board stack up when it comes to raw CPU and memory read/write performance, as measured by sysbench:

[Chart: ASUS Tinker Board and Raspberry Pi model 3 B+ benchmarks - CPU and memory speed]

Even with the latest Pi 3 B+ CPU frequency increase, and the better thermal control from the new CPU package, the older Tinker Board's CPU soundly beats the Pi's by 25%. The memory performance, however, is almost the inverse: the Pi beats the Tinker Board by about 30% for both read and write operations.
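
For reference, sysbench runs behind numbers like these typically look something like the following. This uses the older sysbench 0.4.x syntax that ships with Debian Stretch; newer sysbench 1.0+ drops the --test= prefix, so treat the exact flags as an assumption and adjust for your installed version:

    # CPU: calculate primes up to 20,000 across all four cores.
    sysbench --test=cpu --cpu-max-prime=20000 --num-threads=4 run

    # Memory: write throughput over a 10 GB total transfer in 1 KB blocks.
    sysbench --test=memory --memory-block-size=1K --memory-total-size=10G run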

In real-world usage, these numbers are much closer, but as we'll see in a minute, there are use cases where the overall system performance makes a strong case for the Tinker Board.

Drupal Performance

The most important benchmarks are real-world use cases. For something like logging temperature data, even the lowliest Pi Zero or Pi model A could handle it with aplomb. But for more advanced use cases, like using an SBC such as the Tinker Board as a general-purpose Debian workstation, or serving a website, the overall system performance has a huge impact on whether the computer feels fast or not. I do a lot of work with the Drupal open source Content Management System, so one of my best benchmarks is to install and run the latest version of Drupal and run two tests: anonymous (cached) page loads, which mostly test networking and memory access; and authenticated (uncached) page loads, which test CPU, database, and disk access:

[Chart: ASUS Tinker Board and Raspberry Pi model 3 B+ benchmarks - Drupal CMS page load performance]

Anonymous (cached) page loads ran 55% faster on the Tinker Board, and authenticated (uncached) page loads were 50% faster. It's interesting, because individually, the microSD, CPU, and memory benchmarks don't show more than a 20-30% performance improvement over the Pi 3 B+. But if you put everything together in this real-world benchmark, there's a huge performance delta, completely in the Tinker Board's favor.

Overall, it feels to me like the Cortex-A17 SoC on the ASUS Tinker Board is tuned a little better for peak performance than the Pi's Cortex-A53, and even though peak energy consumption is a little higher, the performance boost more than justifies it.

Because of a few slight differences in the default OS layout with TinkerOS vs Raspbian, I had to modify the way I installed Drupal and the LEMP stack using Drupal Pi to make sure all the installation steps worked on the Tinker Board. After working through a few missing packages, everything seemed to run well.
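
If you want to approximate the page-load tests yourself, ApacheBench (ab) is one simple option; it isn't necessarily the exact harness used for the numbers above, and the URL and cookie below are placeholders you'd swap for your own board's address and a real session:

    # Anonymous (cached) page loads: 500 requests, 4 at a time, against the site root.
    ab -n 500 -c 4 http://192.168.1.20/

    # Authenticated (uncached) page loads: pass a logged-in session cookie so Drupal
    # bypasses its page cache (grab the cookie from your browser after logging in).
    ab -n 200 -c 4 -C "SESSxxxxxxxx=placeholder-session-id" http://192.168.1.20/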

For more benchmark details, see Drupal Benchmarks on the Raspberry Pi Dramble website.

Summary

So in the end, I think the question most people would ask themselves is: is the ASUS Tinker Board worth $15 more than a Raspberry Pi model 3 B or model 3 B+? I'd say yes, if you answer yes to one or more of the following:

  • You need the best performance out of an SBC
  • You need more than 1 GB of RAM
  • You want the slightly nicer setup experience, support, and fit-and-finish of the Tinker Board vs. other Pi clones (like Orange Pi, ODROID, etc.)
  • You need fast networking

But if you didn't answer yes to one or more of those statements, I'd have a harder time recommending it.

The Pi community is pretty diverse and has you covered for so many projects and use cases, whereas there are fewer developers and bloggers posting about their experiences with the Tinker Board. So the Tinker Board community still has room to grow, but it's a little nicer, in my limited experience, than the communities around the other SBCs I've used.

The Pi is fast enough for most use cases, and the modest speed bumps in the model 3 B+ put it almost on par with the Tinker Board in some uses, which makes it hard to justify the Tinker Board's higher price.

Even if the Tinker Board isn't right for you, it's worth watching ASUS and their next moves in this space. I like the Tinker Board's color-coded GPIO header 100x more than the Pi's, and their guides, forums, downloads, and support are pretty decent—as long as they keep them up to date. I'm really interested to see whether ASUS will release a newer model in the next year that goes beyond even the current Tinker Board, performance-wise, as that would more than justify purchasing another one or two for me!

What about the Tinker Board S?

This year, ASUS released a slightly improved version of the Tinker Board, the Tinker Board S. It includes 16 GB of onboard eMMC memory—much faster for general computing than a microSD card—for an extra $20 or so. There are a few other small differences, but the eMMC addition is the biggest improvement. For my use cases, the eMMC doesn't make a big enough difference to justify the cost, so I'm more interested in products in the $50 range and lower. Once an SBC approaches $100 or so, there are other options (like a used Intel Core i5 desktop) which offer 10-100x the performance and infinite expansion options.

But that's just me—I am okay with a larger footprint desktop computer for use cases which require more desktop-like disk, CPU, and memory performance. If you need that kind of performance in an SBC form factor, the S might be a good option too. The Tinker Board S should be available for sale starting on April 19, 2018.
