When you install something, you expect to be able to remove it too. But when a reader came to uninstall BlueStacks, an Android emulator, from his Mac running High Sierra 10.13.2, he found the way blocked. The Finder kindly informed him that “The operation can’t be completed because you don’t have the necessary permission.”
The moment that we see the word permission, all becomes clear: it’s a permissions problem. So the next step is to select the offending item in the Finder, press Command-I to bring up the Get Info dialog, and change the permissions. It does, though, leave the slight puzzle as to why the Finder didn’t simply prompt for authentication instead of cussedly refusing.
Sure enough, after trying that, the app still won’t go and the error message is unchanged.
Another strange thing about this ‘app’ is that it’s not an app at all. Tucked away in /Library/StagedExtensions/Applications, a mysterious folder new to High Sierra, its icon is defaced to indicate that the user can’t even run it. Nor did the user install it there.
Trying to remove it using a conventional Terminal command sudo rm -rf /Library/StagedExtensions/Applications/BlueStacks.app also fails, with the report Operation not permitted.
High Sierra leaves the user wondering what has happened. There’s nothing in Apple’s scant documentation to explain how this strange situation has arisen, and seemingly nothing more that the user can do to discover what is wrong, or to do anything about it.
The clue comes from probing around in Terminal, specifically using a command like ls -lO /Library
Try that in High Sierra, and you’ll see
drwxr-xr-x@ 4 root wheel restricted 128 2 Jan 13:03 StagedExtensions
There are two relevant pieces of information revealed: the @ sign shows that the directory has extended attributes (xattrs), and the word restricted that it is protected by System Integrity Protection (SIP). A quick peek inside /Library/StagedExtensions/Applications/BlueStacks.app shows that it is a stub of an app, lacking any main code, but it does contain a kernel extension (KEXT) which is also protected by SIP, by virtue of being inside a SIP-protected folder.
> ls -lO /Library/StagedExtensions/Applications
drwxr-xr-x 3 root wheel restricted 96 2 Jan 13:03 BlueStacks.app
So how did this third-party kernel extension end up in this mysterious folder, complete with SIP protection? Surely SIP is there to protect macOS, not third-party app components installed later by the user? Who or what enabled SIP on that extension, and how can it be removed?
Perhaps unsurprisingly, even Apple’s developer documentation doesn’t seem to answer any of those questions. So here is what I have been able to discover.
High Sierra has a new mechanism for handling third-party kernel extensions (User-Approved Kernel Extension Loading, or UAKL), which requires the user to authorise them. When a third-party installer tries to install a kernel extension, you see a warning telling you that the extension has been blocked.
Assuming that you open Security preferences, you then click the Allow button there to permit the extension to be loaded.
High Sierra then packages the extension in the form of a non-executable stub app, which it installs in /Library/StagedExtensions/Applications. What you see there looks like a mutated form of the app.
When you later remove the app proper, everything suggests that it has gone for good.
But in truth, its kernel extension has been left in /Library/StagedExtensions/Applications/, looking just like an app.
In its infinite wisdom, Apple has given the folder /Library/StagedExtensions the full protection of SIP, by attaching a com.apple.rootless xattr to it.
My reading of that xattr is that only Apple’s KernelExtensionManagement service can give permission for changes to be made within that folder, and the folders within it.
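You can check that protection for yourself in Terminal. A quick look, using standard macOS commands (the exact output will vary between systems):
ls -ldO@ /Library/StagedExtensions
xattr /Library/StagedExtensions
The first lists the folder together with its flags and extended attributes; the second lists the names of the xattrs attached to it, which should include com.apple.rootless.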
So now the user cannot touch that residual extension, and they certainly can’t uninstall, move, or trash it. Until the user can gain access to that volume with its SIP inactive, that stub app and the extension inside it stay put. It has been suggested that macOS automatically cleans /Library/StagedExtensions, although I have yet to see any evidence of that occurring. Thus SIP prevents the user from uninstalling a third-party app which the user installed, even though the kernel extension might be rendering macOS unstable, or have other significant side-effects.
The solution is to restart in Recovery mode, and delete the stub app using Terminal there, with a command like rm -rf /Volumes/Macintosh\ HD/Library/StagedExtensions/Applications/BlueStacks.app
You don’t need to alter SIP there, as SIP is only applied to the startup volume. As you have now started up from the Recovery volume, SIP no longer protects the contents of your normal startup volume.
This is such a good piece of security that, when some malware does manage to slip an evil kernel extension past a user and is rewarded with the protection of SIP, neither the user nor any anti-malware tool will be able to remove that extension, unless the user restarts from a different boot volume, or KernelExtensionManagement allows it.
Unless I’m missing something here, this doesn’t seem particularly good.
We saw some really bad Intel CPU bugs in 2015, and we should expect to see more in the future
2015 was a pretty good year for Intel. Their quarterly earnings reports exceeded expectations every quarter. They continue to be the only game in town for the serious server market, which continues to grow exponentially; from the earnings reports of the two largest cloud vendors, we can see that AWS and Azure grew by 80% and 100%, respectively. That growth has effectively offset the damage Intel has seen from the continued decline of the desktop market. For a while, it looked like cloud vendors might be able to avoid the Intel tax by moving their computation onto FPGAs, but Intel bought one of the two serious FPGA vendors and, combined with their fab advantage, they look well positioned to dominate the high-end FPGA market the same way they’ve been dominating the high-end server CPU market. Also, their fine for anti-competitive practices turned out to be $1.45B, much less than the benefit they gained from their anti-competitive practices.
Things haven’t looked so great on the engineering/bugs side of things, though. I don’t keep track of Intel bugs unless they’re so serious that people I know are scrambling to get a patch in because of the potential impact, and I still heard about two severe bugs this year in the last quarter of the year alone. First, there was the bug found by Ben Serebrin and Jan Beulich, which allowed a guest VM to fault in a way that would cause the CPU to hang in a microcode infinite loop, allowing any VM to DoS its host.
Major cloud vendors were quite lucky that this bug was found by a Google engineer, and that Google decided to share its knowledge of the bug with its competitors before publicly disclosing. Black hats spend a lot of time trying to take down major services. I’m actually really impressed by both the persistence and the cleverness of the people who spend their time attacking the companies I work for. If, buried deep in our infrastructure, we have a bit of code running at DPC that’s vulnerable to slowdown because of some kind of hash collision, someone will find and exploit that, even if it takes a long and obscure sequence of events to make it happen. And they’ll often wait until an inconvenient time to start the attack, such as Christmas, or one of the big online shopping days. If this CPU microcode hang had been found by one of these black hats, there would have been major carnage for most cloud hosted services at the most inconvenient possible time.
The second bug was a hang affecting Skylake (6th Gen Core) processors under certain workloads. Intel’s description of the issue reads:
Intel has identified an issue that potentially affects the 6th Gen Intel® Core™ family of products. This issue only occurs under certain complex workload conditions, like those that may be encountered when running applications like Prime95. In those cases, the processor may hang or cause unpredictable system behavior.
which reveals almost nothing about what’s actually going on. If you look at their errata list, you’ll find that this is typical, except that they normally won’t even name the application that was used to trigger the bug. For example, one of the current errata lists has entries like
Certain Combinations of AVX Instructions May Cause Unpredictable System Behavior
AVX Gather Instruction That Should Result in #DF May Cause Unexpected System Behavior
Processor May Experience a Spurious LLC-Related Machine Check During Periods of High Activity
Page Fault May Report Incorrect Fault Information
As we’ve seen, “unexpected system behavior” can mean that we’re completely screwed. Machine checks aren’t great either – they cause Windows to blue screen and Linux to kernel panic. An incorrect address on a page fault is potentially even worse than a mere crash, and if you dig through the list you can find a lot of other scary sounding bugs.
And keep in mind that the Intel errata list has the following disclaimer:
Errata remain in the specification update throughout the product’s lifecycle, or until a particular stepping is no longer commercially available. Under these circumstances, errata removed from the specification update are archived and available upon request.
Once they stop manufacturing a stepping (the hardware equivalent of a point release), they reserve the right to remove the errata and you won’t be able to find out what errata your older stepping has unless you’re important enough to Intel.
Anyway, back to 2015. We’ve seen at least two serious bugs in Intel CPUs in the last quarter, and it’s almost certain there are more bugs lurking. Back when I worked at a company that produced Intel compatible CPUs, we did a fair amount of testing and characterization of Intel CPUs; as someone fresh out of school who’d previously assumed that CPUs basically worked, I was surprised by how many bugs we were able to find. Even though I never worked on the characterization and competitive analysis side of things, I still personally found multiple Intel CPU bugs just in the normal course of doing my job, poking around to verify things that seemed non-obvious to me. Turns out things that seem non-obvious to me are sometimes also non-obvious to Intel engineers. As more services move to the cloud and the impact of system hang and reset vulnerabilities increases, we’ll see more black hats investing time in finding CPU bugs. We should expect to see a lot more of these when people realize that it’s much easier than it seems to find these bugs. There was a time when a CPU family might only have one bug per year, with serious bugs happening once every few years, or even once a decade, but we’ve moved past that. In part, that’s because “unpredictable system behavior” has moved from being an annoying class of bugs that forces you to restart your computation to an attack vector that lets anyone with an AWS account attack random cloud-hosted services, but it’s mostly because CPUs have gotten more complex, making them more difficult to test and audit effectively, while Intel appears to be cutting back on validation effort. Ironically, we have hardware virtualization that’s supposed to help us with security, but the virtualization is so complicated that the hardware virtualization implementation is likely to expose “unpredictable system behavior” bugs that wouldn’t otherwise have existed. This isn’t to say it’s hopeless – it’s possible, in principle, to design CPUs such that a hang bug on one core doesn’t crash the entire system. It’s just that it’s a fair amount of work to do that at every level (cache directories, the uncore, etc., would have to be modified to operate when a core is hung, as well as OS schedulers). No one’s done the work because it hasn’t previously seemed important.
Update
After writing this, an ex-Intel employee said “even with your privileged access, you have no idea” and a pseudo-anonymous commenter on reddit made this comment:
As someone who worked in an Intel Validation group for SOCs until mid-2014 or so I can tell you, yes, you will see more CPU bugs from Intel than you have in the past from the post-FDIV-bug era until recently.
Why?
Let me set the scene: It’s late in 2013. Intel is frantic about losing the mobile CPU wars to ARM. Meetings with all the validation groups. Head honcho in charge of Validation says something to the effect of: “We need to move faster. Validation at Intel is taking much longer than it does for our competition. We need to do whatever we can to reduce those times… we can’t live forever in the shadow of the early 90’s FDIV bug, we need to move on. Our competition is moving much faster than we are” - I’m paraphrasing. Many of the engineers in the room could remember the FDIV bug and the ensuing problems caused for Intel 20 years prior. Many of us were aghast that someone highly placed would suggest we needed to cut corners in validation - that wasn’t explicitly said, of course, but that was the implicit message. That meeting there in late 2013 signaled a sea change at Intel to many of us who were there. And it didn’t seem like it was going to be a good kind of sea change. Some of us chose to get out while the getting was good.
I haven’t been able to confirm this story from another source I personally know, although another anonymous commenter said “I left INTC in mid 2013. From validation. This … is accurate compared with my experience.” Another anonymous person, someone I know, didn’t hear that speech, but found that at around that time, “velocity” became a buzzword and management spent a lot of time talking about how Intel needs more “velocity” to compete with ARM, which appears to confirm the sentiment, if not the actual speech.
I’ve also heard from formal methods people that, around that time, there was an exodus of formal verification folks. One story I’ve heard is that people left because they were worried about being made redundant. I’m told that, at the time, early retirement packages were being floated around and people strongly suspected layoffs. Another story I’ve heard is that things got really strange due to Intel’s focus on the mobile battle with ARM, and people wanted to leave before things got even worse. But it’s hard to say if this means anything, since Intel has been losing a lot of people to Apple because Apple offers better compensation packages and the promise of being less dysfunctional.
I also got anonymous stories about bugs. One person who works in HPC told me that when they were shopping for Haswell parts, a little bird told them that they’d see drastically reduced performance on variants with greater than 12 cores. When they tried building out both 12-core and 16-core systems, they found that they got noticeably better performance on their 12-core systems across a wide variety of workloads. That’s not better per-core performance – that’s better absolute performance. Adding 4 more cores reduced the performance on parallel workloads! That was true both in single-socket and two-socket benchmarks.
And of course Intel isn’t the only company with bugs – this AMD bug found by Robert Swiecki not only allows a VM to crash its host, it also allows a VM to take over the host.
I doubt I’ve even heard of all the recent bugs and stories about verification/validation. Feel free to send other reports my way.
More updates
A number of folks have noticed unusual failure rates in storage devices and switches. This appears to be related to an Intel Atom bug. I find this interesting because the Atom is a relatively simple chip, and therefore a relatively simple chip to verify. When the first-gen Atom was released, folks at Intel seemed proud of how few internal spins of the chip were needed to ship a working production chip, something made possible by the simplicity of the design. Modern Atoms are more complicated, but not that much more complicated.
On the AMD side, there might be a bug that’s as serious as any recent Intel CPU bug. If you read that linked thread, you’ll see an AMD representative asking people to disable SMT and OPCache Control and to change LLC settings to possibly mitigate or narrow down a serious crashing bug. On another thread, you can find someone reporting an #MC exception with “u-op cache crc mismatch”.
Some FreeBSD folks have also noticed seemingly unrelated crashes and have been able to get a reproduction by running code at a high address and then firing an interrupt. This can result in a hang or a crash. The reason this appears to be unrelated to the first reported Ryzen issues is that this is easily reproducible with SMT disabled.
There is a bug in Ryzen related to the kernel iretq’ing into a high user %rip address near the end of the user address space (top of user stack). This is a temporary workaround for the issue.
The original %rip for sigtramp was 0x00007fffffffffe0. Moving it down to fa0 wasn’t sufficient. Moving it down to f00 moved the bug from nearly instant to taking a few hours to reproduce. Moving it down to be0 it took a day to reproduce. Moving it down to 0x00007ffffffffba0 (this commit) survived the overnight test.
Thanks to Leah Hanson, Jeff Ligouri, Derek Slager, Ralph Corderoy, Joe Wilder, Nate Martin, Hari Angepat, JonLuca De Caro, and a number of anonymous tipsters for comments/corrections/discussion.
A little over a year ago, I joined Cloud Foundry
to work on Loggregator,
Cloud Foundry’s application logging component. Its core concern is best-effort
log delivery without pushing back on upstream writers. Loggregator is written
entirely in Go.
After spending more than a thousand hours working with Go in a non-trivial
code base, I still admire the language and enjoy using it. Nonetheless, our
team struggled with a number of problems, many of which seem unique to Go.
What follows is a list of the most salient problems.
Project Organization
Cloud Foundry was an early adopter of Go at a time when few people knew what
idiomatic Go looked like or knew how to structure a large project. As a result,
a year ago Loggregator suffered from a haphazard organization which made
understanding the code difficult, let alone identifying dead code paths or places for
possible refactoring. There seemed to be a tendency to extract tiny packages first
instead of waiting for a shared concern to emerge from the code and only then
extracting a package. There were many examples of stuttering between package
names and types. Worst of all, there was little reusable code in the project.
Given the code’s state of organization, Peter Bourgon’s advice
on how to organize Go code has been invaluable, as has the rest of his
material on best practices. Likewise, the Go blog’s post
on package names provides
many helpful guiding principles. For especially large projects, the distinction
between cmd and pkg has become a best practice. See, for example, Moby,
Kubernetes, and Delve. More recently, there is the
excellent Style guideline for Go packages.
When just starting a project, the pkg directory seems unnecessary. I prefer to
begin with a cmd directory and whatever Go files are needed at the top level, as
if the project were a library. As the project grows, I like to identify packages,
which start out as peers of the cmd directory. When the time seems right, it is
easy to move those various peers of cmd into a pkg directory.
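As a sketch of that progression (hypothetical names, purely for illustration),
a project might start out like this:

myservice/
    cmd/
        myservice/
            main.go      (thin entry point)
    ingress/             (library packages begin life as peers of cmd)
    egress/

and, once enough packages have accumulated, gather them under pkg:

myservice/
    cmd/
        myservice/
            main.go
    pkg/
        ingress/
        egress/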
Small main functions
Just as a poorly organized project results in a ball of mud, a careless approach
to a main function can result in needless complexity. Compare two versions of the
same main function: before and after.
One version is over 400 lines. The other is about 40 lines. That’s an
order-of-magnitude difference. One will be easy to change. The other will not be. Delve
is exemplary in its clean and focused main function.
A main function should be a particular invocation of library code. That means
collecting any input necessary for the process and then passing that input to
library code. This style of main functions is more likely to result in testable
and composable code.
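As a minimal sketch of that shape (hypothetical package and flag names, not
Loggregator’s actual code), such a main might look like:

package main

import (
	"flag"
	"log"

	"example.com/myapp/internal/server" // hypothetical library package
)

func main() {
	// Collect the input the process needs...
	addr := flag.String("addr", ":8080", "address to listen on")
	flag.Parse()

	// ...and hand it straight to library code, which does the real work
	// and can be tested on its own.
	if err := server.Run(*addr); err != nil {
		log.Fatal(err)
	}
}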
Dependency Management
Dependency management has been a perennial topic in the Go community. Loggregator
has used git submodules to vendor dependencies. The approach works, but it’s also
cumbersome. Spending some time with Rust has reminded me how sorely Go needs an
officially supported dependency management tool as part of the Go toolchain.
The work on dep is encouraging.
Documentation
Without running Go Meta Linter
regularly, all sorts of mistakes will creep into a code base. In particular,
I have discovered the value of Package Driven Development, i.e., writing code
that looks good when running godoc some-package, a practice which shares a
history with conventions in the Python community.
The documentation for a package should be easy to understand, it should
be intention revealing, and it should be meaningful.
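For example, a package comment along these lines (a hypothetical package, shown
only to illustrate the kind of summary that reads well in godoc output):

// Package batching groups log envelopes into fixed-size batches before
// flushing them downstream, dropping rather than blocking when a batch
// cannot be flushed in time.
package batching

// MaxBatchSize is the largest number of envelopes a single batch may hold.
const MaxBatchSize = 100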
Over the course of the year, on numerous occasions, I lamented the lack of
documentation for Loggregator internals, which slowed down the process of
understanding even further. Fortunately, our team has come to share the view
that documentation is important and has been gradually working to ensure all
files within the project pass the Go Meta Linter.
Writing Performant Code starts with measuring
Go is capable of fast performance. It is tempting to prematurely optimize code
with the idea that a particular design is “faster.” In fact, until you have
measured current performance and determined that it is inadequate,
“faster” is a totally meaningless word.
Such a statement is hardly controversial, and yet I have worked with numerous
well-intentioned individuals who immediately reach for sophisticated designs
on the dubious grounds of their being “faster.” Fortunately, there is a strong
interest in the discipline of writing high performance code. See, for example,
Dave Cheney’s
High Performance Go Workshop
or Damian Gryski’s in-progress
book on Go Performance.
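The measuring itself is cheap to set up in Go. A minimal benchmark sketch
(a hypothetical joinLines function, in a file named joiner_test.go that sits
alongside the rest of the package), run with go test -bench=. -benchmem to get
time and allocations per operation:

package joiner

import (
	"strings"
	"testing"
)

// joinLines stands in for whatever code you suspect is slow.
func joinLines(lines []string) string {
	return strings.Join(lines, "\n")
}

// sink keeps the compiler from optimising the call away.
var sink string

func BenchmarkJoinLines(b *testing.B) {
	lines := []string{"loggregator", "emits", "lots", "of", "log", "lines"}
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		sink = joinLines(lines)
	}
}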
Having a shared nomenclature for testing
There seems to be a consensus that writing well-tested code is important.
What is lacking, though, is a clear understanding of the differences between
test-doubles, mocks, spies, stubs, and fakes. Uncle Bob has explained what each
of these terms means in his The Little Mocker.
On Loggregator, we had a mix of all these terms and they were rarely used
correctly. It may seem pedantic to insist on using these terms correctly, but
then again, software engineering leaves little room for ambiguity, so
why would we not use the same standards for our choice of words? In my view,
a shared nomenclature – what has elsewhere been called a “ubiquitous language” –
is the first and most important thing for a team of engineers.
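To make the vocabulary concrete, here is a small Go sketch (a hypothetical
Emitter interface, used only to illustrate the terms) of a stub next to a spy:

package metrics

// Emitter is the collaborator that the code under test depends on.
type Emitter interface {
	Emit(name string, value float64) error
}

// stubEmitter is a stub: it only returns a canned answer.
type stubEmitter struct{ err error }

func (s stubEmitter) Emit(string, float64) error { return s.err }

// spyEmitter is a spy: it also records how it was called, so a test can
// assert on those calls afterwards.
type spyEmitter struct {
	names  []string
	values []float64
}

func (s *spyEmitter) Emit(name string, value float64) error {
	s.names = append(s.names, name)
	s.values = append(s.values, value)
	return nil
}

A fake, by contrast, would be a working in-memory implementation, and a mock
would build the expected calls and assertions into the double itself.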
The Basics
Finally, both for people new to Go and people experienced with Go, I continue
to find immense value in the following posts.
Edit
Thanks to Jason Keene for reading through this
post and pointing out GoDoc’s relationship to Python.
His team produced a successful engine: The prototype of a stylish rotary-powered coupe called the Cosmo made its debut at the Tokyo Motor Show in 1963. Mr. Yamamoto patiently talked about the Cosmo to visitors and then drove the car across Japan with Toyo Kogyo’s president over two weeks.
Photo: A 1967 Mazda Cosmo Sport, powered by a rotary engine developed by Kenichi Yamamoto and his team. Credit: Mazda Motor Corporation
Rotary engines were mass produced and featured in the company’s sporty compacts, but they had a significant flaw: poor fuel economy, which became a liability when energy crises struck in the early 1970s. Sales dropped, and the company was close to bankruptcy.
In 1974, Mr. Yamamoto became the head of Toyo Kogyo’s project to find fuel-saving innovations; he was adamant that the company could not abandon a breakthrough technology that set it apart from its competitors.
“It would have announced to the world that what we had started doing was not good,” he told The Times. “And then we wouldn’t have been able to succeed at anything — even just selling the piston engine.”
The engine overhaul worked. The engine’s fuel economy rose significantly. And sales of the rotary-powered Mazda RX-7 model soared.
But while the rotary engines remained the hallmark of the company’s innovation, the fuel shocks of the 1970s led it to rely increasingly on piston-engine vehicles, like the GLC.
Mr. Yamamoto was born in Japan on Sept. 16, 1922, in Kumamoto Prefecture, on the southwestern island of Kyushu, and later moved with his family to Hiroshima, where Mazda has long had its headquarters. He graduated with a mechanical engineering degree from what was then called Tokyo Imperial University (now the University of Tokyo) and served in the Japanese Navy.
The detonation of the first atomic bomb by the United States over Hiroshima on Aug. 6, 1945, killed Mr. Yamamoto’s sister and destroyed the family’s home. But his parents were still alive. He returned to Hiroshima and was hired by Toyo Kogyo to work at a plant outside the city that made transmissions for its three-wheeled trucks.
It was grimy work, but it allowed him to explore the engineering challenges of making transmissions. One day, he found blueprints of the transmissions “and he began to check their tolerances, acting as his own quality control,” according to the website Japanese Nostalgic Car.
His diligence and inquisitiveness placed him on a path to management.
Don Sherman, a longtime automotive journalist, remembered Mr. Yamamoto as warm and candid. “He’d greet you like a long-lost friend,” he said in a phone interview. “And he’d ask you basic questions, like: ‘What should Mazda do? Where should Mazda go?’ ”
Photo: A rotary engine Mr. Yamamoto and his team developed in the 1960s. Credit: Mazda Motor Corporation
In 1985, after Mr. Yamamoto became president, he recommended that the company’s board approve production of the car that became the MX-5 Miata, which proved to be immensely popular.
One of Mr. Yamamoto’s other priorities as president was to expand Mazda’s presence in the United States by building an assembly plant in Flat Rock, Mich. At the groundbreaking, in 1985, he acknowledged the difficulty of bringing a Japanese production system to the Midwest.
“We recognized that the guiding principles to which we have long subscribed in operating our company will be put to a real test here at Flat Rock,” he said.
The factory became a symbol of Mazda’s continuing partnership with the Ford Motor Company, which bought 50 percent of the plant in 1992, augmenting its existing 25 percent stake in Mazda.
Mr. Yamamoto stepped down that year after five years as chairman. Mazda’s last mass-produced rotary-engine car was the 2012 RX-8.
Information on his survivors was not immediately available.
In 2003, Mr. Yamamoto reminisced about pioneering the rotary engine, which went on to power 1.8 million Mazda vehicles.
A report that Intel Corp. chips are vulnerable to hackers raised concerns about the company’s main products and brand.
On Tuesday, the technology website The Register said a bug lets some software gain access to parts of a computer’s memory that are set aside to protect things like passwords. All computers with Intel chips from the past 10 years appear to be affected, the report said, and patches to Microsoft Corp.’s Windows and Apple Inc.’s OS X operating systems will be required. The security updates may slow down older machinery by as much as 30 percent, according to The Register.
Flaws in the designs of microprocessors, which go through rigorous testing and verification, are usually easily fixed by patches in the code that they use to communicate with the rest of the computer. But if the error can’t be fixed easily in software, it could be necessary to redesign the chip, which can be extremely costly and time consuming.
Intel is expected to put out a statement but hasn’t yet commented on the issue. Historically, the way companies respond to such issues and how quickly they address them has determined how big the problem becomes.
“This is a potential PR nightmare,” said Dan Ives, head of tech research at GBH Insights. “They need to get ahead of this and try to contain any of the damage to customers as well to the brand.”
The report hit Intel shares, which fell as much as 3.8 percent, the most since April, and gave a boost to rival Advanced Micro Devices Inc., which surged as much as 7.3 percent to $11.78 Wednesday. An Intel spokesman declined to comment.
Chip design flaws are exceedingly rare. More than 20 years ago, a college professor discovered a problem with how early versions of Intel’s Pentium chip calculated numbers. Rival International Business Machines Corp. was able to make use of the finding and claim Intel products would cause frequent problems for consumers’ computers. While that didn’t happen, Intel had to recall some chips and took a charge of more than $400 million.
Intel’s microprocessors are the fundamental building block of the internet, corporate networks and PCs. The company has added to its designs over the years trying to make computers less vulnerable to attack, arguing that hardware security is typically tougher to crack than software.
The Santa Clara, California-based company’s chips have more than 80 percent market share overall and more than 90 percent in laptops and servers.
Programmers have been working for two months to try to patch the flaw in the open-source Linux operating system, The Register said, adding that Microsoft was expected to release a patch for the issue soon.
Unlike the failed CyanogenMod, Duval has no intention of turning eelo into a business. "I want eelo to be a non-profit project 'in the public interest,'" he said.
Before turning his talents to building eelo, Duval looked at alternatives. "I looked at Firefox OS," he said, "but I want eelo to be 'for Mum and Dad.'"
Duval, however, will not try to create a Linux-based smartphone operating system as others have attempted. That's because, frankly, building a complete operating system on smartphone hardware isn't easy. Just ask Mozilla, Canonical, or even Microsoft. Instead, Duval is launching eelo from the existing Android clone LineageOS.
LineageOS is a CyanogenMod fork. But, Duval explained, it's not enough for his purposes: "The core of AOSP [Android Open Source Project]/LineageOS is usable, and performing well, but it's not good enough for my needs: the design is not very attractive and there are tons of micro-details that can be showstoppers for a regular user. Also, unless you are a geek, LineageOS is not realistically usable if you don't want Google inside."
Duval admits he's no Android expert. "The bad news is that I'm new to Android development and I don't consider myself a great developer," he said. Fortunately, "The good news is that I have found a very talented full-stack developer who is interested in the project. We have agreed, as a first collaboration, to release a new launcher, new notification system and new 'control center.'"
After several weeks of development, eelo is running as a beta.
The real challenge isn't building a new front-end. It's removing Google Play Store, Google Play Services, and Google Services. That's not easy. While Android developers don't have to use any of them, they are very useful.
For installing programs, Duval is turning to the alternative Android program repositories F-Droid and APKPure. Ideally, he wants an "eelo store," which would deliver both official free applications like APKPure and open-source applications such as those offered in F-Droid.
To replace Google Services, Duval plans on using MicroG. This is an open-source implementation of Google's proprietary Android user space apps and libraries. To deal with programs that use Google's SafetyNet Attestation Application Programming Interface (API) -- an API that checks to make sure the application runs in a Google Android compliant environment -- Duval thinks eelo will probably use Magisk Manager. This is a program that enables Android applications to run on smartphones, such as rooted systems, that would normally block them.
For search, the plan is to offer privacy-enabled DuckDuckGo and the new privacy oriented search engine Qwant. You'll also be able to pick your own search engine, since as Duval admits, "in some cases, it [Google] is still offering the best results."
Then, there are all the invisible internet services most people never think about, such as Domain Name System (DNS), which can also be used to track you. To deal with this, by default, eelo will use the Quad 9 DNS. The Global Cyber Alliance (GCA)'s Quad 9 preserves privacy while blocking access to known malicious sites.
Low-level proprietary smartphone hardware drivers remain a problem -- but, short of building an eelo phone from the circuits up, that's beyond eelo's current scope.
It's still early days for eelo, and Duval is welcoming support both on eelo's Kickstarter page, where the current goal is to raise $120,000, and by talking directly to him via e-mail at gael@eelo.io or by following him on Twitter or Mastodon.
Can it work? While alternatives to Android and iOS have failed more often than not, Android forks have had more success. With people increasingly desiring more privacy, I think eelo has an excellent chance of becoming a viable niche operating system.
Hyperapp is a JavaScript library for building frontend applications.
Minimal: Hyperapp was born out of the attempt to do more with less. We have aggressively minimized the concepts you need to understand while remaining on par with what other frameworks can do.
Functional: Hyperapp's design is inspired by The Elm Architecture. Create scalable browser-based applications using a functional paradigm. The twist is you don't have to learn a new language.
Batteries-included: Out of the box, Hyperapp combines state management with a VDOM engine that supports keyed updates & lifecycle events — all with no dependencies.
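As a rough sketch of how those pieces fit together (based on the 1.x-era h and app API, not copied from the project's documentation), a counter fits in a few lines:

import { h, app } from "hyperapp"

const state = { count: 0 }

const actions = {
  down: () => state => ({ count: state.count - 1 }),
  up: () => state => ({ count: state.count + 1 })
}

const view = (state, actions) =>
  h("div", {}, [
    h("h1", {}, state.count),
    h("button", { onclick: actions.down }, "-"),
    h("button", { onclick: actions.up }, "+")
  ])

app(state, actions, view, document.body)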
Last year, Silicon Valley entrepreneur Doug Evans brought us the Juicero machine, a $400 gadget designed solely to squeeze eight ounces of liquid from proprietary bags of fruits and vegetables, which went for $5 to $8 apiece. Though the cold-pressed juice company initially wrung millions from investors, its profits ran dry last fall after journalists at Bloomberg revealed that the pricy pouch-pressing machine was, in fact, unnecessary. The journalists simply squeezed juice out of the bags by hand.
But this didn’t crush Evans. He immediately plunged into a new—and yet somehow even more dubious—beverage trend: “raw” water.
The term refers to unfiltered, untreated, unsterilized water collected from natural springs. In the ten days following Juicero’s collapse, Evans underwent a cleanse, drinking only raw water from a company called Live Water, according to The New York Times. “I haven’t tasted tap water in a long time,” he told the Times. And Evans isn’t alone; he’s a prominent member of a growing movement to “get off the water grid,” the paper reports.
Members are taking up the unrefined drink due to both concern for the quality of tap water and the perceived benefits of drinking water in a natural state. Raw water enthusiasts are wary of the potential for contaminants in municipal water, such as traces of unfilterable pharmaceuticals and lead from plumbing. Some are concerned by harmless additives in tap water, such as disinfectants and fluoride, which effectively reduces tooth decay. Moreover, many believe that drinking “living” water that’s organically laden with minerals, bacteria, and other “natural” compounds has health benefits, such as boosting “energy” and “peacefulness.”
Mukhande Singh (né Christopher Sanborn), founder of Live Water, told the Times that tap water was “dead” water. “Tap water? You’re drinking toilet water with birth control drugs in them,” he said. “Chloramine, and on top of that they’re putting in fluoride. Call me a conspiracy theorist, but it’s a mind-control drug that has no benefit to our dental health.” (Note: There is plenty of data showing that fluoride improves dental health, but none showing water-based mind control.)
Dirty water
Three years ago, Singh began selling raw water collected from Opal Springs in Culver, Oregon, which he claims contains unique probiotics. Consumers in certain areas of California can now sign up for raw water deliveries for as much as $6.40 per gallon.
Raw water, unlike treated tap or bottled water, clearly poses risks—and its benefits are unproven.
Natural water sources are vulnerable to all manner of natural pathogens. These include any bacteria, viruses, and parasites normally found in water or shed from nearby flora and fauna, such as Legionella and Giardia lamblia. They also can easily pick up environmental contaminants and naturally occurring hazards such as radiation from certain mineral deposits. Under the Safe Drinking Water Act, the EPA has set standards and regulations for 90 different contaminants in tap water, including microorganisms, disinfectants, and radionuclides. And for bottled water, the Food and Drug Administration has set standards and can inspect bottling facilities. But such assurances aren’t in place for scouted spring water.
For its part, Live Water posted on its website a water quality report from an analysis conducted in 2015. The analysis looked at many contaminants but doesn’t appear to cover everything that the EPA monitors. For instance, there’s no mention of testing for pathogens such as Legionella and Giardia.
Crude microbiology
Live Water did try to identify some bacteria present, though. Through third-party testing, Live Water identified bacteria that it claims are probiotics with health benefits. On its website, Live Water attempts to back up this claim by linking to a study that, according to the raw water company, "prov[es] raw spring water has vast healing abilities." However, the linked study does no such thing. In the authors' own words, the study "provided only preliminary data" on the presence of certain nonpathogenic bacteria in water from a spring in Italy. The authors merely speculate that these bacteria may produce beneficial "molecular mediators" that “thus far, remain unknown."
Results of third-party microbial testing of Live Water's water.
Additionally, the bacteria isolated from the Italian spring water are a different set than those found in Live Water’s water. The two water samples only have one bacterium in common, Pseudomonas putida, which has no established health benefits. P. putida is a species of soil bacteria well known for degrading organic solvents, such as toluene, which is found in coal tar and petroleum. As such, the species is thought of as a potential tool to clean up contaminated soils (aka, bioremediation).
Live Water also found Pseudomonas oleovorans in its water. This is an environmental bacterium and opportunistic pathogen. Lastly, the company reports unidentified Pseudomonas species and unidentified species in the Acidovorax genus. Without species-level identification, it's not possible to know what these bacteria may be up to in water. Both genera contain well-known plant-associated bacteria, but Pseudomonas contains well-studied human pathogens, too, such as P. aeruginosa, which is drug resistant and tends to plague patients with cystic fibrosis.
Live Water goes further on its website, adding that “beneficial bacteria are also proven to have abilities to transform harmful bacteria." This, a reader could infer, suggests that the bacteria present in the raw water may reduce or protect drinkers from bacterial pathogens. But to support that statement, Live Water links to a Wikipedia page about phage therapy, which uses viruses (not bacteria) to combat bacterial infections (phage or bacteriophage are terms for viruses that infect bacteria).
Ars reached out to Live Water and asked about all these issues as well as its water testing, but the company did not immediately respond. If Live Water does get back to us, we’ll update this story.
At ZeroCater, we help bring companies together over shared meals. We’ve scaled to well over $100M in sales with minimal funding while feeding hungry offices across six markets.
As we move deeper into corporate food programs, we’re looking to grow our < 10 person engineering team with a Software Engineering Manager to help grow and scale our business. This hands-on leader will be responsible for mentoring a small team, building & improving code across the entire stack, and establishing and participating in cross-functional teams to build new functionality with a focus on creating delight for clients.
This is a high-visibility role that will be groomed into a Director position, so it will need to sit at our San Francisco headquarters.
SAN FRANCISCO (Reuters) - Tesla Inc (TSLA.O) delayed a production target for its new Model 3 sedan for the second time on Wednesday, disappointing investors even as it claimed “major progress” overcoming manufacturing challenges that have hampered the vehicle’s rollout.
The electric vehicle maker headed by Elon Musk said it would likely build about 2,500 Model 3s per week by the end of the first quarter, half the number it had earlier promised. Instead, Tesla said it now plans to reach its goal of 5,000 vehicles per week by the end of the second quarter.
The delay sent shares of the Palo Alto, California-based company down 2 percent in after-market trading.
The Model 3 is critical to Tesla’s long-term success, as it is the most affordable of its cars to date and is the only one capable of transforming the niche automaker to a mass producer amid a sea of rivals entering the nascent electric vehicle market.
Building the car efficiently and delivering it without delays to customers is also critical, as the money-losing company faces high cash burn. Delays increase the risk that reservation-holders will cancel orders.
“The further delay to (production volume) will leave analysts and investors focused on the implications for cash as we head through the first half of the year,” Evercore analyst George Galliers told Reuters.
The company burned through $1.1 billion in capital expenditures in its third quarter and said in November that fourth-quarter capex would also be about $1.1 billion.
RBC Capital Markets analyst Joseph Spak wrote in a note that he did not believe Tesla will be required to do a capital raise.
“We have them hovering about $1 billion in cash ... They don’t have a ton of wiggle room though in our view,” Spak said.
In delivering 1,550 of its new Model 3 electric vehicles in the fourth quarter, Tesla fell short of Wall Street expectations. Analysts had expected 4,100 Model 3 sedans to be delivered in the fourth quarter, according to financial data and analytics firm FactSet.
The estimates for Model 3 deliveries by different brokerages varied widely. While Evercore analysts estimated 5,800 deliveries, Cowen analysts expected just 2,250.
FILE PHOTO: Tesla Model 3 cars wait for their new owners as they come off the Fremont factory's production line during an event at the company's facilities in Fremont, California, U.S., July 28, 2017. REUTERS/Alexandria Sage/File Photo
Tesla said 860 Model 3 vehicles were in transit to customers at the end of the fourth quarter.
The company said it delivered a total of 29,870 vehicles in the fourth quarter, including 15,200 Model S vehicles and 13,120 Model X cars. Analysts had expected total deliveries of about 30,000.
PRODUCTION ACCELERATING
Tesla had initially expected to reach the milestone of 5,000 vehicles per week in December, but in November it deferred the target to the end of the first quarter.
Tesla said on Wednesday its production rate had increased significantly despite the delays.
“In the last seven working days of the quarter, we made 793 Model 3s, and in the last few days, we hit a production rate on each of our manufacturing lines that extrapolates to over 1,000 Model 3s per week,” the company said in a statement.
That pace of production is still below that of many carmakers. A conventional car factory operating at full speed on two shifts can churn out nearly 1,000 vehicles a day.
The Model 3 - which starts at $35,000, or about half the price of its flagship Model S - was met with great enthusiasm when its prototype was first unveiled in early 2016. Tesla said in August it had about 500,000 reservations for the car and demand was not a constraint.
Production, however, hit snags during a period of “manufacturing hell” Musk first warned of in July. Among the issues Tesla faced was its battery module assembly line at its Nevada Gigafactory, which required a redesign.
In its third quarter, Tesla built just 260 Model 3s.
Although bullish investors have generally waved off early Model 3 problems, instead focusing on the future prospects of Tesla, critics warn that Model 3 issues could sour demand for Tesla’s mass-market vehicle, delay sorely needed revenue and compromise the company’s ability to raise cash in the future.
Tesla shares, despite paring some gains from a high of $385 in September, still trade at 46 percent above their price a year ago.
Reporting by Alexandria Sage in San Francisco and Supantha Mukherjee in Bengaluru; Editing by Shounak Dasgupta and Tom Brown
On Wed, Jan 3, 2018 at 3:09 PM, Andi Kleen <andi@firstfloor.org> wrote:
> This is a fix for Variant 2 in
> https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
>
> Any speculative indirect calls in the kernel can be tricked
> to execute any kernel code, which may allow side channel
> attacks that can leak arbitrary kernel data.
Why is this all done without any configuration options?
A *competent* CPU engineer would fix this by making sure speculation doesn't happen across protection domains. Maybe even a L1 I$ that is keyed by CPL.
I think somebody inside of Intel needs to really take a long hard look at their CPU's, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed.
.. and that really means that all these mitigation patches should be written with "not all CPU's are crap" in mind.
Or is Intel basically saying "we are committed to selling you shit forever and ever, and never fixing anything"?
Because if that's the case, maybe we should start looking towards the ARM64 people more.
Please talk to management. Because I really see exactly two possibilities:
- Intel never intends to fix anything
OR
- these workarounds should have a way to disable them.
Opera today launched version 50 of its desktop browser. Sadly, this release doesn’t come with a cake to celebrate this milestone (not even a tiny cupcake), but the newest release does include a new feature that makes sure that nobody can mine cryptocurrencies in your browser.
While browsers and JavaScript aren’t exactly the most efficient way to mine coins, the sheer number of users who could be running these scripts makes up for that (and the fact that the attackers don’t have to pay for power also helps). For the most part, though, these sites mine coins like Monero that use very compute-heavy algorithms where CPUs are able to compete with what is traditionally a GPU-centric approach (reportedly, that’s also what North Korean hacking units occasionally use to mine coins on hijacked machines).
It’s worth noting that there are extensions for Chrome and Firefox that will perform the same service for users on those browsers. In Opera, this new anti-cryptojacking feature is automatically enabled when you turn on the browser’s ad blocking tool.
“We are fans of cryptocurrencies but we simply don’t accept that websites are using people’s computers to mine coins without their knowledge or consent,” said Krystian Kolondra, head of Desktop Browser at Opera. “With the new Opera 50, we want to kick off 2018 by providing people a simple way to regain control of their computers.”
How much does Opera love cryptocurrencies? Enough to build a currency converter for Bitcoin, Ethereum, Bitcoin Cash and Litecoin into its browser.
Other new features in Opera 50 include support for streaming videos to Chromecast and a built-in VR player that lets Oculus Rift users enjoy 360-degree videos in their headsets.
Still from an industry-funded ad warning against municipal broadband in Fort Collins, Colorado.
The city council in Fort Collins, Colorado, last night voted to move ahead with a municipal fiber broadband network providing gigabit speeds, two months after the cable industry failed to stop the project.
Last night's city council vote came after residents of Fort Collins approved a ballot question that authorized the city to build a broadband network. The ballot question, passed in November, didn't guarantee that the network would be built because city council approval was still required, but that hurdle is now cleared. Residents approved the ballot question despite an anti-municipal broadband lobbying campaign backed by groups funded by Comcast and CenturyLink.
The Fort Collins City Council voted 7-0 to approve the broadband-related measures, a city government spokesperson confirmed to Ars today.
"Last night's three unanimous votes begin the process of building our city's own broadband network," Glen Akins, a resident who helped lead the pro-municipal broadband campaign, told Ars today. "We're extremely pleased the entire city council voted to support the network after the voters' hard fought election victory late last year. The municipal broadband network will make Fort Collins an even more incredible place to live."
Net neutrality and privacy
While the Federal Communications Commission has voted to eliminate the nation's net neutrality rules, the municipal broadband network will be neutral and without data caps.
"The network will deliver a 'net-neutral' competitive unfettered data offering that does not impose caps or usage limits on one use of data over another (i.e., does not limit streaming or charge rates based on type of use)," a new planning document says. "All application providers (data, voice, video, cloud services) are equally able to provide their services, and consumers' access to advanced data opens up the marketplace."
The city will also be developing policies to protect consumers' privacy. FCC privacy rules that would have protected all Americans were eliminated by the Republican-controlled Congress last year.
The items approved last night (detailed here and here) provide a $1.8 million loan from the city's general fund to the electric utility for first-year start-up costs related to building telecommunications facilities and services. Later, bonds will be "issued to support the total broadband build out," the measure says.
The city intends to provide gigabit service for $70 a month or less and a cheaper Internet tier. Underground wiring for improved reliability and "universal coverage" are two of the key goals listed in the measure.
Building a citywide network is a lengthy process—the city says its goal is to be done in "less than five years."
Telecom lobby failure
The telecom industry-led campaign against the project spent more than $900,000, most of which was supplied by the Colorado Cable Telecommunications Association. Comcast is a member of that lobby group.
Fort Collins Mayor Wade Troxell criticized incumbent ISPs and the local Chamber of Commerce for spreading "misinformation" to voters, The Coloradoan reported at the time.
The pro-municipal broadband effort led by community members won despite spending just $15,000. More than 57 percent of voters approved the measure.
"We're incredibly excited about the voting results from last night," Colin Garfield, who led the residents' pro-broadband effort, told Ars today. "The tireless work our committee performed and the voice of the voters have been rewarded."
Intel and other technology companies have been made aware of new security research describing software analysis methods that, when used for malicious purposes, have the potential to improperly gather sensitive data from computing devices that are operating as designed. Intel believes these exploits do not have the potential to corrupt, modify or delete data.
Recent reports that these exploits are caused by a “bug” or a “flaw” and are unique to Intel products are incorrect. Based on the analysis to date, many types of computing devices — with many different vendors’ processors and operating systems — are susceptible to these exploits.
Intel is committed to product and customer security and is working closely with many other technology companies, including AMD, ARM Holdings and several operating system vendors, to develop an industry-wide approach to resolve this issue promptly and constructively. Intel has begun providing software and firmware updates to mitigate these exploits. Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time.
Intel is committed to the industry best practice of responsible disclosure of potential security issues, which is why Intel and other vendors had planned to disclose this issue next week when more software and firmware updates will be available. However, Intel is making this statement today because of the current inaccurate media reports.
Check with your operating system vendor or system manufacturer and apply any available updates as soon as they are available. Following good security practices that protect against malware in general will also help protect against possible exploitation until updates can be applied.
Intel believes its products are the most secure in the world and that, with the support of its partners, the current solutions to this issue provide the best possible security for its customers.
On the last day of his life, Gary Benefield expressed hope for the future. He was finally about to “get right,” he said.
A Harley-riding tough guy and retired utility worker, Mr. Benefield had let addiction get the better of him. He was downing a dozen Budweisers a day and smoking nonstop, despite needing an oxygen tank to breathe. But that July day in 2010, he was headed to A Better Tomorrow, a California treatment center promising 24-hour care while he got sober.
Mr. Benefield was about to join the millions of Americans who have placed their fate in the hands of the nation’s sprawling, haphazardly regulated addiction-treatment industry. It is a wildly profitable business, thanks in large part to addicts like Gary.
He kissed his wife, Kelly, goodbye at the tiny airport in Show Low, Ariz., a town named for the lucky turn of a playing card more than a century earlier. “He told me he loved me,” she said later. That evening, he checked in at the treatment center.
The next morning, as dawn broke over A Better Tomorrow’s postage-stamp lawn and stucco walls, two new voice mail messages awaited an employee there.
One came from Kelly Benefield, asking: Could someone help her bring Gary a cake that day? It was his 53rd birthday.
The other came from a manager: Gary Benefield was dead.
America is convulsed by an addiction crisis — painkillers, heroin, alcoholism, meth — and its victims die with tragic regularity. But Mr. Benefield’s case is extraordinary.
His death in July 2010 kicked off a six-year battle that very nearly brought down one of the biggest addiction-treatment companies in the country, an epic clash between an addiction-treatment multimillionaire and a college kid and budding financial wizard.
On one side was Michael Cartwright, a former addict pursuing his dream of building a nationwide empire of trustworthy drug-treatment clinics — a kind of Mayo Clinic for addiction. His Nashville, Tenn., company owned the clinic where Mr. Benefield had died.
On the other was Chris Drose, who, uninspired by his class work at Furman University, became fascinated with short selling — a risky investment strategy of trying to make money by betting that a company’s stock will fall.
In Mr. Cartwright’s company, the college junior saw a very big “short.” And he attacked.
Addiction treatment is one of the most lucrative health care industries to emerge in a generation, a massive business fed by a national addiction crisis that, by most measures, is out of control. Drug overdoses kill more Americans than car accidents, but only a fraction of the country’s 23 million addicts go into rehab, creating an untapped market — and an enormous business opportunity.
Yet the industry focused on curing addiction has its shortcomings. One of the most significant: There is little consensus on the most effective ways to treat patients.
Should patients travel far from home, as Mr. Benefield did, to isolate them from temptation? Or should they stay close to their support networks of family and friends? Should they be treated with medications that reduce the appetite for opioids? Or should they be coached to conquer their illness through willpower?
The field is also covered by a patchwork of regulations that haven’t kept pace with its growth. That has created room for opportunistic small operators to spring up, some with questionable track records.
The industry started taking off when President George W. Bush in 2008 signed a law requiring insurers to pay for rehab. The 2010 Affordable Care Act expanded coverage further. Suddenly just about anyone with insurance could get help.
Today, private insurance covers treatment for millions of working and middle-class Americans. Annual spending by private insurers on opiate addiction alone rose more than 1,000 percent in the five-year period ending in 2015, to roughly $721 million, according to Fair Health, an independent nonprofit that keeps a database of private insurance claims.
Mr. Cartwright’s company, American Addiction Centers, operates treatment centers in eight states around the country. That was how Mr. Benefield ended up in a treatment facility in California: Eager to get sober, he and his wife searched online from their home in Arizona for a clinic, found A Better Tomorrow — which eventually became part of Mr. Cartwright’s business — and then called up to book a spot.
This account of Mr. Benefield’s final days, and the battle over American Addiction Centers, draws on interviews with executives, front-line employees, addicts, police and investors, as well as thousands of court documents.
On October 3, 2014 — four years after Mr. Benefield’s death — Michael Cartwright and his business partner, Jerrod Menz, rang the opening bell on the New York Stock Exchange to announce the first day of trading in their company’s shares. Mr. Cartwright, standing next to his teenage daughter, clapped. Mr. Menz gave the thumbs up.
At the close of trading that day, shares in American Addiction Centers held by the two men — the two largest shareholders — were worth a combined $202 million.
It was a crowning achievement for both men, but particularly for Mr. Cartwright, who had started his career running a rehab center for mentally ill addicts in inner city Nashville.
In 2012, Mr. Cartwright started buying up smaller providers, including one that had been started by Mr. Menz, his longtime friend, in Southern California, and assembling them into a national chain.
Mr. Cartwright himself had abused drugs as a younger man, “anything I could get my hands on,” he said in an interview. Mr. Cartwright also spent time in a psychiatric hospital after a breakdown, an experience that helped to shape his belief that anyone can overcome addiction as long as they have the right mind-set.
He credited his grandmother’s tough love with helping him turn his life around. “She wasn’t about to let me wallow in my own poop,” he wrote in his book, “Believable Hope.”
Addiction should be overcome by willpower and hard work in therapy, Mr. Cartwright said. Other treatment providers put more emphasis on medicines to help addicts — particularly opiate addicts — function in the long term.
The treatment world is split by methodology and motivation — inpatient and outpatient, religious and secular, nonprofit and moneymaking. Such a Balkanized industry seemed ripe for consolidation by a businessman like Mr. Cartwright.
With aggressive internet marketing and a central call center, Mr. Cartwright pulled in patients from around the country. Patients stay for weeks at a time in a treatment house, undergoing therapy paid for by private insurance.
“If you can create a great brand, which I think Michael can,” said Lucius Burch III, an early investor in American Addiction Centers and a former chairman of a Nashville company that runs private prisons, “you have an opportunity to build a huge company.”
But there were people who believed they saw big vulnerabilities in that emerging brand, including a college student in South Carolina.
With bushy eyebrows and a boyish face, Mr. Drose comes from a family of skeptics and crusaders. His grandfather helped plot the assassination of Rafael Trujillo, a dictator in his native Dominican Republic, in the 1960s.
He remembered the day, in late 2014, that he discovered the investment idea that would launch his career. He was 19, sitting on a purple armchair in his dorm room at Furman, where he liked to spend his spare time scanning financial filings in search of stocks to “short,” or bet against.
“It’s like a big Easter egg hunt,” Mr. Drose said recently. “Except, for something bad.”
That day, one company stood out to him — American Addiction Centers. In just a few months in existence as a publicly traded stock, its shares had risen a blistering 60 percent.
Any time a stock rises that quickly, short sellers like Mr. Drose sense an opportunity. But there was something else that piqued his interest.
A business that profited from people’s desperation seemed like “an industry that might have something weird going on,” Mr. Drose said.
But weird wouldn’t begin to describe the world he was about to enter.
Culling through American Addiction Centers’ public filings, he noticed that a health insurer had sued one of the company’s subsidiaries, claiming that it had conducted unnecessary drug tests on patients’ urine. Posing as a prospective patient, Mr. Drose called the company and asked how often it conducted drug tests, and came to believe that American Addiction Centers was testing much more frequently than other providers.
He wrote an article about his findings on a website called Seeking Alpha, which short sellers frequent for investment tips. The company’s stock promptly dropped 10 percent.
That was a big win for anyone short-selling the stock. Short sellers borrow shares in a company, then sell them hoping that the price falls. Their goal is to buy back the shares later, at a cheaper price, and return them to the lender. The price decline is their profit.
In Nashville, Mr. Cartwright watched his stock fall, and was stunned. “We found no substantiation for any one of his claims,” he said in an interview, referring to Mr. Drose’s article.
Battle lines were being drawn and Mr. Drose’s work was attracting attention elsewhere. Kingsford Capital, a hedge fund based outside of San Francisco, hired him as a consultant and told him to keep digging into the treatment business.
And another interested party had noticed Mr. Drose’s work, too: a man named Charles Hill, who ran a treatment center in Temecula, Calif., the town where Mr. Benefield had died.
Mr. Hill told Mr. Drose that he knew a great deal about American Addiction Centers. He congratulated him on his article about the urine testing, and suggested he start searching for lawsuits brought by families of patients who had died in California. “You’ve been looking in the wrong place,” he told Mr. Drose.
Following Mr. Hill’s advice, Mr. Drose flew to California in the spring of 2015. He may have been working for a major hedge fund, but he was so young — still only 20 — that he had to pay extra to rent a car.
The two men met at Mr. Hill’s rehab center in Temecula, a maze of cul-de-sacs and shopping plazas book-ended by the velvet green Santa Rosa mountains and the snowcapped peaks of the San Bernardino range.
Mr. Hill thought the college kid was an investigative reporter. So he was thrilled to have someone to listen to his concerns about the treatment business.
“I didn’t even know what short selling was,” Mr. Hill recalled.
Standing five feet seven inches in scuffed leather boots and faded jeans, Mr. Hill is a former painkiller addict and a natural storyteller. The story that perhaps best defines his life involved a football injury back in the 1990s. It shaped his personal philosophy of drug treatment, one that puts its trust in modern medications — and is at odds with people like Mr. Cartwright, who want patients to ultimately lead a truly drug-free life.
Mr. Hill was injured at the age of 42, too old to be playing tackle football. But eager to relive his high school glory days, Mr. Hill — his nickname is Rocky — had joined some friends to start a full-contact football league, The Over the Hill Pigskin Shootout. They used old pads donated by the Los Angeles Rams.
One fateful tackle, though, ended the fun. He tore the rotator cuff in each of his shoulders. His doctor prescribed a powerful painkiller, Norco.
And Mr. Hill — who was already in the addiction treatment business — became an opiate addict himself.
When he tried to stop taking the Norco, his life unraveled. He couldn’t sleep. He subsisted on Ensure, the nutrition drink, because eating made him vomit. He considered suicide.
A doctor specializing in pain management told Mr. Hill that the opiates had permanently altered his brain. Therapy and group meetings couldn’t fix that.
The doctor prescribed Mr. Hill a different drug, buprenorphine, which satisfies the craving for opiates but does not result in a high. Minutes after the first dose dissolved under Mr. Hill’s tongue, the world righted itself.
“Before, it felt like someone had put a vacuum in me and sucked out all the joy,” Mr. Hill said. “And then, it was like someone had suddenly reversed it.”
Mr. Hill’s successful treatment with buprenorphine was for him a revelation. Today, he sends the opiate addicts he treats to a local doctor for a prescription for the same drug he still takes every morning.
For many addicts, that prescription is all they need to get on with their lives, Mr. Hill said. Mr. Cartwright, by contrast, believes that ultimately “abstinence has to be the goal,” he said in an interview.
That is only the start of their differences.
Mr. Cartwright’s company specializes in enrolling addicts in intensive inpatient programs, often far from their families — where they stay full-time in a sober living center with other recovering addicts.
Mr. Hill prefers an outpatient approach that is close to the patient’s support network. During the day, addicts come to Mr. Hill’s two-story building, where they meet with therapists. At night, they go home.
Mr. Hill believes the inpatient model is motivated more by greed than doing good. Inpatient providers can bill insurers up to $10,000 for 28 days of services; Mr. Hill charges $1,400 a month for his outpatient treatments.
There is great debate about which treatment approaches work best, and even how to measure their efficacy.
“A lot of organizations say they have the cure, but they have no incentive to try to prove it through the data,” said Robert Poznanovich, executive director of Business Development at Hazelden Betty Ford Foundation, one of the best-known addiction-treatment providers in the United States.
Hazelden offers a mix of outpatient and inpatient treatment. Modern addiction treatment grew directly from Hazelden and its secluded farm in Minnesota. In 1949, a group of businessmen and a Catholic priest pioneered the idea of bringing alcoholics to a rural location, where the men (they were all men back then) could focus on the 12-step principles without the distractions and temptations of everyday life.
The approach became known as the Minnesota Model and was copied by other nonprofits for decades. Then, the new insurance laws in 2008 and 2010 transformed what had largely been a government-funded and charitable-minded field into an enticing for-profit business. In just a few years, that gave rise to a $35 billion industry of inpatient programs such as the one offered by American Addiction Centers.
Mr. Hill’s concerns about American Addiction Centers were not just about the debate between the inpatient and outpatient philosophies of treatment. He told Mr. Drose about patients who had died in rehab homes around Temecula and nearby Murrieta that Mr. Cartwright later acquired.
The deaths, Mr. Hill contended, showed how the company was unequipped to deal with medically fragile addicts. Yet, Mr. Hill claimed that for years the company kept taking those patients, assuring them that they would receive adequate care.
As far back as 2008, Mr. Hill had told the California agency that oversees drug-treatment programs that he believed some patients at Mr. Menz’s facilities (ones that later became part of American Addiction Centers) were in danger. When some of the dead patients’ families later filed lawsuits against those companies, he followed every twist and turn of the cases.
He had also spent a good part of 2011 and 2012 working with Hardy Gold, a prosecutor in the state attorney general’s office who was interested in Mr. Benefield’s death. The two began trading emails discussing aspects of the investigation — emails that would later cause headaches for the prosecutor. At one point, Mr. Gold visited Mr. Hill’s office to get a tutorial on the addiction-treatment industry.
During that time, American Addiction Centers sued Mr. Hill for defamation, saying he had made false statements about its patient care. A judge dismissed the suit.
Mr. Hill kept up his attacks on American Addiction Centers. And in Mr. Drose, he believed that he had found a new way to take on the company.
Mr. Drose spent hours talking to Mr. Hill that day in Temecula. When he returned home, his backpack was stuffed with lawsuits, depositions and autopsy reports. “Damn,” Mr. Drose said he thought at the time. “I left there thinking I had stumbled across a gold mine.”
Mr. Drose returned to Kingsford’s offices in Atlanta, took over a glass-enclosed conference room, and made piles of documents related to each death.
None of the deaths had been disclosed to investors when the stock of American Addiction Centers began trading publicly.
The dead included Shaun Reyna, an alcoholic, who killed himself with a shaving razor in one of the company’s treatment houses. Mr. Reyna’s widow said in a 2014 lawsuit that the staff had ignored signs that her husband was suffering withdrawal symptoms that required urgent medical care. The case is expected to go to trial early in 2018.
There was also Gregory Thomas, who hanged himself from a bridge one block from the company’s main office in Temecula in November 2010. He had been brought to the office by a company employee, but never went through with the treatment.
Mr. Thomas’s body hung from a bridge for several days before anyone noticed. A judge ruled that the company wasn’t liable because Mr. Thomas had not been admitted.
But the circumstances of Mr. Benefield’s death, as detailed in a lawsuit his wife brought in 2011, stood out.
The treatment house that he ended up traveling to, A Better Tomorrow, was founded by Mr. Menz, Mr. Cartwright’s partner in American Addiction Centers. It would become a core part of the publicly traded company.
When Mr. Benefield called A Better Tomorrow in late July 2010, he was about to help the company solve a problem: a patient shortage.
There were too many empty beds, Mr. Menz had told his staff members at a monthly meeting, and they needed to fill them. The employees — who referred to signing up new patients as “closing a sale” — understood the risk of failure.
“If you’re not closing,” Jody Brueske, the former sales representative who enrolled Mr. Benefield, would later testify in a separate case involving his death, “you’re going to be the next one walking out the door.”
From their kitchen in Springerville, Ariz., the Benefields did not know any of that. All they knew was that A Better Tomorrow had come up in an internet search. A former snowboarder, Mr. Benefield was so excited he even packed his gym clothes, thrilled at the prospect of getting healthy again, according to court records. Ms. Benefield declined to be interviewed; her statements primarily come from court documents.
From his first phone call to his death, Mr. Benefield’s relationship with A Better Tomorrow lasted a mere two days. Compressed into those 48 hours is a case study in how financial pressures and business motivations can collide with the needs and expectations of the fragile patients who represent the industry’s bread and butter.
Just days after the staff meeting led by Mr. Menz, Ms. Brueske took the call from Mr. Benefield and his wife, Kelly. His case was pretty typical — an out-of-control drinking problem that was hurting someone’s marriage. But one thing set it apart: Mr. Benefield had chronic lung disease that forced him to use an oxygen tank.
Ms. Brueske had never dealt with a patient who used oxygen. But she felt that she couldn’t turn anyone away, because much of her pay came from commissions, she later testified.
According to a court transcript, a lawyer grilled Ms. Brueske on that point, asking her: “Did you feel personally pressured to get more clients in because of that sales meeting?”
Ms. Brueske responded, “Yes.”
Ultimately, Ms. Brueske assured the Benefields that A Better Tomorrow could provide the oxygen he needed.
In an interview, Mr. Cartwright disputed the sales rep’s testimony. He said that her perception of the pressures to fill beds “did not match reality.” He said the company would never promise to provide oxygen to a patient, because it would require a prescription. “If she promised that, she was out there on an island,” he said.
But from the moment Mr. Benefield stepped off the plane in California, there were signs the company wasn’t equipped to handle his care.
A Better Tomorrow sent a driver to pick him up at the San Diego airport. When that driver, a recovering meth addict with no medical training, got to the airport, he found that Mr. Benefield was having trouble breathing. His oxygen tank was empty.
The driver’s supervisor instructed him to give Mr. Benefield a sedative called Serax, even though Mr. Benefield had not been prescribed that drug. According to a transcript of court testimony, the Serax pills were leftovers from previous patients that were simply kept in the car in case the driver needed to administer a sedative on the spot.
Mr. Menz, in an interview, said it had never been company policy to dispense drugs without a prescription, nor did the company keep leftover medicine.
The driver and other employees also testified that staff members were discouraged from taking patients to a hospital emergency room — even when they appeared to be in distress — because A Better Tomorrow might risk losing a paying customer. The feeling was, “they are taking our clients,” the driver said of the hospital.
The treatment house where Mr. Benefield was taken was not a medical facility but a five-bedroom home with a two-car garage and a hot tub in the back. And the employees there did not know what to make of Mr. Benefield’s behavior — they were familiar with addiction symptoms, not respiratory ailments.
The afternoon of Mr. Benefield’s arrival, as he grew more distressed, the house’s employees called their managers at home for guidance. One supervisor they phoned — a licensed massage therapist and marriage counselor, not a doctor — told them to administer more sedatives.
Mr. Benefield’s oxygen tank remained unfilled.
That night, the employees found that Mr. Benefield had slid out of bed and was sitting on the floor. They hoisted him back into bed. When they went to check on him again in the morning, he was dead.
In her testimony, Ms. Brueske, the sales rep who worked with the Benefields, described her boss’s advice after she learned of the death of her client. “You need to get thicker skin,” she recalled her boss saying. “People die in rehab all of the time.”
Mr. Drose said that he wasn’t sure what to think after he reviewed the testimony and other documents. “I’ve seen companies screw over shareholders,” Mr. Drose said. “This company seemed like it was hurting people.”
But despite piles of documents detailing tragic deaths, it wasn’t clear to him that he had enough useful material to move the stock price down, which remained his ultimate goal. After all, addiction patients die with tragic regularity.
Mr. Drose’s boss at Kingsford Capital, the hedge fund where he was working, also wasn’t sure what all the material added up to.
“This seems like a lot of stuff,” Mr. Drose said his boss had told him as he stood among his piles of documents in the summer of 2015. “But what is really in here?”
A spokesman for Kingsford said the “firm does not comment on its investments.”
Bit by bit, Mr. Drose struggled to piece together the meaning of the deaths at American Addiction Centers.
He determined — after filtering through the reams of legal filings and other documents he had collected — that the prosecutor, Mr. Gold, had taken an interest in Mr. Benefield’s case. After speaking with a former police detective about what the possible charges against the company could be, he came up with a startling theory: The company could face a murder charge.
It was a wild idea. No other public company in California history had been charged with murder.
In California, second-degree murder involves someone acting with “implied” malice that reflects an “abandoned or malignant heart.” While that might sound like a legal concept straight out of Edgar Allan Poe, the theory was that the employees who gave Mr. Benefield sedatives, instead of taking him to the hospital, may have acted with implied malice.
Mr. Drose liked the sound of that. “Second-degree murder is what will destroy this company,” Mr. Drose wrote in an email to Mr. Hill. “I am sure of it.”
On and off in the years since Mr. Benefield’s death, a cast of characters — the empire builder Mr. Cartwright, the budding short-seller Mr. Drose, the crosstown rival Mr. Hill — had made A Better Tomorrow and American Addiction Centers a focus of their lives. Some hoped to build it up. Others dreamed of tearing it down.
Mr. Cartwright and Mr. Drose, in particular, saw fortunes to be made.
But in Mr. Benefield’s death, would a company — and by extension, an industry — be held to account? Would future patients benefit from the lessons learned from his death?
On July 21, 2015, a prosecutor charged a subsidiary of American Addiction Centers and four employees with second-degree murder in the Benefield case.
The company’s stock price fell 53 percent, erasing more than half its value in just one day.
Mr. Cartwright, vacationing in Italy, didn’t know what hit him. He called his board of directors to ask for help figuring out what to do.
One board member, the Nashville investor Mr. Burch, recently recalled his advice to Mr. Cartwright. “You can’t run away from it,” Mr. Burch told him. “There is not much you can do if someone calls you a son of a bitch, other than deny it and prove you are not. You get all the bullets you can, and fire at the enemy as quickly as you can.”
That’s exactly what Mr. Cartwright did.
His company’s lawyers first argued, among other things, that the official coroner’s report said Mr. Benefield had died of natural causes and that the prosecutor had relied on paid testimony from a different coroner to make a case for homicide. The lawyers got the murder charge reduced to abuse.
That was a big win. But Mr. Cartwright was just getting started.
“How do you bring a murder charge on a five-year-old, natural-cause death?” Mr. Cartwright said recently. “It makes no sense.”
Looking for answers, two private investigators were sent to interview Mr. Drose in Atlanta. Only then did Mr. Cartwright discover how all his enemies fit together.
Mr. Drose said that he had guessed that a murder charge might be coming — well before the actual charge was made public.
This gave Mr. Cartwright powerful new ammunition in his court battle: He didn’t believe Mr. Drose had merely guessed. He suspected someone had leaked confidential grand jury information to an investor in a position to make a financial killing.
“I think the charge was brought to make money,” Mr. Cartwright said in the interview earlier this year. “Can I prove it? No.”
Presented with that argument, the judge said that Mr. Gold, the prosecutor, appeared to have a conflict because of his relationship with Mr. Hill, but that it was not enough to unfairly influence the case. But by then, it was June 2016, almost six years since Mr. Benefield’s death. Two key witnesses had died and the case was running out of steam. Ultimately, American Addiction Centers agreed to pay a $200,000 fine and allow a monitor to oversee its operations in California.
In a statement, a spokeswoman for the California attorney general’s office emphasized that “the judge concluded that any connection between Mr. Gold and Mr. Hill did not create a likelihood of unfairness to the defendants in the case.”
The unprecedented murder case against a company, which would have sent a message to American Addiction Centers — and, by extension, to a desperately needed but poorly understood industry — had fizzled out.
Today, Mr. Cartwright is still angry that his company had to fight off what he considers an unfair attack. The stock price is hovering around a quarter of its peak value, diminishing his personal wealth.
But American Addiction Centers is growing again. In September, it bought a treatment company with a 114-bed hospital in Massachusetts. As part of the deal, it also acquired several toll-free numbers to generate referrals, including 1-800-ALCOHOL.
And Mr. Cartwright points out that while the care at American Addiction Centers has evolved since Mr. Benefield’s death, the resulting court fight did not prompt him to alter his treatment practices. “Why would I change policies and procedure because of someone dying of a heart attack at 3 o’clock in the morning?” he said.
His persistent critic, Mr. Hill, worries about what he considers the industry’s sorry state and the quality of addiction treatment. “It is going to take turning the field upside down,” he said. But he has also moved on, buying a place in Mexico where he enjoys swimming with dolphins. One day he hopes to retire there.
Christopher Drose’s work earned him a spot on Forbes magazine’s “30 Under 30” list of up-and-coming young investors. Today he works for a new hedge fund in New York doing what he loves: digging up dirt on companies, and betting against them.
“Some part of me likes to see people who are not telling the truth come face to face with it,” Mr. Drose said. “I’ve found a way to express that with stocks.”
American Addiction Centers has estimated that short sellers walked away with $250 million because of Mr. Benefield’s death and the murder charge against the company. Mr. Drose called that number overstated but declined to say how much money he made.
In 2015, Kingsford Capital sent out a holiday letter detailing its investment successes, which mentioned the murder charge involving American Addiction Centers. In the letter, the firm summed up its strategy that year as finding “gifts from garbage.”
“One man’s trash,” Kingsford Capital wrote, “is our treasure.”
Today in America, overdoses claim more lives than guns. Yet as the addiction crisis deepens, patients and their families are still struggling to sort out the most effective forms of treatment offered by the sprawling industry.
Mr. Benefield, a bear of a man with a big drinking problem, had pursued his hope of getting well. His wife, Kelly, was left to settle a civil wrongful-death lawsuit against the company that they had hoped would help him.
On Facebook, years after he died, Ms. Benefield wrote, “I miss him so much.”
I’ve been factoring times while I walk my son to sleep, and it’s become kind of a game: can I factor the current time faster than he can fall asleep? After a few big matches, he’d developed quite a win streak, and I needed to up my game. I kept getting bogged down in the double digits. I needed a way to check a number’s divisibility faster than doing the mental long division.
Long division is not only a terrible algorithm to do mentally, it’s also kind of a terrible algorithm. It’s mostly trial and error. Personally, I spend a lot of time rounding and guessing. Mental long division is even worse. Even if you don’t have to do any guessing, you still have to keep track of a lot of numbers. Long division gains speed by consuming memory–just look at all the paper it uses. Good mental math algorithms require the opposite tradeoff, taking longer to execute but demanding less of your short-term memory.
Consequently, long division is a costly way to hunt for a number’s factors. While you’ll need the quotient if you find a factor, factors are sparse. Most primes won’t be factors of any given number, and all the work of division will be wasted. What you need to get a leg up on your mental factoring game is a quicker way of rejecting primes as factors.
The first three primes have elementary techniques for checking divisibility. Two is a factor of any number ending in 0, 2, 4, 6, or 8, and 5 is a factor of numbers ending in 0 or 5. Any number with 3 as a factor has the curious feature that its digits add up to a number that’s evenly divisible by 3, a property we’ll take a closer look at later. Primes above 5 aren’t so easy to check, but it turns out they aren’t as hard to check as doing the long division, either.
David Wilson’s post “Divisibility by 7 is a Walk on a Graph” reveals an interesting technique that goes beyond 7. I’ve recreated David’s graph and algorithm below, and then I’ll explain how to extend his concept to numbers greater than 7.
1. Given a number $n$, break it into digits $n_1$, $n_2$, $n_3$, etc., where $n_1$ has the largest place value.
2. Put your finger on the above graph at zero.
3. Move your finger along the gray arrows $n_1$ times.
4. If there are any digits left to consider, move your finger along one red arrow, then repeat from Step 3 using the next digit. If there are no digits left, your finger is on the remainder of $n$ when divided by 7.
As an example, let’s see if 7 is a factor of 243. Starting at 0 at the top, go clockwise along the circle 2 steps to 2. Follow the red arrow to 6. Continue around the circle 4 steps to 3. Follow the red arrow to 2. Around the circle 3 steps to 5. The remainder when dividing 243 by 7 is 5, so 7 isn’t a factor of 243.
The logic behind David’s graph is this: the red arrows point from $i$ to $10i \bmod 7$. Whenever we move to the next digit, we multiply by 10 and take the remainder when dividing by 7, thus slowly moving all values to their proper place values while at the same time subtracting out multiples of 7 and keeping all our numbers small. Keeping numbers few and small is important for doing the process in your head.
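If you would rather let a few lines of code do the walking, here is a minimal Python sketch of the same procedure; the function name is mine, and instead of memorising the graph it computes each red arrow as $i \to 10i \bmod p$ on the fly:

```python
def remainder_walk(n, p=7):
    """Walk the divisibility graph for p: gray arrows add a digit, red arrows multiply by 10 (mod p)."""
    finger = 0                             # start with your finger on 0
    digits = [int(d) for d in str(n)]
    for i, d in enumerate(digits):
        finger = (finger + d) % p          # gray arrows: step around the circle d times
        if i < len(digits) - 1:
            finger = (finger * 10) % p     # red arrow: i -> 10*i mod p
    return finger

print(remainder_walk(243))                 # 5, so 7 is not a factor of 243
```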
Let’s look at the graph for 11.
Following the red arrows becomes as simple as reflecting across to the other side of the circle—1 points to 10, 2 points to 9, 3 points to 8, and so on up to 10, which points back to 1. Where David’s algorithm for checking 7 requires memorizing a seemingly arbitrary graph, checking 11 needs nothing more than knowing which number added to the current one makes 11.
It turns out that this gives us a trick for checking 11 that’s a close cousin to the summing-digits trick we use for 3. Notice that the red arrows in the graph for 11 never change your distance to 0, only whether 0 is ahead or behind you. In other words, rather than following a red arrow before feeding in the next digit, you could just reverse the direction of the gray arrows. In fact, a graph isn’t even necessary for checking 11. You can just alternate between adding and subtracting digits. Is 11 a factor of 341? $0 + 3 - 4 + 1 = 0$, so yes.
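If you want to sanity-check the alternating-sum test in code, here is a tiny sketch (the function name is mine):

```python
def divisible_by_11(n):
    # alternate adding and subtracting digits, most significant digit first
    signs = [1, -1] * len(str(n))
    return sum(s * int(d) for s, d in zip(signs, str(n))) % 11 == 0

print(divisible_by_11(341))   # True: 3 - 4 + 1 = 0
```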
Let’s revisit the curious shortcut for checking 3, in which we instead check whether the sum of the number’s digits is divisible by 3. Nine, though it isn’t prime, has a similar rule: if all the digits add up to a number that’s evenly divisible by 9, the original number is also evenly divisible by 9. This always struck me as strange. Why should adding up a number’s digits tell us anything about its factors? And why should 3 and 9 be the only numbers for which this works? The Wilson graphs for 3 and 9 shed some light on the mystery.
The red arrows don’t change your place on either graph. Following the gray arrows around the circle is simply addition modulo 3 or 9. The fact that 3 and 9 are the only cases of this property stems not from 3, but from 9. I’ve been drawing these graphs in base 10, but drawing graphs in some other bases reveals that when the number in question is one less than the base, the red arrows all loop back to the number they started at. Nine looks like this in base 10, and so does 3 by virtue of it being a factor of 9. In base 13, not only do you get 42 when multiplying 6 by 9, but adding up a number’s digits can tell you if the number is divisible by 2, 3, 4, 6, and 12.
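That base-13 claim is easy to check numerically. Here is a small sketch (mine, not from the original post) that sums a number’s digits in an arbitrary base and confirms that the sum preserves remainders modulo one less than the base, and therefore modulo any of its factors:

```python
def digit_sum(n, base):
    total = 0
    while n:
        total += n % base      # lowest digit in this base
        n //= base
    return total

# In base 13, the digit sum preserves remainders modulo 12,
# so it also detects divisibility by 2, 3, 4, 6, and 12.
assert all(digit_sum(n, 13) % 12 == n % 12 for n in range(1, 10_000))
```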
For primes beyond 11, Wilson graphs are interesting to look at but difficult to use. Here they are for 13, 17, and 19.
That’s only half of the weird trick for factoring numbers in your head. We still need the other half: a way to remember where the red arrows point. Instead of plotting $10n \bmod 13$ on a circle, let’s look at it as a scatter plot.
At the heart of this plot are the integers on the line $y = 10x$. Any value greater than 13 has had its thirteens subtracted out, leaving us with a set of surprisingly well-organized dots. Notice how they fall into parallel lines. Two of these lines in particular are worth taking a closer look at: the positively sloped line through $(0,0)$ and the point nearest it, and the negatively sloped line through $(0,p)$ and the point nearest it. The equations for these lines are $y = \frac{1}{4}x$ and $y = -3x+13$.
Here are similar plots for 17, 19, and 23.
Parallel lines seem to be a common feature of these plots. Here’s a list of the aforementioned line slopes for all primes from 7 to 89.
$p$ | positive | negative
$7$ | $3$ | $-\frac{1}{2}$
$11$ | $\frac{5}{6}$ | $-1$
$13$ | $\frac{1}{4}$ | $-3$
$17$ | $\frac{3}{2}$ | $-\frac{4}{3}$
$19$ | $\frac{1}{2}$ | $-\frac{8}{3}$
$23$ | $\frac{4}{5}$ | $-\frac{3}{2}$
$29$ | $\frac{1}{3}$ | $-\frac{9}{2}$
$31$ | $\frac{9}{4}$ | $-\frac{1}{3}$
$37$ | $\frac{3}{4}$ | $-\frac{7}{3}$
$41$ | $10$ | $-\frac{1}{4}$
$43$ | $\frac{7}{5}$ | $-\frac{3}{4}$
$47$ | $\frac{3}{5}$ | $-\frac{7}{4}$
$53$ | $\frac{7}{6}$ | $-\frac{3}{5}$
$59$ | $\frac{1}{6}$ | $-\frac{9}{5}$
$61$ | $10$ | $-\frac{1}{6}$
$67$ | $\frac{3}{7}$ | $-\frac{7}{6}$
$71$ | $10$ | $-\frac{1}{7}$
$73$ | $10$ | $-\frac{3}{7}$
$79$ | $\frac{1}{8}$ | $-\frac{9}{7}$
$83$ | $10$ | $-\frac{3}{8}$
$89$ | $\frac{1}{9}$ | $-\frac{9}{8}$
Can you spot the connection between a prime and its slopes? Let me pick out the slopes that show the relation most clearly, and make all of them fractions.
$p$ | slope
$7$ | $\frac{3}{1}$
$11$ | $-\frac{1}{1}$
$13$ | $-\frac{3}{1}$
$17$ | $\frac{3}{2}$
$19$ | $\frac{1}{2}$
$23$ | $-\frac{3}{2}$
$29$ | $\frac{1}{3}$
$31$ | $-\frac{1}{3}$
$37$ | $\frac{3}{4}$
$41$ | $-\frac{1}{4}$
$43$ | $-\frac{3}{4}$
$47$ | $\frac{3}{5}$
$53$ | $-\frac{3}{5}$
$59$ | $\frac{1}{6}$
$61$ | $-\frac{1}{6}$
$67$ | $\frac{3}{7}$
$71$ | $-\frac{1}{7}$
$73$ | $-\frac{3}{7}$
$79$ | $\frac{1}{8}$
$83$ | $-\frac{3}{8}$
$89$ | $\frac{1}{9}$
The relationship is subtle. For prime $p$ and slope $\frac{a}{b}$ (where $b$ is always positive), $p = 10b - a$. So $7 = 10 \cdot 1 - 3$, $11 = 10 \cdot 1 - (-1)$, $13 = 10 \cdot 1 - (-3)$, $17 = 10 \cdot 2 - 3$, and so on. In fact, the first listing of slopes has several primes for which both fractions exhibit this pattern: $29 = 10 \cdot 2 - (-9)$, $31 = 10 \cdot 4 - 9$, $37 = 10 \cdot 3 - (-7)$, etc. This is a general pattern, though the way I selected lines didn’t always pick the right ones. In the scatter plot of 17, you can find parallel lines with slope $\frac{3}{2}$ as well as ones with slope $-\frac{7}{1}$.
So here’s the second weird trick for factoring numbers, and the more general one. It’s useful for primes from 13 to 89.
Given the number $n$ and prime $p$, calculate $a$ and $b$. $a$ is $p$ subtracted from the nearest multiple of 10. $b$ is that same multiple of 10 divided by 10.
1. Break $n$ into its digits, $n_1$, $n_2$, $n_3$, etc.
2. Start with $n_1$.
3. Multiply it by $a$.
4. If $b$ evenly divides the result, divide by $b$. If not, add or subtract $p$ until the result is evenly divisible by $b$, then divide by $b$.
5. Take the remainder when divided by $p$.
6. If there are any digits left to consider, add the next digit to the result and continue at Step 3. If there are no more digits, the result is the remainder of $n$ when divided by $p$.
Try it out with $n = 986$ and $p = 17$. The multiple of 10 closest to 17 is 20, so $a = 3$ and $b = 2$. Starting with $9$, $9 \cdot 3 = 27$, $27 - 17 = 10$, $10 \div 2 = 5$. Add the second digit, $8$, to get $13$ and repeat: $13 \cdot 3 = 39$, $39 - 17 = 22$, $22 \div 2 = 11$. Add the last digit, $6$, to get $17$, which is, clearly, evenly divisible by 17, so 17 is a factor of 986.
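If you would like to check the procedure in code before trying it in your head, here is a short Python sketch (the names are mine); it derives $a$ and $b$ from $p$ as described above and always adds $p$ rather than subtracting, which gives the same result modulo $p$:

```python
def remainder(n, p):
    """Remainder of n when divided by p (p coprime to 10), using the a/b digit trick."""
    nearest_ten = 10 * ((p + 5) // 10)   # multiple of 10 closest to p
    a = nearest_ten - p                  # for p = 17: a = 3 (may be negative, e.g. -3 for 13)
    b = nearest_ten // 10                # for p = 17: b = 2
    digits = [int(d) for d in str(n)]
    result = digits[0]
    for d in digits[1:]:
        result *= a                      # multiply by a
        while result % b != 0:           # add p until the result is divisible by b
            result += p
        result = (result // b) % p       # divide by b, then keep only the remainder mod p
        result += d                      # feed in the next digit
    return result % p

print(remainder(986, 17))   # 0, so 17 is a factor of 986
print(remainder(243, 7))    # 5, matching the graph walk earlier
```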
This turns out to be a good algorithm for mental factoring, since you never have to multiply by more than 3, you divide by numbers less than 10, and otherwise you add and subtract primes. Additionally, you only have to hold onto a few numbers from the beginning of the algorithm to the end, three of which never change ($p$, $a$, and $b$). The process itself is simple enough and repetitious enough to quickly get the hang of.
I can’t always factor the current time faster than my son falls asleep, but armed with this algorithm, I now have a fighting chance. If you have any other interesting mental math shortcuts, I’m happy to hear them. Contact me on Twitter or by email.
Degraded performance after forced reboot due to AWS instance maintenance
Posted on:
Dec 19, 2017 11:15 AM
Five days ago I received an email from AWS (see below for full text) which informed me that a reboot of one of my instances was necessary due to "updates". To pre-empt the auto-reboot on 5th Jan I manually rebooted 3 days ago. Immediately following the reboot my server running on this instance started to suffer from cpu stress. Looking at cpu stats there was a very clear change in the daily cpu usage pattern, despite continuing normal traffic to my server. I performed an extensive review of what might have changed in my server configuration but drew a complete blank - the configuration of the server did not change.
It is simply as if the instance (m1.medium) was somehow degraded to a lesser performing one following the reboot. I simply can't find any explanation other than a change to the instance capability that took effect when I rebooted.
What could possibly be causing this? I'm at my wits' end trying to understand what happened. Is it possible that AWS maintenance is responsible for this degradation?
See attached file showing changed cpu pattern that started on 15th Dec immediately following the reboot.
==============================
Full email from AWS announcing need to reboot:
Dear Amazon EC2 Customer,
One or more of your Amazon EC2 instances in the ap-southeast-2 region requires important security and operational updates which will require a reboot of your instance. A maintenance window has been scheduled between Sat, 6 Jan 2018 03:00:00 GMT and Sat, 6 Jan 2018 05:00:00 GMT during which the EC2 service will automatically perform the required reboot. During the maintenance window, the affected instance will be unavailable for a short period of time as it reboots. You may instead choose to reboot the instance yourself at any time before the maintenance window. If you choose to do this, the maintenance will be marked as completed and no reboot will occur during the maintenance window. For more information on EC2 maintenance please see our documentation here: https://aws.amazon.com/maintenance-help/. More details on rebooting your instances yourself can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-reboot.html
To see which of your instances are impacted please visit the 'Events' page on the EC2 console to view your instances that are scheduled for maintenance:
If you have any questions or concerns, you can contact the AWS Support Team on the community forums and via AWS Premium Support at: https://aws.amazon.com/support
Edited by: ajnaware on Dec 19, 2017 11:19 AM
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 10:54 AM
We are experiencing the same thing across all roles in our fleet.
Attached is a CPU graph (statistic: average, period: 1 hour) for one instance type. The arrows point at reboot events. blue lines are systems that have been rebooted at some point in this graph. red lines are systems that have not been rebooted.
These hosts are all behind the same ELB handling uniform traffic patterns throughout this graphed time period.
Edited by: sfdanb on Dec 20, 2017 10:55 AM
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 3:47 PM
Hi,
The update that is being applied to a portion of EC2 instances can, in some corner cases, require additional CPU resources. We always attempt to make updates and maintenance smooth and non-disruptive for customers, and in the vast majority of cases we are able to apply updates without scheduling maintenance events like instance reboots. For this update, we have attempted to find and eliminate as many of the corner cases that influence performance as possible.
For some time we have recommended that customers use our latest generation instances with HVM AMIs to get the best performance from EC2 (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/virtualization_types.html). If moving to a HVM based AMI is not easy, changing your instance size to m3.medium, which provides more compute than m1.medium at a lower price, may be a workaround.
As the notice points out, the update that is being applied is important to maintain the high security and operational aspects for your instances. We want to make every effort to make this as non-disruptive as possible. If this information does not help you resolve the CPU utilization issue you're experiencing, please reach out again.
Kind regards,
Matt
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 6:20 PM
Fortunately we had been working on a path to upgrade our fleet to HVM. This upgrade forced our hand, so we can confirm that on the same instance type (c3.xlarge) and using the same code we have returned to an acceptable performance level on affected hosts with HVM AMIs.
It was a very long couple days..
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 6:22 PM
Thanks for the detailed reply. I guess you are essentially confirming that the instance maintenance was likely to be the reason for the major change to cpu usage, and that I am one of those edge cases, and that the only solution now is to change instance type. Of course I am not entirely happy about this. I bought a 3-year reserved instance 2 years ago, and now have to hope I can sell the remaining year for a reasonable amount (which may be a stretch given that I am apparently using legacy instance type), and then purchase new reserved instance after upgrading. I will likely only buy 1 year reserved instances henceforth, given that there is apparently no guarantee that the instance will actually remain viable for the full duration, should you undertake any future maintenance causing similar issues. Also as I am not personally expert enough to carry out the new instance selection and update myself, I am having to pay for hired expert assistance to do this. Also the cpu max-outs I've experienced have caused some grief regarding my own user relations. So all in all I'm pretty disappointed about this issue. If I manage to sort it all out I'll mark as answered.
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 7:55 PM
We are in the exact same situation with an m1.medium instance that has a bit over a year to go on a three year reserved instance. As our business is primarily online, we have now suffered significant losses. In our case, Amazon tried to say that our problem was not the same as what everyone else is reporting, and instead was due to our running an old kernel and Linux distribution in general, despite the fact that we had exactly the same symptoms. We have now upgraded our distro, but are having the same problem. It really sounds as if Amazon screwed up with an inadequately tested upgrade and are now trying to avoid responsibility. That is very unfortunate, as we could accept the admission of an honest mistake, but not these excuses with no attempt on their part to fix the problem. We may have to upgrade now, but we'll certainly be looking to move to different hosting. I suspect they are afraid to admit liability, or maybe they just don't care about the people who will be affected, as I suspect it is mostly smaller customers. If that is the case, then they may have even known this would be the result, and just made a decision that the resulting loss of goodwill and probable lawsuits was worth the tradeoff.
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 10:05 PM
For what it's worth, we also had reserved instances (c3.xlarge) and we were able to relaunch our instances on that same instance type with HVM. So while I'm sure it's small consolation (as it was for us) at least you don't need to immediately deal with reselling the RIs. You just need to figure out how to migrate to HVM and relaunch the instances... which is no small task, to be sure.
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 10:14 PM
Unfortunately, M1 instances do not support HVM.
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 10:19 PM
Hi,
All instance types can now run HVM AMIs for any operating system. Previously only Windows HVM AMIs could be used in HVM mode on M1, M2, C1, and T1 instances. This is no longer the case.
Kind regards,
Matt
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 10:38 PM
Hi,
Before selling your RI, can you try running your workload on a HVM AMI running on a m1.medium instance?
I am also disappointed that we have fallen short in making this maintenance completely painless for you, despite our continuing efforts. We will follow up directly to make sure your issues are resolved.
Kind regards,
Matt
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 20, 2017 11:41 PM
Hi,
I have reviewed support case 4743634091 regarding what you're experiencing on your instance. You're correct that what you are seeing is not the same issue as what others are reporting. In the first correspondence from support they correctly pointed out that the kernel in your instance is encountering an out of memory (OOM) condition and made suggestions about how to adjust the configuration within your instance to prevent the OOM process killer from kicking in.
The update that is being applied to instances that have scheduled reboot maintenance can cause slight changes to system resources available to paravirtualized instances, including a small reduction in usable memory. This can cause smaller instances, like m1.medium, that run workloads that were previously just fitting within the usable memory available to the instance to trigger out of memory conditions. Adding a swap file (as no swap is configured in your instance) or reducing the number of processes may resolve the issue on your existing PV instance.
Replacing your instance with one started from a HVM AMI will provide more system resources than PV instances. In either case (adjusting your configuration or moving to HVM), you should be able to run your existing workload on a m1.medium instance if you do not want to change to a different size.
I'm sorry that the additional information that would have provided a better explanation for the recommendations made in the case was not originally included, and that this important update is requiring additional effort beyond the reboot for your workload and configuration.
Kind regards,
Matt
Re: Degraded performance after forced reboot due to AWS instance maintenanc
Posted on:
Dec 30, 2017 9:23 AM
I have now moved to an m3.medium instance which brought typical cpu loads down from about 50% (with many max-outs) to about 15%.
As there was no simple migration path from my m1.medium instance to an HVM AMI I had to re-install the server software from scratch, which is a lengthy process. I did not test an m1.medium HVM AMI because I couldn't afford to waste time testing configurations. I just needed a solution that would allow my server to run reliably from now on, and based on your advice m3.medium seemed the safest bet.
I have put my m1.medium reserved instance up for sale for the recommended $470. However, given that it is an obsolete instance type I am not optimistic that it will sell.
Overall, in direct monetary terms I would estimate it's going to cost me at least $1000 for the wasted reserved instance plus contractor time. And that figure doesn't include my own time and stress, nor the intangible loss of user confidence that came from the problems when my instance started maxing out on cpu.
So while I appreciate the "better late than never" advice that you gave after I posted the problem here, which has subsequently allowed me to find a solution at my own effort and expense, given that the problem arose entirely due to AWS actions causing the service to degrade, I would expect something a bit more proportionate from you than just a verbal expression of regret.
aEnvironmental Science Department, Wageningen University, 6700 HB Wageningen, The Netherlands
bDepartment of History, Utrecht University, 3508 TC Utrecht, The Netherlands
Edited by Simon A. Levin, Princeton University, Princeton, NJ, and approved November 3, 2017 (received for review April 18, 2017)
Significance
Inequality is one of the main drivers of social tension. We show striking similarities between patterns of inequality between species abundances in nature and wealth in society. We demonstrate that in the absence of equalizing forces, such large inequality will arise from chance alone. While natural enemies have an equalizing effect in nature, inequality in societies can be suppressed by wealth-equalizing institutions. However, over the past millennium, such institutions have been weakened during periods of societal upscaling. Our analysis suggests that due to the very same mathematical principle that rules natural communities (indeed, a “law of nature”), extreme wealth inequality is inevitable in a globalizing world unless effective wealth-equalizing institutions are installed on a global scale.
Abstract
Most societies are economically dominated by a small elite, and similarly, natural communities are typically dominated by a small fraction of the species. Here we reveal a strong similarity between patterns of inequality in nature and society, hinting at fundamental unifying mechanisms. We show that chance alone will drive 1% or less of the community to dominate 50% of all resources in situations where gains and losses are multiplicative, as in returns on assets or growth rates of populations. Key mechanisms that counteract such hyperdominance include natural enemies in nature and wealth-equalizing institutions in society. However, historical research of European developments over the past millennium suggests that such institutions become ineffective in times of societal upscaling. A corollary is that in a globalizing world, wealth will inevitably be appropriated by a very small fraction of the population unless effective wealth-equalizing institutions emerge at the global level.
Several societies have seen as little as 1% of their population own approximately 50% of the total wealth. This was the case in many Western countries around 1900, including Britain, France, and Sweden, and some claim that at present, roughly 1% of the population owns 50% of total wealth at the global level (1, 2). Similarly, in natural communities, a small fraction of the total species often makes up most of the biomass; for instance, a recent study of the Amazon rainforest revealed that roughly 1% of the tree species account for 50% of the total stored carbon (3). Although the correspondence between the dominance in society and this famously diverse ecosystem may be a coincidence, it raises the questions of whether there might be generic intrinsic tendencies to such inequality, and what could be the unifying mechanisms behind it.
We first turn to the question of the extent to which patterns in nature and society are actually similar. The natural communities that we analyze range from mushrooms, trees, intestinal bacteria, and algae to flies, rodents, and fish (SI Appendix, section 1). Our societal data consist of estimates for different countries (1, 4–7) (SI Appendix, section 1). We focus on wealth and not income distribution, which is much less unequal and—perhaps surprisingly—poorly correlated with wealth inequalities across countries (SI Appendix, section 2). While income concerns a flow, wealth concerns a stock, just as the biomass of a species does.

As a first illustration of the similarities of patterns in nature and society, consider the wealth distribution of the world’s richest individuals compared with the abundance distribution of the Amazon’s most common trees (Fig. 1 A and B). The patterns are almost indistinguishable from one another. For a more systematic comparison, we also analyzed the Gini indices of a wide range of natural communities and societies (Fig. 1 C and D). The Gini index is an indicator of inequality that ranges from 0 for entirely equal distributions to 1 for the most unequal situation. It is a more integrative indicator of inequality than the fraction of actors that holds 50% of the total, but the two are closely related in practice (SI Appendix, section 3). Surprisingly, Gini indices for our natural communities are quite similar to the Gini indices for wealth distributions of 181 countries (data sources listed in SI Appendix, section 1).
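For readers who want a concrete handle on the numbers behind Fig. 1 C and D, a Gini index can be computed from a list of wealths or abundances with a standard rank-based formula; the sketch below is an illustration, not code from the paper or its SI Appendix:

```python
def gini(values):
    """Gini index: 0 for a perfectly equal list, approaching 1 for extreme inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    ranked_sum = sum((rank + 1) * x for rank, x in enumerate(xs))  # weight each value by its rank
    return 2 * ranked_sum / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))      # 0.0: everyone holds the same amount
print(gini([0, 0, 0, 100]))    # 0.75: one actor out of four holds everything
```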
Fig. 1.
Inequality in society (Left) and nature (Right). The Upper panels illustrate the similarity between the wealth distribution of the world’s 1,800 billionaires (A) (8) and the abundance distribution among the most common trees in the Amazon forest (B) (3). The Lower panels illustrate inequality in nature and society more systematically, comparing the Gini index of wealth in countries (C) and the Gini index of abundance in a large set of natural communities (D). A complete list of data sources is provided in SI Appendix.
In societies, inequality is also found for other units besides the wealth of single actors or households. For instance, power law-like distributions characterized by high inequality are found in statistics on city sizes, number of copies sold of bestseller books, number of adherents of religious bodies, and number of links to web sites (9). In addition, firm size typically varies widely, with a few companies dominating the market (10, 11). At first glance, firm size may seem comparable on an abstract level to the wealth of households. Indeed, firms may grow and shrink depending on vagaries of markets and other factors. However, there are also important differences. For instance, firms are relatively ephemeral entities that are linked through a global web of shareholders (12) and may be fused or split depending on shareholders’ decisions and antitrust legislation. In this paper, we limit our discussion to the wealth of households for our comparison of nature and society.
The patterns that we describe (Fig. 1) raise the question of whether the similarities between nature and society are a coincidence or might hint at universal underlying processes. Viewed in detail, the complex interplay of mechanisms that govern wealth distribution in society is obviously very different from the processes regulating the abundance of species in nature. However, as we argue, on an abstract level, there are in fact comparable generic processes at play (Fig. 2).
Fig. 2.
Four unifying mechanisms that shape inequality and their specific drivers in nature (solid lines) and society (text boxes with dashed borders).
Drivers of Inequality
The most obvious cause of inequality is an inherent difference in competitive power of the actors (Fig. 2, I). Particular sets of traits give some species a competitive edge, just as in society some individuals have traits that
set them up for entrepreneurial success. Furthermore, dominance can be self-reinforcing. In most societies, wealth can come
with power to set the rules in ways that favor further wealth concentration (4, 13, 14). In contrast, in nature, dominant species tend to have a disadvantage due to a disproportionally higher burden from natural
enemies (15), as we discuss below. Some of the inequality in nature can be related to the fact that data represent abundance in terms
of counts of individuals, which tend to be higher for smaller species; however, the sizes are not all that different within
many of the analyzed communities (e.g., trees, rodents). In addition, our bacterial abundances are estimated from RNA, and the
patterns are quite similar, suggesting that inequality is driven mostly by other factors.
Surprisingly, chance may be another particularly powerful driver of inequality. Even if no actor is intrinsically superior
to others, inequality can emerge naturally if wealth (or abundance) is subject to random losses or gains (Fig. 2, II). This counterintuitive phenomenon is known from null models in ecology (16–18) as well as economics (2, 19). In society, gains and losses resulting from fluctuating financial stocks, business ownership, and other forms of wealth
have a multiplicative character. In nature, the effects of fluctuating weather and natural enemies on birth and death rates
have multiplicative effects on population sizes of all species. It is well known that multiplicative gains and losses tend
to lead to lognormal distributions (18–20). The extent of the inequality in such a distribution (e.g., in terms of Gini index) depends on its SD. As we show below,
for a finite world in which the gains of one actor imply losses of others, the effect of multiplicative random processes blows
up to create extreme inequality.
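The emergence of lognormal-like distributions from purely multiplicative chance is easy to reproduce numerically. The sketch below (parameter values are arbitrary and chosen only for illustration) multiplies each actor's wealth by a random factor at every step, so that log-wealth performs a random walk and the spread, and hence the inequality, keeps growing.

    import numpy as np

    rng = np.random.default_rng(0)
    n_actors, n_steps = 10_000, 200

    wealth = np.ones(n_actors)                       # everyone starts equal
    for _ in range(n_steps):
        # multiplicative shock: a few percent gained or lost each step
        wealth *= rng.lognormal(mean=0.0, sigma=0.05, size=n_actors)

    log_w = np.log(wealth)                           # approximately normal, so wealth is roughly lognormal
    print("SD of log-wealth:", round(log_w.std(), 2))        # grows roughly like sigma * sqrt(n_steps)
    richest = np.sort(wealth)[-n_actors // 100:]
    print("share held by the richest 1%:", round(richest.sum() / wealth.sum(), 2))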
Before getting to the fundamental explanation, we illustrate the phenomenon using two minimalistic null models. The first
model describes the dynamics of the wealth of economic actors (e.g., households) depending on a stochastic return on wealth (SI Appendix, section 4). The complement is an equally simple model of neutral ecological competition driven by stochastic growth rates (SI Appendix, section 5). We take the economic model as the central example. Starting with a perfectly equal distribution of wealth, inequality quickly
rises until a few actors appropriate most of the wealth (Fig. 3 A and B) and the vast majority ends up with almost zero wealth. Very much the same pattern arises from the ecological model (SI Appendix, section 5). The extreme inequality may seem surprising, as no actor is intrinsically better than the others in these entirely chance-driven
worlds. The explanation, mathematically, is that because the dynamics are multiplicative (gains and losses are proportional to the actual wealth),
absolute rates of change tend to zero as wealth goes to zero (19). This makes very low wealth a “sticky” state, in the sense that escaping it is extremely slow. The fundamental
effect of this mechanism can be seen most easily from a two-actor version of the model, where despite the absence of intrinsic
differences in competitive power, one of the actors entirely dominates at any given time (SI Appendix, section 6).
Fig. 3.
Examples showing how simulations of wealth of actors (Left) starting from an entirely equal situation quickly lead to inequality (Right) emerging solely from multiplicative gains and losses of otherwise equivalent competitors. The simulations shown in A and B are without savings, while those in C and D represent simulations with savings, illustrating that such an additive process reduces the tendency for hyperdominance generated
by the multiplicative gains and losses. The results are generated by a minimal model of wealth (SI Appendix, section 4). Similar results can be obtained from a model of neutrally competing species in a natural community (SI Appendix, section 5).
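The two-actor case mentioned above can be sketched in a few lines. The version below is our own minimal formulation, not necessarily identical to the model in SI Appendix, section 6: both actors receive statistically identical multiplicative shocks, and the total wealth is held fixed so that one actor's gain is the other's loss. The share of actor 1 then performs a random walk in log-odds, which is why one actor dominates almost all of the time, with only occasional reversals.

    import numpy as np

    rng = np.random.default_rng(1)
    n_steps = 200_000
    f = 0.5                                        # actor 1's share of the fixed total wealth
    shares = np.empty(n_steps)

    for t in range(n_steps):
        r1, r2 = rng.lognormal(0.0, 0.1, size=2)   # identical-odds multiplicative shocks
        f = f * r1 / (f * r1 + (1 - f) * r2)       # renormalize so total wealth stays constant
        shares[t] = f

    dominated = np.mean(np.maximum(shares, 1 - shares) > 0.99)
    switches = int(np.sum(np.diff((shares > 0.5).astype(int)) != 0))
    print("fraction of time one actor holds >99% of the wealth:", round(dominated, 2))
    print("number of shifts in dominance:", switches)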
The stickiness of the close-to-zero state does not imply irreversibility. On rare occasions, there are shifts in dominance,
illustrating that indeed, this kind of dominance results from chance rather than intrinsic superiority. For increasing numbers
of actors, the result remains the same, but as the dominant position is always taken by a small minority or a single actor,
the remaining small portion of wealth is shared by increasing numbers. The essential result is that intermediate wealth (the
middle class) is intrinsically unstable: any actor is pushed away from it, toward either the rich state or (more likely) the low-wealth state.
Those are “quasi-attractors” that occur only in the stochastically forced version of the otherwise entirely neutral model.
Although we are not aware of previous studies revealing the fundamental instability of intermediate wealth (or abundance)
in stochastic neutral models, there is a long history of modeling chance-driven inequality (2, 16–19). Most of those studies build on multiplicative chance effects, but there is also a somewhat separate line of work inspired
by the parallel between molecules in a gas exchanging momentum and actors in society exchanging money (21, 22). Gases tend toward a state of maximum entropy in which the energy of molecules follows an exponential distribution. On closer
look, inequality actually is not very great in such situations (SI Appendix, section 7). This makes intuitive sense, as the exchange of momentum among molecules can have an equalizing component. If one modifies
the rules to capture the nature of economic transactions more realistically (e.g., assuming that transfer is never more than
the capital of the poorer of the two, in either direction), then the predictions of such physics-inspired models (22) do come very close to the multiplicative dynamics that we described and can indeed produce great inequality (23) (SI Appendix, section 7). In addition, relatively elaborate and realistic agent-based models of artificial societies predict the inevitable emergence
of great inequality (24).
The bottom line when it comes to the drivers of inequality is that, all else being equal, great inequality tends to emerge
from chance alone. This is quite counterintuitive. Imagine a simple classroom game (SI Appendix, section 7) in which each participant gets $100 to start. During each round, a dice roll determines random fractional gains or losses
for each participant. The total classroom sum is kept constant in each step through a correction tax (a fixed percentage of
each player’s wealth). How unequal would the long-run outcome be expected to be? The surprising answer is that one of the
players will typically hold almost all of the money—not because that player is superior, but just by chance. As we show, such
inequality arises robustly from a wide range of models, including situations in which economies can grow or suffer occasional
destructions (SI Appendix, section 9).
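A direct simulation of such a classroom game is given below. The rules are a plausible reading of the setup, not the exact protocol of SI Appendix, section 7: a six-sided die mapped to gains or losses of up to 30%, and a proportional "correction tax" that rescales everyone's wealth so the classroom total stays at its initial value. Typically the richest player ends up holding most of the classroom total, while the median player is left with a tiny fraction of the initial $100.

    import numpy as np

    rng = np.random.default_rng(42)
    n_players, n_rounds = 30, 2_000
    wealth = np.full(n_players, 100.0)        # every participant starts with $100
    total = wealth.sum()

    # die faces mapped to fractional gains or losses (an assumption for illustration)
    factors = np.array([0.7, 0.8, 0.9, 1.1, 1.2, 1.3])

    for _ in range(n_rounds):
        wealth *= factors[rng.integers(0, 6, size=n_players)]   # each player rolls once per round
        wealth *= total / wealth.sum()        # proportional "correction tax" restores the total

    wealth.sort()
    print("share held by the richest player:", round(wealth[-1] / total, 2))
    print("median wealth:", round(float(np.median(wealth)), 4))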
Equalizing Mechanisms
There are essentially two classes of mechanisms that can reduce inequality: suppression of dominance (Fig. 2, III) or lifting the majority out of the sticky state close to zero (Fig. 2, IV). Starting with the latter, a small additive influx is a powerful antidote to the stickiness effect. In nature, local
populations typically receive a trickle of immigration that contributes to population size in such an additive way (25). In society, savings from income represent an additive contribution to wealth. Adding such a flux to our minimal model of
wealth allows more households to gain wealth, thus populating the middle class and regularly breaking episodes of dominance
by the previously dominant households (Fig. 3 C and D and SI Appendix, section 8). Very much the same effect is seen in the ecological model if populations receive a small additive influx of individuals
from neighboring populations (SI Appendix, section 5). The effect of an additive process is consistent with what we know from ecosystems and societies. In ecology, the “rescue
effect” of immigration preventing population extinction is well documented (26, 27). In societies, saving is plausibly a way out of poverty (28); however, both historical and contemporary rates of savings are often close to zero for most households. This is either
a result of consumption using up all income (because of low income or high consumptive wants) or the lack of need to save
(because of the presence of alternative systems to cover future needs or buffer shocks, including kinship and welfare systems).
Thus, while true poverty traps will contribute to wealth inequality, many households do not accumulate wealth in the developed
world either (29). Therefore, the observed wealth concentration in most societies is consistent with predictions of inequality driven by return
on assets in the absence of an additive saving process.
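The equalizing effect of an additive flux can be illustrated by adding a small constant "saving" term to a multiplicative chance model of the kind sketched earlier; the parameterization below is ours and only indicative of the qualitative effect described in SI Appendix, section 8.

    import numpy as np

    def gini(x):
        x = np.sort(np.asarray(x, dtype=float)); n = x.size
        return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

    def simulate(saving, n_actors=500, n_steps=5_000, sigma=0.1, seed=3):
        """Multiplicative chance model with an optional additive saving per step."""
        rng = np.random.default_rng(seed)
        wealth = np.ones(n_actors)
        total = wealth.sum()
        for _ in range(n_steps):
            wealth = wealth * rng.lognormal(0.0, sigma, n_actors) + saving
            wealth *= total / wealth.sum()    # finite world: total wealth is conserved
        return wealth

    print("Gini without savings:", round(gini(simulate(saving=0.0)), 2))            # expected to be near 1
    print("Gini with a small additive flux:", round(gini(simulate(saving=0.01)), 2))  # expected to be much lower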
Perhaps a more intuitive antidote to inequality is repression of dominance (Fig. 2, III). In nature, this is an omnipresent phenomenon. The most abundant species tend to suffer proportionally more from natural
enemies, including diseases, a mechanism that reduces dominance and allows a larger number of species to share resources (15). In the literature on microbial systems, this is known as the “kill the winner” principle (30). In societies, there is no comparable natural mechanism to constrain dominance. Occasional disasters, such as major wars,
may have an equalizing effect by destroying capital or inducing redistribution, but in the long run inequality generally returns
to the previous level (31). Economic growth also has been suggested to dampen inequality (2, 6, 29). However, analysis of our minimal model suggests that neither the occasional destruction of capital (in contrast to ref. 31) nor economic growth (in contrast to ref. 2) should be expected to markedly reduce inequality in the theoretical context of chance-driven dynamics (SI Appendix, section 9). On the other hand, societies do install institutions that may either sustain wealth inequality or reduce it and that have
long-lasting (or quasi-permanent) effects on levels of wealth inequality. Power associated with wealth tends to facilitate
further enrichment through the installation of wealth-protecting institutions, such as absolute property rights and the right
to inherit (4, 13, 31). In contrast, societies may also install institutions that dampen inequality, such as taxation schemes (4, 29) (SI Appendix, section 8).
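For completeness, the "kill the winner" idea can also be added to the same kind of sketch: give each actor an extra loss rate proportional to its current share of the total. The penalty term below is entirely our own illustrative choice; it caps the share of the most dominant actor but, unlike an additive influx, does nothing to lift the many actors stuck near zero.

    import numpy as np

    def max_share(penalty, n=500, steps=5_000, sigma=0.1, seed=7):
        """Largest individual share of the total after a run; 'penalty' scales the
        extra loss rate suffered in proportion to an actor's current share."""
        rng = np.random.default_rng(seed)
        wealth = np.ones(n)
        total = wealth.sum()
        for _ in range(steps):
            share = wealth / wealth.sum()
            wealth *= rng.lognormal(0.0, sigma, n) * (1.0 - penalty * share)
            wealth *= total / wealth.sum()    # keep total wealth constant
        return (wealth / total).max()

    print("largest share, no suppression of dominance:", round(max_share(0.0), 2))
    print("largest share, share-dependent extra losses:", round(max_share(0.5), 2))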
Long-Run Instability of Equalizing Mechanisms
Although the four forces that we have highlighted (Fig. 2) may shape much of the observed patterns of inequality, determining their relative importance is not easy. Occasionally,
however, perturbations of the balance provide valuable clues. In nature, the importance of repression of dominance (Fig. 2, III) is vividly illustrated by the occasional spectacular population explosion of newly invading species, explained by the
release from the natural enemies they left behind: the so-called “parasites lost” phenomenon (32). The balance is typically restored over the subsequent decades as natural enemies catch up with the newcomers.
In societies, control of wealth inequality is also notoriously unstable over time. The drop and rebound in inequality over
the last century has received much attention (2, 6, 29), but a careful analysis of historical sources reveals several waves of rising and falling inequality in history (4, 7, 33). Some of those cycles look surprisingly regular (33), suggesting that they might be governed by universal basic forces. Indeed, inequality and conflict are common elements across
historical analyses, even though precise mechanisms of their interaction differ among cases (7, 31, 33).
It is becoming increasingly clear that institutions can play a dominant and long-lasting role in shaping societal prosperity
and inequality (34). Indeed, on closer look, several historical cycles of inequality may be explained, at least in part, by the emergence of
equalizing institutions followed by periods during which various mechanisms undermined the effectiveness of these institutions
(4). An often-overlooked mechanism that may undermine the power of wealth-equalizing institutions is societal upscaling. Focusing
on Western Europe, we can see how in the Middle Ages, and especially in the 12th to 14th centuries, local communities reduced
inequality by limiting opportunities for transacting and accumulating land and capital and by developing mechanisms of redistribution
through guild or community systems. These operated at the local level, where most of the exchange and allocation of land and capital
took place (4). However, these town and village communities saw their institutional frameworks eroded by the growth of international trade,
migration, and interregional labor and capital markets, as well as by the process of state formation with the rise of more
centralized bureaucracies in the (early) modern period, triggering a long episode of rising inequality (5, 7, 35). In the late 19th century and early 20th century, institutions aimed at effectively constraining wealth accumulation were
developed at the level of the nation state, with the emergence of tax-funded welfare states. Perhaps the most conspicuous
of these institutions is the introduction of the inheritance tax, which limits wealth transfer to the next generation (2). Over the past decades, however, globalization has opened the way to more unconstrained use and accumulation of wealth (29). The financial playing field for the wealthiest is now global, and mobility of wealth has greatly increased, providing immunity
to national taxation and other institutional obstacles to wealth accumulation.
Prospects
Our analysis suggests that even if all actors are equivalent, in the absence of counteracting forces, there is an intrinsic
tendency for significant inequality to arise from multiplicative chance effects. Although the surprising similarity between
inequality of species abundances and wealth may have the same roots on an abstract level, this does not imply that wealth
inequality is “natural.” Indeed, in nature, the amount of resources held by individuals (e.g., territory size) is typically
quite equal within a species. While wealth inequality may have emerged as far back as the Neolithic era (31, 36), the relative amount of wealth appropriated by the richest has increased as societies have scaled up. One explanation for
this effect is scale itself. Put simply, one can accumulate less wealth in a village than across the globe. However, as we
have argued, another explanation is that installing effective institutions to dampen inequality becomes more challenging as
scale increases. Excessive concentration of wealth is widely thought to hamper economic growth, concentrate power in the hands
of a small elite, and increase the chance of social unrest and political instability (1, 2, 4, 37–39). This raises questions about the prospects for current societies. Phases of upscaling of governance successfully curbed
unconstrained growth of inequality first in the communities of late medieval Europe and later in the nation states of the
20th century, but in both cases, this was a lengthy and painful process. Whether scaling up of effective governance can now
be done at the global level and, if so, what this new form of governance might look like, remains unclear.
Acknowledgments
We thank Diego G. F. Pujoni, Gerben Straatsma, and Willem M. de Vos for assistance with our ecological data search and insightful
discussions on inequality in nature, and Jan Luiten van Zanden, Michalis Moatsos, and Wiemer Salverda for discussions on inequality
in society. This project was supported by the European Research Council Fund (339647, to B.v.B.). The work of M.S. is supported
by a Spinoza Award from the Dutch National Science Foundation.
Footnotes
Author contributions: M.S. designed research; B.v.B., I.A.v.d.L., and E.H.v.N. performed research; I.A.v.d.L. and E.H.v.N.
analyzed data; and M.S. and B.v.B. wrote the paper.
Conflict of interest statement: Simon A. Levin coauthored a review article published in Critical Care Medicine in 2016 with M.S., I.A.v.d.L., and E.H.v.N.
RALEIGH, NC., January 3, 2018 -- The
Great Internet Mersenne Prime Search (GIMPS) has discovered the
largest known prime number, 2^77,232,917-1, having 23,249,425
digits. A computer volunteered by Jonathan Pace made the find on
December 26, 2017. Jonathan is one of thousands of volunteers using
free GIMPS software available at www.mersenne.org/download/.
The new prime number, also known as M77232917, is calculated by multiplying
together 77,232,917 twos, and then subtracting one. It is nearly one million digits larger than
the previous record prime number,
in a special class of extremely rare prime numbers known as Mersenne primes.
It is only the 50th known Mersenne prime ever discovered, each increasingly
difficult to find. Mersenne primes were named for the French monk
Marin Mersenne,
who studied these numbers more than 350 years ago. GIMPS, founded in 1996, has discovered
the last 16 Mersenne primes.
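The digit count quoted above is easy to verify: for p > 1 the number of decimal digits of 2^p - 1 equals floor(p*log10(2)) + 1, since 2^p is never a power of 10 and so subtracting 1 does not change the digit count. A quick check in Python:

    import math

    p = 77_232_917
    digits = math.floor(p * math.log10(2)) + 1
    print(digits)        # 23249425 -- the 23,249,425 digits quoted in the announcement

    # For small exponents the Mersenne number itself is easy to compute exactly:
    print(2**13 - 1)     # 8191, an earlier (much smaller) Mersenne prime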
Volunteers download a free program to search for these primes, with a cash
award offered to anyone lucky enough to find a new prime.
Prof. Chris Caldwell maintains an authoritative web site on
the largest known primes,
and has an excellent history of Mersenne primes.
The primality proof took six days of non-stop computing on a PC with an Intel i5-6600 CPU.
To prove there were no errors in the prime discovery process, the new prime was independently verified
using four different programs on four different hardware configurations.
Aaron Blosser verified it using Prime95 on an Intel Xeon server in 37 hours.
David Stanfill verified it using
gpuOwL on an AMD RX Vega 64 GPU in 34 hours.
Andreas Höglund verified the prime using CUDALucas
running on an NVidia Titan Black GPU in 73 hours.
Ernst Mayer also verified it using his own program Mlucas
on a 32-core Xeon server in 82 hours. Andreas Höglund also confirmed the result using Mlucas running on an Amazon AWS instance in 65 hours.
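The programs named above are built around the Lucas-Lehmer test, the classical primality test for Mersenne numbers; the production software performs the repeated squaring with highly optimized FFT arithmetic rather than ordinary big-integer multiplication. A toy version for small odd prime exponents:

    def lucas_lehmer(p: int) -> bool:
        """Lucas-Lehmer test: for odd prime p, 2**p - 1 is prime iff the final term is 0."""
        m = (1 << p) - 1              # the Mersenne number 2^p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m       # the step GIMPS software accelerates with FFT multiplication
        return s == 0

    # Of the exponents 3..31, only 3, 5, 7, 13, 17, 19, and 31 yield Mersenne primes
    print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])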
Jonathan Pace is a 51-year-old electrical engineer living in Germantown, Tennessee. Perseverance
has finally paid off for Jon: he has been
hunting for big primes with GIMPS for over 14 years.
The discovery is eligible for a $3,000 GIMPS research discovery award.
GIMPS Prime95 client software was developed by founder George Woltman.
Scott Kurowski wrote the PrimeNet system software that coordinates GIMPS'
computers.
Aaron Blosser is now the system administrator, upgrading and maintaining PrimeNet as needed.
Volunteers have a chance to earn research discovery awards of $3,000 or $50,000
if their computer discovers a new Mersenne prime. GIMPS' next major goal is to win the
$150,000 award
administered by the Electronic Frontier Foundation offered
for finding a 100 million digit prime number.
Credit for this prime goes not only to Jonathan Pace for
running the Prime95 software, Woltman for writing the software, Kurowski and Blosser for
their work on the PrimeNet server, but also the thousands of GIMPS volunteers who sifted through millions of non-prime candidates.
In recognition of all the above people, official credit for this discovery goes to "J. Pace, G. Woltman, S. Kurowski, A. Blosser, et al."
About Mersenne.org's Great Internet Mersenne Prime Search
The Great Internet Mersenne
Prime Search (GIMPS) was formed in January 1996 by George Woltman to
discover new world record size Mersenne primes. In 1997 Scott Kurowski enabled GIMPS
to automatically harness the power of thousands of ordinary computers to search for these "needles in a
haystack". Most GIMPS members join the search for the thrill of possibly
discovering a record-setting, rare, and historic new Mersenne prime.
The search for more Mersenne primes is already under way. There may be
smaller, as yet undiscovered Mersenne primes, and there almost certainly are larger
Mersenne primes waiting to be found. Anyone with a reasonably powerful PC
can join GIMPS and become a big prime hunter, and possibly earn a cash research
discovery award. All the necessary software
can be downloaded for free at www.mersenne.org/download/.
GIMPS is organized as Mersenne Research, Inc., a 501(c)(3) science research
charity. Additional information may be found at
www.mersenneforum.org and
www.mersenne.org; donations are welcome.
For More Information on Mersenne Primes
Prime numbers have long fascinated both amateur and professional mathematicians.
An integer greater than one is called a prime number if its only divisors are
one and itself. The first prime numbers are 2, 3, 5, 7, 11, etc. For example,
the number 10 is not prime because it is divisible by 2 and 5.
A Mersenne prime is a prime number of the form 2^P-1.
The first Mersenne primes are 3, 7, 31, and 127 corresponding to
P = 2, 3, 5, and 7 respectively. There are now 50 known Mersenne primes.
Mersenne
primes have been central to number theory since they were first discussed by
Euclid about 350 BC. The man whose name they now bear, the French monk Marin Mersenne (1588-1648),
made a famous conjecture on which values of P would
yield a prime. It took 300 years and several important discoveries in
mathematics to settle his conjecture.
At present there are few practical uses for this new large prime,
prompting some to ask "why search for these large primes"? Those
same doubts existed a few decades ago until important cryptography
algorithms were developed based on prime numbers. For seven more
good reasons to search for large prime numbers, see here.
Previous GIMPS Mersenne prime discoveries were made by members
in various countries.
In January 2016, Curtis Cooper et al. discovered
the 49th known Mersenne
prime in the U.S.
In January 2013, Curtis Cooper et al. discovered
the 48th known Mersenne
prime in the U.S.
In April 2009, Odd Magnar Strindmo et al. discovered
the 47th known Mersenne
prime in Norway.
In September 2008, Hans-Michael Elvenich et al. discovered
the 46th known Mersenne
prime in Germany.
In August 2008, Edson Smith et al. discovered
the 45th known Mersenne
prime in the U.S.
In September 2006, Curtis Cooper and Steven Boone et al. discovered
the 44th known Mersenne
prime in the U.S.
In December 2005, Curtis Cooper and Steven Boone et al. discovered
the 43rd known Mersenne
prime in the U.S.
In February 2005, Dr. Martin Nowak et al. discovered
the 42nd known Mersenne
prime in Germany.
In May 2004, Josh Findley et al. discovered
the 41st known Mersenne
prime in the U.S.
In November 2003, Michael Shafer et al. discovered
the 40th known Mersenne
prime in the U.S.
In November 2001, Michael Cameron et al. discovered the 39th Mersenne prime
in Canada.
In June 1999, Nayan Hajratwala et al. discovered the 38th Mersenne prime
in the U.S.
In January 1998, Roland Clarkson et al. discovered the 37th Mersenne prime
in the U.S.
In August 1997, Gordon Spence et al. discovered the 36th Mersenne prime
in the U.K.
In November 1996, Joel Armengaud et al. discovered the 35th Mersenne prime
in France.
Euclid proved that every Mersenne prime generates a perfect number. A perfect
number is one whose proper divisors add up to the number itself. The smallest
perfect number is 6 = 1 + 2 + 3 and the second perfect number is 28 = 1 + 2 + 4
+ 7 + 14. Euler (1707-1783) proved that all even perfect numbers come from
Mersenne primes. The newly discovered perfect number is 2^77,232,916 x (2^77,232,917-1). This
number is over 46 million digits long!
It is still unknown if any odd perfect numbers exist.
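The Euclid-Euler correspondence described above is easy to check for small cases: each Mersenne prime 2^p - 1 gives the even perfect number 2^(p-1) x (2^p - 1). A brute-force check (practical only for small numbers):

    def proper_divisor_sum(n: int) -> int:
        """Sum of the proper divisors of n (all divisors except n itself)."""
        return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

    # Mersenne-prime exponents 2, 3, 5, 7 give the perfect numbers 6, 28, 496, 8128
    for p in (2, 3, 5, 7):
        perfect = 2**(p - 1) * (2**p - 1)
        print(p, perfect, proper_divisor_sum(perfect) == perfect)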
There is a unique history to the arithmetic algorithms underlying the GIMPS
project. The programs that found the recent big Mersenne primes are based on a
special algorithm. In the early 1990's, the late
Richard Crandall, Apple Distinguished
Scientist, discovered ways to double the speed of what are called convolutions
-- essentially big multiplication operations. The method is applicable not
only to prime searching but other aspects of computation. During that work
he also patented the Fast Elliptic Encryption system, now owned by
Apple Computer, which uses Mersenne primes to quickly encrypt and decrypt
messages. George Woltman implemented Crandall's algorithm in assembly
language, thereby producing a prime-search program of unprecedented
efficiency, and that work led to the successful GIMPS project.
School teachers from elementary through high-school grades have used
GIMPS to get their students excited about mathematics. Students who run the
free software are contributing to mathematical research. David Stanfill's and Ernst Mayer's
verification computations for this discovery were donated by
Squirrels LLC
(http://www.airsquirrels.com)
which services K-12 education and other customers.