The following hardware is used for the development of Muen. There is a good
chance that similar hardware works out of the box if the microarchitecture is
Ivy Bridge or newer.
Intel NUC 5i5MYHE (Broadwell, i5-5300U)
Cirrus7 Nimbus (Haswell, i7-4765T)
Lenovo ThinkPad T440s (Haswell, i7-4600U)
Lenovo ThinkPad T430s (Ivy Bridge, i7-3520M)
Intel NUC DC53427HYE (Ivy Bridge, i5-3427U)
Kontron Technology KTQM77/mITX (Ivy Bridge, i7-3610QE)
The first step to build Muen is to install the required packages:
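On a Debian or Ubuntu host this typically boils down to a single apt-get call. The package names below are only an illustrative sketch, not the project's authoritative list, which is given in the Muen documentation:
$ sudo apt-get install git make xorriso   # illustrative subset only; see the Muen docs for the full list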
The Ada and SPARK packages currently available in Debian and Ubuntu are too old
to build Muen. GNAT/SPARK GPL 2016 from AdaCore’s [libre] site must be
installed instead. Extend your PATH to make the GPL compiler and tools
visible to the Muen build system (assuming that they are installed below /opt):
$ export PATH=/opt/gnat/bin:/opt/spark/bin:$PATH
To build the Muen tools, RTS, kernel and example components, change to the Muen
source directory and issue the following command:
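Assuming the default target of the top-level Makefile covers the complete demo system, this is simply:
$ make   # default target; builds the tools, RTS, kernel and example components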
This will create an image containing the example system which can be booted by
any Multiboot [mboot] compliant bootloader.
On Ubuntu 16.04.1 you might encounter an error of the form:
/usr/lib/x86_64-linux-gnu/crti.o: unrecognized relocation (0x2a) in section .init
If this is the case, rename the linker binary ld of GNAT GPL 2016 in order to
use the one provided by Ubuntu.
$ cd /opt/gnat/libexec/gcc/x86_64-pc-linux-gnu/4.9.4
$ mv ld ld-archive
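As an illustration of the final boot step, a Multiboot-capable bootloader such as GRUB 2 could load the resulting image with a menu entry along the following lines; the file name and path are hypothetical placeholders for the artefact actually produced by your build:
menuentry "Muen example system" {
    # hypothetical location; substitute the image created by the Muen build
    multiboot /boot/muen/muen_image
}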
One drab afternoon a few years ago something very unusual happened to me.
I was lounging under a tree in a packed east London park when I experienced a sudden feeling of vertigo, followed immediately by an overwhelming and intense sense of familiarity.
The people around me vanished and I found myself lying on a tartan picnic blanket amid a field of high golden wheat. The memory was rich and detailed. I could hear the sway of the wheat ears as a gentle breeze brushed through them. I felt warm sunlight on the back of my neck and watched as birds wheeled and floated above me.
It was a pleasant and extremely vivid recollection. The problem was that it never actually happened. What I was experiencing was an extreme form of a very common mental illusion: déjà vu.
We view our memories as sacred. One of the most fundamental doctrines of Western philosophy was established by Aristotle. He saw a newborn baby as a kind of empty ledger, one that is gradually filled as the child grows and accumulates knowledge and experience.
Whether it’s how to tie a shoelace or recalling your first day at school, memories make up the autobiographical map that helps us navigate the present day. Jingles from old television adverts, the name of the second-to-last prime minister, the punchline to a joke: memories are the constituent parts of individual identities.
Most of the time memory systems run quietly in the background as we go about the business of everyday life. We take their efficiency for granted. Until, that is, they fail.
For the past five years I have been suffering epileptic seizures resulting from the growth and eventual removal of a lemon-sized tumour from the right-hand side of my brain. Before my diagnosis I appeared fit and healthy: I was in my mid-30s and displayed absolutely no symptoms. Until, that is, the afternoon that I woke up on the kitchen floor with two black eyes after suffering my first recorded seizure.
Seizures, or fits, occur after an unanticipated electrical discharge in the brain. They are usually preceded by something called an ‘aura’, a sort of minor foreshock lasting anything up to a couple of minutes before the main event begins. The nature of this aura differs greatly from patient to patient. Some people experience synaesthesia, extreme euphoria and even orgasm at the onset of a seizure. My own aren’t nearly as exciting-sounding, being distinguished by sudden shifts in perspective, a rapidly increased heart rate, anxiety, and the occasional auditory hallucination.
Pioneering English neurologist John Hughlings Jackson was the first to define the epileptic aura, observing in 1898 that its hallmarks included vivid memory-like hallucinations, often alongside the feeling of déjà vu. “Old scenes revert,” one patient told him. “I feel,” said another, “in some strange place.”
By far the most significant trait of my aura is the striking sense of having lived through that precise moment before at some point in the past – even though I never have. During my most intense seizures, and for a week or so afterwards, this feeling of precognition becomes so pervasive that I routinely struggle to discern the difference between lived events and dreams, between memories, hallucinations and the products of my imagination.
I don’t remember déjà vu happening with any kind of regularity before the onset of my epilepsy. Now it occurs with varying degrees of magnitude up to ten times a day, whether as part of a seizure or not. I can find no pattern to explain when or why these episodes manifest themselves, only that they usually last for the length of a pulse before vanishing.
Many of the estimated 50 million people in the world with epilepsy experience long-term memory decline and psychiatric problems. And it’s hard for me not to worry whether the blurring of fact and fiction that I experience might one day engender a kind of mania. By trying to understand more about déjà vu, I’m hoping to make sure that I never lose my way on the path back to reality from that same ‘strange place’.
§
In Catch-22, Joseph Heller described déjà vu as “a weird, occult sensation of having experienced the identical situation before in some prior time or existence”. Peter Cook put it his own way in a magazine column: “All of us at one time or another have had a sense of déjà vu, a feeling that this has happened before, that this has happened before, that this has happened before.”
Taken from the French for ‘already seen’, déjà vu is one of a group of related quirks of memory. Research from 50 different surveys suggests that around two-thirds of healthy people have experienced déjà vu at one time or another. For the majority, it is dismissed as a curiosity or a mildly interesting cognitive illusion.
While déjà vu is instantaneous and fleeting, déjà vécu (already lived) is far more troubling. Unlike déjà vu, déjà vécu involves the sensation that a whole sequence of events has been lived through before. What’s more, it lacks both the startling aspect and instantly dismissible quality of déjà vu.
A defining feature of the normal déjà vu experience is the ability to discern that it isn’t real. On encountering déjà vu, the brain runs a sort of sense check, searching for objective evidence of the prior experience and then disregarding it as the illusion that it is. People with déjà vécu have been known to lose this ability completely.
Professor Chris Moulin, one of the foremost experts on the déjà experience, describes a patient he encountered while working at a memory clinic at a hospital in Bath, England. In 2000, Moulin received a letter from a local GP referring an 80-year-old former engineer known as AKP. As a result of gradual brain-cell death caused by dementia, AKP was now suffering from chronic and perpetual déjà vu: déjà vécu.
AKP claimed that he had given up watching television or reading the newspaper because he knew what was about to happen. “His wife said that he was someone who felt as though everything in his life had happened before,” says Moulin, now at the Laboratoire de Psychologie et NeuroCognition CNRS in Grenoble. AKP was resistant to the idea of visiting the clinic because he felt as though he’d already been there, despite the fact that he never had. On being introduced to Moulin for the first time, the man even claimed to be able to give specific details of occasions that they had met before.
AKP did retain some self-awareness. “His wife would ask him how he could know what would happen in a television programme if he’d never seen it before,” says Moulin, “to which he would respond, ‘How would I know? I have a memory problem.’”
On that day in the park, my vision of the picnic blanket and the wheat field disappeared when a paramedic began to shake my shoulder. Despite the fact that my memories had been hallucinations, they still felt as valid as any truly autobiographical memory. Moulin classes this as a form of déjà experience in which an image is somehow imbued with a sense of reality.
“Our feeling is that déjà vu is caused by a sense of familiarity,” he says. “Rather than just feeling like something has a feeling of ‘pastness’ about it, something comes to mind that has a phenomenological characteristic, so that it appears to be a real reminiscence.”
Other patients of Moulin’s have exhibited what cognitive scientists call ‘anosognosic’ tendencies, either being unaware of their condition or lacking the immediate capacity to tell memory from fantasy. “I spoke to one woman who said that her feelings of déjà vu were so strong that they were for her exactly like autobiographical memories,” Moulin tells me. “Some of the things that happened to her were quite fantastic; she’d have memories of taking helicopter flights. These memories were hard for her to overcome because she had to spend a long time trying to work out whether something had happened.”
After his first encounter with AKP, Moulin began to become interested in the causes of déjà vu and how subjective feelings can interfere with day-to-day memory processes. Discovering that there was very little credible literature describing the causes of déjà vu, Moulin and colleagues at the Language and Memory Lab at the Institute of Psychological Sciences, University of Leeds, began to study epileptics and other sufferers of profound memory defects in order to draw conclusions about déjà experiences in the healthy brain and explore what déjà vu means for the workings of consciousness generally.
They were faced with an immediate problem: déjà vu experiences can be so transitory and short-lived that they are almost impossible to recreate in clinical conditions. The job that they faced, then, was one of trying to catch lightning in a bottle.
§
Émile Boirac was a 19th-century psychic researcher and parapsychologist with an interest in clairvoyance typical of the Victorian era. In 1876, he wrote to a French philosophy journal to describe his experience of arriving in a new city but feeling as though he had visited it before. Boirac coined the phrase déjà vu. He suggested that it was caused by a sort of mental echo or ripple: that his new experience simply recalled a memory that had previously been forgotten.
While this theory is still considered plausible, subsequent attempts at explaining déjà vu experiences have tended towards the more outlandish.
Sigmund Freud’s 1901 book The Psychopathology of Everyday Life is most famous for exploring the nature of the Freudian slip, but it also discusses other defects in the process of recollection. The book documents one female patient’s déjà experiences: on entering a friend’s house for the first time, the woman had the feeling that she had visited before and claimed to know each successive room in the house before she walked through it.
What Freud’s patient experienced as she walked through the house would now be described specifically as déjà visité, or ‘already visited’. Freud attributed his patient’s feeling of déjà visité to the manifestation of a repressed fantasy that only surfaced when the woman encountered a situation analogous to an unconscious desire.
Again, this theory hasn’t been entirely discredited, although somewhat typically Freud also suggested that déjà vu could be traced back to a fixation on the mother’s genitals, the sole place that, he wrote, “one can assert with such conviction that one has been there before”.
The accepted scientific definition of déjà vu was formulated by South African neuropsychiatrist Vernon Neppe in 1983 as “any subjectively inappropriate impression of familiarity of the present experience with an undefined past”. He also identified 20 separate forms of déjà experience. Not all of them were related to seeing: one of Chris Moulin’s patients was a man who had been blind since birth yet claimed to have experienced déjà vu, while Neppe’s descriptions of déjà experiences include déjà senti (already felt) and déjà entendu (already heard).
Freud’s diagnosis of déjà vu as a purely psychological phenomenon – rather than one caused by neurological errors – had the unfortunate effect of shifting explanations for déjà experiences towards the absurdly mystical.
In 1991 a Gallup poll of attitudes towards déjà vu placed it alongside questions about astrology, paranormal activity and ghosts. Many people consider déjà vu to be outside the realm of everyday cognitive experience, with assorted cranks and crackpots claiming it to be incontrovertible proof of extrasensory perception, alien abduction, psychokinesis or past lives.
It’s not hard for me to feel sceptical about this last explanation in particular, but these fringe theories mean that déjà vu has received very little attention from mainstream science. Only now, almost 150 years after Émile Boirac invented the phrase, are researchers like Chris Moulin beginning to understand what actually causes the system errors in what neuroscientist Read Montague memorably called the “wet computer” of the brain.
§
The hippocampus is a beautiful looking thing. The mammalian brain contains two hippocampi, positioned symmetrically at the bottom of the brain. ‘Hippocampus’ is the ancient Greek word for seahorse, and there’s a resemblance in the way that a seahorse’s delicate tail coils in on itself to meet its long snout. It’s only in the last 40 years that we have really begun to understand what these delicate structures do.
Scientists used to think of memories as being arranged together tidily in one place, like documents in a filing cabinet. This consensus was overturned in the early 1970s when cognitive neuroscientist Professor Endel Tulving proposed his theory that memories actually belong to one of two distinct groups.
What Tulving called “semantic memory” refers to general facts that have no real bearing on personality, being independent of personal experience. “Episodic memories”, meanwhile, consist of recollections of life events or experiences. The fact that the Natural History Museum is in London is a semantic memory. The time that I visited it on a school trip at the age of 11 is an episodic one.
Aided by advances in neuroimaging, Tulving discovered that episodic memories are generated as small pieces of information at different points across the brain and then reassembled into a coherent whole. He saw this process as akin to actually experiencing episodic memories again. “Remembering,” he said in 1983, “is mental time travel, a sort of reliving of something that happened in the past.”
Many of these memory signals arose from the hippocampus and the area surrounding it, suggesting that the hippocampus is the brain’s librarian, responsible for receiving information already processed by the temporal lobe, then sorting, indexing and filing it as episodic memory. Just as a librarian might order books by subject matter or author, so the hippocampus identifies common features between memories. It might use analogy or familiarity, for example grouping all memories of various museum visits together in one place. These commonalities are then used to link the constituent parts of episodic memories together for future retrieval.
It’s no coincidence that people with epilepsy whose seizures tend to trigger déjà vu are those whose seizures originate in the part of the brain most involved with memory. Nor is it surprising to learn that temporal lobe epilepsy affects episodic memory more than semantic memory. My own epilepsy originates in the temporal lobe, a region of the cerebral cortex tucked behind the ear and responsible primarily for the processing of incoming sensory information.
In his book The Déjà Vu Experience, Professor Alan S Brown offers 30 different explanations for déjà vu. According to him, any one alone may be enough to trigger a déjà experience. As well as a biological dysfunction like epilepsy, Brown writes that stress or tiredness could cause déjà vu.
My experiences of déjà vu began during the long period of recuperation following my brain surgery, a time spent almost entirely indoors, moving in and out of a series of semi-conscious states that mostly included being sedated with opiates, sleeping and watching old movies. This recuperative twilight state might have made me more susceptible to déjà experiences, through being fatigued, taking in an excess of sensory information or relaxing to the point of being comatose. But my situation was clearly an unusual one.
Brown is also a proponent of what is called the divided perception theory. First described in the 1930s by Dr Edward Bradford Titchener, divided perception refers to the times when the brain isn’t quite paying enough attention to its surroundings. Titchener used the example of a person about to cross a busy street before being distracted by a shop window display. “As you cross,” he wrote, “you think, ‘Why, I crossed this street just now’; your nervous system has severed two phases of a single experience, and the latter appears as a repetition of the earlier.”
For much of the last century this idea was accepted as a plausible trigger of déjà vu. Another common explanation was one offered by a doctor working at the Boston veterans’ hospital. In 1963 Robert Efron suggested that déjà vu could be caused by a sort of processing error: he believed that brains were responsible for assimilating events through the temporal lobe before then adding a sort of timestamp to them to determine when they happened.
Efron saw déjà vu as resulting from the lag between seeing and adding that timestamp: if the process took too long, the brain would think that an event had already happened.
But Alan Brown and Chris Moulin both agree that the way that the hippocampus indexes memories by cross-referencing them according to familiarity is a more likely cause of déjà vu.
“My belief is that a pre-seizure déjà vu experience is triggered by spontaneous activity in that area of the brain that handles familiarity evaluations,” says Brown. Probably, he says, in the area surrounding the hippocampus, and most likely on the right side of the brain. The precise point at which I have a lemon-shaped hole.
§
At Duke University’s Department of Psychology & Neuroscience, Alan Brown and Elizabeth Marsh devised an experiment to test Brown’s theory that déjà vu experiences are caused by an error when the hippocampus does its job of grouping memories. At the start of the experiment, students from Marsh’s and Brown’s universities (Duke and the Southern Methodist University in Dallas) were briefly shown photographs of locations – dorm rooms, libraries, classrooms – on the two campuses.
A week later the students were shown the same pictures, this time with new images inserted into the set. When asked if they’d visited all of the locations in the photographs, a portion of the students replied yes – even if the photograph in question was of the rival campus. Many university buildings look the same, so by planting the seed of confusion about which places the students had actually visited, Brown and Marsh were able to conclude that just one element of an image or experience can be enough for the brain to call up a familiar memory.
Chris Moulin and his University of Leeds colleague Dr Akira O’Connor had already recreated déjà vu in lab conditions in 2006. Their aim was to find out more about the process of memory retrieval by exploring the difference between the brain registering an experience and then running that sense check to see if the same experience had actually occurred before or not.
Moulin suggests that déjà vu is caused by a “momentary overinterpretation of familiarity, something that comes about through panic or stress or that triggers a sense of something other. You’ve got this very excitable part of the brain which is just scanning the environment all the time looking for familiarity,” he says, “and something goes on in déjà vu which means [there’s] some other information inbound later that says: ‘This can’t be familiar.’”
Moulin concluded that the brain operates a sort of spectrum of memory retrieval, ranging from the completely successful interpretation of visual memory at one end to a full state of perpetual déjà vécu at the other. At some point along the spectrum lies déjà vu – neither as serious as déjà vécu, nor as seamless as the way that the brain should be working.
Moulin also suggests that somewhere in the temporal lobe is a mechanism for regulating the process of remembering. Problems with this – like the ones caused by my temporal lobe epilepsy – can leave patients without any kind of fallback to let them know that what they’re seeing has never happened to them before, effectively trapping them forever in a Moebius strip of memory.
But why do normally healthy people encounter it?
Brown suggests that déjà vu happens to healthy people only a few times a year at most, but can be stimulated by environmental factors. “People experience it mainly when they are indoors,” he says, “doing leisure activities or relaxing, and in the company of friends; fatigue or stress frequently accompany the illusion.” He says that déjà vu is relatively brief (10 to 30 seconds), and is more frequent in the evening than in the morning, and on the weekend than on weekdays.
Some researchers claim a connection between the ability to remember dreams and the likelihood of experiencing déjà vu. In his work, Brown suggests that although déjà vu occurs equally in women and men, it is more common in younger people, those who are well-travelled, those who earn higher incomes and those whose political and social outlooks are more liberal.
“There are some plausible explanations for this,” he tells me. “People who travel more have more opportunities to encounter a new setting that they may find strangely familiar. People with liberal beliefs may be more likely to admit to having unusual mental experiences and willing to figure them out. A conservative mindset would likely avoid admitting to having strange mental events, as they might be seen as a sign that they are unstable.
“The age issue is a puzzle because memory usually gets more quirky as we age, rather than the other way around. I would guess that young people are more open to experiences and more in touch with unusual mental happenings.”
One of the first comprehensive studies of déjà vu was conducted in the 1940s by a New York undergraduate student called Morton Leeds. Leeds kept an extraordinarily detailed diary of his frequent déjà experiences, noting 144 episodes over the course of a year. One of these episodes, he wrote, was “so strong that it almost nauseated me”.
Following my most recent seizures I’ve experienced something similar. The shock of repeated déjà vu isn’t physical, necessarily, but instead causes a kind of psychic pain that can feel physically sickening. Dream images suddenly interrupt normal thoughts. Conversations seem to have already taken place. Even banal things like making a cup of tea or reading a particular newspaper headline seem familiar. It feels occasionally like I’m flicking through a photo album containing nothing but the same picture reproduced endlessly.
Some of these sensations are easier to dismiss than others. Coming closer to finding an answer to what causes déjà vu also means approaching a kind of resolution for my more persistent déjà episodes, the ones that are the hardest of all to live with.
The night before completing this piece I had another seizure. The deadline had clearly been on my mind, as I suddenly had an intense memory of sitting down to write these closing sentences. When I regained my composure enough to read the finished article the next day, there was nothing here but blank space. It was another illusion. Now I’m actually typing this conclusion. It is, to borrow a famous solecism, like déjà vu all over again.
The vast majority of online content creators fund their work with advertising. That means they want the ads that run on their sites to be compelling, useful and engaging: ones that people actually want to see and interact with. But the reality is, it’s far too common that people encounter annoying, intrusive ads on the web, like the kind that blare music unexpectedly, or force you to wait 10 seconds before you can see the content on the page. These frustrating experiences can lead some people to block all ads, taking a big toll on the content creators, journalists, web developers and videographers who depend on ads to fund their content creation.
We believe online ads should be better. That’s why we joined the Coalition for Better Ads, an industry group dedicated to improving online ads. The group’s recently announced Better Ads Standards provide clear, public, data-driven guidance for how the industry can improve ads for consumers, and today I’d like to share how we plan to support it.
BioImplant™ is the world’s first – and only – custom milled, fully ceramic, truly anatomic, root analogue, immediate dental implant which can be placed without surgery. BioImplant™ is not a simple copy of the root, but provides an individual surface based on the “Principle of Differentiated Osseointegration”(patented), with the roughest surface (patented) in the industry, delivering enhanced osseointegration.
Even after 11 years, the major implant manufacturers and their recruited dental experts, mainly at university clinics, are still completely unaware and remain ignorant (!) of this simple and logical dental implant solution (see Semmelweis reflex), and thus BioImplant™ is still only available in our Vienna clinic.
It is simply not possible to discuss a non-surgical implant solution with dental surgeons who make money with surgery and by teaching drilling protocols and guidelines at congresses. History will judge such unbelievably ignorant scientists, who act to the disadvantage of the patient.
The leading-edge “simply fit” technology simplifies the implant placement procedure to such an extent that it is impossible to make it simpler. Our immediate dental implants can be placed completely non-surgically, in less than one minute, by any general dentist.
This most innovative method is simple, logical, and non-surgical, with no metal ever showing through and causing an esthetic failure. Such a guarantee will never be given by any dentist or company using titanium screw-type implants from the last century.
Unfortunately, even with 11 years’ follow-up, this fast, gentle, most natural and aesthetic treatment is exclusively available for the discerning patient in Vienna/Austria, but we are working hard to find investors to bring this implant solution to your dentist wherever you are living.
Nearest To Nature
Our anatomic implant replicates the natural form and color of the tooth, so it simply fits into the tooth socket. And just like your original tooth, BioImplant™ comes in single- and multi-rooted forms.
Biocompatible
Zirconia has the best biocompatibility of any implant material currently available.
No Drilling/Cadaver bone
No drilling or surgery, or cadaver bone, is necessary. You never need a sinus lift. The implant is quickly placed with the simplest tools in less than one minute.
Immediate
Gently placed into the tooth socket immediately after tooth removal. Injury to neighbouring roots, nerves or the sinus is impossible.
Esthetic
The all-ceramic structure closely resembles a natural tooth in color. You’ll never experience discoloration, as commonly seen with titanium implants.
Simple and Logical
We use your existing tooth socket, and always adapt the implant to you – never the other way around. No operation is needed because the implant just fits!
Watch this amazing 1 minute implant placement video.
Why is BioImplant™ Unique?
Each BioImplant™ is fully metal-free and specially customized to perfectly fit only you (it’s as unique as your fingerprint). Since it exactly fills the gap left after your tooth is extracted, you never need surgery.
The treatment (currently only available in our Vienna clinic) consists of three simple steps:
Careful tooth extraction;
Wait a day while your implant is prepared;
Gentle placement of the BioImplant™.
Recovery time is very fast, as neither soft nor hard tissue is traumatized; as a rule there is no swelling, bruising or pain the next day. After a healing period of 8–12 weeks, the final crown may be fitted by your dentist.
BioImplant™ provides a natural dental implant in form and colour which perfectly matches your extraction socket
Every tooth is unique, with a corresponding unique tooth socket
After a tooth is pulled, it leaves a unique, irregular extraction socket, sometimes with up to four roots.
An implant surgeon makes you fit an ‘off-the-shelf’ screw by drilling healthy bone and filling voids
A common dental implant screw can’t fit a naturally irregular tooth socket – the bone must be drilled and filled with cadaver/bovine bone or bone substitutes before the implant can be screwed in. Drilling healthy bone leads to severe alteration of the anatomy.
BioImplant “simply fits” and therefore the anatomic implant is placed without surgery
BioImplant™ is a ceramic dental implant that fits the tooth socket perfectly, and is placed without any surgery, leaving the anatomy undisturbed. It’s the most natural esthetic alternative to a real tooth.
Our patients come to us not for surgery, but for an exclusive solution.
Of all dental implant companies, we alone provide a modern, ceramic, fully metal free, non-surgical solution for an artificial surgical problem.
Now that everything else in dentistry is produced using modern computerized CAD/CAM technology, why not implants? No serious argument against this has ever been entered into by any dental expert in the last 10 years: why not use such modern technology to avoid surgery? Obviously the implant industry and its recruited experts prefer this situation, and that is the reason why they are still completely unaware of root analogue dental implants. But at the same time, many general dentists and their patients are open to modern technology and genuinely interested in a non-surgical, metal-free, immediate implant solution (see the overwhelming feedback from dentists and patients on our BioImplant Facebook page).
A titanium screw type implant means a lot of surgery. A hi-tech custom ceramic implant means simply no surgery.
The better the implant fits you, the less surgery you have to expect. Yet not a single dental association in the world has ever delivered one reasonable argument against this; inexplicably, they won’t even consider joining the scientific research into this root analogue implant solution. No one can argue against this logical approach to immediate implantology.
Industry likes simple titanium screws in every conceivable variation.
In reality, titanium screw implants are cheap and easy to produce, but they generate a lot of risky surgery (and money). The implant industry profits hugely from making you fit the implant, not from making the implant simply fit you. This is why the screw implants and their prosthetic parts alone cost as much as a high-end smartphone. Additional profits come from sales of bone or bone substitutes, drilling kits, drilling guides and membranes, and from teaching dentists complicated guidelines.
Surgeons like simple titanium screws because they mean more surgery. More surgery means more money.
As well as the cost of the implant, there are additional costs for the accompanying surgery and its associated risks, and for the system of tooling and training that has grown up around it. Industry experts like to deliver expensive training courses, teaching the dentist drilling sequences, guidelines and troubleshooting.
Patients certainly don’t like surgery.
We are calling on you, as a patient or dentist, to help lift dental implantology out of the stone age. Ten years of ignorance is proof that the unintelligent, industry-recruited drill-and-fill expert system can’t change itself; it must be changed from outside – by patients and hard-working general dentists who serve their patients with the best possible solution they can offer, without any surgical risks.
As a patient, please make your dentist aware of BioImplant™, spread the news on social media, or see our upcoming crowdfunding campaign.
If you are a dentist who is truly interested in this new field of implantology, you are welcome to contact us so we can inform you when we start with our first roll-out. Be part of the paradigm shift. Be a game changer. This technical solution opens the door to a cheaper and more efficient treatment for you and your patients.
A real revolution always comes from the people who no longer want to get screwed by a system and by the few who profit from it. The scientific output of 60 years of osseointegration research is simply that any of the approximately 3,500 screw variations on the market will heal in if the dental surgeon is able to make you fit the screw he prefers.
Anyway, you’re still welcome to come to Vienna if you’re in need of an implant solution, or if you’re a dentist who just wants to see or hear more about BioImplant™. But our goal is to bring this to your family dentist so that you need not travel internationally.
What’s Wrong With Screw-Type Implants?
Implant companies won’t tell you that screw-type implants are unnatural in form and color, and – most importantly – that they don’t even fit the tooth socket.
Even a single-rooted tooth is nearly twice as wide in one direction as in the other. A cylindrical screw implant cannot fit into a tooth socket without surgery, such as:
Drilling into healthy bone;
Filling gaps between implant and bone either with bone or bone substitutes;
Sinus lift procedures, with all their invasive consequences…
Titanium screws are prone to periimplantitis and plaque accumulation leading to further interventions.
This technology has not progressed significantly in 60 years, leading to over a thousand screw type variations, instead of real innovation. Present immediate dental implant procedures are unnecessarily invasive, expensive, and inefficient, and esthetic outcome is highly unpredictable. To get a brief overview of the unbelievable variety of dental implants, just visit OSSEOsource or MedicalExpo.
We hope you are not paralyzed by choice – or do you know which implant fits you best? Don’t worry, your dentist will make you fit any screw he has on his shelf, with some surgery…
What’s wrong with root fillings?
Most teeth have more than one canal. With a root canal treatment, the tooth will be lost if there is a single failure in any one of the roots. So, for example, treatment may be 90% successful for one canal, but with three canals the risks add up: the chance that all three succeed is only about 73% (0.9 × 0.9 × 0.9), leaving roughly a one-in-four chance that at least one fails. If there is failure in one canal, the whole tooth is at risk. Therefore, treatment must be successful in all canals. (It’s comparable to a car with only one flat tyre: it won’t drive far!)
Even if a root canal treatment works, some bacteria always remain in the tooth; hopefully they are sealed in by the root filling.
BioImplant™ is the alternative to root canals. Currently, the success rate is 90%, but this will improve with better technology and if the indication is narrowed. Even so, if you want to avoid any toxic root canal fillings and residual bacteria, it’s always better than root canal treatment.
Pro stone skipper Max "Top Gun" Steiner practices at Mackinac Island
Photo by Ryan Seitz. Courtesy of the documentary Skips Stones For Fudge
Late one afternoon last summer, our family arrived at a campsite on the western shore of Lake Michigan. We had been driving all day, across Wisconsin on our way further east. The four of us—my wife and two daughters, ages 7 and 10—set up our tent, made dinner, then went down to the water. Two-foot waves were rolling across the lake, a taste of what lay ahead: We were going to the Mackinac Island Stone Skipping Competition—the oldest, most prestigious rock-skipping tournament in the United States, if not the world. Every Fourth of July, elite skippers (many former and current world-record holders) take turns throwing their stones into the waters where lakes Huron and Michigan meet, also known for having rolling, two-foot waves crashing on the beach.
I looked down, saw a decent skipping stone, and picked it up. My daughters were watching. The older one spoke up.
“Are you prepared for the fact that you probably won’t win?” she asked.
I threw the stone.
“Four,” she said. “But it caught a wave.”
My shoulders sagged.
“Don’t doubt yourself, Daddy!”
Her younger sister looked at her. “But you doubted him,” she said.
“That’s different.”
Prepared or not, I knew I had a knack for skipping. Some years earlier, I’d been driving through the mountains when I stopped at a roadside lake. The water was smooth as glass. I bent down, picked up a wide, flat stone, and sent it skimming across the water. It went on for what felt like forever, until it finally hit the rocky shore on the other side.
Behind me, a young boy spoke up.
“Wow,” he said. “You must be the world-champion rock skipper.”
I wasn’t. At least not yet. But I’d been skipping stones my whole life, ever since I was around my daughters’ ages, always getting better and better. There was almost nothing I loved better than the feeling of knowing—even before it hit the water—that you had a perfect throw, one that defies nature by making a stone both fly and float.
Mackinac, I had learned, was the place where such things were decided. These were my people—the ones who could spend hours on a beach looking for just the right stone, who would fill bags and boxes with skippers from secret locations, who would throw until their arm gave way, lost in the simple sorcery of stone skipping.
Kurt "Mountain Man" Steiner practices skipping stones
Photo by Diane Soisson
To reach the upper echelons of the skipping world was not easy. Mackinac was divided into two heats. First there was the “Open” division, in which every fudge-eating tourist on the island was welcome. Usually there were a few hundred people who entered. Only by winning the Open can you move up into the “Professional” division, which features heavy hitters such as Russ “Rockbottom” Byars, whose Guinness World Record held for years at 51 skips; Max “Top Gun” Steiner, who took the title from Byars with 65; and Kurt “Mountain Man” Steiner (no relation to Max) who currently holds the title with 88.
But now that I stood on the edge of the big lake watching the waves roll in, I wondered how anyone could skip a rock more than a few times on water like this.
The next day, we packed up and drove north, through the melancholy streets of Escanaba, near the southwest corner of Michigan’s Upper Peninsula, then north into the woods whose narrow highways were lined with old motels, “available” Adopt-a-Highway sections, and the occasional teepee-shaped trinket shop. We stopped at Pictured Rocks National Lakeshore and walked down to the beach only to find it packed full of weekend sightseers, outfitted with hydration packs and selfie sticks.
The beach, however, was a goldmine of beautiful stones perfect for skipping: flat and smooth and almost perfectly weighted. For half an hour, I practiced in the waves. It was tricky, but it could be done. Some sank straight into the crest. Others went surprisingly far, riding up the front and down the back of the swells of Lake Superior.
“Thirteen,” my daughter said. “Not too bad.”
That night we camped at a rustic site on a lake in the forest. The next morning we packed up and drove the rest of the way to the small town of St. Ignace at the eastern end of the UP. There we found our campsite, set up the tent, and settled in. I dumped out my box of skippers to try to pick out my winners.
When I spread them on a tarp, it immediately became clear that there were not as many as I had hoped. I had around a hundred stones of varying quality: some rough sandstone from the Mississippi; others dense basalt from Lake Superior; a few random, jagged stones from nameless dirt roads.
Kurt "Mountain Man" Steiner sorting his skipping stones
Photo by Ryan Seitz. Courtesy of the documentary Skips Stones For Fudge
Some, I could see, were too light, and would probably fly up off the water. Others were too smooth and symmetrical, and would “stick” to the surface instead of popping up off it. A few were too “edgy” and would risk either snagging and sinking, or have the tendency to tilt and hook to the left or right, instead of running straight out into the water. What you want, according to reigning champ Kurt Steiner, is a stone that is not perfectly round but that has points, or lobes, that act as spokes. As the stone spins, these points will push the stone up off the water, keeping it airborne and preventing it from sticking.
“If you spin it fast enough, the stone will essentially walk on those spokes,” Steiner told me, when I had called him for skipping advice. “A really good skip tends to walk like that.”
Tomorrow, I would have six throws, and I found my six best stones. Three were nearly perfect. Three were flawed, either too light, or poorly weighted. But they all had good, flat bottoms. I would have to make the best of them.
That night on Mackinac, some of the pros were gathering for dinner and a screening of a new documentary about the scene, called Skips Stones For Fudge, which I wanted to see. So I went down to the harbor and took a ferry over.
Mackinac Island is a strange place. It has been a tourist trap for well over a century, but it is also a state park, and a kind of living history center. Motorized vehicles are banned, so all transport—including hauling garbage—must be done by horse. The main street is lined with fudge shops, and on hot summer days the mingling smells of chocolate and horse manure are the first things you notice when you get off the boat.
The street was so packed with people I could barely walk down it. But I finally made it to the Yacht Club, an old house that looked out over the harbor. Inside, I found John “The Sheriff” Kolar, who helps organize the tournament (and who held the world record in a three-way tie from 1977 to 1984) and he introduced me to other skippers, including Max Steiner, Glen “Hard Luck” Loy (another 1977-1984 title holder), and Mike “Airtight Alibi” Williamson, 2014 winner (who recently went rogue and skipped stones across the Lincoln Memorial reflecting pool).
The room, in other words, was filled with rock-skipping royalty. But you wouldn’t have known it if you just wandered in. Elite stone skippers are more like athletes in the vein of Olympic bowlers. By day they are electrical engineers, computer technicians, rental car franchise managers. But here on Mackinac, they have climbed to the very height of this sport. Here, once a year, they are the best in the world.
Before dinner, I sat down next to Williamson. He’s a former DOT worker who reminded me of an uncle you don’t know very well at a family reunion. He is aging, balding, and not in particularly good shape. Two years ago he tore one of his biceps at the competition. But he keeps coming back, and it was clear why.
“Whenever I throw stones out on the water,” he told me, “it’s like throwing a shooting star.”
After dinner, and the film (which hinges on the rivalry between Russ Byars and Kurt Steiner and is full of gorgeous, long throws on still water…stone skipping porn), I caught the last ferry to the mainland. That night, I lay in our tent and dreamed of rocks gliding, of weightlessness, and of stones sinking into the waves.
The next morning, the sky was clear. Waking up, I couldn’t hear any waves, but I could see the sun peeking across the lake. I gathered my stones, and the four of us drove down to the harbor to catch the early ferry. As we sailed across the water, I fumbled with my rocks, wondering how they would do. The water seemed calm. My stomach less so.
Near the island, I could see American flags everywhere, in honor of Independence Day. As we disembarked, the air was still cool and there was little manure in the streets. The island had a celebratory, historic feel. It evoked a mix of hope and nostalgia I hadn’t felt for a long time.
Windermere Pointe Beach at the Iroquois Hotel, site of the Mackinac Island Stone Skipping Competition
Photo by Christi Dupre
We walked to the rocky beach where the competition would be held and each of us threw a few practice stones. The water was calm when no ferries came by, but it was a mess of chop when they did.
We lingered on the beach until the registration opened. We needed to fill out a form with our name and our nickname, which I had forgotten about. I panicked and scribbled “Flatbottom.” My oldest daughter looked at my sheet.
“What kind of nickname is that?” she asked.
“You know, the flat bottom of the skipping stone,” I said.
“So if I have a triangle stone, with a flat bottom, that would be a good skipper?"
“No,” I said.
She shrugged.
Max Steiner
Photo by Ryan Seitz
Soon the pros started to arrive. Max Steiner (65 skips) gave our girls lessons while I tried to eavesdrop. I heard someone say, “Russ is here,” and I looked over to see Byars (51 skips), the six-time Mackinac champ who still held the highest score ever recorded here: 33 skips. He brought two large bags full of stones from the south shore of Lake Erie for anyone who needed them. I introduced myself. “What do you think your chances are today?” I asked.
Byars took a drag on his cigarette. “Good as anyone,” he said. “I’ve seen really good skippers throw straight to the bottom every time. You never know.”
A loudspeaker crackled. “Let he who is without Frisbee cast the first stone!”
The judges gathered their clipboards and spread out along the water’s edge. The open competition had begun. Slowly, a few brave souls wandered down, handed over their score sheets, and began skipping while the judges counted as best they could (video replays were banned in the 1970s) and wrote down their scores. Some were clearly tourists who had happened upon the tournament. Others were like me—those who had been quietly honing their craft on unknown lakes and rivers. They had driven hundreds of miles in hopes of making it to the elite level.
As we watched other amateurs skip, I was nervous. I looked at the water. We were supposed to skip as a family and I wanted to time our turn between the ferries. In a lull, my oldest daughter looked at me. There were no boats in sight. I nodded. She walked down to the nearest judge, gave him her sheet and started skipping her stones. She threw a couple threes and fours, and finally an eight. Our youngest wanted to do the “Gerplunking” contest across the beach, where kids tried to make the loudest splash, so my wife went next, also throwing some low single digits, but ending up with a solid 12.
The judge looked at me. “You going too?”
I looked at the water. There was a boat far off toward the mainland. A few others sat in the docks on the island. There was no telling how long they would stay there. I handed over my sheet, and took out my first stone.
Just then, a ferry started to back away from the island. If I went fast, I could still have good water. More ferries appeared on the horizon. Maybe I could thread the needle.
As I got ready to throw, some wide waves began to roll in. I drew my first stone back and let it fly. It went out strong and straight, then plunged into the side of a wave.
“Four,” the judge said, writing it down.
Sweat ran down my back. I waited for the water to settle, drew back the second stone and threw. It had good spin, and was not sticking or hooking. It went fast and hard. On calm water it would have gone forever. At Mackinac, it slowed in the waves and then sank gently.
“Nineteen.”
I knew I could do better. The next stone I tried to shoot through a trench between two big rollers. But it jumped the edge and got caught in the wave behind.
“Twelve.”
Someone down the beach yelled, “The Canadian just hit the ferry!”
Was that Drew “The Canadian” Quayle, who had made an appearance in the documentary? Distracted, I tried to focus. My stones were getting worse. Fourth one: went out like a bullet, clipped the top of a wave, and shot straight up into the sky.
“Seven.”
I grabbed the fifth stone tight. I threw it as hard as I could. Too hard.
“Eight.”
The last stone. It was my best one, a perfect oval, just the right weight. I needed to get up over 20. I cleared my mind. I relaxed my arm, drew it back, and sent it sailing. As soon as it left my fingers, I knew it was a good throw. It ran straight out across the water before finally getting caught in the waves.
“Eighteen,” was the call.
The four of us walked back up the beach. My daughter took my sheet and analyzed my scores.
“Pretty good, Daddy!” she said.
A little while later, they announced a three-way tie for first place with 22 skips each. I had tied for second with a guy from Ohio. This year, at least, I would not be moving into the pros.
The amateurs’ tie was broken with a sudden-death skip off, which “The Canadian” (yes, it was Drew Quayle) won. After that, the real show began, with a competition featuring four former Guinness World Record holders, six past Mackinac winners, and a few young newcomers.
“We got a lot of 60-year-old arms out there,” one of the judges next to me muttered. “It’s good to have some new blood.”
The beach filled with spectators and amateurs. We found seats where we could see. One by one the masters walked down to the water and spun their stones out across its surface as a three-judge panel counted, conferred, and scored. The crowd cheered at the good throws and groaned at the bad ones. The old arms warmed up and the numbers climbed from 19 to 22 to 25 and finally to a beautiful, long, left-handed 27 by Dave “Lefty” Kolar that no one was able to top. (A few of the pros, I noted, didn’t break 19.)
When the last stone was thrown, the announcer closed the games. Skippers and fans wandered back to the island’s streets. As the beach emptied, I felt a sudden wistfulness, and realized I didn’t want the day to end.
There would be other years, other lakes. Still, we lingered on the empty beach, feeling the sun, listening to the waves, and scanning the ground for the next stone waiting for its turn to fly.
Nintendo released its new hardware, the Nintendo Switch, and chose to store games on cartridges. This makes the cartridges a preferred target for hackers, who may not have to compromise the console itself to counterfeit games.
Before talking about the resilience of the games against any form of piracy, let’s discover what lies inside the game package. For our study, we will work on the famous game "Zelda Breath of the Wild".
1. Opening the cartridge
The cartridge itself is small and exposes 16 connections to the console on what appears to be a PCB. The first step of this teardown is to open the cartridge to further assess its construction.
Zelda cartridge's front and back
Opening the cartridge is fairly easy and can be done with a scalpel, for example. Once opened, a single chip is visible; what appears to be a PCB from the outside is actually part of the chip.
Zelda cartridge opened
2. A System In Package
The chip is labelled with the manufacturer name, MXIC, and a reference number.
Chip package
A few drops of hot fuming nitric acid can be used to create a partial opening on top of the chip.
System in package after partial opening
Two chips sit inside the package epoxy, which makes the MXIC device a System in Package. The larger chip is probably a memory that stores the game, while the second one may serve as an authentication device and could possibly be used to decrypt the main memory content.
3. Cross section of the entire package
From that point, a cross section of the entire package can be made, either on the same sample or on a new one to keep the result clean.
Package cross section
This cross section shows that the package’s internal PCB is made of two layers: using a fine abrasive to “polish” away the bottom layer makes the second one visible. In the process, the intermediate layer with the vias connecting both PCB layers can also be exposed.
Close up of the cross section
4. Wires
Using nitric acid to create a partial opening etches copper away. In that particular setup, the bonding wires that connect the two chips to each other and to the PCB would also have been etched away if they were made of copper. This is obviously not the case, which indicates gold bonding wires.
Bonding wires
As a fun fact, the bonding wires do not connect the two chips directly. The “security” module and the main memory each have their bonding pads on one side, but those sides do not face each other, so the bonding wires connect to the PCB, which routes the signals underneath the main memory.
Memory chip
5. The chips
More nitric acid can be used to dissolve the remainder of the package and expose the bare dies.
From this point, the two chips can be studied through reverse engineering to give their secrets away.
It is pretty easy to guess the strategy pirates would use to create counterfeit games, and some assumptions can be made at this early phase of the study. The memory can be either ROM or Flash based; this can easily be verified by etching away the chip interconnections and looking at the patterned silicon.
The memory could use a proprietary protocol, but that would require a custom chip design, which is not strictly necessary and would add extra cost for design and manufacturing.
It is therefore likely that the chip is a standard Flash device storing encrypted data. In that case, the second chip would be used to decrypt the data on the fly when needed.
To further protect the IP, the “security” module could also be used to authenticate the game when it is plugged into the console. It would then serve a dual purpose: authentication and decryption.
Optical scan of the 2nd chip top layer
Conclusion
From this point, the “security” module would become the main target for a counterfeiter. Reverse engineering could then be used to extract its firmware and any cryptographic keys. With that knowledge, and assuming the attacker can emulate the authentication protocol and the decryption algorithm on a publicly available microcontroller, a counterfeit product could be designed.
Of course, at this stage of the study, it is impossible to draw firm conclusions on the different aspects discussed here.
Coming soon: the Chip ID, which goes into more detail with:
Cross sections of the package and chips
Pictures of the PCB layers
Top optical scans of the chips
Visible die marks
Substrate optical scans
SEM close up showing the system in greater details
Storage costs, a critical component of renewable energy systems, have also fallen. “The crucial question has been, ‘Yes, but what do you do when the wind doesn’t blow and the sun doesn’t shine?’” said Adair Turner, the chairman of the Energy Transitions Commission, which studies climate issues.
The cost of lithium ion batteries, the gold standard in solar power storage, has fallen significantly, Mr. Turner said, largely because of economies of scale. Where the price was about $1,000 per kilowatt-hour more than five years ago, it is now $273 and dropping, Mr. Mathur said.
The price needs to fall to $100 per kilowatt-hour for renewable energy to be comparable in price to coal, Mr. Mathur says. Mr. Turner thinks that will happen far sooner than the year 2030, which his group had been predicting.
“To be blunt, the success of this has been bigger than I certainly realized,” Mr. Turner said. “There were people who were optimists, and it’s the optimists who have won out.”
New Delhi had long argued that it was hypocritical of Western nations that have burned fossil fuels for centuries to ask Indians to sacrifice their growth to cope with the effects. But the Modi administration has set ambitious targets for a greener Indian future.
The government pledged in 2015, when the country’s electricity capacity from renewables was 36 gigawatts, to increase it to 175 gigawatts by 2022.
Piyush Goyal, India’s power minister, announced in a speech in late April that the country would take steps to assure that by 2030 only electric cars would be sold.
“That’s rather ambitious,” Rahul Tongia, a fellow at Brookings India, said. “The targets are there. The vision is there. The question is: ‘Is it going to happen? How?’”
The Indian government’s policy research arm, the National Institution for Transforming India, or NITI Aayog, recently released a report in collaboration with the Rocky Mountain Institute in Boulder, Colo., that calculated India could save $60 billion and reduce its projected carbon emissions by 37 percent by 2030 if it adopted widespread use of electric vehicles and more public transportation.
So anyone considering this sort of thing has to ask: Is the insurance industry overstating the risk of playing along with this cutting-edge idea, is RelayRides underestimating your exposure, or both?
RelayRides is one of several car-sharing services to arrive on the scene in recent years. Getaround is another start-up, as are JustShareIt and Wheelz, a company that the car-sharing giant Zipcar invested in last month.
They’re all part of a larger “collaborative consumption” movement that has captured the imagination of a growing number of civic-minded, Web-addicted people who want to both save some money and use a bit less of the world’s resources. This includes home-sharing services like Airbnb, office-sharing services like Loosecubes and general sharing sites like NeighborGoods and Rentabilities.
The car-sharing services allow you, in effect, to turn your personal car into a Zipcar and rent it out by the hour or the day. You set the price, and the intermediary service lists your car online, connects you with people who want to rent it and takes a cut of the fee. Renters use a smart card to open your locks and get to the key, or you can exchange the key in person. G.M.’s investment in RelayRides holds out the promise of G.M.’s OnStar service opening the car for you, too.
For all of this to work, there are a few mental hurdles that car owners need to clear besides generalized fear of strangers and whatever cooties they leave on the steering wheel. Are they safe drivers? (Car-sharing services generally check driving records.) Will someone try to steal my car? (Yes, they will, if it’s expensive enough and the car-sharing company lacks proper controls; this problem has already put one company out of business.)
But the biggest challenge is insurance. Here’s the basic problem: Car insurance companies generally will not cover a claim that results from you putting your personal vehicle into commercial use, say by running a taxi service on the side — or making yourself into a one-person Hertz. RelayRides is well aware of this and provides $1 million of liability coverage in the event that a driver kills or maims somebody else while using your car. This is intended to fill the gap in coverage created by the fact that your own insurance company would refuse to pay this claim if the victim came after you.
This raises questions about three potential situations.
First, if some sort of catastrophic accident results in a claim of more than $1 million, what happens then? The answer is that you could be responsible for paying it. The odds of an injury this horrid and a legal judgment that blames you for renting your car to someone who crashes it are extremely low. I laid out the long odds in a column last year about Zipcar’s insurance coverage for renters (I link to it in the online version of this column.)
Only you can be the judge of how uncomfortable this makes you.
Second, do the rules change if you haven’t been taking good care of your car and that contributes to an accident? RelayRides’s terms of service seem to protect the company here, since it “disclaims” any “warranty” for “fitness for a particular purpose.” Meanwhile, a law in Oregon that relates to insurance coverage for car sharing quite specifically gives car-sharing companies the right to go after vehicle owners who engage in “material misrepresentation in the maintenance of the vehicle.”
RelayRides and its general counsel counter with two points. First, they say that language elsewhere in the company’s terms supersedes the fitness disclaimer. Second, the Oregon statute and its presumably high bar for “material misrepresentation” aside, RelayRides’ insurance broker, Bill Curtis, makes the following pledge: “I’m willing to raise my hand and say, ‘Yes,’ to the question of whether the owner will have protection in the event that they are sued and the allegation is that the car wasn’t maintained,” he said.
Third, there’s the question of what your insurance company thinks about all of this. I had a hard time finding out, frankly. Geico wouldn’t respond to any of my requests for comment.
An industry group, the Insurance Information Institute, meanwhile, is not pleased. “If the ‘renter’ were involved in an accident, most likely the insurer would non-renew or maybe even rescind the auto policy,” Loretta Worters, its spokeswoman, said in an e-mailed statement. Translation: If someone wrecks your car and injures someone and a lawyer tries to reel in your insurer as well as the car-sharing company’s insurer, your insurer may take away your coverage.
RelayRides takes exception to this, given that the word “rescind” could make people think that insurance companies would take away coverage retroactively. “It’s ridiculous,” Mr. Curtis said.
USAA, which has always gotten high marks for customer service, takes an even sterner approach than the institute. I’m a USAA customer myself, and I asked the company what would happen if I or others called and confessed that we’d signed up for RelayRides.
“We would inform them that participating in such a program will generally result in non-renewal,” Roger Wildermuth, a USAA spokesman, said in an e-mail message.
Allstate took a similar tack. “The owner could put their current coverage for personal use of the vehicle in jeopardy as the act of making the vehicle available for rental purposes could inherently change the risk profile of the vehicle,” said Kevin Smith, a company spokesman. “And by entering into commercial arrangements with their vehicle, the insured may risk being unable to secure auto coverage from our company in the future.”
Not every insurer responded this way. A Progressive spokesman, Jeff Sibel, said that while there were certain risks that would cause the company to cancel coverage on the spot if it found out about them, car sharing was not one of them “at this time.”
Meanwhile, at least three states (California, Oregon and Washington) have passed laws that generally prohibit insurance companies from dropping your coverage simply because you’re renting your car out via a car-sharing service.
I wish I knew which way the wind was ultimately blowing on this, but of the other seven major insurers I approached for comment, all either declined to talk at all or declined to squarely address this non-renewal question.
Their general wariness doesn’t surprise industry watchers, though. “The easiest answer for an insurance company is no,” said Sunil Paul, a venture capitalist who helped get the California law passed and has flirted with entering the car-sharing business. “There is no downside to no. Their knee-jerk reaction is why we need laws like the one in California.”
Or it’s simply an effort to sell more insurance. After all, fewer cars on the road with more people sharing them means fewer sales of personal policies.
But none of this leaves consumers with a clear sense of what to do. If you call your insurance company and ask for permission to rent out your car, as some of my colleagues did this week, the people you talk to may tell you that you’re nuts to even consider this, which is indeed what we heard.
To RelayRides, however, dutifully doing what some ignorant frontline agent tells you to do is to stand against innovative companies and progressive values.
“The insurance industry has already demonstrated acceptance of peer-to-peer car sharing through their support of car-sharing legislation in three states,” the company said in a statement.
“Insurers also regularly deal with exclusions, for example, for part-time commercial purposes such as pizza delivery,” the statement continued. “Since our founding almost two years ago, we’ve been operating in Massachusetts without car-sharing legislation and without any problems. Given that we provide insurance for the rental period, we do not anticipate any problems for car owners. As with any new service, we work closely with all organizations to ensure that the best interests of all parties are protected.” I asked both Google and G.M. for comment, and neither offered one.
I, for one, am glad RelayRides is out there taking the bullets. Their idea is a good one, and I’d consider participating myself, but I’m also not about to willingly defy my insurance company.
Once a regular fixture, movie night had become a rarity in the Cipriano household of late. Giovanni had started high school a few weeks earlier and been given extra homework, while his mum, Georgina, was working two jobs after separating from his dad. But one evening in October 2013 they found the time to retire to the TV room at their home in Long Island, New York, where she put out a bowl of cookies and pretzels to snack on while they watched the film.
After taking a few bites, the 14-year-old started to feel unwell. “Mum, I think this has got peanuts in it,” she recalls him saying. She thought he must be mistaken. She had already checked the allergy warning on the label, which cautioned that the mixture contained tree nuts but made no mention of the peanuts to which her son was severely allergic.
The sequence of events that subsequently unfolded is so tragic, so full of ill fate, that I wonder how Georgina can bear to recount it. “It’s difficult to talk about what happened,” she says. “But it’s important to tell his story. I’m going to be his voice.”
Although peanuts had been omitted from the allergy warning box, they were in fact listed under main ingredients, as Georgina discovered when she frantically reread the discarded packet. She gave Giovanni an antihistamine and told him to get dressed for a visit to the doctor. The pair argued briefly — he thought she was making a fuss and wanted to stay home — but she insisted. “I just wanted to be safe.”
Georgina threw an EpiPen, an injection used to stave off anaphylactic shock, into her handbag, and drove Giovanni to the local urgent care centre. They arrived a few minutes after 9pm to find it had just closed its doors. Panic set in. Giovanni started having breathing difficulties; he was puffing on an asthma inhaler and asked for the EpiPen but, to Georgina’s horror, she could not find it. “It must have fallen out of my pocketbook. All I could think about now was getting to the emergency room.”
The journey from the care centre to the ER would take seven minutes, but after five Giovanni turned to his mother and spoke his last words. “I don’t think I can make this. I can’t do it. I don’t want to die.” As she turned into the hospital car park, she took her son’s hand to comfort him. “It was cold and when I looked at his face, it was grey. I started beeping the horn and screaming, ‘Please help me.’”
Giovanni was rushed into the emergency room, where doctors managed to resuscitate him but were unable to wake him from a coma. “They tried everything but his brain had been starved of oxygen for too long and his body couldn’t cope. He never woke up. Eighteen days later, he passed away.”
Giovanni Cipriano playing baseball in 2013
Shortly before he died, Giovanni had found an article online about two experimental drugs that might protect peanut allergy sufferers in cases of accidental exposure. “He was scared about going into a study,” recalls Georgina. “But he also said, ‘I hope that one day I might be able to try it.’”
Four years later, these drugs are in the final stages of clinical testing. One is a capsule that is broken and sprinkled over food, the other a stick-on skin patch placed on a patient’s back. Both are underpinned by the same scientific theory: that exposing seriously allergic children to tiny amounts of peanut flour will retrain their immune systems to cope better with the real thing. After years of research and hundreds of millions of dollars of investment, the two companies behind these drugs hope they can keep children such as Giovanni safe in the future.
***
The number of people afflicted with food allergies has exploded since the start of the 20th century, when two French scientists, Charles Richet and Paul Portier, described the first case of fatal anaphylactic shock. When the pair collected a Nobel Prize for their work in 1913, the illness was still a curiosity rather than a public health issue. But today roughly 15 million Americans and 17 million Europeans suffer from food allergies, with many of the most serious cases afflicting children.
Among the biggest factors behind this dramatic rise has been the soaring prevalence of peanut allergy, which accounts for more than a quarter of all childhood cases. Three million people in the US have allergies to peanuts, tree nuts or both. About 2 per cent of American children are now allergic to peanuts, a figure that has more than quadrupled since 1997. The number of fatal reactions is small, with fewer than 100 cases usually recorded each year, but the fear among parents remains high. Almost all the deaths have been people who knew they had the illness but ended up ingesting nuts by mistake.
More and more schools are going “nut free”, while some food producers are cutting peanuts out of all their products, but scientists warn that widespread abstinence will only exacerbate matters. A better solution would be to develop a treatment that would protect sufferers in the case of accidental exposure.
At 105, Dr Bill Frankland has more than earned his status as the “grandfather of allergy”. The evening we speak, the British immunologist is about to rush out but he still has time to regale me with stories from his long career, including the time in 1979 when he treated Saddam Hussein, the late Iraqi dictator. “People have been eating eggs and peanuts and dairy for years, so why is this happening now?” he asks. “There are so many reasons, it is multifactorial: general pollution, diesel fumes and so on, but also because allergy is now more recognised as a chronic disease, so doctors are very interested in it.
“One thing you mustn’t put the increase down to is genes,” he adds. Although some people are more genetically predisposed to developing food allergies, the proportion has not changed over time, he says, meaning environmental factors must be to blame. “Now we’re very interested in the beginning of a baby’s life — what they’re eating and what they’re breathing in, and we need to do more research on that.”
Frankland is puzzled as to why it has taken so long to develop treatments that might protect peanut allergy sufferers. He recalls that, 60 years ago, he successfully treated people with severe fish, egg and milk allergies by admitting them to hospital and giving them controlled injections containing the very substances that could kill them. “They were in for 12 or 13 days and they went out cured,” he says.
Progress in tackling peanuts has been much slower, in part because even tiny amounts can prove fatal. Researchers experimented with peanut injections in the early 1990s but the results were not encouraging. Although some patients were successfully desensitised, a significant number had severe reactions. One study was shut down after a participant died of anaphylactic shock.
In 2009, a team of scientists led by Dr Wesley Burks, then a paediatrician at Duke University in North Carolina, published a small trial that is now widely hailed as a breakthrough. “The concept was to give someone something in a very small amount and increase it over time,” Burks recalls. “The desensitisation started with a thousandth of a peanut and we increased that. Then, all of a sudden, we were giving the children a peanut every day and they were not having serious reactions.” After 10 months, some could ingest as many as 15 peanuts a day, giving them a meaningful buffer in the event of accidental exposure.
This pioneering treatment gave birth to a cottage industry in the US: some allergists now offer their own home-brew versions of Burks’ “oral immunotherapy” in their practices. But the majority do not want to offer a makeshift solution, for fear they would be held liable if something went wrong.
Burks’ research also caught the attention of advocacy group Food Allergy Research and Education (FARE), which tried to persuade several large drugmakers to turn it into an approved drug. Big Pharma was unmoved, believing it would be impossible to patent a medicine that was essentially a ground-up peanut. So FARE’s leaders decided to back efforts to start a new company. They secured $12m of seed funding from wealthy investors, including David Bunning, the Citadel financier whose children suffer from severe food allergies. The company, now known as Aimmune, went on to raise a further $414m.
Aimmune has used the cash to develop a low-dose version of Burks’ therapy that can be professionally manufactured at scale: a capsule containing pharmaceutical-grade peanut protein, which is snapped open and sprinkled over an appropriate food, such as chocolate pudding. After more than four years of clinical trials, this drug, codenamed AR101, is in the final stages of testing, with results due either at the end of this year or in early 2018. In small studies, patients were able to tolerate the equivalent of between two and three peanuts after nine months of treatment, and research suggests this will increase over time.
Three peanuts might not sound a lot but it would be sufficient to protect a person against accidental ingestion. It would also probably convince US regulators to approve the medicine for children aged between four and 17. Shares in Aimmune have appreciated by around a third over the past 12 months, giving the company a market value of almost $1bn, as investors bet that the Food and Drug Administration and European regulators will give a green light to the medicine in 2019. Analysts at Credit Suisse predict the drug will generate sales that will crest at $1.4bn a year in 2024.
Success is not guaranteed. The history of drug development is littered with examples of medicines that looked like a sure thing but went on to fail. Nor is treatment with AR101 a panacea. It takes a long time, involves regular trips to an allergist and is quite unpleasant. “It’s not for everyone, because there are side effects associated with the treatment and you have to invest time and effort in repeated visits to the doctor,” says Stephen Dilly, chief executive of Aimmune and a former vice-president of pharmaceutical company SmithKline Beecham. “The utility of our treatment tends towards the super-sensitive patient that is more likely to have a life-threatening reaction if they get exposed.”
The first visit takes three hours, during which the doctors slowly increase the dose of AR101 until they work out the maximum tolerable amount. The child must then return every fortnight to see whether he or she is ready to progress to a larger pill. The vast majority will experience nasty reactions, such as hives or stomach aches, and the process typically takes between 24 and 26 weeks before the patient can go on a regular maintenance capsule. Although the company says it is not a treatment for life, it is still unclear how long a patient might have to take the drug.
Aimmune has ties to the food industry, which has a vested interest in finding treatments for allergies. Its largest investor is Nestlé, which acquired a 15 per cent stake for $145m in November 2016. The success of AR101 and other treatments could help protect the food giant’s core business, which sells many products containing peanuts and other allergens.
The peanut trade also has a lot riding on the success of these treatments, with one of the largest companies, Golden Peanut, agreeing to provide the protein for the capsules under an exclusive 10-year deal. Dilly says it is more than just a commercial arrangement. “They’re very interested in being seen to be helping us,” he says. “Because they’ve gone from being good old peanut farmers in Georgia to being the people that produce stuff that kills people.”
For Bob Parker, president of the National Peanut Board, the reputational hit hurts. He has spent a lifetime in the industry and grew up in the heart of peanut country in Georgia, where, at the age of six, he was selling bags of boiled nuts at the side of the road. Today, everyone he meets asks him the same two questions. Has he met Jimmy Carter, the former US president and world’s most famous peanut farmer? “Many times. He’s a fine man.” And how is the industry coping with a spike in peanut allergy? “It’s the single largest barrier to consumption.”
***
It is early spring, and New York’s Central Park basks in glorious sunshine. The smell of recently cut grass hangs in the air. But for Hayley Maultasch, a seven-year-old attending a clinic a few blocks away, today means one thing: allergies. She has already taken an antihistamine for her hay fever and her mum, Erica, has put in drops to stop her eyes streaming.
Hayley was diagnosed with suspected peanut allergy six months ago. Today she has come to get a conclusive answer at the Jaffe Food Allergy Institute at Mount Sinai Hospital, where she will eat crackers spread with ever-greater amounts of peanut butter to see if they elicit a reaction. Dr Malika Gupta, an assistant professor of paediatrics, is conducting the test, armed with an EpiPen in case anything should go awry.
At first, Hayley is a bundle of excitement, recounting a string of stories, including the time her mum appeared in a TV advert for a local party store. She requests a tissue, takes a theatrical blow of the nose, and giggles loudly. “She’s really shy,” quips Erica. “We’re working on it.”
But just before the challenge begins, Hayley falls silent. She hunches her shoulders up into her pink love-heart T-shirt, and her face — all smiles until now — becomes pensive. “I’m nervous,” she says. “This kinda test is way more nerve-racking than a school test; that is just a grade, but this is my whole life.” It might sound dramatic but to hear her recount the past few months is to understand how a peanut allergy diagnosis can turn the world of a child upside down.
Hayley tested positive for the allergy in December, following a skin test after a peanut-butter sandwich caused hives around her mouth. The allergist advised her mother to cut peanuts out of Hayley’s diet completely, prompting her to seek a second opinion at Mount Sinai.
The biggest upheaval has been at school, which has joined many public institutions in taking a tough approach to the spike in childhood food allergies. Pupils with a diagnosis must sit at a separate table in the canteen, and are excluded from the tradition of celebrating everyone’s birthday, unless the birthday boy or girl brings something in from home that is entirely free of allergens. “There’s a special table for kids that have allergies and no one sits there — it’s like the emptiest table in the canteen,” says Hayley. (In an act of kindness, one of her best friends has cut nuts out of her diet so they can sit together.)
Nor has it been much easier for Hayley outside of school: the family recently went to dinner at her favourite restaurant, an Asian “hibachi” place where the food is cooked at the centre of the table. Her dish had to be prepared in the back, and tasted less good than before. “She actually cried that night,” recalls Erica.
Gupta asks the photographer and me to leave the clinic before Hayley completes the food challenge, on the grounds that it would be unethical for us to be there if she were to have a bad reaction. A few days later, I text Erica to find out how it went. “She passed :-)”
It turns out that Hayley’s earlier reaction was probably caused by a seasonal allergy to birch tree pollen, which contains a protein similar to one found in some foods, including peanuts. Sometimes the body can mistake the peanut protein for the pollen, causing hives and hay fever-like symptoms, although not anaphylactic shock. “She’s so happy,” says Erica. “But I’m not going to lie to you — she still hasn’t eaten anything with peanuts in it.”
***
Theories behind the reason for the recent spike in food allergies differ, although they would all fit quite neatly under the heading, “The Way We Live Now”: high levels of pollution, intensive farming, processed food and a childhood where dirty, scraped knees have been replaced by a coddled existence.
Dr Burks, now executive dean at UNC School of Medicine, says the increasing incidence of allergy closely tracks other diseases linked to modern life. “If you go to a scientific talk about rates of diabetes, auto-immune diseases like arthritis, and allergies, then all the introductory PowerPoint slides are exactly the same,” he says. “The rises in the past 20 to 30 years parallel each other. We are breathing more diesel exhaust particles, which causes inflammation as we inhale them, and feeding practices have changed.”
Some scientists also believe changing diets are affecting the microbiome, a vast and poorly understood population of microbes in the human gut, which is increasingly thought to play a leading role in disease. “There is a symbiosis between the immune system and the bacteria that resides within us. If it’s disturbed, it will promote inflammation,” says Burks, noting that allergy rates are much lower in Amish communities that shun modernity in favour of an old-fashioned rural life.
Other allergists subscribe to the hygiene hypothesis, which holds that the immune system evolved to deal with much stronger stuff than it contends with today. In the absence of any real invaders to go to war with, it instead overreacts to something relatively benign, such as peanuts. “We evolved as an organism that was used to seeing lots of bacteria from outside our world, including fecal matter from animals,” says Dr Stephen Tilles, an allergist in Seattle. “As we have become more developed economically, that has gone away.”
In the case of peanut allergy, there has been an aggravating factor that some scientists blame for a recent acceleration in the number of cases: for almost a decade between 2000 and 2008, American parents were told by public health officials to avoid feeding peanuts to children under the age of three. It turns out the advice, which echoed around the world, was entirely wrong and could have contributed to some children developing peanut allergies.
The recommendation was based on the assumption that avoiding exposure to the allergen could protect people. But the theory was turned on its head following a groundbreaking study led by Dr Gideon Lack, a professor at King’s College London. He started to question conventional thinking after noticing that rates of peanut allergy were extremely low in Israel, where almost every child is given Bamba, a teething snack that contains peanuts.
Lack’s initial study, published in 2008, found that rates of the condition among Jewish children in Britain were around 10 times higher than in Israel. A subsequent placebo-controlled trial was published in 2015, which found that infants who had been given foods such as peanut butter were far less likely to become allergic.
The American Academy of Pediatrics had already dropped its avoidance recommendation in 2008 and, at the start of this year, it said most babies should start eating foods containing peanuts well before their first birthday. It was a complete reversal of its previous stance. “By having patients avoid peanuts, we were systematically exposing them to peanut allergy,” says Tilles.
***
Spencer Baty is peering over the top of a Reese’s Peanut Butter Cup that has been cut into 12 pieces. Every night, the 13-year-old eats one small square. “It’s pretty cool to have candy as your medicine,” he says, placing air quotes around the word “medicine”.
A few years ago, the idea of consuming anything containing peanuts would have been unthinkable because Spencer is severely allergic. “It felt like somebody was choking me from the inside,” he says of the anaphylactic shocks that hospitalised him on several occasions.
His parents first knew something was amiss when, at the age of three, Spencer opened the freezer, removed a block of ice, and placed it on his tongue. He had just eaten some peanuts and the inside of his mouth and cheeks had swollen up. As the emergency room visits racked up, he started to become anxious whenever peanuts were in the house. His parents likened him to a “little drug dog”: he was able to tell if they had ordered takeaway containing nuts — even if he was sitting several rooms away. His father David thought the peanut allergy was exacerbating his already picky eating habits and started to worry that Spencer would be consigned to a lifelong diet of hamburgers and chicken nuggets.
“We were getting kind of frustrated because we felt like it was having a big influence on his food choices, which were really narrowing,” recalls David. “And then one day I heard about the trials on the radio and I called his allergist. ‘Are you doing any studies?’ I asked. I just kept bugging them.” Eventually, when Spencer was nine, he got a place on a clinical trial. Rather than the oral immunotherapy being developed by Aimmune, he was given a skin patch containing a tiny amount of peanut protein that is placed on a patient’s back. When the trials ended last year, his allergist switched him to the Reese’s Peanut Butter Cup to maintain his levels of desensitisation. “Sometimes, I still just get a tiny little reaction, a little tingling in my mouth,” says Spencer. “But most of the time, there’s nothing at all — it just tastes like candy, which is good. My goal is to eat a whole Snickers bar.”
The Viaskin patch is being developed by DBV, a French company. Like AR101, the patch is in the final stage of clinical trials, with results due in the second half of this year. A small study published in March found that 76 per cent of patients responded to the treatment after three years; they were either able to consume a gram of peanut protein — the equivalent of four nuts — or tolerate an amount 10 times larger than when they entered the trial. If the results are replicated in larger studies, the group could win approval in the US and Europe and launch the product at some point next year.
If both products are given a green light, then parents of children with peanut allergies will soon have two options — a capsule and a patch. The advantage of Viaskin is that it is easier: there are fewer visits to the allergist and almost no chance of severe side effects, although almost all patients develop a rash. The big drawback, say allergists, is that it takes longer to desensitise a patient using the patch than it does with Aimmune’s product.
***
Spencer’s allergist is Dr Tilles, who is based a few miles away from the Baty household in Seattle. On his office wall is a poster he was given for his 50th birthday, entitled “Conquering 50 Years of Life!” Yet many of the parents in his practice fear their children will not live to see 15, let alone 50. It is this crippling fear, rather than the absolute number of peanut allergy deaths, that demands a treatment be found, he says.
“Death is a very rare outcome, so in my opinion the main drive for pursuing treatments is not that. Otherwise we should all be working hard to make cars safer, because cars are much more dangerous for peanut allergy sufferers.”
Tilles says many parents of children with peanut allergy struggle to let go when their offspring become teenagers. Having managed to stave off a deadly anaphylactic shock for so many years, they worry that their kids will make one fatal mistake: eating the wrong snack after their first taste of alcohol or forgetting their EpiPen on a night out.
“It is well documented that the adolescent-to-young-adult age group is at much higher risk of a fatal or at least near-fatal outcome because of accidental exposure,” he says.
Every allergist will tell you tales of overbearing parenting, like the mother who secretly destroyed birthday invitations so her child would not find them, or the father who turned up to the start of every high-school sleepover to scour the house for peanuts and disinfect the surfaces. The children can develop anxiety themselves, says Tilles. “Some kids pick up on all of it and refuse to eat, so they lose weight. There is a lot of counselling going on.”
However, it is not hard to find allergists who think the treatments being developed by Aimmune and DBV are the wrong answer. Dr Robert Wood, a professor of paediatrics at Johns Hopkins, says most parents will think again when they learn of the huge commitment required for the capsule.
“Once they dive in, they realise that the reality is that it has significant risks,” he says, pointing to the high incidence of side effects. “The most informative statistic is that if you’re treated with oral immunotherapy, you’re definitely going to have more reactions than if you were practising strict avoidance.”
Wood is even more critical of the patch, which he thinks could lull patients into a false sense of security. He points out that in clinical trials there has been a sizeable minority of patients who do not respond to the treatment. “They may be put at a higher level of risk by thinking they do have some protection when they don’t,” he argues. “The reality is that abstinence works very well in these days of good labelling.”
But as Georgina Cipriano will attest, abstinence cannot stave off every anaphylactic shock or save every life. Shortly after her son died, she set up a foundation, Love for Giovanni, to improve awareness of food allergies. “I hear stories like mine all the time,” she says. “We were very strict, we never had anything with peanuts in the house, and we thought we practised good label reading. But we were not as safe as we should have been. We didn’t know enough.”
David Crow is the FT’s senior US business correspondent
Portraits by Adam Golfer and Andrew Miksys
Still-life photographs by John Gribben
Another one of SpaceX’s Falcon 9 rockets successfully landed back on Earth this evening after launching cargo and supplies to the International Space Station. This time, the rocket touched down at the company’s landing site at Cape Canaveral, Florida, called Landing Zone 1.
That means SpaceX’s success streak of recovering its vehicles on solid ground continues. So far, all five of the company’s attempts to land on land have worked just fine. Now, SpaceX is in possession of 11 Falcon 9 rockets that have flown to space and back — either by landing on ground or on one of the company’s drone ships at sea.
SpaceX’s landings may seem fairly routine at this point, but the cargo the rocket was carrying before it landed was pretty significant — or at least, what was carrying the cargo was unique. For this flight, SpaceX used a Dragon cargo capsule that had already been to space before. The Dragon previously flew on SpaceX’s fourth cargo resupply mission for NASA back in September 2014. It remained in space at the ISS for nearly a month before returning to Earth and splashing down in the Pacific Ocean. It’s the first time a Dragon has been reused for a flight, making SpaceX the first private company to send a vehicle into orbit for a second time.
The Dragon is carrying around 6,000 pounds of supplies and science experiments for the crew of the ISS. That includes a group of fruit flies to test out how the cardiovascular system functions in microgravity, as well as a group of mice to study bone loss in the space environment. Some unique technologies are also riding up inside the Dragon’s trunk — the unpressurized structure attached to the spacecraft that provides support and houses the vehicle’s solar panels. The trunk contains an instrument called NICER, which will eventually be mounted to the outside of the space station to look for neutron stars, as well as a specialized solar panel called ROSA which can be unfurled a bit like a flag. The spacecraft is slated to rendezvous with the station on Monday.
SpaceX hopes to keep reusing its Dragon cargo capsules going forward. That way, the company can at some point shut down production of the vehicle and focus on the production of another spacecraft: the upgraded Dragon. That vehicle, known as Crew Dragon or Dragon 2, will be able to carry people to and from the ISS. A future iteration of the spacecraft will also be able to land propulsively, rather than rely strictly on parachutes to lower down to Earth. Thrusters embedded in the hull of Dragon will allow the vehicle to make a controlled descent to the ground.
And the Dragon cargo vehicles aren’t the only hardware that SpaceX is going to start reusing more often. With a growing stockpile of recovered rockets, the company is slowly starting to send them back into space. SpaceX launched its first previously flown Falcon 9 in March and plans to fly its next used booster in a couple of weeks to launch a satellite for Bulsatcom. The rockets and Dragon capsules have to be inspected and potentially refurbished before they fly again, which requires time and money. But SpaceX is hoping to reduce that turnaround time, potentially saving them a good chunk of money in manufacturing costs in the process.
SpaceX says it could re-fly as many as six used Falcon 9s before the end of 2017. And it looks like there may be a lot of opportunity for the company to do so. SpaceX has been upping the cadence of its launches as of late. Its last launch was just two weeks ago, and its next one is tentatively two weeks away as well. SpaceX President Gwynne Shotwell promised the company would increase its launch frequency this year, and it looks like that may actually be the case.
Today’s launch marked the 100th mission from NASA’s LC-39A, a historic site at the space agency’s Kennedy Space Center in Florida. The same pad was also used to launch the first crewed mission to the Moon as well as the last Shuttle mission.
It is a wonderful accident of history that the internet and web were created as open platforms that anyone — users, developers, organizations — could access equally. Among other things, this allowed independent developers to build products that quickly gained widespread adoption. Google started in a Menlo Park garage and Facebook started in a Harvard dorm room. They competed on a level playing field because they were built on decentralized networks governed by open protocols.
Today, tech companies like Facebook, Google, Amazon, and Apple are stronger than ever, whether measured by market cap, share of top mobile apps, or pretty much any other common measure.
Big 4 tech companies dominate smartphone apps (source), while their market caps continue to rise (source)
These companies also control massive proprietary developer platforms. The dominant operating systems — iOS and Android — charge 30% payment fees and exert heavy influence over app distribution. The dominant social networks tightly restrict access, hindering the ability of third-party developers to scale. Startups and independent developers are increasingly competing from a disadvantaged position.
Crypto tokens are a potential way to reverse this trend — a new way to design open networks that arose from the cryptocurrency movement, which began with the introduction of Bitcoin in 2008 and accelerated with the introduction of Ethereum in 2014. Tokens are a breakthrough in open network design that enable: 1) the creation of open, decentralized networks that combine the best architectural properties of open and proprietary networks, and 2) new ways to incentivize open network participants, including users, developers, investors, and service providers. By enabling the development of new open networks, tokens could help reverse the centralization of the internet, thereby keeping it accessible, vibrant and fair, and resulting in greater innovation.
Crypto tokens: unbundling Bitcoin
Bitcoin was introduced in 2008 with the publication of Satoshi Nakamoto’s landmark paper that proposed a novel, decentralized payment system built on an underlying technology now known as a blockchain. Most fans of Bitcoin (including me) mistakenly thought Bitcoin was solely a breakthrough in financial technology. (It was easy to make this mistake: Nakamoto himself called it a “p2p payment system.”)
2009: Satoshi Nakamoto’s forum post announcing Bitcoin
In retrospect, Bitcoin was really two innovations: 1) a store of value for people who wanted an alternative to the existing financial system, and 2) a new way to develop open networks. Tokens unbundle the latter innovation from the former, providing a general method for designing and growing open networks.
Networks — computing networks, developer platforms, marketplaces, social networks, etc — have always been a powerful part of the promise of the internet. Tens of thousands of networks have been incubated by developers and entrepreneurs, yet only a very small percentage of those have survived, and most of those were owned and controlled by private companies. The current state of the art of network development is very crude. It often involves raising money (venture capital is a common source of funding) and then spending it on paid marketing and other channels to overcome the “bootstrap problem” — the problem that networks tend to only become useful when they reach a critical mass of users. In the rare cases where networks succeed, the financial returns tend to accrue to the relatively small number of people who own equity in the network. Tokens offer a better way.
Ethereum, introduced in 2014 and launched in 2015, was the first major non-Bitcoin token network. The lead developer, Vitalik Buterin, had previously tried to create smart contract languages on top of the Bitcoin blockchain. Eventually he realized that (by design, mostly) Bitcoin was too limited, so a new approach was needed.
2014: Vitalik Buterin’s forum post announcing Ethereum
Ethereum is a network that allows developers to run “smart contracts” — snippets of code submitted by developers that are executed by a distributed network of computers. Ethereum has a corresponding token called Ether that can be purchased, either to hold for financial purposes or to use by purchasing computing power (known as “gas”) on the network. Tokens are also given out to “miners”, which are the computers on the decentralized network that execute smart contract code (you can think of miners as playing the role of cloud hosting services like AWS). Third-party developers can write their own applications that live on the network, and can charge Ether to generate revenue.
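To make the mechanics a bit more concrete, here is a deliberately simplified Python sketch of a token ledger with balances, transfers and a flat fee paid to a miner. It is a toy model invented for illustration, not the Ethereum Virtual Machine; real Ethereum adds accounts, nonces, per-operation gas metering and consensus on top of anything like this.

# Toy token ledger: balances, transfers and a flat "gas" fee paid to a miner.
# Purely illustrative -- not how Ethereum actually meters or prices gas.
class ToyTokenLedger:
    def __init__(self, gas_fee=1):
        self.balances = {}        # address -> token balance
        self.gas_fee = gas_fee    # flat fee per transaction, paid to the miner

    def mint(self, address, amount):
        # Create new tokens, e.g. as a mining reward.
        self.balances[address] = self.balances.get(address, 0) + amount

    def transfer(self, sender, recipient, amount, miner):
        # Move tokens from sender to recipient, paying the miner a fee.
        total = amount + self.gas_fee
        if self.balances.get(sender, 0) < total:
            raise ValueError("insufficient balance")
        self.balances[sender] -= total
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        self.balances[miner] = self.balances.get(miner, 0) + self.gas_fee

ledger = ToyTokenLedger()
ledger.mint("alice", 100)                        # alice starts with 100 tokens
ledger.transfer("alice", "bob", 30, miner="m1")  # bob receives 30, m1 earns the fee
print(ledger.balances)                           # {'alice': 69, 'bob': 30, 'm1': 1}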
Ethereum is inspiring a new wave of token networks. (It also provided a simple way for new token networks to launch on top of the Ethereum network, using a standard known as ERC20.) Developers are building token networks for a wide range of use cases, including distributed computing platforms, prediction and financial markets, incentivized content creation networks, and attention and advertising networks. Many more networks will be invented and launched in the coming months and years.
Below I walk through the two main benefits of the token model, the first architectural and the second involving incentives.
Tokens enable the management and financing of open services
Proponents of open systems never had an effective way to manage and fund operating services, leading to a significant architectural disadvantage compared to their proprietary counterparts. This was particularly evident during the last internet mega-battle between open and closed networks: the social wars of the late 2000s. As Alexis Madrigal recently wrote, back in 2007 it looked like open networks would dominate going forward:
In 2007, the web people were triumphant. Sure, the dot-com boom had busted, but empires were being built out of the remnant swivel chairs and fiber optic cables and unemployed developers. Web 2.0 was not just a temporal description, but an ethos. The web would be open. A myriad of services would be built, communicating through APIs, to provide the overall internet experience.
But with the launch of the iPhone and the rise of smartphones, proprietary networks quickly won out:
As that world-historical explosion began, a platform war came with it. The Open Web lost out quickly and decisively. By 2013, Americans spent about as much of their time on their phones looking at Facebook as they did the whole rest of the open web.
Why did open social protocols get so decisively defeated by proprietary social networks? The rise of smartphones was only part of the story. Some open protocols — like email and the web — survived the transition to the mobile era. Open protocols relating to social networks were high quality and abundant (e.g. RSS, FOAF, XFN, OpenID). What the open side lacked was a mechanism for encapsulating software, databases, and protocols together into easy-to-use services.
For example, in 2007, Wired magazine ran an article in which they tried to create their own social network using open tools:
For the last couple of weeks, Wired News tried to roll its own Facebook using free web tools and widgets. We came close, but we ultimately failed. We were able to recreate maybe 90 percent of Facebook’s functionality, but not the most important part — a way to link people and declare the nature of the relationship.
Some developers proposed solving this problem by creating a database of social graphs run by a non-profit organization:
Establish a non-profit and open source software (with copyrights held by the non-profit) which collects, merges, and redistributes the graphs from all other social network sites into one global aggregated graph. This is then made available to other sites (or users) via both public APIs (for small/casual users) and downloadable data dumps, with an update stream / APIs, to get iterative updates to the graph (for larger users).
These open schemes required widespread coordination among standards bodies, server operators, app developers, and sponsoring organizations to mimic the functionality that proprietary services could provide all by themselves. As a result, proprietary services were able to create better user experiences and iterate much faster. This led to faster growth, which in turn led to greater investment and revenue, which then fed back into product development and further growth. Thus began a flywheel that drove the meteoric rise of proprietary social networks like Facebook and Twitter.
Had the token model for network development existed back in 2007, the playing field would have been much more level. First, tokens provide a way not only to define a protocol, but to fund the operating expenses required to host it as a service. Bitcoin and Ethereum have tens of thousands of servers around the world (“miners”) that run their networks. They cover the hosting costs with built-in mechanisms that automatically distribute token rewards to computers on the network (“mining rewards”).
There are over 20,000 Ethereum nodes around the world (source)
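As a concrete illustration of such a built-in reward mechanism, Bitcoin’s published issuance schedule starts at 50 coins per block and halves every 210,000 blocks. The short Python sketch below computes the block subsidy at a given height; it is illustrative only, ignoring transaction fees and the integer (satoshi) arithmetic real nodes use.

INITIAL_SUBSIDY = 50          # coins paid to the miner of block 0
HALVING_INTERVAL = 210_000    # blocks between halvings, roughly four years

def block_subsidy(height):
    # The reward halves once per completed halving interval.
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY / (2 ** halvings)

print(block_subsidy(0))        # 50.0
print(block_subsidy(210_000))  # 25.0
print(block_subsidy(420_000))  # 12.5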
Second, tokens provide a model for creating shared computing resources (including databases, compute, and file storage) while keeping the control of those resources decentralized (and without requiring an organization to maintain them). This is the blockchain technology that has been talked about so much. Blockchains would have allowed shared social graphs to be stored on a decentralized network. It would have been easy for the Wired author to create an open social network using the tools available today.
Tokens align incentives among network participants
Some of the fiercest battles in tech are between complements. There were, for example, hundreds of startups that tried to build businesses on the APIs of social networks only to have the terms change later on, forcing them to pivot or shut down. Microsoft’s battles with complements like Netscape and Intuit are legendary. Battles within ecosystems are so common and drain so much energy that business books are full of frameworks for how one company can squeeze profits from adjacent businesses (e.g. Porter’s five forces model).
Token networks remove this friction by aligning network participants to work together toward a common goal — the growth of the network and the appreciation of the token. This alignment is one of the main reasons Bitcoin continues to defy skeptics and flourish, even while new token networks like Ethereum have grown alongside it.
Moreover, well-designed token networks include an efficient mechanism to incentivize network participants to overcome the bootstrap problem that bedevils traditional network development. For example, Steemit is a decentralized Reddit-like token network that makes payments to users who post and upvote articles. When Steemit launched last year, the community was pleasantly surprised when they made their first significant payout to users.
Tokens help overcome the bootstrap problem by adding financial utility when application utility is low
This in turn led to the appreciation of Steemit tokens, which increased future payouts, leading to a virtuous cycle where more users led to more investment, and vice versa. Steemit is still a beta project and has since had mixed results, but was an interesting experiment in how to generalize the mutually reinforcing interaction between users and investors that Bitcoin and Ethereum first demonstrated.
A lot of attention has been paid to token pre-sales (so-called “ICOs”), but they are just one of multiple ways in which the token model innovates on network incentives. A well-designed token network carefully manages the distribution of tokens across all five groups of network participants (users, core developers, third-party developers, investors, service providers) to maximize the growth of the network.
One way to think about the token model is to imagine if the internet and web hadn’t been funded by governments and universities, but instead by a company that raised money by selling off domain names. People could buy domain names either to use them or as an investment (collectively, domain names are worth tens of billions of dollars today). Similarly, domain names could have been given out as rewards to service providers who agreed to run hosting services, and to third-party developers who supported the network. This would have provided an alternative way to finance and accelerate the development of the internet while also aligning the incentives of the various network participants.
The open network movement
The cryptocurrency movement is the spiritual heir to previous open computing movements, including the open source software movement led most visibly by Linux, and the open information movement led most visibly by Wikipedia.
1991: Linus Torvalds’ forum post announcing Linux; 2001: the first Wikipedia page
Both of these movements were once niche and controversial. Today Linux is the dominant worldwide operating system, and Wikipedia is the most popular informational website in the world.
Crypto tokens are currently niche and controversial. If present trends continue, they will soon be seen as a breakthrough in the design and development of open networks, combining the societal benefits of open protocols with the financial and architectural benefits of proprietary networks. They are also an extremely promising development for those hoping to keep the internet accessible to entrepreneurs, developers, and other independent creators.
“It was free!” announces Bob the Dinosaur, an adorable moron from the Dilbert cartoon. Bob is driving a bright red convertible. “They just make you sign papers!” he elaborates. That cartoon is a quarter of a century old, but some things never change. The suspicion lingers that too many people are buying cars using financial products they do not fully understand.
In the UK, the finger of suspicion is pointing at personal contract purchase agreements, or PCPs, which account for 80 per cent of new cars sold. The Prudential Regulation Authority and the Financial Conduct Authority are looking into the car finance sector (the FCA is supposed to prevent us being ripped off; the PRA is supposed to prevent banks accidentally ripping themselves off — thankless tasks).
It is difficult to explain quite how PCPs work, but easy to see the problem. Graham Hill, of the National Association of Commercial Finance Brokers, told the FT recently that using a PCP, drivers could pay less for a new BMW or Mercedes than for a second-hand Ford Focus. Or, as Bob the Dinosaur might put it, “they just make you sign papers!”
This miraculous effect is achieved by endlessly rolling over one quasi-hire-purchase agreement into another, never quite buying a car. PCPs are flattered by a buoyant used-car market that is likely to sag before long. Yet manufacturers like them because they encourage people to buy new cars more often; car dealers like them because they generate more commission; and customers like them because the monthly payments are low. If you’re not worried yet, I have a car to sell you.
Some PCPs may be good value. The problem is that it is hard to tell. The closest analogy to a PCP — and it is not particularly close — is buying and selling a series of homes using interest-only mortgages. PCPs are a hybrid of several different financial products, part lease, part hire-purchase, and part option to choose between the two. Variables include contract length, the guaranteed value of the returned car, the deposit, purchase price of the car itself, maintenance contract tie-ins, mileage allowances, and (of course) the interest rate.
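To get a feel for why the comparison is so slippery, here is a rough Python sketch of the arithmetic, using a deliberately simplified flat-interest model and made-up numbers. A PCP finances only the gap between the price and the guaranteed future value, which is why the monthly figure looks so attractive; a hire purchase finances the whole balance but leaves you owning the car.

def flat_rate_monthly_payment(amount_financed, annual_rate, months):
    # Simple flat interest -- a rough approximation, not the APR-based
    # amortisation a real lender would quote.
    interest = amount_financed * annual_rate * (months / 12)
    return (amount_financed + interest) / months

price = 30_000      # new car price (illustrative)
deposit = 3_000
gfv = 15_000        # guaranteed future value at the end of the contract
months = 36
rate = 0.05         # 5% flat annual rate (illustrative)

# PCP: finance only the expected depreciation, then hand the car back,
# pay the GFV, or roll into another PCP.
pcp_monthly = flat_rate_monthly_payment(price - deposit - gfv, rate, months)

# Hire purchase: finance the whole balance and own the car at the end.
hp_monthly = flat_rate_monthly_payment(price - deposit, rate, months)

print(f"PCP monthly: {pcp_monthly:.2f}")   # about 383 -- but the car is not yours
print(f"HP monthly:  {hp_monthly:.2f}")    # about 862 -- higher, but you keep the car

Even in this stripped-down model the headline monthly payment says little about the total cost of ownership, and real deals layer fees, mileage charges and balloon-payment options on top.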
There is no reason to think customers can navigate these complexities. Suzanne Shu, an economist at UCLA, has shown that picking the cheapest mortgage deals is a problem that will fox even MBA students. PCPs are harder. We’ve seen this story play out before. There seems to be something about consumer finance that turns us all into Bob the Dinosaur.
George Akerlof and Robert Shiller, Nobel laureate economists and co-authors of Phishing for Phools (UK) (US), argue that the fallibility of consumers creates a profit opportunity. Consumer finance features heavily in their book, alongside junk food, cigarettes and slot machines — unflattering company. Given that a loan or insurance could be a life-saving product, why do financial services so often disappoint?
One reason is that finance shifts purchasing power over time and across different risky outcomes. We exhibit well-known biases when evaluating these different prospects, paying exorbitant prices to postpone a cost or eliminate a small risk. Payday loans, credit cards and extended warranties are real-world examples of that fallibility.
Second, financial contracts can create complexity out of simplicity. Even if we were thinking coolly about what was on offer, we might not be able to understand the small print — Suzanne Shu’s students couldn’t.
Third, many financial contracts are bundled up with other purchases — the PCP, plus mobile phone contracts and overpriced insurance for short-term car hire. Payment protection insurance is the infamous rip-off endgame.
Costly add-ons are not always a disaster. Everyone knows that popcorn is expensive at the cinema, and no regulatory intervention is needed. But all too often we’re seeing the primary product serving as bait for a consumer finance trap.
The best defence of laissez-faire in such cases is not that the market works well, but that regulators would make it worse. Is that true? Do regulators have a sensible response to the toxic tangle of slick salesmanship, financial wizardry, and consumer incompetence?
We could ban complex contracts: tempting, but heavy-handed. Most financial contracts have a rationale and a value to some customers. Richard Thaler and Cass Sunstein, authors of Nudge (UK) (US), have proposed an alternative system they call RECAP — for “Record, Evaluate, Compare Alternative Prices”. A RECAP rule would require finance companies to provide the entire pricing schedule in a computer-readable format, which would allow customers to go to third-party websites and compare different deals in a sophisticated way.
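What that could look like in practice, sketched in Python under the assumption that lenders published simple machine-readable pricing records (the field names and figures below are invented for illustration): a comparison site could then total up each deal over the full contract rather than leaving the buyer to juggle the variables.

# Hypothetical machine-readable pricing records a comparison site might ingest.
# Field names and numbers are invented for illustration only.
deals = [
    {"name": "Deal A", "deposit": 2_000, "monthly": 299, "months": 36,
     "optional_final_payment": 14_000, "fees": 250},
    {"name": "Deal B", "deposit": 1_000, "monthly": 349, "months": 48,
     "optional_final_payment": 9_500, "fees": 0},
]

def total_cost_if_kept(deal):
    # Total outlay if the customer keeps the car at the end of the contract.
    return (deal["deposit"] + deal["monthly"] * deal["months"]
            + deal["optional_final_payment"] + deal["fees"])

for deal in sorted(deals, key=total_cost_if_kept):
    print(deal["name"], "total if kept:", total_cost_if_kept(deal))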
What auto finance needs — what most consumer finance needs — is for key information to be made simple and salient. Competition cannot work if consumers struggle to understand what they’re being sold and what it will cost. The car market’s heady mix of prestige products and bewildering finance will resist efforts at reform. Yet we must try. Bob the Dinosaur needs help.
Written for and first published in the Financial Times on 5th May.
My new book is “Fifty Things That Made The Modern Economy” – coming soon! If you want to get ahead of the curve you can pre-order in the US (slightly different title) or in the UK or through your local bookshop.
I haven't encountered this issue myself on any of my Ryzen Linux boxes, but it seems there are a number of Ryzen Linux users who are facing segmentation faults and sometimes crashes when running concurrent compilation loads on these Zen CPUs.
A Phoronix reader pointed out some of the resources for Ryzen Linux users facing problems, mainly when running heavy compilation tasks, such as on Arch and Gentoo. AMD hasn’t yet found the root cause of the issue, but given the range of users affected, it appears to be related to the processor itself.
Those interested in learning more about these Ryzen compilation issues can find a number of open threads on the matter, such as on the Gentoo forums and the AMD Community, as well as entries in this Google Doc tracking Gentoo users who have the problem.
AMD is expected to update its community thread when a solution is found. Some workarounds include adjusting Load Line Calibration (LLC) in the BIOS; some users have found success by disabling SMT, while others still encounter the problem even with SMT turned off on their Ryzen 7 CPUs. The issue occurs on multiple versions of GCC, but I haven’t seen any reports of it when using LLVM/Clang or other compilers.
But consumer advocates, technology experts, people who have been inundated with these calls and the lawyers representing them say such an exemption would open the floodgates. Consumers’ voice mail boxes would be clogged with automated messages, they say, making it challenging to unearth important calls, whether they are from an elderly mother’s nursing home or a child’s school.
If unregulated, ringless voice mail messages “will likely overwhelm consumers’ voice mail systems and consumers will have no way to limit, control or stop these messages,” Margot Freeman Saunders, senior counsel at the National Consumer Law Center, wrote in the organization’s comment letter to the Federal Communications Commission on behalf of more than a dozen consumer groups. “Debt collectors could potentially hijack a consumer’s voice mail with collection messages.”
The commission is collecting public comments on the issue after receiving a petition from a ringless voice mail provider that wants to avoid regulation under the Telephone Consumer Protection Act of 1991. That federal law among other things prohibits calling cellular phones with automated dialing and artificial or prerecorded voices without first obtaining consent — except in an emergency.
All About the Message, the ringless voice mail provider petitioning the commission, uses technology developed by another company, Stratics Networks. All About the Message’s customers use the service to deliver marketing and other messages directly to consumers’ voice mail boxes.
Will Wiquist, a spokesman for the F.C.C., said the commission would review the record after the public comment period closed and consider a decision. There is no formal timeline for resolving such petitions, and the commission cannot comment on the petition until a ruling is issued.
“They are all poised to launch a cannon full of calls to consumers,” said Peter F. Barry, a consumer lawyer in Minneapolis. “If there is no liability for it, it will be a new law that needs to get passed very quickly.”
Even consumers on the “Do Not Call” list could potentially be bombarded by telemarketers, advocates said. “The legal question is whether the people sending the messages would be required to comply with the Do Not Call list,” Ms. Saunders said. “We read the law to possibly not apply if they are not considered calls.”
This is not the first time the commission has received such a request. Nearly three years ago, it received a similar petition from VoAPPs, another voice mail technology company, which wanted to allow debt collectors to reach consumers through voice mail. But the petition was withdrawn before the commission could rule.
More specifically, All About the Message wants the F.C.C. to rule that its voice mail messages are not calls, and therefore can be delivered by automatic telephone dialing systems using an artificial or prerecorded voice. In its petition, the company argued that the law “does not impose liability for voice mail messages” when they are delivered directly to a voice mail service provider and subscribers are not charged for a call.
“The act of depositing a voice mail on a voice mail service without dialing a consumer’s cellular telephone line does not result in the kind of disruptions to a consumer’s life — dead air calls, calls interrupting consumers at inconvenient times or delivery charges to consumers,” All About the Message wrote. The company’s lawyer declined to comment.
If the commission rules against it, All About the Message said, it wants a retroactive waiver to relieve the company and its customers of any liability and “potentially substantial damages” for voice mail already delivered.
Photo: Frank Kemp, a video editor in Dover, Del., received a ringless message that he said “was literally a telemarketing voice mail to try to sell telemarketing systems.” Credit: David Norbut for The New York Times
The company has reason to ask. Even though it started business just last year, one of All About the Message’s customers — an auto dealer — is already facing a lawsuit involving a consumer who received repeated messages. Tom Mahoney, who said he received four voice mail messages from Naples Nissan in 2016, is the lead plaintiff in a suit filed in United States District Court for the Southern District of Florida.
According to the suit, the parties in the case have reached a tentative agreement to settle all claims. Lawyers for both Mr. Mahoney and Naples Nissan declined to comment.
The suit said that Mr. Mahoney’s daughter had received similar messages — advertising zero-interest auto financing — and that neither he nor she had given the company consent.
Josh Justice, chief executive of Stratics Networks, said its technology — which can send out 100 ringless voice mail messages a minute — had existed for 10 years and had not caused a widespread nuisance. It was intended, for example, for businesses like hospitals, dentists’ and doctors’ offices, banks, and shipping companies to reach customers, and for “responsible marketing.”
“The concept of ringless voice mail was to develop a nonnuisance form of messaging or a nonintrusive alternative to robocalls,” Mr. Justice said.
He contends that telemarketers should be able to use ringless voice mail messages as long as they do so responsibly — that is, skipping over consumers on the “Do Not Call” list, identifying who is leaving the message and giving people a way to opt out. But he said he did not believe that ringless voice mail needed to be subject to the same regulations as other calls — unless regulators find that the messages are generating complaints or being used inappropriately.
Consumer advocates and other experts argue that the courts and the F.C.C. have already established that technology similar to ringless voice mail — which delivered mass automated texts to cellphones — was deemed the same as calls and was covered by the consumer protection law.
“These companies are only spinning an incorrect interpretation of the regulations and the definition of the word ‘call,’” said Randall Snyder, a telecommunications engineering consultant and expert witness in more than 100 cases involving related regulations.
“Definitions of words in regulations and statutes are legal issues,” he said, “but there is certainly lots of common sense here.”
The Republican National Committee, which is in favor of ringless voice mail, goes as far as to argue that prohibiting direct-to-voice-mail messages may be a violation of free speech. Telephone outreach campaigns, it said, are a core part of political activism.
“Political organizations like the R.N.C. use all manner of communications to discuss political and governmental issues and to solicit donations — including direct-to-voice-mail messages,” the committee said in its letter to the commission.
For now, consumers who receive these messages can file complaints with regulators; they can also provide comments on whether they believe ringless voice mail should be subject to consumer protection rules.
But Ms. Saunders said blocking messages might be impossible: It is the phone that blocks calls, and these messages go right to voice mail. (More advice on how to register for the “Do Not Call” list and how to avoid robocalls and texts can be found on the F.C.C. website.)
Justin T. Holcombe, a consumer lawyer and partner at Skaar & Feagle in Woodstock, Ga., said the commission’s ruling would have implications for just about everyone. If ringless voice mail could avoid consumer protection rules, “it would be a free-for-all,” he said.
Mr. Kemp, the video editor in Delaware who received the ringless voice mail message, said that in recent weeks he had been targeted by robocallers advertising vehicle financing, even though he owns his truck outright. His strategy? He goes through the menu prompts, acting as if he were interested; when he finally reaches a live person, he angrily demands that his number be removed from the caller’s list.
“Hasn’t worked yet,” he said, “but it’s a good stress reliever.”
Einstein Vision is a service that helps you build smarter applications by using deep learning to automatically recognize images. It provides an API that lets you use image recognition to build AI-enabled apps.
Golang is becoming a popular programming language as it is extremely fast, has a low runtime footprint, and produces statically linked binaries with minimal dependencies. If you’re a developer who wants to get familiar with Golang and Einstein Vision, this blog is for you. Today we’ll learn how to quickly integrate your Go applications with Einstein Vision.
Because visual data is expanding at a 9% compound annual growth rate (CAGR), it’s vital for organizations to tap into non-textual data to discover conversations about their brand.
Some use-cases to consider include:
Tracking your brand perception on social channels like Twitter and Facebook
Giving field service agents the ability to identify and classify objects, such as factory equipment, to handle service updates faster
Before we dive into more detail, let’s explore the algorithms that power Einstein Vision.
The algorithms behind Einstein Vision
Einstein Vision builds on advances in deep learning and an underlying neural network architecture. It’s able to classify images with high accuracy because it uses multiple hidden layers to extract features and assign appropriate weights and biases to each feature output.
It’s outside the scope of this blog to discuss neural networks in great detail. In general, think of them as multiple neurons tied together in a graph. There’s an input layer, where images are fed in, and hidden layers, which extract features such as the image’s contours, corners, and shapes so the network can eventually classify new images.
The last layer in a neural network is the output layer, which predicts outputs. So now that we know a little bit more about neural networks, let’s look at the specifics of how Einstein Vision models are built using a training dataset.
How does Einstein Vision work?
Einstein Vision uses a supervised learning technique in which a model is trained on prelabeled data. The training dataset consists of labeled images and is loaded into Einstein via an API call. Next, a REST API call is made to train on the dataset, and the output is a trained model with a unique model ID. Finally, an API call is made to predict the label of a new image that the model has never seen before.
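The flow above amounts to three REST calls, so it is easy to exercise before wiring it into a Go application. Below is a minimal, language-neutral sketch of the prediction step, shown here in Python with the requests library for brevity; the endpoint path, form-field names, and response shape are assumptions based on the public Einstein Vision v2 documentation, not code taken from the Go wrapper described below.
# Minimal sketch of an Einstein Vision prediction call. The endpoint and the
# modelId/sampleLocation form fields are assumptions from the public v2 docs;
# EINSTEIN_VISION_TOKEN is an OAuth access token you have already obtained.
import os
import requests

PREDICT_URL = "https://api.einstein.ai/v2/vision/predict"   # assumed endpoint
TOKEN = os.environ["EINSTEIN_VISION_TOKEN"]

def predict_by_url(image_url, model_id="GeneralImageClassifier"):
    """Ask the given model to classify an image reachable at image_url."""
    response = requests.post(
        PREDICT_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        # Sent as multipart/form-data, mirroring the curl examples in the docs.
        files={"modelId": (None, model_id),
               "sampleLocation": (None, image_url)},
    )
    response.raise_for_status()
    return response.json()["probabilities"]    # list of {label, probability}

if __name__ == "__main__":
    for result in predict_by_url("https://example.com/cat.jpg"):
        print(result["label"], result["probability"])
Classifying against a custom model works the same way; you pass the model ID returned by the training call instead of the prebuilt GeneralImageClassifier.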
Golang Einstein library
In this section you will learn about the class structure and various functions exposed by the Golang library.
The class structure
Golang wrapper
The wrapper is contained in a few Go source files under the vision package, as listed here:
vision
|-- datasets.go
|-- types.go
`-- vision.go
Functions in vision.go
Functions in this file focus on the actual classification of a new image against a generic prebuilt classifier or a custom classifier.
This blog post provides you with the basics on how to use Einstein Vision from a Golang stand-alone application. Hopefully these samples provide a good starting point for developers interested in extending their Golang applications to work with Einstein Vision APIs. You can find the code repo for the library and samples at https://github.com/rajdeepd/einstein-go-apis.
You can also try Einstein Vision on Trailhead to learn how to integrate Einstein Vision into your Force.com workflows using Apex and Visualforce. If you have any questions, feel free to reach out on Salesforce Developer Forums.
About the author
Rajdeep Dua is a director of developer relations at Salesforce. He is passionate about helping developers learn about cloud computing, machine learning, and Salesforce. He has over 17 years of experience in software product development and developer relations.
This is a guest post by Dana Van Aken, Andy Pavlo, and Geoff Gordon of Carnegie Mellon University. This project demonstrates how academic researchers can leverage our AWS Cloud Credits for Research Program to support their scientific breakthroughs.
Database management systems (DBMSs) are the most important component of any data-intensive application. They can handle large amounts of data and complex workloads. But they’re difficult to manage because they have hundreds of configuration “knobs” that control factors such as the amount of memory to use for caches and how often to write data to storage. Organizations often hire experts to help with tuning activities, but experts are prohibitively expensive for many.
OtterTune, a new tool that’s being developed by students and researchers in the Carnegie Mellon Database Group, can automatically find good settings for a DBMS’s configuration knobs. The goal is to make it easier for anyone to deploy a DBMS, even those without any expertise in database administration.
OtterTune differs from other DBMS configuration tools because it leverages knowledge gained from tuning previous DBMS deployments to tune new ones. This significantly reduces the amount of time and resources needed to tune a new DBMS deployment. To do this, OtterTune maintains a repository of tuning data collected from previous tuning sessions. It uses this data to build machine learning (ML) models that capture how the DBMS responds to different configurations. OtterTune uses these models to guide experimentation for new applications, recommending settings that improve a target objective (for example, reducing latency or improving throughput).
In this post, we discuss each of the components in OtterTune’s ML pipeline, and show how they interact with each other to tune a DBMS’s configuration. Then, we evaluate OtterTune’s tuning efficacy on MySQL and Postgres by comparing the performance of its best configuration with configurations selected by database administrators (DBAs) and other automatic tuning tools.
OtterTune is an open source tool that was developed by students and researchers in the Carnegie Mellon Database Research Group. All code is available on GitHub, and is licensed under Apache License 2.0.
The following diagram shows the OtterTune components and workflow.
At the start of a new tuning session, the user tells OtterTune which target objective to optimize (for example, latency or throughput). The client-side controller connects to the target DBMS and collects its Amazon EC2 instance type and current configuration.
Then, the controller starts its first observation period, during which it observes the DBMS and records the target objective. When the observation period ends, the controller collects internal metrics from the DBMS, like MySQL’s counters for pages read from disk and pages written to disk. The controller returns both the target objective and the internal metrics to the tuning manager.
When OtterTune’s tuning manager receives the metrics, it stores them in its repository. OtterTune uses the results to compute the next configuration that the controller should install on the target DBMS. The tuning manager returns this configuration to the controller, with an estimate of the expected improvement from running it. The user can decide whether to continue or terminate the tuning session.
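Conceptually, the client side of a tuning session is a simple observe, report, and apply loop. The sketch below is illustrative only: the dbms helper object, its methods, and the /next-config endpoint are hypothetical placeholders, not OtterTune’s actual controller code (which lives in the GitHub repository).
# Illustrative sketch of a tuning-session loop. The dbms object, its method
# names, and the tuning-manager URL are hypothetical placeholders.
import time
import requests

TUNING_MANAGER_URL = "http://tuning-manager.example.com/next-config"  # placeholder
OBSERVATION_PERIOD_SECONDS = 300

def run_session(dbms, target_objective="throughput", max_iterations=20):
    for _ in range(max_iterations):
        before = dbms.snapshot_internal_metrics()      # e.g. MySQL status counters
        time.sleep(OBSERVATION_PERIOD_SECONDS)         # observation period
        after = dbms.snapshot_internal_metrics()
        observation = {
            "target": dbms.measure(target_objective),
            "metrics": {name: after[name] - before[name] for name in after},
            "knobs": dbms.current_config(),
            "hardware": dbms.instance_type(),
        }
        reply = requests.post(TUNING_MANAGER_URL, json=observation).json()
        print("expected improvement:", reply["expected_improvement"])
        dbms.install_config(reply["config"])           # may require a DBMS restart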
Notes
OtterTune maintains a blacklist of knobs for each DBMS version it supports. The blacklist includes knobs that don’t make sense to tune (for example, path names for where the DBMS stores files), or those that could have serious or hidden consequences (for example, potentially causing the DBMS to lose data). At the beginning of each tuning session, OtterTune provides the blacklist to the user, who can then add any other knobs that they want OtterTune to avoid tuning.
OtterTune makes certain assumptions that might limit its usefulness for some users. For example, it assumes that the user has administrative privileges that allow the controller to modify the DBMS’s configuration. If not, the user can deploy a second copy of the database on other hardware for OtterTune’s tuning experiments. This requires the user to either replay a workload trace or forward queries from the production DBMS. For a complete discussion of assumptions and limitations, see our paper.
The following diagram shows how data is processed as it moves through OtterTune’s ML pipeline. All observations reside in OtterTune’s repository.
OtterTune first passes observations into the Workload Characterization component. This component identifies a smaller set of DBMS metrics that best capture the variability in performance and the distinguishing characteristics for different workloads.
Next, the Knob Identification component generates a ranked list of the knobs that most affect the DBMS’s performance. OtterTune then feeds all of this information to the Automatic Tuner. This component maps the target DBMS’s workload to the most similar workload in its data repository, and reuses this workload data to generate better configurations.
Let’s drill down on each of the components in the ML pipeline.
Workload Characterization: OtterTune uses the DBMS’s internal runtime metrics to characterize how a workload behaves. These metrics provide an accurate representation of a workload because they capture many aspects of its runtime behavior. However, many of the metrics are redundant: some are the same measurement recorded in different units, and others represent independent components of the DBMS whose values are highly correlated. It’s important to prune redundant metrics because that reduces the complexity of the ML models that use them. To do this, we cluster the DBMS’s metrics based on their correlation patterns. We then select one representative metric from each cluster, specifically, the one closest to the cluster’s center. Subsequent components in the ML pipeline use these metrics.
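As a rough illustration of that pruning step (a minimal sketch, not OtterTune’s exact implementation, which does additional preprocessing), one can cluster the metrics by their correlation profiles and keep the metric nearest each cluster center:
# Sketch: prune redundant DBMS metrics by clustering their correlation
# profiles and keeping one representative per cluster. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def select_representative_metrics(observations, n_clusters=8):
    """observations: 2-D array, rows = observation periods, columns = metrics.
    Returns the column indices of the metrics to keep."""
    X = np.asarray(observations, dtype=float)
    corr = np.nan_to_num(np.corrcoef(X, rowvar=False))   # metric-by-metric correlation
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(corr)
    kept = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(corr[members] - km.cluster_centers_[c], axis=1)
        kept.append(int(members[np.argmin(dists)]))      # metric closest to the center
    return sorted(kept)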
Knob Identification: DBMSs can have hundreds of knobs, but only a subset affects the DBMS’s performance. OtterTune uses a popular feature-selection technique, called Lasso, to determine which knobs strongly affect the system’s overall performance. By applying this technique to the data in its repository, OtterTune identifies the order of importance of the DBMS’s knobs.
Then, OtterTune must decide how many of the knobs to use when making configuration recommendations. Using too many of them significantly increases OtterTune’s optimization time. Using too few could prevent OtterTune from finding the best configuration. To automate this process, OtterTune uses an incremental approach. It gradually increases the number of knobs used in a tuning session. This approach allows OtterTune to explore and optimize the configuration for a small set of the most important knobs before expanding its scope to consider others.
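Here is a minimal sketch of the ranking step described above, assuming you already have a matrix of knob settings and a vector of observed performance; OtterTune’s real pipeline adds further feature engineering, so treat this only as an outline of the Lasso idea.
# Sketch: rank knobs by how strongly they affect a performance objective,
# using Lasso for feature selection. Illustrative, not OtterTune's code.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def rank_knobs(knob_matrix, performance, knob_names):
    """knob_matrix: rows = observations, columns = knob settings.
    performance: target objective per observation (e.g. throughput)."""
    X = StandardScaler().fit_transform(knob_matrix)
    y = StandardScaler().fit_transform(
        np.asarray(performance, dtype=float).reshape(-1, 1)).ravel()
    lasso = LassoCV(cv=5, random_state=0).fit(X, y)
    order = np.argsort(-np.abs(lasso.coef_))             # strongest knobs first
    return [(knob_names[i], float(lasso.coef_[i])) for i in order
            if lasso.coef_[i] != 0]
The incremental step then amounts to tuning the top few knobs from this ranking and growing the set as the session progresses.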
Automatic Tuner: The Automatic Tuner component determines which configuration OtterTune should recommend by performing a two-step analysis after each observation period.
First, the system uses the performance data for the metrics identified in the Workload Characterization component to identify the workload from a previous tuning session that best represents the target DBMS’s workload. It compares the session’s metrics with the metrics from previous workloads to see which ones react similarly to different knob settings.
Then, OtterTune chooses another knob configuration to try. It fits a statistical model to the data that it has collected, along with the data from the most similar workload in its repository. This model lets OtterTune predict how well the DBMS will perform with each possible configuration. OtterTune optimizes the next configuration, trading off exploration (gathering information to improve the model) against exploitation (greedily trying to do well on the target metric).
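The text above only commits to “a statistical model”; the sketch below stands in a Gaussian process regressor and a simple upper-confidence-bound rule to show how the exploration/exploitation trade-off can be expressed. The kappa parameter and the candidate-sampling scheme are assumptions for illustration and are not OtterTune’s actual tuner, which is implemented separately (see the next paragraphs).
# Sketch: pick the next knob configuration by fitting a Gaussian process to
# the (configuration, performance) pairs seen so far and choosing the
# candidate with the best upper confidence bound. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_next_config(tried_configs, performance, candidates, kappa=2.0):
    """tried_configs: knob settings already evaluated (n_obs x n_knobs).
    performance: target objective for each, higher is better.
    candidates: untried knob settings to choose from."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.asarray(tried_configs), np.asarray(performance))
    mean, std = gp.predict(np.asarray(candidates), return_std=True)
    ucb = mean + kappa * std        # exploitation plus kappa times exploration
    best = int(np.argmax(ucb))
    return candidates[best], float(mean[best])   # next config and its predicted value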
OtterTune is written in Python.
For the Workload Characterization and Knob Identification components, runtime performance isn’t a key concern, so we implemented the corresponding ML algorithms with scikit-learn. These algorithms run in background processes, incorporating new data as it becomes available in OtterTune’s repository.
For the Automatic Tuner, the ML algorithms are on the critical path. They run after each observation period, incorporating new data so that OtterTune can pick a knob configuration to try next. Because performance is a consideration, we implemented these algorithms using TensorFlow.
To collect data about the DBMS’s hardware, knob configurations, and runtime performance metrics, we integrated OtterTune’s controller with the OLTP-Bench benchmarking framework.
Experiment design
To evaluate, we compared the performance of MySQL and Postgres using the best configuration selected by OtterTune with the following:
Default: The configuration provided by the DBMS
Tuning script: The configuration generated by an open source tuning advisor tool
DBA: The configuration chosen by a human DBA
RDS: The configuration customized for the DBMS that is managed by Amazon RDS and deployed on the same EC2 instance type
We conducted all of our experiments on Amazon EC2 Spot Instances. We ran each experiment on two instances: one for OtterTune’s controller and one for the target DBMS deployment. We used the m4.large and m3.xlarge instance types, respectively. We deployed OtterTune’s tuning manager and data repository on a local server with 20 cores and 128 GB of RAM.
We used the TPC-C workload, which is the industry standard for evaluating the performance of online transaction processing (OLTP) systems.
Evaluation
For each database we used in our experiment, MySQL and Postgres, we measured latency and throughput. The following graphs show the results. The first graph shows the 99th percentile latency, which represents the “worst case” length of time that it takes a transaction to complete. The second graph shows throughput, measured as the average number of transactions completed per second.
MySQL results
Comparing the best configuration generated by OtterTune with configurations generated by the tuning script and RDS, MySQL achieves approximately a 60% reduction in latency and 22% to 35% better throughput with the OtterTune configuration. OtterTune also generates a configuration that is almost as good as one chosen by the DBA.
Just a few of MySQL’s knobs significantly affect its performance for the TPC-C workload. The configurations generated by OtterTune and the DBA provide good settings for each of these knobs. RDS performs slightly worse because it provides a suboptimal setting for one knob. The tuning script’s configuration performs the worst because it modifies only one knob.
Postgres results
For latency, the configurations generated by OtterTune, the tuning tool, the DBA, and RDS all achieve similar improvements over Postgres’ default settings. We can probably attribute this to the overhead required for round trips between the OLTP-Bench client and the DBMS over the network. For throughput, Postgres performs approximately 12% better with the configuration suggested by OtterTune than with the configurations chosen by the DBA and the tuning script, and approximately 32% better compared to RDS.
Similar to MySQL, only a few knobs significantly affect Postgres’ performance. The configurations generated by OtterTune, the DBA, the tuning script, and RDS all modified these knobs, and most provided reasonably good settings.
OtterTune automates the process of finding good settings for a DBMS’s configuration knobs. To tune new DBMS deployments, it reuses training data gathered from previous tuning sessions. Because OtterTune doesn’t need to generate an initial dataset for training its ML models, tuning time is drastically reduced.
What’s next? To accommodate the growing popularity of DBaaS deployments, where remote access to the DBMS’s host machine isn’t available, OtterTune will soon be able to automatically detect the hardware capabilities of the target DBMS without requiring remote access.
For more details about OtterTune, see our paper or the code on GitHub. Keep an eye on this website, where we will soon make OtterTune available as an online-tuning service.
About the Authors
Dana Van Aken is a PhD student in Computer Science at Carnegie Mellon University advised by Dr. Andrew Pavlo. Her broad research interest is in database management systems. Her current work focuses on developing automatic techniques for tuning database management systems using machine learning.
Dr. Andy Pavlo is an Assistant Professor of Databaseology in the Computer Science Department at Carnegie Mellon University. At CMU, he is a member of the Database Group and the Parallel Data Laboratory. His work is also in collaboration with the Intel Science and Technology Center for Big Data.
Dr. Geoff Gordon is Associate Professor and Associate Department Head for Education in the Department of Machine Learning at Carnegie Mellon University. His research interests include artificial intelligence, statistical machine learning, educational data, game theory, multi-robot systems, and planning in probabilistic, adversarial, and general-sum domains.
Google Cloud launched a new Internet of Things management service today called Google Cloud IoT Core that provides a way for companies to manage IoT devices and process data being generated by those devices.
A transportation or logistics firm, for example, could use this service to collect data from its vehicles and combine it with other information like weather, traffic and demand to place the right vehicles at the right place at the right time.
By making this into a service, Google is not only keeping up with AWS and Microsoft, which have similar services; it is also tapping into a fast-growing market. In fact, a Google Cloud spokesperson said the genesis of this service wasn’t so much about keeping up with its competitors — although that’s clearly part of it — as about providing a service its cloud customers were increasingly demanding.
That’s because more and more companies are dealing with tons of data coming from devices large and small, whether a car or truck or tiny sensors sitting on an MRI machine or a machine on a manufacturer’s shop floor. Just validating the devices, then collecting the data they are generating is a huge undertaking for companies.
Google Cloud IoT Core is supposed to help deal with all of that by removing a level of complexity associated with managing all of these devices and data. By packaging this as a service, Google is trying to do a lot of the heavy lifting for customers, providing them with the infrastructure and services they need to manage the data, using Google’s software services like Google Cloud Dataflow, Google BigQuery, and Google Cloud Machine Learning Engine. Customers can work with third-party partners like ARM, Intel and Sierra Wireless for their IoT hardware and Helium, Losant or Tellmeplus for building their applications.
While the company bills itself as the more open alternative to competitors like AWS and Microsoft Azure, this IoT service is consistent with Google’s overall strategy to let customers use both its core cloud services and whatever other services they choose to bring to the process, whether they are from Google itself or from a third party.
The solution consists of two main pieces. First there is a device manager for registering each of the “things” from which you will be collecting data. This can be done manually through a console or programmatically to register the devices in a more automated fashion, which is more likely in scenarios involving thousands or even tens of thousands of devices.
As Google describes it, the device manager establishes the identity of a device and provides a mechanism for authenticating it as it connects to the cloud, while maintaining a configuration for each device that helps the Google Cloud service recognize it.
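For instance, a device can be registered programmatically with a single authenticated REST call. In the sketch below, the URL shape, the id and credentials fields, and the RSA_X509_PEM format string are assumptions based on the Cloud IoT Core API reference; the project, registry, certificate file, and access token are placeholders.
# Sketch: register a device with the Cloud IoT Core REST API. Field names
# and the URL shape are assumptions from the public API reference; the
# project/registry names, certificate file, and token are placeholders.
import requests

PROJECT, REGION, REGISTRY = "my-project", "us-central1", "my-registry"
ACCESS_TOKEN = "ya29.placeholder"   # OAuth token with the Cloud IoT scope

parent = f"projects/{PROJECT}/locations/{REGION}/registries/{REGISTRY}"
device = {
    "id": "my-device",
    "credentials": [{
        "publicKey": {
            "format": "RSA_X509_PEM",              # assumed enum value
            "key": open("rsa_cert.pem").read(),    # device's public certificate
        }
    }],
}
resp = requests.post(
    f"https://cloudiot.googleapis.com/v1/{parent}/devices",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=device,
)
resp.raise_for_status()
print("registered:", resp.json()["name"])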
The second piece is a “protocol bridge,” which provides a way to communicate using standard protocols between the “things” and the Google Cloud service. It includes native support for secure connection over MQTT, an industry-standard IoT protocol, according to the company.
Once the device is registered and the data is moved across the protocol bridge, it can flow through processing and eventually visualization or use in an application.
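To make the protocol bridge concrete, here is a rough sketch of a device publishing telemetry over the MQTT bridge, following Google’s documented connection scheme (a fixed client-ID format, a JWT as the password, and mqtt.googleapis.com on port 8883). The project, registry, device, and key-file names are placeholders, and the snippet assumes the paho-mqtt and PyJWT libraries.
# Sketch: publish telemetry through the Cloud IoT Core MQTT bridge. The broker
# host, client-ID format, and JWT-as-password scheme follow Google's docs;
# project/registry/device names and the key file are placeholders.
import datetime
import json
import ssl

import jwt                      # PyJWT (RS256 signing also needs 'cryptography')
import paho.mqtt.client as mqtt

PROJECT, REGION = "my-project", "us-central1"
REGISTRY, DEVICE = "my-registry", "my-device"
PRIVATE_KEY_FILE, ALGORITHM = "rsa_private.pem", "RS256"

def make_jwt(project_id):
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60),
              "aud": project_id}
    with open(PRIVATE_KEY_FILE) as f:
        return jwt.encode(claims, f.read(), algorithm=ALGORITHM)

client_id = (f"projects/{PROJECT}/locations/{REGION}/"
             f"registries/{REGISTRY}/devices/{DEVICE}")
client = mqtt.Client(client_id=client_id)       # paho-mqtt 1.x style constructor
client.username_pw_set(username="unused", password=make_jwt(PROJECT))
client.tls_set(tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("mqtt.googleapis.com", 8883)

# Telemetry goes to the device's events topic; Cloud IoT Core forwards it to
# the Cloud Pub/Sub topic attached to the registry.
client.publish(f"/devices/{DEVICE}/events",
               json.dumps({"temperature": 21.5}), qos=1)
client.loop(2)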