This article was originally published at The Conversation and has been republished under Creative Commons.
We all know that parrots can talk. Some people may have even seen elephants, seals, or whales mimicking speech sounds. So why can’t our closest primate relatives speak like us? Our new research suggests they have the right vocal anatomy but not the brainpower to use it.
Scientists have been interested in understanding this phenomenon for centuries. Some have argued that nonhuman primates didn’t have the right-shaped body parts to make the same sounds as we do, and that human speech evolved after our speech organs changed. But comparative studies have shown that the form and function of the larynx and vocal tract are very similar across most primate species, including humans.
This suggests that the primate vocal tract is “speech ready” but that most species don’t have the neural control to make the complex sounds that comprise human speech. When reviewing the evidence in 1871, Charles Darwin wrote, “the brain has no doubt been far more important.”
Along with Jeroen Smaers from Stony Brook University in New York, I have been investigating the relationship between the number of different calls that each primate species can make and the architecture of their brains. For example, golden pottos have only ever been recorded using two different sounds, while chimpanzees and bonobos use around 40.
Our recent study, published in Frontiers in Neuroscience, focused on two particular features of the brain. These were the cortical association areas that govern voluntary control over behavior, and the brainstem nuclei that are involved in the neural control of muscles responsible for vocal production. Cortical association areas are found within the neocortex and are key to the higher-order brain functions considered to be the foundation for the complex behavior of primates.
The results indicate a positive correlation between the relative size of the cortical association areas and the size of the vocal repertoire of primates. In simple terms, primates with bigger cortical association areas tended to make more sounds. But, interestingly, a primate’s vocal repertoire was not linked to the overall size of its brain, just the relative size of these specific areas.
We also found that apes have particularly large cortical association areas, as well as a bigger hypoglossal nucleus than other primates. The hypoglossal nucleus is associated with the cranial nerve that controls the muscles of the tongue. This suggests that our closest primate relatives may have finer and more voluntary control over their tongues than other primate species.
The brain, particularly the relative size of the cortical association areas, seems to determine the extent of primates’ vocal repertoire. BruceBlaus/Wikimedia Commons
By understanding the nature of the relationship between vocal complexity and brain architecture, we hope to identify some of the key elements that underlie the evolution of complex vocal communication in our ancestors, ultimately leading to speech.
The origin of speech is a topic that has long been debated. The Société de Linguistique de Paris famously banned any further inquiry into the matter in its publication pages in 1866, as it was deemed far too unscientific. But much progress has been made in the last few decades thanks to a wide range of evidence, such as that from studies of communication in other species, fossils and, more recently, genetics.
Research has shown that some primate species, such as vervet monkeys, use “words” to label things (what we would call semantics in human language). Some species even combine calls into simple “sentences” (what we would think of as syntax). This can tell us a lot about the early evolution of language, and the elements of language that might have already been present in our common ancestors with these species some millions of years ago.
The fossil record can also provide insight. Speech itself clearly does not fossilize, so researchers have searched for proxy evidence in the skeletal remains of extinct human relatives. For example, some researchers have argued that the position and shape of the hyoid bone (the only bone in the vocal tract) can tell us something about the origins of speech.
Similarly, others have argued that the diameter of the thoracic canal (which connects the thorax to the nervous system), or the hypoglossal canal (through which the nerves travel to the tongue), can tell us something about breathing, or speech production. And the size and shape of the tiny bones in the middle ear may be able to tell us something about speech perception. But, in general, the fossil record is simply too limited to draw any strong conclusions.
This reconstruction of the cranium of the fossil anthropoid primate Aegyptopithecus zeuxis shows what this species’ semicircular canal (inner ear) would have looked like. Unfortunately, scientists haven’t discovered many fossils that can inform them about the evolution of speech. Penn State/Flickr
Finally, comparing the genetics of humans and other species has provided insight into the origins of speech. One much-discussed gene that seems to be relevant for speech is the FOXP2 gene. If this gene mutates, it leads to difficulties with learning and producing complex mouth movements, and to wide-ranging linguistic deficiencies.
It was long thought that the DNA sequence changes in the human FOXP2 gene were a unique trait related to our unique ability to use speech. But more-recent studies have shown that these mutations are also present in some extinct human relatives, and the changes in this gene (and, perhaps, language itself) may be much more ancient than previously thought.
Technological developments, such as further ancient DNA sequencing of extinct species, and increased knowledge of the neurobiology of language, are certain to provide further giant leaps. But the future of this contentious and complex field will likely depend on large-scale, multidisciplinary collaboration. Comparative studies like ours, comparing traits across a range of species, were the primary tools used by Darwin. No doubt such studies will continue to provide important insights into the evolution of this incredible aspect of our behavior.
Jacob Dunn is a senior lecturer in zoology at Anglia Ruskin University in the U.K.
(Submitted on 22 Jun 2018 (v1), last revised 7 Sep 2018 (this version, v2))
Abstract: What is the right object representation for manipulation? We would like robots to visually perceive scenes and learn an understanding of the objects in them that (i) is task-agnostic and can be used as a building block for a variety of manipulation tasks, (ii) is generally applicable to both rigid and non-rigid objects, (iii) takes advantage of the strong priors provided by 3D vision, and (iv) is entirely learned from self-supervision. This is hard to achieve with previous methods: much recent work in grasping does not extend to grasping specific objects or other tasks, whereas task-specific learning may require many trials to generalize well across object configurations or other tasks. In this paper we present Dense Object Nets, which build on recent developments in self-supervised dense descriptor learning, as a consistent object representation for visual understanding and manipulation. We demonstrate they can be trained quickly (approximately 20 minutes) for a wide variety of previously unseen and potentially non-rigid objects. We additionally present novel contributions to enable multi-object descriptor learning, and show that by modifying our training procedure, we can either acquire descriptors which generalize across classes of objects, or descriptors that are distinct for each object instance. Finally, we demonstrate the novel application of learned dense descriptors to robotic manipulation. We demonstrate grasping of specific points on an object across potentially deformed object configurations, and demonstrate using class-general descriptors to transfer specific grasps across objects in a class.
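To make the descriptor idea concrete, the following sketch (a hypothetical illustration, not the authors' code) treats the network's output as an H×W×D descriptor image and finds, for a chosen pixel in one view of an object, the best-matching pixel in another view. Grasping a specific point then reduces to looking up its descriptor once and re-finding it in new object configurations.

```python
import numpy as np

def best_match(desc_a, desc_b, pixel_a):
    """Find the pixel in view B whose descriptor is nearest (L2 distance)
    to the descriptor at `pixel_a` in view A.

    desc_a, desc_b: (H, W, D) dense descriptor maps, one D-vector per pixel.
    pixel_a: (row, col) query point in view A.
    """
    query = desc_a[pixel_a[0], pixel_a[1]]          # (D,)
    dists = np.linalg.norm(desc_b - query, axis=2)  # (H, W) distance map
    row, col = np.unravel_index(np.argmin(dists), dists.shape)
    return (row, col), float(dists[row, col])

# Toy usage with random arrays standing in for network output.
H, W, D = 480, 640, 3
desc_a = np.random.rand(H, W, D).astype(np.float32)
desc_b = np.random.rand(H, W, D).astype(np.float32)
match, dist = best_match(desc_a, desc_b, pixel_a=(240, 320))
print("matched pixel:", match, "descriptor distance:", dist)
```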
The company’s growth exploded by 2011, just a year after its service went live, spawning countless copycats — for families, for music, for pornography, for Lady Gaga fans — and clones in every major country. It seemed possible that Pinterest could be as successful as Facebook, Instagram, Twitter or YouTube — maybe even Google. The market for social media advertising was still young and up for grabs. A Forbes cover story about Pinterest declared, “Move over, Zuck.”
The business case was simple and powerful: It was a shopping mall disguised as a mood board that held its users’ aspirations, unearthing pure and unfiltered commercial desire. “You can draw a direct line from those interests to a commercial opportunity or retail category,” said Andrew Lipsman, an analyst at eMarketer.
But just as the company began selling ads in 2014, user growth stalled and it wasn’t clear why, according to multiple people familiar with the company. The company disagreed that growth had stalled, arguing that it had “slightly slowed.”
Executives on Pinterest’s “growth” team proposed spending $50 million a year to acquire users through marketing, a common tactic for web companies. Other executives argued that the company should court celebrities and pay influencers to share content on Pinterest, similar to YouTube’s premium content program.
Mr. Silbermann opposed both, according to people familiar with the decision. He preferred what he called “quality growth.”
“There’s a natural rate at which you can scale a company that’s healthy,” Mr. Silbermann said. So Pinterest stuck to its knitting.
Not everyone was sold on the message. Even by the standards of start-ups, where employee turnover is common, the number of executives leaving Pinterest has been notable in recent years. Since 2015, the company lost people who ran media partnerships, operations, finance, growth, engineering, brand, product, tech partnerships, marketing, corporate development, communications and customer strategy, along with its general counsel and president. Jamie Favazza, a Pinterest spokeswoman, said, “Turnover is natural at high-growth start-ups, but we’ve built a strong team of leaders for the long-term.”
For thousands of years, human beings have been contemplating the Universe and seeking to determine its true extent. And whereas ancient philosophers believed that the world consisted of a disk, a ziggurat or a cube surrounded by celestial oceans or some kind of ether, the development of modern astronomy opened their eyes to new frontiers. By the 20th century, scientists began to understand just how vast (and maybe even unending) the Universe really is.
And in the course of looking farther out into space, and deeper back in time, cosmologists have discovered some truly amazing things. For example, during the 1960s, astronomers became aware of microwave background radiation that was detectable in all directions. Known as the Cosmic Microwave Background (CMB), the existence of this radiation has helped to inform our understanding of how the Universe began.
Description:
The CMB is essentially electromagnetic radiation left over from the earliest cosmological epoch, and it permeates the entire Universe. It is believed to have formed about 380,000 years after the Big Bang and contains subtle indications of how the first stars and galaxies formed. While this radiation is invisible using optical telescopes, radio telescopes are able to detect the faint signal (or glow), which is strongest in the microwave region of the radio spectrum.
The CMB is visible at a distance of 13.8 billion light years in all directions from Earth, leading scientists to determine that this is the true age of the Universe. However, it is not an indication of the true extent of the Universe. Given that space has been in a state of expansion ever since the early Universe (and is expanding faster than the speed of light), the CMB is merely the farthest back in time we are capable of seeing.
Relationship to the Big Bang:
The CMB is central to the Big Bang Theory and modern cosmological models (such as the Lambda-CDM model). As the theory goes, when the Universe was born 13.8 billion years ago, all matter was condensed into a single point of infinite density and extreme heat. Due to the extreme heat and density of matter, the state of the Universe was highly unstable. Suddenly, this point began expanding, and the Universe as we know it began.

At this time, space was filled with a uniform glow of white-hot plasma particles – protons, neutrons, electrons and photons (light). For the first 380,000 years after the Big Bang, photons were constantly interacting with these free electrons and could not travel long distances. The period that followed, from roughly 380,000 to 150 million years after the Big Bang, when the first stars had yet to ignite and little light was produced, is colloquially referred to as the “Dark Ages”.

As the Universe continued to expand, it cooled to the point where electrons were able to combine with protons to form hydrogen atoms (aka. the Recombination Period). In the absence of free electrons, the photons were able to move unhindered through the Universe, and it began to appear as it does today (i.e. transparent and permeated by light). Over the intervening billions of years, the Universe continued to expand and cool.
Due to the expansion of space, the wavelengths of the photons grew (became ‘redshifted’) to roughly 1 millimetre and their effective temperature decreased to just above absolute zero – 2.7 Kelvin (-270 °C; -454 °F). These photons fill the Universe today and appear as a background glow that can be detected in the far-infrared and radio wavelengths.
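The cooling follows directly from that stretching: the photon temperature scales with redshift as T(z) = T0 × (1 + z). Here is a quick back-of-the-envelope check in Python (using z ≈ 1100 for the recombination era, the standard textbook figure rather than a value from this article):

```python
# Photon temperature scales with redshift: T(z) = T0 * (1 + z).
T0 = 2.725               # CMB temperature today, in Kelvin
z_recombination = 1100   # approximate redshift of the recombination era

T_then = T0 * (1 + z_recombination)
print(f"CMB temperature at recombination: ~{T_then:.0f} K")   # ~3000 K

# Wavelengths stretch by the same (1 + z) factor: what is ~1 mm today
# was emitted at roughly 1 micrometre, in the near-infrared.
wavelength_today_mm = 1.0
wavelength_then_um = wavelength_today_mm / (1 + z_recombination) * 1000
print(f"Emitted wavelength: ~{wavelength_then_um:.1f} micrometres")
```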
History of Study:
The existence of the CMB was first theorized by Ukrainian-American physicist George Gamow, along with his students, Ralph Alpher and Robert Herman, in 1948. This theory was based on their studies of the consequences of nucleosynthesis of light elements (hydrogen, helium and lithium) during the very early Universe. Essentially, they realized that in order to synthesize the nuclei of these elements, the early Universe needed to be extremely hot.
The Big Bang timeline of the Universe. Cosmic neutrinos affect the CMB at the time it was emitted, and physics takes care of the rest of their evolution until today. Image credit: NASA / JPL-Caltech / A. Kashlinsky (GSFC).
They further theorized that the leftover radiation from this extremely hot period would permeate the Universe and would be detectable. Due to the expansion of the Universe, they estimated that this background radiation would have a low temperature of 5 K (-268 °C; -450 °F) – just five degrees above absolute zero – which corresponds to microwave wavelengths. It wasn’t until 1964 that the first evidence for the CMB was detected.
This was the result of American astronomers Arno Penzias and Robert Wilson using a Dicke radiometer, which they had intended to use for radio astronomy and satellite communication experiments. However, when conducting their first measurement, they noticed an excess 4.2 K antenna temperature that they could not account for and that could only be explained by the presence of background radiation. For their discovery, Penzias and Wilson were awarded the Nobel Prize in Physics in 1978.
Initially, the detection of the CMB was a source of contention between proponents of different cosmological theories. Whereas proponents of the Big Bang Theory claimed that this was the “relic radiation” left over from the Big Bang, proponents of the Steady State Theory argued that it was the result of scattered starlight from distant galaxies. However, by the 1970s, a scientific consensus had emerged that favored the Big Bang interpretation.
All-sky data obtained by the ESA’s Planck mission, showing the different wavelengths. Credit: ESA
During the 1980s, ground-based and space-based instruments placed increasingly stringent limits on the temperature differences of the CMB. These included the Soviet RELIKT-1 mission aboard the Prognoz 9 satellite (which was launched in July of 1983) and the NASA Cosmic Background Explorer (COBE) mission (whose findings were published in 1992). For their work, the COBE team received the Nobel Prize in Physics in 2006.
COBE also detected the CMB’s first acoustic peak – acoustic oscillations in the plasma that correspond to large-scale density variations in the early universe created by gravitational instabilities. Many experiments followed over the next decade, consisting of ground- and balloon-based experiments whose purpose was to provide more accurate measurements of the first acoustic peak.
The second acoustic peak was tentatively detected by several experiments, but was not definitively detected until the Wilkinson Microwave Anisotropy Probe (WMAP) was deployed in 2001. Between 2001 and 2010, when the mission was concluded, WMAP also detected a third peak. Since 2010, multiple missions have been monitoring the CMB to provide improved measurements of the polarization and small scale variations in density.
According to various cosmological theories, the Universe may at some point cease expanding and begin reversing, culminating in a collapse followed by another Big Bang – aka. the Big Crunch theory. In another scenario, known as the Big Rip, the expansion of the Universe will eventually lead to all matter and spacetime itself being torn apart.
If neither of these scenarios is correct, and the Universe continues to expand at an accelerating rate, the CMB will continue redshifting to the point where it is no longer detectable. At that point, it will be overtaken by the first starlight created in the Universe, and then by background radiation fields produced by processes that are assumed to take place in the future of the Universe.
On April 5, 1815, just before sunset, the Mt. Tambora volcano on the Indonesian island of Sumbawa erupted. So loud was the blast that the captain of the East India Company cruiser Benares, anchored more than 1,300 km away, thought he heard cannon fire, and put to sea in search of the pirates he assumed responsible. Luckily for him, the Benares hadn’t yet reached Sumbawa when Tambora erupted again five days later, this time in the most powerful volcanic explosion in 2,000 years.
Within a month, the death toll in Indonesia had reached 90,000—the worst ever recorded for a volcanic event—from the eruption itself and the starvation that followed as falling ash destroyed crops. But that grim count, as scholars like historian William Klingaman and his son Nicholas, a meteorologist, authors of 1816: The Year Without Summer, have just begun to investigate, was only the beginning of what Tambora would wreak.
The massive load of sulphate gases and debris the mountain shot 43 km into the stratosphere blocked sunlight and distorted weather patterns for three years, dropping temperatures between two and three degrees Celsius, shortening growing seasons and devastating harvests worldwide, especially in 1816. In the northern hemisphere, farmers from frozen—and abolitionist—New England, where some survived the winter of 1816 to 1817 on hedgehogs and boiled nettles, poured into the Midwest. That migration, the Klingamans argue, set in motion demographic ripples that would not play out until America’s Civil War, almost a half-century later.
Throughout the Old World, from China to Ireland, starving peasants flooded towns, begging and even selling their children for food. Famine-friendly diseases came in their wake. The worst typhus epidemic on record raged, while the lethal modern strain of what would become the 19th century’s greatest killer—cholera—and the first stirrings of state-organized public health measures both came to life.
And so too did Frankenstein and Dracula.
In Switzerland, the European epicentre of the disaster, the English poet Lord Byron and his circle spent much of June huddled around the fire in a chateau on Lake Geneva. Bored and oppressed by the rainy gloom, the poet urged his companions to compose ghost stories in the Gothic mode. Mary Shelley’s Frankenstein, the foundational tale of modern angst over scientists monkeying about with forces beyond their ken, is the most famous to have emerged from the summer of darkness. But “The Vampyre,” by Byron’s physician John Polidori, has been even richer in progeny.
Polidori’s short story, remembered now (if at all) for the way in which his undead protagonist so closely resembled the “mad, bad and dangerous to know” Byron, was a hit at the time, spawning a vampire craze that worked itself into unlikely literary nooks—in Wuthering Heights, Heathcliff’s housekeeper suspects her master of being a vampire. Bram Stoker’s Dracula (1897), and Bela Lugosi later, revived the genre by tying the vampire story to themes of sex, blood, death and aristocratic glamour. More recently, Anne Rice’s Vampire Chronicles, not to mention Buffy the Vampire Slayer, transformed the undead (or some of them, some of the time) from repulsive incarnations of evil into tragic, beautiful and conscience-stricken figures, setting the stage for Stephenie Meyer’s massively popular Twilight novels and their film versions.
The tale of the Byronic stories has itself become heavily mythologized: most versions stop with the climatic catastrophe functioning as a mere occasion—the writers could have as easily been housebound by a collapsed bridge. Not so, says Gillen D’Arcy Wood, an English professor and director of the Sustainability Studies Initiative at the University of Illinois at Urbana-Champaign, at least not in the case of Frankenstein, “the signature literary production of the year without a summer.”
Everything Shelley saw at the château and on her way there made its way into her novel about the electrical creation of life. One storm follows on another, she wrote her half-sister in England, including one in which Lake Geneva “was lit up, the pines on Jura made visible, and all the scene illuminated for an instant, when a pitchy blackness succeeded, and the thunder came in frightful bursts over our heads amid the blackness.” More subtly but unmistakably, she incorporated Switzerland’s starving peasantry in her tale. She imagines Frankenstein—who, it’s often forgotten, is the human creator in the novel—waking from a nightmare to find his hideous creation at his bedside, “looking on him with yellow, watery, but speculative eyes.” That echoes a refrain among English tourists of the era. One, on the road from Rome to Naples in 1817, after a second failed harvest tipped the rural poor into outright famine, recorded in his diary “the livid aspect of the miserable inhabitants.” (When the traveller asked how they lived, these “animated spectres” replied simply: “We die.”)
Shelley’s famous creature, says Wood, whose own book on Tambora will be published next year, “bears the mark of the famished and diseased” in more than his eye colour. Like the hungry refugees spreading typhus, he is a wanderer seen as a menace; the disgust everyone displays toward him mirrors the lack of sympathy most well-off Europeans showed the starving. As the creature himself put it, with considerably more irony than he is usually credited with, he suffered “from the inclemency of the season,” but “still more from the barbarity of man.”
Shelley wasn’t the only writer taking note of the weather, Wood points out, or of its human cost. Chinese poets in Yunnan recorded devastated rice crops and the misery of the peasantry. And Byron’s poem “Darkness,” inspired by a July day in 1816 when the candles had to be lit at noon, carries themes of social breakdown after “all hearts / Were chill’d into a selfish prayer for light.”
Visual artists also responded. The innumerable shades of grey that dominated the sky for years were in fact preceded—after Tambora’s eruption but before the bad weather—by spectacular sunsets in the summer and fall of 1815. Since the ash cloud meant that less blue light and more red than normal reached the ground, sunsets were unusually rich in shades of red, purple and orange.
As the Klingamans describe in 1816, volcanologists have tried to date eruptions through the colours that artists—presumed to be trying to depict them as accurately as possible—used to paint sunsets from the 16th to the 19th centuries. Looking at 550 samples by 181 painters, one group of scientists concluded that works from the years immediately following Tambora display the most red paint. And the two paintings with the highest amount, adds Wood, are a watercolour by William Turner—“the painter of light”—entitled Red Sky and Crescent Moon, and Caspar David Friedrich’s Ships in the Harbour after Sunset. The flip side of the colourful sunsets of 1815 was the cloudy skies before and after; the teens were the cloudiest decade of the century, thanks in part to an eruption somewhere in the tropics in 1809. By 1818 landscape painter John Constable was a fixture on Hampstead Heath, painting study after study of clouds.
But the most influential literary voice of the Tambora era may have been only three years old when the volcano erupted. Charles Dickens, whose recreation of his childhood is at the core of his fiction, had a “body memory” like no other writer, argues Wood. “All his stories are shot through with snow, fog, rain and freezing cold, especially as suffered by children.” That’s true enough—consider the famous opening to Bleak House: “Implacable November weather. As much mud in the streets as if the waters had but newly retired from the face of the earth . . . Fog cruelly pinching the toes and fingers of a shivering little ’prentice boy.” Although critics maintain Dickensian atmospherics were a result of expanding industrialization, Wood’s point is that conditions of that sort didn’t actually dominate in London at the time Dickens compulsively wrote about them—particularly not the bone-chilling cold. “Our whole image of Victorian London,” he concludes, “may be based upon a Tambora childhood.” To Frankenstein and Dracula, then, add a long line of Dickensian waifs to the volcano’s fictive offspring.
Restoring historical context to Frankenstein and Bleak House also puts a spotlight on the real children of 1816 to 1818. As those studying Tambora have come to realize, almost everyone alive two centuries ago was hungry. For our own era of fast-building environmental crisis, the experiences, fictional and actual, of those three years offer the best-recorded account of how sudden and how devastating climate change can be.
Mirror life (also called mirror-image life, chiral life, or enantiomeric life) is a hypothetical form of life with mirror-reflected molecular building blocks.[1][2][3][4][5] The possibility of mirror life was first discussed by Louis Pasteur.[6] Although this alternative life form has not been discovered in nature, efforts to build a mirror-image version of biology's molecular machinery are already underway.[7]
Many of the essential molecules for life on Earth can exist in two mirror-image forms, referred to as "left-handed" and "right-handed", but living organisms do not use both. Proteins are exclusively composed of left-handed amino acids; RNA and DNA contain only right-handed sugars. This phenomenon is known as homochirality.[8] It is not known whether homochirality emerged before or after life, whether the building blocks of life must have this particular chirality, or indeed whether life needs to be homochiral.[9] Protein chains built from amino acids of mixed chirality tend not to fold or function as catalysts, but mirror-image proteins have been constructed that work the same but on substrates of opposite handedness.[8]
The concept
It is thought that such mirror organisms would be highly incompatible with existing microbes (viruses, bacteria, protozoa, etc.). Hypothetically, it is possible to recreate an entire ecosystem from the bottom up in chiral form. In this way, the creation of an Earth ecosystem without microbial diseases might be possible. In some distant future, mirror life could be employed to create robust, effective and disease-free ecosystems for use on other planets.[10]
Advances in synthetic biology, such as the synthesis of viruses (since 2002), partially synthetic bacteria (2010) and synthetic ribosomes (2013), may lead to the possibility of fully synthesizing a living cell from small molecules, in which case we could use mirror-image versions (enantiomers) of life's building-block molecules in place of the standard ones. Some proteins have been synthesized in mirror-image versions, including a polymerase in 2016.[11]
Reconstructing regular lifeforms in mirror-image form, using the mirror-image (chiral) reflection of their cellular components, could be achieved by substituting left-handed amino acids with right-handed ones in order to create mirror reflections of all regular proteins. Analogously, we could get reflected sugars, DNA, etc., on which reflected enzymes would work perfectly. Finally, we would get a normally functioning mirror reflection of a natural organism, a chiral counterpart organism, with which natural viruses and bacteria couldn't interact. The electromagnetic force, which governs chemistry, is unchanged under such a molecular reflection transformation (P-symmetry). There is a small alteration of weak interactions under reflection, which can produce very small corrections, but these corrections are many orders of magnitude smaller than thermal noise – almost certainly too tiny to alter any biochemistry.[citation needed] However, there are also theories that weak interactions can have a greater effect on longer nucleic acid or protein chains, resulting in much less efficient conversion by mirror ribozymes or enzymes than by normal ribozymes or enzymes.[12]
Chiral animals would obviously need to feed on reflected food, produced by reflected plants. The great advantage, though, is that such chiral organisms should enjoy a disease-free life, completely immune to all viruses and microbes (which virologists are now beginning to understand underlie a huge number of diseases).
Viruses would be completely incompatible with the reflected cellular structures; and bacteria, protozoa, and fungi could not function because they would not be able to find normal sugars inside reflected organisms. The reverse sugars circulating in the chiral organism's body would be indigestible as far as normal bacteria are concerned, so any bacterium entering a chiral organism would simply starve to death. The chiral environment is hostile for normal viruses, protozoa, bacteria, etc.
Mirror life presents potential dangers. For example, a chiral-mirror version of cyanobacteria, which only needs achiral nutrients and light for photosynthesis, could take over Earth's ecosystem due to lack of natural enemies, disturbing the bottom of the food chain by producing mirror versions of the required sugars. Some bacteria can digest L-glucose; exceptions like this would give some rare lifeforms an unanticipated advantage.
Direct applications
A direct application of mirror-chiral organisms would be the mass production of enantiomers (mirror-image versions) of molecules produced by normal life:
Enantiopure drugs - some pharmaceuticals are known to have different activity depending on their enantiomeric form.
Aptamers (L-ribonucleic acid aptamers): "That makes mirror-image biochemistry a potentially lucrative business. One company that hopes so is Noxxon Pharma in Berlin. It uses laborious chemical synthesis to make mirror-image forms of short strands of DNA or RNA called aptamers, which bind to therapeutic targets such as proteins in the body to block their activity. The firm has several mirror-aptamer candidates in human trials for diseases including cancer; the idea is that their efficacy might be improved because they aren't degraded by the body's enzymes. A process to replicate mirror-image DNA could offer a much easier route to making the aptamers, says Sven Klussmann, Noxxon Pharma's chief scientific officer."[13]
L-Glucose, the enantiomer of standard glucose, which tests showed tastes like standard sugar while not being metabolized the same way. However, it was never marketed due to excessive manufacturing costs.[14]
In fiction
The creation of a single chiral human is the basis of Arthur C. Clarke's 1950 story "Technical Error", collected in The Collected Stories. In this story, a physical accident transforms a person into his mirror image, speculatively explained by travel through a fourth physical dimension.
^ Acevedo-Rocha, Carlos G. (2015). "The synthetic nature of biology". In Hagen, Kristin; Engelhard, Margret; Toepfer, Georg (eds.). Ambivalences of Creating Life: Societal and Philosophical Dimensions of Synthetic Biology. Springer. pp. 9–54. ISBN 978-3-319-21088-9.
The greatest threat posed by Australia's planned new anti-encryption laws comes from the voluntary requests made to communication providers, not the compulsory notices to give technical assistance, according to Dr Chris Culnane, because they have greater scope and less oversight.
"At a very high level, the legislation introduces two compulsory notices, and one voluntary request. Whilst the compulsory notices have gained the most attention, it is my view that the voluntary assistance requests are where the greatest danger exists," Culnane wrote in a detailed blog post last week.
"The assistance requests are not constrained by the same limitations as the notices in what they [government agencies] can ask for, neither are they part of the annual reporting."
Culnane is a lecturer at the School of Computing and Information Systems at the University of Melbourne.
"My analysis is based on viewing the legislation as a technical document, looking for gaps and inconsistencies, since that is so often where the greatest threat lies," he wrote.
Under the new law, Australian government agencies would be able to issue three kinds of notices:
Technical Assistance Notices, which are compulsory notices for a communication provider to use an interception capability they already have;
Technical Capability Notices, which are compulsory notices for a communication provider to build a new interception capability, so that it can meet subsequent Technical Assistance Notices; and
Technical Assistance Requests, which Culnane said are described as voluntary requests. "There is no criminal or civil penalty for not complying with them, although they are covered by the same secrecy provisions," he wrote.
"It is my view that these [Technical Assistance Requests] are the real objective of the legislation, not the compulsory notices. The requests are defined differently to both of the notices, and have few, if any, limitations on what they can request," Culnane wrote.
"Furthermore, they are excluded from essential oversight, by virtue of not being included in the annual report issued by the minister (see 317ZS)."
The laws of mathematics do apply in Australia
The government says the legislation won't create backdoors in encryption. But it is intended to create a framework for providing access to endpoint devices, amongst many other things.
"The issue of System Weaknesses is made a big deal of in the legislation and explanatory note. It seems like it is an attempt to comply with the claim of not mandating backdoors. However, the term isn't defined anywhere," Culnane wrote.
"Furthermore, what is described remains a backdoor, albeit a keyed backdoor. There is no requirement for backdoors to be universally exploitable to be considered a backdoor, it merely needs to provide an alternative entry point into the target system or protocol."
Culnane noted that the description of a Systemic Weakness "seems somewhat contradictory", and offered some technical details of how keyed backdoors might work, before providing his conclusion.
"The only compromise appears to be that they have realised that in fact the laws of mathematics do apply in Australia and that the backdoor needs to be relocated somewhere else. That isn't really an improvement, it is just a technicality," he wrote.
Culnane believes that the legislation does allow for the creation of backdoors, however. The constraints on the two kinds of Notices, which are defined in division 7 of the Bill, do not apply to Technical Assistance Requests.
"There is no restriction on a Technical Assistance Request asking for the implementation of a Systemic Weakness. Likewise, unlike Technical Capability Notices, there is no restriction on requesting the development of new capabilities to remove electronic protection (317E(1)(a))," he wrote.
Culnane's blog posts also covered issues such as the secrecy provisions, ways in which the legislation could be used more broadly than indicated in the explanatory document, and his concerns about the broad definition of a "communications provider".
"For example, it covers a person that '... provides an electronic service that has one or more end users in Australia', which appears to cover every website that is accessible from Australia," he wrote.
"Furthermore, the legislation also covers an individual if '... the person develops, supplies or updates software used, for use, or likely to be used, in connection with: (a) a listed carriage service; or (b) an electronic service that has one or more end users in Australia', which appears to cover every piece of software, or mobile app, that connects to internet or produces content that is going to be used on the internet.
"That is an incredibly broad category, the justification for which is not clear."
In its current form, as an exposure draft, the Bill still has to face public consultation before it's tabled in parliament. The government appears to be in a hurry, however, in part because the proposed laws would be part of its contribution to the Five Eyes nations' tougher new stance against encryption.
The deadline for public comments on the exposure draft is this Monday, 10 September 2018.
I was disturbed recently by reading about an incident in which a paper was accepted by the Mathematical Intelligencer and then rejected, after which it was accepted and published online by the New York Journal of Mathematics, where it lasted for three days before disappearing and being replaced by another paper of the same length. The reason for this bizarre sequence of events? The paper concerned the “variability hypothesis”, the idea, apparently backed up by a lot of evidence, that there is a strong tendency for traits that can be measured on a numerical scale to show more variability amongst males than amongst females. I do not know anything about the quality of this evidence, other than that there are many papers that claim to observe greater variation amongst males of one trait or another, so that if you want to make a claim along the lines of “you typically see more males both at the top and the bottom of the scale” then you can back it up with a long list of citations.
You can see, or probably already know, where this is going: some people like to claim that the reason that women are underrepresented at the top of many fields is simply that the top (and bottom) people, for biological reasons, tend to be male. There is a whole narrative, much loved by the alt right, that says that this is an uncomfortable truth that liberals find so difficult to accept that they will do anything to suppress it. There is also a counter-narrative that says that people on the far right keep on trying to push discredited claims about the genetic basis for intelligence, differences amongst various groups, and so on, in order to claim that disadvantaged groups are innately disadvantaged rather than disadvantaged by external circumstances.
I myself, as will be obvious, incline towards the liberal side, but I also care about scientific integrity, so I felt I couldn’t just assume that the paper in question had been rightly suppressed. I read an article by the author that described the whole story (in Quillette, which rather specializes in this kind of story), and it sounded rather shocking (though one has to bear in mind that the article is written by a disgruntled author and there is almost certainly another side to the story). In particular, he is at pains to stress that the paper is simply a mathematical theory to explain why one sex might evolve to become more variable than another, and not a claim that the theory applies to any given species or trait. In his words, “Darwin had also raised the question of why males in many species might have evolved to be more variable than females, and when I learned that the answer to his question remained elusive, I set out to look for a scientific explanation. My aim was not to prove or disprove that the hypothesis applies to human intelligence or to any other specific traits or species, but simply to discover a logical reason that could help explain how gender differences in variability might naturally arise in the same species.”
So as I understood the situation, the paper made no claims whatsoever about the real world, but simply defined a mathematical model and proved that in this model there would be a tendency for greater variability to evolve in one sex. Suppressing such a paper appeared to make no sense at all, since one could simply question whether the model was realistic. Furthermore, suppressing papers on this kind of topic simply plays into the hands of those who claim that liberals are against free speech, that science is not after all objective, and so on, claims that are widely accepted and do a lot of damage.
I was therefore prompted to look at the paper itself, which is on the arXiv, and there I was met by a surprise. I was worried that I would find it convincing, but in fact I found it so unconvincing that I think it was a bad mistake by Mathematical Intelligencer and the New York Journal of Mathematics to accept it, but for reasons of mathematical quality rather than for any controversy that might arise from it. To put that point more directly, if somebody came up with a plausible model (I don’t insist that it should be clearly correct) and showed that subject to certain assumptions about males and females one would expect greater variability to evolve amongst males, then that might well be interesting enough to publish, and certainly shouldn’t be suppressed just because it might be uncomfortable, though for all sorts of reasons that I’ll discuss briefly later, I don’t think it would be as uncomfortable as all that. But this paper appears to me to fall well short of that standard.
To justify this view, let me try to describe what the paper does. Its argument can be summarized as follows.
1. Because in many species females have to spend a lot more time nurturing their offspring than males, they have more reason to be very careful when choosing a mate, since a bad choice will have more significant consequences.
2. If one sex is more selective than the other, then the less selective sex will tend to become more variable.
To make that work, one must of course define some kind of probabilistic model in which the words “selective” and “variable” have precise mathematical definitions. What might one expect these to be? If I hadn’t looked at the paper, I think I’d have gone for something like this. An individual of one sex will try to choose as desirable a mate as possible amongst potential mates that would be ready to accept as a mate. To be more selective would simply mean to make more of an effort to optimize the mate, which one would model in some suitable probabilistic way. One feature of this model would presumably be that a less attractive individual would typically be able to attract less desirable mates.
I won’t discuss how variability is defined, except to say that the definition is, as far as I can see, reasonable. (For normal distributions it agrees with standard deviation.)
The definition of selectivity in the paper is extremely crude. The model is that individuals of one sex will mate with individuals of the other sex if and only if they are above a certain percentile in the desirability scale, a percentile that is the same for everybody. For instance, they might only be prepared to choose a mate who is in the top quarter, or the top two thirds. The higher the percentile they insist on, the more selective that sex is.
When applied to humans, this model is ludicrously implausible. While it is true that some males have trouble finding a mate, the idea that some huge percentage of males are simply not desirable enough (as we shall see, the paper requires this percentage to be over 50) to have a chance of reproducing bears no relation to the world as we know it.
I suppose it is just about possible that an assumption like this could be true of some species, or even of our cave-dwelling ancestors — perhaps men were prepared to shag pretty well anybody, but only some small percentage of particularly hunky men got their way with women — but that isn’t the end of what I find dubious about the paper. And even if we were to accept that something like that had been the case, it would be a huge further leap to assume that what made somebody desirable hundreds of thousands of years ago was significantly related to what makes somebody good at, say, mathematical research today.
Here is one of the main theorems of the paper, with a sketch of the proof. Suppose you have two subpopulations P and Q within one of the two sexes, with P being of more varied attractiveness than Q. And suppose that the selectivity cutoff for the other sex is that you have to be in the top 40 percent attractiveness-wise. Then because P is more concentrated on the extremes than Q, a higher proportion of subpopulation P will be in that percentile. (This can easily be made rigorous using the notion of variability in the paper.) By contrast, if the selectivity cutoff is that you have to be in the top 60 percent, then a higher proportion of subpopulation Q will be chosen.
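As a quick numerical sanity check of that step (my own illustration, not taken from the paper), one can compare two normal subpopulations with equal means but different spreads and see how each fares under a high or a low cutoff:

```python
from scipy.stats import norm

# Two subpopulations with the same mean attractiveness but different spread.
P = norm(loc=0, scale=2.0)  # more variable subpopulation
Q = norm(loc=0, scale=1.0)  # less variable subpopulation

# A "selective" cutoff sits above the shared median; a lax one sits below it.
for label, cutoff in [("selective cutoff (above median)", 0.5),
                      ("lax cutoff (below median)", -0.5)]:
    print(f"{label}: fraction of P accepted = {P.sf(cutoff):.3f}, "
          f"fraction of Q accepted = {Q.sf(cutoff):.3f}")

# Above the median, more of the variable P clears the bar (0.401 vs 0.309);
# below it, the less variable Q does better (0.599 vs 0.691).
```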
I think we are supposed to conclude that subpopulation P is therefore favoured over subpopulation Q when the other sex is selective, and not otherwise, and therefore that variability amongst males tends to be selected for, because females tend to be more choosy about their mates.
But there is something very odd about this. Those poor individuals at the bottom of population P aren’t going to reproduce, so won’t they die out and potentially cause population P to become less variable? Here’s what the paper has to say.
Thus, in this discrete-time setting, if one sex remains selective from each generation to the next, for example, then in each successive generation more variable subpopulations of the opposite sex will prevail over less variable subpopulations with comparable average desirability. Although the desirability distributions themselves may evolve, if greater variability prevails at each step, that suggests that over time the opposite sex will tend toward greater variability.
Well I’m afraid that to me it doesn’t suggest anything of the kind. If females have a higher cutoff than males, wouldn’t that suggest that males would have a much higher selection pressure to become more desirable than females? And wouldn’t the loss of all those undesirable males mean that there wasn’t much one could say about variability? Imagine for example if the individuals in P were all either extremely fit or extremely unfit. Surely the variability would go right down if only the fit individuals got to reproduce. And if you’re worrying that the model would in fact show that males would tend to become far superior to females, as opposed to the usual claim that males are more spread out both at the top and at the bottom, let’s remember that males inherit traits from both their fathers and their mothers, as do females, an observation that, surprisingly, plays no role at all in the paper.
What is the purpose of the strange idea of splitting into two subpopulations and then ignoring the fact that the distributions may evolve (and why just “may” — surely “will” would be more appropriate)? Perhaps the idea is that a typical gene (or combination of genes) gives rise not to qualities such as strength or intelligence, but to more obscure features that express themselves unpredictably — they don’t necessarily make you stronger, for instance, but they give you a bigger range of strength possibilities. But is there the slightest evidence for such a hypothesis? If not, then why not just consider the population as a whole? My guess is that you just don’t get the desired conclusion if you do that.
I admit that I have not spent as long thinking about the paper as I would need to in order to be 100% confident of my criticisms. I am also far from expert in evolutionary biology and may therefore have committed some rookie errors in what I have written above. So I’m prepared to change my mind if somebody (perhaps the author?) can explain why the criticisms are invalid. But as it looks to me at the time of writing, the paper isn’t a convincing model, and even if one accepts the model, the conclusion drawn from the main theorem is not properly established. Apparently the paper had a very positive referee’s report. The only explanation I can think of for that is that it was written by somebody who worked in evolutionary biology, didn’t really understand mathematics, and was simply pleased to have what looked like a rigorous mathematical backing for their theories. But that is pure speculation on my part and could be wrong.
I said earlier that I don’t think one should be so afraid of the genetic variability hypothesis that one feels obliged to dismiss all the literature that claims to have observed greater variability amongst males. For all I know it is seriously flawed, but I don’t want to have to rely on that in order to cling desperately to my liberal values.
So let’s just suppose that it really is the case that amongst a large number of important traits, males and females have similar averages but males appear more at the extremes of the distribution. Would that help to explain the fact that, for example, the proportion of women decreases as one moves up the university hierarchy in mathematics, as Larry Summers once caused huge controversy by suggesting? (It’s worth looking him up on Wikipedia to read his exact words, which are more tentative than I had realized.)
The theory might appear to fit the facts quite well: if men and women are both normally distributed with the same mean but men have a greater variance than women, then a randomly selected individual from the top α percent of the population will be more and more likely to be male the smaller α gets. That’s just simple mathematics.
But it is nothing like enough reason to declare the theory correct. For one thing, it is just as easy to come up with an environmental theory that would make a similar prediction. Let us suppose that the way society is organized makes it harder for women to become successful mathematicians than for men. There are all sorts of reasons to believe that this is the case: relative lack of role models, an expectation that mathematics is a masculine pursuit, more disruption from family life (on average), distressing behaviour by certain male colleagues, and so on. Let’s suppose that the result of all these factors is that the distribution of whatever it takes for women to make a success of mathematics has a slightly lower mean than that for men, but roughly the same variance, with both distributions normal. Then again one finds by very basic mathematics that if one picks a random individual from the top α percent, that individual will be more and more likely to be male as α gets smaller. But in this case, instead of throwing up our hands and saying that we can’t fight against biology, we will say that we should do everything we can to compensate for and eventually get rid of the disadvantages experienced by women.
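Indeed, the two stories are numerically indistinguishable in this respect. The sketch below (my own, with made-up parameters) computes the male share of the top α of a 50/50 population under both models: equal means with greater male variance, and a slightly lower mean for one group with equal variances. Both show the male share rising as α shrinks.

```python
import numpy as np
from scipy.stats import norm

def male_share_of_top(alpha, male, female, n_grid=100001):
    """Fraction of the top-alpha slice of a 50/50 population that is male."""
    grid = np.linspace(-10, 10, n_grid)
    pooled_sf = 0.5 * male.sf(grid) + 0.5 * female.sf(grid)  # decreasing
    t = grid[np.searchsorted(-pooled_sf, -alpha)]  # threshold of the top slice
    return male.sf(t) / (male.sf(t) + female.sf(t))

# Model A: equal means, males more variable (the variability hypothesis).
# Model B: equal variances, a slightly lower effective mean for women
# (an environmental-disadvantage story). Parameters are illustrative only.
model_A = (norm(0, 1.1), norm(0, 1.0))
model_B = (norm(0, 1.0), norm(-0.2, 1.0))

for alpha in [0.1, 0.01, 0.001]:
    a = male_share_of_top(alpha, *model_A)
    b = male_share_of_top(alpha, *model_B)
    print(f"top {alpha:>5}: male share under A = {a:.2f}, under B = {b:.2f}")
```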
A second reason to be sceptical of the theory is that it depends on the idea that how good one is at mathematics is a question of raw brainpower. But that is a damaging myth that puts many people off doing mathematics who could have enjoyed it and thrived at it. I have often come across students who astound me with their ability to solve problems far more quickly than I can (not all of them male). Some of them go on to be extremely successful mathematicians, but not all. And some who seem quite ordinary go on to do extraordinary things later on. It is clear that while an unusual level of raw brainpower, whatever that might be, often helps, it is far from necessary and far from sufficient for becoming a successful mathematician: it is part of a mix that includes dedication, hard work, enthusiasm, and often a big slice of luck. And as one gains in experience, one gains in brainpower — not raw any more, but who cares whether it is hardware or software? So even if it turned out that the genetic variability hypothesis was correct and could be applied to something called raw mathematical brainpower, a conclusion that would be very hard to establish convincingly (it’s certainly not enough to point out that males find it easier to visualize rotating 3D objects in their heads), that still wouldn’t imply that it is pointless to try to correct the underrepresentation of women amongst the higher ranks of mathematicians. When I was a child, almost all doctors and lawyers were men, and during my lifetime I have seen that change completely. The gender imbalance amongst mathematicians has changed more slowly, but there is no reason in principle that the pace couldn’t pick up substantially. I hope to live to see that happen.
Abstract: As software becomes larger, programming languages become higher-level, and processors continue to fail to be clocked faster, we'll increasingly require compilers to reduce code bloat, eliminate abstraction penalties, and exploit interesting instruction sets. At the same time, compiler execution time must not increase too much, and compilers should never produce the wrong output. This paper examines the problem of making optimizing compilers faster, less buggy, and more capable of generating high-quality output.
Background: In an earlier blog post, we described a system called Anna, which used a shared-nothing, thread-per-core architecture to achieve lightning-fast speeds by avoiding all coordination mechanisms. Anna also used lattice composition to enable a rich variety of coordination-free consistency levels. The first version of Anna blew existing in-memory KVSes out of the water: Anna is up to 700x faster than Masstree, an earlier state-of-the-art research KVS, and up to 800x faster than Intel’s “lock-free” TBB hash table. You can find the previous blog post here and the full paper here. We refer to that version of Anna as “Anna v0.” In this post, we describe how we extended the fastest KVS in the cloud to be extremely cost-efficient and highly adaptive.
Public cloud users today are flush with storage options. Amazon Web Services offers two object storage services (S3 and Glacier) and two file system services (EBS and EFS), in addition to seven different database services, ranging from relational databases to NoSQL key-value stores. It’s a dizzying variety, and users are naturally left asking which service is the right choice for them. In many cases, the short (and not very encouraging) answer is “all of them at once.”
Each one of these storage services provides a very narrow cost-performance tradeoff. For example, caching services like AWS ElastiCache are fast and expensive, and cold storage services like AWS Glacier are extremely slow and cheap. As a result, users face a catch-22: They must either compromise on cost by provisioning extremely large memory-speed clusters or compromise on performance by relegating all data to systems like DynamoDB or S3.
To make matters more complicated, most real applications have skewed data access patterns. Frequently accessed data is “hot”, and other data is “cold”, but these individual services are only designed for either hot or cold data. Users who don’t want to compromise on performance or cost must cobble together memory hierarchies by hand and build applications that track data and requests across many services.
Worse yet, performant cloud storage offerings (like ElastiCache) are inelastic: They require manual intervention to add & remove resources from the cluster. This means that cloud developers design & build bespoke solutions to monitor workload changes, modify resource allocation, and manually move data between storage engines.
This is unequivocally bad. Application developers with realistic storage needs are constantly forced to reinvent the wheel instead of reasoning about the metrics they care the most about: performance and cost. We’d like to change that.
Anna v1
Using Anna v0 as an in-memory storage engine, we set out to address the cloud storage problems described above. We aimed to evolve the fastest KVS in the cloud into the most adaptive, cost-effective one as well. We did this by adding 3 key mechanisms to Anna: Vertical Tiering, Horizontal Elasticity, and Selective Replication.
The core component in Anna v1¹ is a monitoring system & policy engine that together enable workload-responsiveness and adaptability. To meet user-defined goals for performance (request latency) and cost, the monitoring service tracks workload changes and adjusts resources accordingly. Each storage server collects statistics about the requests it serves, the data it stores, etc. The monitoring system periodically scrapes and munges this data, and the policy engine uses these statistics to take action via one of the three mechanisms listed above. The trigger for each action is simple (a sketch of the policy loop follows the list below):
Elasticity: In order to adapt to changing workloads, a system must be able to autoscale up and down to match the request volume it is seeing. When a tier is saturating compute or storage capacity, we add nodes to the cluster, and when resources are underutilized, we deallocate them to save cost.
Selective Replication: In real workloads, there is often a hot set of keys, which should be replicated beyond fault-tolerance requirements to improve performance. This increases the cores and network bandwidth available to serve common requests. Anna v0 enabled multi-master replication of keys, but had a fixed replication factor for all keys. As you can imagine, that was unreasonably expensive. In Anna v1, the monitoring engine picks the most accessed keys and increases the number of replicas of those keys specifically, without paying extra to replicate cold data.
Promotion & Demotion: Just like traditional memory hierarchies, cloud storage systems should store hot data in a high-performance, memory-speed tier for efficient access, while cold data should reside in a slower tier to save cost. Our monitoring engine automatically moves data between tiers based on access patterns.
In order to implement these mechanisms, we had to make two significant changes to the design of Anna. First, we deployed the storage engine across multiple storage media — currently RAM and flash disk. Each of these resulting storage tiers represents a different cost-performance tradeoff, akin to a traditional memory hierarchy. We also implemented a routing service that sends user requests to the correct servers in the correct tiers. This gives users a single, uniform API regardless of where the data is stored. Each one of these tiers has the same rich consistency model inherited from the first version of Anna, so the developer can work off a single (widely parameterizable) consistency model.
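As a rough illustration of what the routing service does, the sketch below resolves a key to a server without the client ever naming a tier. The metadata table and hashing scheme here are invented for the example, not taken from Anna's design.

# Hypothetical routing lookup: clients ask for a key, never a tier.
import hashlib

TIER_OF_KEY = {"user:42": "memory"}  # maintained by the policy engine
SERVERS = {"memory": ["mem-0", "mem-1"], "disk": ["disk-0"]}

def route(key):
    """Return the server responsible for `key`, whichever tier it lives in."""
    tier = TIER_OF_KEY.get(key, "disk")  # assume cold data defaults to disk
    nodes = SERVERS[tier]
    index = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return nodes[index]

print(route("user:42"))  # served from the memory tier
print(route("user:7"))   # unknown/cold key, served from disk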
Our experiments show an impressive level of both performance and cost efficiency. Anna provides 8x the throughput of AWS ElastiCache and 355x the throughput of DynamoDB for a fixed price point. Anna is also able to react to workload changes by adding nodes and replicating data appropriately.
This blog post only provides a brief overview of the design of Anna. If you’re interested in learning more, you can find the full paper here and the code here. We’re pretty pleased with the improvements we’re seeing, and we’d love to get your feedback. We have some next steps brewing that we’re excited about as well, to take advantage of the performance and flexibility Anna provides for other tasks, so stay tuned!
¹ Note that we previously referred to Anna v1 as Bedrock.
Jean-Louis Trintignant in the role of Hamlet, at the Théâtre de la Musique, Paris, 1971 (AGIP/Bridgeman Images)
In the decades after it was first staged, probably in 1600, Hamlet seems to have been popular, though not especially so. It was performed at the Globe Theatre, in Oxford, Cambridge, and elsewhere, and revived at least twice at court. But editions of Hamlet were published less frequently than those of Richard III, Richard II, or even Pericles, and aside from echoes of it in the works of other dramatists, the play is mentioned by only a couple of Shakespeare’s contemporaries (one saying that it appealed to the “wiser sort,” another that it managed to “please all”). It wasn’t until 1711 that anyone wrote at length about Hamlet; the Earl of Shaftesbury spoke of it then as the Shakespeare play that “appears to have most affected English hearts” and was perhaps the most “oftenest acted,” which likely owed much to the popularity of Thomas Betterton, one of the great Hamlets.
Another century would pass before Hamlet became Shakespeare’s most celebrated play, a position from which it has yet to be dislodged. Much of the credit for this goes to Romantic writers in Germany and England who were drawn to its intense exploration of the self and who saw their own struggles reflected in Hamlet’s. Goethe’s coming-of-age novel Wilhelm Meister’s Apprenticeship (1795–1796) turned Hamlet into a model for subsequent portraits of the artist as a young man. William Hazlitt wrote that “it is we who are Hamlet…whose powers of action have been eaten up by thought,” and Samuel Taylor Coleridge declared: “I have a smack of Hamlet myself, if I may say so.” “We love Hamlet,” Lord Byron would add, “even as we love ourselves.”
Searching through surviving records from Stratford-upon-Avon not long before this, Edmond Malone discovered that Shakespeare’s son Hamnet (the spelling was interchangeable with Hamlet) had died at the age of eleven in 1596. Malone was the first biographer to create a chronology of Shakespeare’s works and reconstruct his life out of his plays and poems. Unsure of when to date King John, and assuming that “a man of such sensibility” as Shakespeare would not “have lost his only son…without being greatly affected by it,” Malone proposed that such heartfelt lines as “Grief fills the room up of my absent child” made it likely that King John was written in the immediate aftermath of Hamnet’s death.
But nobody much cared about King John. Biographers eventually proposed that Shakespeare’s expression of grief for his son’s untimely death was suspended for four years until it at last found a proper outlet in the aptly named Hamlet. As long as you overlooked that Hamlet is about a son mourning a father (not the other way around), that Shakespeare was rewriting an old play called Hamlet, and that he may not have seen his child more than a few times after leaving his family behind when he moved to London in the late 1580s, this proved to be a much better story. Moreover, critics now felt licensed to conflate the experiences of Hamlet and Shakespeare.
Hamlet had initially been published in a pair of quartos, printed in 1603 (Q1) and 1604–1605 (Q2). A third version of the play appeared in the First Folio edition of 1623 (F1), which trimmed 230 lines from Q2, added 90 new ones, and included a number of substantive changes. When Nicholas Rowe freshly edited Hamlet in 1709 he drew on passages deriving from both the Q2 and the F1 versions (at the time no copy of Q1 was extant), producing a kind of “best bits of Hamlet” that would be more or less copied for the next three hundred years. Then, in 1823, a copy of Q1 was belatedly found, calling into question much of what was understood about the play. This earliest printed version differed considerably from the other two and was considerably shorter. Was Q1 pirated or perhaps written much earlier? Were Shakespeare’s plays trimmed in performance? Did Shakespeare revise his work? Since that discovery, scholars have fiercely debated these questions, which are as consequential for the ways in which we imagine how Shakespeare wrote as they are for how we interpret Hamlet.
It’s a truism that no one accepts anyone else’s reading of Hamlet. And for at least two hundred years, no generation has been comfortable with its predecessor’s take on the play. It’s hard to think of another work whose interpretations so uncannily identify what the play calls the “form and pressure” of “the time.” Critics and actors usually register cultural shifts a bit belatedly; but on occasion the most astute seem to anticipate them. In the early nineteenth century, as traditional gender roles began to change, women actors, including Sarah Siddons, Charlotte Cushman, and Sarah Bernhardt, began to compete with men for the title role. In 1875 the influential biographer Edward Dowden assigned Hamlet to a dark place in the playwright’s life: after writing his romantic comedies, Shakespeare was “touched by the shadow of some of the deep mysteries of human existence” before he recovered and achieved the “grave serenity” of his late, redemptive plays. But in this interim Shakespeare had joined Hamlet “in the depths.”
A generation later there emerged a more radical rethinking of Hamlet and Shakespeare’s state of mind when writing it. Sigmund Freud, searching for confirmation of his theory of the Oedipus complex, wrote to his friend Wilhelm Fliess in 1897 that “the same thing might be at the bottom of Hamlet as well. I am not thinking of Shakespeare’s conscious intention, but believe, rather, that a real event stimulated the poet to his representation, in that his unconscious understood the unconscious of his hero.” Freud went on to suggest that Shakespeare’s own Oedipal crisis provided the long-sought explanation for Hamlet’s delay in avenging his father’s death: “How better than through the torment he suffers from the obscure memory that he himself had contemplated the same deed against his father out of passion for his mother?” Other pieces of the Hamlet puzzle quickly fell into place:
His conscience is his unconscious sense of guilt. And is not his sexual alienation in his conversation with Ophelia typically hysterical?… And does he not in the end, in the same marvelous way as my hysterical patients, bring down punishment on himself?
Freud’s theory would have a profound effect on both scholars and actors; a play that straddled the political and the familial was now increasingly viewed as a domestic tragedy. And Freud’s disciple Ernest Jones’s popular Hamlet and Oedipus (1949) extended his influence for another generation.
By the 1980s, these psychological approaches were swept aside in favor of ones better suited to a generation of academics that had come of age during the cultural turmoil of the 1960s. New Historicists refocused attention on the politics of Hamlet, including the triumph of the opportunistic Fortinbras, whose seizure of power at the play’s end had long been cut in performance. I recall watching elderly playgoers gasp at a production in which Horatio’s sentimental farewell to Hamlet (“Good night, sweet prince,/And flights of angels sing thee to thy rest”) was now followed by the entrance of Fortinbras, who, as he recited the play’s final line—“Go, bid the soldiers shoot”—unholstered a pistol, put it to Horatio’s head, and pulled the trigger.
Harold Jenkins’s popular Arden edition of the play (1982), which had followed the time-honored practice of conflating the multiple versions of Hamlet, was now deemed suspect, and was replaced in 2006 by a new Arden edition that published all three versions—Q1, Q2, and F1—separately. As New Historicists became interested in Shakespeare’s faith, the (quickly disabused) notion of a Catholic Shakespeare had lingering ramifications for how Hamlet, on his return from Protestant Wittenberg, confronts the ghost of a father come from Purgatory. It’s hard in retrospect to determine whether the desire to rethink the theological underpinnings of Hamlet drove scholars to recast Shakespeare’s own beliefs or vice versa.
I’ve taught Shakespeare to Columbia undergraduates for three decades, and while my students over the years haven’t changed their minds much about A Midsummer Night’s Dream or Macbeth, they have about Hamlet. As in everyone’s classes on the play, the conversation in mine inevitably turns to why Hamlet delays. Back in the 1980s, thanks to the influence of a generation of high school teachers who had seen the 1948 film of Laurence Olivier’s Oedipal Hamlet and had likely read Hamlet and Oedipus, I could always count on a few students to say that Hamlet couldn’t readily avenge himself on a man who acted on his own desires to kill his father and sleep with his mother. (These days no student mentions the Oedipal theory, and when I offer it as a possibility, the suggestion is met with groans or laughter.)
The older Romantic view of Hamlet as an intellectual paralyzed by excessive thought still appealed to procrastinating students, so I’d hear versions of that too. But as the years rolled by I’d hear new explanations. Some of my students suggested that Hamlet couldn’t act because he was a coward, others that he was experiencing a spiritual crisis. By the end of the century a new paradigm began to emerge: Hamlet was profoundly depressed—that’s why he is immobilized, has trouble with his girlfriend, and feels so alienated. As one student memorably put it, if Prozac had been available there would have been no delay.
As the long dominance of New Historicism, which so powerfully shaped my own work, has come to an end, I find myself increasingly curious about what the next generation will make of Hamlet and what its view of Shakespeare and his most popular hero might reveal about our cultural moment. Rhodri Lewis’s absorbing and original Hamlet and the Vision of Darkness is the first major reinterpretation of the play in some time and suggests where things may be heading.
Lewis is clearly impatient with how critics have previously understood Hamlet. He argues that it is wrong to impose “the retrojection of Romantic, Freudian, or any other kinds of individuality onto a period in which they would scarcely have been comprehensible.” Lewis also pushes “back against the ideologically interpellated subject that became an article of faith for an earlier critical generation.” All that warring over the multiple texts of Hamlet strikes him as pointless, and he is comfortable reverting to Jenkins’s mix-and-match Arden edition, having decided that the texts resemble each other closely enough to overlook their differences. In another retro move, Lewis declares that his book “is an exercise in literary criticism,” not to be mistaken for one of those modish studies that uses “Shakespeare to furnish examples with which to illustrate or to challenge the history, theory, or politics of x.”
Scraping away all these layers of critical varnish exposes for Lewis a much bleaker play than the one familiar to modern readers and playgoers:
Hamlet is not thus a model of nascent subjectivity, the first modern man, a dramatic laboratory for the invention of the human, or even a study of the frustrations attendant upon sixteenth-century princely dispossession. He is instead the finely drawn embodiment of a moral order that is collapsing under the weight of its own contradictions.
Lewis’s Hamlet turns out to be “a victim, a symptom, and an agent” of a world built on hollow and self-serving humanist truisms and a “confused, self-indulgent, and frequently heedless” one at that. He doesn’t so much delay in taking revenge as discover that he isn’t all that motivated to act on behalf of a father who failed to secure his succession.
Program for a nineteenth-century production of Hamlet (Private Collection/Look and Learn/Peter Jackson Collection/Bridgeman Images)
It gets worse. Lewis’s Hamlet is “a thinker of unrelenting superficiality, confusion, and pious self-deceit. He feints at profundity but is unwilling and unable to journey beyond his own fears, blind spots, and preoccupations.” At least Claudius knows what sort of game he is playing; Hamlet, “unlike his uncle, is unable or unwilling to register in himself the corruption that he diagnoses in others.” “For all Claudius’s dishonesty,” and “for all Polonius’s self-serving lucubration,” Lewis concludes, “the young Prince Hamlet is the inhabitant of Elsinore most thoroughly mired in bullshit, about himself and about the world around him.” And Hamlet’s thoughts on the workings of providence are the “summa of his bullshit.”
It would be foolish trying to defend Hamlet by quoting his most famous soliloquy, since its words, stitched together out of empty pieties that he should critique but merely recycles, “comprise another study in superficial humanism, made up of commonplaces and sententiae divorced from the contexts that make them meaningful.” “To be or not to be” “sounds terrific,” but “it designedly does not make sense.” Nor should we take Hamlet’s talk of suicide seriously, since he is just “posturing.” Hamlet “pretends to engage” with the “prospect of self-murder because he is attracted to the image of himself disdaining the world, and because he has no intention of following through on the deed.”
Lewis’s Hamlet turns out to be as lame a drama critic as he is a historian, poet, and philosopher. By mocking Polonius’s response to the actors, Hamlet tries to distract us from his own “undercooked theorizing.” But we shouldn’t be misled; neither Polonius nor Hamlet “fully knows what he is talking about, though both are determined to conduct themselves as if they do.” The two are “high-born philistines whose pushiness and culturally deep pockets compel the professional artists to hear them out.”
Why have earlier critics failed to see Hamlet in this way? It’s tempting to blame Shakespeare for not signaling his intentions clearly enough. But Lewis, I imagine, is more likely to shift the blame to our collective refusal to register the ways in which the play turns on Shakespeare’s own rejection of humanism. So as not to misrepresent his book’s central argument, and to give a sense of how passionately it is expressed, I’ll quote at length:
Shakespeare repudiates two fundamental tenets of humanist culture. First, the core belief that history is a repository of wisdom from which human societies can and should learn…. Second, the conviction that the true value of human life could best be understood by a return ad fontes—to the origins of things, be they historical, textual, moral, poetic, philosophical, or religious (Protestant and Roman Catholic alike). For Shakespeare, this is a sham…. Like the past in general, origins are pliable—whatever the competing or complementary urges of appetite, honour, virtue, and expediency need them to be.
The fruitless search for absolutes by which to act or judge is doomed to failure: “Hamlet turns to moral philosophy, love, sexual desire, filial bonds, friendship, introversion, poetry, realpolitik, and religion in the search for meaning or fixity. In each case, it discovers nothing of significance.”
The absence of any moral certainties means that it’s a “kill or be killed” world, and the most impressive chapter in Hamlet and the Vision of Darkness establishes how the language of predation saturates the play. Lewis’s brilliant analysis here gives fresh meaning to long-familiar if half-understood phrases, including the “enseamed” marital bed, “Bait of falsehood,” “A cry of players,” “We coted them on the way,” “Start not so wildly,” “I am tame, sir,” “We’ll e’en to it like French falconers,” and “When the wind is southerly, I know a hawk from a handsaw.” Thirty years ago this analysis might have been the basis of an important, if localized, study—but that sort of book could never find a major publisher today. Here, it becomes a clever way of establishing what for Lewis is the play’s bass line:
Whatever an individual might strive to believe, he always and only exists as a participant in a form of hunting—one in which he, like everyone else, is both predator and prey.
It would have been bold enough to claim that Shakespeare wrote a play about the rot at the heart of sixteenth-century humanism. But for Lewis this turns out to be symptomatic of something larger, a crisis experienced not just by literature’s most famous character, but by Shakespeare himself, who “came to find humanist moral philosophy deficient in the face of human experience as he observed it.” For the Shakespeare of Hamlet, “humankind is bound in ignorance of itself.” We are told that “Shakespeare’s target is not Hamlet, or not just Hamlet. Instead, he sets himself against Boethius, against Cicero, against the conventions of humanism in the philosophical and religious round.” And Shakespeare apparently sets himself against God too:
There is no divine author scripting human affairs; no list of approved parts for humankind to play; no heavenly audience passing judgment on human performances.
These biographical claims, which can be traced back to Edward Dowden’s fantasies about Shakespeare “in the depths,” are the weakest part of the book and the most indebted to the psychologizing that Lewis elsewhere scorns. And Lewis dodges the question of what triggered Shakespeare’s profound disillusionment, declaring it to be “beyond the scope of this book.” We are left to wonder if Shakespeare ever overcame his despair, and whether in his late and seemingly redemptive plays he was merely faking it. We are also left in the dark about Lewis’s own turn against a humanist tradition in which he is so steeped; most scholars with this much Latin and Greek end up celebrating humanist culture, not exposing it as fraudulent.
Another question that the book doesn’t clearly answer is whether this is a story about a bad student—Hamlet—who merely regurgitates half-digested scraps of a Renaissance humanist education he doesn’t fully grasp, or whether he is a true product of that humanist tradition and conveys its arguments accurately, arguments that are revealed to be shallow and self-serving. Was Shakespeare—who never attended a university yet knew his Seneca and Tacitus—ever this invested in classical humanism, as Lewis wants us to believe? I’m not persuaded by his claim that Hamlet likely speaks his most famous soliloquy while holding a copy of Cicero’s Tusculan Disputations.
I searched in vain while reading this book for what drove this grim argument—before finding a provisional answer in “Hamlet: Then and Now,” a short essay that Lewis recently posted on the Princeton University Press website. He argues there that Shakespeare
offers us an unflinchingly brilliant guide to the predicaments in which we find ourselves in Trumpland and on Brexit Island. Not by prophesying the likes of Farage, Bannon, and Donald J. Trump…but by enabling us to experience a world in which the prevalent senses of moral order (political, ethical, personal) bear only the most superficial relation to lived experience.
If I understand Lewis correctly, we have paid a steep political price for failing to heed Shakespeare’s warning in Hamlet that the world has always been amoral and predatory.
Our political situation has altered with dizzying speed of late. Somehow, we collectively absorb these changes, even if many of us refuse to reconcile ourselves to them. But resistance to change is even fiercer when it comes to radical reinterpretations of our favorite works of art, and Lewis and his antihumanist approach to Hamlet, though it suits our moment, will, I suspect, win over very few adherents, at least in the short term.
Reading this book prompted some speculation of my own. I wondered what it revealed about the disillusionment of scholars like Rhodri Lewis, who, Hamlet-like, expected, when their turn came, to inherit an academic kingdom. With funding for higher education slashed, literature departments downsized, full-time faculty replaced by adjuncts, and illustrious universities like my own choosing to hire only at the entry level to replace those of us who will be retiring, the prospects facing the next generation of academics are dismal. Depressingly, there is only a single position advertised this year in all of North America for a senior Shakespeare scholar. The need to make a splash, even to overstate claims, is understandable.
Lewis’s Hamlet is not mine, nor is his Hamlet. The difference in our approaches and conclusions may simply be generational. But I admire his relentless questioning of underexamined beliefs that have long guided our reading of Hamlet and, if he is right, have been instrumental in leading us into the political mire in which we now find ourselves.
The Data Science & Infrastructure team at Plaid has grown significantly over the past few months into a team whose mission is to empower Plaid with a data-first culture. This post is a look at how we rebuilt internal analytics around rollup tables and materialized views at Plaid.
DSI: It’s a lifestyle
By the time we had built a scalable monitoring pipeline with Kinesis, Prometheus, & Grafana and beefed up our ETL efforts with Airflow, Plaid was in the midst of a transition to Periscope Data as a business intelligence tool. After testing and analysis, we decided to use Periscope for tracking metrics around our core product usage, go-to-market strategy, and internal operations for customer support and project management.
Snapshot of a Periscope Dashboard
At Plaid, we take a lot of pride in being a data-driven company, and as such, the DSI team took on the responsibility for getting the data into our AWS Redshift data warehouse, which powers the charts connected to Periscope. We decided to also own the query runtime performance of the SQL statements being written in Periscope, to ensure our data collection efforts were maximally useful to the company. The tool was being rapidly adopted internally, both by a collection of power users who were proficient in SQL and by less experienced folks who had just started to get their feet wet.
The original use-case for our Redshift cluster wasn’t centered around an organization-wide analytics deployment, so initial query performance was fairly volatile: the tables hadn’t been set up with sort and distribution keys matching the query patterns in Periscope; these are important table configuration settings for controlling how data is organized on disk, and they have a huge impact on performance. We also hadn’t broadly invested in Redshift settings like workload management (WLM) queues, and the data stored in the cluster was a comprehensive dumping ground, not a schema tailored for analytics.
With “data” in our team name, we couldn’t go further in this post without establishing some key metrics. Fortunately, Periscope offers a great set of meta-tables related to the usage of the tool, for example tables showing which charts were being run by different users and how long the query ran from Periscope’s point of view.
Weekly Redshift Query performance at Plaid from Dec. 1 — Mar. 31st, all queries
Initial discovery
Looking into the data, we saw that the p90 runtime (the sparkline in the top right corner of the image above) was fairly volatile, ranging from high single digits to tens of seconds week to week. What’s more, this first view looked at all the queries being run, whereas we wanted to weight the queries that were important to the success of the business. We set up some additional constraints:
We only considered weekday data points, and only after the entire day’s worth of data was available. We didn’t count weekends because the cluster usage pattern differed too much versus weekdays: since we were tracking our statistics daily, when query volume dropped on the weekends it created visual noise that detracted from analyzing normal weekday patterns.

We only counted queries being run by a user — not queries that Periscope was running in the background to keep charts up-to-date.

We excluded queries being run by the DSI team members themselves, since we were a large, noisy set of data points and didn’t have the same distribution of runtimes as many of the other users.
Runtimes of active queries in top 1000 dashboards from Dec. 1 — Mar. 31st
User-run queries turned out to be quite contentious! The runtimes look much slower than the general population of queries. For our first foray into critically analyzing our cluster’s performance, we targeted two main categories of potential problems:
Redshift cluster settings and data layout: only some tables had their on-disk data distribution defined through sort and distribution keys; others had no sort key and no purposeful distribution style.

Outlier user behavior: we expected some users to be running larger queries than others. We ran segmentation analysis by users, dashboards, and tables, to look for readily available patterns where performance was drifting upwards.
Solving for X
Off the bat we knew the first line-item was an issue in several circumstances; some tables had effectively unbounded row counts and no sort key set, and as such were highly likely to be less efficient than they should be. After manually reading through the SQL in several charts and interviewing the more prolific query authors, we settled on deploying timestamps as the sort key everywhere, since often the initial query step was to home in on a specific time-slice of data.
Similarly, we modified distribution keys to work with our join and aggregation patterns. There are many great blog posts on this kind of work, and the AWS Redshift documentation has lots of great pointers as well. While this was a useful endeavor and had some impact on query speed, the impact was relatively small and was not going to be the overarching solution to our run-time problems.
Output from stl_alert_event_log highlights some table-specific issues
We conducted additional experiments using recommendations from the AWS support team, both with Redshift’s built-in performance alert infrastructure like stl_alert_event_log and with schema changes such as splitting tables into monthly tables (logs_2018_01, logs_2018_02, logs_2018_03) and union all-ing them back together. These solutions weren’t the ideal fit, as they lacked context around our use-cases. Adding more cluster discipline by running vacuum and analyze on each table on a regular basis, and setting up better workload management queues, were also small wins, but the big wins came from optimizing the company’s holistic approach to running queries.
In order to get a better understanding of the patterns specific to our company, we would first cross join all the queries against all the tables to inspect the underlying SQL for matching table names. This was made easy in part because we forbade usage of the public schema, and table names tended to be lengthy enough to avoid false positives.
-- Match each SQL query with any table used in the query
-- by searching the text for the schema_name.table_name of the table
select
id
, table_name
from
charts
cross join [tracked_tables_selection]
where
charts.sql like '%' || table_name || '%'
Discoveries
This led us to our first critical discovery: 95% of the slow queries came from 5% of the tables. Our Pareto distribution was quite steep — we had compounding factors, as the most interesting tables were also the ones with the most data points. Logs from the core application behind Plaid were all being added to a single, large table, and users were writing many similar filters and aggregations over the dataset to analyze different time-periods, customer behavior, and error types.
Once we understood the problem, the technical solution was fairly straight-forward: pre-compute the common elements of the queries by creating rollup tables, materialized views, and pre-filtered & pre-segmented tables, and change user queries to run against these derivative tables. We already used Airflow for a good amount of data ETL management, so it was an easy choice to start materializing views and rollups. The associated DAGs are simple to implement; just have Airflow run the SQL query every 15 minutes or so, and then swap out the old table for the new one.
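For a flavor of what such a DAG can look like, here is a minimal sketch in the shape just described: rebuild the rollup into a staging table, then swap it in. The connection id, schedule, table names, and simplified SQL are our illustrative assumptions, not Plaid's production code.

# Hypothetical Airflow DAG: rebuild an hourly rollup and swap it into place.
# Connection id, schedule, and SQL are illustrative assumptions.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.postgres_operator import PostgresOperator

REBUILD_SQL = """
drop table if exists logs_hourly.apiv1_production_staging;
create table logs_hourly.apiv1_production_staging as (
    select date_trunc('hour', current_time_ms) as _timestamp
         , client_id
         , count(1) as count
    from logs_raw.apiv1_production
    group by 1, 2
);
"""

SWAP_SQL = """
begin;
drop table if exists logs_hourly.apiv1_production;
alter table logs_hourly.apiv1_production_staging rename to apiv1_production;
commit;
"""

dag = DAG(
    "rollup_apiv1_hourly",
    start_date=datetime(2018, 1, 1),
    schedule_interval=timedelta(minutes=15),  # "every 15 minutes or so"
)

rebuild = PostgresOperator(
    task_id="rebuild_staging", sql=REBUILD_SQL,
    postgres_conn_id="redshift", dag=dag)
swap = PostgresOperator(
    task_id="swap_tables", sql=SWAP_SQL,
    postgres_conn_id="redshift", dag=dag)

rebuild >> swap  # rebuild first, then rename the staging table into place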
Creating infrastructure for derivative tables let us take granular data and produce something more meaningful to analyze. For example, we can take a raw logging table that has a message per HTTP request, group the data by the type of the request, the client, and the resulting status information, and bucket the data into hourly counts.
create table logs_hourly.apiv1_production as (
select
date_trunc('hour', current_time_ms) as _timestamp
, client_id
, client_name
, error_message
, message
, method
, (time_ms_number / 5000) * 5 as time_5s
, status_code
, cast(split_part(url_string, '?', 1) as varchar(64)) as url_path
, count(1) as count
from
logs_raw.apiv1_production
group by
1, 2, 3, 4, 5, 6, 7, 8, 9
);
The harder part was getting folks internally to migrate onto new structures once we identified the translation pattern. We needed to re-write queries that looked like this:
select
[current_time_ms:hour]
, count(1)
from
logs_raw.apiv1_production
where
[current_time_ms=14days]
group by
1
order by
1
to a similar query that looks like this:
select
_timestamp
, sum(count)
from
logs_hourly.apiv1_production
where
[_timestamp=14days]
group by
1
order by
1
They’re quite similar, and with a deterministic pattern: change the schema and table name, migrate to the _timestamp moniker we had centralized around, and change count(1) to sum(count) to take the aggregate of our hourly rollup aggregation. While we had many different kinds of derivative tables, the technical translation was usually an easy task.
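Because the pattern was deterministic, much of this rewriting could be scripted. The toy translator below implements exactly the three steps just listed; a real version would want an actual SQL parser, and the function is our sketch, not something Periscope provides.

# Toy sketch of the mechanical rollup translation described above.
# A real rewriter would use a SQL parser; all names here are ours.
import re

def translate(query):
    # 1. Point the query at the rollup schema/table.
    query = query.replace("logs_raw.", "logs_hourly.")
    # 2. The hourly bucket is precomputed as the _timestamp column.
    query = query.replace("[current_time_ms:hour]", "_timestamp")
    # 3. Date-range filters keep their syntax, with the column renamed.
    query = re.sub(r"\[current_time_ms=([^\]]+)\]", r"[_timestamp=\1]", query)
    # 4. Counts over raw rows become sums over the hourly counts.
    query = query.replace("count(1)", "sum(count)")
    return query

print(translate(
    "select [current_time_ms:hour], count(1) "
    "from logs_raw.apiv1_production "
    "where [current_time_ms=14days] group by 1 order by 1"
))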
The harder part was the operational deployment of these changes — we had grown to hundreds of dashboards with thousands of charts being used regularly for internal analytics, and it was challenging to roll out changes that were far-reaching enough for us to see movement in our metrics, yet non-disruptive enough not to upend any individual’s workflow during migration.
To combat this, we set up a dashboard that would allow us to track the individual dashboards and charts contributing the slowest runtimes, and then cross-referenced it against the distribution of run-times by users to allow us to more narrowly align with individual users internally — making changes on a user-by-user basis instead of a table-by-table basis facilitated greater transparency and communication during the procedural deployment.
Diving into where slow queries were hiding
This worked better not just through cross-team collaboration; it also allowed us to watch the system grow and evolve around our changes. As the tool became more accessible, with not only faster query speeds but additional data sources, more teams started using it to track their KPIs, and sometimes new authors would generate additional slow-downs that we could then work through together.
Eventually, we got to a place where we were comfortable with respect to power-user query runtimes, and began shifting our focus towards involving folks outside of DSI in making and managing these derivative structures, to see if the system could become self-sustaining from an infrastructure perspective. The system had largely stabilized and we had removed nearly all of the “unbearably long” queries that were prevalent before this effort started:
The total migration: 10x the number of queries, 1/10th the query runtime
To anyone else faced with the question of deploying an analytics warehouse at a small company: WLM queues, vacuum/analyze, and on-disk data layout are all things you’ll need to look into. However, rather than trying to meet a performance bar with behind-the-scenes optimizations, we recommend first understanding your users’ query patterns and use cases, making sure the data is available to them in the most tightly-packed format possible through rollup tables, and reducing query duplication with materialized views and pre-segmented tables.
While we’re always looking for ways to continue improving our processes, we feel good about the initial progress of our query performance and are constantly embarking on new ways to make Plaid a data-first company. If Redshift performance, Spark cluster management, or real-time analytics through the ELK stack is right up your alley, come join us!
Special thanks to Lars Kamp from intermix.io, a great tool for Redshift optimization, for brainstorming Redshift performance with me back in March. And thanks to Angela Zhang for all the great feedback on this post.
WARNING: We're overloaded at the moment and we can't handle any new profile request. We're sorry and will come back soon with a system that scales better. See https://news.ycombinator.com/item?id=17951478
We love the default GitHub profiles and we want to enhance them:

The GitHub profiles aren't clearly showing all repos you have contributed to since you joined GitHub. We are showing them all, even those you don't own and those owned by organizations you're not in.1

The GitHub profiles are listing all the repos you own but they sort them only by age of the latest commit. We prefer to sort repos by a combination of how active they are, how much you have contributed to them, how popular they are, etc. For each user we want to see first the latest greatest repos they have most contributed to.

On GitHub only repos earn stars. We push it one step further by transferring these stars to users. If you have built 23% of a 145-star repo, you deserve 33 stars for that contribution. We add all these stars and clearly show how many of them you earned in total (see the sketch below).

The GitHub profiles don't clearly show how big your contribution to a repo was when you don't own it. Maybe you wrote 5%. Maybe 90%. We make it clear.

GitHub detects programming languages. We want to also know about technologies/frameworks, e.g. "react", "docker", etc.

The GitHub profiles allow filtering your repos by programming language. We will allow filtering by technologies/frameworks as well.

The GitHub profiles can be tweaked by clicking around. We allow them to be tweaked programmatically.

On GitHub only users and organizations have avatars. We bring avatars to repos.

Our enhanced profiles are accessible at https://ghuser.io/<github-username>, e.g. ghuser.io/AurelienLourot.
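The star transfer above is simple pro-rating. Here is a minimal sketch of the arithmetic; the function and its inputs are our illustration, not ghuser.io's actual code.

# Pro-rated stars, as in the example above: 23% of a 145-star repo ≈ 33 stars.
def earned_stars(contributions):
    """contributions: list of (repo_stars, your_share_of_the_code) pairs."""
    return round(sum(stars * share for stars, share in contributions))

print(earned_stars([(145, 0.23)]))  # -> 33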
Some of my repos are not showing up on my profile, why?
Did you give them a star? We don't display repos with no stars at all. We think that if even you haven't given them a star, then you probably aren't proud of them (yet).
Does ghuser.io intend to compete with the default GitHub profiles?
No, in fact we'd love GitHub to copy ghuser.io or to even do better, so that this project can die.
How are the organizations sorted in the Contributed to section?
For now it's kind of random. See #142 for more details.
IN THE 1970s, when those behind America’s manned space programme were trying to keep it alive as people got bored of moon landings, one fantasy was that there were products which might be made easily in space that were hard to create on Earth—metal foams, for example. Such dreams came to nothing because, however fancy the product, the cost of manufacturing it in orbit was never lower than the price it would have commanded back on Earth.
Two Californian firms, however, think they have cracked this problem. Made in Space and FOMS (Fiber Optic Manufacturing in Space) are both proposing to manufacture optical fibre of the highest quality in the free-falling conditions of the International Space Station. At $1m a kilogram, this is a material that is well worth the trip to and from orbit.
Optical fibres are made by pulling glass into strands which have a diameter similar to that of human hair. Cables filled with these fibres have revolutionised telecommunications. When a telephone call, say, is encoded as laser pulses and sent through an optical fibre, it can travel a far greater distance, with a lower loss of signal quality, than if the message involved had been transmitted through a copper wire. As a consequence, except for the last few hundred metres of connection to the customer, copper cabling has almost disappeared.
Optical fibre could, though, be better than it is. The glass used contains impurities that both absorb and scatter part of the light passing through it. This can be ameliorated by adding germanium, which reduces absorption and scattering. But that is not a perfect answer.
The best solution known in principle was found in 1975, by researchers at the University of Rennes, in France. It is a glass, made from a mixture of the fluorides of zirconium, barium, lanthanum, aluminium and sodium, that is therefore known as ZBLAN (sodium has the chemical symbol Na). Fibre made from ZBLAN has extremely low losses from absorption and scattering, particularly in the part of the spectrum called the mid-infrared, where conventional optical fibre does not work well.
ZBLAN fibres are, though, fragile. That makes drawing one that is more than about a kilometre long a hard task which, in turn, makes them useless for long-distance work. They also tend to contain tiny crystals that form when the material is cooling. These negate the lack of absorption and scattering that otherwise give ZBLAN its advantages.
However, in the absence of stresses caused by gravity on the cooling material, much longer fibres could be drawn. Nor would the crystals form. And the one large place under human control where such stresses are absent is the space station.
Both firms say they have built apparatus to produce ZBLAN fibres that is small and light enough to send to, and operate in, the space station. Made in Space’s machine has some similarities to the sort of plastic-extrusion 3D printer used by hobbyists. It ingests a preformed pencil of ZBLAN. A furnace melts the tip of this pencil. Thin strands of fibre are then pulled from the molten area. But instead of being used to form an object while still soft, these strands are coated with a second sort of glass for protection and then spooled onto reels for storage.
Made in Space already has a plastic-extrusion printer on board the space station. This is used to make replacements for small items that have got damaged, obviating the delays involved in bringing them from Earth. The company’s managers are therefore reasonably confident that their ZBLAN extruder will also work in free-fall.
They will soon find out. A prototype arrived at the space station in July, and will be tested shortly. An equivalent device built by FOMS will be sent up later this year. Both firms are promising fibre with a performance 100 times better than anything made on Earth, in lengths of several tens of kilometres.
To say that this is truly an economic process is cheating slightly, since the ledger fails to account for the trivial matter of the $100bn or so spent to build the space station in the first place. But, given that this is now a sunk cost, it does seem possible that Made in Space and FOMS have actually found a way to fulfil the dreams of the 1970s, and make money by making things in space.
The thought of Christmas raises almost automatically the thought of Charles Dickens, and for two very good reasons. To begin with, Dickens is one of the few English writers who have actually written about Christmas. Christmas is the most popular of English festivals, and yet it has produced astonishingly little literature. There are the carols, mostly medieval in origin; there is a tiny handful of poems by Robert Bridges, T. S. Eliot, and some others, and there is Dickens; but there is very little else. Secondly, Dickens is remarkable, indeed almost unique, among modern writers in being able to give a convincing picture of happiness.
Dickens dealt successfully with Christmas twice: in a chapter of The Pickwick Papers and in A Christmas Carol. The latter story was read to Lenin on his deathbed and, according to his wife, he found its ‘bourgeois sentimentality’ completely intolerable. Now in a sense Lenin was right: but if he had been in better health he would perhaps have noticed that the story has interesting sociological implications. To begin with, however thick Dickens may lay on the paint, however disgusting the ‘pathos’ of Tiny Tim may be, the Cratchit family give the impression of enjoying themselves. They sound happy as, for instance, the citizens of William Morris's News From Nowhere don't sound happy. Moreover (and Dickens's understanding of this is one of the secrets of his power) their happiness derives mainly from contrast. They are in high spirits because for once in a way they have enough to eat. The wolf is at the door, but he is wagging his tail. The steam of the Christmas pudding drifts across a background of pawnshops and sweated labour, and in a double sense the ghost of Scrooge stands beside the dinner table. Bob Cratchit even wants to drink to Scrooge's health, which Mrs Cratchit rightly refuses. The Cratchits are able to enjoy Christmas precisely because it only comes once a year. Their happiness is convincing just because it is described as incomplete.
All efforts to describe permanent happiness, on the other hand, have been failures. Utopias (incidentally the coined word Utopia doesn't mean ‘a good place’, it means merely a ‘non-existent place’) have been common in literature of the past three or four hundred years, but the ‘favourable’ ones are invariably unappetising, and usually lacking in vitality as well.
By far the best known modern Utopias are those of H. G. Wells. Wells's vision of the future is almost fully expressed in two books written in the early Twenties, The Dream and Men Like Gods. Here you have a picture of the world as Wells would like to see it, or thinks he would like to see it. It is a world whose keynotes are enlightened hedonism and scientific curiosity. All the evils and miseries we now suffer from have vanished. Ignorance, war, poverty, dirt, disease, frustration, hunger, fear, overwork, superstition: all vanished. So expressed, it is impossible to deny that that is the kind of world we all hope for. We all want to abolish the things Wells wants to abolish. But is there anyone who actually wants to live in a Wellsian Utopia? On the contrary, not to live in a world like that, not to wake up in a hygienic garden suburb infested by naked schoolmarms, has actually become a conscious political motive. A book like Brave New World is an expression of the actual fear that modern man feels of the rationalised hedonistic society which it is within his power to create. A Catholic writer said recently that Utopias are now technically feasible and that in consequence how to avoid Utopia had become a serious problem. We cannot write this off as merely a silly remark. For one of the sources of the Fascist movement is the desire to avoid a too-rational and too-comfortable world.
All ‘favourable’ Utopias seem to be alike in postulating perfection while being unable to suggest happiness. News From Nowhere is a sort of goody-goody version of the Wellsian Utopia. Everyone is kindly and reasonable, all the upholstery comes from Liberty's, but the impression left behind is of a sort of watery melancholy. But it is more impressive that Jonathan Swift, one of the greatest imaginative writers who have ever lived, is no more successful in constructing a ‘favourable’ Utopia than the others.
The earlier parts of Gulliver's Travels are probably the most devastating attack on human society that has ever been written. Every word of them is relevant today; in places they contain quite detailed prophecies of the political horrors of our own time. Where Swift fails, however, is in trying to describe a race of beings whom he admires. In the last part, in contrast with disgusting Yahoos, we are shown the noble Houyhnhnms, intelligent horses who are free from human failings. Now these horses, for all their high character and unfailing common sense, are remarkably dreary creatures. Like the inhabitants of various other Utopias, they are chiefly concerned with avoiding fuss. They live uneventful, subdued, ‘reasonable’ lives, free not only from quarrels, disorder or insecurity of any kind, but also from ‘passion’, including physical love. They choose their mates on eugenic principles, avoid excesses of affection, and appear somewhat glad to die when their time comes. In the earlier parts of the book Swift has shown where man's folly and scoundrelism lead him: but take away the folly and scoundrelism, and all you are left with, apparently, is a tepid sort of existence, hardly worth leading.
Attempts at describing a definitely other-worldly happiness have been no more successful. Heaven is as great a flop as Utopia, though Hell occupies a respectable place in literature, and has often been described most minutely and convincingly.
It is a commonplace that the Christian Heaven, as usually portrayed, would attract nobody. Almost all Christian writers dealing with Heaven either say frankly that it is indescribable or conjure up a vague picture of gold, precious stones, and the endless singing of hymns. This has, it is true, inspired some of the best poems in the world:
Thy walls are of chalcedony,
Thy bulwarks diamonds square,
Thy gates are of right orient pearl
Exceeding rich and rare!
But what it could not do was to describe a condition in which the ordinary human being actively wanted to be. Many a revivalist minister, many a Jesuit priest (see, for instance, the terrific sermon in James Joyce's Portrait of the Artist) has frightened his congregation almost out of their skins with his word-pictures of Hell. But as soon as it comes to Heaven, there is a prompt falling-back on words like ‘ecstasy’ and ‘bliss’, with little attempt to say what they consist in. Perhaps the most vital bit of writing on this subject is the famous passage in which Tertullian explains that one of the chief joys of Heaven is watching the tortures of the damned.
The pagan versions of Paradise are little better, if at all. One has the feeling it is always twilight in the Elysian fields. Olympus, where the gods lived, with their nectar and ambrosia, and their nymphs and Hebes, the ‘immortal tarts’ as D. H. Lawrence called them, might be a bit more homelike than the Christian Heaven, but you would not want to spend a long time there. As for the Muslim Paradise, with its 77 houris per man, all presumably clamouring for attention at the same moment, it is just a nightmare. Nor are the spiritualists, though constantly assuring us that ‘all is bright and beautiful’, able to describe any next-world activity which a thinking person would find endurable, let alone attractive.
It is the same with attempted descriptions of perfect happiness which are neither Utopian nor other-worldly, but merely sensual. They always give an impression of emptiness or vulgarity, or both. At the beginning of La Pucelle Voltaire describes the life of Charles VII with his mistress, Agnes Sorel. They were ‘always happy’, he says. And what did their happiness consist in? An endless round of feasting, drinking, hunting and love-making. Who would not sicken of such an existence after a few weeks? Rabelais describes the fortunate spirits who have a good time in the next world to console them for having had a bad time in this one. They sing a song which can be roughly translated: ‘To leap, to dance, to play tricks, to drink the wine both white and red, and to do nothing all day long except count gold crowns’. How boring it sounds, after all! The emptiness of the whole notion of an everlasting ‘good time’ is shown up in Breughel's picture The Land of the Sluggard, where the three great lumps of fat lie asleep, head to head, with the boiled eggs and roast legs of pork coming up to be eaten of their own accord.
It would seem that human beings are not able to describe, nor perhaps to imagine, happiness except in terms of contrast. That is why the conception of Heaven or Utopia varies from age to age. In pre-industrial society Heaven was described as a place of endless rest, and as being paved with gold, because the experience of the average human being was overwork and poverty. The houris of the Muslim Paradise reflected a polygamous society where most of the women disappeared into the harems of the rich. But these pictures of ‘eternal bliss’ always failed because as the bliss became eternal (eternity being thought of as endless time), the contrast ceased to operate. Some of the conventions embedded in our literature first arose from physical conditions which have now ceased to exist. The cult of spring is an example. In the Middle Ages spring did not primarily mean swallows and wild flowers. It meant green vegetables, milk and fresh meat after several months of living on salt pork in smoky windowless huts. The spring songs were gay:

Do nothing but eat and make good cheer,
And thank Heaven for the merry year
When flesh is cheap and females dear,
And lusty lads roam here and there
So merrily,
And ever among so merrily!

because there was something to be so gay about. The winter was over, that was the great thing. Christmas itself, a pre-Christian festival, probably started because there had to be an occasional outburst of overeating and drinking to make a break in the unbearable northern winter.
The inability of mankind to imagine happiness except in the form of relief, either from effort or pain, presents Socialists with a serious problem. Dickens can describe a poverty-stricken family tucking into a roast goose, and can make them appear happy; on the other hand, the inhabitants of perfect universes seem to have no spontaneous gaiety and are usually somewhat repulsive into the bargain. But clearly we are not aiming at the kind of world Dickens described, nor, probably, at any world he was capable of imagining. The Socialist objective is not a society where everything comes right in the end, because kind old gentlemen give away turkeys. What are we aiming at, if not a society in which ‘charity’ would be unnecessary? We want a world where Scrooge, with his dividends, and Tiny Tim, with his tuberculous leg, would both be unthinkable. But does that mean we are aiming at some painless, effortless Utopia? At the risk of saying something which the editors of Tribune may not endorse, I suggest that the real objective of Socialism is not happiness. Happiness hitherto has been a by-product, and for all we know it may always remain so. The real objective of Socialism is human brotherhood. This is widely felt to be the case, though it is not usually said, or not said loudly enough. Men use up their lives in heart-breaking political struggles, or get themselves killed in civil wars, or tortured in the secret prisons of the Gestapo, not in order to establish some central-heated, air-conditioned, strip-lighted Paradise, but because they want a world in which human beings love one another instead of swindling and murdering one another. And they want that world as a first step. Where they go from there is not so certain, and the attempt to foresee it in detail merely confuses the issue.
Socialist thought has to deal in prediction, but only in broad terms. One often has to aim at objectives which one can only very dimly see. At this moment, for instance, the world is at war and wants peace. Yet the world has no experience of peace, and never has had, unless the Noble Savage once existed. The world wants something which it is dimly aware could exist, but cannot accurately define. This Christmas Day, thousands of men will be bleeding to death in the Russian snows, or drowning in icy waters, or blowing one another to pieces on swampy islands of the Pacific; homeless children will be scrabbling for food among the wreckage of German cities. To make that kind of thing impossible is a good objective. But to say in detail what a peaceful world would be like is a different matter.
Nearly all creators of Utopia have resembled the man who has toothache, and therefore thinks happiness consists in not having toothache. They wanted to produce a perfect society by an endless continuation of something that had only been valuable because it was temporary. The wiser course would be to say that there are certain lines along which humanity must move, the grand strategy is mapped out, but detailed prophecy is not our business. Whoever tries to imagine perfection simply reveals his own emptiness. This is the case even with a great writer like Swift, who can flay a bishop or a politician so neatly, but who, when he tries to create a superman, merely leaves one with the impression (the very last he can have intended) that the stinking Yahoos had in them more possibility of development than the enlightened Houyhnhnms.
In our last article "Why Love Generative Art?" we had a blast putting the genre into the context of modern art history. In this article we interview contemporary generative art prodigy (my words, not his) Manolo Gamboa Naon from Argentina.
Manolo's work feels like it is the result of the entire contents of twentieth-century art and design being put into a blender. Once chopped down into its most essential geometry, Manolo then lovingly pieces it back together with algorithms and code to produce art that is simultaneously futuristic and nostalgic. His work serves as a welcome (and needed) bridge into digital art and an antidote for those who see the genre as cold, mechanical, and discontinuous with the history of art.
We couldn't be more excited to share our interview with Manolo, his first to be published in English. But before we dive in, let's have some fun and deconstruct a few examples of his work. For me, seeing his work side by side with the masters of twentieth-century art highlights just how well it holds its own.
Arrowhead Picture - Wassily Kandinsky, 1923
Composition 8 - Wassily Kandinsky, 1923
I see Wassily Kandinsky as an obvious artistic influence on Manolo. The two share a masterful use of color and composition and an interest in exploring the spiritual and psychological effects of color and geometry. Nowhere is this more apparent than in Manolo's series of works titled bbccclll, which have all the rhythm and beauty of Kandinsky's early-1920s lyrical abstractions. Kandinsky said of abstract painting that it is "the most difficult" of all the arts, noting:
It demands that you know how to draw well, that you have a heightened sensitivity for composition and for color, and that you be a true poet. This last is essential.
Manolo's visual poetry checks all of these boxes and does it through code and pixels alone. His poetry is most evident in the range of styles and emotions he can elicit from the most basic elements of geometry. For example, let's compare Manolo's Kandinsky-esque bbccclll with a work that feels closer to Sonia and Robert Delaunay, Manolo's CUDA. We can quickly see how Manolo triggers a completely different range of emotions by shifting the color and placement of just two basic elements, the circle and the triangle.
Circular Shapes Sun and Moon - Robert Delaunay, 1912/1931
Prismes électriques - Sonia Delaunay, 1914
Indeed, Sonia Delaunay could be describing Manolo's sensitivity to color and his prolific body of work when she said:
He who knows how to appreciate color relationships, the influence of one color on another, their contrasts and dissonances, is promised an infinitely diverse imagery.
Another one of my favorite works by Manolo, ppllnnn, has a really strong Max Ernst vibe for me. The highly detailed and organic texture of this work reminds me of similar textures that Ernst was able to generate by pioneering techniques like frottage and decalcomania to introduce complexity and randomness into his own works.
100,000 Doves - Max Ernst, 1925
The Gramineous Bicycle Garnished with Bells the Dappled Fire Damps and the Echinoderms Bending the Spine to Look for Caresses - Max Ernst, 1921
Ernst, always open to surprise and chance, once said:
Painting is not for me either decorative amusement, or the plastic invention of felt reality; it must be every time: invention, discovery, revelation.
As you will see in our interview, invention, discovery, and revelation are also at the core of Manolo's art-making process.
Before starting the interview, I'd like to thank Artnome's brilliant digital collections analyst, Kaesha Freyaldenhoven, who acted as our English-to-Spanish interpreter and later transcribed the interview, translating it into English. None of this would be possible without her enthusiastic assistance, and we are very lucky to have it.
An Interview with Manolo Gamboa Naon
Jason Bailey (JB): A lot of generative artists either start as artists first or programmers first and then build the other skillset. Can you tell us a little about your background? How did you first end up making generative art?
Manolo Gamboa Naon (M): I was young. I was thirteen. But, well, I think in that moment, I started making images but I didn't know what I was doing. I did not realize that there were other artists out there doing the same things I was doing. Only after many years, and after finding other artists, did I say, 'Wow! There are people doing things with Flash that I can now appreciate.' I later switched to Processing.
JB: How long have you been using Processing?
M: Seven years.
JB: How did you learn it?
M: I had an orange book - Shiffman's - during this time. But I also started studying design as a career, and they encouraged us to learn Processing for a year. We had to learn how to program in the course. But during this time, I was more interested in creating interactive things than in design.
JB: I am often surprised how people misunderstand generative art. I had a professor who told me generative art would always be limited, as there is no such thing as an accident with a program. He believed accidents are where discovery happens. I disagreed. When I was making generative art, there were often surprises when I would run the code, and I would build and adjust the work based on those surprises. I am curious about your thoughts on the creative process in coding art. Is it a discovery process with trial and error and accidents and discovery like traditional art making? Or do you have the complete outcome in mind before you start? Or maybe neither?
M: Generally, one has an idea, makes a first draft of this idea, and then begins correcting. All of the time, when I come and create, the most beautiful parts of the work are born from the errors. After a certain point, I believe that the maturity of my style was formed by making small errors, because I was discovering as I went along. From these errors, I take an idea and it stays. I learn how to manipulate from these errors. The error is central to the work of generative artists, alongside, obviously, the rules - the rules become text that converts into an image. It is impossible to have what you imagine become exactly what you see. The beginning is errors, errors, errors, errors. They are beautiful errors.
Manolo works in series across themes. Below we look at pppp as a theme, with several works that Manolo developed over time.
JB: For me, your sense of color and pattern is what stands out the most. I feel like your choice of color palettes is very smart, and you evoke strong feelings through this alone. For example, you have some recent works that make me nostalgic for the '80s, with shapes and colors that were very popular in that decade. Where do you get the inspiration for your color palette?
NNN (four works from the series) - Manolo Gamboa Naon, March 5, 2018
M: Color is a problem in my life. Realistically, when I began making generative artwork, I realized that programmers - I mean, I don’t want to generalize - but they do not give color a lot of importance. They do not have an intention. But in the past five or six years, I have been attempting to feel more comfortable using colors. Because it matters - a lot. And now, sometimes I spend more time forming a color palette than programming. My inspiration comes from looking around all the time. I look at a lot of things from design. Instagram, Twitter, all the time searching for references. A movie, an old newspaper. Inspiration comes from many places. In my work, I intend to evoke something - from a time or of a certain quality.
JB: What are the biggest changes you have seen in the last decade in Processing and generative art?
M: During the Flash era, people were generating compelling visual art. Since then there have been changes, but I am not sure whether the changes have been progress. I want to learn English so that I can start getting to know the world. I mean, there are many people who are interested in generative art on Twitter. Sometimes on the forums I do not know what people are saying, but I do understand the code.
MMGGK2 - Manolo Gamboa Naon, August 23, 2018
JB: What are the new trends? How do you think things like AI, machine learning, and deep learning will impact the future of generative art?
M: I think there are many artists working in deep learning and artificial intelligence right now. There is a lot of great progress. I prefer to work with the geometry, not so much on the side of AI or machine learning. It does not interest me that much. Maybe one day I will start getting into it. But from my perspective, they are two separate paths. Maybe I’m just old. I am very impressed by the work. But I do not know if this is something that I have a desire to pursue.
JB: I know many generative artists, particularly in the Processing community, make the code for their projects open source for others to learn from. Is this something you do as well? Why or why not?
M: A few weeks ago I published all of the code for everything I have created since 2007. Certainly there are some pieces that don't work, but everything I have created in Processing is online. For me, other people's code was a huge learning tool, and it seems like a good thing to share these sorts of things. A few people have asked me for my code, and it would always take me a long time to respond individually; in this form, I can pass along the information. Although I speak Spanish, the good thing about code is that I can read it in any language. Code transcends language, and to me, that is beautiful.
JB: How have you shown your work in the past? Do you only show it online because it is digital? Or have you shown it in galleries as well?
M: Principally, I share all of my work online. I really like the idea of people sharing online. My works have never been for printing. Really, I prefer to post them online and share the digital images in that way. I consume art through the internet, and I prefer that it stays there. That the works are viewed online is very important to me. I like the movement and allowing the work itself to live. Although there have been some artists in Buenos Aires who have printed their works and shared them in that way, this is not the route that I would like to take. I prefer to keep my works digital.
JB: Where do you get your inspiration?
M: Only after learning about the Argentinian scene of generative art did I become familiar with artists from the world. Artists who inspire me include:
JB: Is there anything else you would like to share with the Artnome audience?
M: I feel that images are dying; they are disposable. Their lifetimes are very short, considering the quantity of images being produced. But I also think contemporary art is on the rise. Art is becoming a more central part of the culture. It has always been that way for music, but now the visual arts - design, for example - are occupying more important positions in the lives of the people. I love what is happening.
There is also the destruction of the idea of the artistic genius. Well, sure, there are people doing really good things, but it is not like it was before. Many different and talented artists are gaining recognition for their work. I think it is important to see the destruction of the artistic genius, to understand that there are many people capable of doing what they did.
Communication between artists is also very important, with the help of the internet. Through the internet, I have been able to meet people who understand and appreciate what I am doing.
Conclusion
One of the great joys of having a platform like Artnome to share my thoughts on the intersection of art and tech is the opportunity to introduce people to deserving artists whom they may not know yet. Manolo may not be a fan of the idea of the artist as "genius," but he certainly fits all of my personal criteria for the distinction. His sensitivity to color and composition, the speed at which he explores new concepts, and the volume of compelling work he produces put him in a class of his own. I feel lucky that Manolo agreed to let us interview him for Artnome so we can all have a little more insight into his remarkable work. I'd encourage you all to follow him on Twitter @manoloide and to check out the rest of his amazing work on Behance.
I'd like to again thank Kaesha Freyaldenhoven for her amazing job as interpreter and translator for this interview. As always, if you have questions or ideas for a post, feel free to reach out to jason@artnome.com. If you have not already, I recommend you sign up for our newsletter to keep up with news and new articles.
But in the intervening months, Swarm had moved forward with launch preparations, hoping that the FCC would approve its license before liftoff, which was scheduled for January. Spangelo said she wanted to wait for the agency’s approval, even as the launch date neared.
“That was totally the intention,” she said. “We were still hopeful that we would get the application in time and be able to operate them.”
When I asked Spangelo why she didn’t stop the launch when the FCC denied Swarm’s application, she said, “Others have been granted applications after launching their satellites, so we were still hopeful at that point.”
The FCC declined to comment on whether the agency has approved applications by companies after they carry out a launch.
The FCC did comment on the status of the SpaceBees investigation. "The enforcement inquiry is still ongoing and I can't speak to what may or may not happen with that," said Neil Grace, a spokesperson for the agency.
Spangelo said the SpaceBees have shown themselves to be easily trackable by the Space Surveillance Network, as well as by LeoLabs, a California-based company that provides orbital data to commercial-satellite operators and others in an effort to prevent collisions.
It’s not clear whether the inquiry will result in disciplinary action against Swarm, and it’s even less clear what the nature of that would be. The agency is in uncharted regulatory territory. A penalty would send a clear message to other commercial-satellite providers, and might result in more stringent application rules down the line. Not issuing a penalty could risk the rise of a nightmare scenario in low-Earth orbit, in which private companies disregard federal regulators. Swarm had philanthropic intentions, but others might not.
Regulatory questions will only become more pressing as greater numbers of U.S. commercial companies produce satellites—and not just a handful of them, but entire constellations. Iridium, based in Virginia, is set to launch the final few of its 66 satellites this fall; together they form a network that provides phone and data services. OneWeb, also based in Virginia, wants to launch 882 satellites that would provide internet services to people around the globe. SpaceX, Elon Musk's rocket company, in February launched two prototypes of a proposed 12,000-satellite constellation that would do the same.
The ambitions of these and other companies, if successful, will sprinkle scores of satellites into a space already crowded with them. According to the latest numbers from the European Space Agency, there are 1,800 functioning satellites, 4,700 defunct ones, and 29,000 pieces of debris floating around Earth. Since I last wrote about Swarm’s unauthorized launch, just four months ago, the number of functioning satellites has grown by 400.
In this post, we will use the Ford GoBike Real-Time System, StreamSets Data Collector, Apache Kafka and MapD to create a real-time data pipeline of bike availability in the Ford GoBike bikeshare ecosystem. We’ll walk through the architecture and configuration that enables this data pipeline and share a simple auto-updating dashboard within MapD Immerse.
High-Level Architecture
The high-level architecture consists of two data pipelines: one polls the GoBike system and publishes that data to Kafka; the other consumes from Kafka using StreamSets, transforms the data, and then writes it to MapD.
In the remainder of this post, I’ll outline each of the components of the pipelines and their configurations.
GBFS
GBFS (the General Bikeshare Feed Specification) is a data feed specification for bike share programs. It provides several JSON schemas so that data from any bike share system can be consumed in the same form by app developers and researchers. The Ford GoBike system publishes GBFS, so for simplicity I'll refer to the Ford data as GBFS for the remainder of this article.
From GBFS, there are several feeds available, and the details about each can be found here. For our use case, we're interested in each station's status, as well as each station's location for later visualization. For a sample of the feeds, you can check out the documentation.
For this project, we'll poll GBFS for the station status. Station locations change infrequently, so a daily or weekly cron pull of the station locations should be sufficient to keep our tables up to date.
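The polling step itself is easy to sketch outside of StreamSets. Here is a minimal Python version; the feed URL and the 10-second interval mirror the setup described later in this post, but the exact endpoint and field names are assumptions based on the GBFS spec, not pulled from the pipeline config:

import time
import requests

# Hypothetical endpoint; the GBFS index file for Ford GoBike lists the real feed URLs.
STATION_STATUS_URL = "https://gbfs.fordgobike.com/gbfs/en/station_status.json"

def poll_station_status():
    """Fetch one snapshot of every station's current status."""
    resp = requests.get(STATION_STATUS_URL, timeout=5)
    resp.raise_for_status()
    # GBFS wraps the station records in data.stations.
    return resp.json()["data"]["stations"]

if __name__ == "__main__":
    while True:
        for station in poll_station_status():
            print(station["station_id"], station.get("num_bikes_available"))
        time.sleep(10)  # matches the 10-second polling interval used below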
StreamSets
The majority of the heavy lifting for this system is managed using StreamSets. If you're unfamiliar with StreamSets, their website and documentation are top-notch. At a high level, StreamSets is a plug-and-play stream-processing framework. I like to think of it as Spark Streaming with a UI on top. It provides a drag-and-drop interface for the source-processor-sink streaming model.
HTTP Client
To poll GBFS, I created an HTTP Client origin in StreamSets, pointed at the station status feed and set to poll on an interval.
Each GET request returns the status of every Ford GoBike station, as outlined in the GBFS specification. The more desirable state would be for each station's JSON object to be its own message. Fortunately, StreamSets has a pivot processor that takes the JSON array of stations and flattens it so that each object becomes its own message; its configuration simply points at the array to pivot on.
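In plain Python, the pivot amounts to the following (the station values here are made up for illustration):

# One poll returns a single record wrapping an array of stations...
payload = {
    "last_updated": 1535000000,
    "data": {
        "stations": [
            {"station_id": "3", "num_bikes_available": 7, "num_docks_available": 8},
            {"station_id": "5", "num_bikes_available": 0, "num_docks_available": 19},
        ]
    },
}

# ...and the pivot turns it into one message per station, carrying the
# poll timestamp along with each record.
messages = [
    {**station, "last_updated": payload["last_updated"]}
    for station in payload["data"]["stations"]
]

for message in messages:
    print(message)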
Kafka Producer
With the data polled from the GBFS API and flattened, it's time to write it to Kafka. Connecting to Kafka with StreamSets is quite simple: a default installation of StreamSets comes with the Kafka package, which has all the facilities for producing and consuming Kafka messages.
If you have a distributed deployment of Kafka (which you should for production), you have to ensure all of the Kafka broker URIs are entered, as in the sketch below.
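As a rough Python equivalent of the producer stage, using the kafka-python client (the broker addresses and topic name are hypothetical):

import json
from kafka import KafkaProducer

# List every broker so the client can still bootstrap if one node is down.
producer = KafkaProducer(
    bootstrap_servers=["kafka1:9092", "kafka2:9092", "kafka3:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One flattened station record per Kafka message.
producer.send("station_status", {"station_id": "3", "num_bikes_available": 7})
producer.flush()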
Kafka Consumer
With the producer pipeline working, we're publishing about 500 messages every two seconds to Kafka. We'd like to consume that data and write it to MapD. The StreamSets Kafka Consumer is easy to configure.
For the consumer, you need both the broker URIs and the ZooKeeper URIs. In initial testing, I ran into failures downstream at the JDBC connector: some fields in the JSON payload arrive as strings but are defined as integers in my MapD table schema.
Using the StreamSets Field Converter, I isolated these fields and converted them to the appropriate data types. A plain-Python equivalent of the consume-and-convert step is sketched below.
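Again, this is a sketch rather than the pipeline's actual configuration; the topic name, broker list, and the exact set of integer fields are assumptions:

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "station_status",  # hypothetical topic name
    bootstrap_servers=["kafka1:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Fields that arrive as JSON strings but are integer columns in MapD.
INT_FIELDS = ("num_bikes_available", "num_docks_available")

for record in consumer:
    message = record.value
    # Equivalent of the StreamSets Field Converter step.
    for field in INT_FIELDS:
        message[field] = int(message[field])
    print(message)  # in the real pipeline, this record goes to the JDBC sink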
MapD via JDBC
Now that the rest of the pipeline is ready, we need to sink to our final destination, the MapD database.
Step 1 is getting the JDBC driver for MapD installed in StreamSets. You can download the latest driver here and install it from the packages page in StreamSets.
Step 2 is to configure the JDBC connection in StreamSets. Connecting to MapD via JDBC is outlined in their documentation. I used a JSON configuration to define each field in the MapD table and how it maps to the JSON payload coming from Kafka; a sketch of that mapping follows.
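The mapping itself is just field-path-to-column pairs. Its shape looks roughly like this; the column names are my guesses at the table schema, and the exact key names in a StreamSets pipeline export may differ:

# Record paths from the Kafka message on the left, MapD columns on the right.
FIELD_TO_COLUMN_MAPPING = [
    {"field": "/station_id",          "columnName": "station_id"},
    {"field": "/num_bikes_available", "columnName": "num_bikes_available"},
    {"field": "/num_docks_available", "columnName": "num_docks_available"},
    {"field": "/last_updated",        "columnName": "last_updated"},
]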
With these configurations, our data pipeline is complete.
Conclusion & Dashboard
With the 10-second polling interval on GBFS and the splitting of each GBFS record, we can write roughly 1,000 messages per second to MapD. We could poll more often to increase the throughput, but this should be adequate for our needs. Using MapD Immerse, we can build an auto-updating dashboard to monitor our record counts in real time.
In a future post, we'll use this same system to ingest more feeds that follow the GBFS, to test the scalability of streaming larger volumes of data into MapD.
Notes
Thanks to Randy Zwitch for editing previous drafts of this post and for his contributions to building the pipeline and provisioning the VMs.