Channel: Hacker News

Astronaut’s DNA No Longer Matches His Identical Twin’s After Year Spent in Space


Spending a year in space not only changes your outlook, it transforms your genes.

Preliminary results from NASA's Twins Study reveal that 7% of astronaut Scott Kelly's genes did not return to normal after his return to Earth two years ago.

The study looks at what happened to Kelly before, during and after he spent one year aboard the International Space Station through an extensive comparison with his identical twin, Mark, who remained on Earth.

NASA has learned that the formerly identical twins are no longer genetically the same.

'Space genes'

The transformation of 7% of Scott's DNA suggests longer-term changes in genes related to at least five biological pathways and functions.

The newest preliminary results from this unique study of Scott, now retired from NASA, were released at the 2018 Investigator's Workshop for NASA's Human Research Program in January. Last year, NASA published its first round of preliminary results at the 2017 Investigator's Workshop. Overall, the 2018 findings corroborated those from 2017, with some additions.

Expedition 46 Commander Scott Kelly of NASA rests in a chair outside of the Soyuz TMA-18M spacecraft just minutes after he and Russian cosmonauts Mikhail Kornienko and Sergey Volkov of Roscosmos landed in a remote area near the town of Zhezkazgan, Kazakhstan on March 2, 2016. (Credit: NASA)


To track physical changes caused by time in space, scientists measured Scott's metabolites (necessary for maintaining life), cytokines (secreted by immune system cells) and proteins (workhorses within each cell) before, during and after his mission. The researchers learned that spaceflight is associated with oxygen-deprivation stress, increased inflammation and dramatic nutrient shifts that affect gene expression.

In particular, Chris Mason of Weill Cornell Medicine reported on the activation of Scott's "space genes" while confirming the results of his separate NASA study, published last year.

To better understand the genetic dynamics of each twin, Mason and his team focused on chemical changes in RNA and DNA. Whole-genome sequencing revealed that each twin carries more unique mutations in his genome than expected -- hundreds, in fact.

Although 93% of Scott's genetic expression returned to normal once he returned to Earth, a subset of several hundred "space genes" remained disrupted. Some of these mutations, found only after spaceflight, are thought to be caused by the stresses of space travel.

As genes turn on and off, changes in cell function may occur.

Looking to Mars

Mason's work shows that one of the most important changes to Scott's cells was hypoxia, or a deficient amount of tissue oxygenation, probably due to a lack of oxygen and high levels of carbon dioxide. Possible damage to mitochondria, the "power plants of cells," also occurred in Scott's cells, as indicated by mitochondrial stress and increased levels of mitochondria in the blood.

Mason's team also saw changes in the length of Scott's telomeres, caps at the end of chromosomes that are considered a marker of biological aging. First, there was a significant increase in average length while he was in space, and then there was a decrease in length within about 48 hours of his landing on Earth that stabilized to nearly preflight levels. Scientists believe that these telomere changes, along with the DNA damage and DNA repair measured in Scott's cells, were caused by both radiation and calorie restrictions.

Expedition 43 NASA Astronaut Scott Kelly, left, and his identical twin brother Mark Kelly, pose for a photograph March 26, 2015, at the Cosmonaut Hotel in Baikonur, Kazakhstan. (Credit: Bill Ingalls/NASA via Getty Images)


Additionally, the team found changes in Scott's collagen, blood clotting and bone formation due, most likely, to fluid shifts and zero gravity. The researchers discovered hyperactive immune activity as well, thought to be the result of his radically different environment: space.

The Twins Study helps NASA gain insight into what happens to the human body in space beyond the usual six-month International Space Station missions previously studied in other astronauts. Ten groups of researchers, including Mason's team, are looking at a wide variety of information about the Kelly twins' health, including how gut bacteria, bones and the immune system might be affected by living off planet.

Kelly's one-year mission is a scientific stepping stone to a planned three-year mission to Mars, NASA said. Research into how the human body adjusts to weightlessness, isolation, radiation and the stress of long-duration spaceflight is needed before astronauts are sent on journeys that would triple the time humans have spent in space so far.


Operating Broadcom Wi-Fi Chips as Arbitrary Signal Transmitters, Like SDRs



This project demonstrates our discovery that turns Broadcom's 802.11ac Wi-Fi chips into software-defined radios that transmit arbitrary signals in the Wi-Fi bands. In this example, we patch the Wi-Fi firmware of BCM4339 devices installed in Nexus 5 smartphones. The firmware patch activates three ioctls:

  1. NEX_WRITE_TEMPLATE_RAM (426) writes arbitrary data into Template RAM that stores the raw IQ samples that we may transmit. The ioctl's payload contains (1) an int32 value indicating the offset where data should be written in Template RAM in bytes, (2) an int32 value indicating the length of the data that should be written and (3) the IQ samples as array of IQ values, where I (inphase components) and Q (quadrature components) are stored as int16 numbers.

  2. NEX_SDR_START_TRANSMISSION (427) that triggers the transmission of IQ samples. The ioctl's payload contains (1) an int32 value indicating the number of samples to transmit, (2) an int32 value indicating the offset where the signal starts in Template RAM, (3) an int32 value indicating a chanspec (channel number, bandwidth, band, ...), (4) an int32 value indicating the power index (lower value means higher output power), and (5) an int32 value indicating whether to loop over the IQ samples or transmit them only once.

  3. NEX_SDR_STOP_TRANSMISSION (428) stops a transmission started using NEX_SDR_START_TRANSMISSION.
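As a host-side sketch of how these two payloads could be assembled from the field layouts described above (the little-endian byte order, the helper names, and the sample values are our illustrative assumptions, not part of the firmware interface):

```python
import struct

# ioctl numbers from the list above
NEX_WRITE_TEMPLATE_RAM     = 426
NEX_SDR_START_TRANSMISSION = 427

def pack_template_ram_payload(offset_bytes, iq_samples):
    """Pack the NEX_WRITE_TEMPLATE_RAM payload: int32 offset, int32
    length, then the IQ samples as interleaved int16 I/Q pairs.
    Little-endian byte order is assumed here."""
    flat = [component for pair in iq_samples for component in pair]
    data = struct.pack('<%dh' % len(flat), *flat)
    return struct.pack('<ii', offset_bytes, len(data)) + data

def pack_sdr_start_payload(num_samples, offset, chanspec, power_index, loop):
    """Pack the NEX_SDR_START_TRANSMISSION payload: five int32 fields
    in the order described above."""
    return struct.pack('<iiiii', num_samples, offset, chanspec,
                       power_index, loop)

# four illustrative IQ samples at an arbitrary full scale of 2047
payload = pack_template_ram_payload(0, [(2047, 0), (0, 2047),
                                        (-2047, 0), (0, -2047)])
# chanspec 0x1001 and power index 60 are placeholders, not real values
start = pack_sdr_start_payload(4, 0, 0x1001, 60, 0)
```

In practice these byte strings would be handed to the firmware via nexutil rather than built by hand; the sketch only shows how the documented fields line up.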

The directory payload_generation contains the MATLAB script generate_frame.m that generates a Wi-Fi beacon frame with SSID MyCovertChannel. The generated IQ samples are written to a bash script that calls nexutil from the nexmon.org project to load the samples into the Wi-Fi chip's Template RAM using ioctls. You can either generate your own signals or use the example myframe.sh file for transmitting the generated Wi-Fi frame. To this end, follow the Getting Started instructions below to install our patched Wi-Fi firmware on a Nexus 5 smartphone. Then copy myframe.sh to a directory that allows execution (such as /su/xbin/). To load the samples and start a single transmission, simply execute the bash script and observe the results by listening with a Wi-Fi sniffer on channel 1. A suitable Wireshark filter is wlan.addr == 82:7b:be:f0:96:e0. Of course, you are not limited to transmitting handcrafted Wi-Fi signals; you can transmit whatever you like in the 2.4 and 5 GHz bands. Nevertheless, you have to obey your local laws, which may prohibit you from transmitting any signals at all.
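generate_frame.m produces the beacon-frame samples; as a much simpler, hypothetical illustration of what raw IQ samples look like, the following generates a single complex baseband tone (the 20 Msps rate and the 2047 amplitude are arbitrary assumptions for the sketch, not values taken from the nexmon project):

```python
import math

FS = 20e6      # assumed sample rate in Hz
F_TONE = 1e6   # baseband tone frequency in Hz
N = 100        # number of samples (5 full tone periods)

iq = []
for k in range(N):
    phase = 2.0 * math.pi * F_TONE * k / FS
    i = int(round(2047 * math.cos(phase)))   # inphase component
    q = int(round(2047 * math.sin(phase)))   # quadrature component
    iq.append((i, q))
```

Samples like these could then be packed as int16 pairs and loaded into Template RAM as described above; looping over them would transmit a continuous tone offset 1 MHz from the channel center.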

Any use of the Software which results in an academic publication or other publication which includes a bibliography must include citations to the nexmon project a) and the paper cited under b) or the thesis cited under c):

a) "Matthias Schulz, Daniel Wegemer and Matthias Hollick. Nexmon: The C-based Firmware Patching Framework. https://nexmon.org"

b) "Matthias Schulz, Jakob Link, Francesco Gringoli, and Matthias Hollick. Shadow Wi-Fi: Teaching Smartphones to Transmit Raw Signals and to Extract Channel State Information to Implement Practical Covert Channels over Wi-Fi. Accepted to appear in Proceedings of the 16th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2018), June 2018."

c) "Matthias Schulz. Teaching Your Wireless Card New Tricks: Smartphone Performance and Security Enhancements through Wi-Fi Firmware Modifications. Dr.-Ing. thesis, Technische Universität Darmstadt, Germany, February 2018."

To compile the source code, you are required to first clone the original nexmon repository that contains our C-based patching framework for Wi-Fi firmware. Then clone this repository as one of the sub-projects in the corresponding patches sub-directory. This allows you to build and compile all the firmware patches required to repeat our experiments. The following steps will get you started on Xubuntu 16.04 LTS:

  1. Install some dependencies: sudo apt-get install git gawk qpdf adb
  2. Only necessary for x86_64 systems, install i386 libs:
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386
  3. Clone the nexmon base repository: git clone https://github.com/seemoo-lab/nexmon.git.
  4. Download and extract Android NDK r11c (use exactly this version!).
  5. Export the NDK_ROOT environment variable pointing to the location where you extracted the NDK so that it can be found by our build environment.
  6. Navigate to the previously cloned nexmon directory and execute source setup_env.sh to set a couple of environment variables.
  7. Run make to extract ucode, templateram and flashpatches from the original firmware files.
  8. Navigate to utilities and run make to build all utilities such as nexutil.
  9. Attach your rooted Nexus 5 smartphone running stock firmware version 6.0.1 (M4B30Z, Dec 2016).
  10. Run make install to install all the built utilities on your phone.
  11. Navigate to patches/bcm4339/6_37_34_43/ and clone this repository: git clone https://github.com/seemoo-lab/mobisys2018_nexmon_software_defined_radio.git
  12. Enter the created subdirectory mobisys2018_nexmon_software_defined_radio and run make install-firmware to compile our firmware patch and install it on the attached Nexus 5 smartphone.

Get references as bibtex file

Secure Mobile Networking Lab (SEEMOO)


Networked Infrastructureless Cooperation for Emergency Response (NICER)


Multi-Mechanisms Adaptation for the Future Internet (MAKI)


Technische Universität Darmstadt


Brainless Embryos Suggest Bioelectricity Guides Growth


The tiny tadpole embryo looked like a bean. One day old, it didn’t even have a heart yet. The researcher in a white coat and gloves who hovered over it made a precise surgical incision where its head would form. Moments later, the brain was gone, but the embryo was still alive.

The brief procedure took Celia Herrera-Rincon, a neuroscience postdoc at the Allen Discovery Center at Tufts University, back to the country house in Spain where she had grown up, in the mountains near Madrid. When she was 11 years old, while walking her dogs in the woods, she found a snake, Vipera latastei. It was beautiful but dead. “I realized I wanted to see what was inside the head,” she recalled. She performed her first “lab test” using kitchen knives and tweezers, and she has been fascinated by the many shapes and evolutionary morphologies of the brain ever since. Her collection now holds about 1,000 brains from all kinds of creatures.

This time, however, she was not interested in the brain itself, but in how an African clawed frog would develop without one. She and her supervisor, Michael Levin, a software engineer turned developmental biologist, are investigating whether the brain and nervous system play a crucial role in laying out the patterns that dictate the shapes and identities of emerging organs, limbs and other structures.

For the past 65 years, the focus of developmental biology has been on DNA as the carrier of biological information. Researchers have typically assumed that genetic expression patterns alone are enough to determine embryonic development.

To Levin, however, that explanation is unsatisfying. “Where does shape come from? What makes an elephant different from a snake?” he asked. DNA can make proteins inside cells, he said, but “there is nothing in the genome that directly specifies anatomy.” To develop properly, he maintains, tissues need spatial cues that must come from other sources in the embryo. At least some of that guidance, he and his team believe, is electrical.

In recent years, by working on tadpoles and other simple creatures, Levin’s laboratory has amassed evidence that the embryo is molded by bioelectrical signals, particularly ones that emanate from the young brain long before it is even a functional organ. Those results, if replicated in other organisms, may change our understanding of the roles of electrical phenomena and the nervous system in development, and perhaps more widely in biology.

“Levin’s findings will shake some rigid orthodoxy in the field,” said Sui Huang, a molecular biologist at the Institute for Systems Biology. If Levin’s work holds up, Huang continued, “I think many developmental biologists will be stunned to see that the construction of the body plan is not due to local regulation of cells … but is centrally orchestrated by the brain.”

Bioelectrical Influences in Development

The Spanish neuroscientist and Nobel laureate Santiago Ramón y Cajal once called the brain and neurons, the electrically active cells that process and transmit nerve signals, the “butterflies of the soul.” The brain is a center for information processing, memory, decision making and behavior, and electricity figures into its performance of all of those activities.

But it’s not just the brain that uses bioelectric signaling — the whole body does. All cell membranes have embedded ion channels, protein pores that act as pathways for charged molecules, or ions. Differences between the number of ions inside and outside a cell result in an electric gradient — the cell’s resting potential. Vary this potential by opening or blocking the ion channels, and you change the signals transmitted to, from and among the cells all around. Neurons do this as well, but even faster: To communicate among themselves, they use molecules called neurotransmitters that are released at synapses in response to voltage spikes, and they send ultra-rapid electrical pulses over long distances along their axons, encoding information in the pulses’ pattern, to control muscle activity.
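As a rough illustration of how such ion gradients translate into a membrane voltage, the standard Nernst equation gives the equilibrium potential for a single ion species (the concentrations below are textbook-typical values for potassium, not measurements from Levin's work):

```python
import math

def nernst_mV(z, c_out, c_in, temp_K=310.0):
    """Nernst equilibrium potential in millivolts:
    E = (R * T) / (z * F) * ln(c_out / c_in)."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_K) / (z * F) * math.log(c_out / c_in)

# Typical mammalian K+ concentrations: ~5 mM outside, ~140 mM inside.
e_k = nernst_mV(z=1, c_out=5.0, c_in=140.0)   # roughly -89 mV
```

A real resting potential blends the contributions of several ion species weighted by membrane permeability, which is exactly the quantity that opening or blocking ion channels shifts.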

Levin has thought about hacking networks of neurons since the mid-1980s, when he was a high school student in the suburbs near Boston, writing software for pocket money. One day, while browsing a small bookstore in Vancouver at Expo 86 with his father, he spotted a volume called The Body Electric, by Robert O. Becker and Gary Selden. He learned that scientists had been investigating bioelectricity for centuries, ever since Luigi Galvani discovered in the 1780s that nerves are animated by what he called “animal electricity.”

However, as Levin continued to read up on the subject, he realized that, even though the brain uses electricity for information processing, no one seemed to be seriously investigating the role of bioelectricity in carrying information about a body’s development. Wouldn’t it be cool, he thought, if we could comprehend “how the tissues process information and what tissues were ‘thinking about’ before they evolved nervous systems and brains?”

He started digging deeper and ended up getting a biology doctorate at Harvard University in morphogenesis — the study of the development of shapes in living things. He worked in the tradition of scientists like Emil du Bois-Reymond, a 19th-century German physician who discovered the action potential of nerves. In the 1930s and ’40s, the American biologists Harold Burr and Elmer Lund measured electric properties of various organisms during their embryonic development and studied connections between bioelectricity and the shapes animals take. They were not able to prove a link, but they were moving in the right direction, Levin said.

Before Genes Reigned Supreme

The work of Burr and Lund occurred during a time of widespread interest in embryology. Even the English mathematician Alan Turing, famed for cracking the Enigma code, was fascinated by embryology. In 1952 he published a paper suggesting that body patterns like pigmented spots and zebra stripes arise from the chemical reactions of diffusing substances, which he called morphogens.

But organic explanations like morphogens and bioelectricity didn’t stay in the limelight for long. In 1953, James Watson and Francis Crick published the double helical structure of DNA, and in the decades since “the focus of developmental biology has been on DNA as the carrier of biological information, with cells thought to follow their own internal genetic programs, prompted by cues from their local environment and neighboring cells,” Huang said.

The rationale, according to Richard Nuccitelli, chief science officer at Pulse Biosciences and a former professor of molecular biology at the University of California, Davis, was that “since DNA is what is inherited, information stored in the genes must specify all that is needed to develop.” Tissues are told how to develop at the local level by neighboring tissues, it was thought, and each region patterns itself from information in the genomes of its cells.

The extreme form of this view is “to explain everything by saying ‘it is in the genes,’ or DNA, and this trend has been reinforced by the increasingly powerful and affordable DNA sequencing technologies,” Huang said. “But we need to zoom out: Before molecular biology imposed our myopic tunnel vision, biologists were much more open to organism-level principles.”

The tide now seems to be turning, according to Herrera-Rincon and others. “It’s too simplistic to consider the genome as the only source of biological information,” she said. Researchers continue to study morphogens as a source of developmental information in the nervous system, for example. Last November, Levin and Chris Fields, an independent scientist who works in the area where biology, physics and computing overlap, published a paper arguing that cells’ cytoplasm, cytoskeleton and both internal and external membranes also encode important patterning data — and serve as systems of inheritance alongside DNA.

And, crucially, bioelectricity has made a comeback as well. In the 1980s and ’90s, Nuccitelli, along with the late Lionel Jaffe at the Marine Biological Laboratory, Colin McCaig at the University of Aberdeen, and others, used applied electric fields to show that many cells are sensitive to bioelectric signals and that electricity can induce limb regeneration in nonregenerative species.

According to Masayuki Yamashita of the International University of Health and Welfare in Japan, many researchers forget that every living cell, not just neurons, generates electric potentials across the cell membrane. “This electrical signal works as an environmental cue for intercellular communication, orchestrating cell behaviors during morphogenesis and regeneration,” he said.

However, no one was really sure why or how this bioelectric signaling worked, said Levin, and most still believe that the flow of information is very local. “Applied electricity in earlier experiments directly interacts with something in cells, triggering their responses,” he said. But what it was interacting with and how the responses were triggered were mysteries.

That’s what led Levin and his colleagues to start tinkering with the resting potential of cells. By changing the voltage of cells in flatworms, over the last few years they produced worms with two heads, or with tails in unexpected places. In tadpoles, they reprogrammed the identity of large groups of cells at the level of entire organs, making frogs with extra legs and changing gut tissue into eyes — simply by hacking the local bioelectric activity that provides patterning information.

And because the brain and nervous system are so conspicuously active electrically, the researchers also began to probe their involvement in long-distance patterns of bioelectric information affecting development. In 2015, Levin, his postdoc Vaibhav Pai, and other collaborators showed experimentally that bioelectric signals from the body shape the development and patterning of the brain in its earliest stages. By changing the resting potential in the cells of tadpoles as far from the head as the gut, they appeared to disrupt the body’s “blueprint” for brain development. The resulting tadpoles’ brains were smaller or even nonexistent, and brain tissue grew where it shouldn’t.

Unlike previous experiments with applied electricity that simply provided directional cues to cells, “in our work, we know what we have modified — resting potential — and we know how it triggers responses: by changing how small signaling molecules enter and leave cells,” Levin said. The right electrical potential lets neurotransmitters go in and out of voltage-powered gates (transporters) in the membrane. Once in, they can trigger specific receptors and initiate further cellular activity, allowing researchers to reprogram identity at the level of entire organs.

 

This work also showed that bioelectricity works over long distances, mediated by the neurotransmitter serotonin, Levin said. (Later experiments implicated the neurotransmitter butyrate as well.) The researchers started by altering the voltage of cells near the brain, but then they went farther and farther out, “because our data from the prior papers showed that tumors could be controlled by electric properties of cells very far away,” he said. “We showed that cells at a distance mattered for brain development too.”

Then Levin and his colleagues decided to flip the experiment. Might the brain hold, if not an entire blueprint, then at least some patterning information for the rest of the body, Levin asked — and if so, might the nervous system disseminate this information bioelectrically during the earliest stages of a body’s development? He invited Herrera-Rincon to get her scalpel ready.

Making Up for a Missing Brain

Herrera-Rincon’s brainless Xenopus laevis tadpoles grew, but within just a few days they all developed highly characteristic defects, and not just near the brain but as far away as the very end of their tails. Their muscle fibers were also shorter, and their nervous systems, especially the peripheral nerves, were growing chaotically. It’s not surprising that nervous system abnormalities that impair movement can affect a developing body. But according to Levin, the changes seen in their experiment showed that the brain helps to shape the body’s development well before the nervous system is fully developed, and long before any movement starts.

False memories, or why we're so sure of things we're wrong about


Felipe De Brigard is an assistant professor of philosophy at Duke, where he runs the Imagination and Modal Cognition Lab. That may not sound like philosophy in the traditional sense; De Brigard’s work is at the intersection of philosophy and neuroscience, and is supported right now by a $500,000 grant from the Office of Naval Research.

Professor De Brigard: You explore how the imagination and memory interact. How did you become interested in this?

I’ve been interested in memory for as long as I can remember. In college, I studied neuropsychology in addition to philosophy, and interacted with patients suffering from memory deficits. One such deficit, known as confabulation, involved patients coming up with fantastic stories about absurd scenarios as if they were autobiographical memories. What intrigued me about this phenomenon is not what these patients got wrong -- which was a lot -- but rather what they got right. Despite being false, there was an air of plausibility to their confabulations.

 

What are you working on now?

My main interest is memory, and in particular the fact that we have false memories. We think we’re so sure that we remember something that happened – but it didn’t.

In trying to understand how we generate these false memories, I was struck by this model in which memory reconstructs a scene. You have pieces and gaps, and the gaps get filled in with whatever content is most likely to belong there.

 

You mean, what you want to be true?

Not necessarily. You almost never have control over the process of filling up or reconstructing your memories. Unbeknownst to you, the process operates probabilistically. Given your past experiences and knowledge, your memory system gives you the most likely thing that could have happened to help you fill any gaps at the time of retrieval. Most of the time the final product coincides with what actually happened, so it is true. But sometimes it does not, in which case we talk about false memories. I think most of our memories are reconstructed in this way, and many of them are false, we just don’t notice.

All of this made me think that when we imagine things, we also imagine how things could have occurred in the past. Say you were crossing the street and almost got hit by a car. You can’t help but imagine what would have happened if you did get hit by the car. We just slip into that kind of imagination.

Those two things are closely linked – the counter-factual ‘what-if’ imagination, and the way memory recollects things. My lab works a lot on that connection, between memory retrieval and imagining hypothetical scenarios.

The (Navy) grant has to do with causation. People make judgments about causes all the time. Sometimes, what we think is the cause of something is the failure of something to act. If I ask you to water my plants over the weekend while I’m on vacation, and then you forget and my plants die, I’d say your failing to water my plants caused them to die.

Those sorts of causal judgments are really weird because the absence of an event is what causes something to happen. Something didn’t happen, and as a result, there was an effect. That’s called an omissive causation, because something was omitted. Trying to understand that is difficult.

My graduate student Paul Henne and I are interested in understanding how people make judgments about omissions. The nonexistence of an event led to the plants dying. I blame you; why don’t I blame something else? The President of the United States didn’t water my plants, but I’m not blaming him. So why do I choose you?

 

Okay, but in this case, I agreed to water your plants.

That’s right. We had an agreement. We both know that you were supposed to water the plants.

But in a lot of cases, there isn’t a clear agreement. The grant I’m working on explores how people decide what the cause is where the options are less clear. Our suggestion is that this process is dependent on how people generate counter-factual thoughts, and we want to understand how that psychological process works. Ultimately, we want to understand the psychological mechanisms underlying these kinds of cognitive processes to generate models that can contribute to computer programs and artificial intelligence.

 

What is artificial intelligence?

There is a very uncontroversial way of defining it: AI is a discipline of computer science that seeks to generate computer programs and algorithms to solve complex problems for which we thought human intelligence was required. Period. Some people, I think without much reason, think of AI as a discipline that seeks to manufacture minds and conscious entities. This may or may not happen, but such an outcome is neither its objective nor the best way to define it, in my opinion. It is like saying that medical science is the discipline that seeks to make us immortal. This may or may not happen, but in reality what medical science aims to do is prevent us from dying of diseases, accidents, etc.

 

How do you do this?

In our lab, we do a lot of behavioral and neuroimaging experiments using magnetic resonance imaging. We scan people’s brains while they’re doing a task – like thinking about events that could have occurred.

 

You create scenarios for people to think about?

Yes, sometimes. In one, I ask people to imagine better and worse outcomes of events. We measure the brain activity to see if we can find patterns and see what parts of the brain are more active or more important for certain tasks.

 

It sounds like work that can help you fundamentally understand a bit better just how we think.

Yes, that’s true. We specifically are working on how people imagine, and how memory both constrains and guides our imagination.

Git Magic: A Usage-First Guide to Git


Git is the version control Swiss army knife: a reliable, versatile, multipurpose revision control tool whose extraordinary flexibility makes it tricky to learn, let alone master.

As Arthur C. Clarke observed, any sufficiently advanced technology is indistinguishable from magic. This is a great way to approach Git: newbies can ignore its inner workings and view Git as a gizmo that can amaze friends and infuriate enemies with its wondrous abilities.

Rather than go into details, we provide rough instructions for particular effects. After repeated use, gradually you will understand how each trick works, and how to tailor the recipes for your needs.

I’m humbled that so many people have worked on translations of these pages. I greatly appreciate having a wider audience because of the efforts of those named above.

Dustin Sallings, Alberto Bertogli, James Cameron, Douglas Livingstone, Michael Budde, Richard Albury, Tarmigan, Derek Mahar, Frode Aannevik, Keith Rarick, Andy Somerville, Ralf Recker, Øyvind A. Holm, Miklos Vajna, Sébastien Hinderer, Thomas Miedema, Joe Malin, Tyler Breisacher, Sonia Hamilton, Julian Haagsma, Romain Lespinasse, Sergey Litvinov, Oliver Ferrigni, David Toca, Сергей Сергеев, Joël Thieffry, and Baiju Muthukadan contributed corrections and improvements.

François Marier maintains the Debian package originally created by Daniel Baumann.

My gratitude goes to many others for your support and praise. I’m tempted to quote you here, but it might raise expectations to ridiculous heights.

If I’ve left you out by mistake, please tell me or just send me a patch!

This guide is released under the GNU General Public License version 3. Naturally, the source is kept in a Git repository, and can be obtained by typing:

$ git clone git://repo.or.cz/gitmagic.git  # Creates "gitmagic" directory.

or from one of the mirrors:

$ git clone git://github.com/blynn/gitmagic.git
$ git clone git://gitorious.org/gitmagic/mainline.git
$ git clone https://code.google.com/p/gitmagic/
$ git clone git://git.assembla.com/gitmagic.git
$ git clone git@bitbucket.org:blynn/gitmagic.git

GitHub, Assembla, and Bitbucket support private repositories, the latter two for free.

​Linus Torvalds slams CTS Labs over AMD vulnerability report

CTS Labs, a heretofore unknown Tel Aviv-based cybersecurity startup, has claimed it's found over a dozen security problems with AMD Ryzen and EPYC processors. Linus Torvalds, Linux's creator, doesn't buy it.

Torvalds, in a Google+ discussion, wrote:

"When was the last time you saw a security advisory that was basically 'if you replace the BIOS or the CPU microcode with an evil version, you might have a security problem?' Yeah."

Or, as a commenter put it on the same thread, "I just found a flaw in all of the hardware space. No device is secure: if you have physical access to a device, you can just pick it up and walk away. Am I a security expert yet?"

They've got a point.

CTS Labs sprang out of nowhere to give AMD less than 24 hours to address these "problems."

The startup has jazzed up its discoveries with a research paper, a video describing the vulnerabilities, and, of course, fancy names for them: Ryzenfall, Master Key, Fallout, and Chimera.

CTS Labs claimed in an interview they gave AMD less than a day because they didn't think AMD could fix the problem for "many, many months, or even a year" anyway.

Why would they possibly do this? For Torvalds: "It looks more like stock manipulation than a security advisory to me."

These are real bugs though. Dan Guido, CEO of Trail of Bits, a security company with a proven track-record, tweeted: "Regardless of the hype around the release, the bugs are real, accurately described in their technical report (which is not public afaik), and their exploit code works." But, Guido also admitted, "Yes, all the flaws require admin [privileges] but all are flaws, not expected functionality."

It's that last part that ticks Torvalds off. The Linux creator agrees these are bugs, but all the hype annoys the heck out of him.

Are there bugs? Yes. Do they matter in the real world? No.

They require a system administrator to be almost criminally negligent to work. To Torvalds, inflammatory security reports are annoying distractions from getting real work done.

This is far from the first such case. A recent Linux "vulnerability," Chaos, required the attacker to have the root password. News flash: If an attacker has the root password, your system is already completely hosed. Everything else is just details.

Torvalds believes "it's the security industry that has taught everybody to not be critical of their findings."

He also thinks "there are real security researchers." For many of the rest, it's all about hyping even the most minor security bug. In Torvalds' words: "A catchy name and a website is almost required for a splashy security disclosure these days."

Torvalds thinks "security people need to understand that they look like clowns because of it. The whole security industry needs to just admit that they have a lot of sh*t going on, and they should use -- and encourage -- some critical thinking."

This rant is far from the first time Torvalds has snarled at people or companies for focusing on what he sees as the wrong end of security.

As he wrote on the Linux Kernel Mailing List (LKML) in 2008: "I refuse to bother with the whole security circus ... It makes "heroes" out of security people, as if the people who don't just fix normal bugs aren't as important. In fact, all the boring normal bugs are _way_ more important, just because there's a lot more of them. I don't think some spectacular security hole should be glorified or cared about as being any more 'special' than a random spectacular crash due to bad locking."

More recently, he doubled down on this position, saying last year about a proposed Linux kernel change, "Some security people have scoffed at me when I say that security problems are primarily 'just bugs'. Those security people are f**king morons."

What Torvalds really wants from security programmers and researchers, as he spelled out recently, is:

  • the first step should *ALWAYS* be "just report it." Not killing things, not even stopping the access. Report it. Nothing else.
  • "Do no harm" should be your mantra for any new hardening work.

Do that, and you'll make Torvalds, and a lot of other people who care about practical security, much happier.

Wikipedia was unaware it was being used by YouTube for conspiracy theory fact-checking


Toys R Us to close all 800 of its U.S. stores

Toy store chain Toys R Us is planning to sell or close all 800 of its U.S. stores, affecting as many as 33,000 jobs as the company winds down its operations after six decades, according to a source familiar with the matter.

The news comes six months after the retailer filed for bankruptcy. The company has struggled to pay down nearly $8 billion in debt — much of it dating to a 2005 leveraged buyout — and has had trouble finding a buyer. There were reports earlier this week that Toys R Us had stopped paying its suppliers, which include the country’s largest toymakers. On Wednesday, the company announced it would close all 100 of its U.K. stores. In the United States, the company told employees closures would likely occur over time, and not all at once, according to the source, who spoke on the condition of anonymity because they were not authorized to discuss internal deliberations.

Toys R Us, once the country’s preeminent toy retailer, has been unable to keep up with big-box and online competitors. The recent holiday season dealt another blow to the embattled company, which struggled to find its footing even as the retail industry racked up its largest gains in years. In January, the retailer announced it would close 182 U.S. stores, or about one-fifth of its remaining Toys R Us and Babies R Us locations.

A group of toymakers led by Isaac Larian, chief executive of MGA Entertainment, the giant behind brands such as L.O.L. Surprise!, Little Tikes and Bratz, on Wednesday submitted a bid to buy Toys R Us’s Canadian arm, which includes 82 stores, according to Larian. He added that he is also looking into buying as many as 400 U.S. stores, which he would seek to operate under the Toys R Us name.

“There is no toy business without Toys R Us,” Larian said, noting that he sold his first product to the chain in 1979. “It’s a big deal and I’m going to try to salvage as much of it as possible.”

According to its September bankruptcy filing, Toys R Us owes MGA Entertainment $21.3 million.

Despite turnaround efforts at Toys R Us, which included adding more hands-on “play labs,” retail experts say the 60-year-old company has been unable to get customers back into its stores. It doesn’t offer the low prices or convenience of some of its larger competitors, nor the fun-filled experience that many smaller outfits do, some analysts have said.

Toys R Us, based in Wayne, N.J., has been struggling for years to pay down billions of dollars in debt as competitors such as Amazon, Walmart and Target win over an increasingly larger piece of the toy market. Its bankruptcy filing last year cited $7.9 billion in debt against $6.6 billion in assets. The company said it has more than 100,000 creditors, the largest of which are Bank of New York (owed $208 million), Mattel ($136 million) and Hasbro ($59 million). (Jeffrey P. Bezos, the founder and chief executive of Amazon, owns The Washington Post.)

The collapse of the storied toy chain raises a number of questions for employees, as well as consumers in the coming weeks. Sen. Charles E. Schumer (D-N.Y.) on Wednesday urged Toys R Us to give customers cash in exchange for their unused gift cards, which he said would be “as worthless and unwanted as a lump of coal in a stocking.”

“The music is about to stop for the iconic retailer,” Schumer said in a statement on Wednesday. “Consumers could be left in the lurch.” He also urged the Federal Trade Commission to “take an immediate look” at how Toys R Us is handling the winding down of its operations.

“The liquidation of Toys R Us is the unfortunate but inevitable conclusion of a retailer that lost its way,” Neil Saunders, managing director of the research firm GlobalData Retail, wrote in an email. “Even during recent store closeouts, Toys R Us failed to create any sense of excitement. The brand lost relevance, customers and ultimately sales.”

Toys R Us got its start as a baby furniture shop in Washington’s Adams Morgan neighborhood in 1948. It didn’t take long for Charles Lazarus, who founded the company at age 25, to realize he could make a lot more money selling toys than one-off cribs at Children’s Bargain Town. He renamed his business Toys R Us and created an emporium of exclusive products and ever-rotating inventory.

“Lazarus offered toy manufacturers the tantalizing picture of year-round toy sales and the ability to produce 12 months a year,” Eric Clark wrote in “The Real Toy Story: Inside the Ruthless Battle for America’s Youngest Consumers.”

In its heyday, Toys R Us had a towering flagship store in New York’s Times Square (now closed and home to Old Navy) and a ubiquitous icon, Geoffrey the Giraffe. Its catchy jingle, with the refrain “I don’t wanna grow up, I’m a Toys R Us kid,” was a long-running television staple.

But in recent years, the company lost its footing as retailers like Walmart and Target began selling toys at lower profit margins. Toys R Us, which was saddled with billions in debt, couldn’t invest enough to keep its stores or websites competitive.

“We know that customers are willing to pay more for an enjoyable experience — just look at the lines at Starbucks every day — but Toys R Us has failed to give us anything special or unique,” Kelly O’Keefe, a professor of brand management at Virginia Commonwealth University, told The Washington Post this year. “You can find more zest for life in a Walgreens.”


'Striking weaknesses' in adult financial skills

[Image: supermarket till. Copyright: Getty Images]

A quarter of adults struggle to work out how much change they should get in a shop and half cannot read a simple financial line graph, a study suggests.

The study, from Cambridge University and University College London, found "striking weaknesses" in adults' financial skills across 31 countries.

It says financial literacy is essential if consumers are to avoid getting into debt or being misled on money matters.

The report says the findings point to a need for "urgent policy intervention".

The researchers analysed more than 100,000 results from 16- to 65-year-olds from 31 countries (listed below) who had completed the Programme for International Assessment of Adult Competencies test in 2011.

As part of this test, adults were asked four questions that assessed their ability to apply numerical skills to everyday financial tasks.

The researchers' analysis of these results said: "A substantial number of people lack the basic skills that are needed to solve everyday financial tasks."

The study, The financial skills of adults across the world, finds that, across the 31 countries:

  • About a quarter could not work out how much change they should receive from a shop when buying a handful of goods, and this increased to about a third in Spain, England and Italy
  • About one in three adults struggled to work out the price they had to pay for a product when they were given a per unit cost, for example per litre or per kilo
  • About half could not read a simple financial line graph - the type often used to convey key information about the economy and financial products - and this rose to three-quarters in Greece, Chile, Italy and Turkey
  • Most struggled to calculate discounts involving more complex calculations

While adults in Estonia, Finland and Japan performed well across all four tasks, those in Turkey, Chile, Israel, Italy, Spain and England had among the weakest financial skills.


1. If you bought four packs of tea: chamomile ($4.60), green ($4.15), black ($3.35) and lemon ($1.80) with a $20 note, how much change would you get?

2. If a litre of cola costs $3.15, how much will you pay for a third of a litre?

3. If a football club offers the same discount for all season tickets - Main Stand - $50 for single entry, $300 for a season; Stand 2 - $35 for single entry, $210 for a season; Stand 3 - $25 for single entry, $150 for a season - what would the price be for a Stand 4 season ticket, where a single entry costs $21?

The answers are $6.10, $1.05 and $126 respectively.
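
As a quick check of the arithmetic behind those answers, the three questions can be worked through in a few lines (the variable names here are ours, for illustration only):

```python
# Q1: change from a $20 note after buying the four packs of tea.
teas = [4.60, 4.15, 3.35, 1.80]
change = 20 - sum(teas)

# Q2: price of a third of a litre of cola at $3.15 per litre.
third_litre = 3.15 / 3

# Q3: every stand is discounted by the same factor (a $50 single
# entry maps to a $300 season ticket, a factor of 6), so a Stand 4
# season ticket costs 6 times its $21 single entry.
season_factor = 300 / 50
stand4_season = season_factor * 21

print(round(change, 2), round(third_litre, 2), round(stand4_season))
```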


The report says: "The ability to solve financial problems is critical to the wellbeing of adults across the world since everyday transactions, such as saving, spending and interacting with banks, require significant understanding of key financial concepts.

"Yet, in many countries, there is concern about the lack of financial acumen amongst adults, and whether education systems are equipping individuals with the necessary basic financial skills.

"Our key conclusion is that, in some countries, policy intervention will be needed to ensure adults have the basic skills they need to navigate their way through an increasingly complex financial world."

Study lead author Prof John Jerrim, from UCL's Institute of Education, says: "We all need to be able to conduct basic financial calculations in order to make rational, well-informed decisions.

"This includes how much we should save into our pensions, understanding the financial implications of borrowing money from payday loan sites, through to whether we can really afford to buy a particular house."

The countries covered by the research paper are:

  • Turkey
  • Korea
  • Cyprus
  • Ireland
  • United States
  • France
  • Czech Republic
  • Finland
  • Slovakia
  • Chile
  • Estonia
  • New Zealand
  • Singapore
  • Slovenia
  • Belgium
  • Norway
  • Israel
  • Canada
  • England and Northern Ireland (counted as one for the report)
  • Poland
  • Germany
  • Italy
  • Lithuania
  • Austria
  • Greece
  • Russia
  • Netherlands
  • Denmark
  • Japan
  • Spain
  • Sweden

How Apple let Siri fall behind the Google Assistant and Alexa

Seven years after Siri launched on the iPhone 4S, it's still not as smart as it should be.

Image: jhila farzaneh/mashable

It's no secret that Siri is way behind other voice assistants like the Google Assistant and Amazon's Alexa when it comes to comprehension and total number of skills. 

Apple has drastically improved Siri over the years, adding new features and upgrading its voice to sound more human-like, but its ongoing shortcomings really revealed themselves in the recent launch of the HomePod, the company's first product that's almost entirely controlled by the voice assistant.

So how did Apple screw up Siri so badly when it was released so far in advance of the competition? A new report from The Information reveals how years of missteps left Siri eating dust.

According to the report, after acquiring the original Siri app in 2010 for $200 million, Apple proceeded to quickly integrate the digital assistant into the iPhone 4S in 2011. There was so much potential for Siri, and Apple promised to bring voice controls to the masses just as it did multi-touch on the original iPhone.

Except the voice-controlled computing revolution never quite happened the way Apple predicted. iPhone users quickly realized that Siri couldn't do a lot of things. And even after Apple opened Siri up with SiriKit in 2016, it still isn't as intelligent as the Google Assistant or Alexa.

So what the heck happened?

According to The Information, it all went downhill after Steve Jobs died in 2011.

Instead of continuously updating Siri so that it would get smarter faster, Richard Williamson, one of the former iOS chief Scott Forstall's deputies, reportedly only wanted to update the assistant annually to coincide with new iOS releases.

This is, of course, not how a digital assistant should be treated. As Google and Amazon have demonstrated, digital assistants need to constantly be updated in the background in order to keep up with the ever-changing demands of its users.

Williamson denies the accusations that he slowed Siri development down and instead casts blame on Siri's creators.

"It was slow, when it worked at all," Williamson said. "The software was riddled with serious bugs. Those problems lie entirely with the original Siri team, certainly not me."

Other problems over the years included layering new elements on top of Siri using technologies culled from new acquisitions. For example, the Siri team had issues integrating new search features from Apple's acquisition of Topsy in 2013 and natural language features from the VocalIQ acquisition in 2015.

"Members of the Topsy team expressed a reluctance to work with a Siri team they viewed as slow and bogged down by the initial infrastructure that had been patched up but never completely replaced since it launched."

Frustrated by all the patching they were doing to Siri, engineers reportedly considered starting over from scratch. Instead of building on top of Siri's reportedly bad infrastructure, they would rebuild Siri from the ground up — correctly on the second time around. Of course, when you're serving hundreds of millions of users across all of Apple's devices, that's a tall task.

The most revealing part of the report exposes how Apple didn't even have plans to integrate Siri into HomePod until after the Amazon Echo launched:

In a sign of how unprepared Apple was to deal with a rivalry, two Siri team members told The Information that their team didn’t even learn about Apple’s HomePod project until 2015—after Amazon unveiled the Echo in late 2014. One of Apple’s original plans was to launch its speaker without Siri included, according to a source.

Right now, it looks like Siri won't be blown up and rebuilt. And if Apple wants to transform its assistant into a true competitor to the Google Assistant and Alexa, it'll need to sort out its internal management issues and decide what it really wants Siri to be. For users' sake, we hope that means more intelligence and deeper integration with third-party apps and services.

To Test Einstein’s Equations, Poke a Black Hole

In November 1915, in a lecture before the Prussian Academy of Sciences, Albert Einstein described an idea that upended humanity’s view of the universe. Rather than accepting the geometry of space and time as fixed, Einstein explained that we actually inhabit a four-dimensional reality called space-time whose form fluctuates in response to matter and energy.

Einstein elaborated this dramatic insight in several equations, referred to as his “field equations,” that form the core of his theory of general relativity. That theory has been vindicated by every experimental test thrown at it in the century since.

Yet even as Einstein’s theory seems to describe the world we observe, the mathematics underpinning it remain largely mysterious. Mathematicians have been able to prove very little about the equations themselves. We know they work, but we can’t say exactly why. Even Einstein had to fall back on approximations, rather than exact solutions, to see the universe through the lens he’d created.

Over the last year, however, mathematicians have brought the mathematics of general relativity into sharper focus. Two groups have come up with proofs related to an important problem in general relativity called the black hole stability conjecture. Their work proves that Einstein’s equations match a physical intuition for how space-time should behave: If you jolt it, it shakes like Jell-O, then settles down into a stable form like the one it began with.

“If these solutions were unstable, that would imply they’re not physical. They’d be a mathematical ghost that exists mathematically and has no significance from a physical point of view,” said Sergiu Klainerman, a mathematician at Princeton University and co-author, with Jérémie Szeftel, of one of the two new results.

To complete the proofs, the mathematicians had to resolve a central difficulty with Einstein’s equations. To describe how the shape of space-time evolves, you need a coordinate system — like lines of latitude and longitude — that tells you which points are where. And in space-time, as on Earth, it’s hard to find a coordinate system that works everywhere.

Shake a Black Hole

General relativity famously describes space-time as something like a rubber sheet. Absent any matter, the sheet is flat. But start dropping balls onto it — stars and planets — and the sheet deforms. The balls roll toward one another. And as the objects move around, the shape of the rubber sheet changes in response.

Einstein’s field equations describe the evolution of the shape of space-time. You give the equations information about curvature and energy at each point, and the equations tell you the shape of space-time in the future. In this way, Einstein’s equations are like equations that model any physical phenomenon: This is where the ball is at time zero, this is where it is five seconds later.
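
For reference (our notation, not the article's), the field equations relate the curvature of space-time on the left to its matter and energy content on the right:

```latex
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2} R \, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}} \, T_{\mu\nu}
```

Here $g_{\mu\nu}$ is the metric (the “shape” of space-time), $R_{\mu\nu}$ and $R$ measure its curvature, and $T_{\mu\nu}$ encodes matter and energy: supply the curvature and energy data, and the equations determine how the metric evolves.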

“They’re a mathematically precise quantitative version of the statement that space-time curves in the presence of matter,” said Peter Hintz, a Clay research fellow at the University of California, Berkeley, and co-author, with András Vasy, of the other recent result.

In 1916, almost immediately after Einstein released his theory of general relativity, the German physicist Karl Schwarzschild found an exact solution to the equations that describes what we now know as a black hole (the term wouldn’t be invented for another five decades). Later, physicists found exact solutions that describe a rotating black hole and one with an electrical charge.
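
Schwarzschild's solution can be written down explicitly (shown here in its standard form, in units with G = c = 1); it describes the space-time outside a nonrotating mass M:

```latex
ds^{2} = -\left(1 - \frac{2M}{r}\right) dt^{2}
         + \left(1 - \frac{2M}{r}\right)^{-1} dr^{2}
         + r^{2} \left( d\theta^{2} + \sin^{2}\theta \, d\varphi^{2} \right)
```

The coefficients blow up at r = 2M, the event horizon, but in these coordinates that breakdown is a property of the coordinate system rather than of space-time itself.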

These remain the only exact solutions that describe a black hole. If you add even a second black hole, the interplay of forces becomes too complicated for present-day mathematical techniques to handle in all but the most special situations.

Yet you can still ask important questions about this limited group of solutions. One such question developed out of work in 1952 by the French mathematician Yvonne Choquet-Bruhat. It asks, in effect: What happens when you shake a black hole?  

This problem is now known as the black hole stability conjecture. The conjecture predicts that solutions to Einstein’s equations will be “stable under perturbation.” Informally, this means that if you wiggle a black hole, space-time will shake at first, before eventually settling down into a form that looks a lot like the form you started with. “Roughly, stability means if I take special solutions and perturb them a little bit, change data a little bit, then the resulting dynamics will be very close to the original solution,” Klainerman said.

So-called “stability” results are an important test of any physical theory. To understand why, it’s useful to consider an example that’s more familiar than a black hole.

Imagine a pond. Now imagine that you perturb the pond by tossing in a stone. The pond will slosh around for a bit and then become still again. Mathematically, the solutions to whatever equations you use to describe the pond (in this case, the Navier-Stokes equations) should describe that basic physical picture. If the initial and long-term solutions don’t match, you might question the validity of your equations.

“This equation might have whatever properties, it might be perfectly fine mathematically, but if it goes against what you expect physically, it can’t be the right equation,” Vasy said.

For mathematicians working on Einstein’s equations, stability proofs have been even harder to find than solutions to the equations themselves. Consider the case of flat, empty Minkowski space — the simplest of all space-time configurations. This solution to Einstein’s equations was found in 1908 in the context of Einstein’s earlier theory of special relativity. Yet it wasn’t until 1993 that mathematicians managed to prove that if you wiggle flat, empty space-time, you eventually get back flat, empty space-time. That result, by Klainerman and Demetrios Christodoulou, is a celebrated work in the field.

One of the main difficulties with stability proofs has to do with keeping track of what is going on in four-dimensional space-time as the solution evolves. You need a coordinate system that allows you to measure distances and identify points in space-time, just as lines of latitude and longitude allow us to define locations on Earth. But it’s not easy to find a coordinate system that works at every point in space-time and then continues to work as the shape of space-time evolves.

“We don’t know of a one-size-fits-all way to do this,” Hintz wrote in an email. “After all, the universe does not hand you a preferred coordinate system.”

The Measurement Problem

The first thing to recognize about coordinate systems is that they’re a human invention. The second is that not every coordinate system works to identify every point in a space.

Take lines of latitude and longitude: They’re arbitrary. Cartographers could have anointed any number of imaginary lines to be 0 degrees longitude. And while latitude and longitude work to identify just about every location on Earth, they stop making sense at the North and South poles. If you knew nothing about Earth itself, and only had access to latitude and longitude readings, you might wrongly conclude there’s something topologically strange going on at those points.

This possibility — of drawing wrong conclusions about the properties of physical space because the coordinate system used to describe it is inadequate — is at the heart of why it’s hard to prove the stability of space-time.

“It could be the case that stability is true, but you’re using coordinates that are not stable and thus you miss the fact that stability is true,” said Mihalis Dafermos, a mathematician at the University of Cambridge and a leading figure in the study of Einstein’s equations.

In the context of the black hole stability conjecture, whatever coordinate system you’re using has to evolve as the shape of space-time evolves — like a snugly fitting glove adjusting as the hand it encloses changes shape. The fit between the coordinate system and space-time has to be good at the start and remain good throughout. If it doesn’t, there are two things that can happen that would defeat efforts to prove stability.

First, your coordinate system might change shape in a way that makes it break down at certain points, just as latitude and longitude fail at the poles. Such points are called “coordinate singularities” (to distinguish them from physical singularities, like an actual black hole). They are undefined points in your coordinate system that make it impossible to follow an evolving solution all the way through.

Second, a poorly fitting coordinate system might disguise the underlying physical phenomena it’s meant to measure. To prove that solutions to Einstein’s equations settle down into a stable state after being perturbed, mathematicians must keep careful track of the ripples in space-time that are set in motion by the perturbation. To see why, it’s worth considering the pond again. A rock thrown into a pond generates waves. The long-term stability of the pond results from the fact that those waves decay over time — they grow smaller and smaller until there’s no sign they were ever there.

The situation is similar for space-time. A perturbation will set off a cascade of gravitational waves, and proving stability requires proving that those gravitational waves decay. And proving decay requires a coordinate system — referred to as a “gauge” — that allows you to measure the size of the waves. The right gauge allows mathematicians to see the waves flatten and eventually disappear altogether.

“The decay has to be measured relative to something, and it’s here where the gauge issue shows up,” Klainerman said. “If I’m not in the right gauge, even though in principle I have stability, I can’t prove it because the gauge will just not allow me to see that decay. If I don’t have decay rates of waves, I can’t prove stability.”

The trouble is, while the coordinate system is crucial, it’s not obvious which one to choose. “You have a lot of freedom about what this gauge condition can be,” Hintz said. “Most of these choices are going to be bad.”

Partway There

A full proof of the black hole stability conjecture requires proving that all known black hole solutions to Einstein’s equations (with the spin of the black hole below a certain threshold) are stable after being perturbed. These known solutions include the Schwarzschild solution, which describes space-time with a nonrotating black hole, and the Kerr family of solutions, which describe configurations of space-time empty of everything save a single rotating black hole (where the properties of that rotating black hole — its mass and angular momentum — vary within the family of solutions).

Both of the new results make partial progress toward a proof of the full conjecture.

Hintz and Vasy, in a paper posted to the scientific preprint site arxiv.org in 2016, proved that slowly rotating black holes are stable. But their work did not cover black holes rotating above a certain threshold.

Their proof also makes some assumptions about the nature of space-time. The original conjecture is in Minkowski space, which is not just flat and empty but also fixed in size. Hintz and Vasy’s proof takes place in what’s called de Sitter space, where space-time is accelerating outward, just like in the actual universe. This change of setting makes the problem simpler from a technical point of view, which is easy enough to appreciate at a conceptual level: If you drop a rock into an expanding pond, the expansion is going to stretch the waves and cause them to decay faster than they would have if the pond were not expanding.   

“You’re looking at a universe undergoing an accelerated expansion,” Hintz said. “This makes the problem a little easier as it appears to dilute the gravitational waves.”

Klainerman and Szeftel’s work has a slightly different flavor. Their proof, the first part of which was posted online last November, takes place in Schwarzschild space-time — closer to the original, more difficult setting for the problem. They prove the stability of a nonrotating black hole, but they do not address solutions in which the black hole is spinning. Moreover, they only prove the stability of black hole solutions for a narrow class of perturbations — where the gravitational waves generated by those perturbations are symmetric in a certain way.

Both results involve new techniques for finding the right coordinate system for the problem. Hintz and Vasy start with an approximate solution to the equations, based on an approximate coordinate system, and gradually increase the precision of their answer until they arrive at exact solutions and well-behaved coordinates. Klainerman and Szeftel take a more geometric approach to the challenge.

The two teams are now trying to build on their respective methods to find a proof of the full conjecture. Some expert observers think the day might not be far off.

“I really think things are now at the stage that the remaining difficulties are just technical,” Dafermos said. “Somehow one doesn’t need new ideas to solve this problem.” He emphasized that a final proof could come from any one of the large number of mathematicians currently working on the problem.

For 100 years Einstein’s equations have served as a reliable experimental guide to the universe. Now mathematicians may be getting closer to demonstrating exactly why they work so well.

How to Make A.I. That’s Good for People


For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast. Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

It is an especially exciting time for a researcher like me. When I was a graduate student in computer science in the early 2000s, computers were barely able to detect sharp edges in photographs, let alone recognize something as loosely defined as a human face. But thanks to the growth of big data, advances in algorithms like neural networks and an abundance of powerful computer hardware, something momentous has occurred: A.I. has gone from an academic niche to the leading differentiator in a wide range of industries, including manufacturing, health care, transportation and retail.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

First, A.I. needs to reflect more of the depth that characterizes our own intelligence. Consider the richness of human visual perception. It’s complex and deeply contextual, and naturally balances our awareness of the obvious with a sensitivity to nuance. By comparison, machine perception remains strikingly narrow.

Sometimes this difference is trivial. For instance, in my lab, an image-captioning algorithm once fairly summarized a photo as “a man riding a horse” but failed to note the fact that both were bronze sculptures. Other times, the difference is more profound, as when the same algorithm described an image of zebras grazing on a savanna beneath a rainbow. While the summary was technically correct, it was entirely devoid of aesthetic awareness, failing to detect any of the vibrancy or depth a human would naturally appreciate.

That may seem like a subjective or inconsequential critique, but it points to a major aspect of human perception beyond the grasp of our algorithms. How can we expect machines to anticipate our needs — much less contribute to our well-being — without insight into these “fuzzier” dimensions of our experience?

Making A.I. more sensitive to the full scope of human thought is no simple task. The solutions are likely to require insights derived from fields beyond computer science, which means programmers will have to learn to collaborate more often with experts in other domains.

Such collaboration would represent a return to the roots of our field, not a departure from it. Younger A.I. enthusiasts may be surprised to learn that the principles of today’s deep-learning algorithms stretch back more than 60 years to the neuroscientific researchers David Hubel and Torsten Wiesel, who discovered how the hierarchy of neurons in a cat’s visual cortex responds to stimuli.

Likewise, ImageNet, a data set of millions of training photographs that helped to advance computer vision, is based on a project called WordNet, created in 1995 by the cognitive scientist and linguist George Miller. WordNet was intended to organize the semantic concepts of English.

Reconnecting A.I. with fields like cognitive science, psychology and even sociology will give us a far richer foundation on which to base the development of machine intelligence. And we can expect the resulting technology to collaborate and communicate more naturally, which will help us approach the second goal of human-centered A.I.: enhancing us, not replacing us.

Imagine the role that A.I. might play during surgery. The goal need not be to automate the process entirely. Instead, a combination of smart software and specialized hardware could help surgeons focus on their strengths — traits like dexterity and adaptability — while keeping tabs on more mundane tasks and protecting against human error, fatigue and distraction.

Or consider senior care. Robots may never be the ideal custodians of the elderly, but intelligent sensors are already showing promise in helping human caretakers focus more on their relationships with those they provide care for by automatically monitoring drug dosages and going through safety checklists.

These are examples of a trend toward automating those elements of jobs that are repetitive, error-prone and even dangerous. What’s left are the creative, intellectual and emotional roles for which humans are still best suited.

No amount of ingenuity, however, will fully eliminate the threat of job displacement. Addressing this concern is the third goal of human-centered A.I.: ensuring that the development of this technology is guided, at each step, by concern for its effect on humans.

Today’s anxieties over labor are just the start. Additional pitfalls include bias against underrepresented communities in machine learning, the tension between A.I.’s appetite for data and the privacy rights of individuals and the geopolitical implications of a global intelligence race.

Adequately facing these challenges will require commitments from many of our largest institutions. Universities are uniquely positioned to foster connections between computer science and traditionally unrelated departments like the social sciences and even humanities, through interdisciplinary projects, courses and seminars. Governments can make a greater effort to encourage computer science education, especially among young girls, racial minorities and other groups whose perspectives have been underrepresented in A.I. And corporations should combine their aggressive investment in intelligent algorithms with ethical A.I. policies that temper ambition with responsibility.

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values. A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.

Fei-Fei Li is a professor of computer science at Stanford, where she directs the Stanford Artificial Intelligence Lab, and the chief scientist for A.I. research at Google Cloud.

Follow The New York Times Opinion section on Facebook and Twitter (@NYTopinion), and sign up for the Opinion Today newsletter.

The German entrepreneurs celebrating their mistakes

Image caption Max Riedel speaking at one of Berlin's "Failure Nights"

Max Riedel has just admitted to losing hundreds of thousands of euros before the age of 30, and he is being enthusiastically applauded.

The Berlin-born business owner is addressing an unusual audience - young techies gathered on a Thursday night to celebrate catastrophe.

With the help of an amusing slideshow, featuring wisdom from TV's The A-Team, Max tells the tale of multiple blunders made in the early days of his events company, Holi Concept, which runs festivals and races in cities across Europe.

"In the last four or five years, I think I made 20 or 25 hard mistakes," he says, barely suppressing a cheeky grin.

"Every mistake cost between 50,000 and 300,000 euros."

Stories like Max's are at the core of a worldwide movement - with an unprintable name that loosely corresponds to "Failure Nights" - designed to help those involved in young firms and start-ups learn from each other's mistakes.

Founded in Mexico, the group has chapters in over 250 cities across 80 different countries, including China, India, the UK, and the US.

Image caption Mr Riedel says he has made up to 25 "hard mistakes" over the last four or five years.

But in Berlin, the evening is more than just another networking opportunity - it's an attempt to challenge a deep cultural taboo.

"The Germans really have problems talking about their failures," says Patrick Wagner, one of the event's organisers.

A serial entrepreneur himself, Patrick is on a mission to educate his compatriots on the need to embrace risk.

"Seventy per cent of all start-ups [in Germany] are going to fail," he explains, "but even in Economic Studies, you don't get a single lesson about insolvency.

"You learn how to make money, but you never learn how to fail."

Measuring precisely how many German start-ups end up folding is a near-impossible task, in a country that saw almost 2,000 of them launch in 2017 alone.

Image caption Patrick Wagner, one of the events' organisers, says Germans "have problems talking about their failures"

And although some estimate that as many as 90% end up shutting shop, there is no suggestion that nascent German companies fail more often than their Silicon Valley or British counterparts.

But according to Philip Strothmann - the driving force behind a host of sustainability-focused start-ups - a fear of failure may be hampering entrepreneurship in the country, with fewer people deciding to start a new business in the first place.

There is, he says, widespread contempt in German society for those who play fast and loose with other people's cash.

People usually have "traditional jobs" and are concerned with security first and foremost, he says.

If you venture forth and end up with egg on your face, he adds, "they will say, 'see I could have told you'."

Image caption "Failure Nights" have attracted big crowds

It's not just the neighbours who might disapprove. The process of going insolvent in Germany, says Patrick Wagner, is a "nightmare". He claims the country's strict laws cause young founders to adopt an extremely cautious approach.

"They say Berlin is the new San Francisco," he says, breaking out into a chuckle.

"It's a joke, because no one is risking anything here, and this is a thing we want to change."

"It can be quite severe for a managing director if he keeps a business running in a state of insolvency," says Christian Spatz, a lawyer at Leonhardt Rattunde, which specialises in restructuring ailing firms.

In a book-lined meeting room on Berlin's Kurfürstendamm, Mr Spatz painstakingly explains how directors in Germany must file for insolvency as soon as they are unable to balance their books, or they may face a criminal conviction.

In some cases, directors could even face the prospect of having to reimburse the company, out of their own pockets, for payments made after the firm became illiquid or over-indebted.

Image caption Germany's strict insolvency laws can catch entrepreneurs by surprise

Most managing directors, he adds, young and old, are not aware of this liability.

Even among those who are aware of the legal consequences, Mr Spatz contends, German directors are extremely reluctant to let their company fold.

"There's a psychological aspect, and a social aspect too. Still, in Germany, there is a stigma surrounding insolvency".

It's this stigma that some blame for the perplexing lack of a European tech giant to rival Google or Facebook.

But Mr Spatz rejects the notion that Germany's rules are too stringent, and may be stifling innovation.

"I think we have a well rounded system that tries to consider all different players in the field," he says, referring to creditors and employees.

'Less risk averse'

Berlin, he adds, is a thriving, and growing tech hub, and the endemic fear of failure in German culture is fading with time.

Philip Strothmann agrees.

"This is changing with my generation, and the younger generation," he explains.

"We are a little less risk averse and we're taking matters into our own hands."

A few months after baring his soul at "Failure Nights", Max Riedel has very much taken matters into his own hands.

His company today runs regular events in more than 30 countries, and the business, now firmly established, is turning a modest profit.

Success in his professional life has bred success in his personal life too. Later this year, he'll finally take some time off from Holi Concept to celebrate his greatest triumph to date - and marry the mother of his 11-month-old child.

Show HN: Talent vs. Luck – A recreation of the paper's model


I recently came across the paper titled Talent vs Luck: the role of randomness in success and failure, by A. Pluchino, A. E. Biondo, and A. Rapisarda, on Hacker News. After reading through both the Scientific American article and the paper itself, I decided to try to recreate the model to learn more about it and follow up on some questions I had about the paper’s findings.

I go over my opinions on the paper’s findings in this companion post.

In this post, I first review the model as I understand it, demonstrate how I recreated it, and show that the results from my recreation have roughly the same features as those in the paper. As a disclaimer, I should note that I had not used NetLogo before this, and wasn’t able to find all of the relevant NetLogo parameters in the paper. My model is likely not identical to the one in the paper, but it does use the same dynamics and produces similar results. If anyone finds errors in the code or in this post, please let me know in the comments or open a PR on the GitHub repo.

The paper employs agent-based modelling using NetLogo. NetLogo models are based on simulating interactions between things called “agents” that are placed into a 2D space where they can move and interact over time. Where they are placed, and how they move and interact, are configurable parts of any model.

One great sample model that helps illuminate this is the Wolf Sheep Predation model. In this model, the 2D space represents the habitat that both animal populations live in. On setup, wolves and sheep are placed randomly onto the space. Then on each step through time, wolves and sheep can move around, procreate, hunt or be hunted, and die. As the model runs, it shows how the sizes of the two groups trend over time.

The model proposed in this paper features three different agents: people (black), lucky events (green), and unlucky events (red). At the outset, all three agents are distributed onto the 2D space at random. Here’s a screenshot of the initial setup shown in the paper:

Credit: Pluchino, Biondo, & Rapisarda 2018

The model proceeds in steps through time, during which event points move around randomly. The researchers aimed to model a 40 year career in 6-month segments, or 80 steps through time. People start with the same allotment of capital, and are given a number between 0 and 1 to represent their “talent”. Talent is normally distributed.

As I understand it, people acquire or lose capital between steps, depending on whether they have come into contact with an event point, as follows:

  1. The person intercepted no events, and their capital doesn’t change.
  2. The person intercepted a lucky event, and their capital doubles if their talent number exceeds a randomly generated number between 0 and 1.
  3. The person intercepted an unlucky event, and their capital is halved.
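In case it helps to see the rule outside NetLogo, here’s the same per-step update sketched in Python (my paraphrase of the dynamics above, not code from the paper or from my repo):

```python
import random

def update_capital(capital, talent, event):
    """Apply one step of the capital rule described above.

    event is one of "lucky", "unlucky", or None. A lucky event doubles
    capital only if talent beats a fresh uniform draw in [0, 1); an
    unlucky event halves capital unconditionally; no event leaves it
    unchanged.
    """
    if event == "lucky" and talent > random.random():
        return capital * 2
    if event == "unlucky":
        return capital / 2
    return capital
```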

With NetLogo, there are two main tabs in which you develop your model: the Interface tab and the Code tab. The Interface tab holds the visualization of the environment, and is also the place where you build out GUI inputs for a model. The Code tab is where the core model is specified. The code is a little finicky to learn how to write, and basically has to follow the DSL that NetLogo defines for getting a model set up and running.

From the model’s description in the paper, we know the following constraints:

  1. There should be 1000 people with normally distributed talent, 250 lucky events, and 250 unlucky events.
  2. The events should move around at random.
  3. People should gain or lose capital when they intercept events.
  4. The model should play out over 80 steps, representing a 40 year working life.

Based on this, we can define our model’s setup in NetLogo:

extensions [csv]

globals [capital-list talent-list]
breed [ lucky-events lucky-event ]
breed [ unlucky-events unlucky-event ]
breed [ people person ]
turtles-own [ xc yc ]
people-own [capital talent outlist ]

to setup
  clear-all

  create-lucky-events initial-number-lucky-events
  [
    set shape  "dot"
    set color green
    set size 2  ; easier to see
    set label-color blue - 2
    setxy random-xcor random-ycor
  ]
  create-unlucky-events initial-number-unlucky-events
  [
    set shape  "dot"
    set color red
    set size 2  ; easier to see
    set label-color red - 2
    setxy random-xcor random-ycor
  ]

  create-people initial-number-people
  [
    set shape "person"
    set color brown
    set size 3  ; easier to see
    set capital initial-capital
    set talent random-normal mean-talent talent-std-dev
    set outlist []
    setxy random-xcor random-ycor
  ]
  display-labels
  reset-ticks
end

Without going into detail, the basics of the above are that we’re creating the three types of agents in the model: the lucky and unlucky events, and the people. The setup refers back to inputs for the initial numbers of these agents, the desired mean talent and standard deviation, and so on. All of the agents are placed randomly onto the initial map.

The step-by-step changes in the model are defined similarly:

to go
  if ticks >= career-length * 2 [
    export-capital-vals
    stop
  ]
  ask lucky-events [
    move
  ]
  ask unlucky-events [
    move
  ]
  ask people [
    if people-move [ move ]
    interact-with-events

    set outlist list (talent) (capital)
  ]
  tick
  display-labels
end

The basics here are that on each tick, we ask the events to move, and we can optionally make the people move too. Then the key part: we ask the people to interact with events on their same patch of the 2D space. The basic event interaction (the “persistent” variant, in which events survive being used) is defined as:

to interact-with-events-persistent
  ; random-float 1 draws a uniform number in [0, 1);
  ; plain "random 1" would always return 0
  if count lucky-events-here >= 1 and talent > random-float 1
    [ set capital capital * 2 ]
  if count unlucky-events-here >= 1
    [ set capital capital / 2 ]
end

With this, we now have encoded the basic model dynamics from the paper. For the full code, I encourage readers to go to the Github repo and check it out.

As a note, there are a number of factors that I could not find in the paper:

  1. The size of the 2D space, and its patch size
  2. The distance and “randomness” that goes into the events moving
  3. Whether people also move (it seems like they don’t, but just for fun I’ve made it an option in my model)
  4. Whether events disappear after someone takes advantage of them.

As I don’t know the answers for what the paper’s authors did here, I’ve tried to account for that in my model. In it, it’s possible to toggle whether to have events die after being used, and whether people move. There are also inputs for how far events move with each step. As for the environment size, it’s definitely seems clear that varying the size of the environment changes the concentration of capital quite dramatically.

To run the model I’ve built, download Netlogo and the talent-vs-luck.nlogo file from my repo. Then open the Netlogo model on your computer. You should see a screen like this:

Talent vs Luck on Setup

After that, press “Setup” and then “Go”. Feel free to change any inputs you like.

The model is built to output some information on the screen, and other information onto an output.csv file from the directory where it was run.

I built out a spreadsheet to analyze the results from my model, and compare them back to the paper’s results. One of my first runs with a working model yielded a max capital of 5120, by a person with a near-to-average talent of 0.62. I uploaded these results into this Google spreadsheet (feel free to make a copy of it for your own use). As with the paper, the initial raw results show capital success quite distributed across talent:

Run 1: Capital vs Talent

To be sure that this run has similar features to those described, I looked into how much capital was held by the upper quintile, finding that roughly 80% of the capital was held by the top 20%:

Run 1: Capital By Quintile
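That check is easy to reproduce outside the spreadsheet; a small hypothetical helper like this (not part of the model code) computes the share directly from a list of final capital values:

```python
def top_quintile_share(capitals):
    """Fraction of total capital held by the richest 20% of agents."""
    ranked = sorted(capitals, reverse=True)
    top_n = max(1, len(ranked) // 5)
    return sum(ranked[:top_n]) / sum(ranked)
```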

I also found that a similar power-law feature could be found in how capital was held. The paper’s results had a power closer to -1.3, vs this run’s -1.06, but I’m going to assume that’s not too big of a deal.

Run 1: Capital By Bucket
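I don’t know the exact fitting procedure the paper’s authors used; for my own runs, a simple least-squares fit of log(count) against log(capital) is enough to estimate the exponent (a sketch, not their method):

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) vs log(x), i.e. the estimated
    power-law exponent of a capital histogram."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

Feeding in bucket counts that fall off exactly as capital^-1.3 recovers a slope of -1.3, so the helper gives a quick sanity check on any run’s tail.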

For the most part, I think my model yields results that roughly align with the features described in the paper. As I mentioned earlier, there are a number of constraints that I didn’t find described in the paper, and without them it’s hard to be sure how close my setup is to theirs.

Most significantly, I’ve noticed that changing the environment size does lead to much higher max capital accumulations, but also a far higher concentration of capital (closer to 95% capital held by the top 20% in some runs). Meanwhile, the setup I used for my results above uses a space size 80x80, and leads to lower max capital accumulations, but generally uploads the 80-20 rule better. It would be interesting to know if the researchers from the paper found the same, and what constraints they used for these unspecified parts of their model.

After building this out myself, and reviewing both my own results and the paper in more depth, I’ve written up a post on my opinions of the paper. You can read that here.

Here’s a link to the Github Repo, if you want to check out the code and/or run it yourself.

Here’s a link to the spreadsheet used to analyze results from the model.

Making music using new sounds generated with machine learning


Technology has always played a role in inspiring musicians in new and creative ways. The guitar amp gave rock musicians a new palette of sounds to play with in the form of feedback and distortion. And the sounds generated by synths helped shape the sound of electronic music. But what about new technologies like machine learning models and algorithms? How might they play a role in creating new tools and possibilities for a musician’s creative process? Magenta, a research project within Google, is currently exploring answers to these questions.

Building upon past research in the field of machine learning and music, last year Magenta released NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses deep neural networks to learn the characteristics of sounds, and then create a completely new sound based on these characteristics. Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.

Since then, Magenta has continued to experiment with different musical interfaces and tools to make the algorithm more easily accessible and playable. As part of this exploration, Google Creative Lab and Magenta collaborated to create NSynth Super. It’s an open source experimental instrument which gives musicians the ability to explore new sounds generated with the NSynth algorithm.


The Role of Luck in Life Is Still Misunderstood – Talent vs. Luck Paper Review


The paper Talent vs Luck: the role of randomness in success and failure, by A. Pluchino, A. E. Biondo, and A. Rapisarda, has received a lot of positive attention lately.

Unfortunately, the public response has been far too weak in questioning the paper’s claims. The results of its model are discussed in persuasive but misleading ways, and the authors do not sufficiently justify the model’s design and validity. Its results feel more like a consequence of its design than proof that similar factors govern career success. Overall, this leaves the model feeling contrived, and a poor base from which to extract lessons about real-world dynamics.

To be clear, I care deeply about the issue of growing wealth inequality, and would love for research to be done that finds concrete evidence of how luck and privilege play a role in it. I’m all for finding ways we can adapt policy to help correct for non-meritocratic dynamics.

But it’s important that we make sure public policy (and opinion) shifts on the basis of good, unbiased, verified, repeatable insights coming from the research done on these topics.

In the paper, there’s a big gap between its wide-sweeping claims and the validity of the underlying model. Charts and numbers give the illusion of convincing support which, added together with its alignment to popular world views, make the paper quite persuasive. This has led to impressive headlines that herald the paper as if it’s concrete proof, like “The Role of Luck in Life Success Is Far Greater Than We Realized”, or “If you’re so smart, why aren’t you rich? Turns out it’s just chance.”.

This coverage is coming from reputable sources: the last two headlines are from Scientific American and the MIT Technology Review. There’s a serious risk of this runaway momentum building an ever-growing chain of misguided agreement.

This fanfare is seriously misplaced. Let’s get into why.

The paper proposes an agent-based model to simulate the role of talent and luck in career success. This way of modeling involves placing agents, representing people, onto a 2D space and observing how they interact with their environment over time. Running into lucky and unlucky events, which move randomly through the environment, provide people chances at gaining or losing capital.

Initially, 1000 people are given an equal amount of capital (10 units), and a normal distribution of talent (ranging from 0 - 1.0, with a mean of 0.6). When they encounter a lucky event, a random number is generated between 0 and 1. If the agent’s talent rating exceeds this number, their capital is immediately doubled. In encounters with unlucky events, an agent’s capital is halved, regardless of talent.

Over the course of 40 years, or 80 steps representing 6 months each, capital distribution changes. In the paper’s results, the authors note that 20% of the population control 80% of the wealth. A select few individuals have an exorbitant amount of capital. This is similar to widely accepted views on the world’s current distribution of wealth.
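To make the setup concrete, here is a stripped-down, non-spatial paraphrase of those dynamics in Python. The per-step event probability is my own assumption (in the paper, encounter rates emerge from the spatial model), so only the qualitative shape of the outcome is comparable:

```python
import random

def simulate(n_agents=1000, steps=80, p_event=0.03, seed=1):
    """Non-spatial paraphrase of the dynamics described above.

    p_event is an assumed per-step chance of hitting a lucky event
    (mirrored for unlucky events), standing in for the paper's
    spatial encounter rate.
    """
    rng = random.Random(seed)
    agents = [{"talent": min(max(rng.gauss(0.6, 0.1), 0.0), 1.0),
               "capital": 10.0} for _ in range(n_agents)]
    for _ in range(steps):
        for a in agents:
            roll = rng.random()
            if roll < p_event:                  # lucky encounter
                if a["talent"] > rng.random():  # converted only if talent wins
                    a["capital"] *= 2
            elif roll < 2 * p_event:            # unlucky encounter
                a["capital"] /= 2
    return agents
```

Even this toy version produces a heavily skewed capital distribution after 80 steps, which is the feature the paper’s headline claims rest on.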

I feel the paper’s findings are formed in misleading ways, by using data that don’t actually support neighbouring arguments and by omitting key elements. The authors note that the highest-wealth agents in their model primarily come from nearer-to-average talent levels, leading them to state that “less talented people are very often able to reach the top”. They underscore that, in 100 runs of the model, agents with exceptionally high talent win the most capital just 3% of the time, building a narrative that these outcomes are perverse or objectionable.

But these data points are statistically misleading. No reference is made to the relative size of the talent brackets, a convenient omission when we’re being told about less talented success occurring “very often”. The exceptionally high talented group is, by definition, a set of about 2% or 20-odd people, which means winning 3% of the time may not actually be a poor rate of success. Meanwhile, the nearer-to-average brackets have the largest populations. They aren’t necessarily winning “very often” - they just have more people to start out.

To make this less theoretical, consider the two charts below, which display results from a run using my own recreation of the model. The first chart shows the absolute number of people from each bracket making it to the wealthiest quintile. We see a similar trend here as in the paper, where the top quintile is dominated by people of nearer-to-average talent. The second chart shows the proportion of people in each bracket making it to the top quintile.

Run 1: Top Quintile Population (Raw)

Wealthiest Quintile by Talent Bracket

Run 1: Top Quintile Population (Percentage)

Wealthiest Quintile by Proportion of Talent Bracket

This lower chart pokes a major hole in the narrative the authors build up to. There isn’t a clear message that the nearer-to-average brackets succeed “very often” compared to the others - certainly not in an extreme or perverse way. It is curious that this view of the data was left out.

That said, the lower chart indeed indicates that luck is playing an outsize role in capital outcomes. Though the lowest bracket succeeds slightly less often, there isn’t a significant trend of greater success with higher talent. Surely this means the model still shows our career success is not meritocratic, right?

Wrong - the model and its variations do not provide a base for gleaning wider learnings unless they actually simulate the real world. Without establishing that, the model and its modifications are just fictional simulations, not an exercise we can use to learn more about career success, inequality, or otherwise.

In searching through the paper for sections establishing that the model validly simulates career success, the most I could find were notes that its results exhibit similar features to real-world wealth inequality. The authors state that the model follows the 80/20 rule, and that its results fit a power law well.

The results-looks-similar argument is hardly a convincing one. It could be that the results were built into the model’s design. Indeed, based on the way its rules are set, the high concentrations of capital seem like they were a foregone conclusion.

The rules governing capital growth mean it will roughly grow by the exponent of the number of lucky events an agent comes across, and fall with unlucky ones. Like getting an improbable series of heads when flipping a coin long enough, improbable clusters of events are bound to occur for at least some agents in a large enough set. With an exponential growth function at work, it’s no wonder that we see high concentrations of capital come out.
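Concretely, since doubling and halving commute, an agent’s final capital depends only on the net count of converted lucky events versus unlucky ones, which makes the exponential form explicit (a worked restatement of the rule, not the authors’ code):

```python
def final_capital(initial, lucky_wins, unlucky_hits):
    """Closed form of the multiplicative rule: each converted lucky
    event doubles capital, each unlucky event halves it, and the
    order of events doesn't matter."""
    return initial * 2 ** (lucky_wins - unlucky_hits)
```

Nine net doublings already turn 10 units into 5,120, so a handful of agents with improbable event streaks will inevitably dominate the distribution.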

This might be fine if the core rules of the model bore self-evident ties back to career dynamics and earnings, but they don’t. Why should capital growth only occur on chance events, when most workers are paid wages in exchange for their work? Why is its growth defined as exponential? If the model were being used to look at investment outcomes or financial trading, these rules would perhaps seem more self-evident.

But the model is being used to draw conclusions on career success, funding and educational policies. Though there is some mention of prior research on how luck plays a large role in people’s lives, I can find no section in the paper discussing the design of the model’s rules as it relates back to the areas its insights are supposed to apply to.

For me, the paper falls flat compared to its claims. There is a lack of convincing discussion on the model’s ties to the real world, and data is used & omitted in convenient if not spurious ways. It feels more like the authors had a set of conclusions they built a corresponding model for than that they aimed to build a model for career success and explore its results in an open ended fashion.

If the aim is to mitigate contributing factors to wealth inequality, changes need to be based on solid research. Models must be shown to have been designed to explore the issues, not risk appearing to be designed to obtain specific results. Most importantly, conclusions need to be drawn from unbiased explorations of the data, not the other way around.

This paper comes up short on these fronts. This is disappointing because I really do think that factors like luck & privilege play a role in wealth inequality, and that certain policies can compound or mitigate these unequal outcomes.

That said, I would be glad to see my points above countered, should the authors provide information helping to address these concerns, and hope to see more research on this topic in the future.

Special thanks to my brother, Gid, for his help discussing the paper & reviewing this post.

Here is a spreadsheet with the raw data and analysis from the run referenced above.

Here is the code used to run that model.

Site with future Vulkan tutorials for beginners

Learn Vulkan

March 14th, 2018

Welcome to Learn Vulkan, successor of learnopengl.com.

Like many, you've likely steered clear of Vulkan, seeing it as a graphics API meant only for industry veterans, with little educational content to go on. To some, the mere thought of a beginning graphics programmer touching Vulkan is heresy. Well... it's time to make a change.

Whether you're a beginner or a seasoned graphics programmer, Learn Vulkan will walk you through each and every step of making pixels dance using the latest, fastest and meanest graphics API out there: Vulkan. You will not only learn the ins and outs of graphics programming, you'll also get to understand the actual tools the big boys play with.

This time, however, things are going to be a bit different. Learn Vulkan won't be the standalone educational website you're used to, but a web platform accompanying a published book. This means a higher degree of professionalism, an actual printable version of all content and a more sustainable revenue model for me as an author. Most of the book (and all post-release articles) will be available online for free, with only the final sections available solely in the book. I'll start hitting the typewriter the moment I feel I've mastered Vulkan, which is bound to happen within about six months of this writing.

If you have a different vision, or opinion, about what Learn Vulkan should be, don't hesitate to contact me as nothing is set in stone yet. I look forward to starting this new adventure with you all together.

- Joey de Vries

Blue Vision, which builds collaborative AR, leaves stealth with $14.5M led by GV


Blue Vision Labs, a London-based augmented reality startup co-founded by computer vision experts from Oxford and Imperial College, is emerging from stealth today with a new platform that it claims will be the first to bring ‘collaboration’ to the AR experience. With an app built on Blue Vision’s technology (via its API and SDK), multiple users will be able to see the same virtual objects, and interact with each other in that virtual space with spatial accuracy that hasn’t been seen in widely-available AR services before.

Scenarios where this kind of feature could come in useful could include multi-player games, on-street navigation apps, social media applications and education.

Peter Ondruska, the startup’s co-founder and CEO, tells me that Blue Vision’s tech can pinpoint people and other moving objects in a space to within centimeters of their actual location — far more accurate than typical GPS — meaning that it could give better results in apps that require two parties to find each other, such as in a ride-hailing app. (Hands up if you and your Uber driver have ever lost each other before you’ve even stepped foot in the vehicle.)

Blue Vision has been in stealth mode for the past two years building its product — and its founding team, which also includes Lukas Platinsky, Hugo Grimmett, and repeat entrepreneurs Andrej Pancik and Bryan Baum, have been working on the idea since 2011 — but now it is finally hitting the ground running.

Along with the launch of its SDK for developers, Blue Vision is announcing that it has raised $17 million in funding — $14.5 million in a new Series A led by GV, plus another $2.5 million in Seed funding that it raised earlier from Accel, SV Angel and others — all of whom also participated in this latest round, too.

The SDK will initially be free to use, Ondruska said.

There’s been a surge of interest in augmented and virtual reality technology in the last couple of years, fuelled by some interesting moves from larger tech companies like Google and Apple — launching developer kits to build applications, and working on more hardware to consume it — investments by larger media companies in building content for these platforms, and hundreds of millions of dollars that investors are pouring into the army of startups that are building both software and hardware to usher in this new age of how we will, apparently, all soon be seeing the world.

Some of these investments have so far felt like audacious moonshots. Magic Leap’s mountain of funding, for example, has yet to materialise into anything we can use, virtually or otherwise. But some are making their way to people today, and definitely causing a stir, if not a massive wave of usage. Think here of Apple’s ARKit and Google’s ARCore.

And VR development has even already started to tackle the collaboration challenge, too: recall Facebook’s Oculus division’s work on Rooms, where you can interact with multiple people.

Blue Vision’s approach is a little different from Oculus, in that it requires no more hardware than what many people already have — a smartphone and a basic smartphone camera — both to interact with the experience and to ingest the environment to build that experience. The fact that Blue Vision provides a relatively low barrier to entry, while also doing an enormous (and ground-breaking) amount of heavy lifting at the back end to solve a persistent challenge in AR, is what potentially makes the startup unique and noteworthy.

“They have reduced the need for specific, tailored hardware,” said one investor who is joining the board. “Where we might have needed multiple lenses before, they have achieved the same thing with a basic smartphone lens.”

Some of that heavy lifting has also involved building highly detailed maps that developers can now use to build collaborative AR experiences: the idea here is that the map of a space becomes the canvas onto which all of the other objects get placed for their interactions.

Ondruska said that initially the company has built maps covering the city centers of London, San Francisco and New York, with plans to add more locations. Users, he said, can also essentially “build” locations on the fly while using apps powered by Blue Vision, although these would work less well in fast-moving environments, where you might need to reference locations more accurately and pick up more detail.

Some have projected that AR-based applications could generate $83 billion by 2021.

That seems like a big leap, considering we’re now already at 2018 and so far our biggest “hit” in AR has been Pokemon Go. Ondruska believes that this is because there have been missing pieces in making AR a truly seamless and smooth experience, and that his team has built the parts that will complete the picture.

“One of the reasons why AR hasn’t really reached mass market adoption is because of the tech that is on the market,” he said. “Single-user experiences are limiting. We are allowing the next step, letting people see the right place, for example. None of that was possible before in AR because the backend didn’t exist. But by filling in this piece, we are creating new AR use cases, ones that are important and will be used on a daily basis.”

Ancient DNA Is Rewriting Human (and Neanderthal) History


Geneticist David Reich used to study the living, but now he studies the dead.

The precipitating event came in the form of 40,000-year-old Neanderthal bones found in a Croatian cave. So well-preserved were the bones that they yielded enough DNA for sequencing, and it became Reich’s job in 2007 to analyze the DNA for signs that Neanderthals interbred with humans—an idea he was “deeply suspicious” of at the time.

To his surprise, the DNA revealed that humans and Neanderthals did interbreed in their time together in Europe. Possibly even more than once. Today, surprisingly, the people carrying the most Neanderthal DNA are not in Europe but in East Asia—likely due to the patterns of ancient human migration in Eurasia in the thousands of years after Neanderthals died out. All this painted a complicated but dynamic picture of human prehistory. Since the very beginning of our species, humans have been on the move; at times they replaced and at other times they mixed with the local population, first hominids like Neanderthals and later other humans.

Reich has since converted his lab at Harvard Medical School into a “factory” for studying ancient DNA. His new book, Who We Are and How We Got Here, charts the myriad ways the study of ancient DNA is lobbing bombs into the halls of established wisdom. In Europe, for example, ancient DNA is identifying waves of migrations into the continent, in which groups of people serially replaced, or nearly replaced, the local population.

This work is not without controversy, especially as these replacements can be difficult to explain. Reich once had German collaborators drop out of a study when the initial findings seemed to mirror too closely Nazi propaganda about the Aryan race. We discuss this and other aspects of his work below. Our conversation has been condensed and edited for clarity.


Sarah Zhang: You recently published two papers in which you analyzed over 600 genomes from ancient Europeans. In your book, you write that you wanted to “build an American-style genomes factory” and “make ancient DNA industrial.” What does an ancient genome factory look like?

David Reich: What we do in our laboratory is we’ve really focused on trying to make data production efficient. We usually have several people working in parallel in a clean room on turning bones or teeth into powder. The powders are dissolved in a watery solution and the DNA is released and those are turned into a sequenceable form. That’s another step we now do on a robot, which processes 96 samples at once over a period of two days and turns them into sequenceable form. We’re constantly trying to find ways to reduce costs at every stage, so that we can reduce the amount of time it takes to process a sample.

Zhang: How much does it cost to process an ancient DNA sample right now?

Reich: In our hands, a successful sample costs less than $200. That’s only two or three times more than processing a sample from a present-day person. And maybe about one-third to one-half of the samples we screen are successful at this point.

Zhang: Scientists recently reconstructed the face of Cheddar Man, a 10,000-year-old complete skeleton found in Britain. It became a bit of a story because they showed Cheddar Man had dark skin and blue eyes based on his DNA. Why is it that Cheddar Man looks so different from modern Europeans?

Reich: In Europe where we have the best data currently—although that will change over the coming years—we know a lot about how people have migrated. We know of multiple layers of population replacement over the last 50,000 years. Between 41,000 and 39,000 years ago in western Europe, the Neanderthals were replaced by modern human populations. The first modern human samples we have in Europe are about 40,000 years old and are genetically not at all related to present-day Europeans. They seem to be from extinct, dead-end groups.

After that, you see for the first time people related to later European hunter-gatherers who have contributed a little bit to present-day Europeans. That happens beginning 35,000 to 37,000 years ago. Then the ice sheets descend across northern Europe and a lot of these populations are chased into these refuges in the southern peninsulas of Europe. After the Ice Age, there’s a repeopling of northern Europe from the southwest, probably from Spain, and then also from the southeast, probably from Greece and maybe even from Anatolia, Turkey.

Again, after 9,000 years ago, there’s a mass movement of farmers into the region which almost completely replaces the hunter-gatherers with a small amount of mixture.

And then again, after 5,000 years ago, there’s this mass movement at the beginning of the Bronze Age of people from the steppe, who also probably bring these languages that are spoken by the great majority of Europeans today.

So in regard to Cheddar Man, he is from one of these groups that repeopled northern Europe from the south after the Ice Age. And this group was characterized by ancestry related to southeast Europeans. They did not have a lot of the skin-lightening mutations that are present in the first farmers and even more in the steppe pastoralists who come later. They don’t have the blond hair that is characteristic of many northern Europeans today. They do have the blue eyes that are characteristic of this region today. So you have this unusual look of dark skin and blue eyes.

Zhang: You’ve said that ancient DNA has changed the way we see archaeology from these time periods. How so?

Reich: Archaeology has always been political, especially in Europe. Archaeologists are very aware of the misuse of archaeology in the past, in the 20th century. There’s a very famous German archaeologist named Gustaf Kossinna, who was the first or one of the first to come up with the idea of “material culture.” Say, you see similar pots, and therefore you’re in a region where there was shared community and aspects of culture.

He went so far as to argue that when you see the spread of these pots, you’re actually seeing a spread of people and there’s a one-to-one mapping for those things. His ideas were used by the Nazis later, in propaganda, to argue that a particular group in Europe, the Aryans, expanded in all directions across Europe. He believed that the region where these people’s material culture was located is the natural homeland of the Aryan community, and the Germans were the natural inheritors of that. This was used to justify their expansionism in the propaganda that the Germans used in the run-up to the Second World War.

So after the Second World War, there was a very strong reaction in the European archaeological community—not just the Germans, but the broad continental European archaeological community—to the fact that their discipline had been used for these terrible political ends. And there was a retreat from the ideas of Kossinna.

Zhang: You actually had German collaborators drop out of a study because of these exact concerns, right? One of them wrote, “We must(!) avoid ... being compared with the so-called ‘siedlungsarchäologie Method’ from Gustaf Kossinna!”

Reich: Yeah, that’s right. I think one of the things the ancient DNA is showing is actually the Corded Ware culture does correspond coherently to a group of people. [Editor’s note: The Corded Ware made pottery with cord-like ornamentation and according to ancient DNA studies, they descended from steppe ancestry.] I think that was a very sensitive issue to some of our coauthors, and one of the coauthors resigned because he felt we were returning to that idea of migration in archaeology that pots are the same as people. There have been a fair number of other coauthors from different parts of continental Europe who shared this anxiety.

We responded to this by adding a lot of content to our papers to discuss these issues and contextualize them. Our results are actually almost diametrically opposite from what Kossinna thought, because these Corded Ware people come from the East, a place that Kossinna would have despised as a source for them. But nevertheless it is true that there are big population movements, and so I think what the DNA is doing is forcing the hand of this discussion in archaeology, showing that, in fact, major movements of people do occur. They are sometimes sharp and dramatic, and they involve large-scale population replacements over a relatively short period of time. We now can see that for the first time.

What the genetics is finding is often outside the range of what the archaeologists are discussing these days.

Zhang: I think at one point in your book you actually describe ancient DNA researchers as the “barbarians” at the gates of the study of history.

Reich: Yeah.

Zhang: Does it feel that way? Have you gotten into arguments with archaeologists over your findings?

Reich: I think archaeologists and linguists find it frustrating that we’re not trained in the language of archaeology and all these sensitivities like about Kossinna. Yet we have this really powerful tool which is this way of looking at things nobody has been able to look at before.

The point I was trying to make there was that even if we’re not always able to articulate the context of our findings very well, this is very new information, and a serious scholar really needs to take this on board. It’s dangerous. Barbarians may not talk in an educated and learned way, but they have access to weapons and ways of looking at things that other people don’t. And time and again we’ve learned that ignoring barbarians is a dangerous thing to do.

Zhang: As you say, the genetics data is now often ahead of the archaeology, and you keep finding these big, dramatic population replacements throughout human history that can’t yet be fully explained. How should we be thinking about these population replacements? Is there a danger in people interpreting or misinterpreting them as the result of one group’s superiority over another?

Reich: We should think we really don’t know what we’re talking about. When you see these replacements, whether of Neanderthals by modern humans, of Native Americans substantially replaced by Europeans and Africans in the last 500 years, or of the people who built Stonehenge, who were obviously extraordinarily sophisticated, replaced by people from the continent, it doesn’t say something about the innate potential of these people. Rather, it says something about differences in immune systems or cultural mismatches.

Zhang: On the point of immune systems, one of the hypotheses for why people from the steppe were so successful in spreading through Europe is that they brought the bubonic plague with them. Since the plague is endemic to Central Asia, they may have built up immunity but the European farmers they encountered had not.

The obvious parallel is Columbus bringing smallpox and other diseases to the New World, which we think of as this huge, world-changing event. It reminded me that huge migrations replacing previous populations have happened many times before in human history.

Reich: Absolutely. The contact between people from Europe and Africa and the New World was a profound Earth-shattering event for our species, of course, in the last 500 years. But there have been profound and Earth-shattering events, again and again, every few thousand years in our history and that’s what ancient DNA is telling us.

Zhang: You end the book noting that you are optimistic that your work is “exploding stereotypes, undercutting prejudice, and highlighting the connections among peoples not previously known to be related.” I imagine you started writing this a few years ago. Given today’s political climate, are you still as optimistic now as you were when you started writing the book?

Reich: I think so. I know there are extremists who are interested in genealogy and genetics. But I think those are very marginal people, and there’s, of course, a concern they may impinge on the mainstream.

But if you actually take any serious look at this data, it just confounds every stereotype. It’s revealing that the differences among populations we see today are actually only a few thousand years old at most and that everybody is mixed. I think that if you pay any attention to this world, and have any degree of seriousness, then you can’t come out feeling affirmed in the racist view of the world. You have to be more open to immigration. You have to be more open to the mixing of different peoples. That’s your own history.

Policy laundering

From Wikipedia, the free encyclopedia

Policy laundering is the disguising of the origins of political decisions, laws, or international treaties.[1] The term is based on the similar money laundering.

Hiding responsibility for a policy or decision

One common method for policy laundering is the use of international treaties which are formulated in secrecy. Afterwards it is not possible to find out who opted for which part of the treaty. Each person can claim that it was not them who demanded a certain paragraph but that they had to agree to the overall "compromise".

Examples that could be considered as "policy laundering" are WIPO[2][3] or the Anti-Counterfeiting Trade Agreement (ACTA).[4]

Harmonization is the process through which a common set of policies are established across jurisdictions to remove irregularities. Regulations can change in any direction, however: regulations may be pushed to the lowest common denominator; but may equally benefit from the 'California effect', where one regulator pushes for the highest standards, setting models for others to follow. Harmonization requires further interrogation, however. In their review of global business regulation, Braithwaite and Drahos find that some countries (notably the U.S. and the UK) push for certain regulatory standards in international bodies and then bring those regulations home under the requirement of harmonization and the guise of multilateralism; this is what we refer to as policy laundering.

— Ian Hosein, 2004[5]

Hiding the real objective of a policy

One manifestation of policy laundering is claiming a different underlying objective for a policy than is actually the case. The usual reason politicians follow this approach is that the real objective is unpopular with the public. The usual process is to conflate the policy's real objective with an unrelated matter of great public concern; the intervention is then presented as addressing that concern rather than the underlying objective.

Circumventing the regular approval process

Yet another manifestation of policy laundering is implementing legal policy that a subset of legislators desire but could not normally get approved through the regular process.

ACTA is legislation laundering on an international level of what would be very difficult to get through most Parliaments.

— Stavros Lambrinidis, Member of European Parliament, S and D, Greece[6]

An example of policy laundering where law was enacted and enforced despite both state and federal courts declaring the law unconstitutional is the Missouri v. Holland case. At that time, Congress attempted to protect migratory birds by statutory law.[7] However, both state and federal courts declared that law unconstitutional.[8] Not to be denied, authorized parties subsequently negotiated and ratified a treaty with Canada to achieve the same purpose.[9] Once the treaty was in place, Congress then passed the Migratory Bird Treaty Act of 1918 to enforce the treaty.[10] In Missouri v. Holland, the United States Supreme Court upheld the new law as constitutional on the grounds that it implemented the treaty.

It has been suggested[11][12][13][by whom?] that policy laundering has become common political practice in areas related to terrorism and the erosion of civil liberties. In the air-travel industry, an example of policy laundering might be the requirement for passengers to show photographic identification. This was presented as addressing security concerns, but from the airline's perspective the measure had the important effect of ending unauthorised resale by passengers of unused tickets.[14]

See also

References

  1. ^Hosein, Ian (2004). "The Sources of Laws: Policy Dynamics in a Digital and Terrorized World". The Information Society. Taylor & Francis. 20 (3): 187–199. doi:10.1080/01972240490456854. Retrieved 16 October 2014.
  2. ^Herman, Bill D. & Oscar H. Gandy, Jr. (2008). "Catch 1201: A Legislative History and Content Analysis of the DMCA Exemption Proceedings". Cardozo Arts & Entertainment Law Journal. SSRN 844544Freely accessible.
  3. ^Yu, Peter K., The Political Economy of Data Protection, Chicago-Kent Law Review, Vol. 83, 2008
  4. ^Geist, Michael (9 June 2008). "Government Should Lift Veil on ACTA Secrecy".
  5. ^Hosein, Ian, 2004, "International Relations Theories and the Regulation of International Dataflows: Policy Laundering and other International Policy Dynamics"
  6. ^Stavros Lambrinidis, Vice President of European Parliament (2009), S and D, Greece, "Stop Acta"
  7. ^An Act Making Appropriations for the Department of Agriculture for the Fiscal Year Ending June 30, 1914, Act of March 4, 1913, 38 Stat. 828, c. 145, at page 847.
  8. ^United States v. Shauver, 214 Fed. 154 (E.D. Ark. 1914), United States v. McCullagh, 221 Fed. 288 (D.Kan. 1915), State v. Sawyer, 94 A. 886 (Maine 1915), and State v. McCullagh, 153 P. 557 (Kan. 557).
  9. ^Convention for the Protection of Migratory Birds of August 16, 1916, T.S. No. 628, 39 Stat. 1702.
  10. ^Migratory Bird Treaty Act, Act of July 3, 1918, c. 128, 40 Stat. 755. Codified at 18 U.S.C.§703.
  11. ^The Policy Laundering Project
  12. ^ACLU's Stop Policy Laundering Project
  13. ^Barry Steinhardt (ACLU) "The Problem of Policy Laundering"
  14. ^Pierre Lemieux, "Now It Seems We Need a Passport Inside Canada"

External links
