
YouTube AI deletes war crimes evidence as 'extremist material'


YouTube removes video documenting Islamic State's destruction of Nimrud artifacts (AFP)

YouTube is facing criticism after a new artificial intelligence program monitoring "extremist" content began flagging and removing masses of videos and blocking channels that document war crimes in the Middle East.

Middle East Eye, the monitoring organisation Airwars and the open-source investigations site Bellingcat are among a number of sites that have had videos removed for breaching YouTube's Community Guidelines.

The removals began days after Google, which owns YouTube, trumpeted the arrival of an artificial intelligence program that it said could spot and flag "extremist" videos without human involvement.

But since then vast tracts of footage, including evidence used in the Chelsea Manning court case and videos documenting the destruction of ancient artifacts by Islamic State, have been flagged as "extremist" and deleted from its archives.

Orient News, a Syrian opposition news site established in 2008, appears to have had its entire YouTube account removed.

Airwars raised the issue on Twitter after a number of its videos - largely showing footage of air strikes - were removed for violating YouTube guidelines.

Airwars later said its videos had been reviewed and restored, but given 18+ age classifications.

Eliot Higgins, of Bellingcat, made similar complaints.

Middle East Eye has also had a number of videos removed, some of which were later given age restrictions, while others remain removed.

YouTube told MEE in an email that the video "Drone footage by Islamic State shows suicide car attacks on Iraqi forces inside Mosul" was removed and that YouTube had "assigned a Community Guidelines strike, or temporary penalty" to MEE's account.

The same occurred in the case of "Video appears to show Egyptian soldiers carrying out extra-judicial killings". MEE lodged an appeal with YouTube and received this response:

"After further review of the content, we've determined that your video does violate our Community Guidelines and have upheld our original decision. We appreciate your understanding."

Another video, documenting the destruction of Nimrud by IS, which is widely available across the internet, was removed from an MEE staff account, and all appeals were rejected.  

Alexa O’Brien, an American journalist who covered the US prosecution of Wikileaks whistleblower Chelsea Manning, said video used as evidence in the 2013 trial had also been removed for violating YouTube's guidelines.

O'Brien tweeted that, following "reconciliation" with YouTube, another video featuring al-Qaeda was also removed despite also being used as evidence in the trial.

She said that she had received two community strikes and now risked having her account deleted.

'Extremist Tube'

YouTube has faced mounting criticism about the presence of videos featuring "extremist" content from groups like al-Qaeda and the Islamic State, as well as far-right organisations.

In response, the site announced it would be increasing the use of artificial intelligence to monitor and review content online.

In a blog post last week the company announced it had begun "developing and implementing cutting-edge machine learning technology designed to help us identify and remove violent extremism and terrorism-related content in a scalable way".

"The accuracy of our systems has improved dramatically due to our machine learning technology," said the YouTube Team in the blog post.

These tools aren’t perfect, [but] in many cases our systems have proven more accurate than humans 

- Google blog post

"While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed."

It said that more than "75 percent of the videos we've removed for violent extremism over the past month were taken down before receiving a single human flag".

Google also announced that its engineers had developed technology to prevent re-uploads of extremist content, using image-matching techniques.

YouTube operates a three-strike policy: accounts face removal if three of their videos are flagged as violating the Community Guidelines.

Dozens of YouTube accounts related to the Syrian civil war are thought to have been removed by YouTube. Some contained videos with graphic images from the conflict, although many videos just focused on politics and interviews.

'Greater awareness'

Chris Woods, the head of Airwars, told MEE that the trend exhibited by YouTube risked severely undermining work done by Syrian opposition activists.

"I think what’s so troubling about this if we look at the Syrian accounts, this is video chronicling a six or seven-year war, and some of the most important parts of that war from the perspective of Syrians," he said.

He said he was still in negotiations with YouTube over a number of videos. Some videos hosted by Airwars were duplicated on the page of US Central Command, but have not been removed from the latter.

This is video chronicling... some of the most important parts of that war from the perspective of Syrians

- Chris Woods, Airwars

YouTube told MEE that while it could not comment on individual cases, it was not aware that "any particular type of content was being flagged over another".

"Previously we used to rely on humans to flag content, now we're using machine learning to flag content which goes through to a team of trained policy specialists all around the world which will then make decisions," a spokeswoman said.

"So it’s not that machines are striking videos, it’s that we are using machine learning to flag the content which then goes through to humans."

Asked why masses of videos had been removed recently, the spokeswoman said "it could be that we now just have a greater awareness of the content on the platform and that a lot of this content could have been policy-violating and is now coming down".

Controversy over adverts being placed next to "inappropriate" content, including videos of fundamentalist Islamic preachers, white supremacists and Neo-Nazis, led to an exodus of major companies from YouTube in March.

An investigation by the Times newspaper said that UK taxpayer-funded government adverts and adverts from several media and retail companies including Channel 4, the BBC, Argos and L'Oreal had appeared alongside "extremist" content.


Pony Performance Cheatsheet


Performance, it’s a word in the dictionary

If you know what you are doing, Pony can make it easy to write high-performance code. Pony isn’t, however, a performance panacea. There are plenty of things that you as a programmer can do to hurt performance.

We’d categorize most of the information in this document as either “know how computers work” or “know how Pony works.” The former applies to any programming language. What you learn here is probably applicable in other languages that you use every day.

Martin Thompson has an excellent talk, Designing for Performance, about writing performance-sensitive code in the large. We firmly advise you to watch it. Performance tuning can be a massive rabbit hole. If the topic excites you, put “mechanical sympathy” into your favorite search engine; you’ll come up for air in a few months knowing a ton.

All of this is to say: performance is complicated. It’s more art than science. What we present here are rules of thumb. Many are not always good or always bad. Like most things in engineering, there are tradeoffs involved. Be mindful. Be empirical. Be sure to measure the performance of your code before and after you change anything based on this document.

It’s our belief that the best way to get to awesome performance is baby steps. Don’t try to make a ton of changes at once. Make a small performance oriented change. Measure the impact. Repeat. Take one tiny step at a time towards better performance.

And remember, invest your time where it’s valuable. Worrying about possible performance problems in code that gets executed once at startup won’t get you anything. You need to “mind the hot path.” Performance tune your code that gets executed all the time. For example, if you are writing an HTTP server and want to make it high-performance, you definitely should focus on the performance of your HTTP parser, it’s going to get executed on every single request.

If you get stuck, fear not, we have a welcoming community that can assist you.

Pony performance tips

It’s probably your design

A poor design can kill your performance before you ever write a line of code. One of the fastest ways to hurt your performance is by inserting bottlenecks. As an example, imagine a system with X “processing actors” and 1 “aggregating actor.” Work is farmed out to the processing actors, which in turn send their results to the aggregating actor. Can you spot the possible bottleneck? Our performance is going to be bound by the performance of the single aggregating actor. We haven’t written a line of code yet, and we already have a performance problem.
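
Here’s a minimal sketch of that design in Pony; the names Processor and Aggregator are hypothetical, invented purely for illustration:

actor Aggregator
  var _total: U64 = 0

  be collect(result: U64) =>
    // Every result from every Processor lands in this one mailbox.
    _total = _total + result

actor Processor
  be process(work: U64, out: Aggregator) =>
    // The fan-out runs concurrently, but the fan-in serializes here.
    out.collect(work * 2)

No matter how many Processor actors you spin up, every result queues up behind the Aggregator’s single mailbox.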

This pattern plays out over and over in our software systems. For a variety of well-intentioned reasons, we introduce bottlenecks into our designs. And you know what? That’s ok. It’s about trade-offs. Sometimes the performance loss is worth whatever we are gaining. However, we need to be aware of the trade-offs we are making. If we are going to consciously lower our performance, we need to get something at least as valuable in return.

Fast code is highly concurrent. Fast code avoids coordination. Fast code relies on local knowledge instead of global knowledge. Fast code is willing to trade the illusion of consistency for availability and performance. Fast code does this by design.

There’s a lot of material out there about writing high-performance, highly concurrent code; more than most people have time to absorb. Our advice? At least watch the Martin Thompson and Peter Bailis talks below, then, if you are hungry for more, ask the community what else you can learn.

Watch your allocations

Comparatively speaking, creating new objects is expensive: each object requires some amount of memory to be allocated, and those allocations add up. The Pony runtime uses a pool allocator, so creating a bunch of Strings is cheaper than if you had to malloc each one. Allocation is not, however, free. In hot paths, it can get quite expensive.

If you want to write high-performance Pony code, you are going to have to look at where you are allocating objects. For example:

let output = file_name + ":" + file_linenum + ":" + file_linepos + ": " + msg

How many allocations does it take to create output? You need to know. We’ll give you a hint:

// String.add
// commonly seen as "foo" + "bar"
fun add(that: String box): String =>
  """
  Return a string that is a concatenation of this and that.
  """
  let len = _size + that._size
  let s = recover String(len) end
  (consume s)._append(this)._append(that)

The point of this isn’t that you need to know that + on String calls add, which in turn creates a new object. The point is, you need to know how many allocations your code and the code you are calling will trigger. The following code involves far fewer allocations:

let output = recover String(file_name.size()
  + file_linenum.size()
  + file_linepos.size()
  + msg.size()
  + 4) end

output.append(file_name)
output.append(":")
output.append(file_linenum)
output.append(":")
output.append(file_linepos)
output.append(": ")
output.append(msg)
output

It’s going to perform much better than the former. You might find yourself wondering about the first part of the code, what’s going on with:

let output = recover String(file_name.size()
  + file_linenum.size()
  + file_linepos.size()
  + msg.size()
  + 4) end

The answer once again is in the source. In this case in the source for String.append and String.reserve.

fun ref append(seq: ReadSeq[U8], offset: USize = 0, len: USize = -1) =>
  """
  Append the elements from a sequence, starting from the given offset.
  """
  if offset >= seq.size() then
    return
  end

  let copy_len = len.min(seq.size() - offset)
  reserve(_size + copy_len)

  match seq
  | let s: (String box | Array[U8] box) =>
    s._copy_to(_ptr, copy_len, offset, _size)
    _size = _size + copy_len
    _set(_size, 0)
  else
    let cap = copy_len + offset
    var i = offset

    try
      while i < cap do
        push(seq(i)?)
        i = i + 1
      end
    end
  end

fun ref reserve(len: USize) =>
  """
  Reserve space for len bytes. An additional byte will be reserved 
  for the null terminator.
  """
  if _alloc <= len then
    let max = len.max_value() - 1
    let min_alloc = len.min(max) + 1
    if min_alloc <= (max / 2) then
      _alloc = min_alloc.next_pow2()
    else
      _alloc = min_alloc.min(max)
    end
    _ptr = _ptr._realloc(_alloc)
  end  

String.append will make sure that it has enough room to copy the new data onto the string by calling reserve. String.reserve will result in a new allocation if you are trying to reserve more memory than you’ve already allocated. So…

let output = recover String(file_name.size()
  + file_linenum.size()
  + file_linepos.size()
  + msg.size()
  + 4) end

reserves enough memory for our string ahead of time and avoids allocations. The same principle applies to a variety of collections. If you know you need to put 256 items in a collection, allocate space for 256 from the get-go; otherwise, as you add items to the collection, allocations will be triggered.
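
For example, a minimal sketch of pre-sizing an Array, assuming you know the count ahead of time:

// Pre-size the array so the pushes below never trigger a reallocation.
let items = Array[U64](256) // space for 256 elements reserved up front
var i: U64 = 0
while i < 256 do
  items.push(i)
  i = i + 1
end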

“Limit allocations” doesn’t only apply to knowing what happens in the code you are calling. You also need to be aware of what your own code is doing. Design your code to allocate as few objects as possible and to trigger as few allocations from the pool allocator as you can.

You can slaughter your performance if an object grows in size, needs additional space, and has to be copied into a newer, roomier chunk of memory.

Just remember, if you want to maximise performance:

  • You need to know when code you call is allocating new objects.
  • You need to know when code you call results in memory copies.

Some of you probably looked at the String performance enhancement above and thought, “doesn’t the JVM do that for you?” You’d be right. That’s a standard optimization in many languages, and it’s an optimization we are adding to Pony. However, the basic pattern applies: be aware of triggering extra allocations that you don’t need. Your compiler and runtime can add many optimizations to avoid allocations, but they won’t do everything for you. You still need to understand your allocations. They’re called “your allocations” because, in the end, you own them and they end up owning your performance.

Give in to your “primitive obsession”

Many collections of programming “best practices” teach you to avoid “primitive obsession”. This is great advice, normally. It’s not such great advice if you are worried about performance. Let’s take two ways you can represent a short form American zip code:

// primitive obsession!
let zip: U64

// proper OO!
class ZipCode
  let _zip: U64

  new create(zip: U64) =>
    _zip = zip

The problem with our right and proper OO version that avoids primitive obsession is that you are going to end up creating a new object. Do that in a hot path, and it will add up. As we said earlier, watch your allocations!

Perhaps our zip code example feels a little contrived? Here’s another one. It’s a familiar pattern to avoid using floating point numbers to represent monetary values. That’s wise: floating point numbers are not good for accuracy, and most of us want our monetary calculations to be accurate. A standard solution to this problem might look like the Money class below:

class Money
  let _dollars: U64
  let _cents: U8

  new create(dollars: U64, cents: U8) =>
    _dollars = dollars
    _cents = cents

That’s the beginning of a beautiful Money class. However, it’s also the start of a lot of potential allocations. If we give in to our primitive obsession, we can avoid the object allocation by using a tuple:

type Dollars is U64
type Cents is U8
type Money is (Dollars, Cents)
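
Hypothetical usage, showing that a tuple is built and taken apart without any object allocation:

// No constructor runs and nothing lands on the heap here.
let price: Money = (19, 99)
(let dollars, let cents) = price // destructure; still no allocation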

Avoid boxing machine words

Machine words like U8, U16, U32, U64, etc. are going to be better for performance than classes. Machine words have less overhead than classes. However, if we aren’t careful, we can end up “boxing” machine words and adding cost. In hot path code, that boxing can have a large impact.

The easiest way to end up boxing a machine word is by including it in a union type. Take a look at the following code:

class Example
  let _array: Array[U64] = _array.create()

  fun index_for(find: U64): (USize | None) =>
    """
    Return the index position for `find` or `None` if it isn't in `Example`
    """
    for (index, value) in _array.pairs() do
      if value == find then
        return index
      end
    end

    None

This code is clean and quite reasonable. The return type is very clear: we either get a USize or None. The problem is that we need to box the USize that we return, and that boxing has a performance impact. For hot path code, you should give in to your inner C programmer and write the following:

class Example
  let _array: Array[U64] = _array.create()

  fun index_for(find: U64): USize =>
    """
    Return the index position for `find` or `-1` if it isn't in `Example`
    """
    for (index, value) in _array.pairs() do
      if value == find then
        return index
      end
    end

    -1

You probably won’t be proud of that code, but you’ll be proud of the performance improvement you get. The -1 idiom should be quite familiar to folks with a C background. If you aren’t familiar with C, you might be thinking: “Wait, that’s a USize, an unsigned value. What on earth is -1 doing there?” And that’s a good question to ask. The answer is that Pony’s unsigned numeric types wrap around on overflow, so -1 is equivalent to the maximum value of a USize. Keep that in mind, because if you find your value at index 18,446,744,073,709,551,615, you’ll be treating it as not-found. That might be a problem, but we doubt it.

Avoid error

Pony’s error is often confused with exceptions from languages like Java. error, while having some similarities, isn’t the same. Among other differences, error carries no runtime information such as an exception type. It’s also cheaper to set up a Pony error than an exception in languages like Java. Many folks hear that and think, “I can use error without worrying about performance.” Sadly, that isn’t the case.

error has a cost. It’s a good rule of thumb to avoid using error in hot path code. You should instead favor using union types where one value represents your “success” value, and the other represents your “error” value. Below you’ll see two versions of a contrived zero_is_bad function. The first utilizes error; the second is implemented using a union type.

Error version

use "debug"

class Foo
  fun zero_is_bad(i: U64): U64 ? =>
    if i == 0 then
      error
    end

    i * 2

class UsesFoo
  let _foo: Foo = Foo

  fun bar(i: U64) =>
    try
      let x = _foo.zero_is_bad(i)?
      Debug("good things happened " + x.string())
    else
      Debug("error happened")
    end

Union type version

use "debug"

primitive ZeroIsBad

class Foo
  fun zero_is_bad(i: U64): (U64 | ZeroIsBad) =>
    if i == 0 then
      ZeroIsBad
    else
      i * 2
    end

class UsesFoo
  let _foo: Foo = Foo

  fun bar(i: U64) =>
    match _foo.zero_is_bad(i)
    | let x: U64 =>
      Debug("good things happened " + x.string())
    | ZeroIsBad =>
      Debug("zero is bad")
    end

Which is better for performance? Well, it depends. How often is the input to zero_is_bad going to be 0? The more often it is, the worse the error version will perform compared to the union type version. If the i parameter to zero_is_bad is rarely 0, then the error version will perform better than the union type version.

Our union type version contains additional logic that will be executed on every single call. We have to match against the result of zero_is_bad to determine whether we got a U64 or a ZeroIsBad. You are going to pay that cost every single time.

How do you know which is the best version? Well, there is no best version. There is only a version that will work better based on the inputs you are expecting. Pick wisely. Here’s our general rule of thumb: if it’s in hot path code and you expect errors more often than, say, one in millions of calls, you probably want the union type. But again, the only way to know is to test.

By the way, did you notice that our union type version introduced a different problem? It’s boxing the U64 machine word. If zero_is_bad were returning a (FooClass | None), that wouldn’t be an issue. Here, however, it is. Be mindful, when you address one possible performance problem, that you don’t introduce a different one. It’s ok to trade one potential performance problem for another; you just need to be mindful.

Watch the asynchrony!

Actors are awesome. We love them. We wouldn’t be using Pony if we didn’t. However, we do see people getting a little too excited about them. Yes, actors are a unit of concurrency. However, you don’t need to make everything an actor.

Sending a message from one actor to another is pretty fast; note the “pretty fast.” A message send isn’t free. The message has to be placed in the mailbox of the receiving actor, which in turn needs to retrieve it. The multi-producer, single-consumer queue that powers actor mailboxes is highly tuned, but it’s still a queue. Sometimes you don’t need another actor; you just need a class.

In addition to the queue costs you pay with a message send, depending on the contents of your message, you might be incurring additional garbage collection costs as well.
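
As a hedged illustration, here’s a sketch where synchronous work lives in a class and is called directly; the Parser/Connection setup is invented for this example:

use "debug"

class Parser
  fun parse(input: String): USize =>
    input.size() // stand-in for real parsing work

actor Connection
  let _parser: Parser = Parser

  be received(data: String) =>
    // A plain method call: no message send, no mailbox, no queue.
    Debug("parsed " + _parser.parse(data).string())

If Parser were an actor instead, every parse would cost a message send and potentially extra GC traffic, for no concurrency benefit.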

Mind the garbage collector

Things you should know about the Pony garbage collector

  • Each actor has its own heap
  • Actors might GC after each behavior call (never during one)
  • Effectively, Pony programs are constantly, concurrently collecting garbage
  • Garbage collection for an actor’s heap uses a mark-and-don’t-sweep algorithm
  • There are no garbage collection generations
  • When an object is sent from one actor to another, additional messages related to garbage collection have to be sent
  • If you send an object allocated in actor 1 to actor 2 and from there to actor 3, garbage collection messages will flow between actors 1, 2, and 3

Performance implications

There are two ways that Pony’s garbage collection can impact your performance:

  1. Time spent garbage collecting
  2. Time spent sending garbage collection messages between actors

To minimize the impact of garbage collection on your application, you’ll need to address anything that results in longer garbage collection times, and you’ll want to reduce the number of garbage collection messages generated.

Advice:

If you don’t allocate it, you don’t have to collect it. Yes, we mentioned this already; but really, it’s an important component of your application’s performance profile.

  • Avoid sending objects between actors when you can

Remember our earlier primitive obsession conversation? Here’s another example of where primitive obsession can be your performance friend. Sending a “money tuple” like

// "money tuple"
type Dollars is U64
type Cents is U8
type Money is (Dollars, Cents)

from one actor to another will have no garbage collection overhead. Sending a “money class” such as

class Money
  let _dollars: U64
  let _cents: U8

  new create(dollars: U64, cents: U8) =>
    _dollars = dollars
    _cents = cents

will result in some garbage collection overhead. In the small, it won’t make much of a difference, but in a hot-path? Look out. It can make a big difference. If you can send machine words instead of classes, do it.

  • Avoid passing objects along long chains of actors

Sending 2 objects from Actor A to Actor B results in fewer garbage collection messages than sending 1 object from Actor A to Actor B to Actor C. This can lead to some counter-intuitive performance improvements. For some applications that send objects down a long line of actors, it might make sense to create a copy of the object along the way and send the copy, as sketched below. Eventually, the cost of the memory allocation and copying will be less than the overhead from garbage collection messages.

Please note, this is not an issue that is going to impact most applications. It is, however, something you should be aware of.
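
A hedged sketch of that copy-before-forwarding idea; Middle and Final are hypothetical actors invented for illustration:

actor Final
  be deliver(data: String val) =>
    None

actor Middle
  be forward(data: String val, next: Final) =>
    // clone() produces a fresh copy on this actor's heap; the original
    // object stops traveling down the chain here, so GC messages don't
    // have to chase it past this point.
    next.deliver(data.clone())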

  • Avoid holding on to objects that were allocated by another actor

Pony’s garbage collector is a non-generational mark and don’t sweep collector. It will perform best when the number of objects in an actor’s heap is kept low. The more objects on the heap, the longer a garbage collection cycle will take. All mark and don’t sweep collectors share this trait. The complexity of generational garbage collection was added to address problems with long-lived objects.

Issues with larger heap sizes interact interestingly with certain types of Pony applications. Take, for example, a network server. Clients open connections to it over TCP and exchange data. On the server side, data received from clients is allocated in the incoming TCP actors and then sent to other actors as an object or objects of some sort.

If our receiving actors hold onto the objects allocated in the TCP actors for an extended period, the number of objects in their heaps will grow. As the objects held grows, garbage collection times will increase.

Some applications might benefit from having receiving actors copy data once they get it from an incoming TCP actor rather than holding on to the data allocated by the TCP actor. Odds are, your application won’t need to do this, but it’s something to keep in mind.

Maybe you have too many threads

Let’s talk about the Pony scheduler for a moment. When you start up a Pony program, it will create one scheduler thread per available CPU. At least, that is what it does by default. Each of those scheduler threads will remain locked to a particular CPU. Without going into a ton of detail, this is usually the right thing to do for performance.

Pony schedulers use a work-stealing algorithm that allows idle schedulers to take work from other schedulers. In a loaded system, the work-stealing algorithm can keep all the CPUs busy. However, when CPUs are underutilized, performance can suffer. Depending on your program, running with fewer threads might result in better performance. When you run your Pony program, you can pass the --ponythreads=X option to indicate how many scheduler threads the program will create. For some programs, the best choice is --ponythreads=1; this turns off work-stealing and keeps all work on a single CPU, which can sometimes provide a nice performance boost thanks to CPU caches.

We suggest you try the following:

  • Run your program under your expected workload.
  • Start with 1 scheduler thread and work your way up to the number of CPUs you have available.
  • Measure your performance at each --ponythreads setting.
  • Use the number of scheduler threads that gives you the best performance.
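
For instance, a hypothetical sweep on a 8-core box (program name invented) might look like:

./my-pony-program --ponythreads=1
./my-pony-program --ponythreads=2
./my-pony-program --ponythreads=4
./my-pony-program --ponythreads=8

Measure each run under the same workload and keep the setting that wins.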

Work is ongoing to improve the work-stealing scheduler. Feel free to check in on the developer mailing list to get an update.

Pin your scheduler threads

Multitasking operating systems are wondrous things. Without one, I wouldn’t be able to write up these tips while listening to obscure Beastie Boys remixes. For me, at this moment in time, having multiple programs running at once is an awesome thing. There are times though when multitasking operating systems can be annoying.

Much as the Pony scheduler schedules actors so they all get time on a CPU, your operating system does the same with running programs. And this can be problematic for performance. If we want the best performance from Pony programs, we want them to have access to CPUs as much as possible, and we want each scheduler thread to have sole access to its particular CPU.

Modern CPU architectures feature a hierarchical layering of caches. The caches are used to hold data that the CPU needs. The closer the cache is to the CPU, the faster it can execute operations on that data. In our ideal world, the data we need is always in the caches the closest to the CPU. We don’t live in an ideal world, but we can do things to bring us closer to our ideal world.

One of those things is reserving CPUs for our programs. The benefits are twofold:

  • Our application never loses CPU time to some other process
  • Another process using the CPU doesn’t evict our data from CPU caches

If your operating system supports process pinning, we suggest you use it. What you want to do is set your operating system and all “non-essential” programs to share 1, perhaps 2 CPUs; this frees up all your remaining CPUs for use by your Pony program.

On Linux, you’ll want to use cset to pin your Pony programs. Let’s have a look at an example:

sudo cset proc -s user -e numactl -- -C 1-4,17 chrt -f 80 \
  ./my-pony-program --ponythreads 4 --ponypinasio

This isn’t a complete cset tutorial so I’m only going to focus on one option and I’ll leave the rest to your investigation. Note the -C 1-4,17; this will make CPUs 1 to 4 plus CPU 17 available to our program my-pony-program.

--ponythreads 4 --ponypinasio

And those two additional options to our program? We’ve seen --ponythreads before. In this case, we are running with 4 scheduler threads. They will have exclusive access to CPUs 1 to 4. What about --ponypinasio and CPU 17?

In addition to scheduler threads, each Pony program also has an ASIO thread that handles asynchronous IO events. By supplying the --ponypinasio option, our ASIO thread will be pinned to a single CPU. Which CPU? Whichever available CPU has the highest number. To sum up:

// Set aside 5 CPUs for this program
-C 1-4,17
// Run 4 scheduler threads and pin the ASIO thread
--ponythreads 4 --ponypinasio

Tune your operating system

Depending on what your program does, tuning your operating system might yield good results. Operating systems like Linux expose a variety of options that you can use to optimize them. The internet is awash in documents purporting to give you settings that will lower network latency, raise network throughput, or improve application latency. Beware of every single one of those documents. Even if they were written by a knowledgeable person, they weren’t written with your specific hardware, your specific operating system, or your particular application in mind.

Now, warning aside, there’s plenty you can learn about tuning your operating system, and it can sometimes have a large impact on your application. Just remember, mindlessly turning knobs you don’t understand isn’t likely to make things better. Do your research. Understand what you are doing. Be empirical; measure and then measure again.

Build from source

The pre-built Pony packages are quite conservative with the optimizations they apply. To get the best performance, you should build your compiler from source. By default, Pony will then take advantage of features of your CPU like AVX/AVX2. There are additional build-time options worth experimenting with as well.

And last but not least, make sure you build a release version of the compiler and that your Pony binary wasn’t compiled with --debug.

Profile it!

Intuitions about program performance are often wrong. The only way to find out for sure is to measure. You are going to need to profile your code. It will help you find hotspots. You can use standard profiling tools like Instruments and VTune on your Pony application. At the moment, we don’t have a handy guide to help you interpret the results you are getting, but we have one in the works. In the meantime, if you need help, the community is waiting to help.

Cathleen Morawetz has died


“The search for shock-free airfoils is sort of futile,” said Jonathan Goodman, a math professor and one of Dr. Morawetz’s colleagues at New York University’s Courant Institute of Mathematical Sciences. Dr. Morawetz’s paper on the subject, he said, was “a beautiful proof.”

[Photo: Dr. Morawetz at New York University’s commencement in 2007. Credit: Phil Gallo/New York University Photo Bureau]

With that insight, aerospace engineers now design wings to minimize shocks rather than trying to eliminate them.

In later work Dr. Morawetz studied the scattering of waves off objects. She invented a method to prove what is known as the Morawetz inequality, which describes the maximum amount of wave energy near an object at a given time. It proves that wave energy scatters rather than lingering near the object indefinitely.

“She did some very nice things that are still quoted today,” said Louis Nirenberg, a New York University mathematician who first met Dr. Morawetz as a graduate student.

He said he attended a general relativity conference a few weeks ago. “People there were using her inequalities,” he said.

Cathleen Synge was born on May 5, 1923, in Toronto, the daughter of Irish immigrants. Her father, John Lighton Synge, was a physicist and mathematician known for research that used a geometric approach to study Einstein’s theory of general relativity. Her mother, the former Elizabeth Eleanor Mabel Allen, had been a math major in college but dropped out when she married. Dr. Morawetz credited her mother with encouraging her to have a career.

She earned a bachelor’s degree in mathematics at the University of Toronto in 1945, the same year she married Herbert Morawetz, a polymer chemist.

She toyed with the idea of going to India as a teacher, but a Toronto math professor who was a family friend persuaded her to go to graduate school instead. She received a master’s degree in mathematics at the Massachusetts Institute of Technology the next year and a doctorate at New York University in 1951. She wrote her thesis about imploding shock waves.

After a postdoctoral fellowship at M.I.T., she returned to New York University. She worked part time, supported by Navy contracts, before she was offered an assistant professorship in 1957. She spent the rest of her career at the university, including serving as the director of the Courant Institute from 1984 to 1988.

Dr. Morawetz was a member of the National Academy of Sciences and a fellow of the American Academy of Arts and Sciences. In 1995 she became president of the American Mathematical Society, and in 1998 she became the first female mathematician to receive a National Medal of Science.

In addition to her husband, Dr. Morawetz is survived by three daughters, Pegeen Rubinstein, Lida Jeck and Nancy Morawetz; a son, John; a sister, Isabel Seddon; six grandchildren; three great-grandchildren; and four step-grandchildren.

In an interview with the journal Science in 1979, Dr. Morawetz recalled that when her children were young — a time when few women pursued professional careers — people often asked whether she worried about them while she was at work.

Her reply: “No, I’m much more likely to worry about a theorem when I’m with my children.”


Worldbuilding


Introduction

Dr. John D. Clark

THE SILICONE WORLD

1. THE STAR AND ITS MOST IMPORTANT PLANET

The planet is named Uller (it seems that when interstellar travel was developed, the names of Greek gods had been used up, so those of Norse gods were used). It is the second planet of the star Beta Hydri, right ascension 0:23, declination -77:32, a G0 (solar) type star of approximately the same size as Sol; distance from Earth, 21 light years.

Uller revolves around it in a nearly circular orbit, at a distance of 100,000,000 miles, making it a little colder than Earth. A year is of the approximate length of that on Earth. A day lasts 26 hours.

The axis of Uller is in the same plane as the orbit, so that at a certain time of the year the north pole is pointed directly at the sun, while at the opposite end of the orbit it points directly away. The result is highly exaggerated seasons. At the poles the temperature runs from 120°C to a low of -80°C. At the equator it remains not far from 10°C all year round. Strong winds blow during the summer and winter, from the hot to the cold pole; few winds during the spring and fall. The appearance of the poles varies during the year from baked deserts to glaciers covered with solid CO2. Free water exists in the equatorial regions all year round.

2. SOLAR MOVEMENT AS SEEN FROM ULLER

As seen from the north pole—no sun is visible on Jan. 1. On April 1, it bisects the horizon all day, swinging completely around. April 1 to July 1, it continues swinging around, gradually rising in the sky, the spiral converging to its center at the zenith, which it reaches July 1. From July 1 to October 1 the spiral starts again, spreading out from the center until on October 1 it bisects the horizon again. On October 1 night arrives to stay until April 1.

At the equator, the sun is visible bisecting the southern horizon for all 26 hours of the day on January 1. From January 1 to April 1, the sun starts to dip below the horizon at night, to rise higher above it during the day. During all this time it rises and sets at the same hours, but rises in the southeast and sets in the southwest. At noon it is higher each day in the southern sky until April 1, when it rises due east, passes through the zenith and sets due west. From April 1 to July 1, its noon position drops down to the north, until on July 1, it is visible all day, bisected by the northern horizon.

3. CHEMISTRY AND GEOLOGY OF ULLER

Calcium and chlorine are rarer than on earth, sodium is somewhat commoner. As a result of the shortage of calcium there is a higher ratio of silicates to carbonates than exists on earth. The water is slightly alkaline and resembles a very dilute solution of sodium silicate (water glass). It would have a pH of 8.5 and would taste slightly soapy. Also, when it dries out it leaves a sticky, and then a glassy, crackly film. Rocks look fairly earthlike, but the absence or scarcity of anything like limestone is noticeable. Practically all the sedimentary rocks are of the sandstone type.

All rivers are seasonal, running from the polar regions to the central seas in the spring only, or until the polar cap is completely dried out.

4. ANIMAL LIFE

As on Earth, life arose in the primitive waters with a carbon base, but because of the abundance of silicone, there was a strong tendency for the microscopic organisms to develop silicate exoskeletons, like diatoms. The present invertebrate animal life of the planet is of this type and is confined to the equatorial seas. They run from amoeba-like objects to things like crayfish, with silicate skeletons. Later, some species of them started taking silicone into their soft tissues, and eventually their carbon-chain compounds were converted to silicone-type chains [structural diagram omitted in the original], with organic radicals on the side links. These organisms were a transitional type, with silicone tissues and water body fluids, resembling the earthly amphibians, and are now practically extinct. There are a few species, something like segmented worms, still to be seen in the backwaters of the central seas.

A further development occurred when the silicone chain animals began to get short-chain silicones into their circulatory systems, held in solution by OH or NH2 groups on the ends and branches of the chains. The proportion of these compounds gradually increased until the water was a minor and then a missing constituent. The larger mobile species were, then, practically anhydrous. Their blood consists of short-chain silicones, with quartz reinforcing for the soft parts and their armor, teeth, etc., of pure amorphous quartz (opal). Most of these parts are of the milky variety, variously tinted with metallic impurities, as are the varieties of sapphires.

These pure silicone animals, due to their practical indestructibility, annihilated all but the smaller of the carbon animals, and drove the compromise types into odd corners as relics. They developed into a fish-like animal with a very large swim-bladder to compensate for the rather higher density of the silicone tissues, and from these fish the land animals developed. Due to their high density and resulting high weight, they tend to be low on the ground, rather reptilian in look. Three pairs of legs are usual in order to distribute the heavy load. There is no sharp dividing line between the quartz armor and the silicone tissue. One merges into the other.

The dominant pure silicone animals only could become mobile and venture far from the temperate equatorial regions of Uller, since they neither froze nor stiffened with cold, nor became incapacitated by heat. Note that all animal life is cold-blooded, with a negligible difference between body and ambient temperatures. Since the animals are silicones, they don't get sluggish like cold snakes.

5. PLANT LIFE

The plants are of the carbon-metabolism, silicate-shell type, like the primitive animals. They spread out from the equator as far as they could go before the baking polar summers killed them. They have normal seasonal growth in the temperate zones and remain dormant and frozen in the winter. At the poles there is no vegetation, not because of the cold winter, but because of the hot summer. The winter winds frequently blow over dead trees and roll them as far as the equatorial seas. Other dead vegetation, because of the highly silicious water, always gets petrified unless it is eaten first. What with the quartz-speckled hides of the living vegetation and the solid quartz of the dead, a forest is spectacular.

The silicone animals live on the plants. They chew them up, dehydrate them, and convert their silicious outer bark and carbonaceous interiors into silicones for themselves. When silicone tissue is metabolized, the carbon and hydrogen go to CO2 and H2O, which are breathed out, while the silicone goes into SiO2, which is deposited as more teeth and armor. (Compare the terrestrial octopus, which makes armor-plating out of calcium urate instead of excreting urea or uric acid.) The animals can, of course, eat each other too, or make a meal of the small carbonaceous animals of the equatorial seas.

Further note that the animals cannot digest plants when they are cold. They can eat them and store them, but the disposal of the solid water and CO2 is too difficult a problem. When they warm up, the water in the plants melts and can be disposed of, and things are simpler.


II

THE FLUORINE PLANET

1. THE STAR AND PLANET

The planet named Niflheim is the fourth planet of Nu Puppis, right ascension 6:36, declination -43:09; a B8 type star, blue-white and hot, 148 light years distant from Earth, which will require a speed in excess of light to reach it.

Niflheim is 462,000,000 miles from its primary, a little less than the distance of Jupiter from our sun. It thus does not receive too great a total amount of energy, but what it does receive is of high potential, a large fraction of it being in the ultra-violet and higher frequencies. (Watch out for really super-special sunburn, etc., on unwarned personnel.)

The gravity of Niflheim is approximately 1 g, the atmospheric pressure approximately 1 atmosphere, and the average ambient temperature about -60°C; -76°F.

2. ATMOSPHERE

The oxidizer in the atmosphere is free fluorine (F2) in a rather low concentration, about 4 or 5 percent. With it appears a mad collection of gases. There are a few inert diluents, such as N2 (nitrogen), argon, helium, neon, etc., but the major fraction consists of CF4 (carbon tetrafluoride), BF3 (boron trifluoride), SiF4 (silicon tetrafluoride), PF5 (phosphorus pentafluoride), SF6 (sulphur hexafluoride) and probably others. In other words, the fluorides of all the non-metals that can form fluorides. The phosphorus pentafluoride rains out when the weather gets cold. There is also free oxygen, but no chlorine; that would be liquid except in very hot weather. It sometimes appears combined with fluorine in chlorine trifluoride. The atmosphere has a slight yellowish tinge.

3. SOIL AND GEOLOGY

Above the metallic core of the planet, the lithosphere consists exclusively of fluorides of the metals. There are no oxides, sulfides, silicates or chlorides. There are small deposits of such things as bromine trifluoride, but these have no great importance. Since fluorides are weak mechanically, the terrain is flattish. Nothing tough like granite to build mountains out of. Since the fluoride ion is colorless, the color of the soil depends upon the predominant metal in the region. As most of the light metals also have colorless ions, the colored rocks are rather rare.

4. THE WATERS UNDER THE EARTH

They consist of liquid hydrofluoric acid (HF). It melts at -83°C and boils at 19.4°C. In it are dissolved varying quantities of metallic and non-metallic fluorides, such as boron trifluoride, sodium fluoride, etc. When the oceans and lakes freeze, they do so from the bottom up, so there is no layer of ice over free liquid.

5. PLANTS AND PLANT METABOLISM

The plants function by photosynthesis, taking HF as water from the soil and carbon tetrafluoride as the equivalent of carbon dioxide from the air to produce chain compounds [structural formula omitted in the original], at the same time liberating free fluorine. This reaction could only take place on a planet receiving lots of ultra-violet, because so much energy is needed to break up carbon tetrafluoride and hydrofluoric acid. The plant catalyst (doubling for the magnesium in chlorophyll) is nickel. The plants are colored in various ways. They get their metals from the soil.

6. ANIMALS AND ANIMAL METABOLISM

Animals depend upon two main reactions for their energy and for the construction of their harder tissues. The soft tissues are about the same as the plant molecules, but the hard tissues are produced by a reaction [equation omitted in the original] resulting in a teflon-boned and -shelled organism. He's going to be tough to do much with. Diatoms leave strata of powdered teflon. The main energy reaction is likewise given as an equation [omitted in the original].

The blood catalyst metal is titanium, which results in colorless arterial blood and violet venous blood, as the titanium flips back and forth between the tri- and tetravalent states.

7. EFFECT ON INTRUDING ITEMS

Water decomposes into oxygen and hydrofluoric acid. All organic matter (earth type) converts into oxygen, carbon tetrafluoride, hydrofluoric acid, etc., with more or less speed. A rubber gas mask lasts about an hour. Glass first frosts and then disappears. Plastics act like rubber, only a little slower. The heavy metals, iron, nickel, copper, monel, etc., stand up well, forming an insoluble coat of fluorides at first and then doing nothing else.

8. WHY GO THERE?

Large natural crystals of fluorides, such as calcium difluoride, titanium tetrafluoride, zirconium tetrafluoride, are extremely useful in optical instruments of various forms. Uranium appears as uranium hexafluoride, all ready for the diffusion process. Compounds of such non-metals as boron are obtainable from the atmosphere in high purity with very little trouble. All metallurgy must be electrical. There are considerable deposits of beryllium, which occurs in high concentration in its ores.

Show HN: Extension-blocking domains removed by threat from other blacklists


[Logo: Barb the Bear]

Introduction

BarbBlock is a content blocking extension for Google Chrome. It blocks requests to sites which have used DMCA takedowns to force removal from other blacklists. Such takedowns are categorically invalid, but they can be effective at intimidating small open-source projects into compliance.

BarbBlock was created in response to a troubling instance where a company used the DMCA takedown process to force a domain blacklist to remove its domain. In reaction to this, some people added the domain to their personal blacklists, even those who weren't blocking it before the takedown. This phenomenon is called the Streisand Effect, and it (indirectly) gives BarbBlock its name. In essence, this extension exists to automate the Streisand effect.

The initial release of BarbBlock blocks the domain in question, functionalclam.com. If DMCA takedowns continue to be misused for blacklist removals, the extension will be updated to cover other domains as well. Updates are automatic through the Chrome Web Store.

As the maintainer of this extension, I pledge to dispute any takedown that comes to this repository. This is not my first DMCA-takedown rodeo 😉. I also pledge to only add domains which have attempted to remove themselves from other blacklists through legal threats, including (but not limited to) "Cease and Desist" letters and DMCA takedowns.

Goals

I intend to accomplish a few things with this project.

  1. By calling the bluff of DMCA takedown notices, I hope to show that the takedown filers know their takedowns are meritless and would not stand up in court.
  2. If the extension gains significant traction, it will provide a disincentive for companies to issue takedowns in the first place. As a Chrome extension, the number of users is more quantitatively verifiable than a bunch of users independently adding domains to their blacklist.

Installation

Install BarbBlock from the Chrome Web Store.

Adding to the Blacklist

Create an issue with the domains and the label blacklist. In the issue description, add a link to a DMCA takedown notice if available, or else a notice from your service provider that they have received a takedown request.

C++17 Features and STL Fixes in VS 2017 15.3


Visual Studio 2017’s first toolset update, version 15.3, is currently in preview and will be released in its final form very soon. (The toolset consists of the compiler, linker, and libraries. After VS 2017 RTM, the 15.1 and 15.2 updates improved the IDE. The 15.3 update improves both the IDE and the toolset. In general, you should expect the IDE to be updated at a higher frequency than the toolset.)

As usual, we’ve maintained a detailed list of the STL fixes that are available in the 15.3 update. We also have newer feature tables for the STL and the compiler.

New Features (in addition to C++17 features):

* The STL no longer depends on Magic Statics, allowing clean use in code compiled with /Zc:threadSafeInit-.

* Implemented P0602R0 “variant and optional should propagate copy/move triviality”.

* The STL now officially tolerates dynamic RTTI being disabled via /GR-. dynamic_pointer_cast() and rethrow_if_nested() inherently require dynamic_cast, so the STL now marks them as =delete under /GR-.

* Even when dynamic RTTI has been disabled via /GR-, “static RTTI” (in the form of typeid(SomeType)) is still available and powers several STL components. The STL now supports disabling this too, via /D_HAS_STATIC_RTTI=0. Note that this will disable std::any, std::function’s target() and target_type(), and shared_ptr’s get_deleter().

Correctness Fixes:

* STL containers now clamp their max_size() to numeric_limits<difference_type>::max() rather than size_type’s max. This ensures that the result of distance() on iterators from that container is representable in the return type of distance().

* Fixed missing specialization auto_ptr<void>.

* The meow_n() algorithms previously failed to compile if the length argument was not an integral type; they now attempt to convert non-integral lengths to the iterators’ difference_type.

* normal_distribution<float> no longer emits warnings inside the STL about narrowing from double to float.

* Fixed some basic_string operations which were comparing with npos instead of max_size() when checking for maximum size overflow.

* condition_variable::wait_for(lock, relative_time, predicate) would wait for the entire relative time in the event of a spurious wake. Now, it will wait for only a single interval of the relative time.

* future::get() now invalidates the future, as the standard requires.

* iterator_traits<void *> used to be a hard error because it attempted to form void&; it now cleanly becomes an empty struct to allow use of iterator_traits in “is iterator” SFINAE conditions.

* Some warnings reported by Clang -Wsystem-headers were fixed.

* Also fixed “exception specification in declaration does not match previous declaration” reported by Clang -Wmicrosoft-exception-spec.

* Also fixed mem-initializer-list ordering warnings reported by Clang and C1XX.

* The unordered containers did not swap their hashers or predicates when the containers themselves were swapped. Now they do.

* Many container swap operations are now marked noexcept (as our STL never intends to throw an exception when detecting the non-propagate_on_container_swap non-equal-allocator undefined behavior condition).

* Many vector<bool> operations are now marked noexcept.

* The STL will now enforce matching allocator value_types (in C++17 mode) with an opt-out escape hatch.

* Fixed some conditions where self-range-insert into basic_strings would scramble the strings’ contents. (Note: self-range-insert into vectors is still prohibited by the Standard.)

* basic_string::shrink_to_fit() is no longer affected by the allocator’s propagate_on_container_swap.

* std::decay now handles abominable function types (i.e. function types that are cv-qualified and/or ref-qualified).

* Changed include directives to use proper case sensitivity and forward slashes, improving portability.

* Fixed warning C4061 “enumerator ‘Meow’ in switch of enum ‘Kitten’ is not explicitly handled by a case label”. This warning is off-by-default and was fixed as an exception to the STL’s general policy for warnings. (The STL is /W4 clean, but does not attempt to be /Wall clean. Many off-by-default warnings are extremely noisy and aren’t intended to be used on a regular basis.)

* Improved std::list’s debug checks. List iterators now check operator->(), and list::unique() now marks iterators as invalidated.

* Fixed uses-allocator metaprogramming in tuple.

Performance/Throughput Fixes:

* Worked around interactions with noexcept which prevented inlining std::atomic’s implementation into functions that use Structured Exception Handling (SEH).

* Changed the STL’s internal _Deallocate() function to optimize into smaller code, allowing it to be inlined into more places.

* Changed std::try_lock() to use pack expansion instead of recursion.

* Improved std::lock()’s deadlock avoidance algorithm to use lock() operations instead of spinning on all the locks’ try_lock()s.

* Enabled the Named Return Value Optimization in system_category::message().

* conjunction and disjunction now instantiate N + 1 types, instead of 2N + 2 types.

* std::function no longer instantiates allocator support machinery for each type-erased callable, improving throughput and reducing .obj size in programs that pass many distinct lambdas to std::function.

* allocator_traits<std::allocator> contains manually inlined std::allocator operations, reducing code size in code that interacts with std::allocator through allocator_traits only (i.e. most code).

* The C++11 minimal allocator interface is now handled by the STL calling allocator_traits directly, instead of wrapping the allocator in an internal class _Wrap_alloc. This reduces the code size generated for allocator support, improves the optimizer’s ability to reason about STL containers in some cases, and provides a better debugging experience (as now you see your allocator type, rather than _Wrap_alloc<your allocator type> in the debugger).

* Removed metaprogramming for customized allocator::reference, which allocators aren’t actually allowed to customize. (Allocators can make containers use fancy pointers but not fancy references.)

* The compiler front-end was taught to unwrap debug iterators in range-based for-loops, improving the performance of debug builds.

* basic_string’s internal shrink path for shrink_to_fit() and reserve() is no longer in the path of reallocating operations, reducing code size for all mutating members.

* basic_string’s internal grow path is no longer in the path of shrink_to_fit().

* basic_string’s mutating operations are now factored into non-allocating fast path and allocating slow path functions, making it more likely for the common no-reallocate case to be inlined into callers.

* basic_string’s mutating operations now construct reallocated buffers in the desired state rather than resizing in place. For example, inserting at the beginning of a string now moves the content after the insertion exactly once (either down or to the newly allocated buffer), instead of twice in the reallocating case (to the newly allocated buffer and then down).

* Operations calling the C standard library in <string> now cache errno’s address to remove repeated interaction with TLS.

* Simplified is_pointer’s implementation.

* Finished changing function-based Expression SFINAE to struct/void_t-based.

* STL algorithms now avoid postincrementing iterators.

* Fixed truncation warnings when using 32-bit allocators on 64-bit systems.

* std::vector move assignment is now more efficient in the non-POCMA non-equal-allocator case, by reusing the buffer when possible.
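
To make the std::try_lock() change concrete, here is a minimal sketch of the pack-expansion technique (our own illustration, not MSVC’s actual implementation):

```cpp
#include <initializer_list>

// Like std::try_lock(): returns -1 if every lockable was locked,
// otherwise the zero-based index of the first failure, after unlocking
// everything acquired before it.
template <class... Lockables>
int try_lock_sketch(Lockables&... lockables) {
    int failed = -1;
    int index = 0;
    // Expanding the pack into a braced init-list guarantees left-to-right
    // evaluation; after the first failure the remaining entries no-op.
    (void)std::initializer_list<int>{
        (failed < 0
             ? (lockables.try_lock() ? (void)++index : (void)(failed = index))
             : (void)0,
         0)...};
    if (failed >= 0) {
        int i = 0;
        // Roll back the successfully acquired locks, again via expansion.
        (void)std::initializer_list<int>{
            (i++ < failed ? lockables.unlock() : (void)0, 0)...};
    }
    return failed;
}
```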
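
The conjunction/disjunction improvement preserves short-circuiting, which the following example (ours) demonstrates: the second trait is named but never instantiated, so the assertion compiles.

```cpp
#include <type_traits>

// A trait that is a hard error to instantiate.
template <class T>
struct Explodes {
    static_assert(sizeof(T) == 0, "must never be instantiated");
    static constexpr bool value = true;
};

// conjunction stops at the first false trait, so Explodes<int> is never
// instantiated, and far fewer intermediate types are generated.
static_assert(!std::conjunction_v<std::false_type, Explodes<int>>);
```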
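
For reference, the C++11 minimal allocator interface mentioned above is tiny; here is a sketch of a conforming minimal allocator (our example):

```cpp
#include <cstddef>
#include <new>
#include <vector>

// value_type, allocate, deallocate, a rebinding constructor, and
// equality; allocator_traits supplies everything else a container needs.
template <class T>
struct MinimalAlloc {
    using value_type = T;

    MinimalAlloc() = default;
    template <class U>
    MinimalAlloc(const MinimalAlloc<U>&) noexcept {}

    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) noexcept { ::operator delete(p); }
};

template <class T, class U>
bool operator==(const MinimalAlloc<T>&, const MinimalAlloc<U>&) noexcept {
    return true;
}
template <class T, class U>
bool operator!=(const MinimalAlloc<T>&, const MinimalAlloc<U>&) noexcept {
    return false;
}

// With _Wrap_alloc gone, the debugger shows MinimalAlloc<int> here,
// not _Wrap_alloc<MinimalAlloc<int>>.
std::vector<int, MinimalAlloc<int>> v;
```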

Readability And Other Improvements:

* The STL now uses C++14 constexpr unconditionally, instead of conditionally-defined macros.

* The STL now uses alias templates internally.

* The STL now uses nullptr internally, instead of nullptr_t{}. (Internal usage of NULL has been eradicated. Internal usage of 0-as-null is being cleaned up gradually.)

* The STL now uses std::move() internally, instead of stylistically misusing std::forward().

* Changed static_assert(false, “message”) to #error message. This improves compiler diagnostics because #error immediately stops compilation.

* The STL no longer marks functions as __declspec(dllimport). Modern linker technology no longer requires this.

* Extracted SFINAE to default template arguments, which reduces clutter compared to return types and function argument types; an example of the pattern appears after this list.

* Debug checks in <random> now use the STL’s usual machinery, instead of the internal function _Rng_abort(), which called fputs() to stderr. The function’s implementation is being retained for binary compatibility, but it has already been removed from the next binary-incompatible version of the STL.
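
The “SFINAE in default template arguments” pattern looks like this (an illustrative example of ours, not the STL’s actual code):

```cpp
#include <type_traits>

// Before: SFINAE buried in the return type.
template <class T>
std::enable_if_t<std::is_integral<T>::value, T> twice_old(T t) {
    return t + t;
}

// After: SFINAE moved to a default template argument, leaving both the
// return type and the signature readable.
template <class T, std::enable_if_t<std::is_integral<T>::value, int> = 0>
T twice_new(T t) {
    return t + t;
}
```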

STL Feature Status:

We’re going to continue to add new features to VS 2017 in toolset updates, and we’re working on the second toolset update right now. While we can’t reveal its version number or provide an ETA, we can show you which features have already been implemented (and this list will continue to grow). For now, we’ll refer to the second toolset update as “VS 2017 15.x” (please don’t try to guess what x is; you’ll just create confusion).

| Status | Std | Paper | Title | Notes |
| --- | --- | --- | --- | --- |
| missing | C++20 | P0463R1 | endian | |
| missing | C++20 | P0674R1 | make_shared() For Arrays | |
| missing | C++17 | P0433R2 | Deduction Guides For The STL | |
| patch | C++17 | P0739R0 | Improving Class Template Argument Deduction For The STL | [DR] |
| missing | C++17 | P0607R0 | Inline Variables For The STL (Options A and B2) | |
| missing | C++17 | P0426R1 | constexpr For char_traits | |
| missing | C++17 | P0083R3 | Splicing Maps And Sets | |
| patch | C++17 | P0508R0 | Clarifying insert_return_type | |
| missing | C++17 | P0067R5 | Elementary String Conversions | |
| patch | C++17 | P0682R1 | Repairing Elementary String Conversions | [DR] |
| | C++17 | P0220R1 | Library Fundamentals V1 | |
| missing | C++17 | … | <memory_resource> | |
| patch | C++17 | P0337R0 | Deleting polymorphic_allocator Assignment | |
| missing | C++17 | P0030R1 | hypot(x, y, z) | |
| missing | C++17 | P0226R1 | Mathematical Special Functions | |
| missing | C++17 | P0024R2 | Parallel Algorithms | [parallel] |
| patch | C++17 | P0336R1 | Renaming Parallel Execution Policies | |
| patch | C++17 | P0394R4 | Parallel Algorithms Should terminate() For Exceptions | |
| patch | C++17 | P0452R1 | Unifying <numeric> Parallel Algorithms | |
| patch | C++17 | P0467R2 | Requiring Forward Iterators In Parallel Algorithms | |
| patch | C++17 | P0502R0 | Parallel Algorithms Should terminate() For Exceptions, Usually | |
| patch | C++17 | P0518R1 | Copying Trivially Copy Constructible Elements In Parallel Algorithms | |
| patch | C++17 | P0523R1 | Relaxing Complexity Requirements Of Parallel Algorithms (General) | |
| patch | C++17 | P0574R1 | Relaxing Complexity Requirements Of Parallel Algorithms (Specific) | |
| patch | C++17 | P0623R0 | Final C++17 Parallel Algorithms Fixes | |
| missing | C++17 | P0218R1 | <filesystem> | |
| patch | C++17 | P0219R1 | Relative Paths For Filesystem | |
| patch | C++17 | P0317R1 | Directory Entry Caching For Filesystem | |
| patch | C++17 | P0392R0 | Supporting string_view In Filesystem Paths | |
| patch | C++17 | P0430R2 | Supporting Non-POSIX Filesystems | |
| patch | C++17 | P0492R2 | Resolving NB Comments For Filesystem | |
| VS 2017 15.x | C++17 | P0003R5 | Removing Dynamic Exception Specifications | [rem] |
| VS 2017 15.x | C++17 | P0005R4 | not_fn() | |
| VS 2017 15.x | C++17 | P0033R1 | Rewording enable_shared_from_this | [14] |
| VS 2017 15.x | C++17 | P0174R2 | Deprecating Vestigial Library Parts | [depr] |
| VS 2017 15.x | C++17 | P0302R1 | Removing Allocator Support In std::function | [rem] |
| VS 2017 15.x | C++17 | P0358R1 | Fixes For not_fn() | |
| VS 2017 15.x | C++17 | P0414R2 | shared_ptr<T[]>, shared_ptr<T[N]> | [14] |
| VS 2017 15.x | C++17 | P0497R0 | Fixing shared_ptr For Arrays | [14] |
| VS 2017 15.x | C++17 | P0521R0 | Deprecating shared_ptr::unique() | [depr] |
| VS 2017 15.x | C++17 | P0618R0 | Deprecating <codecvt> | [depr] |
| VS 2017 15.3 | C++17 | … | Boyer-Moore search() | |
| VS 2017 15.3 | C++17 | P0031R0 | constexpr For <array> (Again) And <iterator> | |
| VS 2017 15.3 | C++17 | P0040R3 | Extending Memory Management Tools | |
| VS 2017 15.3 | C++17 | P0084R2 | Emplace Return Type | |
| VS 2017 15.3 | C++17 | P0152R1 | atomic::is_always_lock_free | |
| VS 2017 15.3 | C++17 | P0154R1 | hardware_destructive_interference_size, etc. | |
| VS 2017 15.3 | C++17 | P0156R2 | scoped_lock | |
| VS 2017 15.3 | C++17 | P0253R1 | Fixing Searcher Return Types | |
| VS 2017 15.3 | C++17 | P0258R2 | has_unique_object_representations | [obj_rep] |
| VS 2017 15.3 | C++17 | P0295R0 | gcd(), lcm() | |
| VS 2017 15.3 | C++17 | P0298R3 | std::byte | [byte] |
| VS 2017 15.3 | C++17 | P0403R1 | UDLs For <string_view> (“meow”sv, etc.) | |
| VS 2017 15.3 | C++17 | P0418R2 | atomic compare_exchange memory_order Requirements | [14] |
| VS 2017 15.3 | C++17 | P0435R1 | Overhauling common_type | [14] |
| VS 2017 15.3 | C++17 | P0505R0 | constexpr For <chrono> (Again) | |
| VS 2017 15.3 | C++17 | P0513R0 | Poisoning hash | [14] |
| VS 2017 15.3 | C++17 | P0516R0 | Marking shared_future Copying As noexcept | [14] |
| VS 2017 15.3 | C++17 | P0517R0 | Constructing future_error From future_errc | [14] |
| VS 2017 15.3 | C++17 | P0548R1 | Tweaking common_type And duration | [14] |
| VS 2017 15.3 | C++17 | P0558R1 | Resolving atomic<T> Named Base Class Inconsistencies | [atomic] [14] |
| VS 2017 15.3 | C++17 | P0599R1 | noexcept hash | [14] |
| VS 2017 15.3 | C++17 | P0604R0 | invoke_result, is_invocable, is_nothrow_invocable | [depr] |
| VS 2017 | C++17 | … | <algorithm> sample() | |
| VS 2017 | C++17 | … | <any> | |
| VS 2017 | C++17 | … | <optional> | |
| VS 2017 | C++17 | … | <string_view> | |
| VS 2017 | C++17 | … | <tuple> apply() | |
| VS 2017 | C++17 | P0032R3 | Homogeneous Interface For variant/any/optional | |
| VS 2017 | C++17 | P0077R2 | is_callable, is_nothrow_callable | |
| VS 2017 | C++17 | P0088R3 | <variant> | |
| VS 2017 | C++17 | P0163R0 | shared_ptr::weak_type | |
| VS 2017 | C++17 | P0209R2 | make_from_tuple() | |
| VS 2017 | C++17 | P0254R2 | Integrating string_view And std::string | |
| VS 2017 | C++17 | P0307R2 | Making Optional Greater Equal Again | |
| VS 2017 | C++17 | P0393R3 | Making Variant Greater Equal | |
| VS 2017 | C++17 | P0504R0 | Revisiting in_place_t/in_place_type_t<T>/in_place_index_t<I> | |
| VS 2017 | C++17 | P0510R0 | Rejecting variants Of Nothing, Arrays, References, And Incomplete Types | |
| VS 2015.3 | C++17 | P0025R1 | clamp() | |
| VS 2015.3 | C++17 | P0185R1 | is_swappable, is_nothrow_swappable | |
| VS 2015.3 | C++17 | P0272R1 | Non-const basic_string::data() | |
| VS 2015.2 | C++17 | N4387 | Improving pair And tuple | [14] |
| VS 2015.2 | C++17 | N4508 | shared_mutex (Untimed) | [14] |
| VS 2015.2 | C++17 | P0004R1 | Removing Deprecated Iostreams Aliases | [rem] |
| VS 2015.2 | C++17 | P0006R0 | Variable Templates For Type Traits (is_same_v, etc.) | [14] |
| VS 2015.2 | C++17 | P0007R1 | as_const() | [14] |
| VS 2015.2 | C++17 | P0013R1 | Logical Operator Type Traits (conjunction, etc.) | [14] |
| VS 2015.2 | C++17 | P0074R0 | owner_less<> | [14] |
| VS 2015.2 | C++17 | P0092R1 | <chrono> floor(), ceil(), round(), abs() | [14] |
| VS 2015.2 | C++17 | P0156R0 | Variadic lock_guard | [14] |
| VS 2015 | C++17 | N3911 | void_t | [14] |
| VS 2015 | C++17 | N4089 | Safe Conversions In unique_ptr<T[]> | [14] |
| VS 2015 | C++17 | N4169 | invoke() | [14] |
| VS 2015 | C++17 | N4190 | Removing auto_ptr, random_shuffle(), And Old <functional> Stuff | [rem] |
| VS 2015 | C++17 | N4258 | noexcept Cleanups | [14] |
| VS 2015 | C++17 | N4259 | uncaught_exceptions() | [14] |
| VS 2015 | C++17 | N4277 | Trivially Copyable reference_wrapper | [14] |
| VS 2015 | C++17 | N4279 | insert_or_assign()/try_emplace() For map/unordered_map | [14] |
| VS 2015 | C++17 | N4280 | size(), empty(), data() | [14] |
| VS 2015 | C++17 | N4366 | Precisely Constraining unique_ptr Assignment | [14] |
| VS 2015 | C++17 | N4389 | bool_constant | [14] |
| VS 2015 | C++17 | P0063R3 | C11 Standard Library | [C11] [14] |
| VS 2013 | C++17 | N4510 | Supporting Incomplete Types In vector/list/forward_list | [14] |

For clarity, the Library Fundamentals V1 paper has been decomposed into its individual features, marked by “…” here.

To give you a better idea of our status, unimplemented papers are marked “missing” for primary features, or “patch” for papers that merely fixed parts of a primary feature. We implement them together, so the large number of “patch” rows doesn’t really indicate a large amount of missing work.

[DR] These papers were voted into the Working Paper after C++17, but as Defect Reports, meaning that they retroactively apply to C++17 (as bugfixes).

[parallel] The Parallel Algorithms are being gradually implemented. Some are available, but we’re still working on them.

[rem] Feature removals are activated by /std:c++17 (or /std:c++latest), with opt-out macros. The macros are _HAS_AUTO_PTR_ETC, _HAS_FUNCTION_ALLOCATOR_SUPPORT, _HAS_OLD_IOSTREAMS_MEMBERS, and _HAS_UNEXPECTED.
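
For example, opting back in to a removed feature is a single project-wide define (a sketch; the macro must be defined before any STL header is included, e.g. in the project’s preprocessor settings):

```cpp
// Restore auto_ptr and friends while compiling with /std:c++17.
#define _HAS_AUTO_PTR_ETC 1
#include <memory>

std::auto_ptr<int> p; // compiles again, though the feature remains
                      // removed from Standard C++17
```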

[14] These C++17 features are implemented unconditionally, even in /std:c++14 mode (the default). For some features, this was because they predated the introduction of MSVC’s Standard mode options. For other features, conditional implementation would be nearly pointless or undesirably complicated.

[depr] VS 2017 15.x (the second toolset update) will warn about usage of all STL features that were deprecated in C++17 (with the exception of the <stdio.h> family of C headers). /std:c++14 will not emit these warnings, but /std:c++17 (and /std:c++latest) will. The warning messages will be highly detailed, and will mention both the coarse-grained and fine-grained escape hatch macros. In particular, note that while invoke_result was implemented in 15.3, result_of deprecation will appear in 15.x.

[obj_rep] has_unique_object_representations is powered by a compiler intrinsic. Although this has been implemented in EDG (powering Intellisense), we haven’t activated it for that compiler yet. Also, the intrinsic is not yet available in Clang at all.

[byte] std::byte is enabled by /std:c++17 (and /std:c++latest), but has a fine-grained opt-out macro (_HAS_STD_BYTE can be defined to be 0). This is because given certain patterns of using-directives, it can conflict with the Windows SDK’s headers. This has been reported to the SDK team and will be fixed, but in the meantime the escape hatch is available.

[atomic] This is almost completely implemented in VS 2017 15.3, and the remaining differences are difficult to observe (some signatures differ from the Standard, as observed by taking their address or providing explicit template arguments). The STL’s next major binary-incompatible version will fix the remaining differences.

[C11] First available in VS 2015, the Universal CRT implemented the parts of the C11 Standard Library that are required by C++17, with minor exceptions. Those exceptions (which are tracked by bugs) are: missing C99 strftime() E/O alternative conversion specifiers, missing C11 fopen() exclusive mode, and missing C11 aligned_alloc(). The strftime() and fopen() functionality will be implemented in the future. aligned_alloc() will probably never be implemented, as C11 specified it in a way that’s incompatible with our implementation (namely, that free() must be able to handle highly aligned allocations).

For clarity, this table has omitted a number of papers that are Not Applicable (nothing for implementers to do, or users to take advantage of), such as wording clarifications.

Finally, note that this table contains one change relative to VS 2017 15.3 Preview 2 – we implemented P0604R0 “invoke_result, is_invocable, is_nothrow_invocable”, permanently renaming P0077R2 “is_callable, is_nothrow_callable”.

Compiler Feature Status:

| C++03/11 Core Language Features | Status | Paper | Notes |
| --- | --- | --- | --- |
| [Everything else] | VS 2015 | | [A] |
| Two-phase name lookup | Partial | | [B] |
| Expression SFINAE | Partial | N2634 | [C] |
| C99 preprocessor | Partial | N1653 | [D] |
| Extended integer types | N/A | N1988 | [E] |

| C++14 Core Language Features | Status | Paper | Notes |
| --- | --- | --- | --- |
| Tweaked wording for contextual conversions | VS 2013 | N3323 | |
| Binary literals | VS 2015 | N3472 | |
| auto and decltype(auto) return types | VS 2015 | N3638 | |
| init-captures | VS 2015 | N3648 | |
| Generic lambdas | VS 2015 | N3649 | |
| [[deprecated]] attribute | VS 2015 | N3760 | |
| Sized deallocation | VS 2015 | N3778 | |
| Digit separators | VS 2015 | N3781 | |
| Variable templates | VS 2015.2 | N3651 | |
| Extended constexpr | VS 2017 | N3652 | |
| NSDMIs for aggregates | VS 2017 | N3653 | |
| Avoiding/fusing allocations | N/A | N3664 | [F] |

| C++17 Core Language Features | Status | Paper | Notes |
| --- | --- | --- | --- |
| Removing trigraphs | VS 2010 | N4086 | [14] |
| New rules for auto with braced-init-lists | VS 2015 | N3922 | [14] |
| typename in template template-parameters | VS 2015 | N4051 | [14] |
| Attributes for namespaces and enumerators | VS 2015 | N4266 | [14] |
| u8 character literals | VS 2015 | N4267 | [14] |
| Nested namespace definitions | VS 2015.3 | N4230 | |
| Terse static_assert | VS 2017 | N3928 | |
| Generalized range-based for-loops | VS 2017 | P0184R0 | [14] |
| [[fallthrough]] attribute | VS 2017 | P0188R1 | |
| Removing the register keyword | VS 2017 15.3 | P0001R1 | |
| Removing operator++ for bool | VS 2017 15.3 | P0002R1 | |
| Capturing *this by value | VS 2017 15.3 | P0018R3 | |
| Using attribute namespaces without repetition | VS 2017 15.3 | P0028R4 | |
| __has_include | VS 2017 15.3 | P0061R1 | [14] |
| Direct-list-init of fixed enums from integers | VS 2017 15.3 | P0138R2 | |
| constexpr lambdas | VS 2017 15.3 | P0170R1 | |
| [[nodiscard]] attribute | VS 2017 15.3 | P0189R1 | |
| [[maybe_unused]] attribute | VS 2017 15.3 | P0212R1 | |
| Structured bindings | VS 2017 15.3 | P0217R3 | |
| constexpr if-statements | VS 2017 15.3 | P0292R2 | [G] |
| Selection statements with initializers | VS 2017 15.3 | P0305R1 | |
| Hexfloat literals | VS 2017 15.x | P0245R1 | |
| Matching template template-parameters to compatible arguments | VS 2017 15.x | P0522R0 | |
| Fixing qualification conversions | No | N4261 | |
| Allowing more non-type template args | No | N4268 | |
| Fold expressions | No | N4295 | |
| Removing dynamic-exception-specifications | No | P0003R5 | |
| Adding noexcept to the type system | No | P0012R1 | |
| Extended aggregate initialization | No | P0017R1 | |
| Over-aligned dynamic memory allocation | No | P0035R4 | |
| Removing some empty unary folds | No | P0036R0 | |
| Template argument deduction for class templates | No | P0091R3 and P0512R0 | |
| Declaring non-type template parameters with auto | No | P0127R2 | |
| Guaranteed copy elision | No | P0135R1 | [H] |
| Rewording inheriting constructors | No | P0136R1 | |
| Refining expression evaluation order | No | P0145R3 and P0400R0 | |
| Pack expansions in using-declarations | No | P0195R2 | |
| Ignoring unrecognized attributes | No | P0283R2 | |
| Inline variables | No | P0386R2 | |
| Fixing class template argument deduction for initializer-list ctors | No | P0702R1 | [DR] |

| C++20 Core Language Features | Status | Paper | Notes |
| --- | --- | --- | --- |
| Adding __VA_OPT__ for comma omission and comma deletion | No | P0306R4 | |
| Designated initialization | No | P0329R4 | |
| Allowing lambda-capture [=, this] | No | P0409R2 | |
| Familiar template syntax for generic lambdas | No | P0428R2 | |
| Default member initializers for bit-fields | No | P0683R1 | |
| Fixing const lvalue ref-qualified pointers to members | No | P0704R1 | |
| Concepts | No | P0734R0 | |

[A] While dynamic exception specifications remain unimplemented, they were mostly removed in C++17 by P0003R5. One vestige remains in C++17, where throw() is deprecated and required to behave as a synonym for noexcept(true). MSVC doesn’t implement that behavior for throw() (it is still treated as a synonym for __declspec(nothrow)), but you can simply avoid throw() and use noexcept instead.

[B] Two-phase name lookup is partially implemented in VS 2017 15.3, and a detailed blog post will be available very soon.

[C] Expression SFINAE is partially implemented in VS 2017 15.3. While many scenarios work (and it has been sufficiently solid for the STL’s purposes for quite a while), some parts are still missing and some workarounds are still required.

[D] Support for C99’s preprocessor rules is unchanged (considered partial due to support for variadic macros, although there are numerous bugs). The preprocessor will be overhauled as part of finishing C++17.

[E] Extended Integer Types are marked as Not Applicable because implementations are permitted, but not required, to provide such types. Like GCC and Clang, MSVC has chosen to not provide extended integer types.

[F] Similarly, the rules for avoiding/fusing allocations are marked as Not Applicable because this is an optimization that is permitted, but not required. We currently have no plans to implement this (as reports indicate that it isn’t an especially valuable optimization).

[14] Unconditionally available, even in /std:c++14 mode.

[G] “if constexpr” is supported in /std:c++14 with a warning that can be suppressed, delighting template metaprogramming library authors everywhere.
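
For readers who haven’t met it yet, “if constexpr” discards the untaken branch at compile time, which is what lets it replace piles of tag-dispatch overloads; a small example of ours:

```cpp
#include <type_traits>

// Only the branch matching T is instantiated; the other branch may
// contain code that would be ill-formed for that T.
template <class T>
auto value_of(T t) {
    if constexpr (std::is_pointer_v<T>) {
        return *t; // instantiated only when T is a pointer
    } else {
        return t;
    }
}
```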

[H] Unfortunately, while Guaranteed Copy Elision was implemented in preview builds of VS 2017 15.3, it had to be reverted due to bugs that were discovered. These bugs will be fixed before the feature is restored.

[DR] Like the STL, the Core Language also had a paper that was voted in as a Defect Report, retroactively applying to C++17. Time is no obstacle to the C++ Standardization Committee.

Reporting Bugs

Please let us know what you think about VS 2017 15.3. You can use the IDE’s Report A Problem to report bugs. For compiler and library bugs, it’s important to provide self-contained test cases.

Billy Robert O’Neal III @MalwareMinigun
bion@microsoft.com

Casey Carter @CoderCasey
cacarter@microsoft.com

Stephan T. Lavavej @StephanTLavavej
stl@microsoft.com

Steve Wishnousky stwish@microsoft.com

Ask HN: What free resources did you use to learn how to program ML/AI?


This doesn't actually answer the question, but I always think that people who want to study neural nets should read Marvin Minsky and Seymour Papert's Perceptrons. It's an academic work. It's short. It's incredibly well written and easy to understand. It shaped the history of neural net research for decades (err... stopped it, unfortunately :-) ). You should be able to find it at any university library.

Although this recommendation doesn't really fit the requirements of the poster, I think it is easy to reach first for modern, repackaged explanations and ignore the scientific literature. I think there is a great danger in that. Sometimes I think people are a bit scared to look at primary sources, so this is a great place to start if you are curious.


Firstly, while I think it's beneficial to learn multiple languages (python, R, matlab, julia), I'd suggest picking one to avoid overwhelming yourself and freaking out. I'd suggest python because there are great tools and lots of learning resources out there, plus most of the cutting edge neural networks action is in python.

Then for overall curriculum, I'd suggest:

1. Start with basic machine learning (not neural networks). In particular, read through the scikit-learn docs and watch a few tutorials on YouTube. Spend some time getting familiar with Jupyter notebooks and pandas, and tackle some real-world problems (Kaggle is great, or Google around for datasets that excite you). Make sure you can solve regression, classification and clustering problems, and understand how to measure the accuracy of your solution (understand things like precision, recall, MSE, overfitting, train/test/validation splits).

2. Once you're comfortable with traditional machine learning, get stuck into neural networks by doing the fast.ai course. It's seriously good and will give you confidence in building near cutting-edge solutions to problems.

3. Pick a specific problem area and watch a stanford course on it (e.g. cs231n for computer vision or cs224n for NLP)

4. Start reading papers. I recommend Mendeley to keep notes and organize them. The stanford courses will mention papers. Read those papers and the papers they cite.

5. Start trying out your own ideas and implementations.

While you do the above, supplement with:

* Talking Machines and O'Reilly Data Show podcasts

* Follow people like Richard Socher, Andrej Karpathy and other top researchers on Twitter

Good luck and enjoy!


For those who like videos, I would highly recommend utilizing Andrew Ng's Coursera ML videos for step one. I found his lectures to be good high level overviews of those topics.

The course in general lacks rigor, but I thought it was a very good first step.


Online courses recommended in this thread are great resources to get your feet wet. If you want to actually be able to build ML powered applications, or contribute to an MLE team, we've written a blog post which is a distillation of conversations with over 50 top teams (big and small) in the Bay Area. Hope you find it helpful!

https://blog.insightdatascience.com/preparing-for-the-transi...

Disclaimer: I work for Insight


I'm with the others on this. Never mind the cringe - he's all show, so much so I think he's bluffing (doesn't know ML). He amps up on "character" so much you're excited for the knowledge drop - when it comes, it's so fast and technical there's nothing to gain from it. The adage "if you can't explain something simply you don't understand it" applies. I was hoping he understood ML enough to boil things down; instead he spews equations and jargon so fast (1) you don't catch it, (2) I think he's just reading from a source. He doesn't go for essence, he goes for speed - and that's not helpful.

Again, the cringe isn't the problem directly; but that it's a cover for his bluff. The result is a not-newbie-friendly resource.


I just checked out the "About" section of his Youtube channel.

> I've been called Bill Nye of Computer Science, Kanye of Code, Beyonce of Neural Networks, Osain Bolt of Learning, Chuck Norris of Python, Jesus Christ of Machine Learning; but it's the other way. They are the Siraj Raval of X

I mean, seriously?


I've personally found him to be more of a "showman" and a youtube "star" rather than someone technically adept with data sciences. He is good at what he does - which is building cool things using cool tools/api.

But I wouldn't recommend him as a good resource to learn core ML from or figure out how stuff work internally.

Can protein startups and their investors take on Big Cow?


For many of us, our first experience with fake meat involves rubbery tofu that tastes more like sneaker sole than seared filet. As we forage on, next come the veggie burgers, the soy dogs, the meatless meatballs, the caramel-brown vacuum-sealed lumps called field roasts.

Eventually, we grow accustomed to these chewy, protein-dense, vaguely meat-like foodstuffs. And yet, the dream lives on: What if fake meat tasted and satiated like the real deal?

These days, startups are developing products that more closely resemble animal proteins. Venture capitalists and strategic investors are piling on, too, collectively putting hundreds of millions of dollars to work in companies developing meatless foods offering high protein and, in some cases, a meaty taste.

Over the past two years, Crunchbase has identified about $250 million in disclosed investments in what we call the alternative protein space. Actual investment levels may be quite a bit higher as strategic investors don’t always reveal round size.

While progress has been made, there’s work to be done. Heading to the Marina Umami Burger in San Francisco on a weekend afternoon, Crunchbase News ordered an Impossible Burger, a veggie patty heralded for its beefiness, and a regular burger made the same way. We then cut them in half and had each variety in sequence.

Without question, the Impossible Burger was edible. It was not lumpy; it was not an old shoe. But it was also very much not a burger. It remains firmly a substitute, not a replacement.

Luckily, there’s still plenty of capital sloshing around the fake meat venture ecosystem to fuel further innovation. Crunching the numbers, we’ve identified a few noteworthy trends for the alternative protein space. Here’s a quick overview of the key investment themes.

Investors chomp down on more late-stage rounds

It’s just as well there is no alternative protein unicorn. It just seems wrong to give a fake-meat company an animal moniker, even if it is a mythical one.

That said, there are alternative protein companies that have raised substantial sums of venture capital. This month, for instance, Impossible Foods, maker of the aforementioned burger, closed a $75 million Series E round that brings total funding for the six-year-old Silicon Valley company to more than $250 million. Backers include Bill Gates, Google and Temasek. So if anyone’s going to be the first soy-based unicorn, it’s probably Impossible.

The next most heavily funded company, Hampton Creek, has had its share of troubles delivering on its plan to produce veggie-based foods, in particular eggless versions of traditionally egg-reliant products like mayo. The San Francisco company has raised more than $200 million, including a $60 million round a year ago, and has its products on the shelves of major food retailers. But it has been in the news recently for internal problems, including the resignations a few weeks ago of four out of five board members.

Beyond Meat is also bulking up as it distributes more veggie burgers and mock chicken strips to grocery chains. The eight-year-old company has only disclosed $17 million of its total investments, but the actual total is much higher since the last several funding rounds have been of undisclosed size. Most recently, Beyond closed a Series F led by Tyson Foods and General Mills. The interest of Tyson, America’s largest beef producer, indicates that “big meat” sees promise, and potential competition, in the space.

Burgers and the art of “artificial meatener”

Much of the innovation in the fake-meat sector is centered around figuring out whether vegetables can be mixed up or engineered to taste more like meat. Just as the soda industry spent decades optimizing non-caloric sweeteners, alternative protein entrepreneurs are racing to perfect the 100 percent vegetarian “artificial meatener.”

Impossible’s quest began in 2011, with a five-year research project devoted to what creates the unique sensory experience of meat, and how to recreate it with plants. The closest thing Impossible has to an artificial meatener is an ingredient called “heme,” which is abundant in animal muscle and contributes to the characteristic color and taste of meat. The startup figured out how to take heme from soy roots and produce it using fermentation.

Beyond Meat, meanwhile, relies on pea protein for its meatless burger products. The startup uses “a proprietary system that applies heating, cooling, and pressure to align plant-proteins in the same fibrous structures that you’d find in animal proteins.” Beyond also adds yeast extract for flavoring, as it contains amino acids, including glutamic acid, that add something resembling a savory meat taste to its faux beef and chicken.

Another approach, not targeted to those on a solely plant-based diet, relies on producing meat from animal cells, eliminating the need to raise livestock. Memphis Meats (actually based in San Francisco) has raised $3 million to pursue this goal. The startup isn’t selling products yet, but it has unveiled some of what it’s cooked up in labs, including a meatball, chicken and duck. Hampton Creek has also announced plans to deliver lab-made meat as early as next year.

Just add some protein

Not all the alternative protein investments are around mimicking meat. There’s also high consumer demand for healthy, convenient sources of protein, whether it’s in the form of pasta, shakes, chips or even water.

And these days, it seems like everybody wants more protein. While dieters cut carbs and slash fat intake, protein generally gets a pass, seen as a source of “good calories” that promote satiety and sustained energy levels. And although the pros and cons of high-protein diets are a topic of continued debate, there’s broad consensus about the value of high-protein foods in a balanced diet. Startups are seeking to address the cross-section of consumers who want the protein, but prefer to limit or avoid consumption of animal products.

Most recently, Detroit-based Banza raised $8 million to scale up production of chickpea-based pastas that offer a few grams more protein per serving than wheat-based varieties. At least three protein and meal-replacement beverage providers, Koia, Protein2o and Soylent, also closed multi-million-dollar rounds in the past three months. Soylent alone has raised more than $70 million to date to sell more beverages fortified with soy protein.

Eat bugs

Lastly, we look at bugs. Many bugs, grasshoppers in particular, are high in protein. They’re also commonly eaten in many parts of the world, and potentially marketable as a feed source for livestock. And after looking at round counts, there are actually quite a few recent deals involving bug farmers, bug protein products and marketers of said products.

The Crunchbase database contains at least five companies in the insect protein space that have secured funding. The largest funding recipient is Exo Protein Bars, which makes snack bars containing cricket flour. The most recent funding recipient, meanwhile, is Hargol FoodTech, an Israeli startup that raised $600,000 this summer to build what it claims will be the world’s first commercial grasshopper farm.

Given that there’s not a lot of venture capital directed at this space, we’ll have to wait and see if it develops into something bigger.

A meaty conclusion

Anyone who thinks today’s alternative protein products sound too weird to satiate mainstream appetites should consider how drastically eating habits have changed over the last few generations. Some of the most popular meals of the 1900s apparently included a number of dishes, like chicken pudding and liver loaf, that would repel modern palates.

Fake-meat companies can get acquired for lots of money, too. Just ask Quorn Foods, whose private equity backers sold the company to Philippines food conglomerate Monde Nissin in 2015 for $830 million.

Quorn describes its veggie protein recipe as “taking a natural nutritious fungus from the soil and fermenting it to produce a dough called Mycoprotein.” Doesn’t sound too yummy, but apparently, the process produces tasty mock ground beef and chicken cutlets.

Image credit to Alex, master of burger shots.


The accidental invention of the Illuminati conspiracy


It’s the conspiracy theory to dwarf all conspiracy theories. A smorgasbord of every other intrigue under the sun, the Illuminati are the supposed overlords controlling the world’s affairs, operating secretly as they seek to establish a New World Order.

But this far-fetched paranoia all started with a playful work of fiction in the 1960s. What does this tell us about our readiness to believe what we read and hear – and what can the Illuminati myth reveal about the fake news and stories we continue to be influenced by today?


When most people try to look into the secret society’s history, they find themselves in Germany with the Enlightenment-era Order of the Illuminati. It was a Bavarian secret society, founded in 1776, for intellectuals to privately group together and oppose the religious and elitist influence over daily life. It included several well-known progressives at the time but, along with the Freemasons, they found themselves gradually outlawed by conservative and Christian critics and the group faded out of existence.

That is, until the 1960s. The Illuminati that we’ve come to hear about today is hardly influenced by the Bavarians at all, as I learned from author and broadcaster David Bramwell, a man who has dedicated himself to documenting the origins of the myth. Instead, an era of counter-culture mania, LSD and interest in Eastern philosophy is largely responsible for the group’s (totally unsubstantiated) modern incarnation. It all began somewhere amid the Summer of Love and the hippie phenomenon, when a small, printed text emerged: Principia Discordia. 

The book was, in a nutshell, a parody text for a parody faith – Discordianism – conjured up by enthusiastic anarchists and thinkers to bid its readers to worship Eris, goddess of chaos. The Discordian movement was ultimately a collective that wished to cause civil disobedience, practical jokes and hoaxes.

The text itself never amounted to anything more than a counter-culture curiosity, but one of the tenets of the faith – that such miscreant activities could bring about social change and force individuals to question the parameters of reality – was immortalised by one writer, Robert Anton Wilson.


According to Bramwell, Wilson and one of the authors of the Principia Discordia, Kerry Thornley, “decided that the world was becoming too authoritarian, too tight, too closed, too controlled”. They wanted to bring chaos back into society to shake things up, and “the way to do that was to spread disinformation. To disseminate misinformation through all portals – through counter culture, through the mainstream media, through whatever means. And they decided they would do that initially by telling stories about the Illuminati.”

At the time, Wilson worked for the men’s magazine Playboy. He and Thornley started sending in fake letters from readers talking about this secret, elite organisation called the Illuminati. Then they would send in more letters – to contradict the letters they had just written.

“So, the concept behind this was that if you give enough contrary points of view on a story, in theory – idealistically – the population at large start looking at these things and think, ‘hang on a minute’,” says Bramwell.  “They ask themselves, ‘Can I trust how the information is presented to me?’ It’s an idealistic means of getting people to wake up to the suggested realities that they inhabit – which of course didn’t happen quite in the way they were hoping.”

The chaos of the Illuminati myth did indeed travel far and wide – Wilson and another Playboy writer wrote The Illuminatus! Trilogy which attributed the ‘cover-ups’ of our times – such as who shot John F Kennedy – to the Illuminati. The books became such a surprise cult success that they were made into a stage play in Liverpool, launching the careers of British actors Bill Nighy and Jim Broadbent.


British electronic band The KLF also called themselves The Justified Ancients of Mu Mu, named after the band of Discordians that infiltrate the Illuminati in Wilson’s trilogy as they were inspired by the religion’s anarchic ideology. Then, an Illuminati role-playing card game appeared in 1975 which imprinted its mystical world of secret societies onto a whole generation.

Today, it’s one of the world’s most widely punted conspiracy theories; even celebrities like Jay-Z and Beyoncé have taken on the symbolism of the group themselves, raising their hands into the Illuminati triangle at concerts. It’s hardly instigated the mind-blowing epiphany – the realisation that it’s all fake – which the proponents of Discordianism had originally intended.

The 60s culture of mini-publishers and zines seems terrifically distant now from today’s globalised, hyper-connected internet, and it has undeniably been the internet’s propensity to share and propagate Illuminati rumours on websites like 4chan and Reddit that has brought the idea the fame it has today.

But we live in a world that is full of conspiracy theories and, more importantly, conspiracy theory believers; in 2015, political scientists discovered that about half of the general public in the USA endorse at least one conspiracy theory. These include anything from the Illuminati to the Obama ‘birther’ conspiracy, or the widely held belief that 9/11 was an inside job carried out by US intelligence services.

“There’s no one profile of a conspiracy theorist,” says Viren Swami, professor of social psychology at Anglia Ruskin University. “There are different perspectives of why people believe in these theories, and they’re not necessarily mutually exclusive – so the simplest form of explanation is that people who believe in conspiracy theories are suffering from some sort of psychopathology.”

Another conclusion researchers have drawn is that these theories could provide rational ways of understanding events that are confusing or threatening to self-esteem. “They give you a very simple explanation,” adds Swami, who published research in 2016 that found believers in conspiracy theories are more likely to be suffering from stressful experiences than non-believers. Other psychologists also discovered last year that people with higher levels of education are less likely to believe in conspiracy theories.


The picture that this paints of modern America is a dark one, especially for Swami who has seen a change in who normally promotes conspiracy material. “Particularly in South Asia, conspiracy theories have been a mechanism for the government to control the people. In the West, it’s typically been the opposite; they’ve been the subject of people who lack agency, who lack power, and it’s their lacking of power that gives rise to conspiracy theories to challenge the government. Like with 9/11. If people lack power, conspiracy theories can sow the seeds of social protest and allow people to ask questions.

“The big change now is that politicians, particularly Donald Trump, are starting to use conspiracies to mobilise support.”

The 45th President of the United States was a notorious “birther”, regularly speaking to the media about how President Obama wasn’t really born in Hawaii. He also accused various US states of voter fraud after the 2016 election, and his campaign team were responsible for propagating now-debunked fabricated stories such as Pizzagate and the Bowling Green Massacre.

I asked Swami if he thought that this shift in conspiracy theory usage could affect politics long term. “People could become disengaged with mainstream politics if they believe in conspiracy theories,” said Swami. “They’re much more likely to engage with fringe politics. They’re also much more likely to engage with racist, xenophobic and extremist views.”

The idea of an untouchable, secretive elite must resonate with people that feel left behind and powerless; Trump said he wanted to represent these people, especially the once-powerful industrial landscape of America’s Rust Belt. Yet instead of feeling better represented in the halls of power by a non-politician like themselves – and theoretically being less likely to feel powerless and vulnerable to conspiracies – it seems like some in America are more likely to believe in stories like the Illuminati more than ever before.

“If Wilson was alive today, he’d be part delighted, part shocked”, says David Bramwell. “As far as they thought in the 60s, culture was a little too tight. At present, it feels like things are loose. They’re unravelling.

“Perhaps more stability will come as people fight against ‘fake news’ and propaganda. We’re starting to understand how social media is feeding us ideas we want to believe. Echo chambers.”

Between internet forums, nods in popular culture and humankind’s generally uninhibited capacity for imagination, today’s truth-finders and fact checkers might debunk the Illuminati myth for good.


The Next Moon Landing Is Near, Thanks to Pioneering Engineers


Synergy Moon Technician Erik Reedy ponders rocket design at Interorbital Systems (IOS), backer of this international team competing for the Google Lunar XPrize. The $20 million prize will go to the first privately funded group to land a craft that travels 500 meters on the moon and beams images and video back to Earth—a small step toward potentially giant economic rewards beckoning from the moon, and beyond.

This story appears in the August 2017 issue of National Geographic magazine.


The youthful Indian engineers took their seats, a bit nervously, in a makeshift conference room inside a cavernous former car-battery warehouse in Bangalore. Arrayed in front of them were several much older men and women, many of them gray-haired luminaries of India’s robust space program. The first Asian space agency to send an orbiter to Mars, it also nearly tripled a previous world record by launching 104 satellites into orbit in a single mission this past February. The object of everyone’s attention was a small rolling device barely the size of a microwave oven.

The members of the young crew explained their plans to blast the device into space aboard a rocket late this year, position it into lunar orbit nearly a quarter million miles away, guide it to a landing on the moon, and send it roaming across the harsh lunar landscape. The engineers of TeamIndus said their company would do all of this on a shoestring budget, probably $65 million, give or take, the vast majority of it raised from private investors.

A prominent Mumbai investor, Ashish Kacholia, who has put more than a million dollars into the firm, sat at the back of the room, transfixed by the discussion. It somehow combined the intense, rapid-fire questions of a doctoral thesis defense with the freewheeling, everybody’s-shouting, laughter-punctuated atmosphere of the Lok Sabha, India’s boisterous lower house of parliament. Kacholia hardly needed to be here all day to check up on this particular investment of his—far from his largest—but he stayed just to hear the erudite dialogue on selenocentric (moon-centered) orbit projections, force modeling, apogee and perigee, and the basis for how “the kids” drew up the error covariance matrix.

TeamIndus, India A concept mocked up in foam for a video echoes a prototype of the rover ECA, now ready for testing in a Bangalore lab.

“It’s thrilling, really,” Kacholia explained. “You’ve got these 25-, 28-year-olds up there defending their calculations, all their work, in front of a thousand years of the nation’s collective aerospace experience and wisdom.” His friend S. K. Jain, also a well-known Indian investor, nodded in vigorous agreement. “These kids are firing up the whole imagination of India,” he commented. “They’re saying to everyone, Nothing is impossible.”

Nearly 50 years after the culmination of the first major race to the moon, in which the United States and the Soviet Union spent fantastic amounts of public money in a bid to land the first humans on the lunar surface, an intriguing new race to our nearest neighbor in space is unfolding—this one largely involving private capital and dramatically lower costs. The most immediate reward, the $20 million Google Lunar XPrize (or GLXP) will be awarded to one of five finalist teams from around the world. They’re the first ever privately funded teams to attempt landing a traveling vehicle on the moon that can transmit high-quality imagery back to Earth.

The competition is modeled explicitly after the great innovation-spurring prize races of the early years of aviation, most notably the Orteig Prize, which Charles Lindbergh won in 1927 when he flew the Spirit of St. Louis nonstop from New York to Paris.

Like the quest for the Orteig Prize, the competition for the Lunar XPrize involves national prestige. Teams from Israel, Japan, and the U.S., plus one multinational group, are vying for the honor along with India; a cavalcade of other nations participated on the 16 teams that survived into the semifinal stage last year.

Almost as diverse as their countries of origin is the range of approaches and commercial partnerships involved in solving the three basic problems at hand—launching from Earth, landing on the moon, and then going mobile to gather and transmit data. To meet the last challenge, three teams plan to deploy variants of a traditional rover, while the other two intend to use their landing craft to make one giant leap for private enterprise: They will “hop” the required minimum of 500 meters on the moon rather than drive across the lunar surface.

As with many early aviation prizes, whichever team prevails almost surely will spend much more to win the prize than it gets back in prize money, though all the teams hope the global publicity and “brand enhancement” of victory will eventually make their investment pay off handsomely.

At its core, this new sprint to space poses a question that would have been laughable in the Cold War era of the 1960s, when the U.S. was willing to spend more than 4 percent of its federal budget to beat its superpower foe to the moon: Can someone actually make money venturing out into the great beyond? To a demonstrably wide range of entrepreneurs, scientists, visionaries, evangelists, dreamers, eccentrics, and possible crackpots involved in the burgeoning space industry, the answer is an enthusiastic yes.

President John F. Kennedy famously urged America in 1962 to “choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” Today Bob Richards, founder and CEO of Moon Express, the American team, offers a different, if consciously cheeky, rationale. “We choose to go to the moon,” he says, “because it is profitable!”

Whether Richards is correct about that, and if so, just when it might prove true, is wildly unclear. Setbacks are the norm in the space business, and realistically, many companies will make their early money mainly from government contracts, not private customers. Nonetheless, Richards predicts that the world’s first trillionaire will be a space entrepreneur, perhaps one who mines the lunar soil for helium-3, a gas that’s rare on Earth but plentiful on the moon and an excellent potential fuel source for nuclear fusion—a holy grail of energy technology that scientists have been trying to master for decades. Or a huge fortune may be minted from the asteroids and other near-Earth objects, where robotic technology could help mine vast amounts of gold, silver, platinum, titanium, and other prized elements bound up in them.

“There are $20 trillion checks up there, just waiting to be cashed!” says Peter Diamandis, a physician and engineer who is co-founder of Planetary Resources, a company backed by Avatar director James Cameron and several tech billionaires. Planetary Resources also acquired the company Asterank in 2013. Asterank’s website offers scientific data and projects the economic value of mining more than 600,000 asteroids.

Diamandis is also founder and executive chairman of the XPrize Foundation, which has sponsored several other award competitions designed to push the boundaries of invention and technology in fields as diverse as artificial intelligence, mathematics, energy, and global health. The whole thrust of the Lunar XPrize competition, says Chanda Gonzales-Mowrer, a senior director at the foundation, is to help pave the way to “a new era of affordable access to the moon and beyond.”

SpaceIL, Israel Wearing her official space suit costume at team headquarters in Tel Aviv, Yuval Klinger, 7, is enthusiastically tracking the Israeli organization’s progress—and contemplating whether spacefaring may be a part of her future career plans. She is far from alone in her interest. “We wanted all kids in Israel to be heads-up about this,” says SpaceIL’s leader, Eran Privman. “We want these kids to be able to explain to their parents what’s going on.”

Just as the worldwide acclaim for Lindbergh’s bravura feat sparked huge interest in civil aviation, the lunar competition is intended to fire public imagination about private space pioneers, who already are ferrying cargo to the International Space Station and deploying satellites, orbital rocketry, and test modules. Soon the crafts may be carrying passengers: Virgin Galactic, which billionaire founder Richard Branson calls “the world’s first commercial spaceline,” says it’s gearing up to take passengers on brief space tours in which they will experience weightlessness and awe-inspiring views of Earth. SpaceX founder Elon Musk announced in February that his company would fly two as yet unnamed private citizens around the moon in late 2018 aboard its Dragon spacecraft. Two months later Amazon founder Jeff Bezos said he’d be selling a billion dollars in stock a year to fund Blue Origin, his own commercial and space tourism enterprise.

There are plenty of reasons to be skeptical about how soon these firms will actually be carrying private customers to space; after all, a 2014 crash of Virgin Galactic’s prototype passenger spacecraft set that company’s effort back by several years. And while the Lunar XPrize competition appears to be coming to a head, there are plenty of obstacles to contend with: the possibility of a missed deadline, failure of prelaunch rocket tests, to name just two. Plus, the impact of the race on the public imagination could well prove limited. For one thing it simply lacks the human drama and suspense of the 1969 moon landing and safe return of men to Earth, a feat that began an era of human exploration on the lunar surface that wound up lasting a mere three years. Unmanned lunar rovers have been around for decades now: When China landed Yutu in 2013, it became the third nation to put a rover on the moon.

So, really, then: What’s the big deal?

“What’s new is that the cost of getting to space is dropping, and it is doing so dramatically,” explains John Thornton, the chief executive at Astrobotic, a Pittsburgh-based firm whose aim is to “make the moon accessible to the world” with logistical services that involve carrying everything from experiments for universities to MoonMail for customers who just want to leave a tiny something on the lunar surface—a note, a photo, a lock of hair from a deceased loved one.

“A company like ours can do the math and show investors that we really do have a feasible plan to make money,” Thornton says. “Not many years ago, that would have been science fiction.”

If the race to put a man on the moon was the equivalent of building one of those giant, room-size, prodigiously expensive mainframe computers in the early days of high technology, today’s race is analogous to a different era of computing: the race to put an affordable computer on everyone’s desktop or, a few years later, in everyone’s telephone. Today computers are so tiny—and the batteries that power them so compact—that we can reach the moon with increasingly smaller and decreasingly expensive devices. Rather than golf cart–size rovers on the moon, the next generation of machines exploring, mapping, and even mining the lunar landscape may well be the size of a child’s Tonka truck. More than anything else, that’s the driving factor behind today’s space economy.

“Think micro-rovers and miniature CubeSats,” says William L. “Red” Whittaker, legendary roboticist at Carnegie Mellon University and a pioneer in both rover and self-driving automobile technology. “It’s astonishing what’s going on. Small is the next big thing. Very small.”

The physics of human spaceflight remain more complex—we are growing neither smaller nor more compact, so it still takes plenty of fuel to get us up there—but these advances could herald a smaller, nimbler, cheaper way to get people back on the moon and far beyond.

In fact, some in the space industry say the moon may one day be less the object of our journey than a sort of giant Atlanta airport that we’ll have to go through on our way to somewhere else, where both the engineering and the economics of blasting off from a place with only one-sixth the gravity of Earth will make a lunar hub the ideal way station in exploring the universe.

Water, now locked in the form of ice at the lunar poles, would be both lifeblood and fuel source: water to drink, water to irrigate crops, and water to be split into oxygen and hydrogen, the former for us to breathe and the latter to power our spacecraft beyond this lunar base. Again, whether that will prove true, and if so, when, is unknowable. But what is known now is that the first destination of the emerging space industry is obvious: the moon.

Team Hakuto, Japan Kyoko Yonezawa reflects on the team’s progress as the launch deadline draws ever nearer. The plan is for Sorato, the Japanese rover, to hitch a ride to the moon aboard TeamIndus’s rocket and lander—and wait for the rovers to fight to the finish on the lunar surface. National pride and the optimism of youth have made the quest for the XPrize a huge story in Japan. Team leader Takeshi Hakamada says: “We’re not in this just to win, although that would be nice.”

To witness a test mission of Team Hakuto—Japan’s entry in the Lunar XPrize competition—I traveled last September to a remote, windswept region of western Japan known as the Tottori Sand Dunes. For days, ferocious and very un-moonlike rain whipping off the Sea of Japan pelted the coast, ruling out proper conditions for testing a lunar rover. In a nearby youth hostel, team leader Takeshi Hakamada and his colleagues were getting restless. Dressed in spiffy gray jackets with a rabbit logo (Hakuto is a mythological white rabbit in Japanese folktales) and tossing back energy drinks, they kept fine-tuning software that carefully mimicked the communications delay of 2.5 seconds between Earth and the moon, nearly a quarter million miles away.

Then abruptly one evening the skies cleared and stars emerged. Amid a crackle of walkie-talkies, Hakamada’s team carted an impressive array of laptops, tablets, and sensors through a wooded clearing and out onto the dunes. Then came—literally with white-glove treatment—a pair of roving robots designed to work mostly in tandem when they’re on the moon, but partly independently, which is where Hakamada’s profitmaking idea comes in.

Team Hakuto’s entry features a four-wheel rover—dubbed Sorato by the crew, after a song by a Japanese alternative rock band—which in future missions beyond XPrize will be tethered to a separate, two-wheel tilting robot. Both units are made largely of very lightweight, strong, carbon fiber components. Hakamada, a thin, thoughtful man with a mop of unruly hair, who has been a space geek since he saw his first Star Wars movie as an elementary school student, said the smaller robot can be lowered deep into fissures, lava tubes, and caves. It will gather vital data on such spots, which could serve an essential function one day as temporary habitats for future lunar bases, shielding arriving humans for a period of time while more permanent digs are constructed.

The Tokyo-based company Hakamada runs, iSpace, plans to leverage Japanese advances in technology miniaturization to probe, photograph, map, and model the moon in much higher detail than can be seen in the photos and soil-testing results from earlier lunar rover missions.

“We are not in this just to win a prize, although that would be nice,” Hakamada told me shortly before the test run. “We are in this to demonstrate to the world that we have a viable technology that can produce important information that people will be willing to pay for.”

With wheels that each look a bit like an old-fashioned waterwheel, the main rover reached a “drop point” on the dunes, a stand-in for the harsh lunar surface. It’s hitching a late December launch with the Indian Space Research Organisation, the government agency whose rocket will be carrying TeamIndus’s lunar rover as well. (To win the XPrize, a team must be launched by December 31, 2017, but can complete its mission in early 2018.)

It was quiet out on the Tottori Dunes as the clock neared midnight, the roar of the sea muffled by the bluffs. Hakuto’s tiny rover looked a bit forlorn out on the sandy simulacrum (a simulation of the lunar surface). Hakamada and his crew coordinated a series of computer-entered commands through the lunar time lag, and suddenly the rover clicked to life, cutting cleanly through the sand, traveling just a few inches per second. It correctly sensed and navigated around several hazards placed in its path. This ability will be critical on the moon, where a large enough rock or ditch could scuttle a whole mission.

Team Hakuto, Japan Members of the Japanese media assemble on the remote Tottori Sand Dunes to see Sorato undergo field tests. They look on as Hakamada carries the rover to a sandy test bed that simulates the moon’s surface. “We want to demonstrate to the world that we have a viable technology,” he says.

Team Hakuto, Japan Sorato sits in a Tokyo clean room.

“The rover did great,” Hakamada said later, beaming like a proud new father. In fact, he explained, his confidence in its performance was no longer his biggest challenge. “We believe that the biggest problem for space innovation now is really not technology itself but the entrepreneurship involved. To open new markets in space, you have to convince people this is for real—and thus defy all those old stereotypes about how only big government agencies can undertake this sort of exploration.

“That’s what’s great about this race,” he added. “Whoever wins will show it can be done.”

A few steps from the Atlantic Ocean, on a giant patch of Florida scrubland visited by alligators, sea turtles, and the occasional bobcat, Cape Canaveral’s Space Launch Complex (SLC) 17 appears at first glance to be a relic. From 1957 to 2011, the site was used for both Thor and Delta rocket launches, the former for the country’s first ballistic missiles, the latter for satellites and solar system probes and for closer observation of the sun itself.

On a pleasant March evening this year, the only sound at SLC-17 was a slight breeze from the sea whistling through the rusting towers of the complex. But behind a locked door in a former maintenance shed, the prototype vehicle belonging to the first U.S. company to receive government approval for a space mission beyond Earth orbit was ready to hit the beach—on its way, ultimately, to the moon.

To Bob Richards, once an assistant to famed astrophysicist Carl Sagan and now head of Moon Express, the beauty of the company’s MX-1E lander design is its dual-purpose utility. “There’s no need for a rover at all if your landing craft can provide the same function,” Richards told me. In fact, he added, the Google Lunar XPrize is too often misconstrued as a rover competition.

“The greatest challenge of the GLXP is to land on the moon,” he said. “Rovers can’t land on the moon themselves, and in fact the term ‘rover’ doesn’t appear in competition rules at all, just a requirement to accomplish mobility of at least 500 meters.”

Thus was born the idea of hopping to victory by bouncing along with the help of thrusters. After an initial rocket launch to low-Earth orbit, the MX-1E—a single-stage robotic spacecraft that is shaped and sized more than a bit like R2-D2 of Star Wars fame—will blast away using a super-high-test hydrogen peroxide as its main propellant to travel at bullet speed on course for its lunar goal. After establishing lunar orbit, Moon Express’s vehicle will eventually achieve what engineers euphemistically call a “soft landing”: Aided by reverse thrust, the vertical descent will nonetheless be violent enough to require cushioning by a flexible landing-leg system capable of absorbing the blow and springing back with enough life to take on the next stage of the mission. With a small amount of fuel remaining, the MX-1E will take off on a big hop—or, perhaps, a series of smaller hops—to travel the required distance to win the XPrize.

With his TED Talk–worthy profundities and an industry reputation (not always a positive one) for the gift of gab, Richards makes it all sound so brilliantly achievable that you’re tempted to invest. But there are arguments for holding on to your wallet—for one thing, Moon Express is currently slated for launch not with a proven carrier such as SpaceX, with its Falcon rocket lines, but instead with Rocket Lab, a U.S.-based company whose launch site at the Mahia Peninsula on the North Island of New Zealand opened this past September.

Testing is just beginning this year, meaning that the firm will be on a very aggressive timetable to achieve the XPrize’s stipulation of an actual launch by the end of the year. Previous milestone deadlines have been extended, but XPrize says it is committed to wrapping up the competition soon. Thus it could conceivably end with no winner, though a foundation official insists it “really, really wants someone to win.”

The other team aiming to hop the distance needed to win is based in a small complex of industrial buildings on the outskirts of Tel Aviv. Its leader is hardly less evangelistic than Richards.

“Our vision is to re-create an ‘Apollo effect’ here in Israel, to really inspire a rising generation of kids to excel in science and technology,” said Eran Privman, a national hero and the CEO of SpaceIL, whose eclectic résumé includes combat experience as a pilot in the Israeli Air Force; a doctorate in computer science and neuroscience from Tel Aviv University; and a range of research, development, and executive posts for several major technology companies in Israel. He was referring to the impact the Apollo space programs had on youth in the 1960s and ’70s, when the enterprise’s successful missions inspired many of the founders of today’s leading high-tech companies.

Roughly the size of a small refrigerator but more circular in shape—a bit like a flying saucer—SpaceIL’s lander is expected to weigh 1,323 pounds when it detaches from a SpaceX Falcon 9 rocket, though about two-thirds of that weight will be fuel used up by the time it is ready to land. With some residual spring action in its legs similar to the MX-1E’s, it will use the little fuel left to hop the nearly one-third of a mile set by the XPrize rules.

The Israeli effort began in late 2010 as “three crazy guys with not a lot of money but with the thought that it would be really cool to land a robot on the moon.” That’s how co-founder Yariv Bash described the beginning to me during a visit to the testing lab for the lander’s main computer. They struggled down to the wire to meet an initial competition deadline requiring them to show plans for a landing strategy and at least $50,000 in assets.

“We asked anybody we could for money,” Bash recalled. “It got to where I was asking my wife for money in my sleep.” While short on capital, the group was not short on know-how: Bash is an electronics and computer engineer who once headed R and D efforts for Israeli intelligence forces. (“You know Q in the James Bond movies?” Bash asked me with a wink. “It was a bit like that.”)

Their initial designs were far smaller—one as small as a two-liter soda bottle—than the lander they are assembling with parts from around the world this summer. And rather than a for-profit enterprise, SpaceIL has wound up as the only nonprofit in the remaining field of XPrize competitors, with generous funding from two well-known billionaires, technology entrepreneur Morris Kahn and casino magnate Sheldon Adelson. Its mission now is essentially twofold—to win the prize, of course, but also to educate and inspire a new generation of potential tech leaders in a country often referred to as Start-up Nation.

As in India, national pride is clearly on the line here. Virtually every school in Israel now has a teaching unit about the SpaceIL effort, and schoolkids will be closely following the mission once it blasts off for the moon, hoping theirs will become the first country ever to send a privately funded mission to explore the lunar surface.

“We wanted all kids in Israel to be heads-up about this,” said Privman, adding with a laugh: “We want these kids to be able to explain to their parents what’s going on.”

Enough with the hopping already. Hakuto, TeamIndus, and a California-based international consortium known as Synergy Moon all plan to use a separate, wheeled rover to gather data, which points up an arguable loophole in the rules: Hakuto could win by subcontracting out both launch and landing, only needing to deploy its Sorato rover to achieve victory. Gonzales-Mowrer, the XPrize race director, says that would be just fine: “We wanted teams to come up with various approaches to accomplishing the mission,” she explains. From a financing point of view, the main threshold is simply that competitors must show XPrize judges that at least 90 percent of their money comes from nongovernment sources.

“It’s been fun to watch the teams network with each other and with outside providers to drive down the cost,” she said. “In that sense, the ultimate goal of this competition has already been achieved.”

TeamIndus, India With ECA at rest, engineer Lakshman Murthy takes a break. The hundred-plus members of the team hope for dividends far greater than prize money. “There are superbright kids out there in the cities and in the remote parts of the nation,” says Sheelika Ravishankar (nicknamed “Jedi Master” by the team). “We need them to know anything is possible. We need to reach them.”

If there is to be a giant Walmart—or perhaps an Ikea—for spacefaring ventures someday, then Interorbital Systems, the primary company behind the Synergy Moon consortium, is determined to fill that role. It aims to be “the lowest cost launch provider in the commercial space industry,” says its co-founder and CEO, Randa Relich Milliron. To do this, she explains, it will build rockets in modular, standardized units; use off-the-shelf components wherever possible, including industrial irrigation tubes and microcontrollers; and experiment with lower-cost propellants such as turpentine.

In her office at the Mojave Air & Space Port in the California desert, a hundred miles or so north of downtown Los Angeles, Milliron pointed with pride to the company brochure, which offers a do-it-yourself TubeSat Personal Satellite Kit for around $16,000, a price that “Includes Free Launch!” and could drop to $8,000 for high school or college students. Customers will assemble the tube (there is also a more expensive CubeSat available) and outfit it with whatever small additional gear they can fit, such as a camera for tracking migratory animals from orbit or sensors that can monitor weather conditions. The company plans to launch the personal satellites into orbit 192 miles above the Earth, a sufficient height to allow them to operate from three weeks to two months, depending on solar activity, after which the devices will burn up safely on reentering the atmosphere.

Milliron and her husband, Roderick, have been working on and off for more than 20 years to get the company—and its rockets—off the ground. It’s safe to say that several remaining and former competitors in the GLXP race admire their pluck but doubt their chances. Even if they reach the moon with one of their DIY rockets, their plan to use a customized “throwbot” as their roving device on the moon has also raised eyebrows. (Throwbots, throwable robots, are frequently used by the military, police, and firefighters to provide video “eyes” in a location too dangerous to enter, such as a terrorist hideout, a suspected meth lab, or a burning building.)

Even so, the couple and a small crew of employees press on in their warehouse set amid the large, military-issue sheds and Quonset huts that make up the spaceport side of the dusty desert complex—the other side of the runway is a giant “boneyard,” where commercial airliners such as old Boeing 747s and DC-10s have come to die, parked for good and waiting to be cut up for scrap.

The Millirons say their initial launches will be from a barge at an ocean site off the California coast. With a humble budget they decline to quantify publicly, but with grand dreams they describe expansively, it is hard to know exactly what to make of them or of the Synergy Moon entry in the space race, which their firm essentially anchors. The team does have a verified launch contract, although it appears to be essentially with itself, since it’s the only entrant in the race planning to do all the things needed to win—launching, landing, roving, and transmitting—on its own.

“Sometimes we feel like renegades or outcasts, building these rockets by ourselves,” said Randa Milliron on a tour of Interorbital’s workshop. “But that’s the whole point, really. We are disrupters. We are out to show the world this can all be done at truly radically lower costs.”

From this Mojave Desert outpost to the Atlantic shore at Cape Canaveral, from the outskirts of Tel Aviv to the Japanese sand dunes and a Bangalore warehouse, all five teams are forging ahead on their respective missions. Each is driven to win—but each is also surprisingly friendly with its competitors. Over the past several years, even as the number of teams officially dwindled from 29 to 16 and down to the five remaining at the time of writing, one of them has hosted an annual summit meeting for everyone else, as well as XPrize Foundation officials, with each leader offering a frank presentation on successes and setbacks to date. Alliances have formed, such as an agreement between TeamIndus and Hakuto to share a ride on the Indian space agency’s rocket and the Indus lander, essentially duking it out once they reach the moon. An industry is being born.

“There’s really a ‘Yes We Can’ theme going on here,” says Rahul Narayan, the charismatic leader of the 112 members working for TeamIndus. “This is the time. How it will all evolve, exactly, I don’t know. I’m not sure anyone knows. But this is the time.”

Journalist Sam Howe Verhovek is based in Seattle and is the author of Jet Age: The Comet, the 707, and the Race to Shrink the World. Vincent Fournier is a French artist and photographer living in Paris. In this issue, they both make their first appearance in National Geographic magazine.

Stanford scientist searches for answer to his son’s Chronic Fatigue Syndrome


As a renowned Stanford scientist, Ron Davis has a deep appreciation for the power of modern medicine.

And yet an explanation for the disease afflicting his own beloved son eludes him.

His son Whitney, 33, suffers from such severe Chronic Fatigue Syndrome that he is bedridden, unable to eat or speak. The handsome man was once a photographer and adventurer. He traveled through the United States, studied Buddhism in India and Nepal, lived in an Ecuadorian rainforest and ran a campaign office for former president Barack Obama. Now he’s returned home to Palo Alto for 24-hour care.

So his father has set out to find the reason behind his mysterious condition — believed to affect 2 million Americans — convinced that science has an answer, and that knowledge will lead to a cure. He is also giving new hope to others.

“To have people like Dr. Davis who are studying it and looking for answers — it is huge,” said Lorene Irizary of Sonoma, sick for 22 years, a former Sonoma County official who is now a patient at Stanford. She arrived in a wheelchair. “I’ve tried so hard to find answers. To be here with the researchers and the doctors and see it all together – it is really amazing.”

Whitney Dafoe, the son of Ron Davis, PhD, director of Stanford’s Chronic Fatigue Syndrome Research Center. Before he sickened a decade ago, Dafoe, now 33, was a photographer and an adventurer. He traveled to all 50 states, studied Buddhism in India and Nepal, lived in an Ecuadorian rainforest and ran a campaign office for former President Barack Obama. (Photo courtesy of Ashley Haugen)

On Saturday, at a Stanford symposium organized by Davis, patients and top scientists gathered to share their insights into the condition, also called myalgic encephalomyelitis or ME/CFS. About 300 people attended the conference and another 1,000 watched it online.

“We’re trying to get to the heart of what is really going on,” said Davis, PhD, director of Stanford’s Genome Technology Center and director of Stanford’s Chronic Fatigue Syndrome Research Center. “I think we are making good progress. There is a lot of new data that helps us focus in.”

This is what we know, so far: When people encounter a major stress in life, such as an infection, environmental toxin, trauma or physical shock, the body hits the “pause” button — briefly. This is normal; it’s nature’s way of keeping us alive during times of trial.

Most of us bounce back. While we all feel exhausted after a setback — say, the flu or mononucleosis — we eventually recover.

But some don’t. Their debilitation persists.

“Basically it’s a shutdown of the body,” said Davis. “It doesn’t reactivate.”

“If you have mono or influenza, once the viral infection is gone you don’t feel very well. You feel totally exhausted and miserable. That is what these patients feel all the time — year after year after year.”

At the symposium, scientists agreed that multiple studies have failed to find a single underlying anomaly for the disease. And patients with similar onsets have different long-term outcomes.

But biological patterns are emerging. Scientists are studying the unique chemical fingerprints that specific cellular processes and life’s molecular machinery leave behind – and see clues.

Immune system T cells are using significantly less of their respiratory capacity, they reported. They’ve seen metabolites that are consistent with enhanced inflammation and reduced recovery. There also are disturbances in the body’s pathways for fat, lipid, sugar and energy metabolism.

One scientist reported that seven underlying genetic “cluster issues” have been identified, perhaps predisposing patients to illness.

“What is the mechanism for the body’s shutdown? What is the mechanism for reactivation? We don’t know,” said Davis.

For Whitney, the illness followed a bad case of mononucleosis, then a spell of headaches and dizziness after a college trip to Jamaica. He sickened while in India, then later caught a severe cold when back home — and never returned to health.

Ron Davis, director of Stanford’s Chronic Fatigue Syndrome Research Center, shares a gesture of “I love you,” with his sick son Whitney. (Photo courtesy of Ashley Haugen)

Davis’s tireless search — studying journals, assembling experts, calling National Institutes of Health Director Dr. Francis Collins and urging more funding — has not yet revealed a certain culprit.

It’s a frustration for Davis, 76, who is also a professor of biochemistry and of genetics, with a degree from Caltech, postdoctoral training at Harvard, and 30 biotechnology patents to his name. Named one of the world’s greatest innovators by The Atlantic magazine, he developed a technique that led to the discovery of RNA splicing and also helped find a way to join DNA fragments.

But he knows his efforts are opening a door to a new frontier of research into a disease that was once thought merely psychological — where scientists can look beyond the vast spectrum of ailments.

After the conference, Davis would end his day like all others: going home, opening the door to Whitney’s dark room, and starting an intravenous solution to provide food and hydration.

“I was hoping some doctors would know what was going on with Whitney and knew how to treat it. It took a long time to realize that no one knows how to treat it.”

“I think we can figure this out and cure it.  I am optimistic,” he said. “People recover. It is not something that is intractable.”


To learn more about Stanford’s Chronic Fatigue Syndrome Research Center, go here: http://med.stanford.edu/sgtc/donation.htm

To learn more about the Community Symposium on the Molecular Basis of ME/CFS, and to order a DVD of proceedings, go to www.omf.ngo and https://www.omf.ngo/community-symposium/

Hype or Not? Some Perspective on OpenAI’s DotA 2 Bot


See the Hacker News Discussion for additional context.

When I read today’s news about OpenAI’s DotA 2 bot beating human players at The International, an eSports tournament with a prize pool of over $24M, I was jumping with excitement. For one, I am a big eSports fan. I have never played DotA 2, but I regularly watch other eSports competitions on Twitch and even played semi-professionally when I was in high school. But more importantly, multiplayer online battle arena (MOBA) games like DotA and real-time strategy (RTS) games like Starcraft 2 are seen as being way beyond the capabilities of current Artificial Intelligence techniques. These games require long-term strategic decision making, multiplayer cooperation, and have significantly more complex state and action spaces than Chess, Go, or Atari, all of which have been “solved” by AI techniques over the past decades. DeepMind has been working on Starcraft 2 for a while and just recently released their research environment. So far no researchers have managed to make significant breakthroughs. It is thought that we are at least 1-2 years away from beating good human players at Starcraft 2.

That’s why the OpenAI news came as such a shock. How can this be true? Have there been recent breakthroughs that I wasn’t aware of? As I started looking more into what exactly the DotA 2 bot was doing, how it was trained, and what game environment it was in, I came to the conclusion that it’s an impressive achievement, but not the AI breakthrough the press would like you to believe it is. That’s what this post is about. I would like to offer a sober explanation of what’s actually new. There is a real danger of overhyping Artificial Intelligence progress, nicely captured by the misleading tweets that circulated after the match.

Let me start out by saying that none of the hype or incorrect assumptions is the fault of OpenAI researchers. OpenAI has traditionally been very straightforward and explicit about the limitations of their research contributions. I am sure it will be the same in this case. OpenAI has not yet published technical details of their solution, so it is easy for people not in the field to jump to wrong conclusions.

Let’s start out by looking at how difficult the problem that the DotA 2 bot is solving actually is. How does it compare to something like AlphaGo?

  • 1v1 is not comparable to 5v5. In a typical game of DotA 2, a team of 5 plays against another team of 5 players. These games require high-level strategy, team communication and coordination, and typically take around 45 minutes. 1v1 games are much more restricted. Two players basically move down a single lane and try to kill each other. It’s typically over in a few minutes. Beating an opponent in 1v1 requires mechanical skill and short-term tactics, but none of the things, like long-term planning or coordination, that are challenging for current AI techniques. In fact, the number of useful actions you can take is less than in a game of Go. The effective state space (the player’s idea of what’s currently going on in the game), if represented in a smart way, should be smaller than in Go as well.
  • Bots have access to more information: The OpenAI bot was (most likely) built on top of the game’s bot API, giving it access to all kinds of information humans do not have access to. Even if OpenAI researchers restricted access to certain kinds of information, the bot still has access to more exact information than humans. For example, a skill may only hit an opponent within a certain range and a human player must look at the screen and estimate the current distance to the opponent. That takes practice. The bot knows the exact distance and can make an immediate decision to use the skill or not. Having access to all kinds of exact numerical information is a big advantage. In fact, during the game, one could see the bot executing skills at the maximum distance several times.
  • Reaction Times: Bots can react instantly; humans can’t. Coupled with the information advantage above, this is another big advantage. For example, once the opponent is out of range for a specific skill, a bot can immediately cancel it. (A toy illustration of these two advantages follows this list.)
  • Learning to play a single specific character: There are 100 different characters with different innate abilities and strengths. The only character the bot learns to play, Shadow Fiend, generally does immediate attacks (as opposed to more complex skills lasting over a period of time) and benefits from knowing exact distances and having fast reaction times – exactly what a bot is good at.
  • Hard-coded restrictions: The bot was not trained from scratch knowing nothing about the game. Item choices were hardcoded, and so were certain techniques, such as creep block, that were deemed necessary to win. It seems like what was learned is mostly the interaction with the opponent.
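
As a toy illustration of the information and reaction-time points above (the one promised in the list), the sketch below pits an agent that reads exact distances from a hypothetical API against a “human” acting on a noisy visual estimate with a reaction delay. Every number is made up, and none of this is OpenAI’s code; it only shows why exact state information plus instant reactions is such an edge.

import random

SKILL_RANGE = 700.0   # hypothetical maximum cast range, in game units
HUMAN_NOISE = 50.0    # std-dev of the human's visual distance estimate
HUMAN_DELAY = 5       # human reaction delay, in game ticks

def first_cast_tick(distances, noise=0.0, delay=0):
    # The player casts on the first tick its distance *estimate* says the
    # target is in range; the action then lands `delay` ticks later.
    for tick, true_distance in enumerate(distances):
        estimate = true_distance + random.gauss(0.0, noise)
        if estimate <= SKILL_RANGE:
            return tick + delay
    return None

# A target closing from 1,000 units away at 10 units per tick.
trajectory = [1000.0 - 10.0 * t for t in range(100)]

print("bot fires on tick:  ", first_cast_tick(trajectory))  # exact and instant
print("human fires on tick:", first_cast_tick(trajectory, HUMAN_NOISE, HUMAN_DELAY))

The simulated human either fires late or wastes the skill on a bad early estimate; the exact-information player times the cast at maximum range every time, which matches what spectators reported seeing from the bot.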

Given that 1v1 is mostly a game of mechanical skill, it is not surprising that a bot beats human players. And given the severely restricted environment, the artificially restricted set of possible actions, and that there was little to no need for long-term planning or coordination, I come to the conclusion that this problem was actually significantly easier than beating a human champion in the game of Go. We did not make sudden progress in AI because our algorithms are so smart – it worked because our researchers are smart about setting up the problem in just the right way to work around the limitations of current techniques. The training time for the bot, said to be around 2 weeks, suggests the same. AlphaGo required several months of highly distributed large-scale training on Google’s GPU clusters. We’ve made some progress since then, but not something that reduces computational requirements by an order of magnitude.

Now, enough with the criticism. The work may be a little overhyped by the press, but there are in fact some extremely cool and surprising things about it. And clearly, a large amount of challenging engineering work and partnership building must have gone into making this happen.

  • Trained entirely through self-play: The bot does not need any training data. It does not learn from human demonstrations either. It starts out completely random and keeps playing against itself. (A toy sketch of naive self-play follows this list.) While this technique is nothing new, it is surprising (at least to me) that the bot learns techniques that human players are also known to use, as suggested by comments (here and here). I don’t know enough about DotA 2 to judge this, but I think it’s extremely cool. There may be other techniques the bot has learned but humans are not even aware of. This is similar to what we’ve seen with AlphaGo, where human players started to learn from its unintuitive moves and adjusted their own game play. (Update: It has been confirmed that certain techniques were hardcoded, so it is unclear what exactly is learned)
  • A major step for AI + eSports: Having challenging environments, such as DotA 2 and Starcraft 2, to test new AI techniques on is extremely important. If we can convince the eSports community and game publishers that we can provide value by applying AI techniques to games, we can expect a lot of support in return, and this may result in much faster AI progress.
  • Partially Observable environments: While the details of how OpenAI researchers handled this with the API are unclear, a human player only sees what’s on the screen and may have a restricted view, e.g. when looking uphill. This means that, unlike with games like Go, Chess, or Atari (and more like Poker), we are in a partially observable environment – we don’t have access to full information about the current game state. Such problems are typically much harder to solve and an active area of research where progress is sorely needed. That being said, it is unclear how much partial observability in a 1v1 DotA 2 match really matters – there isn’t too much to strategize about.
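
To make the self-play point concrete (the sketch referenced in the list above), here is a minimal toy under loud assumptions: OpenAI has not published training details, so nothing below reflects their system. A single shared value table learns the game of Nim purely by playing against itself, starting from completely random play, which is the essence of the technique; every number here is an illustrative choice.

import random
from collections import defaultdict

# Nim: players alternate taking 1-3 stones; whoever takes the last stone wins.
PILE, MOVES = 15, (1, 2, 3)
Q = defaultdict(float)   # shared value table: Q[(stones_left, action)]
ALPHA, EPS = 0.1, 0.2    # learning rate and exploration rate

def choose(stones):
    legal = [a for a in MOVES if a <= stones]
    if random.random() < EPS:                          # explore
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])    # exploit

for episode in range(50_000):
    stones, history = PILE, []
    while stones > 0:    # both "players" use, and improve, the same table
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the final move won. Credit moves backwards,
    # flipping the sign between the alternating players.
    ret = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (ret - Q[(state, action)])
        ret = -ret

EPS = 0.0  # play greedily after training
# The learned policy should leave the opponent a multiple of 4 stones,
# the known optimal strategy for this game.
print([choose(s) for s in range(1, 8)])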

Above all, I’m very excited to read OpenAI’s technical report of what actually went into building this.

Thanks to @smerity for useful feedback, suggestions, and DotA knowledge.

The extraordinary story of Britain’s efforts to finance the First World War


Michael Anson, Norma Cohen, Alastair Owens and Daniel Todman

Financing World War I required the UK government to borrow the equivalent of a full year’s GDP. But its first effort to raise capital in the bond market was a spectacular failure. The 1914 War Loan raised less than a third of its £350m target and attracted only a very narrow set of investors. This failure and its subsequent cover-up have only recently come to light, following research analysing the Bank’s ledgers. It reveals that the shortfall was secretly plugged by the Bank, with funds registered individually under the names of the Chief Cashier and his deputy to hide their true origin. Keynes, one of a handful of officials in the know at the time, described the concealment as “a masterly manipulation”.

The emergence of capital markets as a war front

War was an expensive business. Between 1913/14 and 1918/19, government spending rose more than 12-fold to £2.37bn, almost entirely attributable to military expenditures (Morgan, 1952).  While tax revenue did quadruple over the same period, war debt was required to finance the remainder.  As a result, UK government debt increased from around 25% of GDP to 125% in four short years, requiring bond issuance and debt build-up on a pace unlike anything seen before (or since) in peacetime.

Figure 1: UK national debt

These unprecedented costs meant that war posed a fiscal as well as a military challenge for governments. Capital raising was not ancillary to Britain’s war strategy; as the wealthiest economy by far among the Entente, and the financial centre of its day, Britain put finance at the heart of that strategy. As articulated by Lloyd George in 1914 (Daunton, 2002; French, 1986), Britain’s plan was to use its commercial and military naval forces to ensure a blockade of the Central Powers, to provide a limited army to support French troops on the Continent, and to raise the capital to provide arms and supplies for its allies.

Although the government ostensibly appealed to investors’ sense of duty, it also offered an attractive yield on the bonds. For this first wartime foray into the bond market, the government offered 4.1%, well above the 2.5% payable on other government debt at the time. Unlike almost all existing government debt, which took the form of perpetual consols, the war bonds were loans whose principal was to be repaid after 10 years.
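
The issue terms behind that 4.1% figure are not spelled out here, but a back-of-the-envelope check is possible under the terms usually cited for the November 1914 loan: the 3.5% coupon and 10-year repayment mentioned in this post, plus an issue price of £95 per £100 of stock (the £95 is an assumption, not a figure from the post).

# Rough yield-to-maturity check for the 1914 War Loan. The 3.5% coupon and
# 10-year repayment appear in the post; the £95 issue price is assumed.
coupon, price, par, years = 3.5, 95.0, 100.0, 10

# Standard approximation: annual coupon plus the amortised capital gain at
# redemption, divided by the average of price and par.
approx_ytm = (coupon + (par - price) / years) / ((par + price) / 2)
print(f"{approx_ytm:.1%}")  # -> 4.1%

Under those assumed terms, the approximation lands almost exactly on the quoted 4.1%, reconciling the headline yield with the 3½% coupon on the loan discussed below.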

As part of a project looking at the financing of World War I, the ledgers of investors who purchased the 3½% War Loan have been analysed for the first time. These reveal the startling truth about the failure of the first bond issue of the Great War and the extraordinary role of the Bank of England in covering and then concealing the shortfall in funds…

The disastrous first bond issue

Britain’s banks agreed initially to make firm offers for £60m of the new issue, representing up to 10 per cent of their deposit and credit accounts, while the Bank of England agreed to take up £39.4m. The remaining £250m was expected to be sold to the public.  However, demand was so weak that only £91.1m was purchased.

And what funds were raised came from a woefully small group of financiers, companies and private individuals. These included what would become known as Bass Brewers, along with several shipping companies that were among the businesses benefitting from surging wartime demand for their services (see images below).

Figure 2: Bond purchases by companies

The 1914 War Loan was sold in minimum lots of £100 to avoid drawing off deposits from Post Office Savings Banks, where rates were much lower; this significantly narrowed the pool of potential investors. Just 1.2m individuals earned the £160 per year, net of exemptions, required to pay income tax (Daunton, 2002). Ranald Michie, a historian of British investment, estimated that there were roughly 1m holders of tradable securities on the eve of World War I (Michie, 1999). The ledgers show that only 97,635 investors signed up to buy bonds, fewer than 10% of the pool of potential investors. Not only was the pool of wealth narrow and concentrated; it was heavily invested outside Britain, where higher returns were found. An estimated £4bn was invested abroad in 1914, and only a fraction of these funds was voluntarily repatriated to finance the war effort.

Of those who did purchase, the modal investment was the minimum £100, and half of all investments were for £200 or less. But a small fraction of investors, just 2% by number, accounted for over 40% of investment by value.

Figure 3: Average value of War Loan holdings by region

Figure 4: Percentage of households purchasing War Loans by region

These investors were disproportionately based in what was unquestionably the world’s financial capital, the City of London, suggesting the vulnerability of Britain’s economy as a whole to the exigencies of war.

The highest percentage of households buying War Loan was concentrated in London, with the second highest in the wealthy South East of England. However, in the rapidly industrialising West Midlands, a smaller percentage of households bought the bonds, but those households had deeper pockets; sums raised there were higher than in the South East.

The great cover up

The general public could be forgiven for believing that the first War Loan was an unbridled success, given the overwhelmingly positive coverage. The Financial Times, for example, reported on 23 November 1914, that the Loan had been over-subscribed by £250,000,000. “And still the applications are pouring in,” it gushed.

Figure 5: Financial Times cutting

Source: Financial Times, 23 November 1914

Reproduced by kind permission of the Financial Times

Disclosure of the failed fund raising would have been “disastrous” in the words of  John Osborne, a part-time secretary to Governor Montagu Norman, in a history of the war years written in 1926.  Copies of this account were only given to the Bank’s top three officials and it was decades before the full version emerged. Revealing the truth would doubtless have led to the collapse of all outstanding War Loan prices, endangering any future capital raising. Apart from the need to plug the funding shortfall, any failure would have been a propaganda coup for Germany.

So to cover its tracks, the Bank made advances to its chief cashier, Gordon Nairn, and his deputy, Ernest Harvey, who then purchased the securities in their own names, with the bonds then held by the Bank of England on its balance sheet. To hide the fact that the Bank had been forced to step in, the bonds were classified as holdings of ‘Other Securities’ in the Bank of England’s balance sheet rather than as holdings of Government Securities (Wormell, 2000).

John Maynard Keynes, in a 1915 memo to Treasury Secretary John Bradbury (below), which was stamped ‘Secret’, praised the ‘masterly manipulation’ of the Bank’s balance sheet as a means of hiding what would have been a damning admission of failure. But he also warned that Bank financing of the war should not be allowed to go on, and that funds must be found elsewhere.

Figure 6: Keynes memo

Source: The National Archive (T170/72)

The longer term consequences

Faced with the possibility of catastrophic defeat, Britain threw overboard its centuries-long embrace of free market principles in several areas. It demonstrated a previously unseen willingness to interfere in private ownership of industry and property. It demanded that industries produce required goods, imposed rent freezes on private property, rationed imports and ultimately confiscated its own citizens’ foreign securities (Archive: 8A240/1).

Finance was no exception. Evidence from this sample of early investors in the War Loan illustrates that ‘sacrifice’ alone was not sufficient to attract the required funds. Subsequent war financings would pay investors an even higher premium, including the mammoth War Loan of 1917, which raised £2bn by offering a hefty return of 5.4%.

But even this wasn’t enough. In January 1915, the Treasury prohibited the issue of any new private securities without clearance, and UK investors were banned from buying most new securities (Morgan, 1952). As the war dragged on, and capital became increasingly crucial to the Allies, the net would tighten further. And this episode was to be the first of several instances during the war where the Bank used its own reserves to provide needed capital.

The long-held laissez-faire principles of the Liberal and Conservative parties were thus sacrificed to raise the capital upon which the War’s outcome depended. Later on this would become a source of controversy. As the war unfolded, ministers were pilloried for rewarding investors far too generously for surrendering capital which should have been sacrificed gladly as a matter of patriotic principle (Hirst and Allen, 1926). And during the 1920s, as debt service rose to nearly 40% of tax receipts, investors were cast as profiteers, idly collecting rents on War Loans while others toiled.

For the Bank of England and economic policymakers more generally, this early failure led to the realisation that managing the national debt was a complex and, in wartime, perhaps Herculean task. The episode marked an important step in the Bank’s transformation from private institution to central bank. A decade after the Armistice, the altered role of the Bank prompted the creation of a Parliamentary commission to examine its functions, ultimately setting it on a path to nationalisation (Sayers, 1975).

Mike Anson works in the Bank’s archive, Norma Cohen is a PhD student at Queen Mary University of London, Alastair Owens is Professor of Historical Geography at Queen Mary University of London, and Daniel Todman is a Senior Lecturer in History at Queen Mary University of London.

References

Bank of England Archive – file 8A240/1

Daunton, M. (2002) Just Taxes: The Politics of Taxation in Britain 1914-1979, Cambridge University Press, p. 40 and Table 2.2, p. 42

French, D. (1986) British Strategy and War Aims 1914-1916 (RLE First World War), Routledge

Hirst, F.W. and Allen, J.E. (1926) British War Budgets, Humphrey Milford, London, for the Carnegie Endowment for International Peace

Michie, R.C. (1999) The London Stock Exchange: A History, Oxford University Press, p. 71

Morgan, E.V. (1952) Studies in British Financial Policy 1914-25, Macmillan & Co, London, p. 104, Table 9

Sayers, R.S. (1975) The Bank of England 1891-1944, Vol. I, p. 66

Wormell, J. (2000) The Management of the Debt of the United Kingdom: 1900-1932, Routledge, London

Acknowledgements

We are grateful to the UK’s Arts and Humanities Research Council for supporting our research (Award Number AH/M006891/1).

If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.

Comments will only appear once approved by a moderator, and are only published where a full name is supplied.

Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.

Efficient embedded computing (2010) [pdf]

The Legion Lonely


On Thursday, July 13, 1995, a concentration of high pressure in the upper atmosphere above the Midwest forced massive amounts of hot air to the ground, pushing temperatures in Chicago as high as 41°C (106°F). In a Midwestern city not built for tropical heat, roads buckled, cars broke down in the street, and schools closed their doors. On Friday, three ComEd power transformers failed, leaving 49,000 people without electricity. In high-rise apartments with no air conditioning, temperatures hit 49°C (120°F) even with the windows open. The heat continued into Saturday. The human body can only take about 48 hours of uninterrupted heat like this before its defenses begin to shut down, and emergency rooms were so crowded they had to turn away heatstroke victims. Sunday was no better, and as the death toll rose—of dehydration, heat exhaustion, and renal failure—the morgues hit capacity, too, and bodies were stored in refrigerated meat-packing trucks. In all, 739 people died as a result of the heat wave.

In its aftermath, an inquiry found, unsurprisingly, that the majority of those who died were poor, old, and lived alone. More surprising was the gender imbalance: significantly more men died than women. This was especially strange considering that in Chicago in July of 1995, there were more old women who lived alone than old men.

What made these men more vulnerable than the women? It wasn’t physical circumstances. Both groups lived mostly in “single room occupancy” buildings, or SROs—apartments of one room in what used to be called flophouses. It was social circumstances. The phrase “No known relatives” appears repeatedly in police reports of the dead men’s homes. Letters of regret were found on floors and in backs of drawers: “I would like to see you if that’s possible, when you come to the city”; “It seems to me that our family should have gotten along.” The single rooms of the deceased are described as “roach infested” and “a complete mess,” indicating few or no visitors. The women, according to Eric Klinenberg, who wrote a book on the heat wave, had people who checked up on them and so kept them alive; the men did not. “When you have time please come visit me soon at my place,” read another letter, unsent.

What conditions lead to this kind of isolation? Why men?

*

Artie, 63, who lives in Beards Fork, West Virginia, population 200, has never married. He grew up in Beards Fork, but spent most of his life elsewhere. He moved back when he was 47 to take care of his sick mother, who died earlier this year. Now, after putting his own life on hold for sixteen years, he finds himself single, semi-retired, and without a close friend. “Life goes by really fast,” he said. Since his mom died, he’s found himself thinking, “Where in the hell did it go?” 

This is the kind of thing he used to talk about with his mother. Now that she’s gone, he doesn’t really open up to anyone. He has no close friends in the area, and he’s “felt a lot of depression over the past few years.”

Artie’s not an antisocial guy or a homebody. His career brought him into contact with hundreds of interesting people over the years; he lived in California for a decade, and before that he had a nine-year relationship. But back in his hometown, all the connections he made seem to have melted away. “I don’t really have any close friends, other than my family,” he said, “which is something different.” (A 2005 Australian study agreed: while close friendships increase your longevity by up to 22 percent, family relationships make no difference.)

Artie has a group of friends he met in his thirties and forties with whom he’s still in touch, mainly on Facebook, but those relationships are “not quite the same as the friendships I had when I was younger. Less deep. Less vulnerable. And I’m not even sure I want to [open up].” He’s somewhat close with a few of his former coworkers, but though they confide in him, he doesn’t feel like he can confide in them. “They’re younger,” he said. “They don’t understand my problems.” Despite being semi-retired, he still goes into the office every day and stays late, after everyone’s gone. “I’m reluctant to go home,” he said. “Nobody’s there.”

In many ways, Artie seems in danger of going down the path of those Chicago SRO-dwellers. But there’s an important difference between those men and Artie: what the former had in common was their social isolation—having few or no social connections. Artie’s problem, on the other hand, is one of loneliness—the feeling of being isolated, regardless of your social connectedness, usually due to having few or no confidants. 

Is this my future?  

At first glance it seems unlikely. I’m 34. I have what seems to me a pretty active social life. I’m integrated into my community and I go to arts events regularly. I’ve lived here in Toronto off and on since I was 18. I went to university here. I helped found an arts venue here. I know hundreds of people here, if not thousands. I have multiple jobs—college instructor, freelance writer, tutor. I have friends. Whatever path led to these lonely destinations, I want to believe, is not the path I’m on. When I die, my floor will be tidy, and my letters sent.

And yet, there’s something about their stories that seems eerily familiar. Slowly but surely, I feel my social world slipping away from me. All three of my jobs combined require me to be around other humans a total of about eight hours out of a week’s 168. The other 160, I’m mostly at home. It’s not unusual for me to go several days in a row with no social contact of any kind, and the longer I go without it, the scarier it feels. I become shy, paranoid that no one would want to hang out with me. Social slights metastasize in my brain. I start to avoid social functions, convinced I’ll walk into a wall of mysterious eye contact. I live close to many friends, but I hide from them when I see them in the street. I don’t think of myself as antisocial—I love people, love being around them, and have had so many good friendships—but it often feels like an uphill battle, and mystifyingly complex, to not slip back endlessly into this pit of despair.

The thing is, I wasn’t always like this. How did I get here? 

*

Friendship in adulthood is a challenge for a lot of people. On average, both men and women start to lose friends around age 25, and continue to lose friends steadily for the rest of our lives. As adults, we tend to work more, commit to more serious romantic relationships, and start families, all of which end up taking priority over buddy time. Even if, like me, at age 34, you don’t have full-time work, you’re not in a relationship, and you’re nowhere close to starting your own family, others’ adulting leaves you bereft.

Furthermore, young adults move around the country more than any other demographic, which severs our support networks—a phenomenon Robert Putnam calls the “re-potting” effect, referring to the injury a transplanted plant sustains losing its roots. People are changing jobs more than ever, which interrupts connections that in previous eras would have become decades long. And freelancing, which Forbes estimates 50 percent of the U.S. workforce will be doing in one way or another by 2020, deprives the worker of not only job security, but social stability. As a freelancer who’s had six different jobs in the past year alone and who lived in a dozen countries throughout my twenties, I fall squarely into the most vulnerable part of this Venn diagram. I try to compensate by keeping up with, like, four or five different friend-groups on social media—mostly Facebook, where I have 3,691 contacts—but I often find myself using social media more like a performance art video game than a way to facilitate friendships. And studies show that I’m closer to the norm than the exception. “Online social contacts with friends and family,” as one study put it, “were not an effective alternative for offline social interactions in reducing feelings of loneliness.”

And that’s what I am, I guess. Lonely. Sometimes excruciatingly so. Loneliness can be measured by psychometric scales like the UCLA Loneliness Scale (I scored 21 out of 40) or the De Jong Gierveld Loneliness Scale (I am high on emotional loneliness, low on social loneliness). For me, though, loneliness at its core is a stubborn, irrational certainty that no matter how well I know the people in my life—several of whom I consider close, some of whom I’ve known for decades—I am not, as the poem goes, involved in mankind. I still feel, in the bad moments, frantic with isolation, and become my 16-year-old self, desperate on the edge of my parents’ bathtub, mentally searching for a friend, having ruled out all the obvious candidates. I tried to summon a world in which Blake MacPhail, whose sister’s apartment I visited once two years earlier, could be considered my friend. That wasn’t the loneliest I ever felt, but it set the template, and I still feel that way more often than perhaps those who know me would suspect. Or maybe they would suspect it; maybe they feel it too; over the past few decades, as the structure of society has changed, loneliness has increased, and now affects almost half the population. Just last week, the American Psychological Association issued a press release advising that “many nations around the world now suggest we are facing a ‘loneliness epidemic.’”

And as if feeling lonely wasn’t bad enough, it also turns out that loneliness and isolation are shockingly bad for your health and wellbeing. The quality of your friendships is the largest predictor of your happiness. Social isolation weakens your immune system, raises your blood pressure, messes with your sleep, and can be as bad for you as smoking 15 cigarettes a day. According to the authors of a widely cited meta-analysis, loneliness on its own can increase your chances of an early death by 30 percent and “heightened risk for mortality from a lack of social relationships is greater than that from obesity.” And in practical terms, being in contact with nobody in an emergency, like the men in the Chicago heat wave, can kill you in an instant.

Unfortunately for me, like the majority of those Chicago dead, I belong to another, perhaps counterintuitive, at-risk category: I’m a man. All that freelancing and moving and adulthood stuff affects men and women alike, but, for a complex set of reasons, men face additional roadblocks to connection. On average, we have fewer confidants and are more socially isolated. Women do report being lonelier than men, and research says, statistically, they are—if they’re married and between the ages of 20 and 49. For all other demographics, though, men are in fact lonelier than women. On top of all that, there’s a consensus among researchers that, due to male reluctance to self-identify as having emotional problems, the ubiquity of men’s loneliness is probably being underestimated.

*

I have a photograph of my friend Tyler and myself snuggling on my parents’ cream carpet, in the sun, next to my sandy dog. It’s a sweet moment, but captures something bitter, too: this was probably the last time I touched a male friend in a way that wasn’t a handshake or a bro-y hug. We were, like, six.

One avenue into understanding men’s loneliness is to consider how children are socialized. Niobe Way, a lecturer at New York University who has worked with adolescent boys for over two decades, talked about how we are failing boys. “The social and emotional skills necessary for boys to thrive are just not being fostered,” she said in an interview. Indeed, when you look at the research, men do not start life as the stereotypes we become. Six-month-old boys are likely to “cry more than girls,” more likely to express joy at the sight of our mothers’ faces, and more likely to match our expressions to theirs. In general, before the age of four or five, research shows that boys are more emotive than girls.

The change begins around the time we start school: at that age—about five—boys become worse than girls at “changing our facial expressions to foster social relationships.” This is the beginning of a socialization process in “a culture that supports emotional development for girls and discourages it for boys,” according to Dan Kindlon and Michael Thompson. This begins to affect our friendships early—in a study in New Haven, Connecticut, boys aged 10-18 were significantly worse than girls at knowing who their friends were: “over a two-week period, the boys changed their nomination of who their best friend was more frequently than girls, and their nomination was less likely to be reciprocated.”

Still, there’ll never be better soil than school in which to grow friendships, and most boys do find good friends as children. Way, who summarized her findings in her book Deep Secrets, found that, up until early adolescence, boys are not shy about how much they love their friends. Way quotes one boy named Justin in his first year of high school: “[My best friend and I] love each other… That’s it… you have this thing that is deep, so deep, it’s within you, you can’t explain it. … I guess in life, sometimes two people can really, really understand each other and really have a trust, respect, and love for each other.” Another high school freshman, Jason, told Way friendships were important because then “you are not lonely … you need someone to turn to when things are bad.”

However, for many boys—Way calls it “near-universal”—a shift occurs in late adolescence, roughly from the ages of 15-20. In a phase of life we often think of in optimistic terms—self-discovery, coming of age—boys’ trust in each other shatters like glass. Three years after his first interview, Jason, asked if he had any close friends, said no, “and immediately adds that while he has nothing against gay people, he himself is not gay.” Another boy interviewed by Way in the eleventh grade who up until the year before had maintained a best friendship for ten years said he now had no friends because “you can’t trust nobody these days.” In interviews with thousands of boys, Way saw a tight correlation between confiding in close friends and mental health, and she observed that, across all ethnic groups and income brackets, three quarters of the boys she spoke to “grow fearful of betrayal by and distrustful of their male peers” in late adolescence, and “begin to speak increasingly of feeling lonely and depressed.”

Making matters worse, in the middle of this estrangement from other boys, as we’re becoming young men, we’re governed more than ever by a new set of rules about what behaviour we’re allowed to show. Psychologists call them display rules. “Expressions of hurt and worry and of care and concern for others,” according to white high school boys, are “gay” or “girly.” Black and Hispanic boys, according to Way’s interviews, feel pressure to conform to even stricter rules. Men who break the rules, and express “sadness, depression, fear, and dysphoric self-conscious emotions such as shame and embarrassment” are viewed as “unmanly” and are comforted less than women. Way told me when she speaks in public, she often quotes a 16-year-old boy who said, “It might be nice to be a girl, ‘cause then I wouldn’t have to be emotionless.”

*

And yet, it’s easy to be skeptical—aren’t men doing fine, compared to everyone else? How much does this actually hurt men? They still have friends, don’t they? And yes, entering adulthood, and up to the age of 25, men and women do have approximately the same number of friends. For the outsider looking in, then, and even for the man himself, it may appear that nothing’s amiss. But to paraphrase University of Missouri researchers Barbara Bank and Suzanne Hansford, men have power, but are not well. In the UK, suicide rates among men are steadily rising. In the US, so is unemployment among men, often coupled with opioid abuse. In a 2006 paper addressed to psychiatric practitioners, William S. Pollack of Harvard Medical School wrote, “present socialization systems are dangerous to boys’ physical and mental health and to those around them, leading to increased school failure, depression, suicide, lonely isolation, and, in extremis, violence.” In a study Pollack did of boys age 12-18, only 15 percent of them projected “positive, forward-looking sentiment regarding their futures as men.” 

Women keep being intimate with their friends into adulthood, and men, generally, do not: “Despite efforts to dismiss it, the finding that men’s same-sex friendships are less intimate and supportive than women’s is robust and widely documented.”

Perhaps you want to say that men just like it that way. Perhaps you want to say, like Geoffrey Greif, writing in Buddy System: Understanding Male Friendships, “Men are more comfortable with shoulder-to-shoulder friendships while women prefer face-to-face friendships, which are more emotionally expressive.” Shoulder-to-shoulder meaning: engaging in a shared activity, like playing a game of pick-up basketball, as opposed to confiding face-to-face and being emotionally vulnerable. This may be true for some men, who, like some women, need less intimacy than others. But when asked, men say we wish we had more intimacy in our friendships with other men.

“What is wrong with men,” Bank and Hansford asked, “that they can’t or won’t do what they enjoy to the same extent as women do?” In a study of 565 undergraduates, they investigated. Six possible reasons why men shut each other out were measured by questions like “how often [the subject] and their best friend showed affection for each other, had a strong influence on the other, confided in the other, and depended on the other for help.” The worst offenders? Homophobia, and something they called “emotional restraint,” which they measured by responses to statements like “A man should never reveal worries to others.”

From the vantage point of adulthood, especially in progressive circles, it’s easy to forget the ubiquitous and often quasi-ironic homophobia of teen boys, which circulated among my guy friends. That’s why it was amazing to read Dude, You’re a Fag by C. J. Pascoe, who spent a year embedded in an American high school divining and taxonomizing the structures of teen male identity in intricate and systematic detail. She concluded that “achieving a masculine identity entails the repeated repudiation of the specter of failed masculinity”—in other words, boys must earn their gender over and over again, often by “lobbing homophobic epithets at one another.” And unfortunately, for boys both gay and straight, the rise of gay marriage and queer visibility has not made schoolyards any more tolerant. In fact, Way, who still works with kids in New York City schools, warned that, in contrast to the unselfconscious way straight men can be physically affectionate in repressive societies, in cultures where being gay is a publicly viable option, boys actually feel even more pressure to prove their straight identity. “I hear that coming from all sorts of sources—parents, kids,” said Way, who notes the simultaneity of the rise of gay acceptance and the phrase no homo.

Way discusses yet another reason men may shut each other out: a major betrayal or insult we don’t have the relationship skills to get past. Though they often happen in late adolescence, Way saw the fallout from these formative injuries as “a dramatic loss that appears to have long-term consequences.”

For all these reasons—the socialization we receive as kids, as well as emotional restraint, homophobia, experiences of betrayal, and many others—many men stop confiding in each other, trusting each other, supporting each other, and expressing emotion around each other. And if you subtract all that, what kind of friend will you be, exactly?

* 

“Fickle” and “calculating” is what men tend to be as friends, according to a four-year study at the University of Manchester. More neutrally stated, a comparative study of men in New Zealand and the United States found that, in both cultures, “friendships between males tend to be instrumental in nature, whereas female friendships are more intimate and emotional.” We’re good at being buddies when times are good, but in harder times we tend to abandon each other, or hide from each other, knowing or fearing the other won’t have the language or skills—or will—to support us. 

Dave, 30, a writer and bartender, who struggles to form deep connections with other men, said navigating male friendship is “almost as challenging as dealing with girls when you’re single—you don’t know how close a guy wants to get.” Most of Dave’s friendships are with his male coworkers at the bar, and they mostly just talk about sports. “If the conversation ever gets a little more personal, it’s usually because we’re like, six beers in. And the next time we see each other it’s just like, ‘hey.’” 

For some men, there’s a direct line from their years as the New Haven schoolboys whose best-friend nominations were unreciprocated to now. Our reluctance to show real feeling can mean we don’t acknowledge or affirm friendships. The relative laxness of male friendships can also leave you wondering who your friends are—who should you invest in? Ian, 33, who lives in Toronto and works in the food service industry, has a wide network of acquaintances all across the city, but “they’re not really confidable-quality friends.”

Ian, like me, belongs to yet another group at a high risk of loneliness: single men. Using data from 4,130 German adults, researchers found that single men are lonelier than both men in relationships and single women. (Single women, in other studies, have been found to be “happier, less lonely, and more psychologically balanced than single men.”) A lot of men don’t cultivate emotional intimacy when they are not in partnership with a significant other. Whereas single women at least feel as if they can cry to their friends, single men, generally, cry to no one.

*

Loneliness is not just a bad feeling and like lung tar for your health; it can actually cause you to become more objectively socially isolated, in a vicious cycle triggered by a particularly cruel trait of humans: we ostracize the lonely.

Using data from 5,000 people in Framingham, Massachusetts, one study found that loneliness is contagious. Having a lonely friend (or coworker, or family member) increases your chance of being lonely by 52 percent, and each additional lonely day per week you have leads to one additional lonely day per month for everyone you know. Why would this be? Well, lonely people tend to act “in a less trusting and more hostile fashion,” to “approach social encounters with greater cynicism,” and to be less able to pick up on positive social signals, which causes them to withdraw, making those around them feel lonely, too. Like a virus, this loneliness spreads, giving one person the ability to “destabilize an entire network,” as one of the researchers told the New York Times in 2009, leaving patient zero further and further away from anyone who’s not lonely. Like the rhesus macaque monkeys in a horrific 1965 study who were kept in a “pit of despair” and then shunned when reintroduced to the group, “humans may similarly drive away lonely members of their species,” concluded the authors of the Framingham study. Over time, lonely people are pushed further and further away from others, which only increases their loneliness further, which causes further ostracization.

Decades of this can push you right to the periphery of society. A report by the British Columbia Ministry of Health reported that, compared to their female equivalents, never-married men “are more depressed, report lower levels of well-being and life satisfaction and poorer health, and are more likely to commit suicide.” Indeed, the men who died alone in the Chicago heat wave were all single, and it’s difficult not to see being lonely and single as the path of unsent letters on cockroach floors and the collapse of all contacts.

* 

It may seem like the answer is a relationship, or marriage. Married people in general are less lonely than single people, and married men are less lonely than married women. A 1991 meta-study summed it up: marriage is “particularly rewarding for men.”

However, closer inspection reveals a more complicated, and hazardous, picture. Though less lonely, married men are more socially isolated. Compared to single men, and even unmarried men cohabiting with a partner, married men in a 2015 British study were significantly more likely to say that they had “no friends to turn to in a serious situation.” This seemed to capture the situation of Roger, 53, in Indianapolis, who’s been married for 24 years. “The friendships I had in college and post-college have kind of dissipated,” he said. “My wife and I have a few friends in couples, but I don’t really see friends outside of that.” He confides in no one other than his wife. “There’s very little need to,” he said. Roger is typical: married men “generally get their emotional needs met by their spouses/partners.” Why, then, would Roger need to keep up with anyone else?

In contrast, married women “often get their emotional needs met by their female friends.” That married women’s friends are more important to them than married men’s friends may be one reason why a 2014 British study found that women organize and encourage a couple’s social life more than men, and, in general, “men are far more dependent on their partner for social contact than women are.” When I shared this fact with the men I interviewed, several of them admitted that this was true of them, with one saying his partner spoke with his own mother more than he did, another saying he wouldn’t be in touch with his friends from college if it wasn’t for his partner, and a third saying, because most of his pre-marriage friends were female and there was tension with his wife when he hung out with them, he saw mostly her friends now.

There are clear dangers for married men shunting all this social planning to their wives. (It can be grimmer still for gay men, who struggle with loneliness even more than straight men.) Aside from the questionable morality of offloading all this emotional labour, what are you going to do if your marriage ends before you do?

Brandon, 35, a professor in St. Catharines, Ontario, who got divorced a couple years ago, saw the results up close: when his marriage dissolved, all his friends, who had originally been closer to his ex-wife, went with her. “It was a big wake-up call,” he said. You don’t want to find yourself mid-disaster one day “and realize you’ve surrounded yourself with people who, while interesting, don’t really give a shit about you.”

But Brandon was lucky: he divorced young enough to learn his lesson. A seemingly infinite number of gruesome studies of older divorced and widowed men show that they, like never-married men, are lonelier and more isolated than their female counterparts. Divorced men “are more apt to suffer from emotional loneliness than are women,” and widowers have it even worse. While widowed women generally “are capable of living alone and taking care of themselves,” widowers “encounter severe difficulties in adapting to the single status,” which “leads to a precarious condition … reflected in unusually high rates of mental disorders, suicides, and mortality risk.” All this suggests that married men don’t actually learn how to not be lonely; they just band-aid the problem with marriage, and if that ends, they have all the same problems they’ve always had, but are now older, and for that reason even more prone to isolation.

* 

So—what should you do? I’m glad you asked now, because the more friends you have while young, the more friends you’ll have when you’re old, so the sooner you start improving your connections, the less likely you are to slip into a loneliness/ostracization spiral.

Social isolation is, by definition, ameliorated by simply seeing more people. Most interventions I’ve seen come at the policy level, mostly for older men. The UK seems to be the most aggressive in this approach, with programs like Men in Sheds, which originated in Australia and brings older men together to share tools to work on DIY projects of their own choosing; Walking Football, which gives those who’ve aged out of their prime soccer years the opportunity to play with others of their ilk; and Culture Club, which hosts expert speakers, targets men who’ve spent their lives in “intellectual pursuits” and enforces a “no chit chat” rule (also: no women). One program even targeted single-room occupancy hotels, of the kind inhabited by the men who died in the Chicago heat wave. Its organizers set up a blood-pressure evaluation program in the SROs’ lobbies, cajoling men “who tended to stay in their rooms due to physical disability and fear of crime” into social interaction on the pretext of a convenient health check-up. Over time, the program “helped participants identify shared interests.”

Programs like this seem to have been successful at least in part because “older men participate in organizations slightly more than older women.” Most of these programs try to meet the conditions generally understood to be required to create close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other. If you’re trying to go it on your own, and aren’t into joining a program like the ones above, or they’re not available—or you’re not a senior!—you’d be well-advised to try to replicate these conditions on your own. I’ve had some success showing up to a weekly writers’ group.

Lonely people tend to have a range of maladaptive behaviours and thought patterns. They—we—“have lower feelings of self-worth,” they “tend to blame themselves for social failures,” they “are more self-conscious in social situations,” and they tend to “adopt behaviours that increase, rather than decrease, their likelihood of rejection.” For men, this may include hypervigilance in abiding by display rules learned long ago that were designed to protect us from threats that no longer exist.

Fortunately, all these things can change.

This is done primarily through cognitive-behavioural therapy. The “cornerstone” of these interventions was to “teach lonely individuals to identify automatic negative thoughts and regard them as hypotheses to be tested rather than facts.” The specific approach depended on the target population. “Reminiscence therapy” was used to help institutionalized elderly people recall past life events, which they were encouraged to reinterpret in a positive light, and also to apply “positive aspects of past relationships to present relationships.” Thought substitution techniques were used with Navy recruits, who were encouraged to replace a negative thought like “I am a total failure” with “I’m often successful at the things I do.” Lonely college students responded well to a “reframing” technique, where they were coached to reframe their present experience of loneliness in positive ways, e.g., “A nice part of being lonely now, is that it allows you to develop and discover more about yourself at a time when others may be so wrapped up in a relationship that they end up spending their time trying to be what someone else wants them to be.” Variations of this kind of therapy were shown to be successful across diverse lonely populations, from sex offenders in jail to people with limited mobility.

*

As the novelist Jacob Wren notes, though, there are no individual solutions to collective problems. And, unfortunately, men’s loneliness is a problem not only for themselves. Though not shown to be causal, there is significant correlation between loneliness in men and violent behaviour. A 2014 Turkish study found that violent high school boys are disproportionately lonely; a 1985 study found that “men who scored high on measures of loneliness readily aggressed against a female subject in a laboratory study”; and, in a dynamic that would appear to explain some aspects of red-pill culture, a 1996 study of sex offenders in a Canadian prison found that those who were lonely and lacked intimacy in their lives “blamed these problems on women.” Even more troublingly, matching studies in Canada and New Zealand found higher-than-average loneliness in populations of male rapists.

It’s clear that it’s in everyone’s interests that men’s loneliness be curbed. But what specifically can be done?

“The boys are telling us the solution,” said Way, in our interview. “They want friendships! They need them, they’ll go ‘wacko’ if they don’t have them.” Better friendships, she says, are what men are missing—the key to better mental and emotional health, and certainly the antidote to loneliness. Pollack echoes this: “As tough, cool, and independent as they may sometimes seem, boys yearn desperately for friendships and relationships.”

Changing how boys navigate their friendships, or how men relate to each other in even the smallest ways, may seem forbidding, but, as Louis D. Brandeis said, most of the things worth doing in the world had been declared impossible before they were done.

Way told me about working with a class of seventh-graders just last year. She read them that quote from Justin who, speaking of his friends and himself, says, “sometimes two people can really, really understand each other and really have a trust, respect, and love for each other.” These seventh-graders started laughing. “The dude sounds gay,” one of them said. Way set them straight, telling them that 85 percent of the boys she interviewed over 25 years sounded like this. They were totally quiet. And then someone said, “For real?” Way said, Yeah, this is what boys sound like. All of a sudden, the boys were waving their hands to tell Way about their close friendships, their relationships, “and two boys who had just so-called ‘broken up’ their friendship, even started to talk to each other about the friendship.” As Way said, as soon as they learned having emotions and loving their friends was normal, “they were allowed to access what they really knew, and they were like, ‘This is me.’”

It is possible to change the culture. What is normal can change. And in the meantime, know this: intimacy is normal. Having close friends is a normal thing to want. And if you’re struggling, you’re not alone.


Visualising High Frequency Trading in Bitcoin


Some time ago, I had a look at the seasonality of traded volume on Bitcoin exchanges, up until December 2013. My objective was to determine approximate trading sessions for 3 popular exchanges. I found that the intra-day volume followed a kind of sinusoidal pattern, which I took to be the telltale sign of the presence of humans on the exchanges. This article attempts to visually explore the extent of algorithmic trading in Bitcoin, with a focus specifically on the Bitstamp exchange and limit orderbook data.

Feel free to skip this part if you are already familiar with the inner workings of a limit order book and exchanges in general.

An exchange/bourse is a marketplace where agents can buy and sell _things_ to each other. There are many ways for an exchange to facilitate this, however the most popular mechanism, and the subject of this article, is the concept of a Limit Order Book.

The first main component of the exchange, serving parties interested in buying or selling units of some object (a stock, contract, currency, etc.), is the Limit Order Book. The Limit Order Book is a type of auction mechanism for recording the _passive_ trading intentions of individuals (people, organisations, algorithms...). A passive intention to buy an asset is a bid priced below the current best asking price, so that it rests in the book rather than executing immediately. Similarly, a passive intention to sell is an asking price above the current best bid. The price of the bid or ask order is known as the Limit Price. In the case of a Bid, the limit price is the maximum amount a party is prepared to pay to buy the asset, and in the case of an Ask, the limit price is the minimum amount the party is prepared to sell the asset for.

The order book consists of 2 sides. The Bid side, which contains limit orders for parties interested in buying, and the Ask side, which contains limit orders for parties interested in selling. Each side of the book is a priority queue: orders are ordered by their limit prices, such that on the bid side of the book, parties willing to pay more for the asset are placed at the top of the book. Similarly, parties willing to sell for less are placed at the top of the ask side of the book. The top of the bid and ask sides of the book are known as the current best bid and ask. The difference between the best bid and ask is known as the market spread. If 2 or more parties insert orders at the same bid or ask price, then the orders are ordered according to arrival time. Specifically, each side of the order book is a price priority FIFO queue.

The second main component of the exchange is the matching engine. The role of the matching engine is to match incoming orders to buy or sell against limit orders resting at the various levels in the order book. The resting limit orders are said to be market makers because they are providing liquidity to the market. In order for a trade to occur, a trader (market taker) must cross the market spread and pay the asking price if the trader is buying, or the bid price if the trader is selling. Specifically, limit orders are removed from the order book according to their price/time ordering, by impatient traders who require their orders to be executed immediately. The arrival rates and volume of these impatient orders (demand) vs the amount of resting volume, and the rate at which it is replenished (supply), is the mechanical essence of the exchange.
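To make the price/time priority concrete, here is a minimal sketch of such a book in Python. The class and method names (Book, add, market_buy) are my own invention for illustration, not anything Bitstamp-specific; it models only the mechanics described above.

import heapq, itertools

class Book:
    """Toy limit order book with price/time (FIFO) priority. A sketch only."""
    def __init__(self):
        self._seq = itertools.count()   # arrival counter for FIFO tie-breaking
        self.bids = []                  # max-heap entries: (-price, seq, volume)
        self.asks = []                  # min-heap entries: ( price, seq, volume)

    def add(self, side, price, volume):
        """Insert a passive limit order on the given side ("bid" or "ask")."""
        book, key = (self.bids, -price) if side == "bid" else (self.asks, price)
        heapq.heappush(book, (key, next(self._seq), volume))

    def market_buy(self, volume):
        """Match an incoming buy against resting asks; returns (price, volume) fills."""
        fills = []
        while volume > 0 and self.asks:
            price, seq, avail = heapq.heappop(self.asks)
            take = min(volume, avail)
            fills.append((price, take))
            volume -= take
            if take < avail:  # partial fill: the remainder keeps its queue position
                heapq.heappush(self.asks, (price, seq, avail - take))
        return fills

A market_sell against the bids is symmetric. Ordering the heap entries by (price, seq) is exactly what makes each side a price priority FIFO queue.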

To illustrate the 2 components of the limit order book and exchange matching engine, the following table represents the top 10 best bid and ask orders to buy and sell Bitcoin on the Bitstamp exchange at 10:13am on the 2nd November 2014. The left hand side contains the top 10 best bids from parties interested in buying a specific amount at a specific price. The right hand side contains the top 10 best asks from parties interested in selling. The first row represents the current best bid and ask prices along with the amount of volume available at those prices. The Liquidity column is the cumulative sum of volume available at all price levels.

Now, suppose that an order to immediately buy 10 Bitcoin arrives. The exchange matching engine will honor this request by removing the first 4 ask orders and then partially removing the 5th order, resulting in 5 trades, and widening the spread, such that the new best ask in the order book after filling the order request will be 12.89324700 @ $325.68. Similarly, if an order to immediately sell 5 Bitcoin arrives, the order will consume (hit/lift) the first 4 bid orders and partially consume the 5th, leaving the best bid at 13.25048100 @ $324.56. The initiator/aggressor of the buy trade (market order) pays the VWAP (Volume Weighted Average Price) for 10 Bitcoin:

((325.38*1.12400000)+(325.39*0.61464703)+(325.45*1.20200000)+(325.67*0.45260000)+(325.68*6.60675297)) / 10 = $325.60

It is in the interest of the aggressor who placed the market order to buy 10 Bitcoin to receive the lowest possible VWAP. The VWAP received depends on the amount of volume available at each price level in the order book. And since the exchange matching engine matches incoming market orders to resting limit orders by price/time priority, the VWAP depends on the Liquidity: the cumulative sum of volume at each price level required to fill the incoming order.
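As a quick check on the arithmetic, a few lines of Python reproduce the $325.60 figure from the five fills above (the same numbers appear in the table below):

# (price, volume) fills for the 10 Bitcoin market buy described above
fills = [(325.38, 1.12400000), (325.39, 0.61464703), (325.45, 1.20200000),
         (325.67, 0.45260000), (325.68, 6.60675297)]

total = sum(v for _, v in fills)              # 10 Bitcoin
vwap = sum(p * v for p, v in fills) / total
print(round(vwap, 2))                         # -> 325.6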


Bids                                            Asks
 Liquidity      Volume       Limit Price     Limit Price    Volume        Liquidity
 3.45881670     3.45881670   324.88  <- Spread ->  325.38    1.12400000    1.12400000
 3.65182880     0.19301215   324.85                325.39    0.61464703    1.73864700
 4.26594740     0.61411858   324.82                325.45    1.20200000    2.94064700
 4.88048110     0.61453372   324.80                325.67    0.45260000    3.39324700
18.25048100    13.37000000   324.56                325.68   19.50000000   22.89324700
18.62248100     0.37200000   324.36                325.71    2.00000000   24.89324700
18.86584700     0.24336568   324.36                325.72    0.91700000   25.81024700
19.24584700     0.38000000   324.33                325.99    0.76300000   26.57324700
19.84584700     0.60000000   324.07                326.09    1.45639760   28.02964500
26.89584700     7.05000000   324.06                326.19    1.47153000   29.50117500

The amount of volume/liquidity available on each side of the order book determines the order book "shape". The following graph shows the shape of the order book for the same point in time as above (10:13am). The y-axis shows the cumulative sum of volume available at each price level +-5% from the current best bid/ask. Higher amounts of volume closer to the current best bid/ask represent better liquidity (market orders receive better VWAPs and leave less impact). In a "balanced" market, the shape of the book would appear to be symmetrical. In this example, there is slightly more volume available on the ask side of the book than there is on the bid side (within +-5% of the current best bid/ask). We might say that there is an imbalance in this order book: parties are more interested in selling than in buying.

order book liquidity
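The shape itself is just a cumulative sum over price levels. Assuming the book is available as two lists of (price, volume) pairs ordered best-first, the +-5% curves could be computed along these lines (a sketch of the idea, not the code used for the figure):

def book_shape(bids, asks, pct=0.05):
    """Cumulative volume within +-pct of the best bid/ask.
    bids and asks are (price, volume) pairs, best price first."""
    best_bid, best_ask = bids[0][0], asks[0][0]
    shape = {"bid": [], "ask": []}
    for side, levels, lo, hi in (
            ("bid", bids, best_bid * (1 - pct), best_bid),
            ("ask", asks, best_ask, best_ask * (1 + pct))):
        cum = 0.0
        for price, volume in levels:
            if lo <= price <= hi:
                cum += volume                  # running sum = the "Liquidity" column
                shape[side].append((price, cum))
    return shape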

The dynamics of limit order books are a complex subject, and I only briefly describe some simple mechanical aspects of them here in the hope of keeping this article self-contained. The following papers offer excellent insights into the subject:

Data Collection

I collected order book event data from the Bitstamp exchange over a 4 month period, between July and October 2014, resulting in a dataset of ~33 million individual events: a minuscule dataset in comparison to the throughput on "conventional" exchanges (see Nanex: High Frequency Quote Spam, for example).

The data contains individual order book events describing the life cycle/state of individual limit orders. An order event may be one of Add, Modify, or Delete. An Add event corresponds to the insertion of a limit order into the order book, Modify corresponds to a partial fill of an order, and Delete corresponds to the removal of an order from the book (either from a complete fill or a cancellation). A "raw" event is structured as follows:

[ id, timestamp, price, volume, action, side ]

where id is a unique identifier for the limit order, timestamp is the time at which the initial Add order arrived at the exchange, price is the limit price, volume is the amount of volume for the order in Bitcoin, action is Add, Modify, or Delete, and finally, side is Bid or Ask, depending on the direction (buy/sell) of the order.
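In Python, a parsed event might be held in a small record like the following (the field names mirror the description above; the actual wire format of the feed is not shown here, so treat the parsing as illustrative):

from dataclasses import dataclass

@dataclass
class BookEvent:
    id: int           # unique identifier for the limit order
    timestamp: float  # time at which the initial Add arrived at the exchange
    price: float      # limit price
    volume: float     # volume in Bitcoin
    action: str       # "Add", "Modify" (partial fill) or "Delete"
    side: str         # "Bid" or "Ask"

def parse_event(raw):
    """raw: [id, timestamp, price, volume, action, side]"""
    oid, ts, price, volume, action, side = raw
    return BookEvent(int(oid), float(ts), float(price), float(volume), action, side)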

Data Summary

In reference to the observation that the average traded volume by time of day follows a sinusoidal pattern, the average number of events arriving every 15 minutes follows the same pattern, as shown in the following graph:

order book events

The graph shows the number of order book events per 15 minutes, averaged over each 24 hour period (white line). The regression (red) shows a clear sinusoidal pattern. Trading activity is lowest at ~3am and peaks at ~3pm (UTC). The most active continuous period with respect to the mean number of events (blue line) is between 9:30am and 10pm.

While the event dataset consists of ~33 million events, these events can be broken down into individual orders and their types. In total, of the identifiable order types, there were 14,619,019 individual "flashed orders" (orders added and later deleted without being hit) representing 93% of all order book activity, 707,113 "resting orders" (orders added and not deleted unless hit) and 455,825 "marketable orders" (orders that crossed the book resulting in 1 or more reported trades).
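Grouping the event stream by order id makes this breakdown mechanical. One caveat worth flagging: a Delete alone does not say whether the order was cancelled or completely filled, so a real classification has to cross-reference trades. The sketch below, using the BookEvent records from earlier, only shows the grouping step; mapping its raw life cycles onto the flashed/resting/marketable counts above needs trade data that the sketch does not have.

from collections import defaultdict

def group_order_lifecycles(events):
    """Crude bucketing of order ids by the raw actions observed for them."""
    history = defaultdict(list)
    for ev in events:                       # events assumed in arrival order
        history[ev.id].append(ev.action)
    buckets = {"added_then_deleted": 0, "partially_filled": 0, "still_resting": 0}
    for actions in history.values():
        if "Modify" in actions:
            buckets["partially_filled"] += 1   # touched by at least one trade
        elif "Delete" in actions:
            buckets["added_then_deleted"] += 1 # removed by cancel or full fill
        else:
            buckets["still_resting"] += 1      # still sitting in the book
    return buckets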

As mentioned previously, all orders have a life cycle. An order is first added, it may then be updated if it is partially filled, and finally it will be deleted once it has been cancelled by the trader or completely filled. The following visualisation shows 1 hour of limit order events on the 27th October. The y-axis represents the limit price for both bid and ask orders, which can be distinguished by the colour red for ask and blue for bid. The circles are an approximate guide to the amount of volume in the order. Furthermore, if the volume is being added to the order book, the circle is opaque, whereas if the order is being removed (cancelled), the circle is empty. The point at which the bid orders meet the ask orders corresponds to the top of the order book, such that higher priced bids and lower priced asks are closer to the current market price. Here, the limit price has been restricted to the $342 to $375 interval.

order book events

In reference to the above graph, it is interesting to note the apparent regularity and order of the event data. Given this obvious regularity, not to mention quantity, I think it is reasonable to assume that most, if not all, of the activity in this time period is the result of systematic activity from automated market participants. Zooming in closer (shown below), to a 30 minute range, further highlights the systematic activity surrounding the market midprice. At this time resolution, it is possible to begin to interpret the type of activity occurring in the graph.

order book events

At each price level, orders are being added and then removed on a highly periodic basis. This is the result of position re-allocation: In response to perceived changing conditions, trading agents are re-positioning their orders. The changing conditions could be due to a change in midprice, order book depth, or some external event. Whatever the reason, it is interesting to observe that the cyclic patterns could be the result of agents re-positioning their orders in response to other agents re-positioning... In the below graph, zoomed into a 15 minute time period, the order re-allocation is slightly more obvious. The apparent X-X-X pattern could show the presence of multiple orders at different price levels originating from the same strategy being adjusted.

order book events

Zooming in further, to a 5 minute interval and closer to the midprice, the re-allocation is clear. Looking at the second "X" pattern on the bid side of the order book (blue), we can see that orders are more or less simultaneously being added (opaque circles) and removed (empty circles). The individual X's in this case seem to show a battle between 2 processes: the first process adds orders (by descending price), while in response, a second process seems to cancel orders (by ascending price). The second X in the series perhaps shows the first process deleting the orders it added in the first X (this time in ascending price order) and the second process adding back its previously deleted orders.

order book events

The processes seem to be feeding off one another, yielding this X-X-X effect. For me, this highlights the most interesting aspect of order book event data: if the market participants are event driven, then the market dynamics are the result of (agents responding to (agents responding to (agents responding to (events)))) and so on. At this level, everything can be reduced to mechanical interactions. The market can be viewed as a Complex Adaptive System, which is a huge subject, well beyond the scope of this article. For anyone who may be reading this with a robotics/cybernetics/AI background, I recommend Eugene A. Durenard's Professional Automated Trading, which contains some great insights.

The X-X-X order re-allocation pattern shown above is due to the environment in which the traders/agents are operating, and can be reproduced with some simple rules. Given that a limit order book is a (price, time) queue, the only way to jump queue position (with respect to a bid order) is to increase the limit price. By placing an order some distance from the current best bid, as is the case here, there is a chance that the order will be hit. The likelihood of the order being hit, essentially, decreases as a function of distance (in volume) from the current best bid, the market order arrival rates (flow), and the rate at which liquidity is replenished (how resilient the order book is to market impacts). As such, if an agent has a bid order placed at, say, $347 for 10 Bitcoin, the agent placing the order has some view on the likelihood of their order being hit. Most likely, they are looking to extract value from rare market impact events. If another agent comes along with a similar view and places an order at $347.01, then suddenly the original view of the first agent changes, since there is now more volume "in front" of the order at $347. This may cause the first agent to re-position the order to $347.02, which in turn may cause the second agent to re-position even higher, and so forth. In this case, the competing processes are likely to be reacting to both the Add and Delete events of the other party: when one agent sees the other delete, it defaults to its previous position. This might explain the horizontal sequence.
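This leapfrogging is easy to caricature in code. The toy below has two bidders, each re-quoting one tick above the rival until a price ceiling is hit and then falling back to its starting level. The starting quotes, tick size, and ceiling are arbitrary choices of mine, but the Delete/Add churn it prints is qualitatively the climbing pattern described above.

TICK = 0.01

def leapfrog(start_a, start_b, ceiling, steps=30):
    """Two bidders jump each other one tick at a time, resetting at a ceiling."""
    quotes = {"A": start_a, "B": start_b}
    starts = {"A": start_a, "B": start_b}
    log, turn = [], "A"
    for _ in range(steps):
        rival = "B" if turn == "A" else "A"
        if quotes[rival] >= quotes[turn]:          # we've been jumped
            new = round(quotes[rival] + TICK, 2)
            if new > ceiling:                      # too rich: fall back to the start
                new = starts[turn]
            if new != quotes[turn]:
                log.append((turn, "Delete", quotes[turn]))
                log.append((turn, "Add", new))
                quotes[turn] = new
        turn = rival
    return log

for event in leapfrog(347.00, 347.01, ceiling=347.10):
    print(event)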

This is, of course, subjective and based on an initial interpretation of the above visualisation. If anything, it is at least clear that there are predominantly systematic processes involved.

Rapidly deleted orders, or "fleeting orders", seem to get a lot of negative attention in the media. In conventional markets, some have associated rapidly cancelled orders with manipulative tactics employed by high frequency traders. My (humble) belief is that most (but not all) of the time, high cancellation rates can be attributed to various strategies feeding off one another and re-adjusting positions, as described above. The fact that limit order books are price/time priority based (there are also other schemes) in a way forces participants to constantly re-position their orders: the market participants are most certainly not collaborating in any way, so most of the time the activity seen in the order book is probably the result of some giant reactive programming exercise. Naive side note: maybe there exists an alternative (game theoretical) scheme in which 2 market participants could somehow (at least temporarily) agree to "share" a queue position, instead of jumping in front of one another.

Having said that, there are indeed various ways that high cancellation rates can be associated with more "predatory" high frequency trading. Given that trading in Bitcoin, on Bitstamp, is completely unregulated, and basically anything goes, I would expect (and hope) to see evidence of manipulative activity. As an initial step, the following 4 visualisations show every single "fleeting" order event over a 4 month period. The intention of the visualisations is not to expose any "suspicious" order cancellation activity, but rather to further highlight the predominance of systematic trading on this particular exchange.

Each graph shows the amount of deleted volume (per event) along the horizontal axis, vs time, ascending on the vertical axis. Each point in the graph corresponds to a cancelled order event, where the colour blue differentiates cancelled Bid events from cancelled Ask events. I define "flashed volume" as the volume of an order that is placed in the order book and later cancelled. As such, these visualisations exclude any partially filled, then cancelled orders. Finally, the horizontal axis is plotted in log-scale, since the distribution of order size decays exponentially: small orders are much more frequent (this is another subject altogether).

October

order book cancellations

September

order book cancellations

August

order book cancellations

July

order book cancellations

The first observation is again the striking regularity/geometric appearance of the data. Simply plotting the cancellations by volume shows obvious systematic activity (there are other dimensions; this is the most obvious to explore). The second observation is that the cancellations appear to happen in regimes or clusters, with cancelled bid volume sometimes far greater than cancelled ask volume, and vice-versa. This may lead to the interpretation that orders are being re-allocated either in response to some external news, or to chase the market.

Zooming in on a single day (this time with volume shown on a linear scale), it is evident that most of the volume falls under 100 Bitcoin per order, at least for the particular day (2014-11-02) shown below. This makes some sense, since on that particular day the VWAP was $323.95, and $32,395 is quite a lot of money to place in a single order. In fact, 95% of the volume on this day is <= 60.89 Bitcoin, with the median volume being 5 Bitcoin.

order book depth

Filtering the graph to include cancelled volume <= 100 shows the density of cancelled volume for this day (260,524 deleted order events). The next few visualisations show the same time period, however with a progressively lower limit on the size of volume displayed.

order book depth
order book depth
order book depth
order book depth
order book depth

The available volume at any given price level is the sum of the volume from the individual orders enqueued at that price. As such, the available volume at a price level can be viewed as an individual time series that changes when a new order is enqueued (Add), removed (Cancel), or filled (Modified). The cumulative sum of the volume available at price levels above and below the market midprice corresponds to the level of liquidity available in the market.
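Replaying the event stream into those per-level series is a small fold over the events. A sketch, again using the BookEvent records; it assumes events arrive in order and that a Modify carries the order's new remaining volume, which is one possible reading of the feed described earlier.

from collections import defaultdict

def level_series(events):
    """Per-(side, price) volume through time, rebuilt from order events."""
    open_volume = {}               # order id -> current remaining volume
    levels = defaultdict(float)    # (side, price) -> current total volume
    series = defaultdict(list)     # (side, price) -> [(timestamp, volume)]
    for ev in events:
        key = (ev.side, ev.price)
        if ev.action == "Add":
            open_volume[ev.id] = ev.volume
            levels[key] += ev.volume
        elif ev.action == "Modify" and ev.id in open_volume:
            levels[key] += ev.volume - open_volume[ev.id]   # partial fill shrinks the level
            open_volume[ev.id] = ev.volume
        elif ev.action == "Delete" and ev.id in open_volume:
            levels[key] -= open_volume.pop(ev.id)
        series[key].append((ev.timestamp, levels[key]))
    return series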

A common approach to viewing the order book volume is to plot the cumulative sum of the volume on either side of the book (as shown in the introduction). This approach shows available liquidity, order book imbalance and the volume size at each level as a type of step function: more generally, the order book "shape". The problem with this approach is that it is limited to displaying instantaneous information: it shows no persistent information, instead showing a snapshot of the order book for 1 particular instant in time.

Order Book Price Level Volume

The following visualisations show how the order book volume evolves through time, giving a complete picture of all limit order activity throughout the day. The first example, shown below, plots the volume available at every price level, in 1 cent ticks, filtered to the 2100 levels >= $313 and <= $334 for a 24 hour period on the 2nd November. The white time series corresponds to the market midprice, which is defined as the average of the current best bid and ask prices: (best.bid+best.ask)/2. All price levels above the midprice correspond to the Ask side of the book, while levels below correspond to the Bid side. The colours differentiate the amount of volume at each level: blue means little volume, through to green, yellow, and finally red to indicate a large quantity of volume. Since the majority of price levels contain very little volume, the colour scheme has been rescaled, so that most of the colour differentiation occurs below ~5 Bitcoin and anything above (rare) is in the red end of the spectrum.

order book depth

Unlike the order book event visualisations shown previously, this visualisation shows the actual life cycle of limit orders. In the context of this article, the sheer extent of limit order activity is shown, demonstrating the ebb and flow of activity throughout the day.
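A figure like this boils down to a (price level x time) matrix of volume plus the midprice series. Assuming periodic book snapshots held as dictionaries keyed by (side, price), the binning might look like the following (numpy; the tick size and price window are parameters, and nothing here is the actual plotting code):

import numpy as np

def depth_matrix(snapshots, lo=313.00, hi=334.00, tick=0.01):
    """snapshots: list of (timestamp, {(side, price): volume}) book states.
    Returns one row per 1-cent level, plus the midprice series."""
    prices = np.round(np.arange(lo, hi + tick, tick), 2)
    index = {p: i for i, p in enumerate(prices)}
    grid = np.zeros((len(prices), len(snapshots)))
    mid = np.zeros(len(snapshots))
    for t, (ts, levels) in enumerate(snapshots):
        best_bid = max((p for (s, p) in levels if s == "Bid"), default=np.nan)
        best_ask = min((p for (s, p) in levels if s == "Ask"), default=np.nan)
        mid[t] = (best_bid + best_ask) / 2        # midprice as defined above
        for (side, price), volume in levels.items():
            i = index.get(round(price, 2))
            if i is not None:
                grid[i, t] = volume
    return grid, mid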

Order Book Liquidity

I borrowed the idea for visualising liquidity/depth (and the spectral colour scheme) from Nanex: I'm a big fan of the visualisations resulting from their research.

The below chart (which is aligned with the above) shows the cumulative sum of order book depth through time. The amount of data I am dealing with here comes nowhere near the amount of data processed by Nanex. Furthermore, this exchange (Bitstamp) is highly illiquid. As such, instead of showing the available depth at actual price levels, I have grouped the volume into 40 bands: 20 above the current best ask and 20 below the current best bid. This example shows the amount of volume available throughout the day on the bid side of the book (negative liquidity on the y-axis) vs the volume available on the ask side of the book (positive). Starting at the 0 line, the first red region above it shows the amount of volume available from the best ask price up until 0.25% (25 BPS) above it. The band above that shows the amount of volume for all prices >= 25 BPS and < 50 BPS. The top band (blue) shows the amount of volume at price levels >= 475 BPS and < 500 BPS (5%). The same ranges are repeated for price levels below the best bid price.

order book depth
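The banding works out to 20 bands of 25 BPS on either side. Given a book snapshot in the same (side, price) -> volume form as above, the aggregation could be done roughly like this (my own sketch of the idea, not the code behind the figure):

BPS = 1e-4

def depth_bands(levels, best_bid, best_ask, band=25, nbands=20):
    """Sum volume into [0,25), [25,50), ... BPS bands away from the touch."""
    bid_bands = [0.0] * nbands
    ask_bands = [0.0] * nbands
    for (side, price), volume in levels.items():
        if side == "Ask":
            b = int((price / best_ask - 1) / (band * BPS))
            if 0 <= b < nbands:
                ask_bands[b] += volume      # plotted above the 0 line
        else:
            b = int((1 - price / best_bid) / (band * BPS))
            if 0 <= b < nbands:
                bid_bands[b] += volume      # plotted below the 0 line (negative)
    return bid_bands, ask_bands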

Some Examples

Using the same day as an example, and zooming in to a 3 hour period centered at 12pm, the following chart and accompanying depth map show the order book activity at the end of a minor (in Bitcoin terms) sell off. It is interesting to note the midprice movement in this chart in relation to the order book depth. First, note the ribbons of declining volume above the midprice. This shows quite a large amount of volume being periodically cancelled and then shifted down a few levels. Second, the depth map below the chart shows the total volume above and below; note how immediately before the last sell off there is an increase in liquidity above the midprice, and immediately after, below it.

order book depth
order book liquidity

Zooming in to a 15 minute interval, centered on the same region, it becomes appropriate to show the current best bid and ask along with trade events instead of the market midprice. Here, visible on both the order book price level chart and the depth map, we can see a number of limit orders being cancelled immediately before a sell off.

order book depth
order book depth

Using the price level and depth map visualisations above, it is possible to begin to interpret some of the activity. Looking through some random slices of the data, some re-occurring themes begin to emerge. As mentioned before, given that this market is completely unregulated (I'm not implying this is a bad thing), and is essentially a free-for-all in terms of the potential for competing trading algos, I hope to find some interesting activity to visualise. So far I have identified a few patterns reminiscent of the "Negative HFT" initially reported and categorised by Nanex.

In the next few examples, I will try to interpret a few "suspicious" looking patterns. Generally, it was not very hard to find these examples, although I am disappointed a little in terms of the sophistication of the apparent tactics. I may add more examples at a later date.

Layering

Layering is reported in conventional markets as a negative HFT tactic. Related to fleeting orders (inevitable order cancellations), layering involves adding volume at various price levels with the sole intention of influencing other market participants into believing (observing) an order book imbalance or strong buying/selling pressure. In other words, layering attempts to coerce the market into a direction that will benefit the initiator's position.

I have identified 3 types of layering so far, which pretty much occur all of the time. Having said that, some layering will simply be market chasing strategies: placing orders always a certain level below or above the market in the hope of gaining value from rare events. In increasing order of aggressiveness, the types of layering found so far are: 1) Basic resting order layering: orders, usually very large, are placed in the book and left there. The intent may be to give a false impression of "support". 2) Variable layering: a large amount of volume is added for a short period of time, deleted, and then added back at a higher (in the case of a bid) price. The intent may be to move the market. 3) Finally, the most aggressive form of layering occurs at, or very close to, the best bid or ask price. In this case the layering immediately affects the midprice and can gradually move the market.
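One crude way to surface candidates for the second type is to look for large orders that are added and later cancelled without ever being touched. The screen below is a heuristic of my own, not an established detector; the volume and lifetime thresholds are arbitrary, and it assumes each event carries its own arrival time, whereas the feed described earlier timestamps events by their original Add, so a receive-side timestamp would be needed in practice.

def layering_candidates(events, min_volume=100.0, max_life=60.0):
    """Flag orders of >= min_volume Bitcoin that are added and cancelled,
    unfilled, within max_life seconds -- possible 'variable layering'."""
    adds, touched, flagged = {}, set(), []
    for ev in events:                      # events assumed in arrival order
        if ev.action == "Add" and ev.volume >= min_volume:
            adds[ev.id] = ev
        elif ev.action == "Modify":
            touched.add(ev.id)             # partially filled: not a pure flash
        elif ev.action == "Delete" and ev.id in adds and ev.id not in touched:
            add = adds.pop(ev.id)
            if ev.timestamp - add.timestamp <= max_life:
                flagged.append((add.timestamp, add.side, add.price, add.volume))
    return flagged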

The first type of layering is not very exciting, so I will not show it in isolation, but will instead start with the 2nd type. The first example, below, shows a 6 hour snapshot between 5pm and 11pm on 2014-10-31. Here, the bid side of the book appears to show consistent layering activity with high volume (orange/red orders). It could be argued that this strategy is simply following the price. However, I selected this example since the "climbing" activity seems to persist regardless of market direction.

order book layering

This climbing activity is best visualised in terms of order events. Below, in a 1 hour example, the price climbing pattern of add/delete/add, at incremental price levels on the bid side is quite obvious.

order book layering

The next example, from the same day, shows the same pattern. However this time, on both sides of the order book.

order book layering

The next example, along with the associated depth chart, taken from 2014-10-03, shows some very interesting activity. The void beneath the midprice around 12pm is the result of quite a large market impact; the results of which are best viewed in the aligned depth chart. Just after 1pm, a large resting order (~300 Bitcoin) appears below the bid and then above, illustrating the first type of layering.

order book layering
order book layering

On the right hand side of the above visualisation, there is some tightly packed layering occurring. Closer examination, below, shows a number of orders placed very close to the best bid. I'm not sure if there is anything suspicious happening here (besides the climbing bid); however, it is interesting to see that all of the orders are hit at 15:40.

order book layering

The next example, from the same day, shows layering on the ask side of the book. This time, the orders are removed before they can be hit.

order book layering

The next day, 2014-10-04, shows a lot of layering activity, particularly on the ask side of the book. On the bid side, the volume at $326.01 is continuously "flashed", starting at 17:42:57.068, averaging 356.8 Bitcoin, up until 775.4 Bitcoin ($252,788). As soon as the best bid is within 80 BPS range, it disappears, at 19:54:07.099.

order book layering

Zooming in to just before 8pm and restricting the maximum amount of displayed volume to 100, shows the layering above the midprice in some more detail.

order book layering

The same time period and price level range shown as order book events clarifies the situation. The layering on the ask side of the book seems to be stacked in blocks. The whole block is shifted down incrementally, as can be seen by the add/cancel pattern below.

order book layering

The final level of layering aggressiveness occurs at the market spread. The example below, from 2014-10-03, shows the best bid slowly increasing. It could be argued that this is some kind of pegged order; however, there are times when the best ask is not increasing (or is in fact decreasing) while the best bid is still increasing periodically.

order book layering

This example, from 2014-10-14, shows a more active version. Note how the market appears to be being forced upward.

order book layering

Looking at the event data for this time period shows the climbing effect in more detail.

order book layering

The final example, with corresponding event visualisation, shows the phenomenon occurring on 2014-10-18.

order book layering
order book layering

Quote Stuffing

Quote stuffing, which is referred to as "one of the most visually obvious forms of HFT" in Credit Suisse: High Frequency Trading – Measurement, Detection and Response, and documented extensively by Nanex, for example here: Nanex: Quote Stuffing and Strange Sequences, occurs when bid or ask orders are added and then deleted in rapid succession. Whether or not quote stuffing is malicious seems to be the subject of heated debate. I'm not an expert, so I'm not going to take a one-sided position on this. I'm simply interested to see if similar observations can be made in Bitcoin, and whether or not it is possible to make sense of any observed activity. If I had to guess, I would say livelock-like conditions are very likely to occur in event driven code: I've had to deal with all manner of different situations in my own trading code. I would also say that while this kind of activity may, most of the time, be a side effect of event driven processes, I think it is very plausible that it could be used in a targeted/creative way. Either way, the fact that this activity occurs at all is interesting in its own right.
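One simple way to quantify this kind of churn is to count Add/Delete events per price level per second and look at the busiest level-seconds; a square wave shows up as one level flipping many times inside each window. A sketch, with an arbitrary window size:

from collections import Counter

def stuffing_hotspots(events, window=1.0, top=10):
    """Count Add/Delete events per (side, price, window) bucket and return
    the busiest buckets -- square-wave candidates, not proof of anything."""
    buckets = Counter()
    for ev in events:
        if ev.action in ("Add", "Delete"):
            buckets[(ev.side, ev.price, int(ev.timestamp // window))] += 1
    return buckets.most_common(top)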

The following examples show some activity on the Bitstamp exchange that appears to mirror the quote stuffing (malicious or otherwise) found in conventional markets. The first example, below, shows a 15 minute time period on 2014-10-01. Note that 15 minutes is a very long time in comparison to the timescales where this phenomenon is usually observed. Nevertheless, the asking price exhibits what is referred to as a square-wave in the Credit Suisse report.

quote stuffing

In this next example, taken from 2014-10-03, the best ask oscillates quite wildly over a long period of time (~30 minutes). If this is the result of a bug, then it is a pretty serious one: the fluctuation is huge.

quote stuffing

Zooming in to a 5 minute period shows the anomaly in finer detail. In addition, the depth map shows the fluctuating liquidity as the order is continuously added and removed.

quote stuffing
quote stuffing

The next example, taken from 2014-10-17, shows a more interesting pattern. The ask in this case is decreasing at an irregular rate. I'm guessing that this is simply 2 or more selling algos jumping over each other. The bid side, however, seems to exhibit a more regular pattern: the bid increases to some level in small steps, and then resets.

quote stuffing

Looking at the bid event data, restricted to the current best bid for the same period, shows this climbing and then resetting effect in greater detail.

quote stuffing

A slightly more interesting observation occurs on 2014-10-18, this time reminiscent of the "sawtooth" pattern.

quote stuffing

Another square wave pattern. This time occurring on 2014-10-20.

quote stuffing

The sawtooth pattern occurring below, on 2014-10-22, is quite interesting. If we look at the corresponding order events, we see that this pattern is increasing not only in price, but also in volume. The aligned cancelled volume chart shows the actual amounts.

quote stuffing
quote stuffing
quote stuffing

The remaining visualisations show various other formations.

quote stuffing
quote stuffing
quote stuffing
quote stuffing
quote stuffing
EOF

Memories of Kurt Gödel


[This memoir essay appeared in the magazine Science 82 in April 1982, and in my 1982 book Infinity and the Mind. It’s based on the “Conversations With Godel” documented in my previous post. Some of the photos are from a recent trip through the West, others from the 1970s.]


[This is a cropped version of a photo by Arnold Newman.]

I didn’t know where his real office door was, so I went around to knock on the outside door instead. This was a glass patio door, looking out on a little pond and the peaceful woods beyond the Institute for Advanced Study. It was a sunny March day, but the office was quite dark. I couldn’t see in. Did Kurt Gödel really want to see me?

Suddenly he was there, floating up before the long glass door like some fantastic deep-sea fish in a pressurized aquarium. He let me in, and I took a seat by his desk.

Kurt Gödel was unquestionably the greatest logician of the century. He may also have been one of our greatest philosophers. When he died in 1978, one of the speakers at his memorial service made a provocative comparison of Gödel with Einstein … and with Kafka.

Like Einstein, Gödel was German-speaking and sought a haven from the events of the Second World War in Princeton. And like Einstein, Gödel developed a structure of exact thought that forces everyone, scientist and layman alike, to look at the world in a new way.


[Alley in Elko, Nevada, 2012]

The Kafkaesque aspect of Gödel’s work and character is expressed in his famous Incompleteness Theorem of 1930. Although this theorem can be stated and proved in a rigorously mathematical way, what it seems to say is that rational thought can never penetrate to the final, ultimate truth. A bit more precisely, the Incompleteness Theorem shows that human beings can never formulate a correct and complete description of the set of natural numbers, {0, 1, 2, 3, . . .}. But if mathematicians cannot ever fully understand something as simple as number theory, then it is certainly too much to expect that science will ever expose any ultimate secret of the universe.

Scientists are thus left in a position somewhat like K. in The Castle. Endlessly we hurry up and down corridors, meeting people, knocking on doors, conducting our investigations. But the ultimate success will never be ours. Nowhere in the castle of science is there a final exit to absolute truth.

This seems terribly depressing. But, paradoxically, to understand Gödel’s proof is to find a sort of liberation. For many logic students, the final breakthrough to full understanding of the Incompleteness Theorem is practically a conversion experience. This is partly a by-product of the potent mystique Gödel’s name carries. But, more profoundly, to understand the essentially labyrinthine nature of the castle is, somehow, to be free of it.

Gödel certainly impressed me as a man who had freed himself from the mundane struggle. I visited him in his Institute office three times in 1972, and if there is one single thing I remember most, it is his laughter.

His voice had a high, singsong quality. He frequently raised his voice toward the ends of his sentences, giving his utterances a quality of questioning incredulity. Often he would let his voice trail off into an amused hum. And, above all, there were his bursts of complexly rhythmic laughter.

The conversation and laughter of Gödel were almost hypnotic. Listening to him, I would be filled with the feeling of perfect understanding. He, for his part, was able to follow any of my chains of reasoning to its end almost as soon as I had begun it. What with his strangely informative laughter and his practically instantaneous grasp of what I was saying, a conversation with Gödel felt very much like direct telepathic communication.


[Rudy with Roger Shatzkin near New Brunswick, New Jersey, 1973.]

The first time I visited Gödel it was at his invitation. I was at Rutgers University, writing my doctoral thesis in logic and set theory. I was particularly interested in Cantor’s Continuum Problem. One of Gödel’s unpublished manuscripts on this problem was making the rounds, and I was able to get hold of a Xerox of a Xerox of a Xerox.

I deciphered the faint squiggles and thought about the ideas for several months, finally giving a talk on the manuscript at Rutgers. I had a number of questions about the proof Gödel had sketched and wrote him a letter about them.

He probably would not have answered—Gödel almost never answered letters. But I happened to be attending a weekly seminar at the Institute with Gaisi Takeuti, an eminent logician who was there for a year’s research. Gödel knew this, and one day while I was at the seminar in Takeuti’s office, he phoned up and asked that I come see him.

Gödel’s office was dim and unlit. There was comfortable carpeting and furniture. On the empty desk sat an empty glass of milk. Gödel was quite short, but his presence was such that visitors sometimes left with the impression that he was very tall.


[Folding chair at a tourist cabin in Lee Vining, California, 2012.]

When I saw him he was dressed as in all his pictures, with a suit over a warm vest and necktie. He is known to have worried a great deal about his health and was always careful to keep himself well bundled-up. Indeed, in the winter, one would sometimes see him leaving the Institute with a scarf wrapped around his head.

He encouraged me to ask questions, and, feeling like Aladdin in the treasure cave, I asked him as many as I could think of. His mind was unbelievably fast and experienced. It seemed that, over the years, he had already thought every possible philosophical problem through to the very end.

Despite his vast knowledge, he still could discuss ideas with the zest and openness of a young man. If I happened to say something particularly stupid or naive, his response was not mockery, but rather an amused astonishment that anyone could think such a thing. It was as if during his years of isolated thought he had forgotten that the rest of the human race was not advancing along with him.

The question of why Gödel chose to live most of his life in splendid isolation is a difficult one. Although he was not Jewish, the Second World War forced him to flee Europe, and this may have soured him somewhat on humanity. Yet, he loved life in America, the comfortable position at the Institute, the chance to meet Einstein, the great social freedom. But he spent his later years in an ever-deepening silence.


[Rudy studying P.J.Cohen’s book on set theory in Highland Park, NJ, 1970, with photo of Wm. Burroughs in background.]

The first time I saw Gödel, he invited me; the second two times, I invited myself. This was not easy. I wrote him several times, insisting that we should meet again to talk. Finally I phoned him to say this again.

“Talk about what?” Gödel said warily. When I finally got to his office for my second visit, he looked up at me with an expression of real dislike. But annoyance gave way to interest, and, after I’d asked a few questions, the conversation turned as friendly and spirited as the first. Still, toward the end of a conversation, when he was tired, Gödel would sometimes look at a visitor with an eerie mixture of fear and suspicion, as if to say, What is this stranger doing in my retreat?

Gödel was, first and foremost, a great thinker. The essence of the man is not to be found in his physical description, but rather in his ideas. I would like to describe now some of our discussions on mathematics, physics, and philosophy.

One of Gödel’s less well-known papers is a 1949 article called “A Remark on the Relationship Between Relativity Theory and Idealistic Philosophy.” In this paper, probably influenced by his conversations with Einstein as well as by his interest in Kant, Gödel attempts to show that the passage of time is an illusion. The past, present and future of the universe are just different regions of a single vast spacetime. Time is part of space-time, but space-time is a higher reality existing outside of time.


[Windmills in the desert, Route 6, Nevada or Utah, 2012.]

In order to destroy the time-bound notion of the universe as a series of evanescent frames on some cosmic movie screen, Gödel actually constructed a mathematical description of a possible universe in which one can travel back through time. His motivation was that if one can conceive of time-travelling to last year, then one is pretty well forced to admit the existence of something besides the immediate present.
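For readers who want the mathematics: Gödel’s rotating universe is usually quoted with the line element

    $ds^2 = a^2\left(dt^2 - dx^2 + \tfrac{1}{2}e^{2x}\,dy^2 - dz^2 + 2e^{x}\,dt\,dy\right)$

where $a$ is a constant fixed by the density of the rotating dust that fills the model. Sufficiently large circles around the rotation axis turn out to be closed timelike curves, which is precisely the possibility of travelling into one’s own past. (I am quoting the form standard in the literature, not Gödel’s original notation, so treat it as a sketch rather than a citation.)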

I was disturbed by the traditional paradoxes inherent in time travel. What if I were to travel back in time and kill my past self? If my past self died, then there would be no I to travel back in time, so I wouldn’t kill my past self after all. So then the time-trip would take place, and I would kill my past self. And so on. I was also disturbed by the fact that if the future is already there, then there is some sense in which our free will is an illusion.

Gödel seemed to believe that not only is the future already there, but worse, that it is, in principle, possible to predict completely the actions of some given person.

I objected that if there were a completely accurate theory predicting my actions, then I could prove the theory false—by learning the theory and then doing the opposite of what it predicted. According to my notes, Gödel’s response went as follows: “It should be possible to form a complete theory of human behavior, i.e., to predict from the hereditary and environmental givens what a person will do. However, if a mischievous person learns of this theory, he can act in a way so as to negate it. Hence I conclude that such a theory exists, but that no mischievous person will learn of it. In the same way, time-travel is possible, but no person will ever manage to kill his past self.” Gödel laughed his laugh then, and concluded, “The a priori is greatly neglected. Logic is very powerful.”


[Wall brace in San Juan Bautista, CA, 2012.]

Apropos of the free will question, on another occasion he said:

“There is no contradiction between free will and knowing in advance precisely what one will do. If one knows oneself completely then this is the situation. One does not deliberately do the opposite of what one wants.”

As well as questions, I also brought in for Gödel’s enjoyment some offbeat theories of physics I had come up with recently. I was quite satisfied when, after hearing one of my half-baked theories, he shook his head and said, “This is a very strange idea. A bizarre idea.”

There is one idea truly central to Gödel’s thought that we discussed at some length. This is the philosophy underlying Gödel’s credo, “I do objective mathematics.” By this, Gödel meant that mathematical entities exist independently of the activities of mathematicians, in much the same way that the stars would be there even if there were no astronomers to look at them. For Gödel, mathematics, even the mathematics of the infinite, was an essentially empirical science.


[Road work near eastern part of Route 50, Nevada, 2012.]

According to this standpoint, which mathematicians call Platonism, we do not create the mental objects we talk about. Instead, we find them, on some higher plane that the mind sees into, by a process not unlike sense perception.

The philosophy of mathematics antithetical to Platonism is formalism, allied to positivism. According to formalism, mathematics is really just an elaborate set of rules for manipulating symbols. By applying the rules to certain “axiomatic” strings of symbols, mathematicians go about “proving” certain other strings of symbols to be “theorems.”

The game of mathematics is, for some obscure reason, a useful game. Some strings of symbols seem to reflect certain patterns of the physical world. Not only is “2 + 2 = 4” a theorem, but two apples taken with two more apples make four apples.

It is when one begins talking about infinite numbers that the trouble really begins. Cantor’s Continuum Problem is undecidable on the basis of our present-day theories of mathematics. For the formalists this means that the continuum question has no definite answer. But for a Platonist like Gödel, it means only that we have not yet “looked” at the continuum hard enough to see what the answer is.
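Stated symbolically, Cantor’s hypothesis about the continuum is the equation

    $2^{\aleph_0} = \aleph_1$

that is, the claim that the cardinality of the set of real numbers is the very next infinity after that of the natural numbers. “Undecidable” has a precise meaning here: Gödel showed in 1938 that this equation cannot be disproved from the standard Zermelo-Fraenkel axioms of set theory with Choice, and Paul Cohen showed in 1963 that it cannot be proved from them either.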


[Rudy in Lynchburg, VA, 1980, used as author photo for first edition of Infinity and the Mind.]

In one of our conversations I pressed Gödel to explain what he meant by the “other relation to reality” by which he said one could directly see mathematical objects. He made the point that the same possibilities of thought are open to everyone, so that we can take the world of possible forms as objective and absolute. Possibility is observer-independent, and therefore real, because it is not subject to our will.

There is a hidden analogy here. Everyone believes that the Empire State Building is real, because it is possible for almost anyone to go and see it for himself. By the same token, anyone who takes the trouble to learn some mathematics can “see” the set of natural numbers for himself. So, Gödel reasoned, it must be that the set of natural numbers has an independent existence, an existence as a certain abstract possibility of thought.

I asked him how best to perceive pure abstract possibility. He said three things. (i) First one must close off the other senses, for instance, by lying down in a quiet place. It is not enough, however, to perform this negative action; one must actively seek with the mind. (ii) It is a mistake to let everyday reality condition possibility, and only to imagine the combinings and permutations of physical objects—the mind is capable of directly perceiving infinite sets. (iii) The ultimate goal of such thought, and of all philosophy, is the perception of the Absolute. Gödel rounded off these comments with a remark on Plato: “When Plato could fully perceive the Good, his philosophy ended.”

Gödel shared with Einstein a certain mystical turn of thought. The word “mystic” is almost pejorative these days. But mysticism does not really have anything to do with incense or encounter groups or demoniac possession. There is a difference between mysticism and occultism.

A pure strand of classical mysticism runs from Plato to Plotinus and Eckhart to such great modern thinkers as Aldous Huxley and D. T. Suzuki. The central teaching of mysticism is this: Reality is One. The practice of mysticism consists in finding ways to experience this higher unity directly.

The One has variously been called the Good, God, the Cosmos, the Mind, the Void, or (perhaps most neutrally) the Absolute. No door in the labyrinthine castle of science opens directly onto the Absolute. But if one understands the maze well enough, it is possible to jump out of the system and experience the Absolute for oneself.


[In Grant park near San Jose, CA, 2012.]

The last time I spoke with Kurt Gödel was on the telephone, in March 1977. I had been studying the problem of whether machines can think, and I had become interested in the distinction between a system’s behavior and the underlying mind or consciousness, if any.

What had struck me was that if a machine could mimic all of our behavior, both internal and external, then it would seem that there is nothing left to be added. Body and brain fall under the heading of hardware. Habits, knowledge, self-image and the like can all be classed as software. All that is necessary for the resulting system to be alive is that it actually exist.

In short, I had begun to think that consciousness is really nothing more than simple existence. By way of leading up to this, I asked Gödel if he believed there is a single Mind behind all the various appearances and activities of the world.

He replied that, yes, the Mind is the thing that is structured, but that the Mind exists independently of its individual properties.

I then asked if he believed that the Mind is everywhere, as opposed to being localized in the brains of people.

Gödel replied, “Of course. This is the basic mystic teaching.”

We talked a little set theory, and then I asked him my last question: “What causes the illusion of the passage of time?”

Gödel spoke not directly to this question, but to the question of what my question meant—that is, why anyone would even believe that there is a perceived passage of time at all.

He went on to relate the getting rid of belief in the passage of time to the struggle to experience the One Mind of mysticism. Finally he said this: “The illusion of the passage of time arises from the confusing of the given with the real. Passage of time arises because we think of occupying different realities. In fact, we occupy only different givens. There is only one reality.”


[Hidalgo cemetery in Almaden Quicksilver Park near San Jose, CA, 2012.]

I wanted to visit Gödel again, but he told me that he was too ill. In the middle of January 1978, I dreamed I was at his bedside.

There was a chessboard on the covers in front of him. Gödel reached his hand out and knocked the board over, tipping the men onto the floor. The chessboard expanded to an infinite mathematical plane. And then that, too, vanished. There was a brief play of symbols, and then emptiness—an emptiness flooded with even white light.

The next day I learned that Kurt Gödel was dead.

The Side Project Marketing Checklist


Pre-Launch

Market Research

If we knew what we were doing it wouldn’t be called research. - Albert Einstein

Competitive Landscape

Customer Research

PR Preparations

  • [ ] Create list of tech, startup, and industry blogs.
  • [ ] Create list of local small business journals (e.g. Crain’s Chicago).
  • [ ] Create list of local bloggers and journalists in your industry.
  • [ ] Create a “Media Kit” page (check out this example).

Landing Page

  • [ ] Come up with a name and domain name.
  • [ ] Write a site tagline and elevator pitch.
  • [ ] Create a logo.
  • [ ] Set up a landing page.

    Landing page tools
  • [ ] Create “About” and “Contact” pages.
  • [ ] Create pricing page:

    Pricing ideas
    • [ ] Create a free or trial tier for your paid product.
    • [ ] Offer a 100% satisfaction/money-back guarantee.
    • [ ] Make product invite-only to start.
    • [ ] Offer free/discounted access for early adopters/beta testers.
  • [ ] Add social media follow links to landing page.
  • [ ] Set up analytics to learn about who signs up, bounces, etc.

    Analytics platforms

Email Setup

Blog Setup

Content Marketing is all the marketing that’s left. - Seth Godin

  • [ ] Choose a blogging platform:

    Blogging platforms
  • [ ] Research keywords that you’d like your site/blog to rank for.
  • [ ] Create anchor posts or pages for keywords you’d like to rank for.
  • [ ] Have a blog post brainstorming session:

    Blog post ideas
    • Little known features of your product.
    • Highlight use cases for your product.
    • Highlight customers who are using your product.
    • Interview industry specialists.
    • List of popular sites in your industry (be sure to notify them after you publish it!)
    • See these lists of ideas from Buffer and Hubspot.
  • [ ] Add email signup form or link to all blog posts.
  • [ ] Add social media follow links to all blog posts.

Social Media Setup

Post-Launch

Customer Outreach

You should be talking to a small number of users who are seriously interested in what you’re making, not a broad audience who are on the whole indifferent. - Jessica Livingston, Founding Partner at Y Combinator

Free Promotional Channels

I don’t care how much money you have, free stuff is always a good thing. - Queen Latifah

  • [ ] Post your product on directories and review sites (Matt McCaffrey has compiled a great list on GitHub).
  • [ ] Write and distribute a Press Release.
  • [ ] Write and distribute an eBook, exchange it for email signup.
  • [ ] Write and distribute a white paper, exchange it for email signup.
  • [ ] Give free access to influential bloggers in the industry.
  • [ ] Build a “best of” page with your best blog posts that you wrote or contributed to other sites (ProBlogger calls this a “Sneeze Page”).
  • [ ] Make sure all blog posts have high quality images.

    Places to get free stock images
  • [ ] Create an online course or guide around your product/industry.

    Free learning management systems
    External course-creation sites

Paid Promotional Channels

Many people take no care of their money till they come nearly to the end of it, and others do just the same with their time. - Johann Wolfgang von Goethe

  • [ ] Paid social and search advertising

    Social and search advertising platforms
  • [ ] Commission-based advertising

    Commission/affiliate advertising platforms
  • [ ] Sponsor a local meetup or conference for your target customers.
  • [ ] Sponsor podcasts your customers might be listening to.
  • [ ] Sponsor/advertise an industry newsletter (check out Newsletter.city).
  • [ ] Set up a user referral marketing system.

    Referral marketing platforms
  • [ ] Run an engagement contest with prizes or free products for winners.
  • [ ] Buy email or lead lists.

Recurring

Blogging

Blogging is like work, but without coworkers thwarting you at every turn. - Scott Adams

  • [ ] Build/update publishing calendar for your blog.
  • [ ] Regularly post blog posts on your blog(s).
  • [ ] Solicit guest posts from early customers and fans of your product.
  • [ ] Repurpose existing blog posts:

    Repurposing blog posts
    • [ ] Record/post video of you reading the post on YouTube.
    • [ ] Turn posts into a podcast.
    • [ ] Create an infographic based on the post.
    • [ ] Create a Slideshare or Prezi of your post.
  • [ ] Promote your blog content:

    Blog promotion techniques
    • [ ] Send post to your email list.
    • [ ] Promote on your social media.
    • [ ] Email friends and relatives, ask them to share if relevant.
    • [ ] Send to other bloggers for feedback, ask to share if they like it.
    • [ ] Add your latest blog post or landing page to your email signature.

Email

Email is the Jason Bourne of online: somebody’s always trying to kill it. It can’t be done. - Unknown

  • [ ] Send a regular email newsletter with blog posts, use cases, customer stories, etc.
  • [ ] Promote email list on social media.
  • [ ] Send 20 cold emails per week to connect with early customers and get direct feedback.
  • [ ] Send new users a personal email introducing yourself.

Social Media

We have technology, finally, that for the first time in human history allows people to really maintain rich connections with much larger numbers of people. - Pierre Omidyar

Public Relations

The art of publicity is a black art; but it has come to stay, and every year adds to its potency. - Thomas Paine

External Sites

Optimizations

  • [ ] Run a customer poll (can also generate content for your blog or social media channels).
  • [ ] Create another side project to promote your product (read more).
  • [ ] A/B test your landing/payment pages (check out Optimizely; a minimal variant-assignment sketch appears after this list).
  • [ ] A/B test email newsletters and promotions.
  • [ ] Implement Twitter cards on your blog posts (see the markup sketch after this list).
  • [ ] Implement rich snippets in Google search results (covered in the same sketch).
  • [ ] Collect and show testimonials from your happy users.
  • [ ] Analyze user signup flow (check out the teardowns here).
  • [ ] Test your website on multiple platforms, make sure speed is good.
  • [ ] Use Website Grader to pinpoint website improvements.
  • [ ] Create and track weekly traffic and growth goals.
  • [ ] Time social media posts and email newsletters to when your audience is most likely to respond.
  • [ ] Make sure each page on your site has a clear call-to-action.
  • [ ] Implement live chat to capture leads and allow them to ask questions (Intercom seems to be the most popular).
  • [ ] Audit and improve your conversion rate (see this checklist for detailed steps you can take).
  • [ ] Set up automatic analytics reports to be emailed to you each week.
  • [ ] Experiment with various signup form locations, colors, and sizes.
  • [ ] Add “Exit Intent” popup to your blog/site.
  • [ ] Create an FAQs page.
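For the A/B-testing items above, the piece most tools keep hidden is how a visitor gets assigned to a variant. Below is a minimal Python sketch of deterministic assignment; the function name, the experiment label, and the visitor ids are all hypothetical, and a real test would also log the assignment into your analytics platform:

    import hashlib

    def ab_variant(visitor_id: str, experiment: str = "landing-page") -> str:
        """Deterministically assign a visitor to variant A or B."""
        # Hash the experiment name together with the visitor id so that
        # each experiment splits the audience independently, and so a
        # returning visitor always lands in the same bucket.
        digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).digest()
        return "A" if digest[0] < 128 else "B"

    # A returning visitor keeps the same variant across sessions.
    assert ab_variant("visitor-42") == ab_variant("visitor-42")
    print(ab_variant("visitor-42"))

The point of hashing rather than randomizing is that no database of who-saw-what is needed: the bucket is recomputable from the visitor id alone.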
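And for the Twitter-card and rich-snippet items, here is a minimal sketch of the markup both features expect, generated in Python for a single blog post. The render_head helper and the post fields are hypothetical; the meta tag names and the schema.org Article type are the standard ones, though real code should also HTML-escape the field values:

    import json

    def render_head(title: str, description: str, url: str, image: str) -> str:
        """Return <head> markup for one post: Twitter card plus JSON-LD."""
        # Twitter card: 'summary_large_image' shows a large preview
        # when the post is shared.
        twitter = "\n".join([
            '<meta name="twitter:card" content="summary_large_image">',
            f'<meta name="twitter:title" content="{title}">',
            f'<meta name="twitter:description" content="{description}">',
            f'<meta name="twitter:image" content="{image}">',
        ])
        # Rich snippet: a schema.org Article expressed as JSON-LD,
        # which search engines read to decorate results.
        article = {
            "@context": "https://schema.org",
            "@type": "Article",
            "headline": title,
            "description": description,
            "url": url,
            "image": image,
        }
        jsonld = ('<script type="application/ld+json">'
                  + json.dumps(article)
                  + '</script>')
        return twitter + "\n" + jsonld

    print(render_head(
        "Little-known features of our product",      # hypothetical post
        "Five features most users never find.",
        "https://example.com/blog/little-known-features",
        "https://example.com/img/features.png",
    ))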

We need to document macOS


I spend an inordinate amount of time searching for information about macOS. Whether I am researching the answers for my section in MacFormat magazine, or trying to solve my own problems here, I am also daily reminded of Apple’s wholesale failure to provide consistent and complete documentation of its flagship product.

This never used to be the case. In the days of classic Mac OS, when print publishing was growing as a result of Macs, Apple published an exemplary series of books under the banner Inside Macintosh, some volumes of which are still in a pile under my desk. It did so despite Mac OS undergoing quite rapid evolution: System 7.0 in 1991, 8.0 six years later in 1997, and 9.0 just two years later in 1999.

I don’t know how many technical authors were employed by Apple at that time, but I suspect it was a far higher proportion of total staff than it is now.

Since then, Mac OS X, then OS X, now macOS have become immeasurably more expansive and complex, many more people use Macs, and they need much more support. Yet Apple’s documentation for users and those supporting users has all but dried up. Even when you look through guides for developers, few have been substantially revised for several years, and many are completely out of kilter with El Capitan or Sierra.

It’s very easy to blame users for not making best use of the information that is available, or to recommend that everyone contact Apple support. But I read, and answer, a relentless stream of questions from users who – for example – did not realise that they had to recharge their Magic Trackpad 2, never knew how to launch an app from the Dock, or couldn’t manage their Photos library.

Apple changes things without explaining the new system properly, and without providing us with suitable new tools. One of the best, and most shocking, examples of this was the introduction of the new unified log in Sierra. Almost a year later, Apple has not provided us with an accessible utility which can browse past entries in the new log, nor any guide at all as to how to use or interpret new log entries. Almost the only useful information available is in the man page for the log command-line tool.
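In the meantime, that tool can at least be scripted to browse past entries. A minimal sketch, assuming a Mac running Sierra or later; the Time Machine subsystem in the predicate is only an illustration, and the flags used here are the ones documented in that man page:

    import subprocess

    # 'log show' reads *past* entries from the unified log
    # ('log stream' is the live equivalent).
    result = subprocess.run(
        [
            "/usr/bin/log", "show",
            "--last", "1h",          # how far back to look
            "--style", "syslog",     # human-readable output
            "--predicate", 'subsystem == "com.apple.TimeMachine"',
        ],
        capture_output=True,
        text=True,
        check=True,
    )

    print(result.stdout)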

Much of the information available is outdated and now incorrect. There was a time when launchd – second only in importance to the kernel – controlled and orchestrated most of the services supplied in macOS. Now a large proportion of them are scheduled and dispatched by two systems, DAS and CTS, which are barely even mentioned in Apple’s latest developer documentation – a busy part of macOS which doesn’t even appear to have a proper name.

I’m afraid that there is no glimmer of hope that Apple intends to change this situation. Indeed, with the next major release of macOS making its way through its last beta releases, there is a good chance that things will soon get even worse, when we have to contend with the foibles of APFS and many other changes.

From where I’m sitting, there is only one way that we are going to get better documentation: to create it ourselves.

So far, open and collaborative projects have developed a great deal of valuable software. They have attracted a great many developers, some of them extremely talented, and have been a real success. But open-sourcing and collaboratively developing software is only part of what computer users need. Many of the best open source projects are proud to offer user documentation which is an order of magnitude better than anything provided by Apple for macOS.

What Mac users need most is an open and collaborative project to document macOS. I’m happy to help make this happen, and to donate to it relevant material from this blog in a joint effort to take us all from the dark back into the light.

When Dinosaurs Ruled the Earth, Mammals Took to the Skies


When paleontologists looked at the size and shape of these fossils, they found that many did not fit the simple picture of early mammals as tiny insect-eaters. To the researchers’ surprise, a number of extinct species independently evolved bodies resembling those of living mammals.

Some swam like otters, for example. Others scavenged, like raccoons, or dug into insect nests like today’s aardvarks.

In 2007, Jin Meng, a paleontologist at the American Museum of Natural History, and his colleagues reported finding the fossil of a 160-million-year-old mammal, called Volaticotherium, that looked as if it could glide.

Today, placental mammals like flying squirrels and marsupials like sugar gliders travel through the air from tree to tree. But Volaticotherium belonged to a different lineage and independently evolved the ability to glide.

They were not the only mammals to do so, it turns out. Dr. Luo and his colleagues have now discovered at least two other species of gliding mammals from China, which they described in the journal Nature.

The fossils of the new species, Maiopatagium and Vilevolodon, are exquisitely preserved, revealing many details of their anatomy.

Winglike sheets of skin stretched from their cheeks to their legs and tails. They also had remarkably flexible shoulders needed to climb up trees and then maneuver through the air during a glide.

Dr. Luo and his colleagues discovered that the two new species are even more distantly related to living gliders than Volaticotherium. They belong to an extinct lineage called haramiyidans, which diverged from the ancestors of all living mammals over 200 million years ago.

As a result, they had only some of the traits that define mammals today.

While they had fur and were warm-blooded like living mammals, they were more like reptiles in some respects. They had not yet evolved the tiny chain of bones that allow living mammals to hear, for example.

The newly discovered fossils demonstrate unusual adaptations. To support gliding muscles, the animals’ collarbones joined in a V shape — “like the wishbone in a chicken,” said Dr. Guillermo W. Rougier, a paleontologist at the University of Louisville who was not involved in the new studies.

[Photo: the repeated evolution of gliding in early mammals, which arose in forests very different from today’s, before flowering trees or fruit existed. Credit: April I. Neander/University of Chicago]

Dr. Meng said that the growing number of fossil gliders showed that many different kinds of mammals followed the same evolutionary path. “They did their own experiments,” he said.

There must have been some benefit that drove the repeated evolution of gliding, Dr. Luo said. Some tree-dwelling mammals only eat food from certain species, for example.

Running on the ground might have put them at risk of getting killed by a predator. Soaring may have kept them out of harm’s way.

What makes this repeated evolution all the more striking is that the earliest mammal gliders evolved in forests very different from those found on Earth today.

Flowering trees did not yet exist, so there was no fruit to eat. Instead, the earliest mammal gliders may have leapt from tree to tree to feed on the cones of conifer trees or the soft parts of giant ferns.

The new fossils demonstrate just how many surprises early mammals have left to deliver, Dr. Rougier said. “I expect we’re going to keep finding more strange things.”

Correction: August 11, 2017

An article on Thursday about new fossil discoveries that show that prehistoric “squirrels” glided through forests at least 160 million years ago referred imprecisely to what is shared between placental mammals and their mothers. While nutrients and oxygen from the mother’s blood are transferred to fetal blood through the placenta, the growing mammals do not draw blood from their mothers.
