Channel: Hacker News

HyperCard On The Archive


On August 11, 1987, Bill Atkinson announced a new product from Apple for the Macintosh: a multimedia, easily programmed system called HyperCard. HyperCard brought together, in one sharp package, the ability for a Macintosh to create interactive documents with calculation, sound, music, and graphics. It was a popular package, and thousands of HyperCard “stacks” were created with the software.

Additionally, commercial products with HyperCard at their heart came to great prominence, including the original Myst program.

HyperCard flourished for roughly the next ten years, then slowly fell by the wayside as the World Wide Web grew, and was officially discontinued as a product by Apple in 2004. It left behind a massive but quickly disappearing legacy of creative works that became harder and harder to experience.

To celebrate the 30th anniversary of HyperCard, we’re bringing it back.

After our addition of in-browser early Macintosh emulation earlier this year, the Internet Archive now has a large collection of emulated HyperCard stacks available for perusal, and we encourage you to upload your own, easily and quickly.

If you have HyperCard stacks in .sit, .bin.hqx, and other formats, visit this contribution site to have your stack added quickly and easily to the Archive: http://hypercardonline.tk

This site, maintained by volunteer Andrew Ferguson, will do a mostly-automatic addition of your stack into the Archive, including adding your description and creating an automatic screenshot. Your cards shall live again!

Along with access to the original HyperCard software in the browser, the Archive’s goal of “Access to ALL Knowledge” means there are many other items related to the HyperCard programs themselves, and depending on how far you want to dig, there’s a lot to discover.

There are entire books written about HyperCard, of course – for example, The Complete HyperCard Handbook (1988) and the HyperCard Developers’ Guide (1988), which walk through the context and goals of HyperCard, and then the efforts to program in it.

If you prefer to watch videos about HyperCard, the Archive has you covered as well. Here’s an entire episode about HyperCard. As the description indicates: “Guests include Apple Fellow and HyperCard creator Bill Atkinson, HyperCard senior engineer Dan Winkler, author of “The Complete HyperCard Handbook” Danny Goodman, and Robert Stein, Publisher of Voyager Company. Demonstrations include HyperCard 1.0, Complete Car Cost Guide, Focal Point, Laserstacks, and National Gallery of Art.”

Our goal of bringing historic software back as a living part of the landscape continues, so feel free to dig in, bring your stacks to life, and enjoy the often-forgotten stacks of yore.


UX brutalism


Originating in post-World War II Europe, Brutalism in design and architecture was marked by raw, unpolished aesthetics that made it easier and cheaper to rebuild cities and venues. It can also be seen as a reaction by a younger generation to the lightness, optimism, and frivolity of some 1930s and 1940s design.

Over the last few years, we have started to see the same design trend applied to web design: harsh, stripped-down designs with no frills, a reaction to the cleanliness and polish of design systems.

Update Git software now (and svn and mercurial)

List:       git
Subject:    [ANNOUNCE] Git v2.14.1, v2.13.5, and others
From:       Junio C Hamano <gitster@pobox.com>
Date:       2017-08-10 18:00:04
Message-ID: xmqqh8xf482j.fsf@gitster.mtv.corp.google.com

The latest maintenance release Git v2.14.1 is now available at the
usual places, together with releases for the older maintenance
tracks for the same issue: v2.7.6, v2.8.6, v2.9.5, v2.10.4, v2.11.3,
v2.12.4, and v2.13.5.

These contain a security fix for CVE-2017-1000117, and are released
in coordination with Subversion and Mercurial that share a similar
issue.  CVE-2017-9800 and CVE-2017-1000116 are assigned to these
systems, respectively, for issues similar to it that are now
addressed in their part of this coordinated release.

The tarballs are found at:

    https://www.kernel.org/pub/software/scm/git/

The following public repositories all have a copy of these tags:

  url = https://kernel.googlesource.com/pub/scm/git/git
  url = git://repo.or.cz/alt-git.git
  url = https://github.com/gitster/git

A malicious third-party can give a crafted "ssh://..." URL to an
unsuspecting victim, and an attempt to visit the URL can result in
any program that exists on the victim's machine being executed.
Such a URL could be placed in the .gitmodules file of a malicious
project, and an unsuspecting victim could be tricked into running
"git clone --recurse-submodules" to trigger the vulnerability.

Credits to find and fix the issue go to Brian Neel at GitLab, Joern
Schneeweisz of Recurity Labs and Jeff King at GitHub.

 * A "ssh://..." URL can result in a "ssh" command line with a
   hostname that begins with a dash "-", which would cause the "ssh"
   command to instead (mis)treat it as an option.  This is now
   prevented by forbidding such a hostname (which should not impact
   any real-world usage).

 * Similarly, when GIT_PROXY_COMMAND is configured, the command is
   run with host and port that are parsed out from "ssh://..." URL;
   a poorly written GIT_PROXY_COMMAND could be tricked into treating
   a string that begins with a dash "-" as an option.  This is now
   prevented by forbidding such a hostname and port number (again,
   which should not impact any real-world usage).

 * In the same spirit, a repository name that begins with a dash "-"
   is also forbidden now.

How I, a woman in tech, benefited from sexism in Silicon Valley


I know you’re tired of hearing about that Google manifesto. Me too. I’ve read the memo. I’ve read Yonatan Zunger’s angry response and profoundly disagree with his opinion that the act of publishing it was “incredibly stupid and harmful”. I’ve read Megan McArdle’s essay and relate to her feeling of isolation, as I’m often the only woman in an environment dominated by men. I’ve watched Damore’s interview and was struck by his sincerity and calmness even when the reporter was pushing him into a corner.

Throughout the entire process, I was deeply conflicted. As a woman in tech, I understand precisely what everyone is talking about. However, as a woman in tech who has experienced sexism to the point of accepting it as background noise, I have the feeling that we’ve been only addressing one side of the story. It’s the side where women are victims. I’m here to tell the story of how I, as a woman in tech, benefited from sexism and that men can be victims too.

In February 2015, I applied to be a Section Leader, a position similar to Teaching Assistant, for the series of introductory Computer Science (CS) courses at Stanford University. It was a coveted position, as it’s a huge boost for your resume and many tech companies have exclusive outreach programs for section leaders. I didn’t think I’d be accepted. I had started taking CS classes only a quarter earlier. I was pretty shy and had a very strong accent since I’d only been in the US for a few months. I applied together with a few friends, all male, mostly smarter, and much more confident than I was. But I was accepted, while all of my friends weren’t. I just thought that it was because I prepped a lot for the interviews and my preparation paid off.

As I got more involved in the section leading community, I started noticing a pattern. My female friends, like me, seemed to have better luck getting into section leading. Still, I didn’t think much of it, until I volunteered to help out with the interviews myself.

There were three interviewers that day, and I was the only female. The first candidate, a male, wasn’t very good, so we all agreed that it was a reject. The second candidate, also a male, was pretty good but kind of nervous, so we agreed that he’d make a good section leader with more training, so “weak hire”. The third candidate, a female, didn’t seem to know the problem she was teaching very well and even offered a wrong explanation. When we discussed her case, we all agreed on her shortcomings. But when we put down the scores, I noticed that my fellow interviewers gave her pretty good scores with the recommendation “weak hire”.

“Wait, do you guys think her performance is comparable to that of the previous guy?”

“No, I think the previous guy was better,” they both said.

“But you gave her the same scores you gave the other guy.”

“Oh, did we?”

They checked back their scores for the previous guy and, after some deliberation, lowered the scores they gave the female candidate.

I’m not suggesting that my fellow interviewers were sexist. They weren’t even aware of their inconsistent judgment until I pointed it out. But this incident made me think that, subconsciously, people are using another scale to judge women’s ability in tech. Nobody says it out loud because we all know that’s no longer socially acceptable, but I couldn’t help hearing it in my head: “She’s good for a woman, so even though she doesn’t do as well as that guy, she still gets the same scores because she’s in the women’s league.” I can’t help thinking that I wouldn’t have been accepted into section leading if I weren’t a girl. And I feel bad for my guy friends who would have made excellent section leaders if they were girls.

That wasn’t the only occasion on which I felt like my gender might have something to do with my opportunities in tech. I’ve had a guy ask me to join his startup because it looks good to have a female co-founder. I’ve had the feeling that a company wanted to hire me just because they wanted to diversify their office, like one day the all-dude team would look around and say: “What’s up with all this testosterone? We need a girl in here.” In school, I kept having guys ask me to join their projects because they thought I was cute. They didn’t say it at the time – that would have been stupid – but after we’d spent some time together they asked me out.

Sexism isn’t making it harder for women to enter tech. From my personal experience, sexism makes it even easier for women to enter tech, though I understand that my experiences don’t generalize to those of other women. It’s possible that women are always judged differently. Sometimes it’s positive – people judge women on a more generous scale. Sometimes it’s negative – as people have discussed at length. We need to be careful when encouraging affirmative action in tech to ensure it doesn’t reinforce the philosophy of treating women differently. Lowering your hiring standards for women can give people like me the lingering self-doubt that maybe I wasn’t good enough. Worse, it gives many techbros reason to believe that their female colleagues aren’t as good as they are, and to act accordingly.

What we need to do is make it easier for women to stay in tech. Even though I’m already in tech and love what I do, I sometimes have the feeling Megan McArdle had, the feeling that maybe I don’t belong here. I don’t like those 48-hour coding hackathons without sleep or showers. They are useless and detrimental to health. I don’t like techbros’ hangouts, with beer, pool, and the occasional jerking-off joke. I don’t like feeling like a piece of furniture added for the company’s diversity. I don’t like my male teammates thinking that I got to where they are only because I’m female. I don’t like the representation of female engineers on TV, always nerdy, unattractive, and without much of a social life.

I don’t know how we can make tech more fun for women. Maybe it’s a chicken-and-egg problem: we need to make tech friendlier to women to have more women in tech, but we also need more women in tech to make tech friendlier to women. Until we reach that point, there will be different opinions. There will be women upset because of the sexist treatment they receive, but there will also be men who are upset. There will be people with different points of view and different ideas for solutions. We should hear them out instead of shutting them down, the way Google fired James Damore or the Internet collectively shamed him.

The world in which IPv6 was a good design



Last November I went to an IETF meeting for the first time. The IETF is an interesting place; it seems to be about 1/3 maintenance grunt work, 1/3 extending existing stuff, and 1/3 blue sky insanity. I attended mostly because I wanted to see how people would react to TCP BBR, which was being presented there for the first time. (Answer: mostly positively, but with suspicion. It kinda seemed too good to be true.)

Anyway, the IETF meetings contain lots and lots of presentations about IPv6, the thing that was supposed to replace IPv4, which is what the Internet runs on. (Some would say IPv4 is already being replaced; some would say it has already happened.) Along with those presentations about IPv6, there were lots of people who think it's great, the greatest thing ever, and they're pretty sure it will finally catch on Any Day Now, and IPv4 is just a giant pile of hacks that really needs to die so that the Internet can be elegant again.

I thought this would be a great chance to really try to figure out what was going on. Why is IPv6 such a complicated mess compared to IPv4? Wouldn't it be better if it had just been IPv4 with more address bits? But it's not, oh goodness, is it ever not. So I started asking around. Here's what I found.

Buses ruined everything

Once upon a time, there was the telephone network, which used physical circuit switching. Essentially, that meant moving connectors around so that your phone connection was literally just a very long wire ("OSI layer 1"). A "leased line" was literally a very long wire that you leased from the phone company. You would put bits in one end of the wire, and they'd come out the other end, a fixed amount of time later. You didn't need addresses because there was exactly one machine at each end.

Eventually the phone company optimized that a bit. Time-division multiplexing (TDM) and "virtual circuit switching" was born. The phone company could transparently take the bits at a slower bit rate from multiple lines, group them together with multiplexers and demultiplexers, and let them pass through the middle of the phone system using fewer wires than before. Making that work was a little complicated, but as far as we modem users were concerned, you still put bits in one end and they came out the other end. No addresses needed.

The Internet (not called the Internet at the time) was built on top of this circuit switching concept. You had a bunch of wires that you could put bits into and have them come out the other side. If one computer had two or three interfaces, then it could, if given the right instructions, forward bits from one line to another, and you could do something a lot more efficient than a separate line between each pair of computers. And so IP addresses ("layer 3"), subnets, and routing were born. Even then, with these point-to-point links, you didn't need MAC addresses, because once a packet went into the wire, there was only one place it could come out. You used IP addresses to decide where it should go after that.

Meanwhile, LANs got invented as an alternative. If you wanted to connect computers (or terminals and a mainframe) together at your local site, it was pretty inconvenient to need multiple interfaces, one for each wire to each satellite computer, arranged in a star configuration. To save on electronics, people wanted to have a "bus" network (also known as a "broadcast domain," a name that will be important later) where multiple stations could just be plugged into a single wire, and talk to any other station plugged into the same wire. These were not the same people as the ones building the Internet, so they didn't use IP addresses for this. They all invented their own scheme ("layer 2").

One of the early local bus networks was arcnet, which is dear to my heart (I wrote the first Linux arcnet driver and arcnet poetry way back in the 1990s, long after arcnet was obsolete). Arcnet layer 2 addresses were very simplistic: just 8 bits, set by jumpers or DIP switches on the back of the network card. As the network owner, it was your job to configure the addresses and make sure you didn't have any duplicates, or all heck would ensue. This was kind of a pain, but arcnet networks were usually pretty small, so it was only kind of a pain.

A few years later, ethernet came along and solved that problem once and for all, by using many more bits (48, in fact) in the layer 2 address. That's enough bits that you can assign a different (sharded-sequential) address to every device that has ever been manufactured, and not have any overlaps. And that's exactly what they did! Thus the ethernet MAC address was born.

Various LAN technologies came and went, including one of my favourites, IPX (Internetwork Packet Exchange, though it had nothing to do with the "real" Internet) and Netware, which worked great as long as all the clients and servers were on a single bus network. You never had to configure any addresses, ever. It was beautiful, and reliable, and worked. The golden age of networking, basically.

Of course, someone had to ruin it: big company/university networks. They wanted to have so many computers that sharing 10 Mbps of a single bus network between them all became a huge bottleneck, so they needed a way to have multiple buses, and then interconnect - "internetwork," if you will - those buses together. You're probably thinking, of course! Use the Internet Protocol for that, right? Ha ha, no. The Internet protocol, still not called that, wasn't mature or popular back then, and nobody took it seriously. Netware-over-IPX (and the many other LAN protocols at the time) were serious business, so as serious businesses do, they invented their own thing(s) to extend the already-popular thing, ethernet. Devices on ethernet already had addresses, MAC addresses, which were about the only thing the various LAN protocol people could agree on, so they decided to use ethernet addresses as the keys for their routing mechanisms. (Actually they called it bridging and switching instead of routing.)

The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical. That means the "bridging table" is not as nice as a modern IP routing table, which can talk about the route for a whole subnet at a time. In order to do efficient bridging, you had to remember which network bus each MAC address could be found on. And humans didn't want to configure each of those by hand, so it needed to figure itself out automatically. If you had a complex internetwork of bridges, this could get a little complicated. As I understand it, that's what led to the spanning tree poem, and I think I'll just leave it at that. Poetry is very important in networking.

Anyway, it mostly worked, but it was a bit of a mess, and you got broadcast floods every now and then, and the routes weren't always optimal, and it was pretty much impossible to debug. (You definitely couldn't write something like traceroute for bridging, because none of the tools you need to make it work - such as the ability for an intermediate bridge to even have an address - exist in plain ethernet.)

On the other hand, all these bridges were hardware-optimized. The whole system was invented by hardware people, basically, as a way of fooling the software, which had no idea about multiple buses and bridging between them, into working better on large networks. Hardware bridging means the bridging could go really really fast - as fast as the ethernet could go. Nowadays that doesn't sound very special, but at the time, it was a big deal. Ethernet was 10 Mbps, because you could maybe saturate it by putting a bunch of computers on the network all at once, not because any one computer could saturate 10 Mbps. That was crazy talk.

Anyway, the point is, bridging was a mess, and impossible to debug, but it was fast.

Internet over buses

While all that was happening, those Internet people were getting busy, and were of course not blind to the invention of cool cheap LAN technologies. I think it might have been around this time that the ARPANET got actually renamed to the Internet, but I'm not sure. Let's say it was, because the story is better if I sound confident.

At some point, things progressed from connecting individual Internet computers over point-to-point long distance links, to the desire to connect whole LANs together, over point-to-point links. Basically, you wanted a long-distance bridge.

You might be thinking, hey, no big deal, why not just build a long distance bridge and be done with it? Sounds good, doesn't work. I won't go into the details right now, but basically the problem is congestion control. The deep dark secret of ethernet bridging is that it assumes all your links are about the same speed, and/or completely uncongested, because they have no way to slow down. You just blast data as fast as you can, and expect it to arrive. But when your ethernet is 10 Mbps and your point-to-point link is 0.128 Mbps, that's completely hopeless. Separately, the idea of figuring out your routes by flooding all the links to see which one is right - this is the actual way bridging typically works - is hugely wasteful for slow links. And sub-optimal routing, an annoyance on local networks with low latency and high throughput, is nasty on slow, expensive long-distance links. It just doesn't scale.

Luckily, those Internet people (if it was called the Internet yet) had been working on that exact set of problems. If we could just use Internet stuff to connect ethernet buses together, we'd be in great shape.

And so they designed a "frame format" for Internet packets over ethernet (and arcnet, for that matter, and every other kind of LAN).

And that's when everything started to go wrong.

The first problem that needed solving was that now, when you put an Internet packet onto a wire, it was no longer clear which machine was supposed to "hear" it and maybe forward it along. If multiple Internet routers were on the same ethernet segment, you couldn't have them all picking it up and trying to forward it; that way lies packet storms and routing loops. No, you had to choose which router on the ethernet bus is supposed to pick it up. We can't just use the IP destination field for that, because we're already using that for the final destination, not the router destination. Instead, we identify the desired router using its MAC address in the ethernet frame.

So basically, to set up your local IP routing table, you want to be able to say something like, "send packets to IP address 10.1.1.1 via the router at MAC address 11:22:33:44:55:66." That's the actual thing you want to express. This is important! Your destination is an IP address, but your router is a MAC address. But if you've ever configured a routing table, you might have noticed that nobody writes it like that. Instead, because the writers of your operating system's TCP/IP stack are stubborn, you write something like "send packets to IP address 10.1.1.1 via the router at IP address 192.168.1.1."

In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is just a pointless intermediate step.
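
For concreteness, this is roughly how that rule looks on a Linux box with
iproute2, using the addresses from the example above (the /32 just pins it
to a single destination host):

    ip route add 10.1.1.1/32 via 192.168.1.1

The kernel still has to figure out, on its own, that 192.168.1.1 really
means 11:22:33:44:55:66.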

To do that pointless intermediate step, you need to add ARP (address resolution protocol), a simple non-IP protocol whose job it is to convert IP addresses to ethernet addresses. It does this by broadcasting to everyone on the local ethernet bus, asking them all to answer if they own that particular IP address. If you have bridges, they all have to forward all the ARP packets to all their interfaces, because they're ethernet broadcast packets, and that's what broadcasting means. On a big, busy ethernet with lots of interconnected LANs, excessive ARP starts becoming one of your biggest nightmares. It's especially bad on wifi. As time went on, people started making bridges/switches with special hacks to avoid forwarding ARP as far as it's technically supposed to go, to try to cut down on this problem. Some devices (especially wifi access points) just make fake ARP answers to try to help. But doing any of that is a hack, albeit sometimes a necessary hack.
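
Schematically, the basic request/reply exchange looks something like this
(the asking host's address, 192.168.1.7, is made up for the example):

    who-has 192.168.1.1? tell 192.168.1.7     (broadcast to ff:ff:ff:ff:ff:ff)
    192.168.1.1 is-at 11:22:33:44:55:66       (unicast reply from the router)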

Death by legacy

Time passed. Eventually (and this actually took quite a while), people pretty much stopped using non-IP protocols on ethernet at all. So basically all networks became a physical wire (layer 1), with multiple stations on a bus (layer 2), with multiple buses connected over bridges (gotcha! still layer 2!), and those inter-buses connected over IP routers (layer 3).

After a while, people got tired of manually configuring IP addresses, arcnet style, and wanted them to auto-configure, ethernet style, except it was too late to literally do it ethernet style, because a) the devices had already been manufactured with ethernet addresses, not IP addresses, and b) IP addresses were only 32 bits, which is not enough to just manufacture them forever with no overlaps, and c) just assigning IP addresses sequentially instead of using subnets would bring us back to square one: it would just be ethernet over again, and we already have ethernet. So that's where bootp and DHCP came from. Those protocols, by the way, are special kinda like ARP is special (except they pretend not to be special, by technically being IP packets). They have to be special, because an IP node has to be able to transmit them before it has an IP address, which is of course impossible, so it just fills the IP headers with essentially nonsense (albeit nonsense specified by an RFC), so the headers might as well have been left out. But nobody would feel nice if they were inventing a whole new protocol that wasn't IP, so they pretended it was IP, and then they felt nice. Well, as nice as one can feel when one is inventing DHCP. Anyway, I digress. The salient detail here is that unlike real IP services, bootp and DHCP need to know about ethernet addresses, because after all, it's their job to hear your ethernet address and assign you an IP address to go with it. They're basically the reverse of ARP, except we can't say that, because there's a protocol called RARP that is literally the reverse of ARP. Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.

The point of all this is that ethernet and IP were getting further and further intertwined. They're nowadays almost inseparable. It's hard to imagine a network interface without a 48-bit MAC address, and it's hard to imagine that network interface working without an IP address. You write your IP routing table using IP addresses, but of course you know you're lying when you name the router by IP address; you're just indirectly saying that you want to route via a MAC address. And you have ARP, which gets bridged but not really, and DHCP, which is an IP packet but is really an ethernet protocol, and so on.

Moreover, we still have both bridging and routing, and they both get more and more complicated as the LANs and the Internet get more and more complicated, respectively. Bridging is still, mostly, hardware based and defined by IEEE, the people who control the ethernet standards. Routing is still, mostly, software based and defined by the IETF, the people who control the Internet standards. Both groups still try to pretend the other group doesn't exist. Network operators basically choose bridging vs routing based on how fast they want it to go and how much they hate configuring DHCP servers, which they really hate very much, which means they use bridging as much as possible and routing when they have to.

In fact, bridging has gotten so completely out of control that people decided to extract the layer 2 bridging decisions out completely to a higher level (with configuration exchanged between bridges using a protocol layered over IP, of course!) so it can be centrally managed. That's called software-defined networking (SDN). It helps a lot, compared to letting your switches and bridges just do whatever they want, but it's also fundamentally silly, because you know what's software defined networking? IP. It is literally and has always been the software-defined network you use for interconnecting networks that have gotten too big. But the problem is, it was always too hard to hardware accelerate, and anyway, it didn't get hardware accelerated, and configuring DHCP really is a huge pain, so network operators just learned how to bridge bigger and bigger things. And nowadays big data centers are basically just SDNed, and you might as well not be using IP in the data center at all, because nobody's routing the packets. It's all just one big virtual bus network.

It is, in short, a mess.

Now forget I said all that...

Great story, right? Right. Now pretend none of that happened, and we're back in the early 1990s, when most of that had in fact already happened, but people at the IETF were anyway pretending that it hadn't happened and that the "upcoming" disaster could all be avoided. This is the good part!

There's one thing I forgot to mention in that big long story above: somewhere in that whole chain of events, we completely stopped using bus networks. Ethernet is not actually a bus anymore. It just pretends to be a bus. Basically, we couldn't get ethernet's famous CSMA/CD to keep working as speeds increased, so we went back to the good old star topology. We run bundles of cables from the switch, so that we can run one cable from each station all the way back to the center point. Walls and ceilings and floors are filled with big, thick, expensive bundles of ethernet, because we couldn't figure out how to make buses work well... at layer 1. It's kinda funny actually when you think about it. If you find sad things funny.

In fact, in a bonus fit of insanity, even wifi - the ultimate bus network, right, where literally everybody is sharing the same open-air "bus" - we almost universally use wifi in a mode, called "infrastructure mode," which literally simulates a giant star topology. If you have two wifi stations connected to the same access point, they can't talk to each other directly. They send a packet to the access point, but addressed to the MAC address of the other node. The access point then bounces it back out to the destination node.

HOLD THE HORSES LET ME JUST REVIEW THAT FOR YOU. There's a little catch there. When node X wants to send to Internet node Z, via IP router Y, via wifi access point A, what does the packet look like? Just to draw a picture, here's what we want to happen:

X -> [wifi] -> A -> [wifi] -> Y -> [internet] -> Z

Z is the IP destination, so obviously the IP destination field has to be Z. Y is the router, which we learned above that we specify by using its ethernet MAC address in the ethernet destination field. But in wifi, X can't just send out a packet to Y, for various reasons (including that they don't know each other's encryption keys). We have to send to A. Where do we put A's address, you might ask?

No problem! 802.11 has a thing called 3-address mode. They add a third ethernet MAC address to every frame, so they can talk about the real ethernet destination, and the intermediate ethernet destination. On top of that, there are bit fields called "to-AP" and "from-AP," which tell you if the packet is going from a station to an AP, or from an AP to a station, respectively. But actually they can both be true at the same time, because that's how you make wifi repeaters (APs send packets to APs).

Speaking of wifi repeaters! If A is a repeater, it has to send back to the base station, B, along the way, which looks like this:

X -> [wifi] -> A -> [wifi-repeater] -> B -> [wifi] -> Y -> [internet] -> Z

X->A uses three-address mode, but A->B has a problem: the ethernet source address is X, and the ethernet destination address is Y, but the packet on the air is actually being sent from A to B; X and Y aren't involved at all. Suffice it to say that there's a thing called 4-address mode, and it works pretty much like you think.

(In 802.11s mesh networks, there's a 6-address mode, and that's about where I gave up trying to understand.)

Avery, I was promised IPv6, and you haven't even mentioned IPv6

Oh, oops. This post went a bit off the rails, didn't it?

Here's the point of the whole thing. The IETF people, when they were thinking about IPv6, saw this mess getting made - and maybe predicted some of the additional mess that would happen, though I doubt they could have predicted SDN and wifi repeater modes - and they said, hey wait a minute, stop right there. We don't need any of this crap! What if instead the world worked like this?

  • No more physical bus networks (already done!)
  • No more layer 2 internetworks (that's what layer 3 is for)
  • No more broadcasts (layer 2 is always point-to-point, so where would you send the broadcast to? replace it with multicast instead)
  • No more MAC addresses (on a point-to-point network, it's obvious who the sender and receiver are)
  • No more ARP and DHCP (no MAC addresses, so no mapping of IP addresses to MAC addresses)
  • No more complexity in IP headers (so you can hardware accelerate IP routing)
  • No more IP address shortages (so that we can go back to routing big subnets again)
  • No more manual IP address configuration except at the core (and there are so many IP addresses that we can recursively hand out subnets down the tree from there)

Imagine that we lived in such a world: wifi repeaters would just be IPv6 routers. So would wifi access points. So would ethernet switches. So would SDN. ARP storms would be gone. "IGMP snooping bridges" would be gone. Bridging loops would be gone. Every routing problem would be traceroute-able. And best of all, we could drop 12 bytes (source/dest ethernet addresses) from every ethernet packet, and 18 bytes (source/dest/AP addresses) from every wifi packet. Sure, IPv6 adds an extra 24 bytes of address (vs IPv4), but you're dropping 12 bytes of ethernet, so the added overhead is only 12 bytes - pretty comparable to using two 64-bit IP addresses but having to keep the ethernet header. The idea that we could someday drop ethernet addresses helped to justify the oversized IPv6 addresses.

It would have been beautiful. Except for one problem: it never happened.

Requiem for a dream

One person at work put it best: "layers are only ever added, never removed."

All this wonderfulness depended on the ability to start over and throw away the legacy cruft we had built up. And that is, unfortunately, pretty much impossible. Even if IPv6 hits 99% penetration, that doesn't mean we'll be rid of IPv4. And if we're not rid of IPv4, we won't be rid of ethernet addresses, or wifi addresses. And if we have to keep the IEEE 802.3 and 802.11 framing standards, we're never going to save those bytes. So we will always need the stupid "IPv6 neighbour discovery" protocol, which is just a more complicated ARP. Even though we no longer have bus networks, we'll always need some kind of simulator for broadcasts, because that's how ARP works. We'll need to keep running a local DHCP server at home so that our obsolete IPv4 light bulbs keep working. We'll keep needing NAT so that our obsolete IPv4 light bulbs can keep reaching the Internet.

And that's not the worst of it. The worst of it is we still need the infinite abomination that is layer 2 bridging, because of one more mistake the IPv6 team forgot to fix. Unfortunately, while they were blue-skying IPv6 back in the 1990s, they neglected to solve the "mobile IP" problem. As I understand it, the idea was to get IPv6 deployed first - it should only take a few years - and then work on it after IPv4 and MAC addresses had been eliminated, at which time it should be much easier to solve, and meanwhile, nobody really has a "mobile IP" device yet anyway. I mean, what would that even mean, like carrying your laptop around and plugging into a series of one ethernet port after another while you ftp a file? Sounds dumb.

The killer app: mobile IP

Of course, with a couple more decades of history behind us, now we know a few use cases for carrying around a computer - your phone - and letting it plug into one ethernet port wireless access point after another. We do it all the time. And with LTE, it even mostly works! With wifi, it works sometimes. Good, right?

Not really, because of the Internet's secret shame: all that stuff only works because of layer 2 bridging. Internet routing can't handle mobility - at all. If you move around on an IP network, your IP address changes, and that breaks any connections you have open.

Corporate wifi networks fake it for you, bridging their whole LAN together at layer 2, so that the giant central DHCP server always hands you the same IP address no matter which corporate wifi access point you join, and then gets your packets to you, with at most a few seconds of confusion while the bridge reconfigures. Those newfangled home wifi systems with multiple extenders/repeaters do the same trick. But if you switch from one wifi network to another as you walk down the street - like if there's a "Public Wifi" service in a series of stores - well, too bad. Each of those gives you a new IP address, and each time your IP address changes, you kill all your connections.

LTE tries even harder. You keep your IP address (usually an IPv6 address in the case of mobile networks), even if you travel miles and miles and hop between numerous cell towers. How? Well... they typically just route all your traffic back to a central location, where it all gets bridged together (albeit with lots of firewalling) into one super-gigantic virtual layer 2 LAN. And your connections keep going. At the expense of a ton of complexity, and a truly embarrassing amount of extra latency, which they would really like to fix, but it's almost impossible.

Making mobile IP actually work

So okay, this has been a long story, but I managed to extract it from those IETF people eventually. When we got to this point - the problem of mobile IP - I couldn't help but ask. What went wrong? Why can't we make it work?

The answer, it turns out, is surprisingly simple. The great design flaw was in how the famous "4-tuple" (source ip, source port, destination ip, destination port) was defined. We use the 4-tuple to identify a given TCP or UDP session; if a packet has those four fields the same, then it belongs to a given session, and we can route it to whatever socket is handling that session. But the 4-tuple crosses two layers: internetwork (layer 3) and transport (layer 4). If, instead, we had identified sessions using only layer 4 data, then mobile IP would have worked perfectly.

Let's do a quick example. X port 1111 is talking to Y port 80, so it sends a packet with 4-tuple (X,1111,Y,80). The response comes back with (Y,80,X,1111), and the kernel delivers it to the socket that generated the original packet. When X sends more packets tagged (X,1111,Y,80), then Y delivers them all to the same server socket, and so on.

Then, if X hops IP addresses, it gets a new name, say Q. Now it'll start sending packets with (Q,1111,Y,80). Y has no idea what that means, and throws it away. Meanwhile, if Y sends packets tagged (Y,80,X,1111), they get lost, because there is no longer an X to receive them.

Imagine now that we tagged sockets without reference to their IP address. For that to work, we'd need much bigger port numbers (which are currently 16 bits). Let's make them, say, 128 or 256 bits, some kind of unique hash.

Now X sends out packets to Y with tag (uuid,80). Note, the packets themselves still contain the (X,Y) addressing information, down at layer 3 - that's how they get routed to the right machine in the first place. But the kernel doesn't use the layer 3 information to decide which socket to deliver to; it just uses the uuid. The destination port (80 in this case) is only needed to initiate a new session, to identify what service you want to connect to, and can be ignored or left out after that.

For the return direction, Y's kernel caches the fact that packets for (uuid) go to IP address X, which is the address it most recently received (uuid) packets from.

Now imagine that X changes addresses to Q. It still sends out packets tagged with (uuid,80), to IP address Y, but now those packets come from address Q. On machine Y, it receives the packet and matches it to the socket associated with (uuid), notes that the packets for that socket are now coming from address Q, and updates its cache. Its return packets can now be sent, tagged as (uuid), back to Q instead of X. Everything works! (Modulo some care to prevent connection hijacking by impostors.)
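
Here's a minimal sketch in C of that lookup, just to make the idea concrete.
It isn't a real network stack; the 128-bit tag, the fixed-size table, and the
names are all placeholders:

    /* Demultiplex incoming packets by connection id ("uuid") only, and
       remember the most recent source address for replies, so the peer
       can change IP addresses mid-session. */
    #include <stdint.h>
    #include <string.h>

    struct session {
        uint8_t  id[16];      /* 128-bit connection id */
        uint32_t peer_ip;     /* last IP this id was seen from */
        uint16_t peer_port;   /* last source port */
    };

    static struct session sessions[64];
    static int nsessions;

    struct session *demux(const uint8_t id[16], uint32_t src_ip, uint16_t src_port)
    {
        for (int i = 0; i < nsessions; i++) {
            if (memcmp(sessions[i].id, id, 16) == 0) {
                sessions[i].peer_ip   = src_ip;   /* peer roamed? follow it */
                sessions[i].peer_port = src_port;
                return &sessions[i];
            }
        }
        return NULL;  /* unknown id: only a fresh handshake may create one */
    }

The destination IP and port still get the packet to the right machine; they
just don't participate in picking the socket.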

There's only one catch: that's not how UDP and TCP work, and it's too late to update them. Updating UDP and TCP would be like updating IPv4 to IPv6; a project that sounded simple, back in the 1990s, but decades later, is less than half accomplished (and the first half was the easy part; the long tail is much harder).

The positive news is we may be able to hack around it with yet another layering violation. If we throw away TCP - it's getting rather old anyway - and instead use QUIC over UDP, then we can just stop using the UDP 4-tuple as a connection identifier at all. Instead, if the UDP port number is the "special mobility layer" port, we unwrap the content, which can be another packet with a proper uuid tag, match it to the right session, and deliver those packets to the right socket.

There's even more good news: the experimental QUIC protocol already, at least in theory, has the right packet structure to work like this. It turns out you need unique session identifiers (keys) anyhow if you want to use stateless packet encryption and authentication, which QUIC does. So, perhaps with not much work, QUIC could support transparent roaming. What a world that would be!

At that point, all we'd have to do is eliminate all remaining UDP and TCP from the Internet, and then we would definitely not need layer 2 bridging anymore, for real this time, and then we could get rid of broadcasts and MAC addresses and SDN and DHCP and all that stuff.

And then the Internet would be elegant again.

August 11, 2017 05:49

Microsoft Announces Windows 10 Pro for Workstations


Windows 10 Pro for Workstations is a high-end edition of Windows 10 Pro. It comes with unique support for server-grade PC hardware and is designed to meet the demanding needs of mission-critical and compute-intensive workloads.

We know that power users have unique needs to run efficiently and we take the feedback we hear seriously. Much of that valuable feedback comes through the Windows Insider Program. Today we are very excited to announce a new edition of Windows 10 Pro designed to meet the needs of our advanced users deploying their Workstation PCs in demanding and mission-critical scenarios. Windows 10 Pro for Workstations will be delivered as part of our Fall Creators Update, available this fall.     


The value of Windows 10 Pro for Workstations is directly aligned with increasing the performance and reliability of high-end PCs, with the following features:

  • ReFS (Resilient file system): ReFS provides cloud-grade resiliency for data on fault-tolerant storage spaces and manages very large volumes with ease. ReFS is designed to be resilient to data corruption, optimized for handling large data volumes, auto-correcting and more. It protects your data with integrity streams on your mirrored storage spaces. Using its integrity streams, ReFS detects when data becomes corrupt on one of the mirrored drives and uses a healthy copy of your data on the other drive to correct and protect your precious data.
  • Persistent memory: Windows 10 Pro for Workstations provides the most demanding apps and data with the performance they require with non-volatile memory modules (NVDIMM-N) hardware. NVDIMM-N enables you to read and write your files with the fastest speed possible, the speed of the computer’s main memory. Because NVDIMM-N is non-volatile memory, your files will still be there, even when you switch your workstation off.
  • Faster file sharing: Windows 10 Pro for Workstations includes a feature called SMB Direct, which supports the use of network adapters that have Remote Direct Memory Access (RDMA) capability. Network adapters that have RDMA can function at full speed with very low latency, while using very little CPU. For applications that access large datasets on remote SMB file shares, this feature enables:
    • Increased throughput: Leverages the full throughput of high speed networks where the network adapters coordinate the transfer of large amounts of data at line speed.
    • Low latency: Provides extremely fast responses to network requests, and, as a result, makes remote file storage feel as if it is directly attached storage.
    • Low CPU utilization: Uses fewer CPU cycles when transferring data over the network, which leaves more power available to other applications running on the system.
  • Expanded hardware support: One of the top pain points expressed by our Windows Insiders was the limits on taking advantage of the raw power of their machine. Hence, we are expanding hardware support in Windows 10 Pro for Workstations. Users will now be able to run Windows 10 Pro for Workstations on devices with high performance configurations including server grade Intel Xeon or AMD Opteron processors, with up to 4 CPUs (today limited to 2 CPUs) and add massive memory up to 6TB (today limited to 2TB).

Performance is a very important requirement in this new world of fast-paced innovation, and we will continue to invest in Windows 10 Pro for Workstations to enable Windows power users to maximize every aspect of their high-performance devices. Windows 10 Pro for Workstations builds on significant investments that Windows has made in recent releases for scaling up across a high number of logical processors and large amounts of memory. Our architectural changes in the Windows kernel take full advantage of high-end processor families, such as Intel Xeon and AMD Opteron, that package a high number of cores in single- or multi-processor configurations.

Thank you to our customers and Windows Insiders for your feedback.  We look forward to continuing to hear from you.

Updated August 10, 2017 2:06 pm

Negative Result: Reading Kernel Memory from User Mode


I was going to write an introduction about how important negative results can be. I didn’t. I assume you can figure out for yourself why that is, and if not, you have all the more reason to read this blog post. If you think it’s trivial why my result is negative, you definitely need to read the blog post.

I think most researchers would immediately think that reading kernel memory from an unprivileged process cannot work because of “page tables”. That is what the Instruction Set Architecture, or ISA, says. But to understand why that was unsatisfactory to me, we need to dig a bit deeper, and I’ll start at the beginning.

 
When software running on a core needs memory, it issues a so-called “load”. The load is then processed in multiple stages until the data is found and returned or an error occurs. The figure below shows a simplified version of this subsystem.

[Figure: memory hierarchy]

Software, including the operating system, uses virtual addressing to start a load (which is what a memory read is called inside the CPU). The first stage of processing is the L1 cache, which is split into a data cache and an instruction cache. The L1 cache is a so-called VIPT (Virtually Indexed, Physically Tagged) cache, meaning data can be looked up directly using the virtual address of the load request. This, along with its central position in the core, makes the L1 incredibly fast. If the requested data is not found in the L1 cache, the load must be passed down the cache hierarchy, and this is where the page tables come into play. The page tables translate the virtual address into a physical address; this is essentially how paging works on x64, and it is during this translation that privileges are checked. Once we have a physical address, the CPU can query the L2 cache and the L3 cache in turn. Both are PIPT (Physically Indexed, Physically Tagged) caches, so the page-table translation must happen before those lookups can be done. If the data is in none of the caches, the CPU asks the memory controller to fetch it from main memory. The latency of a load from the L1 cache is around 5 clock cycles, whereas a load from main memory is typically around 200 clock cycles. With the security check living in the page tables, and the L1 sitting before them, we can already see that the “because page tables” argument is too simple. That said, Intel’s software developer’s manuals [2] state that the security settings are copied along with the data into the L1 cache.

In this and the next section I’ll outline the argumentation behind my theory, but only outline it. For those a bit more interested, it might be worth looking at my talk at HackPra from January [3], where I describe the pipeline in slightly more detail, though in a different context. Intel CPUs are superscalar, pipelined CPUs that use speculative execution, and that plays a big role in why I thought it might be possible to read kernel memory from an unprivileged user-mode process. A very simplified overview of the pipeline can be seen in the figure below.

[Figure: simplified pipeline]

The pipeline starts with the instruction decoder, which reads bytes from memory, parses the buffer, outputs instructions, and decodes each instruction into micro ops. Micro ops are the building blocks of instructions. It is useful to think of x86 CPUs as a Complex Instruction Set Computer (CISC) with a Reduced Instruction Set Computer (RISC) backend. That’s not entirely true, but it’s a useful way to think about it. The micro ops (the reduced instructions) are queued into the reorder buffer, where each micro op is kept until all of its dependencies have been resolved. Each cycle, any micro ops with all dependencies resolved are scheduled on available execution units in first-in, first-out order. With multiple specialized execution units, many micro ops execute at once, and importantly, due to dependencies and bottlenecks in execution units, micro ops need not execute in the same order in which they entered the reorder buffer. When a micro op finishes executing, its result and exception status are added to its entry in the reorder buffer, thus resolving dependencies of other micro ops. When all micro ops belonging to a given instruction have been executed and have made it to the head of the reorder buffer queue, they enter retirement processing. Because the micro ops are at the head of the reorder buffer, we can be sure that retirement happens in the same order in which the micro ops were added to the queue. If an exception was flagged in the reorder buffer for a micro op being retired, an interrupt is raised on the instruction to which the micro op belonged. Thus the interrupt is always raised on the instruction that caused it, even if the micro op that caused the interrupt was executed long before the entire instruction was done. A raised interrupt causes a flush of the pipeline, so that any micro op still in the reorder buffer is discarded and the instruction decoder is reset. If no exception was flagged, the result is committed to the registers. This is essentially an implementation of Tomasulo’s algorithm [1] and allows multiple instructions to execute at the same time while maximizing resource use.

Imagine the following instruction executed in user mode:

mov rax,[somekernelmodeaddress]

It will cause an interrupt when retired, but it’s not clear what happens between when the instruction finishes executing and the actual retirement. We know that once it has retired, any information it may or may not have read is gone, as it will never be committed to the architectural registers. However, maybe we have a chance of seeing what was read if Intel relies perfectly on Tomasulo’s algorithm. Imagine the mov instruction sets its result in the reorder buffer, along with the flag that retirement should cause an interrupt and discard any data fetched. If that is the case, we can execute additional code speculatively. Imagine the following code:

mov rax, [somekerneladdress]

mov rbx, [someusermodeaddress]

If there are no dependencies, both will execute simultaneously (there are two execution units for loads), and the second will never get its result committed to the registers, because it will be discarded when the first mov instruction causes an interrupt to be thrown. However, the second instruction will also execute speculatively, and it may change the microarchitectural state of the CPU in a way that we can detect. In this particular case, the second mov instruction will load the

someusermodeaddress

into the cache hierarchy, and we will be able to observe a faster access time after structured exception handling has taken care of the exception. To make sure that someusermodeaddress is not already in the cache hierarchy, I use the clflush instruction before starting the execution. Now only a single further step is required for us to leak information about the kernel memory:

mov rax, [somekerneladdress]

and rax, 1

mov rbx, [rax + someusermodeaddress]

If this is executed, the last two instructions run speculatively, and the address loaded by the final instruction differs depending on the value loaded from somekerneladdress; the load may therefore pull in different cache lines. This cache activity we can then observe through a flush+reload cache attack.
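
To make the measurement side concrete, here is a rough C sketch of the
flush+reload probe (the threshold, the probe pointer, and the placement of
the speculative gadget are illustrative, not taken from the original code):

    #include <stdint.h>
    #include <x86intrin.h>

    /* Time one access to the probe line with rdtscp. */
    static inline uint64_t time_access(volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;                        /* the access being timed */
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }

    /* Flush the probe line, run the speculative gadget (and catch the
       resulting fault), then check whether the line came back cached. */
    int probe_was_touched(volatile uint8_t *probe)
    {
        _mm_clflush((const void *)probe);
        _mm_mfence();
        /* ... execute the mov/and/mov gadget here, handle the exception ... */
        return time_access(probe) < 100;  /* "fast" => loaded speculatively */
    }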

The first problem we’re facing here is that we must make sure the second and third instructions run before the first one retires. We have a race condition we must win. How do we engineer that? We fill the reorder buffer with dependent instructions which use a different execution unit than the ones we need, and then we add the code listed above. The dependency forces the CPU to execute the filling instructions one micro op at a time, and using an execution unit different from the one used by the leaking code makes sure the CPU can speculatively execute the leak while working on the fill pattern.

The straightforward fill pattern I use is 300 copies of add reg64, imm; specifically, add rax, 0x141. I chose this because there are two execution units able to execute this instruction (the integer ALUs), and since the fill instructions must execute sequentially, one of these units will always be available to my leakage code (provided that another hardware thread in the same core isn’t mixing things up).

Since my kernel-read mov instruction and my leaking mov instruction must run sequentially, and the data fetched by the leaking instruction cannot be in the cache, the total execution time would be around 400 CLKs if the kernel address isn’t cached. This is a pretty steep cost given that an add rax, 0x141 costs around 3 CLKs. For this reason, I see to it that the kernel address I’m accessing is loaded into the cache hierarchy. I use two different methods to ensure that. First, I call a syscall that touches this memory. Second, I use the prefetcht0 instruction to improve my odds of having the address loaded in L1. Gruss et al. [4] concluded that prefetch instructions may load the cache despite not having access rights. Further, they showed that it’s possible for the page table traversal to abort, which would surely mean that I get no result. Having the data already in L1 avoids this traversal.
 

All said and done, there are a few assumptions I made about Intel’s implementation of Tomasulo’s algorithm:

1)    Speculative execution continues despite interrupt flag

2)    I can win the race condition between speculative execution and retirement

3)    I can load caches during speculative execution

4)    Data is provided despite interrupt flag

I ran the following tests on my i3-5005u Broadwell CPU.

I found no way to test assumptions 1, 2 and 3 separately. So I wrote up the setup I outlined above, but in place of the leak code above I used the following:

mov rdi, [rdi]              ; where rdi is somekernelmodeaddress

mov rdi, [rdi + rcx + 0]    ; where rcx is someusermodeaddress

Then I timed accessing someusermodeaddress after an exception handler dealt with the resulting exception. I did 1000 runs and discarded the 10 slowest. First I did a run with the second line above present and the value at the kernel-mode address set to 0. I then did a second run with the second line commented out. This allows me to test whether I have a side channel inside the speculative execution. The results are summarized in the histogram below (mean and standard deviation respectively: mean = 71.22, std = 3.31; mean = 286.17, std = 53.22). So obviously I have a statistically significant covert channel inside the speculative execution.
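The measurement loop just described might be sketched roughly as follows (illustrative only); run_leak_attempt() is a hypothetical placeholder for the two-instruction gadget wrapped in an exception handler, and flush_line()/probe_cycles() are the helpers from the earlier sketch:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define RUNS 1000
#define DROP 10                            /* discard the 10 slowest samples   */

extern void run_leak_attempt(void);        /* hypothetical: gadget + handler   */

static int cmp_u64(const void *a, const void *b)
{
    uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
    return (x > y) - (x < y);
}

void measure(const volatile uint8_t *someusermodeaddress)
{
    uint64_t t[RUNS];

    for (int i = 0; i < RUNS; i++) {
        flush_line((const void *)someusermodeaddress);
        run_leak_attempt();
        t[i] = probe_cycles(someusermodeaddress);   /* fast => loaded speculatively */
    }

    qsort(t, RUNS, sizeof t[0], cmp_u64);           /* sort so the slowest are last */

    int n = RUNS - DROP;
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += (double)t[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += ((double)t[i] - mean) * ((double)t[i] - mean);
    printf("mean = %.2f cycles, std = %.2f\n", mean, sqrt(var / n));
}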

 
In a second step I made sure that the kernel address I was trying to read had the value 4096. Now, if I’m actually reading kernel-mode data, the second line will fetch a cache line in the next page, so I would expect a slow access to someusermodeaddress, since the speculative fetch should have pulled in a cache line exactly one page further on. I selected the offset 4096, rather than just adding the size of a cache line, to avoid effects due to hardware prefetchers. Fortunately, I did not get a slow read, suggesting that Intel nulls the result when the access is not allowed. I double-checked by accessing the cache line I had wanted to touch speculatively, and indeed that address was not loaded into the cache either. Consequently it seems likely that Intel does process the illegal read of kernel-mode memory, but does not copy the result into the reorder buffer. So at this point my experiment has failed, hence the negative result. Being really curious, I added an add rdi, 4096 instruction after the first line in the test code and could verify that this code was indeed executed speculatively and the result was side-channeled out.

While I did set out to read kernel-mode memory without privileges, and that produced a negative result, I do feel like I opened a Pandora’s box. The thing is, there were two positive results in my tests. The first is that Intel’s implementation of Tomasulo’s algorithm is not side-channel safe: we have access to the results of speculative execution despite those results never being committed. Secondly, my results demonstrate that speculative execution does indeed continue despite violations of the isolation between kernel mode and user mode.

This is truly bad news for security. First, it gives microarchitectural side-channel attacks additional leverage – we can deduce information not only from what is actually executed but also from what is speculatively executed. It also seems likely that we can influence what is speculatively executed and what is not by influencing caches such as the BTB; see Evtyushkin and Ponomarev [5], for instance. It thus adds another way to increase the expressiveness of microarchitectural side-channel attacks, potentially allowing an attacker even more leverage through the CPU. This of course makes writing constant-time code even more complex, and that is definitely bad news.

It also casts doubt on mitigations that rely on the retirement of instructions. I cannot say how far that stretches, but my immediate guess would be that vmexits are handled at instruction retirement. Further, we see that speculative execution does not consistently abide by isolation mechanisms, so it is a haunting question what we can actually do with speculative execution.

[1] Intel® 64 and IA-32 Architectures Software Developer Manuals. Intel. https://software.intel.com/en-us/articles/intel-sdm

[2] Tomasulo, Robert M. (Jan 1967). “An Efficient Algorithm for Exploiting Multiple Arithmetic Units”. IBM Journal of Research and Development 11 (1): 25–33. ISSN 0018-8646. doi:10.1147/rd.111.0025.

[3] Fogh, Anders. “Covert Shotgun: Automatically finding covert channels in SMT”. https://www.youtube.com/watch?v=oVmPQCT5VkY

[4] Gruss, Daniel, et al. “Prefetch Side-Channel Attacks: Bypassing SMAP and Kernel ASLR”. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016.

[5] Evtyushkin, Dmitry, Dmitry Ponomarev, and Nael Abu-Ghazaleh. “Jump Over ASLR: Attacking Branch Predictors to Bypass ASLR”. In Proceedings of the 49th International Symposium on Microarchitecture (MICRO), 2016.

Filecoin Suspends ICO After Raising $186M in One Hour


ICO mania is still going strong despite all the hacks, repeated technical glitches, worries about wrong incentives and warnings by regulators from around the world. We just got an example of this today, as Filecoin was forced to suspend its ICO after an incredible amount of money directed towards it overwhelmed the system.


Filecoin is a cryptocurrency powered storage network. Miners earn Filecoin by providing open hard-drive space to the network, while users spend Filecoin to store their files encrypted in the decentralized network.

If you wonder why such a non-sexy and not incredibly unique product like a decentralized file system would get so much attention from investors, you need look no further than its list of backers. Last week a group of some 150 top Silicon Valley venture capital brand names invested $52 million in the project; these include Sequoia Capital, Andreessen Horowitz, Union Square Ventures and Winklevoss Capital.

The Filecoin ICO has an uncapped USD target, and money started pouring in as a torrent of transactions as soon as the sale was opened to the public (accredited investors only). Soon the team behind the project reported that inbound funds totaled $252 million.


This, however, proved to be more traffic than the ICO could handle, and a backlog of confirmations built up, frustrating investors who could not tell whether they had gotten in on the action or not, just as happened with previous massive ICOs. The raised figure was soon revised down to the amount Filecoin was able to confirm.

As of the time of writing this article, about seven hours after the latest update from Filecoin, the ICO remains suspended.


Uber shareholder group wants Benchmark off board


Chinese officials are investigating three of the country’s largest tech companies, Tencent, Baidu and Sina, for cybersecurity violations, CNBC reports. The government’s cybersecurity arm issued a statement that each of the companies’ social platforms (WeChat, Tieba and Weibo, respectively) gives users an avenue to spread violence, terror, false rumors and obscene imagery that endangers national security and public safety.

Why it matters: We’ve seen similar crackdowns on social media platforms across the world. In Europe, leaders in France, the U.K. and Germany have all vowed to fine U.S. tech companies, like Google and Facebook, if they fail to remove such content in an orderly fashion. In the U.S., it was just revealed that the FBI was monitoring fake social media accounts that were potentially spreading false information on Election Day.

MIT team’s school-bus algorithm could save $5M and 1M bus miles


A trio of MIT researchers recently tackled a tricky vehicle-routing problem when they set out to improve the efficiency of the Boston Public Schools bus system.

Last year, more than 30,000 students rode 650 buses to 230 schools at a cost of $120 million.

In hopes of spending less this year, the school system offered $15,000 in prize money...

The Minifree Libreboot T400 is free as in freedom


The Libreboot T400 doesn’t look like much. It’s basically a refurbished Lenovo Thinkpad with the traditional Lenovo/IBM pointer nubbin and a small touchpad. It’s a plain black laptop, as familiar as any luggable assigned to a cubicle warrior on the road. But, under the hood, you have a machine that fights for freedom.

The T400 runs Libreboot, a free and open BIOS and the Trisquel GNU/Linux OS. Both of these tools should render the Libreboot T400 as secure from tampering as can be. “Your Libreboot T400 obeys you, and nobody else!” write its creators, and that seems to be the case.

How does it work? And should you spend about $300 on a refurbished Thinkpad with Linux installed? That depends on what you’re trying to do. The model I tested was on the low end, with enough speed and performance to count, but Trisquel tended to bog down a bit, and the secure browser, “an unbranded Mozilla-based browser that never recommends non-free software,” was a little too locked down for its own good. I was able to work around a number of the issues I had, but this is definitely not for the faint of heart.

That said, you are getting a nearly fully open computer. The 14.1-inch machine runs an Intel Core 2 Duo P8400 processor and starts with 4GB of RAM and a 160GB hard drive. That costs about $257 plus shipping and includes a battery and US charger.

Once you have the T400 you’re basically running a completely clean machine. It runs a free (as in freedom) operating system complete with open drivers and applications and Libreboot ensures that you have no locked-down software on the machine. You could easily recreate this package yourself on your own computer but I suspect that you, like me, would eventually run into a problem that couldn’t be solved entirely with free software. Hence the impetus to let Minifree do the work for you.

If you’re a crusader for privacy, security, and open standards, then this laptop is for you. Thankfully it’s surprisingly cheap and quite rugged, so you’re not only sticking it to the man but you could possibly buy a few of these and throw them at the man in a pinch.

The era of common Linux on the desktop – and not in the form of a secure, libre device like this – is probably still to come. While it’s trivial (and fun) to install a Linux instance these days, I doubt anyone would do it outright on a laptop that they’re using on a daily basis. But for less than the price of a cellphone you can use something like the T400 and feel safe and secure that you’re not supporting (many) corporate interests when it comes to your computing experience. It’s not a perfect laptop by any stretch, but it’s just the thing if you’re looking for something that no one but you controls.

A Theory of Jerks


Picture the world through the eyes of the jerk. The line of people in the post office is a mass of unimportant fools; it’s a felt injustice that you must wait while they bumble with their requests. The flight attendant is not a potentially interesting person with her own cares and struggles but instead the most available face of a corporation that stupidly insists you shut your phone. Custodians and secretaries are lazy complainers who rightly get the scut work. The person who disagrees with you at the staff meeting is an idiot to be shot down. Entering a subway is an exercise in nudging past the dumb schmoes.

We need a theory of jerks. We need such a theory because, first, it can help us achieve a calm, clinical understanding when confronting such a creature in the wild. Imagine the nature-documentary voice-over: ‘Here we see the jerk in his natural environment. Notice how he subtly adjusts his dominance display to the Italian restaurant situation…’ And second – well, I don’t want to say what the second reason is quite yet.

As it happens, I do have such a theory. But before we get into it, I should clarify some terminology. The word ‘jerk’ can refer to two different types of person (I set aside sexual uses of the term, as well as more purely physical senses). The older use of ‘jerk’ designates a kind of chump or an ignorant fool, though not a morally odious one. When Weird Al Yankovic sang, in 2006, ‘I sued Fruit of the Loom ’cause when I wear their tightie-whities on my head I look like a jerk’, or when, on 1 March 1959, Willard Temple wrote in a short story in the Los Angeles Times: ‘He could have married the campus queen… Instead the poor jerk fell for a snub-nosed, skinny little broad’, it’s clear it’s the chump they have in mind.


The jerk-as-fool usage seems to have begun as a derisive reference to the unsophisticated people of a ‘jerkwater town’: that is, a town not rating a full-scale train station, requiring the boiler man to pull on a chain to water his engine. The term expresses the travelling troupe’s disdain. Over time, however, ‘jerk’ shifted from being primarily a class-based insult to its second, now dominant, sense as a term of moral condemnation. Such linguistic drift from class-based contempt to moral deprecation is a common pattern across languages, as observed by Friedrich Nietzsche in On the Genealogy of Morality (1887). (In English, consider ‘rude’, ‘villain’, ‘ignoble’.) And it is the immoral jerk who concerns me here.

Why, you might be wondering, should a philosopher make it his business to analyse colloquial terms of abuse? Doesn’t Urban Dictionary cover that kind of thing quite adequately? Shouldn’t I confine myself to truth, or beauty, or knowledge, or why there is something rather than nothing (to which the Columbia philosopher Sidney Morgenbesser answered: ‘If there was nothing you’d still be complaining’)? I am, in fact, interested in all those topics. And yet I suspect there’s a folk wisdom in the term ‘jerk’ that points toward something morally important. I want to extract that morally important thing, to isolate the core phenomenon towards which I think the word is groping. Precedents for this type of work include the Princeton philosopher Harry Frankfurt’s essay ‘On Bullshit’(2005) and, closer to my target, the Irvine philosopher Aaron James’s book Assholes (2012). Our taste in vulgarity reveals our values.

I submit that the unifying core, the essence of jerkitude in the moral sense, is this: the jerk culpably fails to appreciate the perspectives of others around him, treating them as tools to be manipulated or idiots to be dealt with rather than as moral and epistemic peers. This failure has both an intellectual dimension and an emotional dimension, and it has these two dimensions on both sides of the relationship. The jerk himself is both intellectually and emotionally defective, and what he defectively fails to appreciate is both the intellectual and emotional perspectives of the people around him. He can’t appreciate how he might be wrong and others right about some matter of fact; and what other people want or value doesn’t register as of interest to him, except derivatively upon his own interests. The bumpkin ignorance captured in the earlier use of ‘jerk’ has changed into a type of moral ignorance.

Some related traits are already well-known in psychology and philosophy – the ‘dark triad’ of Machiavellianism, narcissism, and psychopathy, and James’s conception of the asshole, already mentioned. But my conception of the jerk differs from all of these. The asshole, James says, is someone who allows himself to enjoy special advantages out of an entrenched sense of entitlement. That is one important dimension of jerkitude, but not the whole story. The callous psychopath, though cousin to the jerk, has an impulsivity and love of risk-taking that need be no part of the jerk’s character. Neither does the jerk have to be as thoroughly self-involved as the narcissist or as self-consciously cynical as the Machiavellian, though narcissism and Machiavellianism are common enough jerkish attributes. My conception of the ‘jerk’ also has a conceptual unity that is, I think, both theoretically appealing in the abstract and fruitful in helping explain some of the peculiar features of this type of animal, as we will see.

The opposite of the jerk is the sweetheart. The sweetheart sees others around him, even strangers, as individually distinctive people with valuable perspectives, whose desires and opinions, interests and goals are worthy of attention and respect. The sweetheart yields his place in line to the hurried shopper, stops to help the person who dropped her papers, calls an acquaintance with an embarrassed apology after having been unintentionally rude. In a debate, the sweetheart sees how he might be wrong and the other person right.

The moral and emotional failure of the jerk is obvious. The intellectual failure is obvious, too: no one is as right about everything as the jerk thinks he is. He would learn by listening. And one of the things he might learn is the true scope of his jerkitude – a fact about which, as I will explain shortly, the all-out jerk is inevitably ignorant. Which brings me to the other great benefit of a theory of jerks: it might help you figure out if you yourself are one.

Some clarifications and caveats.

First, no one is a perfect jerk or a perfect sweetheart. Human behaviour – of course! – varies hugely with context. Different situations (sales-team meetings, travelling in close quarters) might bring out the jerk in some and the sweetie in others.

Second, the jerk is someone who culpably fails to appreciate the perspectives of others around him. Young children and people with severe mental disabilities aren’t capable of appreciating others’ perspectives, so they can’t be blamed for their failure and aren’t jerks. Also, not all perspectives deserve equal treatment. Failure to appreciate the outlook of a neo-Nazi, for example, is not a sign of jerkitude – though the true sweetheart might bend over backwards to try.

Third, I’ve called the jerk ‘he’, for reasons you might guess. But then it seems too gendered to call the sweetheart ‘she’, so I’ve made the sweetheart a ‘he’ too.

I said that my theory might help us to tell whether we, ourselves, are jerks. But, in fact, this turns out to be a peculiarly difficult question. The Washington University psychologist Simine Vazire has argued that we tend to know our own characteristics quite well when the relevant traits are evaluatively neutral and straightforwardly observable, and badly when they are loaded with value judgments and not straightforwardly observable. If you ask someone how talkative she is, or whether she is relatively high-strung or relatively mellow, and then you ask her friends to rate her along the same dimensions, the self-rating and the peer ratings usually correlate quite well – and both sets of ratings also tend to line up with psychologists’ best attempts to measure such traits objectively.

Why? Presumably because it’s more or less fine to be talkative and more or less fine to be quiet; OK to be a bouncing bunny and OK instead to keep it low-key, and such traits are hard to miss in any case. But few of us want to be inflexible, stupid, unfair or low in creativity. And if you don’t want to see yourself that way, it’s easy enough to dismiss the signs. Such characteristics are, after all, connected to outward behaviour in somewhat complicated ways; we can always cling to the idea that we have been misunderstood. Thus we overlook our own faults.


With Vazire’s model of self-knowledge in mind, I conjecture a correlation of approximately zero between how one would rate oneself in relative jerkitude and one’s actual true jerkitude. The term is morally loaded, and rationalisation is so tempting and easy! Why did you just treat that cashier so harshly? Well, she deserved it – and anyway, I’ve been having a rough day. Why did you just cut into that line of cars at the last minute, not waiting your turn to exit? Well, that’s just good tactical driving – and anyway, I’m in a hurry! Why did you seem to relish failing that student for submitting her essay an hour late? Well, the rules were clearly stated; it’s only fair to the students who worked hard to submit their essays on time – and that was a grimace not a smile.

Since the most effective way to learn about defects in one’s character is to listen to frank feedback from people whose opinions you respect, the jerk faces special obstacles on the road to self-knowledge, beyond even what Vazire’s model would lead us to expect. By definition, he fails to respect the perspectives of others around him. He’s much more likely to dismiss critics as fools – or as jerks themselves – than to take the criticism to heart.

Still, it’s entirely possible for a picture-perfect jerk to acknowledge, in a superficial way, that he is a jerk. ‘So what, yeah, I’m a jerk,’ he might say. Provided this label carries no real sting of self-disapprobation, the jerk’s moral self-ignorance remains. Part of what it is to fail to appreciate the perspectives of others is to fail to see your jerkishly dismissive attitude toward their ideas and concerns as inappropriate.

Ironically, it is the sweetheart who worries that he has just behaved inappropriately, that he might have acted too jerkishly, and who feels driven to make amends. Such distress is impossible if you don’t take others’ perspectives seriously into account. Indeed, the distress itself constitutes a deviation (in this one respect at least) from pure jerkitude: worrying about whether it might be so helps to make it less so. Then again, if you take comfort in that fact and cease worrying, you have undermined the very basis of your comfort.

All normal jerks distribute their jerkishness mostly down the social hierarchy, and to anonymous strangers. Waitresses, students, clerks, strangers on the road – these are the unfortunates who bear the brunt of it. With a modicum of self-control, the jerk, though he implicitly or explicitly regards himself as more important than most of the people around him, recognises that the perspectives of those above him in the hierarchy also deserve some consideration. Often, indeed, he feels sincere respect for his higher-ups. Perhaps respectful feelings are too deeply written in our natures to disappear entirely. Perhaps the jerk retains a vestigial kind of concern specifically for those whom it would benefit him, directly or indirectly, to win over. He is at least concerned enough about their opinion of him to display tactical respect while in their field of view. However it comes about, the classic jerk kisses up and kicks down. The company CEO rarely knows who the jerks are, though it’s no great mystery among the secretaries.

Because the jerk tends to disregard the perspectives of those below him in the hierarchy, he often has little idea how he appears to them. This leads to hypocrisies. He might rage against the smallest typo in a student’s or secretary’s document, while producing a torrent of errors himself; it just wouldn’t occur to him to apply the same standards to himself. He might insist on promptness, while always running late. He might freely reprimand other people, expecting them to take it with good grace, while any complaints directed against him earn his eternal enmity. Such failures of parity typify the jerk’s moral short-sightedness, flowing naturally from his disregard of others’ perspectives. These hypocrisies are immediately obvious if one genuinely imagines oneself in a subordinate’s shoes for anything other than selfish and self-rationalising ends, but this is exactly what the jerk habitually fails to do.


Embarrassment, too, becomes practically impossible for the jerk, at least in front of his underlings. Embarrassment requires us to imagine being viewed negatively by people whose perspectives we care about. As the circle of people whom the jerk is willing to regard as true peers and superiors shrinks, so does his capacity for shame – and with it a crucial entry point for moral self-knowledge.

As one climbs the social hierarchy it is also easier to become a jerk. Here’s a characteristically jerkish thought: ‘I’m important, and I’m surrounded by idiots!’ Both halves of this proposition serve to conceal the jerk’s jerkitude from himself. Thinking yourself important is a pleasantly self-gratifying excuse for disregarding the interests and desires of others. Thinking that the people around you are idiots seems like a good reason to disregard their intellectual perspectives. As you ascend the hierarchy, you will find it easier to discover evidence of your relative importance (your big salary, your first-class seat) and of the relative idiocy of others (who have failed to ascend as high as you). Also, flatterers will tend to squeeze out frank, authentic critics.

This isn’t the only possible explanation for the prevalence of powerful jerks, of course. Maybe jerks are actually more likely to rise in business and academia than non-jerks – the truest sweethearts often suffer from an inability to advance their own projects over the projects of others. But I suspect the causal path runs at least as much in the other direction. Success might or might not favour the existing jerks, but I’m pretty sure it nurtures new ones.

The moralistic jerk is an animal worth special remark. Charles Dickens was a master painter of the type: his teachers, his preachers, his petty bureaucrats and self-satisfied businessmen, Scrooge condemning the poor as lazy, Mr Bumble shocked that Oliver Twist dares to ask for more, each dismissive of the opinions and desires of their social inferiors, each inflated with a proud self-image and ignorant of how they are rightly seen by those around them, and each rationalising this picture with a web of moralising ‘should’s.

Scrooge and Bumble are cartoons, and we can be pretty sure we aren’t as bad as them. Yet I see in myself and all those who are not pure sweethearts a tendency to rationalise my privilege with moralistic sham justifications. Here’s my reason for trying to dishonestly wheedle my daughter into the best school; my reason why the session chair should call on me rather than on the grad student who got her hand up earlier; my reason why it’s fine that I have 400 library books in my office…


Philosophers seem to have a special talent for this: we can concoct a moral rationalisation for anything, with enough work! (Such skill at rationalisation might explain why ethicist philosophers seem to behave no morally better, on average, than comparison groups of non-ethicists, as my collaborators and I have found in a series of empirical studies looking at a broad range of issues from library-book theft and courteous behaviour at professional conferences to rates of charitable donation and Nazi party membership in the 1930s.) The moralistic jerk’s rationalisations justify his disregard of others, and his disregard of others prevents him from accepting an outside corrective on his rationalisations, in a self-insulating cycle. Here’s why it’s fine for me to proposition my underlings and inflate my expense claims, you idiot critics. Coat the whole thing, if you like, in a patina of academic jargon.

The moralising jerk is apt to go badly wrong in his moral opinions. Partly this is because his morality tends to be self-serving, and partly it’s because his disrespect for others’ perspectives puts him at a general epistemic disadvantage. But there’s more to it than that. In failing to appreciate others’ perspectives, the jerk almost inevitably fails to appreciate the full range of human goods – the value of dancing, say, or of sports, nature, pets, local cultural rituals, and indeed anything that he doesn’t care for himself. Think of the aggressively rumpled scholar who can’t bear the thought that someone would waste her time getting a manicure. Or think of the manicured socialite who can’t see the value of dedicating one’s life to dusty Latin manuscripts. Whatever he’s into, the moralising jerk exudes a continuous aura of disdain for everything else.

Furthermore, mercy is near the heart of practical, lived morality. Virtually everything that everyone does falls short of perfection: one’s turn of phrase is less than perfect, one arrives a bit late, one’s clothes are tacky, one’s gesture irritable, one’s choice somewhat selfish, one’s coffee less than frugal, one’s melody trite. Practical mercy involves letting these imperfections pass forgiven or, better yet, entirely unnoticed. In contrast, the jerk appreciates neither others’ difficulties in attaining all the perfections that he attributes to himself, nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralising principle therefore comes naturally to him. (Sympathetic mercy is natural to the sweetheart.) And on the rare occasions when the jerk is merciful, his indulgence is usually ill-tuned: the flaws he forgives are exactly the ones he recognises in himself or has ulterior reasons to let slide. Consider another brilliant literary cartoon jerk: Severus Snape, the infuriating potions teacher in J K Rowling’s novels, always eager to drop the hammer on Harry Potter or anyone else who happens to annoy him, constantly bristling with indignation, but wildly off the mark – contrasted with the mercy and broad vision of Dumbledore.

Despite the jerk’s almost inevitable flaws in moral vision, the moralising jerk can sometimes happen to be right about some specific important issue (as Snape proved to be) – especially if he adopts a big social cause. He needn’t care only about money and prestige. Indeed, sometimes an abstract and general concern for moral or political principles serves as a kind of substitute for genuine concern about the people in his immediate field of view, possibly leading to substantial self-sacrifice. And in social battles, the sweetheart will always have some disadvantages: the sweetheart’s talent for seeing things from his opponent’s perspective deprives him of bold self-certainty, and he is less willing to trample others for his ends. Social movements sometimes do well when led by a moralising jerk. I will not mention specific examples, lest I err and offend.

How can you know your own moral character? You can try a label on for size: ‘lazy’, ‘jerk’, ‘unreliable’ – is that really me? As the work of Vazire and other personality psychologists suggests, this might not be a very illuminating approach. More effective, I suspect, is to shift from first-person reflection (what am I like?) to second-person description (tell me, what am I like?). Instead of introspection, try listening. Ideally, you will have a few people in your life who know you intimately, have integrity, and are concerned about your character. They can frankly and lovingly hold your flaws up to the light and insist that you look at them. Give them the space to do this, and prepare to be disappointed in yourself.

Done well enough, this second-person approach could work fairly well for traits such as laziness and unreliability, especially if their scope is restricted: laziness-about-X, unreliability-about-Y. But as I suggested above, jerkitude is not so tractable, since if one is far enough gone, one can’t listen in the right way. Your critics are fools, at least on this particular topic (their critique of you). They can’t appreciate your perspective, you think – though really it’s that you can’t appreciate theirs.

To discover one’s degree of jerkitude, the best approach might be neither (first-person) direct reflection upon yourself nor (second-person) conversation with intimate critics, but rather something more third-person: looking in general at other people. Everywhere you turn, are you surrounded by fools, by boring nonentities, by faceless masses and foes and suckers and, indeed, jerks? Are you the only competent, reasonable person to be found? In other words, how familiar was the vision of the world I described at the beginning of this essay?

If your self-rationalising defences are low enough to feel a little pang of shame at the familiarity of that vision of the world, then you probably aren’t pure diamond-grade jerk. But who is? We’re all somewhere in the middle. That’s what makes the jerk’s vision of the world so instantly recognisable. It’s our own vision. But, thankfully, only sometimes.


Lab hidden inside a famous monument


“I’ll just open the hatch…” says Richard Smith, who is stooped over in the ticket office at the Monument. He’s examining the oak-panelled floor as though it’s hiding a secret chamber, as in an Indiana Jones movie. Above him, a desk is piled high with leaflets “This is to certify that ________ has climbed the 311 steps of the Monument”.

The thing is, there are actually 345.

The Monument to the Great Fire of London consists of a towering, 202-foot (61-metre) stone column, decorated with dragons and topped with a flaming golden orb. On the inside, a striking spiral staircase stretches all the way to the top, twisting up like the peel of an apple carved in a single, continuous ribbon.

For years, a cracked plaque tacked to the base explained that it had been designed by Sir Christopher Wren.

Again, this isn’t entirely truthful.

It makes a lot of sense that the capital’s beloved Monument would be borne of Britain’s most celebrated architect. After all, Wren was widely involved in rebuilding London after the Great Fire destroyed pretty much every inch of ground within the city walls – including 13,200 houses and numerous extraordinary public buildings, from riverside castles to Whittington’s Longhouse, one of the largest public toilets in Europe. He even created the nearby St Paul’s Cathedral.

In fact, the Monument was designed by his close friend: a scientist.

Robert Hooke was a man of many passions, who applied his enquiring mind to subjects as diverse as chemistry and map making, at the sober end of the scale, and folk beliefs about toads and his own bowel movements at the other. In his day, he had a reputation as lofty as the pillar itself, variously described as “England’s Leonardo” and “certainly the greatest mechanick [sic] this day in the world”.

Today his name has largely been forgotten, but his contributions have endured. Among other things, he coined the word “cell” to describe the basic unit of life (the structures he saw reminded him of monks’ rooms, or “cells”), devised Hooke’s law of elasticity – arguably not particularly exciting, but useful – and invented mechanisms still used in clocks and cameras to this day.

After the fire, Hooke tried his hand at architecture too, designing hospitals, civic buildings and churches across the city. He didn’t get a lot of credit, partly because most of his achievements were signed off by, and mistakenly attributed to, Wren – and partly because some of them weren’t very good.

One such project was the Bethlem Royal Hospital. In an age where charity was increasingly fashionable, this new psychiatric hospital was designed more for its visitors than its patients. The focus on aesthetics was so extreme, it was widely mocked as a “palace for lunatics”. Among other features, it boasted an ornate façade that cracked the building with its weight, and a garden bordered by dangerously low walls; though patients might have escaped over them, at least the walls didn’t obscure the splendour of the building. Needless to say, the hospital’s name, corrupted to “Bedlam”, remains synonymous with chaos to this day.

Then came the Monument. It was supposed to be a grand acknowledgement of the fire, but at the time, “what Hooke really wanted was to build a very long telescope,” says Maria Zack, a mathematician at Nazarene University, California. In the end, he decided to combine both.

My guide for the day is Richard Smith, a Londoner with a Cockney accent and infectious enthusiasm for this enigmatic pillar. He lifts up the hatch – the only clue to which is a couple of wrought-iron hinges – and disappears underground. It leads to yet more stairs.

Eventually we find ourselves in a room with a domed roof. The ancient brick walls are bare and crumbling and it smells damp, like clothes that have been left in the washing machine for too long. This part is usually off-limits to visitors, though I can’t help thinking they probably don’t mind all that much.

Today the room is empty, except for a wireless router and some sensors.  “When they put the building next door up a couple of years ago, they had to make sure they didn’t accidentally knock this one down,” says Smith. But a few hundred years ago, it was a state-of-the-art physics lab.

To see why, Smith ushers me into the centre of the room. Looking up through a metal grate, there’s a clear view all the way through the spiral staircase, up to the highest point in the building. Right at the top, hidden inside a decorative golden orb, is yet another trap door – this time made of heavy iron. When it’s opened, you can look all the way up, from the basement lab into the night sky. In fact, the entire building is a giant telescope.

This isn’t as bonkers as it sounds. Back in the 17th Century, scientists were still arguing about whether the Sun revolved around the Earth or the other way around. Like all rational people today, Hooke was thoroughly convinced by the latter. But no one could prove it.

In theory, it should have been easy, thanks to “astronomical parallax”, an apparent shift in the position of one object, against a backdrop of another.

To experience parallax, all you need to do is hold out a finger and squint at it through one eye, then close that eye and open the other. Though all that’s changed is your viewing perspective, your finger will appear to move. “It’s a concept we all understand intuitively,” says Zack.

Scaling this up, if the Earth changes its position relative to the stars, while circling the Sun – then they should appear to jump from one place to another over the course of a year.

The catch is just how tiny these movements are. Take Gamma Draconis, a giant orange-coloured star around 900 trillion miles (1.4 quadrillion km) away. Instead of measuring the movement of objects in the sky, from planets to satellites, in metres or inches, astronomers divide up the heavens like the face of an imaginary clock. Every six months, the star moves north or south in the sky at a scale equivalent to the hands moving 22 ten-thousandths of a second. 

To magnify parallax enough to see it, you need a very large telescope indeed.

Hooke’s first idea was to embed one in his lodgings at Gresham College, where he was professor of geometry. The 36-foot (11m) telescope was so large, holes had to be cut through the structure of the building. In the end, it passed through two floors and out through the roof.

Next, Hooke chose his star. Gamma Draconis was the ideal candidate, because it’s relatively bright and passes directly overhead. Now all he had to do was wait for it to do so – he was ready to change our perspective on the Universe forever.

Except it didn’t work. The measurements depended on lining the lenses up exactly, but the structure just wasn’t stable enough. They were fixed in place using a wooden structure – a material known to expand in heat and flex in the wind.

Instead, he turned to the Monument. This time, he was determined the structure would be sturdy. His plans called for 28,196 cubic feet (798 cubic metres) of the finest Portland stone, which is roughly the weight of 14 blue whales. “This wasn’t just some flimsy narrow tube like the other telescope,” says Zack.

The construction took the best part of six years, mostly because they kept running out of stone. Eventually the king issued a proclamation, forbidding anyone from transporting rocks from the Isle of Portland without first consulting Wren, who was officially in charge of the project.

There were a few other hiccups along the way, such as the suggestion that it should be topped by a statue of the king, Charles II. This would have ruled out using it as a telescope, of course.

“Hooke was like ‘Oh, I know what you mean, but wouldn’t it be better to have this nice golden orb on the top? Then you can use it to shoot fireworks,’” says Zack, while pointing out that the need for a firework launching pad would hardly have been at the forefront of people’s minds.

Luckily, the King refused to have a statue of himself on top anyway, since he thought this might make it look like he was responsible for the fire. Hooke got his way and it was completed in 1677.

Originally, lenses would have slotted in at either end and an observer, standing in the lab, could take measurements of the stars using a special eyepiece. This time, surely Hooke would get his way.

Then disaster struck. “He was trying to keep the two lenses aligned, 200ft apart, with only limited ways to anchor them to the telescope,” says Zack. Worse still, the Monument is next to Fish Street Hill, which was the main route to London Bridge at the time. This was one of the busiest roads in London, mere metres from his highly sensitive scientific experiment. In the end, vibrations from the traffic ruined everything.

Stellar parallax wasn’t successfully measured until 1838, when the German astronomer Friedrich Bessel observed the movement of the star 61 Cygni.

But that isn’t quite the end of the Monument’s story. In the 17th Century, high buildings were rare. At the time, the tallest building in the world was Strasbourg Cathedral, which was only just over twice as high. Previously, Hooke had been forced to conduct experiments requiring height from the top of London’s Westminster Abbey or St Paul’s Cathedral. Now he had his very own laboratory for “tryalls” on the effects of height, particularly on the pressure of the surrounding air. In 1678 he penned a typically muddled diary entry:

Thursday, May 16th – wrote to Grace angry letter about her mothers Shirds. ag. at Fish street piller tryd experiment it descended at the top about 1/3rd of an inch. DH. view at Bedlam with Govenors 0. Sir Joseph Watt here. Opend Iron chest hurt finger. sat not, discoursd about Experiment at Fish Street Column. with Mr. Henshaw, &c., at Jonathans.

Hooke was using a barometer to measure how pressure changed as he walked up the pillar. He had planned the Monument very carefully – each step is exactly six inches tall – so he could track the changes in pressure with precision. Between the bottom and top of the stairs, the mercury level dropped by about a third of an inch, confirming that air pressure decreases with altitude.

Finally, a successful experiment at the Monument. Who cares that this had already been discovered three decades earlier – even if it was by simply walking up a mountain?


Stanford research: Hidden benefits of gossip, ostracism (2014)

January 27, 2014

A Stanford study finds that what you might think of as your worst qualities – talking about people behind their backs and voting others "off the island" – can offer surprising benefits for our greater harmony.

By Clifton B. Parker

New research shows that gossip and ostracism are useful tools by which groups encourage cooperation and reform bullies. (Photo: Shutterstock / CREATISTA)

While gossip and ostracism get a bad rap, they may be quite good for society, according to Stanford scholars.

Conventional wisdom holds that gossip and social exclusion are always malicious, undermining trust and morale in groups.

But that is not always true, according to a new study published in the journal Psychological Science. Robb Willer, an associate professor of sociology, explored the nature of gossip and ostracism in experimental groups in collaboration with co-authors Matthew Feinberg, a Stanford University postdoctoral researcher, and Michael Schultz from the University of California–Berkeley.

Their research showed that gossip and ostracism can have very positive effects. They are tools by which groups reform bullies, thwart exploitation of "nice people" and encourage cooperation.

"Groups that allow their members to gossip," said Feinberg, "sustain cooperation and deter selfishness better than those that don't. And groups do even better if they can gossip and ostracize untrustworthy members. While both of these behaviors can be misused, our findings suggest that they also serve very important functions for groups and society."

The research game involved 216 participants, divided into groups, who decided whether to make financial choices that would benefit their group.

Researchers commonly use this public-goods exercise to examine social dilemmas because individual participants will benefit the most by selfishly free-riding off everyone else's contributions while contributing nothing themselves.

Before moving on to the next round with an entirely new group, participants could gossip about their prior group members. Future group members then received that information and could decide to exclude – ostracize – a suspect participant from the group before deciding to make their next financial choices.

'Invest in the public good'

The researchers found that when people learn – through gossip – about the behavior of others, they use this information to align with those deemed cooperative. Those who have behaved selfishly can then be excluded from group activities, based on the prevailing gossip. This serves the group's greater good, for selfish types are known to exploit more cooperative people for their own gains.

"By removing defectors, more cooperative individuals can more freely invest in the public good without fear of exploitation," the researchers noted.

However, there is hope for the castaways. When people know that others may gossip about them – and experience the resulting social exclusion – they tend to learn from the experience and reform their behavior by cooperating more in future group settings. In contrast, highly anonymous groups, like many Internet message boards, lack accountability – allowing antisocial behavior to thrive.

"Those who do not reform their behavior," Willer said, "behaving selfishly despite the risk of gossip and ostracism, tended to be targeted by other group members who took pains to tell future group members about the person's untrustworthy behavior. These future groups could then detect and exclude more selfish individuals, ensuring they could avoid being taken advantage of."

The very threat of ostracism frequently deterred selfishness in the group. Even people who had been ostracized often contributed at higher levels when they returned to the group. "Exclusion compelled them to conform to the more cooperative behavior of the rest of the group," the researchers wrote.

The study reflects past research showing that when people know others may talk about their reputation, they tend to behave more generously. Where reputational concerns are especially strong, people sometimes engage in "competitive altruism," attempting to be highly pro-social to avoid exclusion from a group. The same appears to hold true for those returning from "exile" – the incentive is to cooperate rather than risk more trouble.

"Despite negative connotations, the pairing of the capacity to gossip and to ostracize undesirable individuals from groups has a strong positive effect on cooperation levels in groups," Willer said.

Workplace implications

Willer said the research is applicable to some workplace dynamics. For example, what if gossiping and ostracism were impossible? That result could be worrisome, he added.

"Imagine a workplace," he said, "where an employee's performance could only be seen by individuals in the immediate setting, and those individuals could not pass on what they had seen to other co-workers or supervisors. Further, imagine a work setting where managers could not fire delinquent employees. It would be hard to deter workers from cutting corners ethically or freeloading, working only when they were directly supervised."

The research advances what was previously known about gossip and ostracism at the individual level to the group level, Willer said.

Looking ahead, he and his colleagues are conducting field experiments on how the threat of gossip and exclusion affect behavior in real-world settings – in one study, calling car repair shops for estimates, with one group of callers stating they are active users of Yelp, the online review service that can make or break reputations.

As Willer points out, whether one calls it gossip or "reputational information sharing," as sociologists and psychologists do, this behavior, along with ostracism, seems fundamental to human nature.

People pass on information about how others behave in workplaces, student workgroups, business and political coalitions, on the Internet, in volunteer organizations and beyond. While much of this behavior may be undesirable and malicious, a lot of it is critical to deterring selfishness and maintaining social order in groups.

"I think it does speak to the mechanisms that keep people behaving honestly and generously in many settings and, where behavior is entirely anonymous, helps explain when they don't," Willer said.


The Origin of Japanese Tempura


In 1543, a Chinese ship with three Portuguese sailors on board was headed to Macau, but was swept off course and ended up on the Japanese island of Tanegashima. Antonio da Mota, Francisco Zeimoto and Antonio Peixoto – the first Europeans to ever step on Japanese soil – were deemed ‘southern barbarians’ by the locals because of the direction from which they came and their ‘unusual’, non-Japanese features. The Japanese were in the middle of a civil war and eventually began trading with the Portuguese, mainly for guns. And thus began a Portuguese trading post in Japan, starting with firearms and then other items such as soap, tobacco, wool and even recipes.

The Portuguese remained in Japan until 1639, when they were banished because the ruling shogun Iemitsu believed Christianity was a threat to Japanese society. As their ships sailed away for the final time, the Portuguese left an indelible mark on the island: a battered and fried green bean recipe called peixinhos da horta. Today, in Japan, it’s called tempura and has been a staple of the country’s cuisine ever since.


No-one knows the exact origins of peixinhos da horta. “We know it existed in 1543,” said Michelin-starred chef Jose Avillez when I met up with him at Cantinho de Avillez, one of his acclaimed Lisbon restaurants. “But before that, it’s anyone’s guess.”

Green beans, it turns out, changed food history.

However, peixinhos da horta was only one of many dishes the Portuguese inspired around the world. In fact, Portuguese cuisine, still heavily overshadowed by the cuisines of Italy, Spain and France, may be the most influential cuisine on the planet.


When the Portuguese turned up in Goa, India, where they stayed until 1961, they cooked a garlicky, wine-spiked pork dish called carne de vinha d’alhos, which was adopted by locals to become vindaloo, one of the most popular Indian dishes today. In Malaysia, several staples, including the spicy stew debal, hail from Portuguese traders of centuries past. Egg tarts in Macau and southern China are direct descendants of the egg tarts found in Lisbon bakeries. And Brazil’s national dish, feijoada, a stew with beans and pork, has its origins in the northern Portuguese region of Minho; today, you can find variations of it everywhere the Portuguese have sailed, including Goa, Mozambique, Angola, Macau and Cape Verde.

Peixinhos da horta were often eaten during Lent or Ember days (the word ‘tempura’ comes from the Latin word tempora, a term referring to these times of fasting), when the church dictated that Catholics go meatless. “So the way around that,” Avillez said, “[was] to batter and fry a vegetable, like the green bean. And just to add to it, we called it peixinhos da horta, little fish of the garden. If you can’t eat meat for that period of time, this was a good replacement.”


And it had other functions too. “When the poor couldn’t afford fish, they would eat these fried green beans as a substitute,” Avillez said. And sailors would fry the beans to preserve them during long journeys, much in the way humans have been curing and salting meat for preservation purposes for centuries.

Perhaps not constricted by tradition, the Japanese lightened the batter and changed up the fillings. Today, everything from shrimp to sweet potatoes to shiitake mushrooms is turned into tempura.

“The Japanese inherited the dish from us and they made it better,” Avillez said.

Avillez said Japanese people sometimes turn up at his restaurants and see the fried bean dish and say, “Hey, Portuguese cuisine is influenced by Japanese cuisine.” He added, “And that’s when I say, ‘No, in this case it’s the other way around’.” A Japanese-born sous chef at Avillez’s two-Michelin-starred Lisbon restaurant, Belcanto, even chose to train in Portugal instead of France because he recognised the influence on his home cuisine, particularly in peixinhos da horta.

Avillez said his one complaint about the dish, in general, has always been that the beans are often fried in the morning and so they go cold and limp by the time they get to the table later that day. He remedies this by not only cooking them on demand, but by adding a starch called nutrios that keeps them crispy. After the bean is blanched, it gets rolled in the batter of wheat flour, egg, milk, and nutrios and then flash fried.

Other chefs I talked to in Portugal had their own recipes for the fried green beans, but they didn’t deviate much. “It’s a very simple dish,” said chef Olivier da Costa, when I met up with him at his Lisbon restaurant Olivier Avenida, located in the Avani Avenida Liberdade hotel. “I use a batter of flour, milk, eggs, salt, pepper and beer,” he said. “Beer?” I asked. “Yes! It ferments the batter and the beer foam gives it a better taste.” He didn’t have the dish on his menu at the time so I had to take his word for it.

One reason why the Portuguese love peixinhos da horta so much, da Costa said, is nostalgia. “We all eat it as children and thus have fond memories of it. These days it’s been making a comeback, not just because people are eating more vegetarian food, but because a younger generation are taking more interest in our local cuisine and because they want to be taken back to that simpler time.”

Avillez is taking this newfound interest in super traditional Portuguese cuisine to a new level. Along with his Japanese-born sous chef, he plans to temporarily offer a tasting menu called ‘1543’, the year the Portuguese first showed up in Japan, offering peixinhos da horta and other Portuguese dishes that have inspired Japanese cuisine. Alongside the Portuguese dishes, he plans to serve the Japanese versions that evolved from the Portuguese presence in Japan four-and-a-half centuries ago.


Back at Cantinho de Avillez, an order of peixinhos da horta appeared in front of me. They were rigid like pencils with a lumpy texture and a yellowish hue. Each bite was like taking a first bite: crisp, light and super flavourful, the crunchy texture of the batter complementing the sturdy feel of the bean. The dish has been one of the only consistent items on the menu at Cantinho de Avillez, which opened in 2012.

“I can’t take it off,” Avillez said. “My regulars would be enraged.”



Safari Should Display Favicons in Its Tabs


Back in May I wrote a piece titled “Safari vs. Chrome on the Mac”. From my conclusion:

In short, Safari closely reflects Apple’s institutional priorities (privacy, energy efficiency, the niceness of the native UI, support for MacOS and iCloud technologies) and Chrome closely reflects Google’s priorities (speed, convenience, a web-centric rather than native-app-centric concept of desktop computing, integration with Google web properties). Safari is Apple’s browser for Apple devices. Chrome is Google’s browser for all devices.

I personally prefer Safari, but I can totally see why others — especially those who work on desktop machines or MacBooks that are usually plugged into power — prefer Chrome. DF readers agree. Looking at my web stats, over the last 30 days, 69 percent of Mac users visiting DF used Safari, but a sizable 28 percent used Chrome. (Firefox came in at 3 percent, and everything else was under 1 percent.)

As someone who’s been a Mac user long enough to remember when there were no good web browsers for the Mac, having both Safari and Chrome feels downright bountiful, and the competition is making both of them better.

I got a ton of feedback on this piece — way more than typical for an article. One bit I heard from a few readers is that I gave Safari/WebKit short shrift on performance — the WebKit team cares deeply about performance and with regard to JavaScript in particular, WebKit is kicking ass.

But really, taken as a whole, the response to my piece was about one thing and one thing only: the fact that Safari does not show favicons on tabs and Chrome does. There are a huge number of Daring Fireball readers who use Chrome because it shows favicons on tabs and would switch to Safari if it did.

The reaction was so overwhelming I almost couldn’t believe it.

The gist of it is two-fold: (1) there are some people who strongly prefer to see favicons in tabs even when they don’t have a ton of tabs open, simply because they prefer identifying tabs graphically rather than by the text of the page title; and (2) for people who do have a ton of tabs open, favicons are the only way to identify tabs.

With many tabs open, there’s really nothing subjective about it: Chrome’s tabs are more usable because they show favicons. Here are two screenshot comparisons between Safari and Chrome from my 13-inch MacBook Pro. The first set shows 11 tabs: the TechMeme home page plus the first 10 stories linked today. The second set shows 17 tabs: the Daring Fireball homepage and the 16 items I’ve linked to so far this week.

This is not even close. Once Safari gets to a dozen or so tabs in a window, the left-most tabs are literally unidentifiable because they don’t even show a single character of the tab title. They’re just blank. I, as a decade-plus-long dedicated Safari user, am jealous of the usability and visual clarity of Chrome with a dozen or more tabs open. And I can see why dedicated Chrome users would consider Safari’s tab design a non-starter to switching.

I don’t know what the argument is against showing favicons in Safari’s tabs, but I can only presume that it’s because some contingent within Apple thinks it would spoil the monochromatic aesthetic of Safari’s toolbar area. I really can’t imagine what else it could be. I’m personally sympathetic to placing a high value on aesthetics even when it might come at a small cost to usability. But in this case, I think Safari’s tab design — even if you do think it’s aesthetically more appealing — comes at a large cost in usability and clarity. The balance between what looks best and what works best is way out of whack with Safari’s tabs.

And it’s highly debatable whether Safari’s existing no-favicon tabs actually do look better. The feedback I’ve heard from Chrome users who won’t even try Safari because it doesn’t show favicons isn’t just from developers — it’s from designers too. To me, the argument that Safari’s tab bar should remain text-only is like arguing that MacOS should change its Command-Tab switcher and Dock from showing icons to showing only the names of applications. The Mac has been famous ever since 1984 for placing more visual significance on icons than on names. The Mac attracts visual thinkers and its design encourages visual thinking. So I think Safari’s text-only tab bar isn’t just wrong in general, it’s particularly wrong on the Mac.

I really can’t say this strongly enough: I think Safari’s lack of favicons in tabs, combined with its corresponding crumminess when displaying a dozen or more tabs in a window, is the single biggest reason why so many Mac users use Chrome.

You can even make an argument that adding favicons to Safari wouldn’t just make Safari better, but would make the entire MacOS system better, because Safari gets dramatically better battery life than Chrome. For MacBook users who spend much or most of their days in a web browser, it can mean the difference of 1-2 hours of battery life. This is actually a common refrain I heard from numerous readers back in May: that they wished they could switch from Chrome to Safari because they know Safari gets better battery life, but won’t because Safari — seemingly inexplicably — doesn’t show favicons in tabs.

Favicons wouldn’t even have to be displayed by default to solve the problem — Apple could just make it a preference setting, and power users would find it. The fact that it’s not even a preference, even though it may well be the single most-common feature request for Safari, seems downright spiteful. And not just mean-to-others spiteful, but cut-off-your-nose-to-spite-your-face spiteful. It might sound silly if you’re not a heavy user of browser tabs, but I am convinced that the lack of favicons is holding back Safari’s market share.

A pilot explains turbulence


Boeing 777 at Amsterdam Schiphol Airport. Flickr/Maarten Visser

  • Turbulence is generally harmless.
  • It also feels much worse than it actually is. 
  • Pilots try to avoid turbulence to give passengers a smoother ride.
  • But turbulence can be unpredictable.
  • Turbulence seems to be getting worse due to climate change. 

Editor's note: Patrick Smith is a commercial airline pilot who currently flies Boeing 757 and 767 aircraft. Smith also runs the blog AskThePilot.com and is the author of the book "Cockpit Confidential."

Turbulence: Spiller of coffee, jostler of luggage, filler of barf bags, rattler of nerves. But is it a crasher of planes?

Judging by the reactions of many airline passengers, one would assume so; turbulence is far and away the number one concern of anxious flyers. Intuitively, this makes sense. Everybody who steps on a plane is uneasy on some level, and there’s no more poignant reminder of flying’s innate precariousness than a good walloping at 37,000 feet. It’s easy to picture the airplane as a helpless dinghy in a stormy sea. Boats are occasionally swamped, capsized, or dashed into reefs by swells, so the same must hold true for airplanes. So much about it seems dangerous.

Except that, in all but the rarest circumstances, it’s not. For all intents and purposes, a plane cannot be flipped upside-down, thrown into a tailspin, or otherwise flung from the sky by even the mightiest gust or air pocket.

Conditions might be annoying and uncomfortable, but the plane is not going to crash. Turbulence is an aggravating nuisance for everybody, including the crew, but it’s also, for lack of a better term, normal. From a pilot’s perspective, it is ordinarily seen as a convenience issue, not a safety issue. When a flight changes altitude in search of smoother conditions, this is by and large in the interest of comfort.

The pilots aren’t worried about the wings falling off; they’re trying to keep their customers relaxed and everybody’s coffee where it belongs. Planes themselves are engineered to take a remarkable amount of punishment, and they have to meet stress limits for both positive and negative G-loads. They can withstand an extreme amount of stress, and the level of turbulence required to dislodge an engine or cause structural damage is something even the most frequent flyer — or pilot for that matter — won’t experience in a lifetime of traveling. Over the whole history of modern commercial aviation, the number of jetliner crashes directly caused by turbulence can be counted on one hand.

United Airlines pilots in a Boeing 777 cockpit. AP

Altitude, bank, and pitch will change only slightly during turbulence — in the cockpit, we see just a twitch on the altimeter — and inherent in the design of airliners is a trait known to pilots as “positive stability.” Should the aircraft be shoved from its position in space, its nature is to return there, on its own. Passengers might feel the plane “plummeting” or “diving” — words the media can’t get enough of — when in fact it’s hardly moving.

I remember one night, headed to Europe, hitting some unusually rough air about halfway across the Atlantic. It was the kind of turbulence people tell their friends about. Fewer than forty feet of altitude change, either way, is what I saw. Ten or twenty feet, if that, most of the time. Any change in heading—the direction our nose was pointed—was all but undetectable. I imagine some passengers saw it differently, overestimating the roughness by orders of magnitude. “We dropped like 3,000 feet!”

At times like this, pilots will slow to a designated “turbulence penetration speed” to ensure high-speed buffet protection (don’t ask) and prevent damage to the airframe. We can also request higher or lower altitudes, or ask for a revised routing. If you feel the plane climbing or descending mid-flight, good chance it’s because of a report from fellow pilots up ahead.

In the worst of it, you probably imagine the pilots in a sweaty lather: the captain barking orders, hands tight on the wheel as the ship lists from one side to another. Nothing could be further from the truth. The crew is not wrestling with the beast so much as merely riding things out. Indeed, one of the worst things a pilot could do during strong turbulence is to try to fight it. Some autopilots have a special mode for these situations. Rather than increasing the number of corrective inputs, it does the opposite, desensitizing the system.

Up front, you can imagine a conversation going like this:

Pilot 1: “Well, why don’t we slow it down?”
Pilot 2: “Ah, man, this is spilling my orange juice all down inside this cup holder.”
Pilot 1: “Let’s see if we can get any new reports from those guys up ahead.”
Pilot 2: “Do you have any napkins over there?”

Boeing 777X in the clouds. Boeing

Avoiding turbulence is a combination of art and science. We take our cues from weather charts, radar returns, and those real-time reports from other aircraft. Larger carriers have their own meteorology departments, and we get periodic updates from the ground.

Often, though, it’s as simple as looking out the window. Some indicators are unmistakable and relatively easy to avoid. For example, those burbling, cotton-ball cumulus clouds—particularly the anvil-topped variety that occur in conjunction with thunderstorms—are always a lumpy encounter. Flights over mountain ranges and through certain frontal boundaries will also get the cabin bells dinging, as will transiting a jet stream boundary.

But the weather is always changing, and predicting where, when, and how much turbulence will occur can sometimes be a guessing game. Every now and then it’s totally unforeseen. When we hit those bumps on the way to Europe that night, what info we had told us not to expect anything worse than mild chop. Later, in an area where stronger turbulence had been forecast, it was smooth. You just never know.

When we pass on reports to other crews, turbulence is graded from “light” to “extreme.” The worst encounters entail a post-flight inspection by maintenance staff. There are definitions for each degree, but in practice, the grades are awarded subjectively. I’ve never been through an "extreme," but I’ve had my share of "moderates" and a sprinkling of "severe."

An Airbus A380, the world's largest jetliner, generates vortices during a flying display at the 51st Paris Air Show at Le Bourget airport near Paris, June 18, 2015. Reuters/Pascal Rossignol

One of those severe instances took place in July 1992 when I was the captain on a fifteen-passenger turboprop. It was, of all flights, a twenty-five-minute run from Boston to Portland, Maine.

It had been a hot day, and by early evening, a forest of tightly packed cumulus towers stretched across eastern New England. The formations were short—about 8,000 feet at the tops, and deceptively pretty to look at. As the sun fell, it became one of the most picturesque skyscapes I’ve ever seen, with build-ups in every direction forming a horizon-wide garden of pink coral columns. They were beautiful and, it turned out, quite violent — little volcanoes spewing out invisible updrafts.

The pummeling came on with a vengeance until it felt like being stuck in an upside-down avalanche. Even with my shoulder harness pulled snug, I remember holding up a hand to brace myself, afraid my head might hit the ceiling. Minutes later, we landed safely in Portland. No damage, no injuries.

Now, it would be unwise of me to sugar coat this too much, and I concede that powerful turbulence has, on occasion, resulted in damage to aircraft and injury to their occupants. Each year worldwide, about a hundred people, half of them flight attendants, are hurt by turbulence seriously enough to require medical attention — head, neck, shoulder and ankle injuries being the most common. That works out to about fifty passengers. Fifty out of the two billion or so who fly each year. And a majority of them are people who fall or are thrown about because they aren’t belted in when they should be.

The bad news is, that number will probably be going up. If it feels like you’ve been seeing more and more news stories about dramatic turbulence encounters, that’s because you have. This is partly the result of the media’s obsession with anything related to flying, the ease with which scary-looking videos can be shared and spread online, and the fact there are more airplanes flying than ever before.

But it’s also true that the skies themselves are getting bumpier. Evidence shows that turbulence is becoming stronger and more prevalent as a byproduct of climate change. Turbulence is a symptom of the weather from which it spawns, and it stands to reason that as global warming destabilizes weather patterns and intensifies storms, experiences like the one I had over Maine, and the ones that keep popping up in the news, will become more common.

Because turbulence can be unpredictable, I am known to provide annoying, noncommittal answers when asked how best to avoid it:

"Is it better to fly at night than during the day?" Sometimes.

"Should I avoid routes that traverse the Rockies or the Alps?" Hard to say.

"Are small planes more susceptible than larger ones?" It depends.

"They’re calling for gusty winds tomorrow. Will it be rough?" Probably, but who knows.

"Where should I sit, in the front of the plane or in the back?"

Delta Premium cabin. Delta

Ah, now that one I can work with. While it doesn’t make a whole lot of difference, the smoothest place to sit is over the wings, nearest to the plane’s centers of lift and gravity. The roughest spot is usually the far aft. In the rearmost rows, closest to the tail, the knocking and swaying is more pronounced.

As many travelers already know, flight crews in the United States tend to be more twitchy with the seat belt sign than those in other countries. We keep the sign on longer after takeoff, even when the air is smooth, and will switch it on again at the slightest jolt or burble. In some respects, this is another example of American over-protectiveness, but there are legitimate liability concerns. The last thing a captain wants is the FAA breathing down his neck for not having the sign on when somebody breaks an ankle and sues. Unfortunately, there’s a cry-wolf aspect to this; people get so accustomed to the sign dinging on and off, seemingly without reason, that they ignore it altogether.

There’s also something known as “wake turbulence.”  This is a different phenomenon…

If you can picture the cleaved roil of water that trails behind a boat or ship, you’ve got the right idea. With aircraft, this effect is exacerbated by a pair of vortices that spin from the wingtips. At the wings’ outermost extremities, the higher-pressure air beneath is drawn toward the lower pressure air on top, resulting in a tight, circular flow that trails behind the aircraft like a pronged pair of sideways tornadoes.

The vortices are most pronounced when a plane is slow and the wings are working hardest to produce lift. Thus, prime time for encountering them is during approach or departure. As they rotate—at speeds that can top 300 feet per second—they begin to diverge and sink. If you live near an airport, stake out a spot close to a runway and listen carefully as the planes pass overhead; you can often hear the vortices’ whip-like percussions as they drift toward the ground.

As a rule, bigger planes brew up bigger, more virulent wakes, and smaller planes are more vulnerable should they run into one. The worst offender is the Boeing 757. A mid-sized jet, the 757 isn’t nearly the size of a 747 or 777, but thanks to a nasty aerodynamic quirk it produces an outsized wake that, according to one study, is the most powerful of any airplane.

To avoid wake upsets, air traffic controllers are required to put extra spacing between large and small planes. For pilots, one technique is to slightly alter the approach or climb gradient, remaining above any vortices as they sink. Another trick is to use the wind. Gusts and choppy air will break up vortices or otherwise move them to one side.

Winglets — those upturned fins at the end of the wings — also are a factor. One of the ways these devices increase aerodynamic efficiency is by mitigating the severity of wingtip vortices. Thus, a winglet-equipped plane tends to produce a more docile wake than a similarly sized plane without them.

Despite all the safeguards, at one time or another, every pilot has had a run-in with a wake, be it the short bump-and-roll of a dying vortex or a full-force wrestling match. Such an encounter might last only a few seconds, but it can be memorable. For me, it happened in Philadelphia in 1994.

American Airlines Boeing 757-200. American Airlines

Ours was a long, lazy, straight-in approach to runway 27R from the east, our nineteen-seater packed to the gills. Traffic was light, the radio mostly quiet. At five miles out, we were cleared to land. The traffic we’d been following, a 757, had already cleared the runway and was taxiing toward the terminal. We’d been given our extra ATC spacing buffer, and just to be safe, we were keeping a tad high on the glide path. Our checklists were complete and everything was normal.

At around 200 feet, only seconds from touchdown, with the approach light stanchions below and the fat white stripes of the threshold just ahead, came a quick and unusual nudge—as if we’d struck a pothole. Then, less than a second later, came the rest of it. Almost instantaneously, our 16,000-pound aircraft was up on one wing, in a 45-degree right bank.

It was the first officer’s leg to fly, but suddenly there were four hands on the yokes, turning to the left as hard as we could. Even with full opposite aileron—something never used in normal commercial flying—the ship kept rolling to the right. There we were, hanging sideways in the sky; everything in our power was telling the plane to go one way, and it insisted on going the other. A feeling of helplessness, of lack of control, is part and parcel of nervous flyer psychology. It’s an especially bad day when the pilots are experiencing the same uncertainty.

Then, as suddenly as it started, the madness stopped. In less than five seconds, before either of us could utter so much as an expletive, the plane came to its senses and rolled level.

Boeing 747. Boeing

If you’re interested, it’s possible to stake out a spot near an airport and actually hear wingtip vortices as they drift toward the ground:

You need to be very close to a runway — preferably within a half-mile of the end. The strongest vortices are produced on take off, but ideally, you want to be on the landing side, as the plane will be nearer (i.e. lower) at an equivalent position from the threshold. A calm day is ideal, as the wind will dissipate a vortex before it reaches the ground. About 30 seconds after the jet passes overhead you’ll begin to hear a whooshing, crackling and thundering. It’s a menacing sound, unlike anything you’ve heard before. See — or hear — for yourself in this footage captured on my iPhone.

It was taken at the Belle Isle Marsh Reservation, a popular birdwatching spot about a half-mile north of runway 22R at Boston’s Logan International Airport. The plane is a 757. Excuse the atrocious video quality, but the sound is acceptable and that’s the important thing. You begin to hear the vortices at time 0:45, and they continue pretty much to the end. Note the incredible gunshot-like noises at 0:58.

Play loud!

Read the original article on AskThePilot.com. Copyright 2017. Follow AskThePilot.com on Twitter.

Crafting plausible fantasy maps


Building a map for a fantasy setting involves a lot of details – most of them fun! Art styles, fonts, and icons need to be chosen. But some mapping concerns go beyond mere aesthetics. If you’re building a sizable chunk of continent on an Earth-like world,* you’re going to need to keep geology in mind.

Mountains

Mountains are the skeletal system of your world. Unless you already know the exact shape of your continent or island, sketch your mountains before your coastlines.* Mountains back each other up and form chains, ranges, and ridges. They take their shape from their creation:

  • Most mountains are the children of lusty continental plates, birthed from scandalous collisions between landmasses too attracted to each other for their own good. The surface crumples upward where the plates collide. Even when one plate slides beneath the other, the friction and pressure send mountains shooting upward.
  • Volcanic mountains can form near the edge of the top plate during this long, slow collision. Volcanoes are formed from the recycled remnants of the bottom plate.*
  • Plates burning themselves as they migrate over mantle plumes* deep in the planet can also give rise to volcano-blisters. These eventually form chains of mountains as the plate slowly moves over the hotspot.
  • Impacts between planets and asteroids form circular rings of mountains. Sometimes an impact crater boasts a second ring or lone peak at the center of the crater. If the crater is big enough and the world is Earth-like, it will likely be a lake or sea.

All four of these methods form chains, ridges, long plateaus, or rings. Lone mountains are rare and almost always volcanic in origin.* Hill country, worn plateaus, badlands, and other rugged-but-not-mountainous terrain is typically the eroded remains of ancient, “dead” mountains.

On Your Map:

Place mountains in rows or long blobs. If you’ve already drawn your coastlines, extend the mountain chains past them, forming peninsulas and island chains. On land, their foothills should be visible: scatter a few hills here and there at the margins of the mountains. A few ranges of lone hills (old, worn out mountain ranges) are a good idea. The occasional lone mountain is okay; it’s probably a volcano or the central peak inside a mountain-ringed impact crater.

If your goal is realistic topography, remember that mountain chains look like long fused ridges at continental scales. Most individual mountains are only discernible when the map is zoomed in to the scale of a small European country or U.S. state.*

Coastlines

Coastlines appear to be the singular, end-all defining feature of any landmass – and to human experience, they are. But a planet’s rock is stalwart and little concerned with its surroundings, whether gaseous atmosphere or liquid ocean. On human timescales (thousands of years or less) water is largely irrelevant to solid land. An island is just a mountain surrounded by ocean, and a mountain is just an island with the water drained down. Any contour line on a topographic map or any depth line on a nautical chart could theoretically be a coastline. So outline your mountains first. Then for your coastlines, just add water.

When water is poured over a landscape formed of continents and mountains, what does it do? Land isn’t arbitrarily arranged on a planet. Plate tectonics sort rock into lighter and heavier, and clumps the lighter rock together as continents. And at any level of magnification closer than a full global picture, coastlines will have a similar, fractal look.

This fractal nature means you can focus on the coast of a small territory or examine the margins of a continent, and the contours* should be about the same. In fact, if you aren’t feeling very inspired, you can take a zoomed-in island from Earth* and make it your fantasy continent, or vice versa.*

But there are some quirky exceptions to the fractal behavior of coastlines, and some places where the shore may look distinctly different:

  • Land sorts into large continents from a distant enough perspective, and tidal flats and beaches complicate matters when zoomed far in.
  • Mountains, glaciers, and (most importantly) a combination of the two affect coastlines. The glacier-carved fjords, inlets, and island ranges of Chile, British Columbia, and Norway are complex, convoluted affairs. And these regions don’t look the same at all levels of magnification.
  • Coastlines also get a bit weird around river deltas. The land bulges outward, but this bulge is carved up by multiple river channels.
  • Certain flat regions, like North America’s eastern seaboard, can have complex chains of ever-shifting barrier islands.
  • Continents that haven’t had sea-level ice for hundreds of millions of years might have smooth-looking coastlines without a lot of inlets or islands; Africa and Australia are perfect examples.

On Your Map:

  • Respect the fact that land likes to clump together into continents; don’t arbitrarily blend land and sea at large scales.*
  • Make mountainous coastlines in cold areas – or areas that were once glaciated – especially “fiddly.” Add fjords, inlets, outlying islands, that sort of thing.
  • Some occasional long, skinny barrier islands running parallel to the coast but smoother than it, can add a realistic feel.

Otherwise just make sure that most of your coastlines are rough or jagged.

Rivers

Mountains and coastlines follow general patterns but few hard-and-fast rules. Not so with rivers; the courses of rivers are dictated by simple logic and never deviate without a lot of magic. Even the most vigorous hand-waving won’t save you from ridicule if one of your rivers flows uphill or follows a circular loop.

Rivers are like coastlines in that their shape is (arguably) fractal, and their placement is a matter of contour lines. Except that rivers are perpendicular to the contours rather than parallel. Rivers always move along the easiest available path from high elevation to low. And the fractal pattern of rivers is directional: rivers will always merge as they flow toward the coast, never split.*

Rivers often find themselves trapped at the bottoms of massive canyons eroded over millions of years, prisons of their own making. However, rivers are not permanently fixed in place like mountains; they can migrate. Over flat country, meandering rivers shift their course by growing continually wider loops.* Eventually these loops come full circle and “pinch off,” straightening the river once more. But in these scenarios the river stays within its basin. In a mountainous region, a river may change course if rockslides or glaciers block it, forcing the waters to find an entirely new route to the sea. And rivers can be tamed by humans. Even ancient humans frequently diverted rivers along different courses, typically for purposes of irrigation or flood regulation.

Many rivers will be fed by regions with seasonally variant rainfall* or spring melts, and have periods of very low or high (flood-level) flow. As an example, the Fraser is a fairly short river in Western Canada, but during the wet spring, snowmelt combined with massive rainfall gives this river higher flow volumes than the Mississippi, North America’s longest river. Some rivers – especially those winding through hot deserts – may be completely dry and “dead” for part of the year. Or, alternatively, completely frozen.

On Your Map:

Here are some river placement rules, in order of importance from “avoid deviation at all costs” to “explain yourself if you get creative”:

  • Rivers never completely cross a continent or other landmass; they never start in the same body of water they empty into!*
  • Rivers never form loops. If they appear to, these are actually ring-shaped lakes.
  • Major rivers will always end at an ocean or, occasionally, a lake.* Smaller rivers are like people: they can die long, slow, painful deaths in attempted crossings of a vast desert.
  • Rivers always join together as they flow toward the sea; they never branch out. Exceptions are made in flat deltas, where the river deposits islands of silt as it empties into an ocean or lake. These are typically small (no bigger than a large city) but can fan out over large distances if the terrain is flat enough. For example, Bangladesh, the world’s eighth-most populous country, consists almost entirely of a single conjoined delta of three large rivers.
  • Rivers tend to start at mountains – or occasionally lakes, wetlands, or hills. Mountains catch more water, and they hold significant water as ice that can melt and feed rivers during the spring. If there aren’t any mountains handy, use a forest; these imply sufficient water to found a river.

Lakes

Lakes are generally just a consequence of rivers getting delayed in depressed portions of a continent on their way to the sea. Any time a bowl-shaped, walled-off region exists inside a landmass, its fate depends on how much rain it gets and whether it has a river pouring into it.

If the depressed region is a desert, it will be a dry endorheic basin.* It may have a small salt lake in its bottom, because salts will accumulate over millions of years with no way to escape to an ocean. Or it could have a more recently-filled fresh or brackish lake. If it is rainy or fed by a river, it will fill up until the water can spill out at its lowest point, making a freshwater lake that is part of a larger drainage area.

There’s no real limit on lake size, though continents don’t usually have giant holes in them by default. Earth’s largest lakes have one of two backstories: they were carved recently by glaciers,* or they were formed when pieces of continents began to rift apart.* Rarely, they are water-filled impact craters.

Cold temperate and subarctic regions tend to be peppered with little lakes, connected by a tangle of streams and rivers. This is especially true for cold regions that are rocky over short scales but flat over larger scales. Rockiness means the rivers can’t just carve a straight line through soft ground to their goal. They’re stuck filtering through a series of pools on their way to the ocean. The Canadian Shield* is so dense with lakes and connecting rivers that its “land” is nearly half water.

Like rivers, lakes can shift their boundaries. But when lakes drain or fill, they tend to impact a much larger area. These regional apocalypses can be caused by earthquakes, ice dams, or human intervention.* Consider these as great sources of historical trauma for your world. Any fertile river valley that becomes a lakebed has a good chance of drowning an entire civilization. And any inland sea of freshwater that dries up is also going to bring down a civilization or two, if for different reasons. Both of these things have happened throughout humanity’s early history – if not always on fantastically grand scales.

Less apocalyptic but still interesting are gradual or seasonal changes to a lake’s shoreline. Many lakes, especially in Africa and Australia, fill during monsoons but shrink or empty out during the dry season. This is common in flat regions with shallow, gradual coastlines. Lakes in the mountains might gain or lose similar amounts of water, but their shores are typically so steep they wouldn’t look any different on a map.

On Your Map:

  • Treat the coasts of large lakes like a continent’s outer coastline. Any mountain range that ends at a large lake should poke into it a bit, perhaps ending in a few islands. Small lakes are more like little inlets or channels, filling the spaces between mountains and often mimicking rivers in their long, narrow shape.*
  • Consider dotted or blurred lines for some lakes, since the shores are not always stable. In regions with glacial ice dams or seasonal monsoon rains, lakes could empty and fill over the course of a year.
  • You can pepper cold, wet regions with lakes, especially if the land is flat enough.* But don’t put as many lakes in hot, dry regions unless they’re fed by large rivers. Consider placing salt flats here instead.
  • In flat country, consider decorating meandering rivers with elbow lakes. These are remnants of the river’s previous courses; they look like curved pieces of pinched-off river.
  • Feel free to string multiple lakes along the course of a river, like beads on a string. This is especially common in cold regions or on rivers held back by dams (natural or human-made).

Vegetation

Not all maps bother to show vegetation cover, but very large plants matter enough to human travel and habitation that fantasy maps will almost always mark forests. More detailed maps may also differentiate farmers’ fields, barren tundra, deserts, prairie, ice sheets, and marshes. I have a full post on placing vegetation, but here I’ll give a quick summary:

Interplay of Climate and Terrain

Climate zones are critical to a region’s vegetation, and the biggest predictor for climate is latitude.* Any terrestrial planet will have bands of wet and dry. The climate tends to be wet near the equator and again in temperate regions.* Conversely, most land is dry about a third of the way to the poles* and dry* again at the poles themselves.

Mountains tend to be colder and wetter than their surroundings, and frequently one side of a mountain range will be wetter or milder than the other. If very different eco-regions don’t have mountains separating them, their transition will be gradual. For example, there’s rarely a cut-and-dry boundary between temperate forest and desert on flat ground. Instead there’s a gradual transition first to savannah, then grassland, then arid scrubland, and finally high desert.

Settlements

Humans (especially pre-industrial, agricultural humans) like to settle near rivers. All the earliest agricultural civilizations grew up along rivers.* Fresh, moving water provided something to drink, irrigation for crops, power for watermills* and highways for high-volume trade. In the ancient world, it was much easier to trade large quantities of goods via water than it was overland. This remained true until – well, until forever. It’s still true. Ocean coasts and rivers have tradeoffs: both offer good shipping routes, but rivers are sources of fresh water and fertile sediment while oceans are superior for fishing and whaling. Cities that become large and prosperous require not only water and resources but also trade.

Humans also construct some fortifications or settlements in defensible positions, such as atop hills and cliffs or on islands. But these structures will still be close to their centers of civilization (river valleys and coastlines), especially at first. Some fortresses may be found guarding remote mountain passes as well, once agricultural city-states grow into large nations.

Different humanoids – and different human cultures – could find different regions attractive for settlement. In high-stereotype fantasy, dwarves prefer underground or mountainous areas and elves prefer great forests.* Drow and sun-sensitive orcs, goblins, or trolls may require underground dwellings. But access to the lands above may be essential for resources or ventilation, necessitating karst landscapes rich with caves. Even garden-variety humans can grow large cities in nontraditional areas. This is especially possible once modern technology, magic, or domesticating a new herd animal makes it viable. For example, in North America we see cities in barren inland regions* due to modern technologies such as air conditioning; plus, highways and railways have become more important trade routes than rivers.

On Your Map:

Put most of your cities on rivers and/or coastlines unless you have a story-relevant reason not to. Don’t completely disregard this in science fiction, either. We humans might not be so reliant on rivers as we once were, but we still like them. The biggest cities tend to show up where trade routes intersect – at branches in rivers or on the coastline near an important mountain pass.

Roads

Roads must connect important centers of habitation and trade, and they also follow some of the same rules as rivers – if not so strictly. Roads will often track a river or coastline, because these are easy places to build roads* and most people will be living there anyway. Unlike rivers and coastlines, roads do occasionally need to pass over mountains. Humans will find the lowest and easiest possible routes through mountain ranges, and these routes will be important choke points for defense.

On Your Map:

If you choose to include roads, make sure you keep their style visually distinct from rivers, since they’ll be drawn alongside each other quite a lot. Otherwise, just play connect-the-dots with cities, and make roads sparser and more squiggly over hills or mountains.

Boundaries

Political boundaries are a special case. For starters, they’re drawn only onto maps and rarely marked on the face of the world itself. Boundaries have more in common with abstract human creations like “justice” and “sin” than with concrete* creations such as roads and towers. Yet they do follow certain rules, just like natural features and the less abstract trappings of civilization.

Boundaries often follow obvious natural barriers like mountains and rivers. But river boundaries have a different flavor than boundaries marked by a high mountain range, empty desert, or large lake. Though a river marks an arbitrary line that a boundary can follow, by default an agricultural civilization will fill up as much of a river valley as it can. Thus a boundary along a river is an “artificial” divide, likely separating two halves of the same culture. Rivers are common places for imperial conquerors to divide land, but they are almost never natural cultural boundaries. Not to mention, rivers on flat ground shift over time – so they can be unreliable for demarcation.

Boundaries that evolve naturally between two nations are more commonly mountains*, arbitrary lines in uninhabited deserts, choke-points on peninsulas, or wide bodies of water such as lakes or sea channels. Nations don’t always align perfectly to regions divided by these features – but there’s a good correlation. This correlation is strongest where the nations evolved naturally and weren’t given arbitrarily-drawn boundaries by imperial powers carving up the world.

On Your Map:

Before placing boundaries, ask yourself whether the boundaries were placed by locals or by external empires.

If these boundaries arose naturally between the local inhabitants, force your boundaries into uninhabited or difficult-to-traverse regions. If boundaries have been imposed by outsiders* at some point in history,* rivers and arbitrary straight lines should be more common.

Behold: The Realm of Badmap!

Getting a feel for how maps should look can take time, especially since there are so few hard-and-fast rules.* That’s where analyzing other maps – both real and fictional – can help. Most maps are pretty reasonable affairs; it’s rare to find one so ridiculously ill-conceived that it can illustrate all the “don’ts” of mapmaking in one place.

And so I’ve made that exact thing for your viewing displeasure! Can you point out all the very bad, not good features on the following map of the Realm of Badmap?*

Badmap

I haven’t included boundaries or roads (they’re not always needed), but everything else is there. I’ve used the amazing Sketchy Cartography Brushes by Deviant Art’s StarRaven, which I highly recommend. I apologize for exploiting the brushes for this work of cartography evil. It had to be done. For, uh… for Science.

A Reasonable Map

Here’s a variant on the same general landmass but with reasonable features. I’ve tried to be as consistent as possible with the art style so that the only major differences are geological and climatological.*

ReasonableLands

This map still isn’t “perfect” because there is no singular perfect; the permutations of acceptable are vast. In fact, it’s statistically likely for a map of any given region to be not perfect. Even on Earth, curious “un-Earth-like” features do occur here and there. So for detailed continent-sized maps, I advise consciously adding an odd feature or two. The map I drew has some borderline-believable features. The big southern river exits to the coast through a mountain range.* The barrier islands are a bit far from the coastline in the northwest. And the vegetation patterns could suggest a reversed (compared to Earth) rotation of the planet.*

Be prepared to explain the existence of any anomaly on your map. Semi-plausible oddities can add interest to your world’s geological history or aesthetic, or cause some confusion among your world’s scholars. Why do those rivers seem to cross each other? Because of elaborate irrigation canals built by the Ancients making artificial under/over paths for the sacred streams. Why are there only foothills on one side of the mountain range? Because of the way the continent buckled to form them, or possibly the Earth God was just lazy that day.

But hopefully your maps will keep mostly to the plausible and believable, thanks in part to the guidelines and examples of this post. It would be a shame to craft a perfect story or game only to be mocked for your world’s impossible river or misplaced mountains.


The ethereum “hacker” didn't hack anything


The blockchain doesn’t need a judge

A couple of weeks back, there was an incident where a “hacker” was able to take advantage of an issue in an Ethereum smart contract to “take” $30MM worth of Ethereum (ETH).

You can see the transactions on Etherscan here

But I’m not sure that “theft” is the appropriate term to use in this situation. In fact, this incident disproves the viability of Ethereum smart contracts in general.

To understand why I’ve come to believe this, first let’s talk about what it means to own a cryptocurrency.

You don’t actually possess cryptocurrency unless you run a node

When you own a cryptocurrency, you can’t really “hold it in your wallet” like you could have a $100 bill in your physical wallet. In crypto, your wallet is actually a key which is used to control the associated account number on the Blockchain.

This isn’t all that different than how banks work.

Your bank debit card allows you to make charges or deposits against a digital representation of the amount of money that is held by the bank. The history of past bank transactions (the ledger) is maintained on bank computers within private networks.

With cryptocurrencies, however, the ledger is stored on a network of computers all around the world. All of these computers “vote” (gain consensus) on a group of transactions to determine which are valid and which are not.

How exactly the consensus system works is outside of the scope of this post, but what you really need to know is that because of the way Ethereum works, anyone who is running an Ethereum node has access to everything: every Ether ever created, every balance on every account and, most importantly, every smart contract that was ever written.

So the alleged hacker didn’t need to break into a computer – which is what the US government considers hacking. The alleged hacker just needed to run an Ethereum node, at which point the ETH she “took” and the contract that she “exploited” were actually already on her computer.

Can you take something that has been given to you?

If you give your money to a bank, and they keep it, we don’t call that a hack.

Believe it or not, this happens all of the time.

When you do something like send a wire from your bank account, the bank charges you a fee. If the bank charges you more than it is supposed to, you would need to convince a judge that the bank violated the terms of the agreement you had set up with it. If the judge agrees with you, the judge forces the bank to suffer the consequences.

The whole idea behind the blockchain is to remove the need to trust a judge or any other singular 3rd party to maintain the history of transactions or resolve disputes. This is done via the consensus process I mentioned above.

This is easy on Bitcoin because when you send a Bitcoin to someone, there isn’t much to dispute. You sent the coin, or you didn’t.

With Ethereum, this is a little different.

Ethereum allows people to setup more complex agreements between parties. These agreements are called smart contracts.

I like to think of a smart contract as an on-blockchain vending machine. It sits there, and when you deposit a token, it does some work and outputs something else. Behind the scenes, a smart contract is really just a computer program that executes on blockchain nodes. The big problem is that, like all computer programs, it’s nearly impossible to build something without bugs.

The alleged hacker violated no agreement

In this case, the bug in the contract clearly allowed for an owner to be changed after the contract was initialized. The alleged hacker didn’t do anything that the contract didn’t allow; they merely sent a message that executed the contract as it was written.
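To make that concrete, here is a toy sketch in Julia (not Solidity, and not the actual contract; ToyWallet, init! and the account holders are purely illustrative names) of the kind of flaw being described: an initialisation routine that can be called again after setup, so any caller can make themselves the owner.

mutable struct ToyWallet
    owner::String
    balance::Float64
end

# Nothing in the code prevents this from being called a second time.
init!(w::ToyWallet, caller::String) = (w.owner = caller; w)

function withdraw!(w::ToyWallet, caller::String, amount)
    caller == w.owner || error("not the owner")
    w.balance -= amount
    return amount
end

w = ToyWallet("alice", 100.0)
init!(w, "mallory")               # re-initialisation silently changes the owner
withdraw!(w, "mallory", 100.0)    # permitted by the code exactly as written

Nothing in this toy program is violated by the second call to init!; the code simply does what it says, which is the point being made about the real contract.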

So moving forward we should all refer to the alleged hacker as the “contract executor” which is more accurate.

You may think the intent behind the contract was to keep the Ether safe. And you are probably right. Computers don’t understand intent, though. Computers blindly execute the instructions they are given, so for the intent argument to mean anything, you would need to argue it in front of a judge. One of the core ideas behind the blockchain is to remove the need for any type of argument like this. The balances (state) on the chain are whatever state is agreed upon by the network, not by a group of judges.

If we are arguing about intent, the requirement to have a 3rd party decide on that intent negates the whole idea behind a contract on the blockchain in the first place.

The smart contract is the authority

Recently I also found out that æternity (one of the “victims” of the contract execution) is working with the authorities to track down the contract executor.

“Together with æternity’s lawyer, we have contacted the Liechtenstein police authorities and filed charges. The police will forward the matter to Interpol.”

As I said above, what makes the Blockchain worth using is its decentralization. If we need the courts to enforce transactions on the blockchain, how is that any better than the current non-blockchain method of transacting with people?

By going to the authorities, æternity is admitting that the Blockchain isn’t an improvement at all. Which means that the old-school method of writing a “real world” contract between partners would have worked just fine.

In æternity’s case, they could have done a partnership agreement and sent the cash to a bank account that requires multiple signatures for withdrawing funds. In this case, they would have been admitting that the contract language may be flawed and that you may need a 3rd party to decide what it really meant.

“But what about their ICO,” you may be thinking, “they needed the blockchain to raise money.” Well, there are still ways of funding projects off chain that have proven themselves over history to be secure and stable. Living by the ICO sword also means that you should be willing to die by it.

Especially if you are a blockchain company.

So there you have it: a hack that wasn’t a hack. It’s just an execution of a contract with a bug. Arguing about the intent of the contract would require someone to decide what the correct state should be, which negates the whole purpose of a smart contract to begin with.

Since the decentralized consensus process is really the only thing that is valuable about the blockchain, the above means that on-chain smart contracts are useless.

Tags: ethereum and cryptocurrency

Creating domain-specific languages in Julia using macros (like lisp)


Since the beginning of Julia, it has been tempting to use macros to write domain-specific languages (DSLs), i.e. to extend Julia syntax to provide a simpler interface to create Julia objects with complicated behaviour. The first, and still most extensive, example is JuMP.

Since the fix for the infamous early Julia issue #265, which was incorporated in Julia 0.6, some previous methods for creating DSLs in Julia, mainly involving eval, ceased to work.

In this post, we will describe a recommended pattern (i.e., a reusable structure) for creating DSLs without the use of eval, using syntax suitable for Julia 0.6 and later versions; it is strongly recommended to upgrade to Julia 0.6.

Creating a Model object containing a function

This blog post arose from a question in the JuliaCon 2017 hackathon about the Modia modelling language, where there is a @model macro. Here we will describe the simplest possible version of such a macro, which will create a Model object that contains a function, and is itself callable.

First we define the Model object. It is tempting to write it like this:

struct NaiveModel
    f::Function
end

We can then create an instance of the NaiveModel type (i.e., an object of that type) using the default constructor, e.g. by passing it an anonymous function:

julia> m1 = NaiveModel(x -> 2x)
NaiveModel(#1)

and we can call the function using

julia> m1.f(10)
20

If we wish instances like m1 to themselves behave like functions, we can overload the call syntax on the NaiveModel type:

julia> (m::NaiveModel)(x) = m.f(x)

so that we can now just write

julia> m1(10)
20

Parametrising the type

Since Function is an abstract type, for performance we should not have a field of this type inside our object. Rather, we parametrise the type using the type of the function:

struct Model{F}
    f::F
end

(m::Model)(x) = m.f(x)

julia> m2 = Model(x -> 2x)
Model{##3#4}(#3)

Let’s compare the performance:

julia> using BenchmarkTools

julia> @btime m1(10);
  41.482 ns (0 allocations: 0 bytes)

julia> @btime m2(10);
  20.212 ns (0 allocations: 0 bytes)

Indeed we have removed some overhead in the second case.

Manipulating expressions

We wish to define a macro that will allow us to use a simple syntax, of our choosing, to create objects. Suppose we would like to use the syntax

m = @model 2x

to define a Model object containing the function x -> 2x. Note that 2x on its own is not valid Julia syntax for creating a function; the macro will allow us to use this simplified syntax for our own purposes.

Before getting to macros, let’s first build some tools to manipulate the expression 2x in the correct way to build a Model object from it, using standard Julia functions.

First, let’s create a function to manipulate our expression:

function make_function(ex::Expr)
    return :(x -> $ex)
end

julia> ex = :(2x);

julia> make_function(ex)
:(x -> begin  # In[12], line 2:
            2x
        end)

Here, we have created a Julia expression called ex, which just contains the expression 2x that we would like for the body of our new function, and we have passed this expression into make_function, which wraps it into a complete anonymous function. This assumes that ex is an expression containing the variable x and makes a new expression representing an anonymous function with the single argument x. (See e.g. my JuliaCon 2017 tutorial for an example of how to walk through the expression tree in order to extract automatically the variables that it contains.)
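As a side note, here is a minimal sketch (not from the original post; the function name find_variables and its details are illustrative assumptions) of how such a walk over the expression tree might look, collecting the symbols that appear in an expression:

function find_variables(ex, vars = Symbol[])
    if ex isa Symbol
        ex in vars || push!(vars, ex)      # record each variable once
    elseif ex isa Expr
        # in a call like :(2 * a * x), skip the function name (*)
        args = ex.head == :call ? ex.args[2:end] : ex.args
        for arg in args
            find_variables(arg, vars)
        end
    end
    return vars
end

find_variables(:(2a + x^2))    # returns [:a, :x]

A helper along these lines would let the macro build the anonymous function's argument list automatically instead of hard-coding x.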

Now let’s define a function make_model that takes a function, wraps it, and passes it into a Model object:

function make_model(ex::Expr)
    return :(Model($ex))
end

julia> make_model(make_function(:(2x)))
:(Model((x -> begin  # In[12], line 2:
                2x
            end)))

If we evaluate this “by hand”, we see that it correctly creates a Model object:

julia> m3 = eval(make_model(make_function(:(2x))))
Model{##7#8}(#7)

julia> m3(10)
20

Macros

However, this is ugly and clumsy. Instead, we now wrap everything inside a macro. A macro is a code manipulator: it eats code, massages it in some way (possibly including completely rewriting it), and spits out the new code that was produced. This makes macros an incredibly powerful (and, therefore, dangerous) tool when correctly used.

In the simplest case, a macro takes as argument a single Julia Expr object, i.e. an unevaluated Julia expression (i.e., a piece of Julia code). It manipulates this expression object to create a new expression object, which it then returns.

The key point is that this returned expression is “spliced into” the newly-generated code in place of the old code. The compiler will never actually see the old code, only the new code.

Let’s start with the simplest possible macro:

macro model(ex)
    @show ex
    @show typeof(ex)
    return nothing
end

This just shows the argument that it was passed and exits, returning nothing.

julia> m4 = @model 2x
ex = :(2x)
typeof(ex) = Expr

We see that the Julia Expr object has been automatically created from the explicit code that we typed.

Now we can plug in our previous functions to complete the macro’s functionality:

julia> macro model(ex)
           return make_model(make_function(ex))
       end
@model (macro with 1 method)

julia> m5 = @model 2x
Model{##7#8}(#7)

julia> m5(10)
20

To check that the macro is doing what we think it is, we can use the @macroexpand command, which itself is a macro (as denoted by the initial @):

julia> @macroexpand @model 2x
:((Main.Model)((#71#x -> begin  # In[12], line 2:
                2#71#x
            end)))

Macro “hygiene”

However, our macro has an issue, called macro “hygiene”. This has to do with where variables are defined. Let’s put everything we have so far inside a module:

module Models

export Model, @model

struct Model{F}
    f::F
end

(m::Model)(x) = m.f(x)

function make_function(ex::Expr)
    return :(x -> $ex)
end

function make_model(ex::Expr)
    return :(Model($ex))
end

macro model(ex)
    return make_model(make_function(ex))
end

end

Now we import the module and use the macro:

julia> using Models

julia> m6 = @model 2x;

julia> m6(10)
20

So far so good. But now let’s try to include a global variable in the expression:

julia> a = 2;

julia> m7 = @model 2*a*x
Models.Model{##7#8}(#7)

julia> m7(10)
UndefVarError: a not defined
Stacktrace:
 [1] #7 at ./In[1]:12 [inlined]
 [2] (::Models.Model{##7#8})(::Int64) at ./In[1]:9

We see that it cannot find a. Let’s see what the macro is doing:

julia> @macroexpand @model 2*a*x
:((Models.Model)((#4#x -> begin  # In[1], line 12:
                2 * Models.a * #4#x
            end)))

We see that Julia is looking for Models.a, i.e. a variable a defined inside the Models module.

To fix this problem, we must write a “hygienic” macro, by “escaping” the code, using the esc function. This is a mechanism telling the compiler to look for variable definitions in the scope from which the macro is called (here, the current module Main), rather than the scope where the macro is defined (here, the Models module):

module Models2

export Model, @model

struct Model{F}
    f::F
end

(m::Model)(x) = m.f(x)

function make_function(ex::Expr)
    return :(x -> $ex)
end

function make_model(ex::Expr)
    return :(Model($ex))
end

macro model(ex)
    return esc(make_model(make_function(ex)))
end

end

julia> using Models2

julia> a = 2;

julia> m8 = @model 2*a*x
Models2.Model{##3#4}(#3)

julia> m8(10)
40

This is the final, working version of the macro.

Conclusion

We have successfully completed our task: we have seen how to create a macro that enables a simple syntax for creating a Julia object that we can use later.

For some more in-depth discussion of metaprogramming techniques and macros, see my video tutorial Invitation to intermediate Julia, given at JuliaCon 2016:

  • Video: https://www.youtube.com/watch?v=rAxzR7lMGDM
  • Jupyter notebooks: https://github.com/dpsanders/intermediate_julia

Author: David P. Sanders, Associate Professor, Department of Physics, Faculty of Sciences, National University of Mexico (UNAM).


