Channel: Hacker News

Japan Is Selling Ice Cream That Doesn't Melt


The future is now, folks: Japanese scientists have discovered a completely organic way to make ice cream retain its shape and not melt for several hours. Your taste buds won't notice, unless you have the patience and willpower to observe rather than eat your frozen treats this summer.

According to the Asahi Shimbun, a Japanese daily newspaper, scientists at Biotherapy Development Research Center Co. in Kanazawa stumbled upon the miracle-working method by accident earlier this year. Researchers had reportedly asked a pastry chef to create a dessert using polyphenol liquid, extracted from strawberries, in an effort to help out strawberry farmers whose crops were suffering after the earthquake and tsunami in eastern Japan in 2011. The frustrated chef told scientists that "dairy cream solidified instantly when strawberry polyphenol was added," and although he believed there was "something suspicious" about the polyphenol, one researcher at the center immediately realized the natural compound's potential for greatness.

Through trial and error, Tomihisa Ota, professor emeritus of pharmacy at Kanazawa University, soon developed non-melting popsicles. "Polyphenol liquid has properties to make it difficult for water and oil to separate so that a popsicle containing it will be able to retain the original shape of the cream for a longer time than usual and be hard to melt," he told the Asahi Shimbun. The popsicles went on sale in Kanazawa, Osaka, and Tokyo in April and, according to one of the newspaper's staffers, they definitely live up to the hype. "When heat from a dryer was applied in an air-conditioned room, a vanilla popsicle that was purchased from a regular shop began melting around the edges almost instantly," according to the intrepid reporter. "But the Kanazawa Ice retained its original shape even after five minutes. It also tasted cool."

Another local news site, SoraNews24, purchased a bear-shaped popsicle from Kanazawa Ice in early July and documented the changes it underwent over the course of three hours at room temperature. By the end of the time-lapse video, although the popsicle stick could be removed from the bear's stomach with little resistance, the ice cream creature still reportedly tasted cool and had largely retained its shape, spreading only a tiny bit as it sank into its paper bed.

Foodstagrammers, this one's for you: You can now take your time searching for the perfect shot and still enjoy every last drop of your Kanazawa Ice.


Scientists Find Record 2.7M-Year-Old Ice Core in Antarctica


Back in 2010, a group of scientists drilling in Antarctica pulled up a one-million-year-old chunk of ice. At the time, it was the oldest ice core ever discovered. But as Paul Voosen reports for Science, the team recently dug even deeper into Earth’s glacial history, unearthing an ice core that dates back 2.7 million years.  

The chilly discovery was made in the Allan Hills region of Antarctica, in an area of largely untouched blue ice. Typically, as Sarah Laskow explains in Atlas Obscura, scientists drill into ice made up of continuous layers, each one compacted over time. But that type of ice does not preserve its oldest layers, which eventually are melted by the Earth’s internal heat. The team consequently looked to blue ice, which is layered not by age, but rather is formed in exposed areas where any net addition or subtraction of snow is mitigated due to wind and sublimation. It is because of that, Voosen writes, that “old layers are driven up...revealing the lustrous blue of compressed ice below."

There is a drawback to studying blue ice, however; because it is not organized into neat layers, it is difficult to date. So Michael Bender, a Princeton geochemist, devised a solution that involved measuring the amount of argon and potassium contained within a piece of ice. It isn’t the most accurate method—there is a margin of error of about 100,000 years—but it can give researchers a fairly good picture of an ice core’s age.

But why, you may ask, are researchers on the hunt for ancient ice? As Trevor Nace explains in Forbes, ice cores from the Arctic and Antarctica can tell us a lot about the climates and atmospheres of past epochs. When snow first falls, it is fluffy and airy; over time, as it gets covered with successive layers of snow, it becomes compacted, its air pockets are forced out and it begins to transform into ice. But even ancient ice contains tiny bubbles—and those little bubbles have roughly the same air composition as they did when the original layer of snow first fell.

The team’s findings, which were presented at the Goldschmidt Conference in Paris, revealed that the 2.7-million-year-old ice contained air bubbles with carbon dioxide concentrations of no more than 300 parts per million (ppm), well below today's levels (which exceeded 410 ppm for the first time in millennia this April). The ice may offer a glimpse of conditions at the beginning of an ice age; as Laskow points out, experts have theorized that such low carbon dioxide levels played a role in pushing Earth into a series of significant cold periods.

Moving forward, the team plans to continue exploring blue ice, in search of ice dating back five million years. According to Nace, they are looking to go back to a time when carbon dioxide levels were comparable to what they are today. By unearthing Earth’s frosty history, they hope to be able to better understand where the planet is heading in the future.


Namecheap has taken down Neo-Nazi site Daily Stormer


Neo-Nazi news site the Daily Stormer will need to find another registrar after Namecheap announced on Sunday that it will not host the site. The hate site was registered with the company on Friday, after being kicked off GoDaddy, Google Domains, and a Russian registrar earlier in the week.

On Friday, the company’s Twitter account began replying to users about the registration, saying its legal and abuse department was “already looking into it.”

In a blog post, Namecheap CEO Richard Kirkendall said that he examined the content of the website, and found that “paired with the support for violent groups and causes [it] passes from protected free speech into incitement.”

Namecheap was criticized for accepting the registration, with some users saying that they were moving their websites to a new registrar. Kirkendall has been addressing replies on Twitter, where he said that he and his company “don’t make rash decisions around freedom of speech.”

For the moment, while the Daily Stormer can be accessed through the anonymous Tor network, it is offline for most internet users.

What is Hindley-Milner and why is it cool? (2008)


Anyone who has taken even a cursory glance at the vast menagerie of programming languages should have at least heard the phrase “Hindley-Milner”.  F#, one of the most promising languages ever to emerge from the forbidding depths of Microsoft Research, makes use of this mysterious algorithm, as do Haskell, OCaml and ML before it.  There is even some research being undertaken to find a way to apply the power of HM to optimize dynamic languages like Ruby, JavaScript and Clojure.

However, despite widespread application of the idea, I have yet to see a decent layman’s explanation of what the heck everyone is talking about.  How does the magic actually work?  Can you always trust the algorithm to infer the right types?  Further, why is Hindley-Milner better than (say) Java?  So, while those of you who actually know what HM is are busy recovering from your recent aneurysm, the rest of us are going to try to figure this out.

Ground Zero

Functionally speaking, Hindley-Milner (or “Damas-Milner”) is an algorithm for inferring value types based on use.  It literally formalizes the intuition that a type can be deduced from the functionality it supports.  Consider the following bit of pseudo-Scala (not a flying toy):

def foo(s: String) = s.length

// note: no explicit types
def bar(x, y) = foo(x) + y

Just looking at the definition of bar, we can easily see that its type must be (String, Int)=>Int.  As humans, this is an easy thing for us to intuit.  We simply look at the body of the function and see the two uses of the x and y parameters.  x is being passed to foo, which expects a String.  Therefore, x must be of type String for this code to compile.  Furthermore, foo will return a value of type Int.  The + method on class Int expects an Int parameter; thus, y must be of type Int.  Finally, we know that + returns a new value of type Int, so there we have the return type of bar.

This process is almost exactly what Hindley-Milner does: it looks through the body of a function and computes a constraint set based on how each value is used.  This is what we were doing when we observed that foo expects a parameter of type String.  Once it has the constraint set, the algorithm completes the type reconstruction by unifying the constraints.  If the expression is well-typed, the constraints will yield an unambiguous type at the end of the line.  If the expression is not well-typed, then one (or more) constraints will be contradictory or merely unsatisfiable given the available types.

Informal Algorithm

The easiest way to see how this process works is to walk through it ourselves.  The first phase is to derive the constraint set.  We start by assigning each value (x and y) a fresh type variable (one that does not stand for any existing type).  If we were to annotate bar with these type variables, it would look something like this:

def bar(x: X, y: Y) = foo(x) + y

The type names themselves are not significant; the important restriction is that they do not collide with any “real” type.  Their purpose is to allow the algorithm to unambiguously reference the as-yet-unknown type of each value.  Without this, the constraint set cannot be constructed.

Next, we drill down into the body of the function, looking specifically for operations which impose some sort of type constraint.  This is a depth-first traversal of the AST, which means that we look at operations with higher-precedence first.  Technically, it doesn’t matter what order we look; I just find it easier to think about the process in this way.  The first operation we come across is the dispatch to the foo method.  We know that foo is of type String=>Int, and this allows us to add the following constraint to our set:

X  \mapsto  String

The next operation we see is +, involving the y value.  Scala treats all operators as method dispatch, so this expression actually means “foo(x).+(y)”.  We already know that foo(x) is an expression of type Int (from the type of foo), so we know that + is defined as a method on class Int with type Int=>Int (I’m actually being a bit hand-wavy here with regard to what we do and do not know, but that’s an unfortunate consequence of Scala’s object-oriented nature).  This allows us to add another constraint to our set, resulting in the following:

X  \mapsto  String

Y  \mapsto  Int

The final phase of the type reconstruction is to unify all of these constraints to come up with real types to substitute for the X and Y type variables.  Unification is literally the process of looking at each of the constraints and trying to find a single type which satisfies them all.  Imagine I gave you the following facts:

  • Daniel is tall
  • Chris is tall
  • Daniel is red
  • Chris is blue

Now, consider the following constraints:

Person1 is tall

Person1 is red

Hmm, who do you suppose Person1 might be?  This process of combining a constraint set with some given facts can be mathematically formalized in the guise of unification.  In the case of type reconstruction, just substitute “types” for “facts” and you’re golden.

In our case, the unification of our set of type constraints is fairly trivial.  We have exactly one constraint per value (x and y), and both of these constraints map to concrete types.  All we have to do is substitute “String” for “X” and “Int” for “Y” and we’re done.
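The two phases (constraint generation, then unification) are easy to demonstrate in code.  The following is an illustrative sketch in Python rather than Scala; representing types as plain strings and solving by simple substitution are my own simplifications, not part of any real compiler:

```python
def unify(constraints):
    """Solve a set of (type variable, type) constraints by accumulating
    a substitution. Two different concrete types for the same variable
    are contradictory, which corresponds to an ill-typed expression."""
    solution = {}
    for var, ty in constraints:
        if var in solution and solution[var] != ty:
            raise TypeError("cannot unify %s with %s for %s"
                            % (solution[var], ty, var))
        solution[var] = ty
    return solution

# Constraints derived from the body of bar, as in the walkthrough:
#   x is passed to foo, which expects a String  =>  X maps to String
#   y is an argument to Int's + method          =>  Y maps to Int
print(unify([("X", "String"), ("Y", "Int")]))
# {'X': 'String', 'Y': 'Int'}
```

An ill-typed body such as foo(x) + x would generate the contradictory constraints X ↦ String and X ↦ Int, and unification would reject it.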

To really see the power of unification, we need to look at a slightly more complex example.  Consider the following function:

def baz(a, b) = a(b) :: b

This snippet defines a function, baz, which takes a function and some other parameter, invoking this function passing the second parameter and then “cons-ing” the result onto the second parameter itself.  We can easily derive a constraint set for this function.  As before, we start by coming up with type variables for each value.  Note that in this case, we not only annotate the parameters but also the return type.  I sort of skipped over this part in the earlier example since it only sufficed to make things more verbose.  Technically, this type is always inferred in this way.

def baz(a: X, b: Y): Z = a(b) :: b

The first constraint we should derive is that a must be a function which takes a value of type Y and returns some fresh type Y' (pronounced “why prime”).  Further, we know that :: is a function on class List[A] which takes a new element A and produces a new List[A].  Thus, we know that Y and Z must both be List[Y'].  Formalized in a constraint set, the result is as follows:


X  \mapsto  (Y=>Y')

Y  \mapsto  List[Y']

Z  \mapsto  List[Y']

Now the unification is not so trivial.  Critically, the X variable depends upon Y, which means that our unification will require at least one step:


X  \mapsto  (List[Y']=>Y')

Y  \mapsto  List[Y']

Z  \mapsto  List[Y']

This is the same constraint set as before, except that we have substituted the known mapping for Y into the mapping for X.  This substitution allows us to eliminate X, Y and Z from our inferred types, resulting in the following typing for the baz function:

def baz(a: List[Y']=>Y', b: List[Y']): List[Y'] = a(b) :: b

Of course, this still isn’t valid.  Even assuming that Y' were valid Scala syntax, the type checker would complain that no such type can be found.  This situation actually arises surprisingly often when working with Hindley-Milner type reconstruction.  Somehow, at the end of all the constraint inference and unification, we have a type variable “left over” for which there are no known constraints.

The solution is to treat this unconstrained variable as a type parameter.  After all, if the parameter has no constraints, then we can just as easily substitute any type, including a generic.  Thus, the final revision of the baz function adds an unconstrained type parameter “A” and substitutes it for all instances of Y’ in the inferred types:

def baz[A](a: List[A]=>A, b: List[A]): List[A] = a(b) :: b
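The compound-type case can be sketched in code the same way.  Here is an illustrative Python fragment (my own simplification: types are nested tuples, and there is no occurs check, which a real implementation needs in order to reject circular constraints like X ↦ List[X]):

```python
def substitute(ty, sol):
    """Replace solved type variables inside a (possibly compound) type."""
    if isinstance(ty, tuple):
        return tuple(substitute(t, sol) for t in ty)
    return sol.get(ty, ty)

def solve(constraints):
    """Repeatedly substitute known mappings into the right-hand sides:
    the step that turns X -> (Y => Y') into X -> (List[Y'] => Y')."""
    sol = dict(constraints)
    changed = True
    while changed:
        changed = False
        for var in list(sol):
            others = {v: t for v, t in sol.items() if v != var}
            new = substitute(sol[var], others)
            if new != sol[var]:
                sol[var] = new
                changed = True
    return sol

# The constraint set for baz: X -> (Y => Y'), Y -> List[Y'], Z -> List[Y']
result = solve({"X": ("fun", "Y", "Y'"),
                "Y": ("List", "Y'"),
                "Z": ("List", "Y'")})
print(result["X"])   # ('fun', ('List', "Y'"), "Y'")
```

Y' remains unconstrained in the solution, which is exactly the leftover variable that gets generalized into the type parameter A.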

Conclusion

…and that’s all there is to it!  Hindley-Milner is really no more complicated than all of that.  One can easily imagine how such an algorithm could be used to perform far more complicated reconstructions than the trivial examples that we have shown.

Hopefully this article has given you a little more insight into how Hindley-Milner type reconstruction works under the surface.  This variety of type inference can be of immense benefit, reducing the amount of syntax required for type safety down to the barest minimum.  Our “bar” example actually started with (coincidentally) Ruby syntax and showed that it still had all the information we needed to verify type-safety.  Just a bit of information you might want to keep around for the next time someone suggests that all statically typed languages are overly-verbose.

McCLIM: A GUI Toolkit for Common Lisp


What is McCLIM?

McCLIM is a FOSS implementation of the Common Lisp Interface Manager specification, a powerful toolkit for writing GUIs in Common Lisp. It is licensed under the GNU Library General Public License.

You can access the McCLIM manual draft PDF if you want, but it's still a work in progress. To reach the developers, you may either write to the mailing list or join the #clim IRC channel.

Features

  • Mature yet modern CLIM II protocol implementation
  • Extensible GUI toolkit for applications
  • Sophisticated interface manager for Common Lisp
  • Portable between various Common Lisp implementations
  • Robust solution for creating end-user applications

Resources

Some external tutorials for CLIM may be found here:

Examples

(in-package :common-lisp-user)

(defpackage "APP"
  (:use :clim :clim-lisp)
  (:export "APP-MAIN"))

(in-package :app)

(define-application-frame superapp ()
  ()
  (:panes
   (int :interactor :height 400 :width 600))
  (:layouts
   (default int)))

(defun app-main ()
  (run-frame-top-level (make-application-frame 'superapp)))

SuperApp capture

Hall of Fame

McCLIM is written by a diverse group of individuals from across the world. Contributors past and present include:

  • Daniel Barlow
  • Gilbert Baumann
  • Julien Boninfan
  • Alexey Dejneka
  • Clemens Fruhwirth
  • Andreas Fuchs
  • Robert Goldman
  • Iban Hatchondo
  • Andy Hefner
  • Brian Mastenbrook
  • Mike McDonald
  • Timothy Moore
  • Edena Pixel
  • Max-Gerd Retzlaff
  • Christophe Rhodes
  • Duncan Rose
  • Arnaud Rouanet
  • Lionel Salabartan
  • Rudi Schlatte
  • Brian Spilsbury
  • Robert Strandh
 

An OSI model for the 21st century


The Internet protocol suite is wonderful, but it was designed before the advent of modern cryptography and without the benefit of hindsight. On the modern Internet, cryptography is typically squeezed into a single, incredibly complex layer, Transport Layer Security (TLS; formerly known as Secure Sockets Layer, or SSL). Over the last few months, 3 entirely unrelated (but equally catastrophic) bugs have been uncovered in 3 independent TLS implementations (Apple SSL/TLS, GnuTLS, and most recently OpenSSL, which powers most “secure” servers on the Internet), making the TLS system difficult to trust in practice.

What if cryptographic functions were spread out into more layers? Would the stack of layers become too tall, inefficient, and hard to debug, making the problem worse instead of better? On the contrary, I propose that appropriate cryptographic protocols could replace most existing layers, improving security as well as other functions generally not thought of as cryptographic, such as concurrency control of complex data structures, lookup or discovery of services and data, and decentralized passwordless login. Perhaps most importantly, the new architecture would enable individuals to internetwork as peers rather than as tenants of the telecommunications oligopoly, putting net neutrality directly in the hands of citizens and potentially enabling a drastically more competitive bandwidth market.

Of course, the layers I propose will doubtless introduce new problems of their own, but I’d like to start this conversation with some concrete ideas, even if I don’t have a final answer. (Please feel free to email me your comments or tweet @davidad.)

Descriptions follow for each of the five new layers I suggest, four of which are named after common information security requirements, and one of which (Transactions) is borrowed from database requirements (and also vaguely suggestive of cryptocurrency).


General disclaimer for InfoSec articles:Reading this article does not qualify you to design secure systems. Writing this article does not qualify me to design secure systems. In fact, nobody is qualified to design secure systems. A system should not be considered secure unless it has been reviewed by multiple security experts and resisted multiple serious attempts to violate its security claims in practice. The information contained in this article is offered “as is” and without warranties of any kind (express, implied, and statutory), all of which the author expressly disclaims to the fullest extent permitted by law.

For our purposes today, the Data Link and Physical layers are a black box (perhaps literally), to which we have an interface (the “network interface”) which looks like a transmit queue and a receive queue. These queues can store “payloads” of anywhere from 1 to 1,280 octets (bytes) [1]. The next layer in the stack can push a payload onto the Data Link transmit queue (and possibly get an error if it’s full) and can pop a payload from the Data Link receive queue (and possibly get an error if it’s empty). The Data Link layer is responsible for (eventually) flushing the transmit queue, and any payload which leaves the transmit queue must appear on the receive queues of all other devices connected to the same channel (a technical term, which may refer to a radio channel in the case of cellular devices, or simply to a particular length of cable in a point-to-point wired connection).

Integrity layer

We would like a received payload to self-evidently be the same payload which was sent. Although the Data Link layer is supposed to provide such an assurance, various kinds of attacks on the system might invalidate this assumption. Integrity protocols mitigate these attacks:

Each paranoia level pairs a class of attacks with a mitigation, a common implementation, and my preferred implementation:

  • Level 1 (thermal noise, cosmic rays): checksum hash. Common: TCP Checksum; preferred: CRC-32C.
  • Level 2 (deliberate corruption): cryptographic hash. Common: SHA-1; preferred: BLAKE2b.
  • Level 3 (spoofing of trusted contacts): keyed hash. Common: HMAC-SHA1; preferred: SipHash.
  • Level 4 (spoofing of strangers): public-key signature of a cryptographic hash. Common: SHA-1 + RSA; preferred: BLAKE2b + Ed25519.

Integrity protocols are fairly simple: the appropriate verification material is placed at the beginning of every Data Link payload. The Integrity layer exposes the same kind of “transmit queue and receive queue” interface as the Data Link layer, but the payload which can be passed to the Integrity layer must be somewhat smaller, so that there is room for the verification material and the Integrity payload together to fit into 1280 octets. Overhead ranges from 4 octets for a CRC-32C checksum to 96 octets for an Ed25519 signature.

In the keyed hash case, some state is necessary at the Integrity protocol level: each API customer must be able to add “trusted contacts” to its “address book” by specifying a symmetric key corresponding to a given endpoint name (which may have been negotiated at a higher protocol level, or simply out-of-band entirely). Since some advanced higher-level protocols may define symmetric authentication keys that are only good for a single use (e.g. Axolotl ratcheting after the handshake phase), “address book entries” should be single-use by default, with renewal explicitly required after each payload received from a given contact.
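As a concrete sketch of the framing described above, here is the keyed-hash case in Python. BLAKE2b (my level-2 preference) has a keyed mode in the standard library, so it stands in for SipHash at level 3 here; the tag size is an illustrative assumption:

```python
import hashlib
import hmac  # used only for constant-time comparison

MTU = 1280        # maximum Data Link payload, per the text
TAG_SIZE = 32     # BLAKE2b digest truncated to 32 octets (assumption)

def frame(payload: bytes, key: bytes) -> bytes:
    """Prepend the verification material to the payload (level 3: keyed hash)."""
    if len(payload) > MTU - TAG_SIZE:
        raise ValueError("payload too large for one Data Link frame")
    tag = hashlib.blake2b(payload, key=key, digest_size=TAG_SIZE).digest()
    return tag + payload

def verify(framed: bytes, key: bytes) -> bytes:
    """Recompute the tag and return the payload, or raise on a mismatch."""
    tag, payload = framed[:TAG_SIZE], framed[TAG_SIZE:]
    expected = hashlib.blake2b(payload, key=key, digest_size=TAG_SIZE).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return payload
```

A single-use “address book entry” would rotate `key` after each verified payload, as described above.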

Availability layer

We would like networked endpoints to be available to receive packets from other endpoints in a way that is robust to unannounced changes in network topology. This layer conceptually takes the place of the Network layer in the original model, as it will be responsible for routing packets. Significantly, in this proposal, there are no “hosts” or “ports”: only “endpoints”, identified by public keys. This is simply taking the end-to-end principle one step further, by considering the “host” merely part of the network infrastructure which makes applications available.

A fully implemented Availability layer should provide unicast (deliver to a unique endpoint authenticated by a given public key, wherever it may be), anycast (deliver to the nearest endpoint authenticated by a given public key), and multicast (a.k.a. pub/sub: route to all endpoints who have asked to subscribe to a given ID, and provide a subscription method).

For each routing semantic, its reliability on the current Internet and its proposed new implementation (as an overlay on the existing Internet, and on a native mesh):

  • Multicast: currently awful. Overlay: S/Kademlia message broker. Native mesh: straightforward extension of unicast.
  • Anycast: currently decent. Overlay: no advantage over load balancers. Native mesh: possible extension of unicast.
  • Unicast: currently excellent. Overlay: special case of multicast. Native mesh: Electric Routing.

I believe the Electric Routing algorithm [2] is up to the challenge of replacing unicast [3], and that it could be extended to provide multicast and even anycast, but other algorithms could be developed at this protocol layer as well. The first real-world implementation of the system I’m describing will very likely be developed as an overlay network on top of IP, in which case multicast can be implemented simply atop S/Kademlia, with unicast as a special case, and anycast can be emulated with standard load-balancing techniques.

The tradeoff here is that routers have a lot more work to do, since there are no “addresses” corresponding directly to geographic location. But, it means that every node on the network can participate as a router, so there is a lot more capacity to do that work. In addition, the endpoints-only scheme has many potentially desirable properties with respect to features like pseudonymity, NAT transparency, redundancy, and decentralization of the telecommunications market (especially in densely settled areas).
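For the overlay case, the Kademlia-style routing step is simple enough to sketch. This illustrates the XOR metric that S/Kademlia inherits, not Electric Routing itself; deriving node IDs from SHA-256 of the public key is my own assumption:

```python
import hashlib

def endpoint_id(public_key: bytes) -> int:
    """An endpoint is identified by (a hash of) its public key,
    not by a location-based address."""
    return int.from_bytes(hashlib.sha256(public_key).digest(), "big")

def next_hop(target_id: int, known_peers: list) -> int:
    """Forward toward the known peer whose ID is XOR-closest to the
    target; repeating this at every hop converges in O(log n) steps."""
    return min(known_peers, key=lambda peer: peer ^ target_id)
```

Because any node can evaluate next_hop over its own peer list, every node can participate as a router, which is the capacity trade-off discussed above.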

Confidentiality layer

Ideally, we would like to not transmit any information to anything other than the destination endpoint(s). This ideal is not in general achievable on a public network, but some types of mitigation are possible:

In cases 3 and 4, this layer has to maintain some state, holding session keys or message keys, and the Axolotl ratchet is a little complicated; but this layer does not have to worry about the verification of identity (which will be provided on a higher layer, by services such as keybase.io or using pronounceable hash fingerprints) or integrity (which will be provided by a lower layer).

Non-Repudiation and/or Repudiation layer

We would like for a receiver to be sure that a message they receive was sent by a given sender, and we would like for a sender to be sure that a given message was successfully received. Sometimes, we would also like for a receiver to be unaware of the location a message was sent from. The result is three related but orthogonal protocol types, which may be nested:

  • Non-Repudiation of Sending: the recipient knows the immediate sender. Protocol: the sender includes a hash of their public key in the message. (To understand why this is necessary given the Integrity layer, read this excellent article.)
  • Non-Repudiation of Receipt: the sender knows the message was received. Protocol: the recipient sends a signed acknowledgement for every message. This also implements “reliable delivery”.
  • Repudiation of Origin: the message is difficult to trace. Protocol: Onion Routing.
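A sketch of the acknowledgement protocol for receipt, in Python. A real deployment would use a public-key signature (e.g. Ed25519) so that third parties could also verify the receipt; an HMAC stands in here only because the standard library has no signature primitive, and a shared-key MAC does not by itself provide non-repudiation toward third parties:

```python
import hashlib
import hmac

def acknowledge(message: bytes, recipient_key: bytes) -> bytes:
    """Recipient returns an authenticated acknowledgement bound to the
    exact message received (stand-in for a signed acknowledgement)."""
    digest = hashlib.sha256(message).digest()
    return hmac.new(recipient_key, b"ACK" + digest, hashlib.sha256).digest()

def ack_is_valid(message: bytes, ack: bytes, recipient_key: bytes) -> bool:
    """Sender checks that the acknowledgement matches the message it sent."""
    digest = hashlib.sha256(message).digest()
    expected = hmac.new(recipient_key, b"ACK" + digest, hashlib.sha256).digest()
    return hmac.compare_digest(ack, expected)
```

Retransmitting until ack_is_valid succeeds is one way to realize the “reliable delivery” property.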

Transactions layer

We would like for sets of nodes which wish to maintain common mutable state variables to be able to do so, even in the presence of various types of adversaries. This is a common abstraction for the requirements of git, cryptocurrencies, and distributed databases (i.e. ACID, MVCC). I propose that (borrowing most directly from git, but also from Clojure’s concurrent data structures) changes in large or complex mutable states be represented as changes to the root of a Merkle tree, thus reducing the state subject to transactional semantics to single-packet size [4].

To make it obvious what I’m intending to refer to, the owner of a particular “domain name” or a particular “coin” (or, generally, any cryptographically controlled resource) is an example of a mutable state. But so is, for instance, the contents of any social media profile, email inbox, hypertext page, or source code repository. These things could all be managed without reference to central authorities or single points of failure.
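The Merkle-tree trick is easy to make concrete. The sketch below (SHA-256 and the duplicate-last-node rule for odd levels are illustrative choices, not part of the proposal) reduces an arbitrarily large state to a single root hash, so a transaction only has to agree on the packet-sized root:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Hash the leaves pairwise, level by level, down to a single root."""
    level = [_h(leaf) for leaf in leaves] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:            # odd level: duplicate the last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# A "commit" to any part of the state is just a new 32-byte root:
old_root = merkle_root([b"profile", b"inbox", b"repo"])
new_root = merkle_root([b"profile", b"inbox v2", b"repo"])
assert old_root != new_root and len(new_root) == 32
```

Changing any leaf changes the root, so agreeing on the root is agreeing on the entire state.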

Many (including myself) have claimed that the core contribution of Bitcoin, the block-chain protocol, is a novel solution to the Byzantine Generals Problem, but it turns out this is somewhat misleading. Although the block-chain protocol is Byzantine-fault-tolerant in a novel way, there has been plenty of research on Byzantine protocols over the years, and it is probably unnecessary to constantly “mine,” i.e. solve cryptopuzzles, to achieve Byzantine fault tolerance. The main reason to introduce cryptopuzzles is to reduce the efficacy of Sybil attacks, in which one malicious actor fabricates arbitrarily many identities in order to exceed the Byzantine fault tolerance threshold and control the system. However, these attacks can also be mitigated by requiring crypto-puzzles only for joining the network (as in S/Kademlia), and by blacklisting nodes which behave suspiciously (the latter being how most attacks on Bitcoin are stopped in practice).

Application layer

In such an environment, applications (or application components!) are essentially just maps from one mutable state to another, in functional reactive programming style. In the same way that you might encode packet filters into a kernel’s TCP/IP stack today, you might encode entire applications into a kernel’s “mesh” stack in the future. Various search functions, including full-text search, could be provided using the OneSwarm approach or potentially by distributed Bloom filters implemented atop this platform (an idea due to Andrée Monette). Resource control and access control can be provided by means of cryptographic capabilities.

But, in general, this layer is completely open for all sorts of applications. Essentially, any end-user service that runs on a network (and what doesn’t, these days?) would fit here.

I’ve outlined some radical ideas for how to re-build the Internet protocol stack in a way that is ultimately more coherent with Internet cultural values (freedom of expression, pseudonymity, reduced potential for abuses of power). This outline still needs quite a bit of work and thought before being turned into implementations, but I feel like I’ve reached a turning point in making my ideas about next-generation architectures concrete, and at a timely moment with respect to conversations about TLS and net neutrality. If you would like to see these concepts made into working code, please reach out and let me know.


  1. This number is cribbed from the IPv6 RFC.

  2. Coauthored by Petar Maymounkov, who also coauthored Kademlia, the DHT powering BitTorrent.

  3. Electric routing does need some extensions to mitigate various attacks, but I believe the countermeasures from S/Kademlia are readily adapted to meet these needs.

  4. This is similar in principle to the trick used by most practical public-key cryptosystems, which use the actual public-key algorithm only to encrypt a key from some symmetric cryptosystem, and then encrypt arbitrarily large content using a stream cipher. The common principle is that you can do the hard security algorithm on a small piece of data, and use easier security algorithms to apply those hard security properties to large chunks of data.

Zidisha (YC NP) is looking for a Volunteer Coordinator (remote, volunteer)


Description

Please note that this is an unpaid volunteering opportunity / internship.

Zidisha is the first direct person-to-person microlending platform, connecting lenders worldwide and borrowers in developing countries.

We have an opening for a highly personable, responsible and detail-oriented intern or volunteer to help manage our international operations and develop our team of interns and volunteers. This position is an opportunity to gain extensive leadership experience and to take part in the day-to-day operations of a rapidly growing nonprofit startup.

This is a part-time or full-time "virtual volunteering" position that may be undertaken from any location in the world. We'd like you to commit a minimum of 10 hours per week to the position, on a flexible schedule which includes availability of at least one hour per day on weekdays.

Successful candidates will have a strong track record of paid or unpaid work for mission-driven organizations, a passion for advancing opportunities for entrepreneurs in the world's most marginalized communities, and a high degree of responsibility and organization.

Responsibilities

1. Provide orientations to new interns and volunteers, and coordinate their assignments to volunteer teams.

  • Field email applications from prospective volunteers and interns, respond to questions, and help those who join get started.
  • Work with team leads to assign new volunteers and interns where they are most needed.

2. Ensure interns and volunteers have the information they need to complete their responsibilities successfully.

  • Provide Skype or email orientations to new interns and volunteers.
  • Monitor our volunteer forum, and respond to questions posted there within one business day.

3. Ensure that each team of interns and volunteers completes its duties successfully.

  • Monitor completion of team duties (responding to borrower emails, disbursing loans, entering repayments, reviewing new borrower applications) each business day.
  • Work with team leads to resolve any difficulties, and check in to ensure their teams have enough help and enough to do.

4. Lend a hand with various operations tasks as needed. These may include:

  • Helping our volunteer accountant with monthly accounting reconciliations.
  • Sharing entrepreneur projects and success stories via Twitter, Facebook and Pinterest.
  • Responding to borrower and lender emails and SMS.
  • Taking part in the conversations in our user forum.
  • Helping with loan disbursements and repayment entry.

Qualifications

  • Passion for our mission of advancing economic opportunities for marginalized communities.
  • Comfort with taking leadership and responsibility for results.
  • Highly organized and detail-oriented.
  • A high degree of dependability and strong time management skills.
  • A warm, outgoing and helpful personality.
  • Empathy, discretion and mature judgment.
  • Skill in working with teams and in maintaining good communication at multiple levels.

How To Apply

Please send a resume and note explaining why this position is a good fit for you to julia@zidisha.org.


How the Voyager space probe's golden record was made


We inhabit a small planet orbiting a medium-sized star about two-thirds of the way out from the center of the Milky Way galaxy—around where Track 2 on an LP record might begin. In cosmic terms, we are tiny: were the galaxy the size of a typical LP, the sun and all its planets would fit inside an atom’s width. Yet there is something in us so expansive that, four decades ago, we made a time capsule full of music and photographs from Earth and flung it out into the universe. Indeed, we made two of them.

The time capsules, really a pair of phonograph records, were launched aboard the twin Voyager space probes in August and September of 1977. The craft spent thirteen years reconnoitering the sun’s outer planets, beaming back valuable data and images of incomparable beauty. In 2012, Voyager 1 became the first human-made object to leave the solar system, sailing through the doldrums where the stream of charged particles from our sun stalls against those of interstellar space. Today, the probes are so distant that their radio signals, travelling at the speed of light, take more than fifteen hours to reach Earth. They arrive with a strength of under a millionth of a billionth of a watt, so weak that the three dish antennas of the Deep Space Network’s interplanetary tracking system (in California, Spain, and Australia) had to be enlarged to stay in touch with them.

If you perched on Voyager 1 now—which would be possible, if uncomfortable; the spidery craft is about the size and mass of a subcompact car—you’d have no sense of motion. The brightest star in sight would be our sun, a glowing point of light below Orion’s foot, with Earth a dim blue dot lost in its glare. Remain patiently onboard for millions of years, and you’d notice that the positions of a few relatively nearby stars were slowly changing, but that would be about it. You’d find, in short, that you were not so much flying to the stars as swimming among them.

The Voyagers’ scientific mission will end when their plutonium-238 thermoelectric power generators fail, around the year 2030. After that, the two craft will drift endlessly among the stars of our galaxy—unless someone or something encounters them someday. With this prospect in mind, each was fitted with a copy of what has come to be called the Golden Record. Etched in copper, plated with gold, and sealed in aluminum cases, the records are expected to remain intelligible for more than a billion years, making them the longest-lasting objects ever crafted by human hands. We don’t know enough about extraterrestrial life, if it even exists, to state with any confidence whether the records will ever be found. They were a gift, proffered without hope of return.

I became friends with Carl Sagan, the astronomer who oversaw the creation of the Golden Record, in 1972. He’d sometimes stop by my place in New York, a high-ceilinged West Side apartment perched up amid Norway maples like a tree house, and we’d listen to records. Lots of great music was being released in those days, and there was something fascinating about LP technology itself. A diamond danced along the undulations of a groove, vibrating an attached crystal, which generated a flow of electricity that was amplified and sent to the speakers. At no point in this process was it possible to say with assurance just how much information the record contained or how accurately a given stereo had translated it. The open-endedness of the medium seemed akin to the process of scientific exploration: there was always more to learn.

In the winter of 1976, Carl was visiting with me and my fiancée at the time, Ann Druyan, and asked whether we’d help him create a plaque or something of the sort for Voyager. We immediately agreed. Soon, he and one of his colleagues at Cornell, Frank Drake, had decided on a record. By the time NASA approved the idea, we had less than six months to put it together, so we had to move fast. Ann began gathering material for a sonic description of Earth’s history. Linda Salzman Sagan, Carl’s wife at the time, went to work recording samples of human voices speaking in many different languages. The space artist Jon Lomberg rounded up photographs, a method having been found to encode them into the record’s grooves. I produced the record, which meant overseeing the technical side of things. We all worked on selecting the music.

I sought to recruit John Lennon, of the Beatles, for the project, but tax considerations obliged him to leave the country. Lennon did help us, though, in two ways. First, he recommended that we use his engineer, Jimmy Iovine, who brought energy and expertise to the studio. (Jimmy later became famous as a rock and hip-hop producer and record-company executive.) Second, Lennon’s trick of etching little messages into the blank spaces between the takeout grooves at the ends of his records inspired me to do the same on Voyager. I wrote a dedication: “To the makers of music—all worlds, all times.”

To our surprise, those nine words created a problem at NASA. An agency compliance officer, charged with making sure each of the probes’ sixty-five thousand parts was up to spec, reported that while everything else checked out—the records’ size, weight, composition, and magnetic properties—there was nothing in the blueprints about an inscription. The records were rejected, and NASA prepared to substitute blank discs in their place. Only after Carl appealed to the NASA administrator, arguing that the inscription would be the sole example of human handwriting aboard, did we get a waiver permitting the records to fly.

In those days, we had to obtain physical copies of every recording we hoped to listen to or include. This wasn’t such a challenge for, say, mainstream American music, but we aimed to cast a wide net, incorporating selections from places as disparate as Australia, Azerbaijan, Bulgaria, China, Congo, Japan, the Navajo Nation, Peru, and the Solomon Islands. Ann found an LP containing the Indian raga “Jaat Kahan Ho” in a carton under a card table in the back of an appliance store. At one point, the folklorist Alan Lomax pulled a Russian recording, said to be the sole copy of “Chakrulo” in North America, from a stack of lacquer demos and sailed it across the room to me like a Frisbee. We’d comb through all this music individually, then meet and go over our nominees in long discussions stretching into the night. It was exhausting, involving, utterly delightful work.

“Bhairavi: Jaat Kahan Ho,” by Kesarbai Kerkar

In selecting Western classical music, we sacrificed a measure of diversity to include three compositions by J. S. Bach and two by Ludwig van Beethoven. To understand why we did this, imagine that the record were being studied by extraterrestrials who lacked what we would call hearing, or whose hearing operated in a different frequency range than ours, or who hadn’t any musical tradition at all. Even they could learn from the music by applying mathematics, which really does seem to be the universal language that music is sometimes said to be. They’d look for symmetries—repetitions, inversions, mirror images, and other self-similarities—within or between compositions. We sought to facilitate the process by proffering Bach, whose works are full of symmetry, and Beethoven, who championed Bach’s music and borrowed from it.
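The kind of self-similarity search described above can be made concrete. The sketch below, using hypothetical pitch sequences written as MIDI note numbers, tests for two of the symmetries mentioned: retrograde (a line that reads the same backwards) and melodic inversion (every interval mirrored around the opening pitch). The examples are invented for illustration, not drawn from the record's actual tracks.

```python
def is_palindrome(pitches):
    """Retrograde symmetry: the line reads the same forwards and backwards."""
    return pitches == pitches[::-1]

def inversion(pitches):
    """Melodic inversion: mirror every interval around the opening pitch."""
    first = pitches[0]
    return [first - (p - first) for p in pitches]

theme = [60, 62, 64, 62, 60]      # C D E D C, written as MIDI note numbers
assert is_palindrome(theme)       # a mirror image in time
assert inversion([60, 64, 67]) == [60, 56, 53]  # a rising triad becomes a falling one
```

A listener with no ears at all could run exactly this kind of check on the decoded waveform, which is why Bach's symmetry-rich counterpoint earns its three slots.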

I’m often asked whether we quarrelled over the selections. We didn’t, really; it was all quite civil. With a world full of music to choose from, there was little reason to protest if one wonderful track was replaced by another wonderful track. I recall championing Blind Willie Johnson’s “Dark Was the Night,” which, if memory serves, everyone liked from the outset. Ann stumped for Chuck Berry’s “Johnny B. Goode,” a somewhat harder sell, in that Carl, at first listening, called it “awful.” But Carl soon came around on that one, going so far as to politely remind Lomax, who derided Berry’s music as “adolescent,” that Earth is home to many adolescents. Rumors to the contrary, we did not strive to include the Beatles’ “Here Comes the Sun,” only to be disappointed when we couldn’t clear the rights. It’s not the Beatles’ strongest work, and the witticism of the title, if charming in the short run, seemed unlikely to remain funny for a billion years.

“Dark Was the Night, Cold Was the Ground,” by Blind Willie Johnson

Ann’s sequence of natural sounds was organized chronologically, as an audio history of our planet, and compressed logarithmically so that the human story wouldn’t be limited to a little beep at the end. We mixed it on a thirty-two-track analog tape recorder the size of a steamer trunk, a process so involved that Jimmy jokingly accused me of being “one of those guys who has to use every piece of equipment in the studio.” With computerized boards still in the offing, the sequence’s dozens of tracks had to be mixed manually. Four of us huddled over the board like battlefield surgeons, struggling to keep our arms from getting tangled as we rode the faders by hand and got it done on the fly.
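Logarithmic compression of a timeline can be sketched in a few lines. The numbers here are assumptions for illustration (a 12-minute sequence, a 4.6-billion-year history); the actual mix parameters are not documented in this essay.

```python
import math

TRACK_SECONDS = 720      # assumed length of the sequence (12 minutes)
EARTH_AGE = 4.6e9        # years before present

def track_position(years_ago: float) -> float:
    """Seconds into the track at which an event `years_ago` would land.

    log10 compression: deep time is squeezed toward the start, so recent
    (human) history is not reduced to a little beep at the end."""
    frac = math.log10(max(years_ago, 1.0)) / math.log10(EARTH_AGE)
    return TRACK_SECONDS * (1.0 - frac)

assert track_position(EARTH_AGE) == 0.0           # formation of Earth opens the track
assert track_position(100) > track_position(1e6)  # recent events sit later in the mix
```

Under this mapping, each factor of ten in elapsed time gets the same number of seconds of tape, which is what lets the last few thousand years occupy an audible share of the sequence.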

The sequence begins with an audio realization of the “music of the spheres,” in which the constantly changing orbital velocities of Mercury, Venus, Earth, Mars, and Jupiter are translated into sound, using equations derived by the astronomer Johannes Kepler in the seventeenth century. We then hear the volcanoes, earthquakes, thunderstorms, and bubbling mud of the early Earth. Wind, rain, and surf announce the advent of oceans, followed by living creatures—crickets, frogs, birds, chimpanzees, wolves—and the footsteps, heartbeats, and laughter of early humans. Sounds of fire, speech, tools, and the calls of wild dogs mark important steps in our species’ advancement, and Morse code announces the dawn of modern communications. (The message being transmitted is Ad astra per aspera, “To the stars through hard work.”) A brief sequence on modes of transportation runs from ships to jet airplanes to the launch of a Saturn V rocket. The final sounds begin with a kiss, then a mother and child, then an EEG recording of (Ann’s) brainwaves, and, finally, a pulsar—a rapidly spinning neutron star giving off radio noise—in a tip of the hat to the pulsar map etched into the records’ protective cases.
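One simple way to sonify orbital velocities, in the Keplerian spirit described above, is to compute each planet's mean circular-orbit speed and map it linearly onto pitch. This is a hedged sketch, not the actual algorithm used for the record; the A440 anchor and the linear speed-to-pitch mapping are assumptions.

```python
import math

GM_SUN = 1.327e20        # m^3/s^2, the Sun's standard gravitational parameter
AU = 1.496e11            # metres per astronomical unit

# approximate semi-major axes, in AU
orbits = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.0, "Mars": 1.524, "Jupiter": 5.203}

def orbital_speed(a_au: float) -> float:
    """Mean speed of a circular orbit of radius a: v = sqrt(GM / a)."""
    return math.sqrt(GM_SUN / (a_au * AU))

def to_frequency(speed: float, ref_speed: float, ref_hz: float = 440.0) -> float:
    """Map speed linearly onto pitch, anchoring Earth's speed to A440 (an assumption)."""
    return ref_hz * speed / ref_speed

earth_v = orbital_speed(1.0)   # roughly 29.8 km/s
for name, a in orbits.items():
    v = orbital_speed(a)
    print(f"{name}: {v / 1000:.1f} km/s -> {to_frequency(v, earth_v):.0f} Hz")
```

Because speed falls off as the square root of distance, inner planets come out as higher, faster-moving tones and Jupiter as a slow bass note, which is roughly the texture of the opening of the sequence.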

“The Sounds of Earth”

Ann had obtained beautiful recordings of whale songs, made with trailing hydrophones by the biologist Roger Payne, which didn’t fit into our rather anthropocentric sounds sequence. We also had a collection of loquacious greetings from United Nations representatives, edited down and cross-faded to make them more listenable. Rather than pass up the whales, I mixed them in with the diplomats. I’ll leave it to the extraterrestrials to decide which species they prefer.

“United Nations Greetings/Whale Songs”

Those of us who were involved in making the Golden Record assumed that it would soon be commercially released, but that didn’t happen. Carl repeatedly tried to get labels interested in the project, only to run afoul of what he termed, in a letter to me dated September 6, 1990, “internecine warfare in the record industry.” As a result, nobody heard the thing properly for nearly four decades. (Much of what was heard, on Internet snippets and in a short-lived commercial CD release made in 1992 without my participation, came from a set of analog tape dubs that I’d distributed to our team as keepsakes.) Then, in 2016, a former student of mine, David Pescovitz, and one of his colleagues, Tim Daly, approached me about putting together a reissue. They secured funding on Kickstarter, raising more than a million dollars in less than a month, and by that December we were back in the studio, ready to press play on the master tape for the first time since 1977.

Pescovitz and Daly took the trouble to contact artists who were represented on the record and send them what amounted to letters of authenticity—something we never had time to accomplish with the original project. (We disbanded soon after I delivered the metal master to Los Angeles, making ours a proud example of a federal project that evaporated once its mission was accomplished.) They also identified and corrected errors and omissions in the information that was provided to us by recordists and record companies. Track 3, for instance, which was listed by Lomax as “Senegal Percussion,” turns out instead to have been recorded in Benin and titled “Cengunmé”; and Track 24, the Navajo night chant, now carries the performers’ names. Forty years after launch, the Golden Record is finally being made available here on Earth. Were Carl alive today—he died in 1996 at the age of sixty-two—I think he’d be delighted.

This essay was adapted from the liner notes for the new edition of the Voyager Golden Record, recently released as a vinyl boxed set by Ozma Records.


Exploring How and Why Trees ‘Talk’ to Each Other


Two decades ago, while researching her doctoral thesis, ecologist Suzanne Simard discovered that trees communicate their needs and send each other nutrients via a network of latticed fungi buried in the soil — in other words, she found, they “talk” to each other. Since then, Simard, now at the University of British Columbia, has pioneered further research into how trees converse, including how these fungal filigrees help trees send warning signals about environmental change, search for kin, and transfer their nutrients to neighboring plants before they die.

By using phrases like “forest wisdom” and “mother trees” when she speaks about this elaborate system, which she compares to neural networks in human brains, Simard has helped change how scientists define interactions between plants. “A forest is a cooperative system,” she said in an interview with Yale Environment 360. “To me, using the language of ‘communication’ made more sense because we were looking at not just resource transfers, but things like defense signaling and kin recognition signaling. We as human beings can relate to this better. If we can relate to it, then we’re going to care about it more. If we care about it more, then we’re going to do a better job of stewarding our landscapes.”

Simard is now focused on understanding how these vital communication networks could be disrupted by environmental threats, such as climate change, pine beetle infestations, and logging. “These networks will go on,” she said. “Whether they’re beneficial to native plant species, or exotics, or invader weeds and so on, that remains to be seen.”

Yale Environment 360: Not all PhD theses are published in the journal Nature. But back in 1997, part of yours was. You used radioactive isotopes of carbon to determine that paper birch and Douglas fir trees were using an underground network to interact with each other. Tell me about these interactions.

Suzanne Simard: All trees all over the world, including paper birch and Douglas fir, form a symbiotic association with below-ground fungi. These are fungi that are beneficial to the plants and through this association, the fungus, which can’t photosynthesize of course, explores the soil. Basically, it sends mycelium, or threads, all through the soil, picks up nutrients and water, especially phosphorous and nitrogen, brings it back to the plant, and exchanges those nutrients and water for photosynthate [a sugar or other substance made by photosynthesis] from the plant. The plant is fixing carbon and then trading it for the nutrients that it needs for its metabolism. It works out for both of them.

It’s this network, sort of like a below-ground pipeline, that connects one tree root system to another tree root system, so that nutrients and carbon and water can exchange between the trees. In a natural forest of British Columbia, paper birch and Douglas fir grow together in early successional forest communities. They compete with each other, but our work shows that they also cooperate with each other by sending nutrients and carbon back and forth through their mycorrhizal networks.

e360: And they can tell when one needs some extra help versus the other, is that correct?

Simard: That’s right. We’ve done a bunch of experiments trying to figure out what drives the exchange. Keep in mind that it’s a back and forth exchange, so sometimes the birch will get more and sometimes the fir will get more. It depends on the ecological factors that are going on at the time.

One of the important things that we tested in that particular experiment was shading. The more the Douglas fir became shaded in the summertime, the more of the birch’s excess carbon went to the fir. Then later in the fall, when the birch was losing its leaves and the fir had excess carbon because it was still photosynthesizing, the net transfer of this exchange went back to the birch.

There are also probably fungal factors involved. For example, fungus that is linking the network is going to be looking to secure its carbon sources. Even though we don’t understand a whole lot about that, it makes sense from an evolutionary point of view. The fungus is in it for its own livelihood, to make sure that it’s got a secure food base in the future, so it will help direct that carbon transfer to the different plants.


e360: Do you think this exchange system holds true in other ecosystems as well, like grasslands, for instance? Has there been any work done on that?

Simard: Yes, not just in my lab, but also in other labs well before me… Grasslands, and even some of the tree species we’re familiar with like maple and cedar, form a different type of mycorrhiza. In British Columbia, we have big grasslands that come up through the interior of the province and interface with the forest. We’re looking at how those grasslands, which are primarily arbuscular mycorrhizal, interact with our ectomycorrhizal forest, because as climate changes, the grasslands are predicted to move up into the forests.

e360: Will these exchanges continue under climate change, or will communication be blocked?

Simard: I don’t think it will be blocked. I don’t think there’s ever going to be a shortage of an ability to form a network, but the network might be different. For example, there will probably be different fungi involved in it, but I think these networks will go on. Whether they’re beneficial to native plant species, or exotics, or invader weeds and so on, that remains to be seen.

e360: Through molecular tools, you and one of your graduate students discovered what you call hub, or mother, trees. What are they, and what’s their role in the forest?

Simard: Kevin Beiler, who was a PhD student, did really elegant work where he used DNA analysis to look at the short sequences of DNA in trees and fungal individuals in patches of Douglas fir forest. He was able to map the network of two related sister species of mycorrhizal fungi and how they link Douglas fir trees in that forest.

Just by creating that map, he was able to show that all of the trees essentially, with a few isolated [exceptions], were linked together. He found that the biggest, oldest trees in the network were the most highly linked, whereas smaller trees were not linked to as many other trees. Big old trees have got bigger root systems and associate with bigger mycorrhizal networks. They’ve got more carbon that’s flowing into the network, they’ve got more root tips. So it makes sense that they would have more connections to other trees all around them.
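Finding the "most highly linked" trees in a mapped network is just a matter of counting each node's connections (its degree). The sketch below uses invented tree names and links, purely to illustrate the kind of analysis Beiler's map supports; it is not his data.

```python
from collections import Counter

# Hypothetical fungal links between named trees (undirected edges)
links = [
    ("old_fir_1", "seedling_a"), ("old_fir_1", "seedling_b"),
    ("old_fir_1", "old_fir_2"), ("old_fir_1", "seedling_c"),
    ("old_fir_2", "seedling_c"), ("seedling_b", "seedling_d"),
]

degree = Counter()
for a, b in links:
    degree[a] += 1   # each edge adds one connection to both endpoints
    degree[b] += 1

hub, n_links = degree.most_common(1)[0]   # the most highly linked tree
print(hub, n_links)                       # old_fir_1 4
```

In Beiler's real map, the hubs found this way were consistently the biggest, oldest trees, which is what motivated the "mother tree" terminology in the next answer.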

In later experiments, we’ve been pursuing whether these older trees can recognize kin, whether the seedlings that are regenerating around them are of the same kin, whether they’re offspring or not, and whether they can favor those seedlings — and we found that they can. That’s how we came up with the term “mother tree,” because they’re the biggest, oldest trees, and we know that they can nurture their own kin.

Beiler et al 2010

A diagram of a fungal network that links a group of trees, showing the presence of highly connected “mother trees.”

e360: You also discovered that when these trees are dying there’s a surprising ecological value to them that isn’t realized if they’re harvested too soon.

Simard: We did this experiment actually in the greenhouse. We grew seedlings of [Douglas fir] with neighbors [ponderosa pine], and we injured the one that would have been acting as the mother tree, [which was] the older fir seedling. We used ponderosa pine because it’s a lower elevation species that’s expected to start replacing Douglas fir as climate changes. I wanted to know whether or not there was any kind of transfer of the legacy of the old forest to the new forest that is going to be migrating upward and northward as climate changes.

When we injured these Douglas fir trees, we found that a couple things happened. One is that the Douglas fir dumped its carbon into the network and it was taken up by the ponderosa pine. Secondly, the defense enzymes of the Douglas fir and the ponderosa pine were “up-regulated” in response to this injury. We interpreted that to be defense signaling going on through the networks of trees. Those two responses — the carbon transfer and the defense signal — only happened where there was a mycorrhizal network intact. Where we severed the network, it didn’t happen.

The interpretation was that the native species being replaced by a new species as climate changes is sending carbon and warning signals to the neighboring seedlings to give them a head start as they assume the more dominant role in the ecosystem.

e360: You’ve talked about the fact that when you first published your work on tree interaction back in 1997 you weren’t supposed to use the word “communication” when it came to plants. Now you unabashedly use phrases like forest wisdom and mother trees. Have you gotten flack for that?

Simard: There’s probably a lot more flack out there than I even hear about. I first started doing forest research in my early 20s and now I’m in my mid-50s, so it has been 35 years. I have always been very aware of following the scientific method and of being very careful not to go beyond what the data says. But there comes a point when you realize that that sort of traditional scientific method only goes so far and there’s so much more going on in forests than we’re able to actually understand using the traditional scientific techniques.

So I opened my mind up and said we need to bring in human aspects to this so that we understand deeper, more viscerally, what’s going on in these living creatures, species that are not just these inanimate objects. We also started to understand that it’s not just resources moving between plants. It’s way more than that. A forest is a cooperative system, and if it were all about competition, then it would be a much simpler place. Why would a forest be so diverse? Why would it be so dynamic?

To me, using the language of communication made more sense because we were looking at not just resource transfers, but things like defense signaling and kin recognition signaling. The behavior of plants, the senders and the receivers, those behaviors are modified according to this communication or this movement of stuff between them.

Also, we as human beings can relate to this better. If we can relate to it, then we’re going to care about it more. If we care about it more, then we’re going to do a better job of stewarding our landscapes.


e360: The mountain pine beetle is devastating western [North American] landscapes, killing pine and spruce trees. You coauthored research on what pine beetle attacks do to mycorrhizal networks. What did you find, and what are the implications for regeneration of those forests?

Simard: That work was led by Greg Pec, a graduate student at the University of Alberta. The first stage (of the attack) is called green attack. They go from green attack to red attack to gray attack. So basically, by the third or fourth year, the stands are dead.

We took soil from those different stands and grew lodgepole pine seedlings in them. We found that as time went on with mortality, the mycorrhizal network became less diverse and it also changed the defense enzymes in the seedlings that were grown in those soils. The diversity of those molecules declined. The longer the trees had been dead, the lower the mycorrhizal diversity and the lower the defense molecule diversity in those seedlings.

Greg, in looking at the fungal diversity in those stands, found that even though the fungal diversity changed, the mycorrhizal network was still important in helping regenerate the new seedlings that were coming up in the understory.

Even though the composition of that mycorrhizal network is shifting, it’s still a functional network that is able to facilitate regeneration of the new stand.

e360: What does your work tell you about how to maintain resilience in the forest when it comes to logging and climate change?

Simard: Resilience is really about the ability of ecosystems to recover their structures and functions within a range of possibilities. For forests in particular, trees are the foundation. They provide habitat for the other creatures, but also make the forest work. Resilience in a forest means the ability to regenerate trees. There’s a lot that can be done to facilitate that because of these mycorrhizal networks, which we know are important in allowing trees to regenerate. It’s what we leave behind that’s so important. If we leave trees that support not just mycorrhizal networks, but other networks of creatures, then the forest will regenerate. I think the crucial step is maintaining that ability to regenerate trees.

e360: You’ve spoken about your hope that your findings would influence logging practices in British Columbia and beyond. Has that happened?

Simard: Not my work specifically. Beginning in the 1980s and 90s, the idea of retaining older trees and legacies in forests took hold. Through the 1990s in Western Canada, we adopted a lot of those methodologies, though not based on mycorrhizal networks. It was more for wildlife and retaining downed wood as habitat for other creatures.

But for the most part, especially in the last decade and a half, a lot of [logging] defaults to clear-cutting with not that much retention. Part of that was driven by the mountain pine beetle outbreak that is still going on. The good forestry practices that were developing got swept away in the salvage logging of those dying trees.


Today, people are still trying retention forestry, but it’s just not enough. Too often it’s just the token trees that are left behind. We’re starting on a new research project to test different kinds of retention that protect mother trees and networks.

e360: That’s the grant that you just received from the Canadian government to reassess current forest renewal practices?

Simard: Yes, we’re really excited about this. We’re testing the idea of retaining mother trees in different configurations — so leaving them as singles, as groups, as shelter woods, and then regenerating the forest using a mix of natural regeneration and traditional regeneration practices. We’re testing these across a range of climates in Douglas fir forest, from very dry and hot all the way up to cool and wet. There’s going to be about 75 sites in total that cross this climate gradient. We’re going to be measuring things like carbon cycling and productivity and bird and insect diversity. And we’ve got a lot of interest from First Nations groups in British Columbia because this idea of mother trees and the nurturing of new generations very much fits with First Nations’ world view.

Kubermesh: self-hosted/healing/provisioning, partial-mesh network K8s cluster


README.md

Local dev prerequisites

sudo apt-get install qemu-kvm libvirt-bin docker virtinst

Getting started

./deploy libvirt 4

This will set up 4 nodes using libvirt, and run the bootstrap process

Usernames/Passwords are currently hardcoded to core/core

Useful commands

virt-viewer kubermesh1 - to get a graphical console

virsh console kubermesh1 - to get a serial console

Cleaning up

./teardown

Current address allocations

Anycast addresses:

  • apiserver: fd65:7b9c:569:680:98eb:c508:eb8c:1b80
  • etcd: fd65:7b9c:569:680:98eb:c508:ea6b:b0b2
  • docker hub mirror: [fd65:7b9c:569:680:98e8:1762:7b6e:83f6]:5000
  • gcr.io mirror: [fd65:7b9c:569:680:98e8:1762:7b6e:61d3]:5002
  • quay.io mirror: [fd65:7b9c:569:680:98e8:1762:7abd:e0b7]:5001

libvirt networking:

  • host <-> gateway: 172.30.0.1/30
  • host <-> gateway: 2001:db8:a001::1/64
  • gateway <-> kubermesh1: 172.30.0.9/29
  • gateway <-> kubermesh1: 2001:db8:a002::10/126
  • gateway docker: 172.30.1.1/24
  • flannel ipv4: 172.31.0.0/16
  • custom ipv4 allocation: 172.30.2.0/24
  • custom ipv6 allocation: 2001:db8::/71

Installation networking:

  • ??::0/119
  • host: ::0/123
  • host vip: ::0/128
  • mesh interfaces: ::10/123
  • mesh interface 1: ::10/126
  • mesh interface 2: ::14/126
  • mesh interface 3: ::18/126
  • mesh interface 4: ::1c/126
  • pods: ::100/120

Hardware specific

NUCs

  • Disable legacy network boot in the bios, so it will use EFI boot over ipv6
    • Enter the BIOS
    • Select Advanced under Boot Order
    • Under Legacy Boot Priority disable Legacy Boot
    • Under Boot Configuration select Boot Network Devices Last
    • Under Boot Configuration select Unlimited Boot to Network Attempts
    • Under Boot Configuration check Network Boot is set to UEFI PXE & iSCSI
    • F10

USB Devices for the gateway bootstrapping physical hardware

  • Plug USB3 ones into a USB2 port, so qemu can use it without Speed Mismatch errors
  • Might need to add /run/udev/data/** r, to /etc/apparmor.d/abstractions/libvirt-qemu until qemu fixes that bug

Graydon Hoare: What next for compiled languages?


Since everybody is talking about this post, we might as well.

Key topics discussed: modules (you know, real ones); errors ("there are serious abstraction leakages and design trade-offs in nearly every known approach"); coroutines, async/await, "user-visible" asynchronicity; effect systems, more generally (you could see that coming, couldn't you?); extended static checking (ESC), refinement types, general dependent-typed languages; and formalization ("we have to get to the point where we ship languages -- and implementations -- with strong, proven foundations").

He goes on to discuss a whole grab bag of "potential extras" for mainstream languages, including the all-time favorite: units of measure.

Feel free to link to the relevant discussions from the LtU archive...

The Darknet and the Future of Content Distribution (2002) [pdf]

Rolling Your Own Blockchain in Haskell


Bitcoin and Ethereum provide a decentralized means of handling money, contracts, and ownership tokens. From a technical perspective, they have a lot of moving parts and provide a good way to demo a programming language.

This article will develop a simple blockchain-like data structure, to demonstrate these in Haskell:

  • Writing a binary serializer and deserializer
  • Using cryptographic primitives to calculate hashes
  • Automatically adjusting the difficulty of a miner in response to computation time.

We’ll name it Haskoin. Note that it won’t have any networking or wallet security until a future article.

What is a Blockchain?

The first step when writing any software application is always to figure out your data structures. This is true whether it’s Haskell, Perl, C, or SQL. We’ll put the major types and typeclass instances in their own module:

{-# LANGUAGE GeneralizedNewtypeDeriving, NoImplicitPrelude, DeriveTraversable, DeriveDataTypeable, StandaloneDeriving, TypeSynonymInstances, FlexibleInstances #-}
module Haskoin.Types where

import Protolude
import Crypto.Hash
import Control.Comonad.Cofree
import Data.Data
import qualified Data.Vector as V

newtype Account = Account Integer deriving (Eq, Show, Num)

data Transaction = Transaction
  { _from   :: Account
  , _to     :: Account
  , _amount :: Integer
  } deriving (Eq, Show)

newtype BlockF a = Block (V.Vector a) deriving (Eq, Show, Foldable, Traversable, Functor, Monoid)
type Block = BlockF Transaction

type HaskoinHash = Digest SHA1

data BlockHeader = BlockHeader
  { _miner      :: Account
  , _parentHash :: HaskoinHash
  } deriving (Eq, Show)

data MerkleF a = Genesis
               | Node BlockHeader a
               deriving (Eq, Show, Functor, Traversable, Foldable)

type Blockchain = Cofree MerkleF Block

MerkleF is a higher-order Merkle tree type that adds a layer onto some other type. The Cofree MerkleF Block does two things: It recursively applies MerkleF to produce a type for all depths of Merkle trees, and it attaches an annotation of type Block to each node in the tree.

When using Cofree, anno :< xf will construct one of these annotated values.

It will be more useful to look at an “inverted” tree where each node knows its parent, rather than one where each node knows its children. If each node knew its children, adding a single new block to the end would require changing every node in the tree. So MerkleF produces a chain, not a tree.
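The parent-pointer idea is language-agnostic. Here is an illustrative Python sketch of the same structure (my own analogy, not the article's code; names are invented): each block records only its parent's hash, so appending a new block never mutates existing nodes.

```python
import hashlib
import json

def make_block(transactions, parent_hash):
    # A block references its parent only through the parent's hash.
    block = {"transactions": transactions, "parent_hash": parent_hash}
    digest = hashlib.sha1(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block, digest

genesis, genesis_hash = make_block([], None)
block1, block1_hash = make_block([["alice", "bob", 1000]], genesis_hash)
block2, _ = make_block([], block1_hash)

# Appending block2 required no changes to genesis or block1.
print(block2["parent_hash"] == block1_hash)  # True
```

This is exactly why the chain shape is cheap to extend: only the newest node needs to know about the past.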

Protolude is a replacement Prelude that I’ve been using recently in moderately-sized projects. Prelude has a lot of backwards-compatibility concerns, so a lot of people shut it off with the NoImplicitPrelude language extension and import a custom one.

Why do we choose this weird MerkleF type over the simpler one below?

newtype Block = Block (V.Vector Transaction)

data Blockchain = Genesis Block
                | Node Block BlockHeader Blockchain

The main reason is to get those Functor, Traversable, and Foldable instances, because we can use them to work with our Merkle tree without having to write any code. For example, given a blockchain

import qualified Data.Vector as V

let genesis_block = Block (V.fromList [])
let block1 = Block (V.fromList [Transaction 0 1 1000])
let genesis_chain = genesis_block :< Genesis
let chain1 = block1 :< Node (BlockHeader { _miner = 0, _parentHash = undefined }) genesis_chain
let chain2 = block1 :< Node (BlockHeader { _miner = 0, _parentHash = undefined }) chain1

, here’s how you can get all of its transactions:

let txns = toList $ mconcat $ toList chain2
-- [Transaction {_from = Account 0, _to = Account 1, _amount = 1000},Transaction {_from = Account 0, _to = Account 1, _amount = 1000}]
let totalVolume = sum $ map _amount txns
-- 2000

I tested the above using stack ghci to enter an interactive prompt.

Real blockchains have a lot of useful things in the header, such as timestamps or nonce values. We can add them to BlockHeader as we need them.

A bunch of abstract types that are awkward to use aren’t very useful by themselves. We need a way to mine new blocks to do anything interesting. In other words, we want to define mineOn and makeGenesis:

module Haskoin.Mining where

type TransactionPool = IO [Transaction]

mineOn :: TransactionPool -> Account -> Blockchain -> IO Blockchain
mineOn pendingTransactions minerAccount root = undefined

makeGenesis :: IO Blockchain
makeGenesis = undefined

The genesis block is pretty easy, since it doesn’t have a header:

makeGenesis = return $ Block (V.fromList []) :< Genesis

We can write mineOn pretty easily — leaving out difficulty, transaction limiting, and security for now — if we know how to calculate a parent node's hash:

mineOn :: TransactionPool -> Account -> Blockchain -> IO Blockchain
mineOn pendingTransactions minerAccount parent = do
  ts <- pendingTransactions
  let block = Block (V.fromList ts)
  let header = BlockHeader { _miner = minerAccount, _parentHash = hash parent }
  return $ block :< Node header parent

hash :: Blockchain -> HaskoinHash
hash = undefined

Crypto.Hash has plenty of ways to hash something, and we’ve chosen type HaskoinHash = Digest SHA1 earlier. But in order to use it, we need some actual bytes to hash. That means we need a way to serialize and deserialize a Blockchain. A common library to do that is binary, which provides a Binary typeclass that we’ll implement for our types.

It’s not difficult to write instances by hand, but one of the advantages of using weird recursive types is that the compiler can generate Binary instances for us. Here’s complete code to serialize and deserialize every type we need:

{-# LANGUAGE StandaloneDeriving, TypeSynonymInstances, FlexibleInstances, UndecidableInstances, DeriveGeneric, GeneralizedNewtypeDeriving #-}
module Haskoin.Serialization where

import Haskoin.Types
import Control.Comonad.Cofree
import Crypto.Hash
import Data.Binary
import Data.Binary.Get
import Data.ByteArray
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BSL
import Data.Vector.Binary
import GHC.Generics

instance (Binary (f (Cofree f a)), Binary a) => Binary (Cofree f a) where
instance (Binary a) => Binary (MerkleF a) where
instance Binary BlockHeader where
instance Binary Transaction where
deriving instance Binary Account
deriving instance Binary Block

deriving instance Generic (Cofree f a)
deriving instance Generic (MerkleF a)
deriving instance Generic BlockHeader
deriving instance Generic Transaction

instance Binary HaskoinHash where
  get = do
    mDigest <- digestFromByteString <$> (get :: Get BS.ByteString)
    case mDigest of
      Nothing -> fail "Not a valid digest"
      Just digest -> return digest
  put digest = put $ (convert digest :: BS.ByteString)

deserialize :: BSL.ByteString -> Blockchain
deserialize = decode

serialize :: Blockchain -> BSL.ByteString
serialize = encode

I only included deserialize and serialize to make it clearer what the end result of this module is. Let’s drop them in favor of decode and encode from Data.Binary.

Generic is a way of converting a value into a very lightweight “syntax tree” that can be used by serializers (JSON, XML, Binary, etc.) and many other typeclasses to provide useful default definitions. The Haskell wiki has a good overview. binary uses these Generic instances to define serializers that work on just about anything.

We had to hand-write a Binary instance for HaskoinHash because Digest SHA1 from the Crypto.Hash library didn’t provide it or a Generic instance. That’s okay - digests are pretty much bytestrings anyways, so it was only a few lines.

Here’s how to use them to implement mineOn:

import Crypto.Hash (hashlazy)

mineOn :: TransactionPool -> Account -> Blockchain -> IO Blockchain
mineOn pendingTransactions minerAccount parent = do
  ts <- pendingTransactions
  let block = Block (V.fromList ts)
  let header = BlockHeader { _miner = minerAccount, _parentHash = hashlazy $ encode parent }
  return $ block :< Node header parent

And here’s how to test that this actually works:

testMining :: IO Blockchain
testMining = do
  let txnPool = return []
  chain <- makeGenesis
  chain <- mineOn txnPool 0 chain
  chain <- mineOn txnPool 0 chain
  chain <- mineOn txnPool 0 chain
  chain <- mineOn txnPool 0 chain
  chain <- mineOn txnPool 0 chain
  return chain

-- GHCI
> chain <- testMining
Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = efb3febc87c41fffb673a81ed14a6fb4f736df79}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = 2accb557297850656de70bfc3e13ea92a4ddac29}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = f51e30233feb41a228706d1357892d16af69c03b}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = 0072e83ae8e9e22d5711fd832d350f5a279c1c12}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = c259e771b237769cb6bce9a5ab734c576a6da3e1}) (Block [] :< Genesis)))))
> encode chain
"\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\DC4\239\179\254\188\135\196\US\255\182s\168\RS\209Jo\180\247\&6\223y\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\DC4*\204\181W)xPem\231\v\252>\DC3\234\146\164\221\172)\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\DC4\245\RS0#?\235A\162(pm\DC3W\137-\SYN\175i\192;\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\DC4\NULr\232:\232\233\226-W\DC1\253\131-5\SIZ'\156\FS\DC2\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\SOH\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\DC4\194Y\231q\178\&7v\156\182\188\233\165\171sLWjm\163\225\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL\NUL"
> (decode $ encode chain) :: Blockchain
Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = efb3febc87c41fffb673a81ed14a6fb4f736df79}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = 2accb557297850656de70bfc3e13ea92a4ddac29}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = f51e30233feb41a228706d1357892d16af69c03b}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = 0072e83ae8e9e22d5711fd832d350f5a279c1c12}) (Block [] :< Node (BlockHeader {_miner = Account 0, _parentHash = c259e771b237769cb6bce9a5ab734c576a6da3e1}) (Block [] :< Genesis)))))

If you’re testing serialization code at home, you may prefer to use the base16-bytestring library to hex-encode your ByteStrings:

> import qualified Data.ByteString.Base16.Lazy as BSL
> chain <- testMining
> BSL.encode $ encode chain
00000000000000000100000000000000000000000014efb3febc87c41fffb673a81ed14a6fb4f736df79000000000000000001000000000000000000000000142accb557297850656de70bfc3e13ea92a4ddac2900000000000000000100000000000000000000000014f51e30233feb41a228706d1357892d16af69c03b000000000000000001000000000000000000000000140072e83ae8e9e22d5711fd832d350f5a279c1c1200000000000000000100000000000000000000000014c259e771b237769cb6bce9a5ab734c576a6da3e1000000000000000000

Note that it will probably be a pain for a C programmer trying to follow our serialization/deserialization code, because the byte-wrangling is hidden in a lot of very generic code. If you want to produce a spec for people to use (always a good idea), you'll probably want to hand-roll your serialization code so it's self-documenting.
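For contrast, here is what a tiny hand-rolled, self-documenting serializer might look like, sketched in Python. The field layout is invented for this example; it is not Haskoin's actual wire format.

```python
import struct

def serialize_transaction(from_acct, to_acct, amount):
    # Invented wire format: three 8-byte big-endian signed integers,
    # in the order: from-account | to-account | amount.
    return struct.pack(">qqq", from_acct, to_acct, amount)

def deserialize_transaction(data):
    # Returns the (from, to, amount) triple back out of the 24 bytes.
    return struct.unpack(">qqq", data)

blob = serialize_transaction(0, 1, 1000)
print(len(blob), deserialize_transaction(blob))  # 24 (0, 1, 1000)
```

Every byte's meaning is visible in the format string, which is exactly the property a spec reader wants.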

There are a couple mining-related problems with this so-called blockchain:

  1. People can have negative balances, so people can create a “scapegoat account” that they transfer unlimited amounts of money from.
  2. There is no transaction limiting, so someone could create a huge block and run our miners out of memory.
  3. We always mine empty blocks, so nobody can transfer money.
  4. There is no difficulty, so miners aren’t proving they’ve done any work.

I say that these are all mining problems because the code that miners run is going to deal with them.

#3 we’ll wait for Networking to solve. The rest we can do now.

To solve #1, we need account balances for anyone with a transaction that we’re mining a block for. Let’s go ahead and calculate all possible account balances:

import qualified Data.Map as M

blockReward = 1000

balances :: Blockchain -> M.Map Account Integer
balances bc =
  let txns = toList $ mconcat $ toList bc
      debits = map (\Transaction{ _from = acc, _amount = amount } -> (acc, -amount)) txns
      credits = map (\Transaction{ _to = acc, _amount = amount } -> (acc, amount)) txns
      minings = map (\h -> (_miner h, blockReward)) $ headers bc
  in M.fromListWith (+) $ debits ++ credits ++ minings

And then once we have a parent blockchain, we know how to filter out the invalid transactions:

validTransactions :: Blockchain -> [Transaction] -> [Transaction]
validTransactions bc txns =
  let accounts = balances bc
      validTxn txn = case M.lookup (_from txn) accounts of
        Nothing -> False
        Just balance -> balance >= _amount txn
  in filter validTxn txns

To solve #2, I’ll let the current miner choose however many transactions he wants to put in his block. That means I’ll put a constant globalTransactionLimit = 1000 at the top that we’ll use when mining, but we won’t verify past blocks using it.

To solve #4, we need to add a nonce field to our BlockHeader that the miner can increment until he finds a good hash. We’ll make it an arbitrarily-large integer to avoid the scenario that no nonce values yield a sufficiently-difficult hash. And since we want to adjust our difficulty so blocks take roughly the same time to mine, we’ll store a timestamp in the header.

import Data.Time.Clock.POSIX

-- Add new fields
data BlockHeader = BlockHeader
  { _miner      :: Account
  , _parentHash :: HaskoinHash
  , _nonce      :: Integer
  , _minedAt    :: POSIXTime
  } deriving (Eq, Show)

-- Add serializers for POSIXTime
instance Binary POSIXTime where
  get = fromInteger <$> (get :: Get Integer)
  put x = put $ (round x :: Integer)

globalTransactionLimit = 1000

mineOn :: TransactionPool -> Account -> Blockchain -> IO Blockchain
mineOn pendingTransactions minerAccount parent = do
  ts <- pendingTransactions
  ts <- return $ validTransactions parent ts
  ts <- return $ take globalTransactionLimit ts
  loop ts 0
  where
    validChain bc = difficulty bc < desiredDifficulty parent
    loop ts nonce = do
      now <- getPOSIXTime
      let header = BlockHeader
            { _miner = minerAccount
            , _parentHash = hashlazy $ encode parent
            , _nonce = nonce
            , _minedAt = now
            }
          block = Block (V.fromList ts)
          candidate = block :< Node header parent
      if validChain candidate
        then return candidate
        else loop ts (nonce + 1)

difficulty :: Blockchain -> Integer
difficulty = undefined

desiredDifficulty :: Blockchain -> Integer
desiredDifficulty = undefined

We enter loop and keep incrementing the counter and fetching the time until we find a candidate with the right difficulty. The actual difficulty of a Blockchain is just its hash converted to an integer:

import Crypto.Number.Serialize (os2ip)

difficulty :: Blockchain -> Integer
difficulty bc = os2ip $ (hashlazy $ encode bc :: HaskoinHash)

How do we know what the right difficulty is? To start with, we’ll calculate the average time-between-blocks for the last 100 blocks:

numBlocksToCalculateDifficulty = 100

blockTimeAverage :: Blockchain -> NominalDiffTime
blockTimeAverage bc = average $ zipWith (-) times (tail times)
  where times = take numBlocksToCalculateDifficulty $ map _minedAt $ headers bc

headers :: Blockchain -> [BlockHeader]
headers (_ :< Genesis) = []
headers (_ :< Node x next) = x : headers next

average :: (Foldable f, Num a, Fractional a, Eq a) => f a -> a
average xs = sum xs / (if d == 0 then 1 else d)
  where d = fromIntegral $ length xs

Let’s have a target time of 10 seconds. Suppose blockTimeAverage bc gives 2 seconds, so we want blocks to take 5 times as long: adjustmentFactor = targetTime / blockTimeAverage bc = 5. Which means we want only 1/5 of the originally-accepted blocks to be accepted.

Since hashes are uniformly-distributed, 1/5 of the original hashes are less than originalDifficulty / 5, which will be our new difficulty. That’s what Bitcoin does: difficulty = oldDifficulty * (2 weeks) / (time for past 2015 blocks).

genesisBlockDifficulty = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
targetTime = 10

-- BEWARE: O(n * k), where k = numBlocksToCalculateDifficulty
desiredDifficulty :: Blockchain -> Integer
desiredDifficulty x = round $ loop x
  where
    loop (_ :< Genesis) = genesisBlockDifficulty
    loop x@(_ :< Node _ xs) = oldDifficulty / adjustmentFactor
      where oldDifficulty = loop xs
            adjustmentFactor = min 4.0 $ targetTime `safeDiv` blockTimeAverage x
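The adjustment arithmetic can be checked in isolation. Here is an illustrative Python transliteration (my own sketch, not part of Haskoin; note that, as in the Haskell code, the adjustment factor is capped at 4.0):

```python
TARGET_TIME = 10.0  # seconds per block, matching targetTime above

def adjust_difficulty(old_difficulty, avg_block_time):
    # Accept proportionally fewer hashes when blocks arrive faster than
    # the target; the factor is capped at 4.0, as in desiredDifficulty.
    adjustment_factor = min(4.0, TARGET_TIME / avg_block_time)
    return old_difficulty / adjustment_factor

print(adjust_difficulty(1000.0, 8.0))  # 800.0 (blocks slightly too fast)
print(adjust_difficulty(1000.0, 2.0))  # 250.0 (the 4.0 cap kicks in)
```

The cap keeps one unlucky sample of fast blocks from cratering the difficulty in a single step.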

Here are a few recent mining times using these calculations:

> exampleChain <- testMining
> exampleChain <- mineOn (return []) 0 exampleChain -- Repeat a bunch of times
> mapM_ print $ map blockTimeAverage $ chains exampleChain
6.61261425s
6.73220925s
7.97893375s
12.96145975s
10.923974s
9.59857375s
7.1819445s
2.2767425s
3.2307515s
7.215131s
15.98277575s

They hover around 10s because targetTime = 10.
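The hovering is a negative-feedback loop, and it is easy to simulate with a toy model (illustrative Python, not Haskoin code; the hash space and hash rate below are invented): assume the expected time to find a hash under the current difficulty is inversely proportional to difficulty times hash rate, and apply the same capped adjustment rule after each block.

```python
TARGET_TIME = 10.0
MAX_HASH = 2 ** 32  # toy hash space

def simulate(initial_difficulty, hash_rate, steps=50):
    # Toy model: expected block time is MAX_HASH / (difficulty * hash_rate).
    difficulty = initial_difficulty
    times = []
    for _ in range(steps):
        block_time = MAX_HASH / (difficulty * hash_rate)
        times.append(block_time)
        # Same rule as desiredDifficulty: capped, proportional correction.
        factor = min(4.0, TARGET_TIME / block_time)
        difficulty /= factor
    return times

times = simulate(initial_difficulty=2 ** 20, hash_rate=1000.0)
print(round(times[0], 3), round(times[-1], 3))  # 4.096 10.0
```

In this noiseless model the loop locks onto the target almost immediately; the real chain wobbles around it because actual mining times are random.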

We’ll save the blockchain on disk, and give people two tools:

  • A tool to mine blocks and create a new chain
  • A tool to list account balances

The first tool is the miner:

{-# LANGUAGE NoImplicitPrelude, OverloadedStrings #-}
module Haskoin.Cli.Mine where

import Haskoin.Mining
import Haskoin.Serialization
import Haskoin.Types
import Protolude
import System.Environment
import Data.Binary
import qualified Data.ByteString.Lazy as BSL
import System.Directory
import Prelude (read)

defaultChainFile = "main.chain"
defaultAccount = "10"

main :: IO ()
main = do
  args <- getArgs
  let (filename, accountS) = case args of
        [] -> (defaultChainFile, defaultAccount)
        [filename] -> (filename, defaultAccount)
        [filename, account] -> (filename, account)
        _ -> panic "Usage: mine [filename] [account]"
      swapFile = filename ++ ".tmp"
      txnPool = return []
      account = Account $ read accountS
  forever $ do
    chain <- loadOrCreate filename makeGenesis :: IO Blockchain
    newChain <- mineOn txnPool account chain
    encodeFile swapFile newChain
    copyFile swapFile filename
    print "Block mined and saved!"

loadOrCreate :: Binary a => FilePath -> (IO a) -> IO a
loadOrCreate filename init = do
  exists <- doesFileExist filename
  if exists
    then decodeFile filename
    else do
      x <- init
      encodeFile filename x
      return x

The second one prints all of the account balances:

{-# LANGUAGE NoImplicitPrelude, OverloadedStrings #-}
module Haskoin.Cli.ListBalances where

import Haskoin.Mining
import Haskoin.Serialization
import Haskoin.Types
import Protolude
import System.Environment
import Data.Binary
import qualified Data.Map as M
import qualified Data.ByteString.Lazy as BSL

defaultChainFile = "main.chain"

main :: IO ()
main = do
  args <- getArgs
  let (filename) = case args of
        [] -> (defaultChainFile)
        [filename] -> (filename)
        _ -> panic "Usage: list-balances [filename]"
  chain <- decodeFile filename :: IO Blockchain
  forM_ (M.toAscList $ balances chain) $ \(account, balance) -> do
    print (account, balance)

Here’s its output:

$ stack exec list-balances
(Account 10,23000)

So I’ve apparently mined 23 blocks just testing stack exec mine.

Conclusion

We developed a simple blockchain data structure. You can browse the repository on Github.

Future Haskoin-related articles may cover

  • Using networking and concurrency primitives to set up a peer-to-peer network.
  • Securing accounts in wallets, so other people can’t transfer money out of your account.
  • Building a ‘blockchain explorer’ website
  • GPU-accelerating our hashing
  • FPGA-accelerating our hashing

Future cryptocurrency-related articles may cover:

  • You may have heard about proof-of-work and proof-of-stake. What about proof-of-proof - where the miners compete to prove novel theorems in an appropriate logic?
  • Adding a Turing-complete scripting language
  • Better ways to parse command line options
  • Building a Bitcoin exchange

IOCCC Flight Simulator


The IOCCC Flight Simulator was the winning entry in the 1998 International Obfuscated C Code Contest. It is a flight simulator in under 2 kilobytes of code, complete with relatively accurate 6-degree-of-freedom dynamics, loadable wireframe scenery, and a small instrument panel.

IOCCC Flight Simulator runs on Unix-like systems with X Windows. As per contest rules, it is in the public domain.

Documentation

Introduction

You have just stepped out of the real world and into the virtual. You are now sitting in the cockpit of a Piper Cherokee airplane, heading north, flying 1000 feet above ground level.

Use the keyboard to fly the airplane. The arrow keys represent the control stick. Press the Up Arrow key to push the stick forward. Press the Left Arrow key to move the stick left, and so on. Press Enter to re-center the stick. Use Page Up and Page Down to increase and decrease the throttle, respectively. (The rudder is automatically coordinated with the turn rate, so rudder pedals are not represented.)

On your display, you will see three instruments in the bottom left corner. The first is the airspeed indicator; it tells you how fast you're going, in knots. The second is the heading indicator, or compass: 0 is north, 90 is east, 180 is south, 270 is west. The third instrument is the altimeter, which measures your height above ground level in feet.

Usage

To use, type:

cat horizon.sc pittsburgh.sc | ./banks

banks is the name of the program (a quirk of IOCCC rules, and no pun intended). horizon.sc and pittsburgh.sc are scenery files.

Features

  • Simulator models a Piper Cherokee, which is a light, single-engine, propeller-driven airplane.
  • The airplane is modeled as a six degree-of-freedom rigid body, accurately reflecting its dynamics (for normal flight conditions, at least).
  • Fly through a virtual 3-D world, while sitting at your X console.
  • Loadable scenery files.
  • Head-up display contains three instruments: a true airspeed indicator, a heading indicator (compass), and an altimeter.
  • Flight controls may be mapped to any keys at compile time by redefining the macros in the build file. Nice if your keyboard doesn't have arrow keys.
  • Time step size can be set at compile time. This is useful to reduce flicker on network X connections. (But be careful: step sizes longer than about 0.03 seconds tend to have numerical stability problems.)
  • Airplane never stalls!
  • Airplane never runs out of fuel!
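The earlier step-size warning (instability above about 0.03 seconds) is classic explicit-Euler behavior: push the time step past the stability limit and the state blows up instead of settling. A toy demonstration of the same effect (my own sketch, nothing to do with the contest code; the decay constant is invented):

```python
def simulate_decay(dt, steps=1000, k=100.0):
    # Explicit Euler on x' = -k*x: each step multiplies x by (1 - dt*k),
    # so the integration is stable only while dt < 2/k.
    x = 1.0
    for _ in range(steps):
        x += dt * (-k * x)
    return x

print(abs(simulate_decay(0.005)) < 1e-100)  # True: decays toward zero
print(abs(simulate_decay(0.03)) > 1e100)    # True: diverges step by step
```

With k = 100 the stability limit is dt = 0.02; at dt = 0.03 each step multiplies the state by -2, so the "solution" doubles in magnitude forever.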

Scenery

Each of the *.sc files is a scenery file. The simulator program reads in the scenery from standard input on startup. You may input more than one scenery file, as long as there are fewer than 1000 total lines of input.

Here is a brief description of the scenery files:

  • horizon.sc - A horizon, nothing more. You will probably always want to input this piece of scenery.
  • mountains.sc - An alternate horizon; a little more mountainous.
  • pittsburgh.sc - Scenery of downtown Pittsburgh. The downtown area is initially located to your right.
  • bb.sc - Simple obstacle course. Try to fly over the buildings and under the bridges.
  • pyramids.sc - Fly over the tombs of the ancient Pharaohs in this (fictitious) Egyptian landscape.
  • river.sc - Follow a flowing river from the sky.

A few examples of how to input scenery:

cat horizon.sc pittsburgh.sc | ./banks
cat mountains.sc bb.sc | ./banks
cat mountains.sc river.sc pyramids.sc | ./banks

You can simulate flying through a cloud bank as well:

./banks < /dev/null

You will usually want at least a horizon, though.

The format of scenery files is simple, by the way. They're just a list of 3-D coordinates, and the simulator simply draws line segments from point to point as listed in the scenery file. 0 0 0 is used to end a series of consecutive line segments. Note that in the coordinate system used, the third coordinate represents altitude in a negative sense: negative numbers are positive altitudes.
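A parser for that format fits in a few lines. This Python sketch is my own, written from the description above rather than taken from the contest distribution:

```python
def parse_scenery(text):
    # One "x y z" triple per line; a "0 0 0" line terminates the current
    # polyline. Returns a list of polylines, each a list of (x, y, z)
    # tuples. Per the docs, negative z means positive altitude.
    polylines, current = [], []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or malformed lines
        x, y, z = map(float, parts)
        if (x, y, z) == (0.0, 0.0, 0.0):
            if current:
                polylines.append(current)
            current = []
        else:
            current.append((x, y, z))
    if current:
        polylines.append(current)
    return polylines

sample = "0 0 -100\n10 0 -100\n0 0 0\n5 5 -50\n6 5 -50\n"
print(parse_scenery(sample))
```

Running it on the sample yields two polylines: a segment at 100 feet and a second segment at 50 feet.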

I'm sure you'll be making your own scenery files very soon!!!

Alternate Build Instructions

Several options must be passed to the compiler to make the build work. The provided build file has the appropriate options set to default values. Use this section if you want to compile with different options.

To map a key to a control, you must pass an option to the compiler in the format "-Dcontrol=key". The possible controls you can map are described in the table below:

Control  Description           Default Key
-------  --------------------- -----------
IT       Open throttle         XK_Page_Up
DT       Close throttle        XK_Page_Down
FD       Move stick forward    XK_Up
BK       Move stick back       XK_Down
LT       Move stick left       XK_Left
RT       Move stick right      XK_Right
CS       Center stick          XK_Enter

Values for the possible keys can be found in the X Windows header file <X11/keysym.h>. This file is most likely a cross-reference to another header, <X11/keysymdef.h>.

You must map all seven controls to keys at compile time, or the compilation will fail.

For example, to map Center Stick to the space-bar, the compile option would be "-DCS=XK_space".

To set the time step size, you must pass the following option to the compiler: "-Ddt=duration", where dt is literal, and where duration is the time in seconds you want the time step to be.

Two things to keep in mind when selecting a time step. Time steps that are too large (more than about 0.03) will cause numerical stability problems and should be avoided. Setting the time step to be smaller than your clock resolution will slow down the simulator, because the system pauses for more time than the simulator expects.

The best advice is to set time step size to your system timer resolution. Try a longer period if you're getting too much flicker.

Screen Shots

Here we are flying towards Downtown Pittsburgh. We can see the Point, several buildings including the USX tower, and several bridges including the Smithfield Street bridge. We see three instruments near the bottom.

http://www.aerojockey.com/images/ioccc1.gif

About the IOCCC Entry

IOCCC stands for "International Obfuscated C Code Contest." It is a quasi-annual contest to see who can write the most unreadable, unintelligible, unmanageable, but legal C program.

In the 1998 IOCCC, my flight simulator won the "Best of Show" prize. Here is the source code to the program:

#include                                     <math.h>
#include                                   <sys/time.h>
#include                                   <X11/Xlib.h>
#include                                  <X11/keysym.h>
                                          double L ,o ,P
                                         ,_=dt,T,Z,D=1,d,
                                         s[999],E,h= 8,I,
                                         J,K,w[999],M,m,O
                                        ,n[999],j=33e-3,i=
                                        1E3,r,t, u,v ,W,S=
                                        74.5,l=221,X=7.26,
                                        a,B,A=32.2,c, F,H;
                                        int N,q, C, y,p,U;
                                       Window z; char f[52]
                                    ; GC k; main(){ Display*e=
 XOpenDisplay( 0); z=RootWindow(e,0); for (XSetForeground(e,k=XCreateGC (e,z,0,0),BlackPixel(e,0))
; scanf("%lf%lf%lf",y +n,w+y, y+s)+1; y ++); XSelectInput(e,z= XCreateSimpleWindow(e,z,0,0,400,400,
0,0,WhitePixel(e,0) ),KeyPressMask); for(XMapWindow(e,z); ; T=sin(O)){ struct timeval G={ 0,dt*1e6}
; K= cos(j); N=1e4; M+= H*_; Z=D*K; F+=_*P; r=E*K; W=cos( O); m=K*W; H=K*T; O+=D*_*F/ K+d/K*E*_; B=
sin(j); a=B*T*D-E*W; XClearWindow(e,z); t=T*E+ D*B*W; j+=d*_*D-_*F*E; P=W*E*B-T*D; for (o+=(I=D*W+E
*T*B,E*d/K *B+v+B/K*F*D)*_; p<y; ){ T=p[s]+i; E=c-p[w]; D=n[p]-L; K=D*m-B*T-H*E; if(p [n]+w[ p]+p[s
]== 0|K <fabs(W=T*r-I*E +D*P) |fabs(D=t *D+Z *T-a *E)> K)N=1e4; else{ q=W/K *4E2+2e2; C= 2E2+4e2/ K
 *D; N-1E4&& XDrawLine(e ,z,k,N ,U,q,C); N=q; U=C; } ++p; } L+=_* (X*t +P*M+m*l); T=X*X+ l*l+M *M;
  XDrawString(e,z,k ,20,380,f,17); D=v/l*15; i+=(B *l-M*r -X*Z)*_; for(; XPending(e); u *=CS!=N){
                                   XEvent z; XNextEvent(e ,&z);
                                       ++*((N=XLookupKeysym
                                         (&z.xkey,0))-IT?
                                         N-LT? UP-N?& E:&
                                         J:& u: &h); --*(
                                         DN -N? N-DT ?N==
                                         RT?&u: & W:&h:&J
                                          ); } m=15*F/l;
                                          c+=(I=M/ l,l*H
                                          +I*M+a*X)*_; H
                                          =A*r+v*X-F*l+(
                                          E=.1+X*4.9/l,t
                                          =T*m/32-I*T/24
                                           )/S; K=F*M+(
                                           h* 1e4/l-(T+
                                           E*5*T*E)/3e2
                                           )/S-X*d-B*A;
                                           a=2.63 /l*d;
                                           X+=( d*l-T/S
                                            *(.19*E +a
                                            *.64+J/1e3
                                            )-M* v +A*
                                            Z)*_; l +=
                                            K *_; W=d;
                                            sprintf(f,
                                            "%5d  %3d"
                                            "%7d",p =l
                                           /1.7,(C=9E3+
                              O*57.3)%0550,(int)i); d+=T*(.45-14/l*
                             X-a*130-J* .14)*_/125e2+F*_*v; P=(T*(47
                             *I-m* 52+E*94 *D-t*.38+u*.21*E) /1e2+W*
                             179*v)/2312; select(p=0,0,0,0,&G); v-=(
                              W*F-T*(.63*m-I*.086+m*E*19-D*25-.11*u
                               )/107e2)*_; D=cos(o); E=sin(o); } }

Note that this program will not compile out of the box. It requires certain compile-time parameters. The following script builds it on my Linux system:

#! /bin/sh
cc banks.c -o banks -DIT=XK_Page_Up -DDT=XK_Page_Down \
        -DUP=XK_Up -DDN=XK_Down -DLT=XK_Left -DRT=XK_Right \
        -DCS=XK_Return -Ddt=0.02 -lm -lX11 -L/usr/X11R6/lib

If you want to try this program, I suggest you download the 1998 IOCCC Winners Distribution.

One of the rules of the contest was that the program could not be longer than 1536 bytes (excluding spaces, tabs, newlines, semicolons, and braces). Needless to say, cramming a flight simulator into such a small file was fairly difficult. I will say that if it weren't for the wonderful property of orthogonal matrices, this flight simulator would not have been possible.
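The property being alluded to: for an orthogonal (rotation) matrix, the transpose is the inverse, so converting between body and world coordinates never requires a general matrix inversion. A quick illustration (my own Python, not from the entry):

```python
import math

def rot_z(theta):
    # 3x3 rotation about the z axis, as nested lists.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

R = rot_z(0.7)
identity = matmul(transpose(R), R)  # R^T R is the identity: R^T == R^-1
print(all(abs(identity[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(3) for j in range(3)))  # True
```

Free inversion-by-transpose saves both code size and arithmetic, which matters a great deal when the whole simulator must fit in 1536 bytes.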

Ideal OS: Rebooting the Desktop Operating System


TL;DR: By the end of this essay I hope to convince you of the following facts. First, that modern desktop operating systems are anything but. They are bloated, slow, and layered with legacy cruft that still functions only thanks to Moore's Law. Second, that innovation in desktop operating systems stopped about 15 years ago and the major players are unlikely to heavily invest in them again. And finally, I hope to convince you that we can and should start over from scratch, learning the lessons of the past.

"Modern" Desktop Operating Systems are Bloated

Consider the Raspberry Pi. For 35 dollars I can buy an amazing computer with four CPU cores, each running over a gigahertz. It also has a 3D accelerator, a gig of RAM, and built-in WiFi, Bluetooth, and Ethernet. For 35 bucks! And yet, for many of the tasks I want to do with it, this Raspberry Pi is no better than the 66 megahertz computer I used in college.

In fact, in some cases it's worse. It took tremendous effort to get 3D-accelerated Doom to work inside of X Windows in the mid-2000s, something that was trivial with mid-1990s Microsoft Windows.

Below is a screenshot of Processing running for the first time on a Raspberry Pi with hardware acceleration, just a couple of years ago. And it was possible only thanks to a completely custom X windows video driver. This driver is still experimental and unreleased, five years after the Raspberry Pi shipped.

Despite the problems of X Windows, the Raspberry Pi has a surprisingly powerful GPU that can do things like the below screenshot, but only once we get X Windows out of the way. (The actual screenshot below is from OS X, but the same code does run on a Pi 3 at 60fps.)

Here's another example. Atom is one of the most popular editors today. Developers love it because it has oodles of plugins, but let us consider how it's written. Atom uses Electron, which is essentially an entire web browser married to a NodeJS runtime. That's two JavaScript engines bundled into a single app. Electron apps use browser drawing APIs, which delegate to native drawing APIs, which then delegate to the GPU (if you're lucky) for the actual drawing. So many layers.

For a long time Atom couldn't open a file larger than 2 megabytes because scrolling would be too slow. They solved it by writing the buffer implementation in C++, essentially removing one layer of indirection.

Even fairly simple apps are pretty complex these days. An email app, like the one above, is conceptually simple. It should just be a few database queries, a text editor, and a module that knows how to communicate with IMAP and SMTP servers. Yet writing a new email client is very difficult and consumes many megabytes on disk, so few people do it. And if you wanted to modify your email client, or at least the one above (Mail.app, the default client for Mac), there is no clean way to extend it. There are no plugins. There is no extension API. This is the result of many layers of cruft and bloat.

No Innovation

Innovation in desktop operating systems is essentially dead. One could argue that it ended sometime in the mid-90s, or even in the 80s with the release of the Mac, but clearly all progress stopped after the smartphone revolution.

Mac OS

Mac OS X was once a shining beacon of new features, with every release showing profound progress and invention. Quartz 2D! Expose! System wide device syncing! Widgets! Today, however, Apple puts little effort into their desktop operating system besides changing the theme every now and then and adding more hooks to their mobile devices.

Apple's newest version of Mac OS X (now renamed macOS in honor of where they were two decades ago) is called High Sierra. What are the banner features we are eagerly awaiting this fall? A new filesystem and a new video encoding format. Really, that's it? Oh, and they added editing back to Photos (which was already there in iPhoto but removed during the upgrade), and Safari will now block autoplay videos.

Apple is the most valuable company in the world and this is the best they can do? Desktop UX just isn't a priority for them.

Microsoft Windows

On the Windows side there has been a flurry of activity as Microsoft tried to reinvent the desktop as a touch operating system for tablets and phones. This was a disaster that they are still recovering from. In the process of this shift they didn't add any features that actually helped desktop users, though they did spend an absurd amount of money creating a custom background image.

Instead of improving the desktop UX they focused on adding new application models with more and more layers on top of the old code. Incidentally, Windows can still run applications from the early 90s.

CMD.exe, the terminal program which essentially still lets you run DOS apps, was only replaced in 2016. And the biggest new feature of the latest Windows 10 release? They added a Linux subsystem. More layers piled on top.

X Windows

X Windows has improved even less than the other two desktop OSes. In fact, it's the very model of non-change. People were complaining about it in the early 90s. I'm glad that I can reskin my GUI toolkit, but how about a system wide clipboard that holds more than one item at a time? That hasn't changed since the 80s!

X added compositing window managers in the mid-2000s, but due to legacy issues it can't be used for anything beyond sliding your windows around.

Wayland is supposed to fix everything, but it's been almost a decade in development and still isn't ready for prime time. Compatibility with old stuff is really hard. I think Apple had the right idea when they bundled the old macOS up in an emulator called Classic, fire-walling it off from the new code.

Work Stations?

Fundamentally desktop operating systems became easier to use as they were adopted by the mass market; then the mass market moved to smartphones and all interest in improving the desktop interface stopped.

I can't blame Apple and Microsoft (and now Google) for this. 3 billion smartphones replaced every two years is a far bigger market than a few hundred million desktops and laptops replaced every five.

I think we need to take back the desktop operating system experience. We used to call these things workstations. If the desktop is freed from being the OS for the masses, then it can go back to being an OS for work.

Things we don't have in 2017

This is the year 2017. Let's consider things that really should exist, but don't for some reason.

Why can I dock and undock tabs in my web browser or in my file manager, but I can't dock a tab between the two apps? There is no technical reason why this shouldn't be possible. Application windows are just bitmaps at the end of the day, but the OS guys haven't built it because it's not a priority.

Why can't I have a file in two places at once on my filesystem? Why is it fundamentally hierarchical? Why can't I sort by tags and metadata? Database filesystems have existed for decades. Microsoft tried to build one with WinFS, but it was removed from Vista before it shipped thanks to internal conflicts. BeOS shipped one twenty years ago. Why don't we have them in our desktop OSes today?

Any web app can be zoomed. I can just hit command + and the text grows bigger. Everything inside the window automatically rescales to adapt. Why don't my native apps do that? Why can't I have one window big and another small? Or even scale them automatically as I move between the windows? All of these things are trivial to do with a compositing window manager, which has been commonplace for well over a decade.

Limited Interaction

My computer has a mouse, keyboard, tilt sensors, light sensors, two cameras, three microphones, and an array of bluetooth accessories; yet only the first two are used as general input devices. Why can't I speak commands to my computer, or have it watch as I draw signs in the air, or better yet, watch as I work and tell me when I'm tired and should take a break?

Why can't my computer watch my eyes to see what I'm reading, or scan what I'm holding in my hands using some of that cool AR technology coming to my phone? Some of these features do exist as isolated applications, but they aren't system wide and they aren't programmable.

Why can't my Macbook Pro use Bluetooth for talking to interesting HID devices instead of syncing to my Apple Watch? Oh wait, my Mac can't sync to my Apple Watch. Another place where my desktop plays second fiddle to my phone.

Why can't my computer use anything other than the screen for output? My new Razer laptop has an RGB light embedded under every key, and yet it's only used for waves of color. How about we use these LEDs for something useful! (via Bjorn Stahl, I think).

Application Silos

Essentially every application on my computer is a silo. Each application has its own part of the filesystem, its own config system, and its own preferences, database, file formats and search algorithms. Even its own set of key bindings. This is an incredible amount of duplicated effort.

More importantly, the lack of communication between applications makes it very difficult to get them to coordinate. The founding principle of Unix was small tools that work together, but X Windows doesn't enable that at all.

Build for 1984

So why are our computers still so clunky? Essentially because they were built for 1984. The Desktop GUI was invented at a time when most people created a document from scratch, saved it, and printed it. If you were lucky you stored the document on a shared filesystem or emailed it to someone. That was it. The GUI was built to handle tasks that were previously done with paper.

The problem is that we live in 2017. We don't work the same way we did in 1984. In a typical day I will bring in code from several remote sites, construct some tests, and generate a data structure representing the result, which is then sent out to the internet for others to use. Import, synthesize, export.

I create VR content. I remix images. I send messages to a dozen social networks. I make the perfect play list from a selection of 30k songs. I process orders of magnitude more data from more locations than I did only 20 years ago, much less 40 years ago when these concepts were invented. The desktop metaphor just doesn't scale to today's tasks. I need a computer to help me do modern work.

We need a modern workstation

So now we come to the speculative part. Suppose we actually had the resources, and had a way to address (or ignore) backwards compatibility. Suppose we actually built something to redesign the desktop around modern work practices. How would we do it?

We should start by getting rid of things that don't work very well.

  • Traditional filesystems are hierarchical, slow to search, and don't natively store all of the metadata we need.
  • All current IPC mechanisms. There are too many ways for programs to communicate: pipes, sockets, shared memory, RPC, kernel calls, drag and drop, cut and paste.
  • Command line interfaces don't fit modern application usage. We simply can't do everything with pure text. I'd like to pipe my Skype call to a video analysis service while I'm chatting, but I can't really run a video stream through awk or sed.
  • Window managers on traditional desktops are not context or content aware, and they are not controllable by other programs.
  • Native applications are heavyweight, take a long time to develop, and are very siloed.

So what does that leave us with? Not much. We have a kernel and device drivers. We can keep a reliable filesystem, but it won't be exposed to end users or applications. Now let's add some things back in.

Document Database

Start with a system wide document database. Wouldn't it be easier to build a new email client if the database was already built for you? The UI would only be a few lines of code. In fact, many common applications are just text editors combined with data queries. Consider iTunes, Address Book, Calendar, Alarms, Messaging, Evernote, Todo list, Bookmarks, Browser History, Password Database, and Photo manager. All of these are backed by their own unique datastore. Such wasted effort, and a block to interoperability.

BeOS proved that a database filesystem could really function and provide incredible advantages. We need to bring it back.

A document database filesystem has many advantages over a traditional one. Not only can 'files' exist in more than one place and become easily searchable; having a guaranteed performant database also makes app building far easier.

Consider iTunes. iTunes stores the actual mp3 files on disk, but all metadata in a private database. Having two sources of truth causes endless problems. If you add a new song on disk you must manually tell iTunes to rescan it. If you want to make a program that works with the song database you have to reverse engineer iTunes DB format, and pray that Apple doesn't change it. All of these problems go away with a single system wide database.
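To make the single-source-of-truth idea concrete, here is a minimal Go sketch of a system-wide document store. The names here (Doc, DB, Put, Query) are invented for illustration, not any real API. A 'file' is just a bag of fields, so one song can match any number of queries at once and there is no second, private metadata database to keep in sync:

```go
package main

import "fmt"

// Doc is a hypothetical system-wide document: a bag of typed fields
// rather than an opaque byte stream at one path in a hierarchy.
type Doc map[string]interface{}

// DB sketches the one store every app shares.
type DB struct{ docs []Doc }

func (db *DB) Put(d Doc) { db.docs = append(db.docs, d) }

// Query returns every document whose fields match all given values.
// "Existing in two places" is just matching two different queries.
func (db *DB) Query(match Doc) []Doc {
	var out []Doc
	for _, d := range db.docs {
		ok := true
		for k, v := range match {
			if d[k] != v {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	db := &DB{}
	db.Put(Doc{"type": "song", "artist": "Beatles", "title": "Help!"})
	db.Put(Doc{"type": "email", "subject": "hi"})
	// a music player and a file browser would issue queries like this
	fmt.Println(len(db.Query(Doc{"type": "song"})))
}
```

A real store would need indexes and persistence, but the point is the shape of the API: apps describe what they want, never where it lives.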

Message Bus

A message bus will be the only kind of IPC. We get rid of sockets, files, pipes, ioctrls, shared memory, semaphores, and everything else. All communication is through a message bus. This gives us one place to manage security and enables lots of interesting features through clever proxying.

In reality we probably would continue to have some of these available as options for apps that need it, like sockets for a webbrowser, but all communication to the system and between apps should be messages.
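Here is a toy in-process sketch of such a bus in Go, assuming a made-up Msg/Bus API. A real implementation would live in the kernel or a privileged daemon, and the per-app security checks and proxying described above would sit inside Publish:

```go
package main

import "fmt"

// Msg is the only unit of communication: a topic plus a payload.
type Msg struct {
	Topic string
	Body  map[string]interface{}
}

// Bus routes every message through one place, which is where
// permission checks, logging, and throttling proxies would live.
type Bus struct {
	subs map[string][]chan Msg
}

func NewBus() *Bus { return &Bus{subs: map[string][]chan Msg{}} }

// Subscribe registers interest in a topic and returns a channel
// the subscriber reads messages from.
func (b *Bus) Subscribe(topic string) chan Msg {
	ch := make(chan Msg, 16)
	b.subs[topic] = append(b.subs[topic], ch)
	return ch
}

// Publish delivers a message to every subscriber of its topic.
func (b *Bus) Publish(m Msg) {
	for _, ch := range b.subs[m.Topic] {
		ch <- m
	}
}

func main() {
	bus := NewBus()
	play := bus.Subscribe("mp3.play")
	// any app can ask the mp3 service to play a song, with no
	// direct access to the audio hardware or the file system
	bus.Publish(Msg{Topic: "mp3.play", Body: map[string]interface{}{"title": "Help!"}})
	fmt.Println((<-play).Body["title"])
}
```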

Compositor

Now we can add in a compositing window manager that really just moves surfaces around in 3D, transforms coordinates, and is controlled through messages. Most of what a typical window manager does, like arranging windows, overlaying notifications, and determining which window has focus, can actually be done by other programs that just send messages to the compositor to do the real work.

This means the compositor is heavily integrated with the graphics driver, which is essential to making such a system fast. Below is the diagram for Wayland, the compositor which will eventually become the default for desktop Linux.

Applications would do their drawing by requesting a graphics surface from the compositor. When they finish their drawing and are ready to update, they just send a message saying: please repaint me. In practice we'd probably have a few types of surfaces for 2D and 3D graphics, and possibly raw framebuffers. The important thing is that at the end of the day it is the compositor which controls what ends up on the real screen, and when. If one app goes crazy, the compositor can throttle its repaints to ensure the rest of the system stays live.
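The repaint protocol can be sketched in a few lines of Go (Surface, Compositor, RequestRepaint, and Frame are invented names). The key property is that apps only mark themselves dirty; the compositor alone decides what reaches the screen each frame, which is exactly where throttling a runaway app would happen:

```go
package main

import "fmt"

// Surface is what an app draws into; it never touches the screen directly.
type Surface struct {
	ID    string
	dirty bool
}

// Compositor owns every surface and the real display.
type Compositor struct {
	surfaces map[string]*Surface
}

// RequestRepaint is the "please repaint me" message from an app.
func (c *Compositor) RequestRepaint(id string) {
	if s, ok := c.surfaces[id]; ok {
		s.dirty = true
	}
}

// Frame runs once per vsync: flush dirty surfaces to the screen.
// A throttling policy for misbehaving apps would be applied here.
func (c *Compositor) Frame() int {
	n := 0
	for _, s := range c.surfaces {
		if s.dirty {
			s.dirty = false
			n++
		}
	}
	return n
}

func main() {
	c := &Compositor{surfaces: map[string]*Surface{"mail": {ID: "mail"}}}
	c.RequestRepaint("mail")
	fmt.Println(c.Frame()) // one surface repainted this frame
}
```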

Apps become Modules

All applications become small modules that communicate through the message bus for everything. Everything. No more file system access. No hardware access. Everything is a message.

If you want to play an mp3 you send a play message to an mp3 service. You draw by having the compositor do it for you. This separation makes the system far more secure. In Linux terms each app would be completely isolated through user permissions and chroot, or perhaps all the way to docker containers or virtual machines. There's a lot of details to work out, but this is very doable today.

Module apps would be far easier to write than today. If the database is the single source of truth then a lot of the general work of copying data in and out of memory can go away. In the music player example, instead of the search field loading up data and filtering it to show the list, the search field just specifies a query. The list is then bound to this query and data automatically flows in. If another application adds a song to the database that matches the search query, the music player UI will automatically update. This is all without any extra work from the app developer. Live queries make so many things easier and more robust.
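A live query can be sketched as a stored predicate plus a callback, assuming the hypothetical Store type below: the UI binds its list to a query once, and the store pushes matches on every insert, so the app never re-fetches anything itself.

```go
package main

import "fmt"

type Doc map[string]interface{}

// Store pushes matching documents to subscribers on every insert,
// so bound UI lists update with no app code re-running queries.
type Store struct {
	docs []Doc
	subs []func(Doc)
}

// Watch registers a live query: a match predicate plus a callback
// invoked for every future document that satisfies it.
func (s *Store) Watch(match func(Doc) bool, onAdd func(Doc)) {
	s.subs = append(s.subs, func(d Doc) {
		if match(d) {
			onAdd(d)
		}
	})
}

func (s *Store) Insert(d Doc) {
	s.docs = append(s.docs, d)
	for _, f := range s.subs {
		f(d)
	}
}

func main() {
	s := &Store{}
	var playlist []Doc
	// the music player binds its list to a query, not to loaded data
	s.Watch(func(d Doc) bool { return d["type"] == "song" },
		func(d Doc) { playlist = append(playlist, d) })
	s.Insert(Doc{"type": "song", "title": "Help!"})
	s.Insert(Doc{"type": "email", "subject": "hi"})
	fmt.Println(len(playlist)) // the email never reaches the playlist
}
```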

Rebuild the Apps

From this base we should be able to build everything we need. However, this also means we have to rebuild everything from scratch. Higher level constructs built on top of the database would make many applications a lot easier to rebuild. Let's look at some examples.

Email. If we separate the standard email client into GUI and networking modules, which communicate solely through messages, then building a client becomes a lot easier. The GUI doesn't have to know anything about GMail vs Yahoo mail or how to process SMTP error messages. It simply looks for documents with type 'email'. When the GUI wants to send a message, it marks an email with the property outgoing=true. A headless module will listen for outgoing emails and do the actual SMTP processing.

Splitting the email app into components makes replacing one part far easier. You could build a new email frontend in an afternoon without having to rebuild the networking parts. You could build a spam detector that has no UI at all; it just listens for incoming messages, processes them, and marks the bad ones with a spam tag. It doesn't know or care how spam is visualized. It just does one thing well.

Email filters could do other interesting things. Perhaps you send an email to your bot with the command 'play beatles'. A tiny module looks for this incoming email, sends another message to the mp3 module for playing the music, then marks the email as deleted.
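The whole headless networking module from the email example reduces to a query for outgoing=true. In this Go sketch, deliverOutgoing and sendSMTP are stand-ins rather than a real mail library; the point is that the GUI never touches SMTP, it only flips a field in the database:

```go
package main

import "fmt"

type Doc map[string]interface{}

// sendSMTP stands in for real SMTP delivery and marks the email sent.
func sendSMTP(d Doc) { d["sent"] = true }

// deliverOutgoing is the entire headless mail module: find every email
// the GUI marked as outgoing and not yet sent, and deliver it.
func deliverOutgoing(docs []Doc) int {
	n := 0
	for _, d := range docs {
		if d["type"] == "email" && d["outgoing"] == true && d["sent"] != true {
			sendSMTP(d)
			n++
		}
	}
	return n
}

func main() {
	inbox := []Doc{
		{"type": "email", "outgoing": true, "to": "mom@example.com"},
		{"type": "email", "outgoing": false},
	}
	fmt.Println(deliverOutgoing(inbox)) // delivers just the marked one
}
```

A spam filter or the 'play beatles' bot above would have exactly the same shape: a query, a transformation, and a field update.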

Once everything becomes a database query the entire system becomes more flexible and hackable.

The command line

I know I said we would get rid of the command line before, but I take that back. I really like the command line as an interface sometimes; it's the pure-text nature that bothers me. Instead of chaining CLI apps together with text streams, we need something richer, like serialized object streams (think JSON but more efficient). Then we start getting some real power.

Consider the following tasks:

  • I want to use my laptop as an amplified microphone. I speak into it and the sound comes out of a bluetooth speaker on the other side of the room.
  • Whenever I tweet something with the hashtag #mom I want a copy sent, by email, to my mother.
  • I want to use my iPhone, sitting on a stand made of legos, as a microscope. It streams to my laptop, which has controls to record, pause, zoom, and rebroadcast as a live stream to YouTube.
  • I want to make a simple bayesian filter which detects emails from my power company, adds the tag 'utility', logs into the website, fetches the current bill amount and due date, and adds an entry to my calendar.

Each of these tasks is conceptually simple but think of how much code you would have to write to actually make this work today. With a CLI built on object streams each of these examples could become a one or two line script.
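As a sketch of what object streams buy you over text streams, here are two generic primitives in Go (filter and mapF are invented names; a real shell would wire them between processes over the message bus). A pipeline like "keep photos rated 3 or higher, then resize them" becomes a chain of these calls on typed records instead of a fragile awk script:

```go
package main

import "fmt"

// Obj is one record in an object stream: the richer replacement for
// a line of text flowing between CLI tools.
type Obj map[string]interface{}

// filter keeps only the records matching a predicate.
func filter(in []Obj, keep func(Obj) bool) []Obj {
	var out []Obj
	for _, o := range in {
		if keep(o) {
			out = append(out, o)
		}
	}
	return out
}

// mapF transforms every record in the stream.
func mapF(in []Obj, f func(Obj) Obj) []Obj {
	out := make([]Obj, len(in))
	for i, o := range in {
		out[i] = f(o)
	}
	return out
}

func main() {
	photos := []Obj{
		{"name": "a.jpg", "rating": 4},
		{"name": "b.jpg", "rating": 1},
	}
	// photos | filter rating>=3 | resize 1000
	best := mapF(
		filter(photos, func(o Obj) bool { return o["rating"].(int) >= 3 }),
		func(o Obj) Obj { o["size"] = 1000; return o })
	fmt.Println(len(best))
}
```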

We could do complex operations like 'find all photos taken in the last four years within 50 miles of Yosemite, and that have a star rating of 3 or higher, resize them to be 1000px on the longest size, then upload them to a Flickr album called "Best of Yosemite", and link to the album on Facebook'. This could be done with built in tools, no custom coding required, just by combining a few primitives.

Apple actually built such a system. It's called Automator. You can visually create powerful workflows. They never promote it, have started deprecating the Applescript bindings which make it work underneath, and recently laid off or transferred all of the members of the Automator team. Ah well.

System Wide Semantic Keybindings

Now that we've rebuilt the world, what new things could we do?

Services are available system wide. This means we could have a keybinding service which gives the user one place to set up keybindings. It also means we could have a richer sense of what a keybinding is. Instead of mapping a key to a function in a particular program, a key binding maps a key combo to a command message. All applications that work on documents could have a 'save' or 'new' command. The keybinding service would be responsible for turning a control-S into the save command. I call these semantic keybindings.

Semantic keybindings would also make it a lot easier to support alternate forms of input. Suppose you built a weird Arduino button thing that speaks every time you mash a button. You wouldn't need to write any custom code. Just make the arduino send in a new keypress event, then map it to a play audio message in the bindings editor. Turn a digital pot into a custom scroll wheel. Your UI is now fully hackable.
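A semantic keybinding table is tiny to model. In this Go sketch (Keymap and the event and command names are all hypothetical), anything that can emit an event string, whether a keyboard, an Arduino button, or a gesture recognizer, resolves to the same command messages:

```go
package main

import "fmt"

// Keymap maps input events to semantic command messages. The event
// can come from any source: a keyboard, DIY hardware, a gesture.
type Keymap map[string]string

// Resolve turns a raw input event into a command, if one is bound.
func (k Keymap) Resolve(event string) (string, bool) {
	cmd, ok := k[event]
	return cmd, ok
}

func main() {
	km := Keymap{
		"ctrl-s":           "doc.save",   // every document app understands this
		"arduino-button-3": "audio.play", // hypothetical custom input device
	}
	if cmd, ok := km.Resolve("ctrl-s"); ok {
		// the binding service would publish this command on the bus
		fmt.Println(cmd)
	}
}
```

Because apps receive commands rather than raw key events, a screen reader or a custom scroll wheel slots in by editing this table, not by patching every application.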

I need to do some more research in this area, but I suspect semantic keybindings would make screen readers and other accessibility software easier to build.

Windows

In our new OS every window is tab dockable. Or side dockable. Or something else. The apps don't care. We have a lot of freedom to explore here.

The old MacOS 8 had a form of tabbed windows, at least for Finder windows, where you could dock them at the bottom of the screen for easy access. Another cool thing that was left behind in the transition to Mac OS X.

In the screenshot below the user is lifting the edge of a window up to see what's underneath. That's super cool!

This is an example from Ametista: a mini-toolkit for exploring new window management techniques, a research paper by Nicolas Roussel.

Since the system fully controls the environment of all apps, it could enforce security restrictions and show that to the user. A trusted system app could have a green border. A new app downloaded from the internet would have a red border. An app with an unknown origin could have a black border, or just not be shown at all. Many kinds of spoofing attacks become impossible.

Smart copy and paste

When you copy in one window then shift to another, the computer knows that you just copied something. It can now use this knowledge to do something useful, like automatically shift the first window to the side, but still visible, and render the selected text in glowing green. This keeps the user's mind on the task at hand. When the user pastes into a new window we could show the green text actually leap from one window to another.

But why stop there? Let's make a clipboard that can hold more than one item at a time. We have gigs of RAM. Let's use it. When I copy something, why do I have to remember what I copied before I paste it? The clipboard isn't actually visible anywhere. Let's fix that.

The clipboard should be visible on screen as some sort of a shelf that shows the recent items I've copied. I can visit three webpages, copy each url to the clipboard, then go back to my document and paste all three at once.

This clipboard viewer can let me scroll through my entire clipping history. I could search and filter it with tags. I could 'pin' my favorites for use later.

Classic macOS actually had an amazing tool built into it called [name], but it was dropped in the shift to OS X. We had the future decades ago! Let's bring it back.

Working Sets

And finally we get to what I think is the most powerful metaphor change in our new Ideal OS. In the new system all applications are tiny isolated things which only know what the system tells them. If they treat the database as the single source of truth, and the database itself is versioned, and our window manager is fully hackable... then some really interesting things become possible.

Usually I have a split between personal files and files for work. I tend to use separate folders, accounts, and sometimes separate computers. In the Ideal OS my files could actually be separated by the OS. I could have one screen up with my home email, and another screen with my work email. These are the exact same app, just initialized with different query settings.

When I open a file browser on the home screen it only shows files designated as home projects. If I create a document on my work screen the new file is automatically tagged as being work only. Managing all of this is trivial; just extra fields in the database.

Researchers at Georgia Tech actually built a version of this in their research paper: Giornata: Re-Envisioning the Desktop Metaphor to Support Activities in Knowledge Work.

Now let's take things one step further. If everything is versioned, even GUI settings and window positions (since it's all stored in the database), I could take a snapshot of a screen. This would store the current state of everything, even my keybindings. I can continue working, but if I want I could roll back to that snapshot. Or I could view the old snapshot and restore it to a new screen. Now I have essentially created a 'template' that I can use over and over whenever I start a new project. This template can contain anything I want: email settings, chat history, todos, code, issue windows, or even a GitHub view.
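Since all state lives in one versioned database, a workspace snapshot is just a named copy of that state. A minimal Go sketch of the template-and-rollback idea (State, Snapshot, and Restore are invented names; a real system would use copy-on-write rather than full copies):

```go
package main

import "fmt"

// State is everything the database holds: documents, window
// positions, keybindings. Here it's a flat map for illustration.
type State map[string]string

// System keeps the live state plus named snapshots of it.
type System struct {
	live  State
	snaps map[string]State
}

// Snapshot deep-copies the live state under a name.
func (s *System) Snapshot(name string) {
	cp := State{}
	for k, v := range s.live {
		cp[k] = v
	}
	s.snaps[name] = cp
}

// Restore replaces the live state with a named snapshot.
func (s *System) Restore(name string) {
	s.live = State{}
	for k, v := range s.snaps[name] {
		s.live[k] = v
	}
}

func main() {
	sys := &System{live: State{"window.mail": "left"}, snaps: map[string]State{}}
	sys.Snapshot("work-template")
	sys.live["window.mail"] = "right" // keep working...
	sys.Restore("work-template")      // ...then roll back
	fmt.Println(sys.live["window.mail"])
}
```

Forking the whole system's state, as with a git repo, is the same operation: restore a snapshot into a fresh screen instead of over the live one.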

Now we can essentially treat all state in the computer like a github repo, with the ability to fork the state of the entire system. I think this would be huge. People would exchange useful workspaces online much as they do with Docker images. People could tweak their workflows and add useful scripts embedded into the workspaces. The possibilities really are amazing.

None of this is New

So that's it. The vision. All of this is built on three things: a system wide versioned realtime database, a system wide realtime message bus, and a programmable compositor.

I want to stress that absolutely nothing I've talked about here is new. I didn't come up with any of them. All of these ideas are years or decades old. Database filesystems were pioneered by BeOS. Plan 9 used a single mechanism for all IPC. Oberon let you customize the environment entirely within an editable document. And of course tons and tons of fascinating research papers.

Why don't we have it?

None of this is new. And yet we still don't have it? Why is that?

I suspect the root cause is simply that building a successful operating system is hard. It's far better to extend what exists than create something new; but extension means you are limited by choices made in the past.

Could we actually build this? I suspect not. No one has done it because, quite honestly, there is no money in it. And without money there simply aren't enough resources to build it.

However, if we somehow put together a team to build it, or at least decided to make a runnable prototype, I would start by targeting a platform with a fixed, known set of hardware and existing device drivers. Driver support has always been the downfall of desktop Linux. A Raspberry Pi 3 would be an excellent choice.

So now my question to you is: do you think this idea is worth pursuing, at least to the runnable prototype stage? Would you contribute to such a project? What would need to already be working before you'd be willing to test it out? And of course, what should we call it?

If you are interested in joining the discussion about the future of desktop UX, please sign up to our new group Ideal OS Design.

Thanks, Josh


Poland's oldest university denies Google's right to patent Polish coding concept


The Jagiellonian University will demand the withdrawal of a patent application filed by Google in the US for a solution developed by Dr. Jaroslaw Duda, an employee and lecturer at the university, the university's spokesman Adrian Ochalik told PAP.

On Friday, the private broadcaster Radio ZET reported on the issue, which involves a patent application on Asymmetric Numeral Systems coding (ANS), a method that allows data compression in computers and other electronic devices. It is currently used by Apple, Facebook and Google; iPhones and Macintosh computers use ANS to store data.

Several years ago, Duda, a lecturer at the Faculty of Mathematics and Computer Science at the Jagiellonian University in Cracow, posted his method on the Internet. "I'm a scientist. I didn't patent this method as I believe such concepts should be free and accessible to everyone", Dr. Duda told PAP.

Duda added that since 2014 he had been communicating with Google via e-mail and a public forum - and helping the IT giant to adapt the ANS method for video file compression. "The patent application, filed in the USA, contains exactly the same concepts I wrote for Google. (...) I never meant anyone - including Google - to limit access to this solution by patenting it", underlined Dr. Duda. "I intend to file an objection with the US Patent and Trademark Office”, he said.

The Jagiellonian University supports Duda. "We understand the original, idealistic intention of our employee, (...) who wanted the ANS coding method to remain accessible to the public free of charge. Therefore, filing the patent application with the American patent office without Dr. Duda's prior consent may be regarded as controversial both in business and ethical terms", the University's spokesperson Adrian Ochalik told PAP. "We will demand the withdrawal of the patent application by Google, as just like the code's author, we would not like the access to be limited by any means", he added.

In 2016, an attempt to patent Duda's solution was also made in Great Britain; however, the British court ruled that freely available online content could not be subject to a patent procedure. (PAP)

Golymer – Create HTML custom elements with Go (GopherJS)



Create HTML custom elements with go (gopherjs)

With golymer you can create your own HTML custom elements, just by registering a Go struct. The innerHTML of the shadow DOM has automatic data bindings to the struct fields (and fields of the struct fields ...).

It's unstable, things will break in the future. golymer currently works only in Chrome (some webcomponent polyfills will be needed for custom elements to work in other browsers).

Contribution of all kind is welcome. Tips for improvement or api simplification also :)

package main

import "github.com/microo8/golymer"

const myAwesomeTemplate = `
<style>
	:host {
		background-color: blue;
		width: 500px;
		height: [[FooAttr]]px;
	}
</style>
<p>[[privateProperty]]</p>
<input type="text" value="{{BarAttr}}"/>
`

type MyAwesomeElement struct {
	golymer.Element
	FooAttr         int
	BarAttr         string
	privateProperty float64
}

func NewMyAwesomeElement() *MyAwesomeElement {
	e := &MyAwesomeElement{
		FooAttr: 800,
	}
	e.Template = myAwesomeTemplate
	return e
}

func main() {
	//pass the element constructor to the Define function
	err := golymer.Define(NewMyAwesomeElement)
	if err != nil {
		panic(err)
	}
}

Then just run $ gopherjs build, import the generated script into your HTML with <script src="my_awesome_element.js"></script>, and you can use your new element:

<my-awesome-element foo-attr="1" bar-attr="hello"></my-awesome-element>

define an element

To define your own custom element, create a struct that embeds the golymer.Element struct, plus a function that is the constructor for the struct. Then pass the constructor to the golymer.Define function. This is a minimal example:

type MyElem struct {
	golymer.Element
}

func NewMyElem() *MyElem {
	return new(MyElem)
}

func init() {
	err := golymer.Define(NewMyElem)
	if err != nil {
		panic(err)
	}
}

The struct name, in CamelCase, is converted to kebab-case. Because HTML custom elements must have at least one dash in their name, the struct name must also have at least one "hump" in its camel-case name (MyElem -> my-elem). So, for example, a struct named Foo will not be defined and the Define function will return an error.

Also, the constructor function must have a specific shape: it must take no arguments and return a pointer to a struct that embeds the golymer.Element struct.

element attributes

golymer creates attributes on the custom element from the exported fields and syncs the values.

type MyElem struct {
	golymer.Element
	ExportedField   string
	unexportedField int
	Foo             float64
	Bar             bool
}

Exported fields get attributes on the element. This makes it possible to declaratively set the API of the new element. The attribute names are also converted to kebab-case.

<my-elem exported-field="value" foo="3.14" bar="true"></my-elem>

lifecycle callbacks

golymer.Element implements the golymer.CustomElement interface. It's an interface for the custom element's lifecycle in the DOM.

ConnectedCallback() is called when the element is connected to the DOM. Override this callback to set some fields or spin up some goroutines, but remember to also call the golymer.Element implementation (myElem.Element.ConnectedCallback()).

DisconnectedCallback() called when the element is disconnected from the DOM. Use this to release some resources, or stop goroutines.

AttributeChangedCallback(attributeName string, oldValue string, newValue string, namespace string) is called when an observed attribute value changes. golymer automatically observes all exported fields. When overriding this, also remember to call the golymer.Element callback (myElem.Element.AttributeChangedCallback(attributeName, oldValue, newValue, namespace)).

template

The innerHTML of the shadow DOM in your new custom element is just a string that must be assigned to the Element.Template field in the constructor, e.g.:

func NewMyElem() *MyElem {
	e := new(MyElem)
	e.Template = `<h1>Hello golymer</h1>`
	return e
}

The element will then have a shadow DOM whose innerHTML is set from the Template field at the connectedCallback.

one way data bindings

golymer has built-in data bindings. One-way data bindings are used for presenting a struct field's value. To define a one-way data binding, use double square brackets with a path to the field ([[Field]] or [[subObject.Field]]), e.g.:

<p>[[Text]]</p>

where the host struct has a Text field. The path in brackets can also be nested, like subObject.subSubObject.Field. The field value is then converted to its string representation. One-way data bindings can be used in text nodes, like in the example above, and also in element attributes, e.g. <div style="display: [[MyDisplay]];"></div>

Every time the field's value changes, the template is automatically updated. Changing the Text field's value, e.g. myElem.Text = "foo", also changes the <p> element's innerHTML.

two way data bindings

Two-way data bindings are declared with double curly brackets ({{Field}} or {{subObject.Field}}) and work only in attributes of elements in the template. Every time the element's attribute changes, the declared struct field is changed as well. golymer also includes a workaround for HTML input elements, so it is possible to bind directly to the value attribute.

<input id="username" name="username" type="text" value="{{Username}}">

Changing elem.Username changes input.value. Conversely, when the value attribute changes, whether via document.getElementById("username").setAttribute("value", "newValue") or because the user types some text, elem.Username is updated as well.

connecting to events

You can connect to element events with the addEventListener function, but it is also possible to connect a struct method with an on-<eventName> attribute.

<button on-click="ButtonClicked"></button>

func (e *MyElem) ButtonClicked(event *golymer.Event) {
	print("the button was clicked!")
}

custom events

golymer adds the DispatchEvent method so you can fire your own events.

event := golymer.NewEvent(
	"my-event",
	map[string]interface{}{
		"detail": map[string]interface{}{
			"data": "foo",
		},
		"bubbles": true,
	},
)
elem.DispatchEvent(event)

and these events can also be connected to:

<my-second-element on-my-event="MyCustomHandler"></my-second-element>

observers

When a field's value changes, you can have an observer that receives the old and new values. It just has to be a method named Observer<FieldName>. E.g.:

func (e *MyElem) ObserverText(oldValue, newValue string) {
	print("Text field changed from", oldValue, "to", newValue)
}

children

golymer scans the template and collects the id of every element in it. These ids are used to map the children of the custom element, which can then be accessed from the Children map (map[string]*js.Object). The id attribute cannot be data-bound (its value must be constant).

const myTemplate = `<h1 id="heading">Heading</h1><my-second-element id="second"></my-second-element><button on-click="Click">click me</button>`

func (e *MyElem) Click(event *golymer.Event) {
	secondElem := e.Children["second"].Interface().(*MySecondElement)
	secondElem.DoSomething()
}

type assertion

It is possible to type assert the node object to your custom struct type, either by selecting the node from the DOM directly

myElem := js.Global.Get("document").Call("getElementById", "my-elem").Interface().(*MyElem)

or from the Children map

secondElem:= e.Children["second"].Interface().(*MySecondElement)

Latency matters


A month ago, danluu wrote about terminal and shell performance. In that post, he measured the latency between a key being pressed and the corresponding character appearing in the terminal. Across terminals, median latencies ranged between 5 and 45 milliseconds, with the 99.9th percentile going as high as 110 ms for some terminals. Now I can see that more than 100 milliseconds is going to be noticeable, but I was certainly left wondering: Can I really perceive a difference between 5 ms latency and 45 ms latency?

Turns out that I can.

Where I came from

Like probably all shell configurations, mine has a prompt, that is: It displays some contextual data when I can enter the next command. Most prompts include the username, hostname, and the current working directory. Mine can also include the current Git branch and commit, the current kubectl context, and which OpenStack credentials are currently loaded. It knows some quite unique tricks. For example, if the current working directory is not accessible anymore, it highlights the inaccessible path elements. Or, inside a Git repository, it highlights the path elements inside the repo.

prettyprompt screenshot

Since that’s quite a lot of logic, I delegated the rendering of the entire prompt to a custom program. That program used to be a Python 2 script, but since I’m not nearly as fluent in Python as I used to be, I didn’t dare touch it anymore. I therefore decided to rewrite it into a Go program. The feature set didn’t change, but here’s a thing that did change:

$ time ./bin/prettyprompt.py > /dev/null; time prettyprompt > /dev/null
./bin/prettyprompt.py > /dev/null  0,05s user 0,00s system 98% cpu 0,057 total
prettyprompt > /dev/null  0,00s user 0,00s system 92% cpu 0,003 total

The new prompt renders in 3 ms whereas the old one took 57 ms. That has nothing to do with sloppy programming; that's just how long it takes to start up the Python interpreter, as can easily be observed by running a Python interpreter without actually doing anything:

$ time python2 -c 0 > /dev/null
python2 -c 0 > /dev/null  0,02s user 0,01s system 52% cpu 0,050 total

(By the way, all these measurements are on hot caches. The first execution of python2 takes more than twice as long.)

The Go program, on the other hand, does not need to start a runtime. It’s probably short-lived enough to never even garbage-collect.

The “Wow effect”… I guess?

I did these measurements while I was still writing the Go program, just for fun. But only after switching to the new prompt did I realize how much snappier my terminal feels because of this change. There was always a short gap between seeing the output of one command and being able to enter the next, but I never really understood that this was due to my slow prompt renderer, and not to the behavior of the shell, the terminal, the OS scheduler, or any other entity involved in the process.

When I press Enter in my shell now, the next prompt just appears instantaneously, without any perceivable latency between the keypress and the prompt being rendered. This feels so magical, I cannot even put it into words. It’s as if this were a new notebook, not the same one that I’ve been using for five years now.

The point is, latency really is important for how an application or a system feels. I will definitely care more about responsiveness of my programs in the future after this experience.

Update

This has been discussed on Hacker News, and led to a semi-related followup of mine.

Created: Sun, 20 Aug 2017 19:02:45 UTC
Last edited: Sun, 20 Aug 2017 22:05:41 UTC

Supernova’s messy birth casts doubt on reliability of astronomical yardstick


NASA

This artist's conception shows a white dwarf (left) siphoning material from a larger star, a process that will eventually cause a stellar explosion.

The exploding stars known as type Ia supernovae are so consistently bright that astronomers refer to them as standard candles — beacons that are used to measure vast cosmological distances. But these cosmic mileposts may not be so uniform. A new study finds evidence that the supernovae can arise by two different processes, adding to lingering suspicions that standard candles aren't so standard after all.

The findings, which have been posted on the arXiv preprint server and accepted for publication in the Astrophysical Journal, could help astronomers to calibrate measurements of the Universe’s expansion. Tracking type Ia supernovae showed that the Universe is expanding at an ever-increasing rate, and helped to prove the existence of dark energy — advances that secured the 2011 Nobel Prize in Physics.

The fact that scientists don’t fully understand these cosmological tools is embarrassing, says the latest study’s lead author, Griffin Hosseinzadeh, an astronomer at the University of California, Santa Barbara. “One of the greatest discoveries of the century is based on these things and we don’t even know what they are, really.”

It's not for lack of trying: astronomers have put forth a range of hypotheses to explain how these stellar explosions arise. Scientists once thought that the supernovae were built uniformly, like fireworks in a cosmic assembly line. That changed in the 1990s, when astronomers noticed that some of the supernovae were dimmer than the others.

Astronomers can correct for the difference because the brightest supernovae seem to fade more slowly than their dimmer kin. Still, the fact that each ‘standard candle’ looks slightly different from the next is cause for concern. “When you’re trying to measure the expansion rate of the Universe to 1%, these subtle differences make you worry that maybe type Ia supernovae are throwing you off,” says Peter Garnavich, an astronomer at the University of Notre Dame in Indiana.

Burning bright

At least one thing seems clear, however. Astronomers remain convinced that a white dwarf, an Earth-sized remnant of a Sun-like star, plays a central part in the formation of each type Ia supernova. But they’re not sure what pushes white dwarfs over the edge, because these stars are too stable to explode on their own. That suggests that a companion star — another white dwarf, a star like the Sun or even a giant star — helps to set each supernova in motion.

If this companion star is large, the idea goes, then the white dwarf would siphon material from it. Eventually, it would accumulate so much extra mass that the pressure would ignite a runaway thermonuclear explosion. But if the companion star is small — perhaps a second white dwarf — the two celestial bodies would spiral towards each other and merge together before exploding.

Researchers have been searching for evidence of these processes by hunting for newly formed supernovae. That’s because a supernova created in the first scenario would leave evidence behind: material travelling out from the stellar explosion would light up as it hit the slightly smaller, but still intact, companion star. But a supernova formed by the merger of a white dwarf and a small companion would obliterate all traces of the stars involved in its birth.

Astronomers had only seen evidence for the second scenario — until now. Hosseinzadeh and his team’s paper is the first to report a supernova formed by a white dwarf leaching material from a massive companion star. The results add weight to the idea that type Ia supernovae can form through two different stellar assembly lines.

On the hunt

The first hint of the discovery came on 10 March, when a supernova appeared on the outskirts of the spiral galaxy NGC 5643, 16.9 million parsecs from Earth (55 million light years). David Sand, an astronomer at the University of Arizona in Tucson and a co-author of the study, found it as he pored over data from the DLT40 supernova search, which scans roughly 500 galaxies every night.

Sand quickly took another image to verify that what he had seen was a stellar explosion, not an unknown asteroid. Within a few minutes, he knew it was time to alert the Las Cumbres Observatory — a network of 18 telescopes around the world that allows astronomers to monitor objects continuously as they move across the sky.

Hosseinzadeh, Sand and their colleagues observed the supernova every 5 hours for roughly 6 days and then once a night for 40 days after — allowing them to map its changing luminosity. During this period, they saw a temporary jump in brightness caused by material ejected from the supernova striking the companion star.

“This is the best evidence yet for a shock due to a companion star in a normal type Ia supernova,” Garnavich says.

But the discovery is just the beginning of unravelling the mystery behind these not-so-standard candles. To better pin down their measurements of the cosmos, astronomers will keep searching for more of these dim young supernovae.

“It’s like having a tool that you know how to use, but you don’t know how it works,” Hosseinzadeh says. “Understanding the physics of the tool that you're using seems better than just using it blindly.”

Fifty Years Later, Remembering Sci-Fi Pioneer Hugo Gernsback


When expat Luxembourger Hugo Gernsback arrived in the United States in 1904, even he could not have predicted the impact his lush imagination and storytelling drive would have on the global literary landscape.

Young, haughty and dressed to the nines, Gernsback, who had received a technical education in Europe, soon established himself not only as a New York electronics salesman and tinkerer, but also as a prolific, forward-thinking publisher with a knack for blending science and style.

Modern Electrics, his first magazine, provided readers with richly illustrated analyses of technologies both current and speculative. Always sure to include a prominent byline for himself, Gernsback delved into the intricacies of subjects like radio wave communication, fixating without fail on untapped potential and unrealized possibilities.

Owing to their historical import, many of Gernsback's publications are now preserved at the Smithsonian Libraries on microfiche and in print, 50 years after his death on August 19, 1967. Enduring legacy was not on the young man's mind in his early days, though—his Modern Electrics efforts were quick and dirty, hurriedly written and mass-printed on flimsy, dirt-cheap paper.

With a hungry readership whose size he did not hesitate to boast of, Gernsback found himself constantly under the gun. Running low on Modern Electrics content one April evening in 1911, the 26-year-old science junkie made a fateful decision: he decided to whip up a piece of narrative fiction.

Centered on the exploits of a swashbuckling astronaut called Ralph 124C (“one to foresee”), the pulpy tale intermixed over-the-top action—complete with a damsel in distress—with frequent, elaborate explanations of latter-day inventions.

To Gernsback’s surprise, his several-page filler story—which ended on a moment of high suspense—was a smash hit among readers. His audience wanted more, and Gernsback was all too happy to oblige.

In the next 11 issues of Modern Electrics, he parceled out the adventure in serial fashion, ultimately creating enough content for a novel, which he published in 1925.

Nothing gave Hugo Gernsback more joy than sharing his visions of the future with others, and with the success of his flamboyant “Romance of the Year 2660,” he realized that he had a genuine audience.

Eager to deliver exciting and prophetic content to his followers, Gernsback founded Amazing Stories in 1926, conceptualizing it as the perfect outré complement to the more rigorous material of Modern Electrics and the similarly themed Electrical Experimenter (first published in 1913). The purview of the new publication was to be “scientifiction”—wild tales rife with speculative science.

In an early issue of Amazing Stories, Gernsback laid out his foundational mission statement. "Having made scientifiction a hobby since I was 8 years old, I probably know as much about it as any one," he wrote, "and in the long run experience will teach just what type of stories is acclaimed by the vast majority." Within the text of the editorial note, Gernsback exhorted himself to "Give the readers the very best type of stories that you can get hold of," while recognizing fully that this would be a "pioneer job."

Gernsback wasn't the first to pen a science fiction story, granted—the inaugural issue of Amazing Stories featured reprints of H.G. Wells and Jules Verne, and indeed there are far older works that could plausibly fit the description. What he did do was put a name to it, and collect under one roof the output of disparate authors in search of unifying legitimacy.

In the eyes of prominent present-day sci-fi critic Gary Westfahl, this was a heroic achievement unto itself. "I came to recognize that Gernsback had effectively created the genre of science fiction," Westfahl recalls in his book Hugo Gernsback and the Century of Science Fiction. Gernsback, he wrote, "had an impact on all works of science fiction published since 1926, regardless of whether he played any direct role in their publication."

Though Gernsback’s writing is at times stilted and dry, despite his best intentions, his laser focus on imagining and describing the technologies of tomorrow—sometimes with uncanny accuracy—paved the way for all manner of A-list sci-fi successors.

Isaac Asimov has termed Gernsback the “father of science fiction,” without whose work he says his own career could never have taken off. Ray Bradbury has stated that “Gernsback made us fall in love with the future.”

Hugo Gernsback was by no means a man without enemies—his ceaseless mismanagement of contributors’ money made sure of that. Nor is he wholly free from controversy—a column of his detailing a theoretical skin-whitening device is especially likely to raise eyebrows.

But while acknowledging such character flaws is, of course, necessary, it is equally so to highlight the passion, vitality, and vision of an individual committed to disseminating to his readers the wonder of scientific advancement.

It was for these traits that Gernsback was chosen as the eponym of science-fiction’s Hugo award, and it is for these traits that he is worth remembering today, 50 years after his passing. Between television, Skype and wireless phone chargers, the great prognosticator would find our modern world pleasingly familiar.
