Channel: Hacker News

The Fifth Protocol (2014)


“Wait a minute… Make up your mind. This Snow Crash thing—is it a virus, a drug, or a religion?”

Juanita shrugs. “What’s the difference?”

– Snow Crash

Cryptocurrencies will create a fifth protocol layer powering the next generation of the Internet.

Humans don’t *need* math-based cryptocurrencies when dealing with other humans. We walk slowly, talk slowly, and buy big things. Credit cards, cash, wires, checks – the world seems fine.

Machines, on the other hand, are far chattier and quicker to exchange information. The Four Layers of the Internet Protocol Suite are constantly communicating. The Link Layer puts packets on a wire. The Internet Layer routes them across networks. The Transport Layer persists communication across a given conversation. And the Application Layer delivers entire documents and applications.

This chatty, anonymous network treats resources as “too cheap to meter.” It’s a giant grid that transfers data but doesn’t transfer value. DDoS attacks, email spam, and flooded VPNs result. Names and identities are controlled by overlords – ICANN, DNS Servers, Facebook, Twitter, and Certificate “Authorities.”

Where’s the protocol layer for exchanging value, not just data? Where’s the distributed, anonymous, permission-less system for chatty machines to allocate their scarce resources? Where is the “virtual money” to create this “virtual economy?”

Cryptocurrencies like Bitcoin are already trustless – any machine can accept payment from any other, securely. They are (nearly) free. They are global – no central bank required, and any machine can speak the language. And they’re one to two steps from being quick, anonymous, and capable of authentication.

Suppose we had a QuickCoin, which cleared transactions nearly instantly, anonymously, and for infinitesimal mining fees. It could use the Bitcoin blockchain for security or for easy trading in and out. SMTP would demand QuickCoin to weed out spam. Routers would exchange QuickCoin to shut down DDoS attacks. Tor Gateways would demand QuickCoin to anonymously route traffic. Machines would bypass centralized DNS and OAuth servers, using Coins to establish ownership.
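As a toy illustration of the idea (QuickCoin is hypothetical, and this sketch skips everything hard: signatures, clearing, consensus), a mail server that only queues messages arriving with sufficient payment might look like:

```python
# Hypothetical sketch of a payment-metered mail queue. "QuickCoin" does not
# exist; a real system would verify a signed, cleared transaction.
PRICE_PER_MESSAGE = 1  # assumed per-message fee, in QuickCoin

class Wallet:
    def __init__(self, balance):
        self.balance = balance

    def pay(self, amount):
        if self.balance < amount:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return {"amount": amount}  # stand-in for a signed transaction

def accept_message(message, payment):
    # Spam becomes uneconomical: every message costs the sender real value.
    return payment.get("amount", 0) >= PRICE_PER_MESSAGE

sender = Wallet(balance=10)
print(accept_message("hello", sender.pay(1)))   # True
print(accept_message("spam", {"amount": 0}))    # False
```

The point of the sketch is the shape of the protocol, not the plumbing: the scarce resource (queue space) is allocated to whoever attaches value to the request.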

Why stop at one Coin? Let’s posit a dozen new Appcoins. Using application-specific coins rewards the open-source developers with a pre-mined quantity. A TorCoin can be paid to its developers and gateways and by Tor users, achieving consensus via proof-of-bandwidth. We can allocate any scarce network resource this way – i.e., BoxCoin for Storage, CacheCoin for Caching, etc.

Let’s move on to other networks. Can a completely distributed grid of small generators trade power with each other, using a decentralized and trustless cryptocurrency? Can a traffic jam of self-driving cars clear itself as the computerized vehicles bid for right of way? Can a mass of people crossing a street take priority over a single car waiting at the traffic light, as their phones vote, trustlessly and reliably, for their presence? Can we efficiently route networks of assets like water and power, and liabilities like pollutants and sewage, across a distributed grid? Can we trade stocks and financial assets with no brokers, custodians, or agents?

Cryptocurrencies are electronic cash, and as such, will be used by electronic agents to exchange value, verify contracts, and track identity and reputation. All of a sudden, the computing resources spent by Bitcoin miners don’t seem wasted – they seem efficient, given that they can be used for congestion control and routing of other network resources.

Cryptocurrencies are an emergent property of the Internet – almost a fifth protocol in the Internet suite. If Satoshi Nakamoto did not exist, it would still be necessary to invent them. Someday, they will be used by the machines in our network, on our desk, in our garage, and in our pocket to exchange value and achieve consensus at blinding speeds, anonymously, and at minimal cost.

When that day arrives, large distributed networks that we rely upon will change. Starting with the Internet, they will become de-centralized market economies, far more intelligent than they are today. Just as human brains co-evolved with our ability to trade and exchange goods with people who weren’t related to us, so the network will become more intelligent as it learns to trade currency and contracts with unrelated nodes.

Eventually, there will be no functioning Internet or Internet of Things at the protocol layer without deep cryptocurrency integration. Turning off this fifth protocol will be impossible. Cryptocurrencies also remain media of exchange and stores of value. Nation states that are used to imposing capital controls will face a quandary – ban cryptocurrencies, and live in the technology dustbin. Enable them, and this virus, this religion, this protocol – will enable the free flow of money and language, along with packets, around the globe.


ReactOS 0.4.5 Released


The ReactOS Project is pleased to release version 0.4.5 as a continuation of its three month cadence. Beyond the usual range of bug fixes and syncs with external dependencies, a fair amount of effort has gone into the graphical subsystem. Thanks to the work of Katayama Hirofumi and Mark Jansen, ReactOS now better serves requests for fonts and font metrics, leading to improved rendering of applications and a more pleasant user experience. Your continued donations have also funded a contract for Giannis Adamopoulos to fix every last quirk in our theming components. The merits of this work can be seen in ReactOS 0.4.5, which comes with a smoother themed user interface, and future releases promise even more improvements. In another funded effort, Hermès Bélusca-Maïto has gotten MS Office 2010 to run under ReactOS, another application from the list of most voted apps. Don’t forget to install our custom Samba package from the Application Manager if you want to try it out for yourself.
On top of this, there have been several major fixes in the kernel and drivers that should lead to stability improvements on real hardware and on long-running machines.

ReactOS 0.4.5 with Lautus Theme enabled

Excel and Word 2010 on ReactOS

As usual, the general notes, tests, and changelog for the release can be found at their respective links. A less technical community changelog for ReactOS 0.4.5 is also available.
ISO images and prepared VMs for testing can be downloaded from the Download page.

Exploring Flutter for Cross-Platform Mobile Development


At my work, we have a weekly meeting for mobile developers where a presentation is given by a different developer each week. When my turn came around recently, I decided to spend some time researching Flutter, which is a cross-platform framework from Google. Being an Android developer, I’ve always been interested in cross-platform frameworks and the potential benefits they could provide, but I don’t often hear good things about them from coworkers and others in the Android development community. When I explored Flutter, I found a framework that is surprisingly good, albeit early in development.

To be clear, Flutter has been in a tech preview phase for a while and only hit alpha on May 15, 2017, or 11 days ago at the time of this writing. Because of this, it’s important to understand that Flutter still has a long way to go in the way of libraries and general improvements to the framework. With that being said, now that Flutter is in alpha, I’m sure the Flutter developers want people to try the framework and report any bugs they come across or suggestions they have.

Easy Setup

Setting up a development environment for Flutter was incredibly straight-forward for me. The installation instructions are easy to follow, and the Flutter CLI includes a doctor command, which checks your system for Flutter dependencies and tells you exactly what you need to do to install missing dependencies.

IntelliJ IDEA seems to be the primary editor for Flutter apps, as the Flutter team maintains a plugin for it. My experience with Flutter development in IntelliJ has been very good. The plugin worked flawlessly, providing autocompletion and code formatting, and giving me an easy way to run Flutter apps on both iOS and Android emulators and physical devices.

Dart

Before actually using Flutter, I had to take a little detour and learn about the Dart programming language. Flutter apps are written in Dart, which gets ahead-of-time (AOT) compiled to machine code, meaning no interpreter is involved when the app runs on a device.

Initially, I wasn’t really sure what to think about the Dart language. It seems like a combination of Java and JavaScript (ECMAScript), which makes sense because Google was originally pushing it as a web language that would eventually have support in Chrome. Dart has a lot of features that you would see in ES2015 to ES2017, as well as optional typing, abstract classes and interfaces, generics, isolates (similar to green threads?), and more.

After spending a little bit of time writing some Dart code, I found it to be a pretty easy language to use. It’s expressive, faster to write and understand than Java, and it just stays out of your way.

Rendering and Performance

One of the more interesting facets of Flutter is that it doesn’t use the built-in UI widgets from either mobile platform. Instead, Flutter uses a 2D graphics library called Skia as its rendering engine, which draws Flutter UI widgets directly to the screen, bypassing the native UI.

By rendering the UI itself and using some of its own special sauce during the layout, painting, and compositing steps, Flutter apps are able to achieve a consistent 60 frames per second. I found the below video to be pretty informative about how Flutter is able to achieve its performance goals.

Though Flutter doesn’t use the native UI widgets, it’s a cross-platform framework and has widgets for both Android and iOS. From what I could tell, being that Flutter is a Google invention, there seemed to be more Material (Android) widgets than “Cupertino” (iOS) widgets, but it does look like they’re actively developing new Cupertino widgets.

When I tried a couple of Flutter apps on both platforms, they were mostly great. On Android, I found that the performance was better than apps built with the native Android framework. If there were dropped frames anywhere, I sure didn’t see them. On iOS, the biggest thing that stuck out to me was the scrolling. It didn’t quite feel right because of the physics, but I’m sure that would just require a little tweaking by the Flutter team.

There were two talks about Flutter at Google I/O 2017 a few days ago, and the talk below shows how easy it is to make an app for both platforms.

In Flutter, everything is a “widget” and widgets are composable. The way I understand this concept is that a widget can primarily do a couple of things:

  • Create a piece of UI, like a screen, button, or a paragraph of text.
  • Wrap another widget to modify how it looks or works, such as
    • applying styles, like colors, fonts, etc.
    • creating padding, margin, or layouts.
    • defining input or gesture recognition.

There are two main types of widgets: StatelessWidget and StatefulWidget. A StatelessWidget, just as it sounds, doesn’t have any state and defines a build method that describes how the widget appears.

class RedSquare extends StatelessWidget {
  final Widget child;

  RedSquare({Key key, this.child}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return new Container(
      decoration: new BoxDecoration(
        color: new Color(0xFFFF0000),
        shape: BoxShape.rectangle,
      ),
      child: child,
    );
  }
}

A StatefulWidget, on the other hand, keeps track of its state through a separate class that extends the State class. The State class is actually what contains the build method for a StatefulWidget, basically describing how the state should appear.

class FizzBuzzScreen extends StatefulWidget {
  FizzBuzzScreen({Key key}) : super(key: key);

  @override
  _FizzBuzzScreenState createState() => new _FizzBuzzScreenState();
}

class _FizzBuzzScreenState extends State<FizzBuzzScreen> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  String _getFizzBuzz() {
    String fizzBuzz = '';
    if (_counter % 3 == 0) {
      fizzBuzz += 'Fizz';
    }
    if (_counter % 5 == 0) {
      fizzBuzz += 'Buzz';
    }
    return fizzBuzz.length == 0 ? _counter.toString() : fizzBuzz;
  }

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      body: new Center(
        child: new Text('($_counter): ${_getFizzBuzz()}'),
      ),
      floatingActionButton: new FloatingActionButton(
        onPressed: _incrementCounter,
        child: new Icon(Icons.add),
      ),
    );
  }
}

In the example above, we have a screen that displays text for a FizzBuzz counter, using an interpolated string that calls the _getFizzBuzz method to determine what text to show, and a Floating Action Button (FAB) that calls the _incrementCounter method when pressed. The key to getting the widget, a screen in this case, to update for the user is found in the _incrementCounter method. Inside that method, we call setState, which is defined on the State class. We pass an anonymous function to setState and mutate the state as needed within that function. setState will then take care of rebuilding your widget, which will then display the updates to the user.
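The display rule itself is plain FizzBuzz and is independent of Flutter; as a quick sanity check, the same logic _getFizzBuzz implements can be sketched outside Dart, here in Python:

```python
def fizz_buzz(counter):
    # Mirrors _getFizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
    # of both -> "FizzBuzz", otherwise the number itself as a string.
    result = ""
    if counter % 3 == 0:
        result += "Fizz"
    if counter % 5 == 0:
        result += "Buzz"
    return result if result else str(counter)

print([fizz_buzz(n) for n in range(1, 6)])  # ['1', '2', 'Fizz', '4', 'Buzz']
```

Note that with the Dart widget's initial state of _counter = 0, the screen shows "FizzBuzz", since 0 is a multiple of both 3 and 5.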

If you’re familiar with the concept of Flux, you might recognize that this is somewhat similar. You have a one-way data flow, and the UI reacts to changes in the state. I found it to be very refreshing that Flutter chose and enforces this sort of pattern.

Hot Reload

Long build times are something that mobile developers experience on a daily basis. One of the great things about Flutter is the inclusion of hot reload. Hot reload allows you to update the source of the app on the fly without having to restart it.

In my experience, hot reload for a simple app with a few screens was incredibly fast. The time between clicking the hot reload button in IntelliJ and seeing the UI update in my emulators was about half a second (according to the Flutter logs and my eyes). The experience is very similar to what you get with a common tool front-end developers use called LiveReload.

Even better, hot reload doesn’t clear the state of your widgets. If you change a piece of UI and use hot reload, all of the information that the widget previously had is still there. You don’t have to start from the home screen of your app every time you update the UI!

Flutter's Hot Reload

(Image from the Getting Started Guide)

My Verdict

I was very surprised to find such a solid framework in Flutter. Overall, Flutter looks like a very promising cross-platform framework and I’m excited to see where it goes. I will absolutely be keeping up with future developments of the framework and I might even start contributing libraries! I definitely encourage you to try out Flutter for yourself.

Have you used Flutter? What did you think? Send me a tweet!

Introduction to ARM Assembly Basics


Welcome to this tutorial series on ARM assembly basics. This is the preparation for the follow-up tutorial series on ARM exploit development (not published yet). Before we can dive into creating ARM shellcode and building ROP chains, we need to cover some ARM Assembly basics first.

The following topics will be covered step by step:

ARM Assembly Basics Tutorial Series:
Part 1: Introduction to ARM Assembly
Part 2: Data Types and Registers
Part 3: ARM Instruction Set
Part 4: Memory Instructions: Loading and Storing Data
Part 5: Load and Store Multiple
Part 6: Conditional Execution and Branching
Part 7: Stack and Functions

To follow along with the examples, you will need an ARM-based lab environment. If you don’t have an ARM device (like a Raspberry Pi), you can set up your own lab environment in a Virtual Machine using QEMU and the Raspberry Pi distro by following this tutorial. If you are not familiar with basic debugging with GDB, you can get the basics in this tutorial.

Why ARM?

This tutorial is generally for people who want to learn the basics of ARM assembly, especially those of you who are interested in exploit writing on the ARM platform. You might have already noticed that ARM processors are everywhere around you. When I look around, I can count far more devices featuring an ARM processor in my house than Intel processors. This includes phones, routers, and not to forget the IoT devices whose sales seem to be exploding these days. The ARM processor has thus become one of the most widespread CPU cores in the world, which brings us to the fact that, like PCs, IoT devices are susceptible to improper input validation abuse such as buffer overflows. Given the widespread usage of ARM-based devices and the potential for misuse, attacks on these devices have become much more common.

Yet, we have more experts specialized in x86 security research than in ARM, although ARM assembly language is perhaps the easiest assembly language in widespread use. So, why aren’t more people focusing on ARM? Perhaps because there are more learning resources out there covering exploitation on Intel than on ARM. Just think about the great tutorials on Intel x86 exploit writing by the Corelan Team – guidelines like these help people interested in this specific area to get practical knowledge and the inspiration to learn beyond what is covered in those tutorials. If you are interested in x86 exploit writing, the Corelan tutorials are your perfect starting point. In this tutorial series, we will focus on assembly basics and exploit writing on ARM.

ARM processor vs. Intel processor

There are many differences between Intel and ARM, but the main difference is the instruction set. Intel is a CISC (Complex Instruction Set Computing) processor that has a larger and more feature-rich instruction set and allows many complex instructions to access memory. It therefore has more operations and addressing modes, but fewer registers than ARM. CISC processors are mainly used in normal PCs, workstations, and servers.

ARM is a RISC (Reduced Instruction Set Computing) processor and therefore has a simplified instruction set (100 instructions or less) and more general purpose registers than CISC. Unlike Intel, ARM uses instructions that operate only on registers and uses a Load/Store memory model for memory access, which means that only Load/Store instructions can access memory. This means that incrementing a 32-bit value at a particular memory address on ARM would require three types of instructions (load, increment and store): first load the value at a particular address into a register, increment it within the register, and store it back to memory from the register.
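As a sketch (the register choice is arbitrary), incrementing the 32-bit value at the address held in R0 looks like this in ARM assembly:

```asm
LDR R1, [R0]      @ load the value at the address in R0 into a register
ADD R1, R1, #1    @ increment it within the register
STR R1, [R0]      @ store it back to memory
```

On a CISC machine like x86, the same effect is a single instruction (e.g. an increment with a memory operand); on ARM, memory is only ever touched by the load and store.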

The reduced instruction set has its advantages and disadvantages. One of the advantages is that instructions can be executed more quickly, potentially allowing for greater speed (RISC systems shorten execution time by reducing the clock cycles per instruction). The downside is that fewer instructions mean a greater emphasis on the efficient writing of software with the limited instructions that are available. Also important to note is that ARM has two modes, ARM mode and Thumb mode. Thumb mode is intended primarily to increase code density by using 16-bit instead of 32-bit instructions.

More differences between ARM and x86 are:

  • In ARM, most instructions can be used for conditional execution.
  • The Intel x86 and x86-64 series of processors use the little-endian format.
  • The ARM architecture was little-endian before version 3. Since then, ARM processors have been bi-endian, featuring a setting that allows for switchable endianness.

There are differences not only between Intel and ARM, but also between different ARM versions themselves. This tutorial series is intended to be as generic as possible so that you get a general understanding of how ARM works. Once you understand the fundamentals, it’s easy to learn the nuances of your chosen target ARM version. The examples in this tutorial were created on an ARMv6 (Raspberry Pi 1). Some of the differences include:

  • Registers on ARMv6 and ARMv7 start with the letter R (R0, R1, etc), while ARMv8 registers start with the letter X (X0, X1, etc).
  • The number of general purpose registers may also vary; however, in most cases only 16 are accessible in User Mode.

The naming of the different ARM versions might also be confusing:

ARM family    ARM architecture
ARM7          ARM v4
ARM9          ARM v5
ARM11         ARM v6
Cortex-A      ARM v7-A
Cortex-R      ARM v7-R
Cortex-M      ARM v7-M

Before we can start diving into ARM exploit development we first need to understand the basics of Assembly language programming, which requires a little background knowledge before you can start to appreciate it. But why do we even need ARM Assembly, isn’t it enough to write our exploits in a “normal” programming / scripting language? It is not, if we want to be able to do Reverse Engineering and understand the program flow of ARM binaries, build our own ARM shellcode, craft ARM ROP chains, and debug ARM applications.

You don’t need to know every little detail of the Assembly language to be able to do Reverse Engineering and exploit development, yet some of it is required for understanding the bigger picture. The fundamentals will be covered in this tutorial series. If you want to learn more you can visit the links listed at the end of this chapter.

So what exactly is Assembly language? Assembly language is just a thin syntax layer on top of machine code: instructions encoded in binary representations, which is what our computer understands. So why don’t we just write machine code instead? Well, that would be a pain in the ass. For this reason, we will write assembly, ARM assembly, which is much easier for humans to understand. Our computer can’t run assembly code itself, because it needs machine code. The tool we will use to assemble the assembly code into machine code is the GNU Assembler from the GNU Binutils project, named as, which works with source files having the *.s extension.

Once you have written your assembly file with the extension *.s, you need to assemble it with as and link it with ld:

$ as program.s -o program.o
$ ld program.o -o program
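For example, a minimal ARM Linux program (a sketch, assuming the EABI exit system call convention) could be saved as program.s and built with the two commands above:

```asm
.global _start

_start:
    MOV R0, #0      @ exit status 0
    MOV R7, #1      @ syscall number for exit on ARM Linux (EABI)
    SVC #0          @ trap into the kernel
```

Running the resulting binary does nothing visible; it simply exits with status 0, which you can check with echo $?.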

Let’s start at the very bottom and work our way up to the assembly language. At the lowest level, we have our electrical signals on our circuit. Signals are formed by switching the electrical voltage to one of two levels, say 0 volts (‘off’) or 5 volts (‘on’). Because just by looking we can’t easily tell what voltage the circuit is at, we choose to write patterns of on/off voltages using visual representations, the digits 0 and 1, to not only represent the idea of an absence or presence of a signal, but also because 0 and 1 are digits of the binary system. We then group the sequence of 0 and 1 to form a machine code instruction which is the smallest working unit of a computer processor. Here is an example of a machine language instruction:

1110 0001 1010 0000 0010 0000 0000 0001

So far so good, but we can’t remember what each of these patterns (of 0 and 1) mean. For this reason, we use so-called mnemonics, abbreviations that help us remember these binary patterns, where each machine code instruction is given a name. These mnemonics often consist of three letters, but this is not obligatory. We can write a program using these mnemonics as instructions. This program is called an Assembly language program, and the set of mnemonics that is used to represent a computer’s machine code is called the Assembly language of that computer. Therefore, Assembly language is the lowest level used by humans to program a computer. The operands of an instruction come after the mnemonic(s). Here is an example:

MOV R2, R1
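In fact, the bit pattern shown earlier and this mnemonic are the same instruction: the 32-bit word 1110 0001 1010 0000 0010 0000 0000 0001 (0xE1A02001) encodes MOV R2, R1. A short Python sketch pulls out the relevant bit fields of an ARM data-processing instruction (field positions per the ARM data-processing encoding; the shifter-operand details are ignored here):

```python
# Decode the machine-code word from the earlier example.
word = int("11100001101000000010000000000001", 2)

cond   = word >> 28          # 0b1110 = "always execute"
opcode = (word >> 21) & 0xF  # 0b1101 = MOV
rd     = (word >> 12) & 0xF  # destination register: R2
rm     = word & 0xF          # source register: R1

print(hex(word))                 # 0xe1a02001
print(opcode == 0b1101, rd, rm)  # True 2 1
```

The assembler performs exactly this mapping in reverse: from the mnemonic and operands to the encoded word.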

Now that we know that an assembly program is made up of textual information called mnemonics, we need to get it converted into machine code. As mentioned above, in the case of ARM assembly, the GNU Binutils project supplies us with a tool called as. The process of using an assembler like as to convert from (ARM) assembly language to (ARM) machine code is called assembling.

In summary, we learned that computers understand (respond to) the presence or absence of voltages (signals) and that we can represent multiple signals in a sequence of 0s and 1s (bits). We can use machine code (sequences of signals) to cause the computer to respond in some well-defined way. Because we can’t remember what all these sequences mean, we give them abbreviations – mnemonics, and use them to represent instructions. This set of mnemonics is the Assembly language of the computer and we use a program called Assembler to convert code from mnemonic representation to the computer-readable machine code, in the same way a compiler does for high-level languages.

A Practical Guide to Linux Commands, Editors, and Shell Programming [pdf]

Raspberry Pi VPN Server: Build Your Own Virtual Private Network


Raspberry Pi VPN server


In this tutorial, I will be going through the steps of how to set up a Raspberry Pi VPN server using the OpenVPN software. I will also cover the things you must do to ensure that your connection is as secure as possible, such as setting up encryption keys.

This can be a bit of a long process, but it is a relatively simple tutorial to follow, and shouldn’t require any extra interaction once it has been configured.


Using a Raspberry Pi is a cheap way of setting up a virtual private network (VPN) that can stay online 24/7 without consuming a large amount of power. It’s small and powerful enough to handle a few connections at a time, making it great for private use at home.

VPNs are an incredibly useful network tool that allows you to encrypt and secure your internet traffic even when you are using public Wi-Fi.

As an added bonus, you can also use it to connect to your own computer and access your home network. This allows your other devices that are located outside your local network to act as if they were on the local network of the VPN server. For example, if you had a network attached storage server that you wanted to access while away, then a VPN server would be extremely handy in providing a secure way to access it.

Equipment List

Below are all the bits and pieces that I used for this Raspberry Pi VPN server tutorial; there is nothing super special that you will need to be able to complete this.

Recommended:

Raspberry Pi 2 or 3

Micro SD Card, or an SD card if you’re using an old version of the Pi.

Ethernet cord or Wi-Fi dongle (the Pi 3 has Wi-Fi built in)

Optional:

Raspberry Pi Case

USB Keyboard

USB Mouse

Getting prepared for your VPN server

Before we get started with setting up the Raspberry Pi VPN server, there are a few things we must go over to ensure that you are ready to set it up and use it.

Firstly, for this tutorial it’s important to know that I am using a clean version of Raspbian. If you haven’t installed it and would like to learn how then my guide on installing Raspbian is extremely handy if you’re new to all this.

For starters, make sure you actually do need a VPN before you start setting this up, as it can act as a gateway into your home network.

If you do intend on using a VPN make sure all the computers on your home network are secure, and that you aren’t sharing anything within your local network that you wouldn’t want someone gaining access to.

Preparing your VPN Server’s IP Address

It’s important to decide whether you are going to make use of a static IP address or a dynamic IP address. Setting up a VPN for a static IP address is a rather simple process and requires no extra work.

However, if you want to utilize a dynamic IP address you must make use of a dynamic DNS service.

If you choose to go down the dynamic DNS service route, then you should decide whether you want to make use of your own domain name, or a free one.

If you want to make use of your own domain name, then you can use a service like CloudFlare. If you want to make use of a free subdomain, then a service such as no-ip.org will be useful for you.

You can check out our guide on setting up your Raspberry Pi for Dynamic DNS for more information.

Remember the domain name that you set up for either Cloudflare or no-ip.org as you will need this later on in the tutorial.

Port Forward for your Raspberry Pi VPN

The third important thing that you will need to do before you start setting up your Raspberry Pi is to set up port forwarding for the OpenVPN software.

The default port you need to forward is 1194; however, we recommend forwarding a different port and using that instead, to help avoid open port scans on your home network. Remember the port you set, as you will need it later on in the tutorial. The protocol you will have to make use of for this port is UDP.

If you are unsure how to port forward on your router, we recommend looking your router up on Port Forward.

Installing the VPN Server

1. Setting up a Raspberry Pi VPN server can be quite a complicated process. Normally, you would have to install the software, generate the encryption keys, add the port to the firewall, set the Pi to keep a static IP address, and much more.

Luckily for us, there is a much easier way to set up a Raspberry Pi VPN server thanks to an install script called PiVPN, which handles all the grunt work of setting up a VPN and reduces the potential for making mistakes.

Before we get started, we should first change the password of the default pi user. This ensures that if someone managed to gain access to your VPN, they wouldn’t be able to access your Raspberry Pi easily.

passwd

2. With the password changed, we can begin the process of setting up our VPN server on the Raspberry Pi. We can begin this process by running the command below, which downloads the install script from PiVPN’s GitHub page and runs it.

Normally running a script straight from a URL is a poor idea, as it can be an easy way for someone to gain access to your Raspberry Pi and do some serious damage.

However, this is a trusted source that we have verified. If you want to check out the code yourself, just go to the location of the script.

curl -L https://install.pivpn.io | bash

3. Once you have run the above command you should be met with the following screen. To proceed to the next screen you just need to press enter.
PiVPN Install


4. The next screen explains that the script will need to set up a static IP address for the Raspberry Pi. This is so that when the Raspberry Pi is restarted for any reason, it will try to use the same IP address again.

PiVPN Static IP

5. Here we will just select <Yes> to use the current network settings as the static local IP address. If you are unhappy with the current settings, then select <No>.
PiVPN Set Static IP


6. The next warning tells you that there is a chance your router will assign the IP address to another device. Luckily, most routers also let you set the IP address as static within their own interface. For the most part, you can ignore this warning, so just select <Ok> and press enter.

PiVPN Set Static IP Conflict

7. The next screen explains that we will need to set a local user that the OVPN configurations will be created for. You can just select <Ok> and go onto the next screen.

Local Users list

8. Here we will be presented with a list of users to choose from. In this tutorial, we will just be making use of the default pi user. Once you are happy with the user you have selected, press enter.

Choose a user

9. Now you will be presented with an explanation of unattended upgrades. This feature makes Raspbian automatically download security package updates to your Raspberry Pi daily.

This helps secure your Raspberry Pi, which is incredibly important since we will be opening a port on the router. Select <Ok> to continue.
Unattended Upgrades


10. On this screen, we highly recommend selecting <Yes>. Leaving this feature switched off can pose a big security risk to your Raspberry Pi and potentially your home network.

Unattended Upgrades select


11. Now we will be asked to set the protocol that OpenVPN will run over. We will be making use of UDP; only select TCP if you know why you need it. Press enter when you are happy with your choice.

Set protocol


12. Now we will select the port OpenVPN will operate on. While you can just press enter to retain the default port of 1194, we do recommend changing it.
The reason is that if someone does a default port scan of your IP address, a non-standard port makes it much harder for them to know you have a VPN up and running.

Set Port

13. Below is the confirmation screen for the port number you set. If you are happy with the port you have chosen, select <Yes> to continue.

Confirm Port

14. Now we must choose the encryption key size. We recommend 2048-bit encryption, as it currently offers good protection without sacrificing speed.

If you are truly worried about the security of your connection, you can make use of a 4096-bit encryption key; however, key generation will take some serious time and will slow the overall connection.
Set encryption strength

15. The next screen tells us what the PiVPN script is about to do. Expect this process to take some time; it can take anywhere from a couple of minutes to an hour. Select <Ok> to proceed.

server information key

16. We now need to decide whether we want to make use of our public IP address or a dynamic DNS service such as no-ip.org.

If you have a dynamic IP address, use the arrow keys to navigate up and down and the spacebar to select the DNS entry before pressing Enter.

Utilize DNS Name or Public IP

17. If you selected DNS, you can set your DNS name here. This can be either a xxxxx.no-ip.org address or a domain name of your own that points to your IP address. You should have set this up before starting this tutorial.

Set public name of this server


18. The next step is to select a DNS provider. A DNS provider is what resolves a domain name into an IP address.
For the sake of simplicity, we will just make use of Google's public DNS servers; however, they are known for recording the data that passes through them.

Select DNS Provider


19. You have now successfully completed the installation of your Raspberry Pi VPN. While there are still a couple more things you will need to do to allow connections, you are now about 90% done.

Installation complete


20. We will now be greeted by a screen asking us to reboot the Raspberry Pi. Select <Yes> on the next two screens, as it's crucial you reboot.
Reboot

Setting up your first OpenVPN User

1. Normally, setting up a user for OpenVPN would be a painful process, as you would have to generate the individual certificates for the user. Luckily, we can do this in one single command thanks to PiVPN.

To begin adding the user, run the following command:

sudo pivpn add

On this screen, you will need to enter a name for the client. This name acts as an identifier so you can differentiate between different clients.

It will also ask you to set a password for the client. It is important to make this something secure and not easy to guess, as it protects the encryption key.

If someone can guess the password easily, it severely reduces the security of your VPN.
Pivpn add


Once you press enter, the PiVPN script will tell Easy-RSA to generate the 2048-bit RSA private key for the client, and then store the file in /home/pi/ovpns.

/home/pi/ovpns is the folder we will have to gain access to in the next few steps so we can copy the generated file to our devices.

Make sure you keep these files safe as they are your only way of accessing your VPN.

2. Now that our new “client” has been set up for OpenVPN with our passphrase, we will need to get it to the device that we intend to connect from. The easiest way to do this is to make use of SFTP from within your home network.

Make sure you have a program such as FileZilla that can handle SFTP connections installed before continuing with this tutorial.

To get started, let's log in to our Raspberry Pi over SFTP. Remember to type sftp:// in front of your Raspberry Pi's IP address. If you don't have your Pi's local address, use the command hostname -I in the terminal.

Once you have entered your IP address, Username and Password, press the quickconnect button.
SFTP Details


3. Once you have successfully logged in, we need to look for the ovpns folder, as this is where the file we need will be located. Once you have found the folder, double click on it.
SFTP ovpns
4. Now all we need to do is drag the .ovpn file you want to somewhere safe on your computer. This file contains the data that we will need to connect to the VPN so keep this file safe.

It is also the only way someone could potentially gain access to your VPN, so keeping the passphrase and the file secure is incredibly important. If someone gains access to these they could potentially cause some harm to your network.
SFTP ovpns download

5. Now that we have the .ovpn file on our device, we can use it to make a connection to our VPN.

The .ovpn file stores everything we need to make a secure connection: the address to connect to and all the encryption data required. The only thing it does not contain is your passphrase, so you will need to enter this when you connect to the VPN.

The client we are going to use is the official OpenVPN client, which you can obtain from the official OpenVPN website.

Download and install this client. On its first run it will automatically minimize to the taskbar; right click on the icon, then select “Import file…”

OpenVPN GUI

6. You will be presented with a file explorer screen. In here, go to where you saved the .ovpn file from earlier. Once you have found it, double click the file to import it into the OpenVPN client.

Select ovpn file

7. You should now be presented with a dialog telling you the file has been successfully imported into OpenVPN. Just press the OK button to proceed.
Ovpn file imported successfully


8. Now right click the OpenVPN client icon in the taskbar again, this time click “Connect”.

OpenVPN GUI 2

9. Now the OpenVPN client will attempt to read the data located in the .ovpn file. Since we have a passphrase set, it will ask you to enter the passphrase you set earlier in this tutorial.

Once you are certain you have entered the correct passphrase, press the “OK” button.
Ovpn enter password


10. The OpenVPN client will now attempt to connect to your Raspberry Pi's VPN server. If the OpenVPN icon turns solid green, you have successfully connected to your VPN.

However, if it turns yellow and fails to turn green after 60 seconds that means something is causing the connection to fail.

In most cases a failed connection is caused by a port forwarding issue; my router, for instance, has numerous issues with port forwarding. It is easiest to google your router's model number to find help with any port forwarding problems you may face. Some ISPs (Internet Service Providers) also block certain ports, so it's best to check that the port you plan on using is not being blocked by your ISP.

If you are using a dynamic DNS service, make sure that the service is being correctly updated with your latest IP address; if the IP address has changed but the DNS entry hasn't, the connection will fail.

Hopefully by now you will have a fully functional VPN that you’re able to successfully connect to.

I hope that this tutorial has shown you how to set up a Raspberry Pi VPN server and that you haven't run into any issues. It's certainly a great project for anyone who wishes to set up a cheap always-on VPN network. If you have some feedback, tips, or issues that you would like to share, then please don't hesitate to leave a comment below.


A year of digging through code yields “smoking gun” on VW, Fiat diesel cheats

Volkswagen AG Turbocharged Direct Injection (TDI) vehicles sit parked in a storage lot at San Bernardino International Airport (SBD) in San Bernardino, California, U.S., on Wednesday, April 5, 2017. Volkswagen agreed last year to buy back about 500,000 diesels that it rigged to pass US emissions tests if it can’t figure out a way to fix them. In the meantime, the company is hauling them to storage lots, such as ones at an abandoned NFL stadium outside Detroit, the Port of Baltimore and a decommissioned Air Force base in California. Photographer: Patrick T. Fallon/Bloomberg via Getty Images

Researchers from Bochum, Germany, and San Diego, California, say they’ve found the precise mechanisms that allowed diesel Volkswagens and Audis to engage or disengage emissions controls depending on whether the cars were being driven in a lab or driven under real-world conditions. As a bonus, the researchers also found previously-undisclosed code on a diesel Fiat 500 sold in Europe.

Auto manufacturers have been cheating on emissions control tests for decades, but until recently, their cheats were fairly simple. Temperature-sensing or time-delay switches could cut the emissions control system when a car was being driven under certain conditions.

These days, cars are an order of magnitude more complex, making it easier for manufacturers to hide cheats among the 100 million lines of code that make up a modern, premium-class vehicle.

In 2015, regulators realized that diesel Volkswagens and Audis were emitting several times the legal limit of nitrogen oxides (NOx) during real-world driving tests. But one problem regulators confronted was that they couldn’t point to specific code that allowed the cars to do this. They could prove the symptom (high emissions on the road), but they didn’t have concrete evidence of the cause (code that circumvented US and EU standards).

Luckily, subsequent subpoenas of e-mails between executives revealed the kind of broad-brush information that helped federal investigators secure settlements from Volkswagen and its supplier Bosch. Investigators were also able to get a plea deal with a former VW engineer.

This latest research finally offers a smoking gun. For more than a year, researchers studied 926 firmware images from the VWs and Audis identified by the EPA in 2015, and they found a potential defeat device in 406 of those firmware images. All the cars studied had Engine Control Unit (ECU) systems developed by Bosch.

Interestingly, Volkswagen may not have written any of the code that enabled its scandal, although it may have requested certain functions from Bosch. The researchers note: “We have found no evidence that automobile manufacturers write any of the code running on the ECU [Engine Control Unit]. All code we analyzed in this work was documented in documents copyrighted by Bosch and identified automakers as the intended customers.”

Discovering a hidden cheat

The researchers, led by University of California San Diego computer scientist Kirill Levchenko, faced a number of challenges in their quest to find the offending code.

Firmware images were gleaned from car-tuning forums and from an online portal maintained by Volkswagen for car repair shops. Documentation, in the form of so-called “function sheets,” was harder to come by. The function sheets were necessary to give the binary context, but the sheets are copyrighted by Bosch and generally not shared with the public. The research team ended up turning to the auto-performance tuning community again. These hard-core hobbyists and professionals share leaked function sheets so they can make aftermarket modifications to their cars.

“[T]he vehicle can switch to an operating regime favored by the manufacturer for real driving rather than the clean regime necessary to pass the emission test.”

Once the researchers were able to study the code running on the faulty diesels, they discovered that Volkswagen’s defeat devices were far more nuanced than anything found previously. Levchenko told Eurekalert that the “Volkswagen defeat device is arguably the most complex in automotive history.”

The researchers found that the cars assumed they were being tested in a lab until a sensor reading ruled out a lab test. At that point, “the vehicle can switch to an operating regime favored by the manufacturer for real driving rather than the clean regime necessary to pass the emission test,” the research paper noted (PDF).

Earlier, more basic versions of the software checked for three conditions to decide whether a lab test was being conducted. But by 2009, some firmware versions included 10 different checks for a lab test. The cars checked for things like how long and how far the car had been driven (in the US, emissions cycle tests last no more than 31 minutes and the car drives exactly 11.04 miles). Some cars also may have conducted steering wheel checks—if the steering wheel deviated more than 20° from neutral position, it was probably not being tested in the lab.

As each check ruled a lab test in or out, a different function of the Engine Control Unit—like fuel injection timing, how exhaust gas is recirculated through the vehicle, and how much urea should be injected into the exhaust—could respond.
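The lab-detection logic described above can be sketched as a toy check. This is purely an illustration of the researchers' description, not Bosch's actual code; the function name is hypothetical, and the thresholds are the figures quoted in this article (31-minute / 11.04-mile test cycles, 20° of steering deviation):

```python
# Toy sketch (not Bosch's code): the car assumes it is in a lab test
# until some observation rules that out.
def looks_like_lab_test(minutes_driven, miles_driven, steering_deg):
    # US emissions cycle tests last no more than 31 minutes
    # and cover exactly 11.04 miles
    if minutes_driven > 31 or miles_driven > 11.04:
        return False
    # a steering wheel more than 20 degrees off neutral
    # suggests real-world driving, not a dynamometer
    if abs(steering_deg) > 20:
        return False
    return True

print(looks_like_lab_test(10, 5.0, 0))    # still consistent with a lab test
print(looks_like_lab_test(45, 20.0, 0))   # too long / too far: real driving
```

The real firmware combined around ten such checks, each of which could switch individual ECU functions between the two operating regimes.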

Hidden in plain sight

In many of the function sheets, the code to cheat emissions tests was labeled as modifying the “acoustic condition” of the engine—a label that helped the cheat fly under the radar for as long as it did. But the “acoustic condition” wasn’t simply labeling legerdemain. Originally, “acoustic condition” code really did apply to the sound of the engine on startup.

The labelling checks out with a story in a legal complaint filed by the New York Attorney General in July 2016. The state’s top lawyer wrote that, as early as 2004, Audi was looking for a way to stop the rattling sound that diesel engines are known to make as the engine starts up. They allegedly solved the problem by injecting extra fuel as the engine initiated combustion. The extra fuel increased emissions. So, the “acoustic condition” was modified to discern when lab testing was occurring so that the Audis in question could pass their emissions tests and also have nice quiet start-ups that customers would find inoffensive.

According to that same 2016 complaint from the New York Attorney General, Volkswagen later struggled with building its selective catalytic converter—the company felt that the kind licensed by Mercedes-Benz was superior, and VW’s solution required an extra tank that stored “gallons” of Diesel Exhaust Fluid. Volkswagen then tried to build a “Lean Trap” for the catalytic converter to trap NOx, but engineers found that the car could only go 50,000 miles before the Lean Trap broke.

At this point, according to the NY AG, engineers turned back to that “acoustic condition” code for a little help to meet strict emissions regulations. Those engineers were also on a deadline to put forward a product that could compete with rival diesel passenger vehicles.

That legal assessment of the defeat device scandal seems to have held up as the researchers analyzed the cars’ code. The VWs and Audis in question checked for a number of parameters at startup, and if a lab test was a possibility, the car would start with that assumption, enabling full emissions controls. The code permitted the car “to operate... as if two distinct personalities took turns controlling the vehicle,” the paper’s authors wrote.

The paper also notes that the researchers tested the diesel Fiat 500X because it used the same Engine Control Unit from Bosch as the Volkswagens and Audis did. There was no mention of the “acoustic condition” in the Fiat’s function sheet, but some undisclosed code was discovered controlling how the car regenerates its NOx Storage Catalyst (NSC).

“Unlike the Volkswagen defeat device, the FCA [Fiat Chrysler Automobiles] mechanism relies on time only, reducing the frequency of NSC regenerations 26 minutes 40 seconds after engine start,” the paper notes. In a normal system, the NSC reduces NOx emission by trapping it in a catalyst and then regenerating the catalyst as it gets full.

But regeneration hurts a car’s fuel economy numbers and puts a lot of load on the Diesel Particulate Filter (DPF). “By reducing the frequency of NSC regeneration, a manufacturer can improve fuel economy and increase DPF service life, at the cost of increased NOx emissions,” the researchers explained.

This problem? It’s an arms race.

To do a lot of their analysis, the authors of the paper developed a static analysis system that could scan auto firmware to look for defeat devices. They were largely successful with Volkswagens and Audis, but they stressed that more work has to be put into this problem. Staying ahead of automakers is difficult when they know precisely what regulators are looking for. Automakers, of course, stand to gain considerably if they can hide emissions cheats and deliver cars with performance superior to their competitors. Consumers will be excited to get better gas mileage—they won’t necessarily know that their car is spewing an outsized chunk of NOx into the air behind them.

The researchers also say that it’s high time regulators dispense with the kind of lab tests that US and EU governments have required for years. Instead, some kind of active scan for illegal code needs to be developed. This problem, the paper notes, “drives a critical research agenda going forward that will only become more important as regulators are asked to oversee and evaluate increasingly complex vehicular systems (e.g., autonomous driving).”

Elliptic Curves as Python Objects


Last time we saw a geometric version of the algorithm to add points on elliptic curves. We went quite deep into the formal setting for it (projective space \mathbb{P}^2), and we spent a lot of time talking about the right way to define the “zero” object in our elliptic curve so that our issues with vertical lines would disappear.

With that understanding in mind we now finally turn to code, and write classes for curves and points and implement the addition algorithm. As usual, all of the code we wrote in this post is available on this blog’s Github page.

Every introductory programming student has probably written the following program in some language for a class representing a point.

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

It’s the simplest possible nontrivial class: an x and y value initialized by a constructor (and in Python all member variables are public).

We want this class to represent a point on an elliptic curve, and overload the addition and negation operators so that we can do stuff like this:

p1 = Point(3,7)
p2 = Point(4,4)
p3 = p1 + p2

But as we’ve spent quite a while discussing, the addition operators depend on the features of the elliptic curve they’re on (we have to draw lines and intersect it with the curve). There are a few ways we could make this happen, but in order to make the code that uses these classes as simple as possible, we’ll have each point contain a reference to the curve they come from. So we need a curve class.

It’s pretty simple, actually, since the class is just a placeholder for the coefficients of the defining equation. We assume the equation is already in the Weierstrass normal form, but if it weren’t, one could perform a whole bunch of algebra to get it in that form (and you can see how convoluted the process is in this short report or page 115 (pdf p. 21) of this book). To be safe, we’ll add a few extra checks to make sure the curve is smooth.

class EllipticCurve(object):
   def __init__(self, a, b):
      # assume we're already in the Weierstrass form
      self.a = a
      self.b = b

      self.discriminant = -16 * (4 * a*a*a + 27 * b * b)
      if not self.isSmooth():
         raise Exception("The curve %s is not smooth!" % self)

   def isSmooth(self):
      return self.discriminant != 0

   def testPoint(self, x, y):
      return y*y == x*x*x + self.a * x + self.b

   def __str__(self):
      return 'y^2 = x^3 + %Gx + %G' % (self.a, self.b)

   def __eq__(self, other):
      return (self.a, self.b) == (other.a, other.b)

And here are some examples of creating curves:

>>> EllipticCurve(a=17, b=1)
y^2 = x^3 + 17x + 1
>>> EllipticCurve(a=0, b=0)
Traceback (most recent call last):
  [...]
Exception: The curve y^2 = x^3 + 0x + 0 is not smooth!

So there we have it. Now when we construct a Point, we add the curve as the extra argument and a safety-check to make sure the point being constructed is on the given elliptic curve.

class Point(object):
   def __init__(self, curve, x, y):
      self.curve = curve # the curve containing this point
      self.x = x
      self.y = y

      if not curve.testPoint(x,y):
         raise Exception("The point %s is not on the given curve %s" % (self, curve))

Note that this last check will serve as a coarse unit test for all of our examples. If we mess up then more likely than not the “added” point won’t be on the curve at all. More precise testing is required to be bullet-proof, of course, but we leave explicit tests to the reader as an excuse to get their hands wet with equations.

Some examples:

>>> c = EllipticCurve(a=1,b=2)
>>> Point(c, 1, 2)
(1, 2)
>>> Point(c, 1, 1)
Traceback (most recent call last):
  [...]
Exception: The point (1, 1) is not on the given curve y^2 = x^3 + 1x + 2

Before we go ahead and implement addition and the related functions, we need to decide how we want to represent the ideal point [0 : 1 : 0]. We have two options. The first is to do everything in projective coordinates and define a whole system for doing projective algebra. Considering we only have one point to worry about, this seems like overkill (but could be fun). The second option, and the one we’ll choose, is to have a special subclass of Point that represents the ideal point.

class Ideal(Point):
   def __init__(self, curve):
      self.curve = curve

   def __str__(self):
      return "Ideal"

Note the inheritance is denoted by the parenthetical (Point) in the first line. Each function we define on a Point will require a 1-2 line overriding function in this subclass, so we will only need a small amount of extra bookkeeping. For example, negation is quite easy.

class Point(object):
   ...
   def __neg__(self):
      return Point(self.curve, self.x, -self.y)

class Ideal(Point):
   ...
   def __neg__(self):
      return self

Note that Python allows one to override the prefix-minus operation by defining __neg__ on a custom object. There are similar functions for addition (__add__), subtraction, and pretty much every built-in Python operation. And of course addition is where things get more interesting. For the ideal point it’s trivial.

class Ideal(Point):
   ...
   def __add__(self, Q):
      return Q

Why does this make sense? Because (as we’ve said last time) the ideal point is the additive identity in the group structure of the curve. So by all of our analysis, P + 0 = 0 + P = P, and the code is satisfyingly short.

For distinct points we have to follow the algorithm we used last time. Remember that the trick was to form the line L(x) passing through the two points being added, substitute that line for y in the elliptic curve, and then figure out the coefficient of x^2 in the resulting polynomial. Then, using the two existing points, we could solve for the third root of the polynomial using Vieta’s formula.

In order to do that, we need to analytically solve for the coefficient of the x^2 term of the equation L(x)^2 = x^3 + ax + b. It’s tedious, but straightforward. First, write

\displaystyle L(x) = \left ( \frac{y_2 - y_1}{x_2 - x_1} \right ) (x - x_1) + y_1

The first step of expanding L(x)^2 gives us

\displaystyle L(x)^2 = y_1^2 + 2y_1 \left ( \frac{y_2 - y_1}{x_2 - x_1} \right ) (x - x_1) + \left [ \left (\frac{y_2 - y_1}{x_2 - x_1} \right ) (x - x_1) \right ]^2

And we notice that the only term containing an x^2 part is the last one. Expanding that gives us

\displaystyle \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2 (x^2 - 2xx_1 + x_1^2)

And again we can discard the parts that don’t involve x^2. In other words, if we were to rewrite L(x)^2 = x^3 + ax + b as 0 = x^3 - L(x)^2 + ax + b, we’d expand all the terms and get something that looks like

\displaystyle 0 = x^3 - \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2 x^2 + C_1x + C_2

where C_1, C_2 are some constants that we don’t need. Now using Vieta’s formula and calling x_3 the third root we seek, we know that

\displaystyle x_1 + x_2 + x_3 = \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2

Which means that x_3 = \left ( \frac{y_2 - y_1}{x_2 - x_1} \right )^2 - x_2 - x_1. Once we have x_3, we can get y_3 from the equation of the line y_3 = L(x_3).
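As a sanity check, the formulas above can be evaluated directly. Here is a hypothetical worked example using exact fractions on the curve y^2 = x^3 - 2x + 4, with the same points P = (3, 5) and Q = (-2, 0) that appear in the examples later in the post:

```python
from fractions import Fraction as F

# Curve y^2 = x^3 + ax + b with a = -2, b = 4, and two distinct points.
a, b = F(-2), F(4)
x1, y1 = F(3), F(5)
x2, y2 = F(-2), F(0)

m = (y2 - y1) / (x2 - x1)   # slope of the secant line L(x)
x3 = m * m - x2 - x1        # Vieta: x1 + x2 + x3 = m^2
y3 = m * (x3 - x1) + y1     # y3 = L(x3), the third intersection point

# The group sum is the reflection (x3, -y3).
print((x3, -y3))            # (Fraction(0, 1), Fraction(-2, 1))

# And the sum really does lie on the curve:
assert (-y3) ** 2 == x3 ** 3 + a * x3 + b
```

This agrees with the P+Q = (0, -2) computed by the class examples below.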

Note that this only works if the two points we’re trying to add are different! The other two cases were if the points were the same or lying on a vertical line. These gotchas will manifest themselves as conditional branches of our add function.

class Point(object):
   ...
   def __add__(self, Q):
      if isinstance(Q, Ideal):
         return self

      x_1, y_1, x_2, y_2 = self.x, self.y, Q.x, Q.y

      if (x_1, y_1) == (x_2, y_2):
         # use the tangent method
         ...
      else:
         if x_1 == x_2:
            return Ideal(self.curve) # vertical line

         # Using Vieta's formula for the sum of the roots
         m = (y_2 - y_1) / (x_2 - x_1)
         x_3 = m*m - x_2 - x_1
         y_3 = m*(x_3 - x_1) + y_1

         return Point(self.curve, x_3, -y_3)

First, we check if the two points are the same, in which case we use the tangent method (which we do next). Supposing the points are different, if their x values are the same then the line is vertical and the third point is the ideal point. Otherwise, we use the formula we defined above. Note the subtle and crucial minus sign at the end! The point (x_3, y_3) is the third point of intersection, but we still have to do the reflection to get the sum of the two points.

Now for the case when the points P, Q are actually the same. We’ll call it P = (x_1, y_1), and we’re trying to find 2P = P+P. As per our algorithm, we compute the tangent line J(x) at P. In order to do this we need just a tiny bit of calculus. To find the slope of the tangent line we implicitly differentiate the equation y^2 = x^3 + ax + b and get

\displaystyle \frac{dy}{dx} = \frac{3x^2 + a}{2y}

The only time we’d get a vertical line is when the denominator is zero (you can verify this by taking limits if you wish), and so y=0 implies that P+P = 0 and we’re done. The fact that this can ever happen for a nonzero P should be surprising to any reader unfamiliar with groups! But without delving into a deep conversation about the different kinds of group structures out there, we’ll have to settle for such nice surprises.

In the other case y \neq 0, we plug in our x,y values into the derivative and read off the slope m as (3x_1^2 + a)/(2y_1). Then using the same point slope formula for a line, we get J(x) = m(x-x_1) + y_1, and we can use the same technique (and the same code!) from the first case to finish.
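The doubling case can be sketched the same way. Again this is a worked example with exact fractions on the curve y^2 = x^3 - 2x + 4 with P = (3, 5); the result matches the floating-point (0.25, 1.875) seen in the examples further down:

```python
from fractions import Fraction as F

# Doubling P = (3, 5) on y^2 = x^3 - 2x + 4 via the tangent line.
a = F(-2)
x1, y1 = F(3), F(5)

m = (3 * x1 * x1 + a) / (2 * y1)   # slope of the tangent at P
x3 = m * m - x1 - x1               # Vieta, with x1 counted as a double root
y3 = m * (x3 - x1) + y1            # third intersection, before reflecting

print((x3, -y3))                   # (Fraction(1, 4), Fraction(15, 8))
```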

There is only one minor wrinkle we need to smooth out: can we be sure Vieta’s formula works? In fact, the real problem is this: how do we know that x_1 is a double root of the resulting cubic? Well, this falls out again from that very abstract and powerful theorem of Bezout. There is a lot of technical algebraic geometry (and a very interesting but complicated notion of dimension) hiding behind the curtain here. But for our purposes it says that our tangent line intersects the elliptic curve with multiplicity 2, and this gives us a double root of the corresponding cubic.

And so in the addition function all we need to do is change the slope we’re using. This gives us a nice and short implementation

def __add__(self, Q):
      if isinstance(Q, Ideal):
         return self

      x_1, y_1, x_2, y_2 = self.x, self.y, Q.x, Q.y

      if (x_1, y_1) == (x_2, y_2):
         if y_1 == 0:
            return Ideal(self.curve)

         # slope of the tangent line
         m = (3 * x_1 * x_1 + self.curve.a) / (2 * y_1)
      else:
         if x_1 == x_2:
            return Ideal(self.curve)

         # slope of the secant line
         m = (y_2 - y_1) / (x_2 - x_1)

      x_3 = m*m - x_2 - x_1
      y_3 = m*(x_3 - x_1) + y_1

      return Point(self.curve, x_3, -y_3)

What’s interesting is how little the data of the curve comes into the picture. Nothing depends on b, and only one of the two cases depends on a. This is one reason the Weierstrass normal form is so useful, and it may bite us in the butt later in the few cases we don’t have it (for special number fields).

Here are some examples.

>>> C = EllipticCurve(a=-2,b=4)
>>> P = Point(C, 3, 5)
>>> Q = Point(C, -2, 0)
>>> P+Q
(0.0, -2.0)
>>> Q+P
(0.0, -2.0)
>>> Q+Q
Ideal
>>> P+P
(0.25, 1.875)
>>> P+P+P
Traceback (most recent call last):
  ...
Exception: The point (-1.958677685950413, 0.6348610067618328) is not on the given curve y^2 = x^3 + -2x + 4!

>>> x = -1.958677685950413
>>> y = 0.6348610067618328
>>> y*y - x*x*x + 2*x - 4
-3.9968028886505635e-15

And so we crash headfirst into our first floating point arithmetic issue. We’ll vanquish this monster more permanently later in this series (in fact, we’ll just scrap it entirely and define our own number system!), but for now here’s a quick fix:

>>> import fractions
>>> frac = fractions.Fraction
>>> C = EllipticCurve(a = frac(-2), b = frac(4))
>>> P = Point(C, frac(3), frac(5))
>>> P+P+P
(Fraction(-237, 121), Fraction(845, 1331))

Now that we have addition and negation, the rest of the class is just window dressing. For example, we want to be able to use the subtraction symbol, and so we need to implement __sub__

def __sub__(self, Q):
   return self + -Q

Note that because the Ideal point is a subclass of Point, it inherits all of these special functions while it only needs to override __add__ and __neg__. Thank you, polymorphism! The last function we want is a scaling function, which efficiently adds a point to itself n times.

class Point(object):
   ...
   def __mul__(self, n):
      if not isinstance(n, int):
         raise Exception("Can't scale a point by something which isn't an int!")
      else:
         if n < 0:
            return -self * -n
         if n == 0:
            return Ideal(self.curve)
         else:
            Q = self
            R = self if n & 1 == 1 else Ideal(self.curve)

            i = 2
            while i <= n:
               Q = Q + Q

               if n & i == i:
                  R = Q + R

               i = i << 1

            return R

   def __rmul__(self, n):
      return self * n

class Ideal(Point):
   ...
   def __mul__(self, n):
      if not isinstance(n, int):
         raise Exception("Can't scale a point by something which isn't an int!")
      else:
         return self

The scaling function allows us to quickly compute nP = P + P + \dots + P (n times). Indeed, the fact that we can do this more efficiently than performing n additions is what makes elliptic curve cryptography work. We’ll take a deeper look at this in the next post, but for now let’s just say what the algorithm is doing.

Given a number written in binary n = b_kb_{k-1}\dots b_1b_0, we can write nP as

\displaystyle b_0 P + b_1 2P + b_2 4P + \dots + b_k 2^k P

The advantage of this is that we can compute each of the P, 2P, 4P, \dots, 2^kP iteratively using only k additions by multiplying by 2 (adding something to itself) k times. Since the number of bits in n is k= \log(n), we’re getting a huge improvement over n additions.

The algorithm is given above in code, but it’s a simple bit-shifting trick. Just have i be some power of two, shifted by one at the end of every loop. Then start with Q_0 being P, and replace Q_{j+1} = Q_j + Q_j, and in typical programming fashion we drop the indices and overwrite the variable binding at each step (Q = Q+Q). Finally, we have a variable R to which Q_j is added when the j-th bit of n is a 1 (and ignored when it’s 0). The rest is bookkeeping.

Note that __mul__ only allows us to write something like P * n, but the standard notation for scaling is n * P. This is what __rmul__ allows us to do.

We could add many other helper functions, such as ones to allow us to treat points as if they were lists, checking for equality of points, comparison functions to allow one to sort a list of points in lex order, or a function to transform points into more standard types like tuples and lists. We have done a few of these that you can see if you visit the code repository, but we’ll leave fleshing out the class as an exercise to the reader.

Some examples:

>>> import fractions
>>> frac = fractions.Fraction
>>> C = EllipticCurve(a = frac(-2), b = frac(4))
>>> P = Point(C, frac(3), frac(5))
>>> Q = Point(C, frac(-2), frac(0))
>>> P-Q
(Fraction(0, 1), Fraction(-2, 1))
>>> P+P+P+P+P
(Fraction(2312883, 1142761), Fraction(-3507297955, 1221611509))
>>> 5*P
(Fraction(2312883, 1142761), Fraction(-3507297955, 1221611509))
>>> Q - 3*P
(Fraction(240, 1), Fraction(3718, 1))
>>> -20*P
(Fraction(872171688955240345797378940145384578112856996417727644408306502486841054959621893457430066791656001, 520783120481946829397143140761792686044102902921369189488390484560995418035368116532220330470490000), Fraction(-27483290931268103431471546265260141280423344817266158619907625209686954671299076160289194864753864983185162878307166869927581148168092234359162702751, 11884621345605454720092065232176302286055268099954516777276277410691669963302621761108166472206145876157873100626715793555129780028801183525093000000))

As one can see, the precision gets very large very quickly. One thing we’ll do to avoid such large numbers (but hopefully not sacrifice security) is to work in finite fields, the simplest version of which is to compute modulo some prime.

So now we have a concrete understanding of the algorithm for adding points on elliptic curves, and a working Python program to do this for rational numbers or floating point numbers (if we want to deal with precision issues). Next time we’ll continue this train of thought and upgrade our program (with very little work!) to work over other simple number fields. Then we’ll delve into the cryptographic issues, and talk about how one might encode messages on a curve and use algebraic operations to encode their messages.

Until then!


The Science Behind the Flamingo’s One-Legged Stance


With the help of zoo keepers, the researchers coaxed eight young flamingos who had just eaten and were getting sleepy onto a device called a force plate to measure their postural sway, or the movements of an unsteady body as it tries to stabilize itself.

Researchers coaxed a baby flamingo onto a force plate to measure how it stabilizes itself while standing on one leg.Credit Rob Feit/Georgia Tech

“Remarkably, when they’re falling asleep, the motion and the speed of the body was very, very low,” said Dr. Ting. “That’s counterintuitive because when you and I stand on one leg and close our eyes, we generally have more postural sway.”

That’s because our response is complicated. The nervous system senses instability and sends messages to muscles to tell them to contract to stabilize the body. But the steady zoo flamingos appeared to use some kind of passive strategy that relied less on muscles and nerves, and more on the simple mechanics of how their bodies fit together.

The researchers used flamingo cadavers, which obviously lack active muscles, to see if muscles were necessary for this stability.

Dr. Chang stood the cadavers up in a one-legged position. Rather than flopping over as expected, the birds settled into a stable, one-legged posture that stayed put even when the top of the body was tilted backward and forward. On two legs, or if the foot was not right below the body, the cadavers were far less stable.

The joints also unfolded easily, suggesting that a flamingo can transition out of this position without much effort, whether to switch legs, respond to wind or muddy water, or escape a threat.

The birds showed that “it’s possible to maintain what we’d consider very difficult posture without having to activate muscles,” said Dr. Ting. The birds might, she added, rely on gravity and some interaction between joints and ligaments to keep everything in place.

Because moving in and out of the one-legged stance appears to use little energy, flamingos could inspire improvements for robotics and powered prosthetics, said Dr. Ting, who studies the process of recovering movement after an injury.

“Usually as humans we take the standing behavior for granted until we lose that ability,” she said. Simplicity may be for the birds, but we complicated humans can appreciate its lessons.


Hacking Go's type system


Are you in the mood for a stroll inside Go’s type system? If you are already familiar with it, this post may be funny to you, or just plain stupid.

If you have no idea how types and interfaces are implemented on Go, you may learn something, I sure did :-)

Since I have worked with handwritten type systems in C, like the one found in GLib’s GObject, I’m always curious about how languages implement the concept of type safety on a machine that actually only has numbers. This curiosity has extended to how I could bend Go’s type system to my own will.

Go’s instances do not carry type information on them, so my only chance will involve using an interface{}. All the type systems that I’m aware of usually implement types as some sort of integer code, which is used to check whether the type corresponds to the one you are casting to.

To begin the exploration I wanted to find how type assertions are made, so I wrote this ridiculous code:

package main

import "fmt"

func main() {
	var a int
	var b interface{} = a
	c := b.(int)
	fmt.Println(c)
}

I compiled it, outputting its assembly:

go build -gcflags -S cast.go

I found that the assembly code corresponding to those lines is, roughly, this:

	0x002a 00042 (cast.go:7)	LEAQ	type.int(SB), AX
	0x0031 00049 (cast.go:8)	CMPQ	AX, AX
	0x0034 00052 (cast.go:8)	JNE	$0, 162
	0x0036 00054 (cast.go:9)	MOVQ	$0, ""..autotmp_3+56(SP)
	0x003f 00063 (cast.go:9)	MOVQ	$0, ""..autotmp_2+64(SP)
	0x0048 00072 (cast.go:9)	MOVQ	$0, ""..autotmp_2+72(SP)
	0x0051 00081 (cast.go:9)	MOVQ	AX, (SP)
	0x0055 00085 (cast.go:9)	LEAQ	""..autotmp_3+56(SP), AX
	0x005a 00090 (cast.go:9)	MOVQ	AX, 8(SP)
	0x005f 00095 (cast.go:9)	PCDATA	$0, $1
	0x005f 00095 (cast.go:9)	CALL	runtime.convT2E(SB)

The runtime.convT2E call caught my attention; it was not hard to find it in the iface.go file in the Go source code.

Its code (at the time of writing, Go 1.8.1):

// The conv and assert functions below do very similar things.
// The convXXX functions are guaranteed by the compiler to succeed.
// The assertXXX functions may fail (either panicking or returning false,
// depending on whether they are 1-result or 2-result).
// The convXXX functions succeed on a nil input, whereas the assertXXX
// functions fail on a nil input.
func convT2E(t *_type, elem unsafe.Pointer) (e eface) {
	if raceenabled {
		raceReadObjectPC(t, elem, getcallerpc(unsafe.Pointer(&t)), funcPC(convT2E))
	}
	if msanenabled {
		msanread(elem, t.size)
	}
	if isDirectIface(t) {
		// This case is implemented directly by the compiler.
		throw("direct convT2E")
	}
	x := newobject(t)
	// TODO: We allocate a zeroed object only to overwrite it with
	// actual data. Figure out how to avoid zeroing. Also below in convT2I.
	typedmemmove(t, x, elem)
	e._type = t
	e.data = x
	return
}

There is the eface type, which has a _type field. Checking out what an eface is, I found this:

type iface struct {
	tab  *itab
	data unsafe.Pointer
}

type eface struct {
	_type *_type
	data  unsafe.Pointer
}

From what I read before in Russ Cox’s post about interfaces, I would guess that iface is used when you are using interfaces that actually have methods. That is why it has an itab, the interface table, which is roughly equivalent to a C++ vtable.

I will ignore the iface (although it is interesting) since it does not seem to be what I need to hack Go’s type system; there is more potential in eface, which covers the special case of empty interfaces (the equivalent of a void pointer in C).

In the post, Russ Cox says that the empty interface is a special case that holds only the type information + the data; there is no itable, since it would make no sense at all (an interface{} has no methods).

The interface{} is just a way to transport runtime type information + data in a generic way through your code, and it seems to be the most promising way to hack types.

The type is:

type _type struct {
	size       uintptr
	ptrdata    uintptr // size of memory prefix holding all pointers
	hash       uint32
	tflag      tflag
	align      uint8
	fieldalign uint8
	kind       uint8
	alg        *typeAlg
	// gcdata stores the GC type data for the garbage collector.
	// If the KindGCProg bit is set in kind, gcdata is a GC program.
	// Otherwise it is a ptrmask bitmap. See mbitmap.go for details.
	gcdata    *byte
	str       nameOff
	ptrToThis typeOff
}

Lots of promising fields to hack with, but the actual type check is just a direct pointer comparison:

if e._type != t {
	panic(&TypeAssertionError{"", e._type.string(), t.string(), ""})
}

It seems easier to just find a way to get the eface struct and overwrite its type pointer with the one I desire. This smells like a job for the unsafe package.

I still don’t have a good idea on how to get the _type, or how to manipulate the eface type. My guess would be to just cast it as a pointer and do some old school pointer manipulation, but I’m not sure yet.

One function that is a good candidate to give some directions on how to do it is reflect.TypeOf:

func TypeOf(i interface{}) Type {
	eface := *(*emptyInterface)(unsafe.Pointer(&i))
	return toType(eface.typ)
}

Yeah, just cast the pointer to an eface pointer:

// emptyInterface is the header for an interface{} value.
type emptyInterface struct {
	typ  *rtype
	word unsafe.Pointer
}

It seems that although eface is private in the runtime package, it is copied here in the reflect package. Well, if the reflect package can do it, so can I :-) (a little duplication is better than a big dependency, right?).

Before going on, I was curious about where the types are initialized. It seems that there is just one unique pointer with all the type information for each type. Thanks to vim-go and go guru for the invaluable help in analysing code and letting me check all the referrers to a type, it was pretty easy to find this in runtime/symtab.go:

// moduledata records information about the layout of the executable
// image. It is written by the linker. Any changes here must be
// matched changes to the code in cmd/internal/ld/symtab.go:symtab.
// moduledata is stored in read-only memory; none of the pointers here
// are visible to the garbage collector.
type moduledata struct {
	pclntable    []byte
	ftab         []functab
	filetab      []uint32
	findfunctab  uintptr
	minpc, maxpc uintptr

	text, etext           uintptr
	noptrdata, enoptrdata uintptr
	data, edata           uintptr
	bss, ebss             uintptr
	noptrbss, enoptrbss   uintptr
	end, gcdata, gcbss    uintptr
	types, etypes         uintptr

	textsectmap []textsect
	typelinks   []int32 // offsets from types
	itablinks   []*itab

	ptab []ptabEntry

	pluginpath string
	pkghashes  []modulehash

	modulename   string
	modulehashes []modulehash

	gcdatamask, gcbssmask bitvector

	typemap map[typeOff]*_type // offset to *_rtype in previous module

	next *moduledata
}

A good candidate is the typemap field; checking out how it is used, I found this in runtime/type.go:

// typelinksinit scans the types from extra modules and builds the
// moduledata typemap used to de-duplicate type pointers.
func typelinksinit() {
	if firstmoduledata.next == nil {
		return
	}
	typehash := make(map[uint32][]*_type, len(firstmoduledata.typelinks))

	modules := activeModules()
	prev := modules[0]
	for _, md := range modules[1:] {
		// Collect types from the previous module into typehash.
	collect:
		for _, tl := range prev.typelinks {
			var t *_type
			if prev.typemap == nil {
				t = (*_type)(unsafe.Pointer(prev.types + uintptr(tl)))
			} else {
				t = prev.typemap[typeOff(tl)]
			}
			// Add to typehash if not seen before.
			tlist := typehash[t.hash]
			for _, tcur := range tlist {
				if tcur == t {
					continue collect
				}
			}
			typehash[t.hash] = append(tlist, t)
		}

		if md.typemap == nil {
			// If any of this module's typelinks match a type from a
			// prior module, prefer that prior type by adding the offset
			// to this module's typemap.
			tm := make(map[typeOff]*_type, len(md.typelinks))
			pinnedTypemaps = append(pinnedTypemaps, tm)
			md.typemap = tm
			for _, tl := range md.typelinks {
				t := (*_type)(unsafe.Pointer(md.types + uintptr(tl)))
				for _, candidate := range typehash[t.hash] {
					if typesEqual(t, candidate) {
						t = candidate
						break
					}
				}
				md.typemap[typeOff(tl)] = t
			}
		}

		prev = md
	}
}

It seems that the typemap is initialized at process startup, with the help of information collected by the linker at build time.

The typelinksinit function is called from the schedinit function (in runtime/proc.go):

// The bootstrap sequence is:
//
//	call osinit
//	call schedinit
//	make & queue new G
//	call runtime·mstart
//
// The new G calls runtime·main.
func schedinit() {
	// raceinit must be the first call to race detector.
	// In particular, it must be done before mallocinit below calls racemapshadow.
	_g_ := getg()
	if raceenabled {
		_g_.racectx, raceprocctx0 = raceinit()
	}

	sched.maxmcount = 10000

	tracebackinit()
	moduledataverify()
	stackinit()
	mallocinit()
	mcommoninit(_g_.m)
	alginit()       // maps must not be used before this call
	modulesinit()   // provides activeModules
	typelinksinit() // uses maps, activeModules
	itabsinit()     // uses activeModules

	msigsave(_g_.m)
	initSigmask = _g_.m.sigmask

	goargs()
	goenvs()
	parsedebugvars()
	gcinit()

	sched.lastpoll = uint64(nanotime())
	procs := ncpu
	if n, ok := atoi32(gogetenv("GOMAXPROCS")); ok && n > 0 {
		procs = n
	}
	if procs > _MaxGomaxprocs {
		procs = _MaxGomaxprocs
	}
	if procresize(procs) != nil {
		throw("unknown runnable goroutine during bootstrap")
	}

	if buildVersion == "" {
		// Condition should never trigger. This code just serves
		// to ensure runtime·buildVersion is kept in the resulting binary.
		buildVersion = "unknown"
	}
}

And schedinit, at least according to go guru, is not called anywhere. The output of -gcflags -S also has no reference to this initialization.

Searching inside the runtime package:

(runtime)λ> grep -R schedinit .
./asm_amd64.s: CALL    runtime·schedinit(SB)
./asm_mips64x.s:       JAL     runtime·schedinit(SB)
./asm_arm.s:   BL      runtime·schedinit(SB)
./proc.go://   call schedinit
./proc.go:func schedinit() {
./asm_s390x.s: BL      runtime·schedinit(SB)
./traceback.go:        // schedinit calls this function so that the variables are
./asm_ppc64x.s:        BL      runtime·schedinit(SB)
./asm_arm64.s: BL      runtime·schedinit(SB)
./asm_386.s:   CALL    runtime·schedinit(SB)
./asm_mipsx.s: JAL     runtime·schedinit(SB)
./asm_amd64p32.s:      CALL    runtime·schedinit(SB)

It seems like the bootstrapping code for each supported platform is ASM code. Let’s take a look at the amd64 implementation:

	CLD				// convention is D is always left cleared
	CALL	runtime·check(SB)

	MOVL	16(SP), AX		// copy argc
	MOVL	AX, 0(SP)
	MOVQ	24(SP), AX		// copy argv
	MOVQ	AX, 8(SP)
	CALL	runtime·args(SB)
	CALL	runtime·osinit(SB)
	CALL	runtime·schedinit(SB)

	// create a new goroutine to start program
	MOVQ	$runtime·mainPC(SB), AX		// entry
	PUSHQ	AX
	PUSHQ	$0			// arg size
	CALL	runtime·newproc(SB)
	POPQ	AX
	POPQ	AX

The whole thing has more than 2000 lines, so I just copied the part that confirms that schedinit is called before running the actual code; in schedinit, typelinksinit is called, which initializes the types map.

Sorry, I got pretty far from the objective; let’s go back to the type system hacking fun. Let’s start with some copying fun, just like the reflect package does, to inspect the details of different types:

package main

import (
	"fmt"
	"unsafe"
)

// tflag values must be kept in sync with copies in:
//	cmd/compile/internal/gc/reflect.go
//	cmd/link/internal/ld/decodesym.go
//	runtime/type.go
type tflag uint8

type typeAlg struct {
	// function for hashing objects of this type
	// (ptr to object, seed) -> hash
	hash func(unsafe.Pointer, uintptr) uintptr
	// function for comparing objects of this type
	// (ptr to object A, ptr to object B) -> ==?
	equal func(unsafe.Pointer, unsafe.Pointer) bool
}

type nameOff int32 // offset to a name
type typeOff int32 // offset to an *rtype

type rtype struct {
	size       uintptr
	ptrdata    uintptr
	hash       uint32   // hash of type; avoids computation in hash tables
	tflag      tflag    // extra type information flags
	align      uint8    // alignment of variable with this type
	fieldAlign uint8    // alignment of struct field with this type
	kind       uint8    // enumeration for C
	alg        *typeAlg // algorithm table
	gcdata     *byte    // garbage collection data
	str        nameOff  // string form
	ptrToThis  typeOff  // type for pointer to this type, may be zero
}

type eface struct {
	typ  *rtype
	word unsafe.Pointer
}

func (e eface) String() string {
	return fmt.Sprintf("type: %#v\n\ndataptr: %v", *e.typ, e.word)
}

func getEface(i interface{}) eface {
	return *(*eface)(unsafe.Pointer(&i))
}

func main() {
	var a int
	var b int
	var c string
	var d float32
	var e float64
	var f rtype
	var g eface

	fmt.Printf("a int:\n%s\n\n", getEface(a))
	fmt.Printf("b int:\n%s\n\n", getEface(b))
	fmt.Printf("c string:\n%s\n\n", getEface(c))
	fmt.Printf("d float32:\n%s\n\n", getEface(d))
	fmt.Printf("e float64:\n%s\n\n", getEface(e))
	fmt.Printf("f rtype:\n%s\n\n", getEface(f))
	fmt.Printf("g eface:\n%s\n\n", getEface(g))
}

The output of running the code:

(typehack(git master))λ> go run inspectype.go
a int:
type: main.rtype{size:0x8, ptrdata:0x0, hash:0xf75371fa, tflag:0x7, align:0x8, fieldAlign:0x8, kind:0x82, alg:(*main.typeAlg)(0x4fb3d0), gcdata:(*uint8)(0x4b0eb8), str:843, ptrToThis:35392}

dataptr: 0xc42000a2f0

b int:
type: main.rtype{size:0x8, ptrdata:0x0, hash:0xf75371fa, tflag:0x7, align:0x8, fieldAlign:0x8, kind:0x82, alg:(*main.typeAlg)(0x4fb3d0), gcdata:(*uint8)(0x4b0eb8), str:843, ptrToThis:35392}

dataptr: 0xc42000a390

c string:
type: main.rtype{size:0x10, ptrdata:0x8, hash:0xe0ff5cb4, tflag:0x7, align:0x8, fieldAlign:0x8, kind:0x18, alg:(*main.typeAlg)(0x4fb3f0), gcdata:(*uint8)(0x4b0eb8), str:5274, ptrToThis:44480}

dataptr: 0xc42000a400

d float32:
type: main.rtype{size:0x4, ptrdata:0x0, hash:0xb0c23ed3, tflag:0x7, align:0x4, fieldAlign:0x4, kind:0x8d, alg:(*main.typeAlg)(0x4fb420), gcdata:(*uint8)(0x4b0eb8), str:6791, ptrToThis:34880}

dataptr: 0xc42000a478

e float64:
type: main.rtype{size:0x8, ptrdata:0x0, hash:0x2ea27ffb, tflag:0x7, align:0x8, fieldAlign:0x8, kind:0x8e, alg:(*main.typeAlg)(0x4fb430), gcdata:(*uint8)(0x4b0eb8), str:6802, ptrToThis:34944}

dataptr: 0xc42000a4e8

f rtype:
type: main.rtype{size:0x30, ptrdata:0x28, hash:0x622c3ba0, tflag:0x7, align:0x8, fieldAlign:0x8, kind:0x19, alg:(*main.typeAlg)(0x482ca0), gcdata:(*uint8)(0x4b0ec9), str:11620, ptrToThis:35904}

dataptr: 0xc420014270

g eface:
type: main.rtype{size:0x10, ptrdata:0x10, hash:0x4358c73f, tflag:0x7, align:0x8, fieldAlign:0x8, kind:0x19, alg:(*main.typeAlg)(0x4fb3e0), gcdata:(*uint8)(0x4b0eba), str:11606, ptrToThis:74272}

dataptr: 0xc42000a5c0

Since this is already getting pretty extensive I won’t dive into every single detail of the outputs. But we can observe some interesting things.

The hack seems to have worked perfectly, since the sizes of all the types make sense. The alignment information makes sense too. And there is the kind information. Like I said at the start, type systems usually just use a number to differentiate types.

But comparing two different structs shows an interesting characteristic from Go. Although rtype and eface are two different types, and casting between the two types won’t work, they are of the same kind 0x19.

In the reflect package there is the Kind type, which is an enumeration of all of Go’s base types. There is some information about that here. Every named/unnamed type you define in Go will always have an underlying type, which will be one of the types in the kind enumeration (which shows up in our type struct).

So in Go you can’t create types in the same sense as the native types that come with the language; you can’t create new kinds, at least AFAIK. This is considerably confusing, because kind is a synonym of type :-), but naming things is one of the two hardest challenges in programming (the other one is implementing caches :-)).

But the types you create work well enough, the compiler will help you, and reflection will also work properly. Even with the same kind, different types will have different rtype pointers associated with them, and even different sizes in the struct case; an interesting detail that I thought was worth mentioning.

Well, now we can go back to hacking the type system.

There are a lot of ways to manipulate this type information, but the most naive one I can think of is to define a function that gets an interface{} variable representing the value that will be cast and another interface{} variable that carries the type information you want to cast to. The return is a new interface{} that can be cast to the desired target type. Something like this:

func Morph(value interface{}, desiredtype interface{}) interface{}

Well, in this case the lack of generics in Go forces me to use an interface{} and push the cast to the client, or to develop a function for every basic type; even then, types defined by the client would require the client to write its own functions.

Let’s just let the client do some heavy lifting in this case, New Jersey style (not that “the right thing” doesn’t also have its place).

The final implementation can be found in morfos, the smallest and most stupid Go library ever :-).

I say this because the final hack on the type system is so simple that it makes me want to cry, Go is indeed terribly simple, nothing to feed my ego here :-(. The whole magic:

package morfos

import "unsafe"

type eface struct {
	Type unsafe.Pointer
	Word unsafe.Pointer
}

func geteface(i *interface{}) *eface {
	return (*eface)(unsafe.Pointer(i))
}

// Morph will coerce the given value to the type stored on desiredtype
// without copying or changing any data on value. The result will
// be a merge of the data stored on value with the type stored on
// desiredtype, basically a Frankenstein :-).
//
// The result value should be castable to the type of desiredtype.
func Morph(value interface{}, desiredtype interface{}) interface{} {
	valueeface := geteface(&value)
	typeeface := geteface(&desiredtype)
	valueeface.Type = typeeface.Type
	return value
}

My very first passing test:

package morfos_test

import (
	"testing"

	"github.com/katcipis/morfos"
)

func TestStructsSameSize(t *testing.T) {
	type original struct {
		x int
		y int
	}
	type notoriginal struct {
		z int
		w int
	}

	orig := original{x: 100, y: 200}
	_, ok := interface{}(orig).(notoriginal)
	if ok {
		t.Fatal("casting should be invalid")
	}

	morphed := morfos.Morph(orig, notoriginal{})
	morphedNotOriginal, ok := morphed.(notoriginal)
	if !ok {
		t.Fatal("casting should be valid now")
	}

	if orig.x != morphedNotOriginal.z {
		t.Fatalf("expected x[%d] == z[%d]", orig.x, morphedNotOriginal.z)
	}
	if orig.y != morphedNotOriginal.w {
		t.Fatalf("expected y[%d] == w[%d]", orig.y, morphedNotOriginal.w)
	}
}

This test is “safe” because both structs have the same size; C programmers must be feeling butterflies in their bellies :-).

Although the hack is small, there is a lot of fun we can have with it, but before we go on, there is one single line of unsafeness that is usually unknown to Go newcomers:

func geteface(i *interface{}) *eface {
	return (*eface)(unsafe.Pointer(i))
}

My feeling the first time I saw this was:

Go or C

With this kind of casting, my hack could be written as:

package main

import (
	"fmt"
	"unsafe"
)

type a struct {
	a int
}

type b struct {
	b int
}

func main() {
	x := a{a: 100}
	y := *(*b)(unsafe.Pointer(&x))
	fmt.Printf("x %v\n", x)
	fmt.Printf("y %v\n", y)
}

And it works: as can be read here, unsafe.Pointer has special properties that allow it to be cast just like you would in C:

A Pointer can be converted to a pointer value of any type.

You may be thinking: what was the point of all this, then? Well, the objective was to hack the type system, which is to make the runtime casting facility behave as I want; my interest was to break this:

Make the “safe” cast behave unsafely, based on sheer curiosity about how it can actually be safe (take a look under the hood). And this has been achieved.

So let’s forget that there is a VERY much simpler way to force casts in Go and have some fun with my useless hack (we should at least have some fun, right?).

In Go, strings are immutable. Or are they?

func TestMutatingString(t *testing.T) {
	type stringStruct struct {
		str unsafe.Pointer
		len int
	}

	var rawstr [5]byte
	rawstr[0] = 'h'
	rawstr[1] = 'e'
	rawstr[2] = 'l'
	rawstr[3] = 'l'
	rawstr[4] = 'o'

	hi := stringStruct{
		str: unsafe.Pointer(&rawstr),
		len: len(rawstr),
	}

	somestr := ""
	morphed := morfos.Morph(hi, somestr)
	mutableStr := morphed.(string)

	if mutableStr != "hello" {
		t.Fatalf("expected hello, got: %s", mutableStr)
	}

	rawstr[0] = 'h'
	rawstr[1] = 'a'
	rawstr[2] = 'c'
	rawstr[3] = 'k'
	rawstr[4] = 'd'

	if mutableStr != "hackd" {
		t.Fatalf("expected hackd, got: %s", mutableStr)
	}
}

To do this I exploited the fact that Go’s strings are just structs with a pointer to the actual byte array and a len; the string does not need to be null terminated, thanks to the len field.

As expected, this test passes. Without reassigning the mutableStr variable at any moment, I was able to make it represent a different string by changing its internal byte array.

Besides being fun, this hack is another example of how using the unsafe package will truly make your program unsafe. Seeing this in the code:

y := *(*b)(unsafe.Pointer(&x))

Will make all kinds of alarm bells ring in your head, but this:

Well, if ok is true it is safe to use val… or is it? :-)

Thanks to my friends/reviewers:

An Introduction to the Theory of Elliptic Curves [pdf]

Is there a tension between creativity and accuracy?


On Twitter, I’ve been chatting with my friend Julia Galef about tensions between thinking creatively and thinking in a way that reduces error.

Of course, all other things being equal, I’m in favour of reducing error in our thinking!

However, all other things are not always equal.

In particular, I believe “there’s a tension, too, between behaviours which maximize accuracy & which maximize creativity… A lot of important truths come from v. irrational ppl.”

Julia has summarized some of her thinking in a blog post, where she disagrees, writing: “I totally agree that we need more experimentation with “crazy ideas”! I’m just skeptical that rationality is, on the margin, in tension with that goal.”

Before getting to Julia’s arguments, I want to flesh out the idea of a tension between maximizing creativity and maximizing accuracy.

Consider the following statement of Feynman’s, on the need to fool himself into believing that he had a creative edge in his work. He’s talking about his early ideas on how to develop a theory of electrons and light (which became, after many years, quantum electrodynamics). The statement is a little jarring to modern sensibilities, but please look past that to the idea he’s trying to convey:

I told myself [of his competitors]: “They’re on the wrong track: I’ve got the track!” Now, in the end, I had to give up those ideas and go over to their ideas of retarded action and so on – my original idea of electrons not acting on themselves disappeared, but because I had been working so hard I found something. So, as long as I can drive myself one way or the other, it’s okay. Even if it’s an illusion, it still makes me go, and this is the kind of thing that keeps me going through the depths.

It’s like the African savages who are going into battle – first they have to gather around and beat drums and jump up and down to build up their energy to fight. I feel the same way, building up my energy by talking to myself and telling myself, “They are trying to do it this way, I’m going to do it that way” and then I get excited and I can go back to work again.

Many of the most creative scientists I know are extremely determined people, willing to explore unusual positions for years. Sometimes, those positions are well grounded. And sometimes, even well after the fact, it’s obvious they were fooling themselves, but somehow their early errors helped them find their way to the truth. They were, to use the mathematician Goro Shimura’s phrase, “gifted with the special capability of making many mistakes, mostly in the right direction”.

An extreme example is the physicist Joseph Weber, who pioneered gravitational wave astronomy. The verdict of both his contemporaries and of history is that he was fooling himself: his systems simply didn’t work the way he thought. On the other hand, even though he fooled himself for decades, the principals on the (successful!) LIGO project have repeatedly acknowledged that his work was a major stimulus for them to work on finding gravitational waves. In retrospect, it’s difficult to be anything other than glad that Weber clung so tenaciously to his erroneous beliefs.

For me, what matters here is that: (a) much of Weber’s work was based on an unreasonable belief; and (b) on net, it helped speed up important discoveries.

Weber demonstrates my point in an extreme form. He was outright wrong, and remained so, and yet his erroneous example still served a useful purpose, helping inspire others to pursue ideas that eventually worked. In some sense, this is a collective (rather than individual) version of my point. More common is the case – like Feynman – of a person who may cling to mistaken beliefs for a long period, but ultimately uses that as a bridge to new discovery.

Turning to Julia’s post, she responds to my argument with: “In general, I think overconfidence stifles experimentation”, and argues that the great majority of people in society reject “crazy” ideas – say, seasteading – because they’re overconfident in conventional wisdom.

I agree that people often mistakenly reject unusual ideas because they’re overconfident in the conventional wisdom.

However, I don’t think it’s relevant to my argument. Being overconfident in beliefs that most people hold is not at all the same as being overconfident in beliefs that few people hold.

You may wonder if the underlying cognitive mechanisms are the same, and perhaps there’s some kind of broad disposition to overconfidence?

But if that was the case then you’d expect that someone overconfident in their own unusual ideas would, in other areas, also be overconfident in the conventional wisdom.

However, my anecdotal experience is that a colleague willing to pursue unusual ideas of their own is often particularly sympathetic to unusual ideas from other people in other areas. This suggests that being overconfident in your own crazy ideas isn’t likely to stifle other experimentation.

Julia also suggests several variants on the “strategy of temporarily suspending your disbelief and throwing yourself headlong into something for a while, allowing your emotional state to be as if

In a sense, Feynman and Weber were practicing an extreme version of this strategy. I don’t know Weber’s work well, but it’s notable that in the details of Feynman’s work he was good at ferreting out error, and not fooling himself. He wasn’t always rigorous – mathematicians have, for instance, spent decades trying to make the path integral rigorous – but there was usually a strong core argument. Indeed, Feynman delivered a very stimulating speech on the value of careful thought in scientific work.

How can this careful approach to the details of argument be reconciled with his remarks about the need to fool yourself in creative work?

I never met Feynman, and can’t say how he reconciled the two points of view. But my own approach in creative work, and I believe many others also take this approach, is to carve out a sort of creative cocoon around nascent ideas.

Consider Apple designer Jony Ive’s remarks at a memorial after Steve Jobs’ death:

Steve used to say to me — and he used to say this a lot — “Hey Jony, here’s a dopey idea.”

And sometimes they were. Really dopey. Sometimes they were truly dreadful. But sometimes they took the air from the room and they left us both completely silent. Bold, crazy, magnificent ideas. Or quiet simple ones, which in their subtlety, their detail, they were utterly profound. And just as Steve loved ideas, and loved making stuff, he treated the process of creativity with a rare and a wonderful reverence. You see, I think he better than anyone understood that while ideas ultimately can be so powerful, they begin as fragile, barely formed thoughts, so easily missed, so easily compromised, so easily just squished.

To be creative, you need to recognize those barely formed thoughts, thoughts which are usually wrong and poorly formed in many ways, but which have some kernel of originality and importance and truth. And if they seem important enough to be worth pursuing, you construct a creative cocoon around them, a set of stories you tell yourself to protect the idea not just from others, but from your own self-doubts. The purpose of those stories isn’t to be an air-tight defence. It’s to give you the confidence to nurture the idea, possibly for years, to find out if there’s something really there.

And so, even someone who has extremely high standards for the final details of their work, may have an important component to their thinking which relies on rather woolly arguments. And they may well need to cling to that cocoon. Perhaps other approaches are possible. But my own experience is that this is often the case.

Julia finishes her post with:

One last point: Even if it turned out to be true that irrationality is necessary for innovators, that’s only a weak defense of your original claim, which was that I’m significantly overrating the value of rationality in general. Remember, “coming up with brilliant new ideas” is just one domain in which we could evaluate the potential value-add of increased rationality. There are lots of other domains to consider, such as designing policy, allocating philanthropic funds, military strategy, etc. We could certainly talk about those separately; for now, I’m just noting that you made this original claim about the dubious value of rationality in general, but then your argument focused on this one particular domain, innovation.

To clarify, I didn’t intend my claim to be in general: the tension I see is between creativity and accuracy.

That said, this tension does leak into other areas.

If you’re a funder, say, trying to determine what to fund in AI research, you go and talk to AI experts. And many of those people are likely to have cultivated their own creative cocoons, which will inform their remarks. How a funder should deal with that is a separate essay. My point here is simply that this process of creative cocooning isn’t easily untangled from things like evaluation of work.

The Woolly Mammoth's Last Stand


The real reason, they concluded, after examining lake bed sediments, was simply a lack of fresh water. Elephants are heavy drinkers and mammoths, their close cousins, were probably even more so, because they were adapted to the cold but were trying to survive in the post-ice age climate. During dry periods, only one lake on St. Paul was available and this seems to have failed as thirsty mammoths destroyed the plant cover around its shores.

The mammoths of Wrangel, a much larger island, survived for some 1,600 years longer and seem to have met a different fate. A team led by Eleftheria Palkopoulou and Love Dalen of the Swedish Museum of Natural History gained a major insight into the population history of the woolly mammoth by analyzing the whole genomes of two individuals. One was a mammoth from the mainland, from the Oimyakon district of northeastern Siberia, that died some 45,000 years ago at a time when the species still flourished. The other was from Wrangel Island and perished around 4,300 years ago, a few hundred years before the final extinction.

From the amount of genetic variation in each genome, the Swedish team was able to calculate the effective population size — a genetic concept roughly equivalent to the breeding population — of the woolly mammoths at each time period. The Oimyakon mammoth’s genome indicated an effective population size of 13,000 individuals whereas that of the Wrangel mammoth was a mere 300.
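The inference described above rests on the standard neutral-theory relationship between per-site nucleotide diversity and effective population size, π ≈ 4·Ne·μ. As a rough illustration only, here is a sketch of that back-calculation in Python; the mutation rate and diversity values below are hypothetical numbers chosen to reproduce the article's figures, not data from the study:

```python
def effective_population_size(pi, mu):
    """Estimate effective population size Ne from per-site nucleotide
    diversity (pi) and per-site, per-generation mutation rate (mu),
    using the neutral expectation pi = 4 * Ne * mu."""
    return pi / (4 * mu)

# Hypothetical inputs, picked only to illustrate the scale of the result:
mu = 3.8e-8  # assumed mutation rate per site per generation
print(round(effective_population_size(1.976e-3, mu)))  # mainland-like: 13000
print(round(effective_population_size(4.56e-5, mu)))   # Wrangel-like: 300
```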

[Photo: Adrian Lister, a mammoth researcher, with Lyuba, a baby woolly mammoth considered to be the most complete example of the species ever found. Credit: Matt Dunham/Associated Press]

The dwindling population during this 40,000-year period suffered a reduction in genetic diversity of some 20 percent, the Swedish team reported, suggesting that the reduced fitness of the Wrangel mammoths might have contributed to their extinction.

In fact, the Wrangel mammoth’s genome carried so many detrimental mutations that the population had suffered a “genomic meltdown,” according to Rebekah Rogers and Montgomery Slatkin of the University of California, Berkeley. Analyzing the Swedish team’s mammoth data at the gene level, they found that many genes had accumulated mutations that would have halted synthesis of proteins before they were complete, making the proteins useless, they report Thursday in PLOS Genetics.

The mammoth had lost many of the olfactory genes that underlie the sense of smell, as well as receptors in the vomeronasal gland, which detects pheromones, hormonally active scents that influence the behavior of other individuals. Loss of such genes, to judge by the situation with elephants, could disrupt mate choice and social status.

Another damaged gene is called FOXQ1, which affects the structure and translucency of hair. Mammoths had thick, rough hair that provided essential insulation in ice age climates. Damage to FOXQ1, at least in some mice and rabbits that carry the same damaged form of the gene, causes hairs to become less stiff and shiny, taking on a satin appearance. A herd of Wrangel mammoths in moonlight might have shimmered like ghosts, but any compromise to their insulation would have jeopardized survival.

[Photo: Wrangel Island today. The woolly mammoths of Wrangel Island survived longer than most, but perished about 4,000 years ago. Credit: Gabrielle & Michel Therin-Weise/Getty Images]

There has been a long theoretical debate among biologists as to whether or not species are driven to extinction before genetic factors have time to play a role. The two snapshots of the woolly mammoth genome, one while it flourished and the other near extinction, support the idea that there is genomic meltdown in small populations that contributes to extinction.

“This is probably the best evidence I can think of for the rapid genomic decay of island populations,” said Hendrik Poinar, an evolutionary geneticist at McMaster University.

The discovery that individual genes were deleted in the Wrangel mammoth’s genome is a “very novel result,” and if confirmed, “will have very important implications for conservation biology,” Dr. Dalen said.

Those implications do not seem particularly hopeful because they imply that once genomic decline has begun in a threatened species, it is irreversible. The upside, Dr. Rogers said, “is that it took hundreds of generations on this island to get a signal as strong as we saw.”

Several mammoth specialists and the biologist George Church of Harvard Medical School have proposed resurrecting the mammoth by making the necessary genetic changes in the very similar elephant genome, and then bringing the altered genome to life in the womb of an elephant surrogate mother. Though little more than a charming fantasy in 2008, when Dr. Church proposed the idea, some of the many technical obstacles have come to seem slightly less daunting.

But it’s now evident that some mammoth genomes would make hardier animals than others. “I wouldn’t recommend using a Wrangel Island mammoth as a template,” said Beth Shapiro, a biologist at the University of California, Santa Cruz, and author of “How to Clone a Mammoth.”


Samsung Says It's Serious About Foundry, Creates Business Unit


Samsung Electronics Co., the world’s second-biggest chipmaker, is increasing its effort in semiconductor outsourcing, separating the company’s foundry business into a new unit as part of a challenge to market leader Taiwan Semiconductor Manufacturing Co.

Samsung is elevating the foundry business to show its independence and guarantee its access to resources within the company. The South Korean company on Wednesday also promised customers that it will introduce new production techniques ahead of competitors and have a new plant up and running by the fourth quarter of this year.

Creating a separate unit for the business may ease the concerns of some potential customers who compete with other parts of Samsung, said Kelvin Low, a senior director of foundry marketing.

“As part of our commitment to play seriously in the foundry area we felt that it was best that we create an independent organization,” Low said. “That will result in less conflict of interest -- although that’s not really an issue these days -- it’s still a perception in some customers’ minds.”

Samsung’s commitment underlines the importance of its chip unit and the growth in demand for outsourced production of chips. Fewer companies than before can afford to invest the billions of dollars it takes to build and equip plants and fund research into new manufacturing. TSMC, which dominates the market by producing components for companies such as Apple Inc., has posted double-digit percentage revenue gains for the last five years.

In March, Intel Corp., which is the world’s largest maker of chips through its lead in the market for computer microprocessors, said it’s recommitting itself to building its own made-to-order chip business and claimed its manufacturing is still ahead of Samsung and TSMC. All three companies fight for orders from companies such as Qualcomm Inc. and Apple, which design their own chips but don’t own any manufacturing.

The electronics arm of South Korea’s largest business group doesn’t currently break out sales of its foundry unit. As the world’s biggest maker of memory chips, sales of those components make up most of the company’s total semiconductor revenue, contributing 12.12 trillion won ($10.5 billion) of 15.66 trillion won in the first quarter. That indicates the other chips, including foundry produced and those made for its own internal consumption, had sales of 3.54 trillion won in the quarter, a gain of 10 percent from a year earlier. That’s still less than half of the $7.53 billion in revenue TSMC posted in the quarter.
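The segment estimate above is simple subtraction; a quick check of the cited figures (all in trillions of won, as reported by the article):

```python
# Back-of-the-envelope check of the Q1 figures quoted above.
total_semiconductor = 15.66  # total semiconductor revenue, trillion won
memory = 12.12               # memory-chip contribution, trillion won
other_chips = total_semiconductor - memory
print(round(other_chips, 2))  # 3.54, matching the article's figure
```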

Samsung, whose chip unit provided two-thirds of companywide operating profit, is locked in a race with TSMC and Intel to improve factory output in increments the industry refers to as nanometers. Currently the best is 10-nanometer production. Samsung said it was the first to mass produce at that node and TSMC said it’s now in full production of 10 nanometer with a rapid increase in output due in the second half of this year.

Samsung on Wednesday said 8 nanometer is coming later this year and 7 will follow in 2018. Its 7-nanometer production will be the first to use extreme ultraviolet lithography -- a new version of the process of burning the circuit patterns into layers of materials on silicon to create chips. Switching to that will reduce the number of steps required, reduce cost and make the chips perform better, Samsung said. The company believes it will begin to introduce 4-nanometer production in 2020, a step which will require a brand new arrangement of transistors, the fundamental building blocks of chips.

Realizing Hackett, a metaprogrammable Haskell


Almost five months ago, I wrote a blog post about my new programming language, Hackett, a fanciful sketch of a programming language from a far-off land with Haskell’s type system and Racket’s macros. At that point in time, I had a little prototype that barely worked, that I barely understood, and was a little bit of a technical dead-end. People saw the post, they got excited, but development sort of stopped.

Then, almost two months ago, I took a second stab at the problem in earnest. I read a lot, I asked a lot of people for help, and eventually I got something sort of working. Suddenly, Hackett is not only real, it’s working, and you can try it out yourself!

Hackett is still very new, very experimental, and an enormous work in progress. However, that doesn’t mean it’s useless! Hackett is already a remarkably capable programming language. Let’s take a quick tour.

As Racket law decrees it, every Hackett program must begin with #lang. We can start with the appropriate incantation:

If you’re using DrRacket or racket-mode with background expansion enabled, then congratulations: the typechecker is online. We can begin by writing a well-typed, albeit boring program:

#lang hackett

(main (println "Hello, world!"))

In Hackett, a use of main at the top level indicates that running the module as a program should execute some IO action. In this case, println is a function of type {String -> (IO Unit)}. Just like Haskell, Hackett is pure, and the runtime will figure out how to actually run an IO value. If you run the above program, you will notice that it really does print out Hello, world!, exactly as we would like.

Of course, hello world programs are boring—so imperative! We are functional programmers, and we have our own class of equally boring programs we must write when learning a new language. How about some Fibonacci numbers?

#lang hackett

(def fibs : (List Integer)
  {0 :: 1 :: (zip-with + fibs (tail! fibs))})

(main (println (show (take 10 fibs))))

Again, Hackett is just like Haskell in that it is lazy, so we can construct an infinite list of Fibonacci numbers, and the runtime will happily do nothing at all. When we call take, we realize the first ten numbers in the list, and when you run the program, you should see them printed out, clear as day!
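For readers coming from strict languages, the same infinite-stream idea can be sketched with a generator in Python (an analogy for laziness, not Hackett's actual evaluation model):

```python
from itertools import islice

def fibs():
    """An infinite Fibonacci stream, produced lazily: nothing is
    computed until a consumer demands elements, mirroring how
    Hackett's infinite list sits inert until take forces it."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

# Realize only the first ten elements, like (take 10 fibs):
print(list(islice(fibs(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```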

But these programs are boring. Printing strings and laziness may have been novel when you first learned about them, but if you’re reading this blog post, my bet is that you probably aren’t new to programming. How about something more interesting, like a web server?

#lang hackett

(require hackett/demo/web-server)

(data Greeting (greeting String))

(instance (->Body Greeting)
  [->body (λ [(greeting name)]
            {"Hello, " ++ name ++ "!"})])

(defserver run-server
  [GET "/" -> String => "Hello, world!"]
  [GET "greet" -> String -> Greeting => greeting])

(main (do (println "Running server on port 8080.")
          (run-server 8080)))

$ racket my-server.rkt
Running server on port 8080.
^Z
$ bg
$ curl 'http://localhost:8080/greet/Alexis'
Hello, Alexis!

Welcome to Hackett.

Excited yet? I hope so. I certainly am.

Before you get a little too excited, however, let me make a small disclaimer: the above program, while quite real, is a demo. It is certainly not a production web framework, and it actually just uses the Racket web server under the hood. It does not handle very many things right now. You cannot use it to build your super awesome webapp, and even if you could, I would not recommend attempting to do so.

All that said, it is a real tech demo, and it shows off the potential for Hackett to do some pretty cool things. While the server implementation is just reusing Racket’s dynamically typed web server, the Hackett interface to it is 100% statically typed, and the above example shows off a host of features:

  • Algebraic datatypes. Hackett has support for basic ADTs, including recursive datatypes (though not yet mutually recursive datatypes).

  • Typeclasses. The demo web server uses a ->Body typeclass to render server responses, and this module implements a ->Body instance for the custom Greeting datatype.

  • Macros. The defserver macro provides a concise, readable, type safe way to define a simple, RESTful web server. It defines two endpoints, a homepage and a greeting, and the latter parses a segment from the URL.

  • Static typechecking. Obviously. If you try and change the homepage endpoint to produce a number instead of a string, you will get a type error! Alternatively, try removing the ->Body instance and see what happens.

  • Infix operators. In Hackett, { curly braces } enter infix mode, which permits arbitrary infix operators. Most Lisps have variadic functions, so infix operators are not strictly necessary, but Hackett only supports curried, single-argument functions, so infix operators are some especially sweet sugar.

  • Pure, monadic I/O. The println and run-server functions both produce (IO Unit), and IO is a monad. do notation is provided as a macro, and it works with any type that implements the Monad typeclass.
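
Hackett's curried, single-argument functions (mentioned in the infix-operator point above) can be mimicked in most languages; a rough sketch in Python, not Hackett syntax:

```python
def add(x):
    """A curried two-argument addition: each call takes exactly one
    argument and returns a function awaiting the next, which is how
    every Hackett function behaves."""
    return lambda y: x + y

increment = add(1)    # partial application falls out for free
print(increment(41))  # 42
print(add(2)(3))      # 5
```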

All these features are already implemented, and they really work! Of course, you might look at this list and be a little confused: sure, there are macros, but all these other things are firmly Haskellisms. If you thought that, you’d be quite right! Hackett is much closer to Haskell than Racket, even though it is syntactically a Lisp. Keep this guiding principle in mind as you read this blog post or explore Hackett. Where Haskell and Racket conflict, Hackett usually prefers Haskell.

For a bit more information about what Hackett is and what it aims to be, check out my blog post from a few months ago from back when Hackett was called Rascal. I won’t reiterate everything I said there, but I do want to give a bit of a status update, explain what I’ve been working on, and hopefully give you some idea about where Hackett is going.

In September of 2016, I attended (sixth RacketCon), where I saw a pretty incredible and extremely exciting talk about implementing type systems as macros. Finally, I could realize my dream of having an elegant Lisp with a safe, reliable macro system and a powerful, expressive type system! Unfortunately, reality ensued, and I remembered I didn’t actually know any type theory.

Therefore, in October, I started to learn about type systems, and I began to read through Pierce’s Types and Programming Languages, then tried to learn the things I would need to understand Haskell’s type system. I learned about Hindley-Milner and basic typeclasses, and I tried to apply these things to the Type Systems as Macros approach. Throughout October, I hacked and I hacked, and by the end of the month, I stood back and admired my handiwork!

…it sort of worked?

The trouble was that I found myself stuck. I wasn’t sure how to proceed. My language had bugs, programs sometimes did things I didn’t understand, the typechecker was clearly unsound, and there didn’t seem to be an obvious path forward. Other things in my life became distracting or difficult, and I didn’t have the energy to work on it anymore, so I stopped. I put Hackett (then Rascal) on the shelf for a couple months, only to finally return to it in late December.

At the beginning of January, I decided it would be helpful to be public about what I was working on, so I wrote a blog post! Feedback was positive, overwhelmingly so, and while it was certainly encouraging, I suddenly felt nervous about expectations I had not realized I was setting. Could I really build this? Did I have the knowledge or the time? At that point, I didn’t really, so work stalled.

Fortunately, in early April, some things started to become clear. I took another look at Hackett, and I knew I needed to reimplement it from the ground up. I also knew that I needed a different technique, but this time, I knew a bit more about where to find it. I got some help from Sam Tobin-Hochstadt and put together an implementation of Pierce and Turner’s Local Type Inference. Unfortunately, it didn’t really provide the amount of type inference I was looking for, but fortunately, implementing it helped me figure out how to understand the rather more complicated (though very impressive) Complete and Easy Bidirectional Typechecking for Higher-Rank Polymorphism. After that, things just sort of started falling into place:

Less than three weeks later, and I have a programming language with everything from laziness and typeclasses to a tiny, proof-of-concept web server with editor support. The future of Hackett looks bright, and though there’s a lot of work left before I will be even remotely satisfied with it, I am excited and reassured that it already seems to be bearing some fruit.

So what’s left? Is Hackett ready for an initial release? Can you start writing programs in it today? Well, unfortunately, the answer is mostly no, at least if you want those programs to be at all reliable in a day or two. If everything looks so cheery, though, what’s left? What is Hackett still missing?

What Hackett still isn’t

I have a laundry list of features I want for Hackett. I want GADTs, indexed type families, newtype deriving, and a compiler that can target multiple backends. These things, however, are not essential. You can probably imagine writing useful software without any of them. Before I can try to tackle those, I first need to tackle some of the bits of the foundation that simply don’t exist yet (or have at least been badly neglected).

Fortunately, these things are not insurmountable, nor are they necessarily especially hard. They’re things like default class methods, static detection and prevention of orphan instances, exhaustiveness checking for pattern-matching, and a real kind system. That’s right—right now, Hackett’s type system is effectively dynamically typed, and even though you can write a higher-kinded type, there is no such thing as a “kind error”.

Other things are simply necessary quality of life improvements before Hackett can become truly usable. Type errors are currently rather atrocious, though they could certainly be worse. Additionally, typechecking currently just halts whenever it encounters a type error, and it makes no attempt to generate more than one type error at a time. Derivation of simple instances like Show and Eq is important, and it will also likely pave the way for a more general form of typeclass deriving (since it can most certainly be implemented via macros), so it’s uncharted territory that still needs to be explored.

Bits of plumbing are still exposed in places, whether it’s unexpected behavior when interoperating with Racket or errors sometimes reported in terms of internal forms. Local bindings are, if you can believe it, still entirely unimplemented, so let and letrec need to be written up. The standard library needs fleshing out, and certain bits of code need to be cleaned up and slotted into the right place.

Oh, and of course, the whole thing needs to be documented. That in and of itself is probably a pretty significant project, especially since there’s a good chance I’ll want to figure out how to best make use of Scribble for a language that’s a little bit different from Racket.

All in all, there’s a lot of work to be done! I am eager to make it happen, but I also work a full-time job, and I don’t have it in me to continue at the pace I’ve been working at for the past couple of weeks. Still, if you’re interested in the project, stay tuned and keep an eye on it—if all goes as planned, I hope to make it truly useful before too long.

It’s possible that this blog post does not seem like much; after all, it’s not terribly long. However, if you’re anything like me, there’s a good chance you are interested enough to have some questions! Obviously, I cannot anticipate all your questions and answer them here in advance, but I will try my best.

Can I try Hackett?

Yes! With the caveat that it’s alpha software in every sense of the word: undocumented, not especially user friendly, and completely unstable. However, if you do want to give it a try, it isn’t difficult: just install Racket, then run raco pkg install hackett. Open DrRacket and write #lang hackett at the top of the module, then start playing around.

Also, note that the demo web server used in the example at the top of this blog post is not included when you install the hackett package. If you want to try that out, you’ll have to run raco pkg install hackett-demo to install the demo package as well.

Are there any examples of Hackett code?

Unfortunately, not a lot right now, aside from the tiny examples in this blog post. However, if you are already familiar with Haskell, the syntax likely won’t be hard to pick up. Reading the Hackett source code is not especially recommended, given that it is filled with implementation details. However, if you are interested, reading the module where most of the prelude is defined isn’t so bad. You can find it on GitHub here, or you can open the hackett/private/prim/base module on a local installation.

How can I learn more / ask questions about Hackett?

Feel free to ping me and ask me questions! I may not always be able to get back to you immediately, but if you hang around, I will eventually send you a response. The best ways to contact me are via the #racket IRC channel on Freenode, the snek Slack community (which you can sign up for here), sending me a DM on Twitter, opening an issue on the GitHub repo, or even just sending me an email (though I’m usually a bit slower to respond to the latter).

How can I help?

Probably the easiest way to help out is to try Hackett for yourself and report any bugs or infelicities you run into. Of course, many issues right now are known, there’s just so much to do that I haven’t had the chance to clean everything up. For that reason, the most effective way to contribute is probably to pick an existing issue and try and implement it yourself, but I wouldn’t be surprised if most people found the existing implementation a little intimidating.

If you are interested in helping out, I’d be happy to give you some pointers and answer some questions, since it would be extremely nice to have some help. Please feel free to contact me using any of the methods mentioned in the previous section, and I’ll try and help you find something you could work on.

How does Hackett compare to X / why doesn’t Hackett support Y?

These tend to be complex questions, and I don’t always have comprehensive answers for them, especially since the language is evolving so quickly. Still, if you want to ask me about this, feel free to just send the question to me directly. In my experience, it’s usually better to have a conversation about this sort of thing rather than just answering in one big comparison, since there’s usually a fair amount of nuance.

When will Hackett be ready for me to use?

I don’t know.

Obviously, there is a lot left to implement, that is certainly true, but there’s more to it than that. If all goes well, I don’t see any reason why Hackett can’t be early beta quality by the end of this year, even if it doesn’t support all of the goodies necessary to achieve perfection (which, of course, it never really can).

However, there are other things to consider, too. The Racket package system is currently flawed in ways that make rapidly iterating on Hackett hard, since it is extremely difficult (if not impossible) to make backwards-incompatible changes without potentially breaking someone’s program (even if they don’t update anything about their dependencies)! This is a solvable problem, but it would take some work modifying various elements of the package system and build tools, so that might need to get done before I can recommend Hackett in good faith.

It would be unfair not to mention all the people that have made Hackett possible. I cannot list them all here, but I want to give special thanks to Stephen Chang, Joshua Dunfield, Robby Findler, Matthew Flatt, Phil Freeman, Ben Greenman, Alex Knauth, Neelakantan Krishnaswami, and Sam Tobin-Hochstadt. I’d also like to thank everyone involved in the Racket and Haskell projects as a whole, as well as everyone who has expressed interest and encouragement about what I’ve been working on.

As a final point, just for fun, I thought I’d keep track of all the albums I’ve been listening to while working on Hackett, just in the past few weeks. It is on theme with the name, after all. This list is not completely exhaustive, as I’m sure some slipped through the cracks, but you can thank the following artists for helping me power through a few of the hills in Hackett’s implementation:

  • The Beach Boys — Pet Sounds
  • Boards of Canada — Music Has The Right To Children, Geogaddi
  • Bruce Springsteen — Born to Run
  • King Crimson — In the Court of the Crimson King, Larks’ Tongues in Aspic, Starless and Bible Black, Red, Discipline
  • Genesis — Nursery Cryme, Foxtrot, Selling England by the Pound, The Lamb Lies Down on Broadway, A Trick of the Tail
  • Mahavishnu Orchestra — Birds of Fire
  • Metric — Fantasies, Synthetica, Pagans in Vegas
  • Muse — Origin of Symmetry, Absolution, The Resistance
  • Peter Gabriel — Peter Gabriel I, II, III, IV / Security, Us, Up
  • Pink Floyd — Wish You Were Here
  • Supertramp — Breakfast In America
  • The Protomen — The Protomen, Act II: The Father of Death
  • Talking Heads — Talking Heads: 77, More Songs About Buildings and Food, Fear of Music, Remain in Light
  • Yes — Fragile, Relayer, Going For The One

And of course, Voyage of the Acolyte, by Steve Hackett.


Blockchains are the new Linux, not the new Internet


Cryptocurrencies are booming beyond belief. Bitcoin is up sevenfold, to $2,500, in the last year. Three weeks ago the redoubtable Vinay Gupta, who led Ethereum’s initial release, published an essay entitled “What Does Ether At $100 Mean?” Since then it has doubled. Too many altcoins to name have skyrocketed in value along with the Big Two. ICOs are raking in money hand over fist over bicep. What the hell is going on?

(eta: in the whopping 48 hours since I first wrote that, those prices have tumbled considerably, but are still way, way up for the year.)

A certain seductive narrative has taken hold, is what is going on. This narrative, in its most extreme version, says that cryptocurrencies today are like the Internet in 1996: not just new technology but a radical new kind of technology, belittled or ignored by most, which has slowly and subtly grown in power and influence over the last several years, and is about to explode into worldwide relevance and importance with shocking speed and massive repercussions.

(Lest you think I’m overstating this, I got a PR pitch the other day which literally began: “Blockchain’s 1996 Internet moment is here,” as a preface to touting a $33 million ICO. Hey, what’s $33 million between friends? It’s now pretty much taken as given that we’re in a cryptocoin bubble.)

I understand the appeal of this narrative. I’m no blockchain skeptic. I’ve been writing about cryptocurrencies with fascination for six years now. I’ve been touting and lauding the power of blockchains, how they have the potential to make the Internet decentralized and permissionless again, and to give us all power over our own data, for years. I’m a true believer in permissionless money like Bitcoin. I called the initial launch of Ethereum “a historic day.”

But I can’t help but look at the state of cryptocurrencies today and wonder where the actual value is. I don’t mean financial value to speculators; I mean utility value to users. Because if nobody wants to actually use blockchain protocols and projects, those tokens which are supposed to reflect their value are ultimately … well … worthless.

Bitcoin, despite its ongoing internal strife, is very useful as permissionless global money, and has a legitimate shot at becoming a global reserve and settlement currency. Its anonymized descendants such as ZCash have added value to the initial Bitcoin proposition. (Similarly, Litecoin is now technically ahead of Bitcoin, thanks to the aforementioned ongoing strife.) Ethereum is very successful as a platform for developers.

But still, eight years after Bitcoin launched, Satoshi Nakamoto remains the only creator to have built a blockchain that an appreciable number of ordinary people actually want to use. (Ethereum is awesome, and Vitalik Buterin, like Gupta, is an honest-to-God visionary, but it remains a tool / solution / platform for developers.) No other blockchain-based software initiative seems to be at any real risk of hockey-sticking into general recognition, much less general usage.

With all due respect to Fred Wilson, another true believer — and, to be clear, an enormous amount of respect is due — it says a lot that, in the midst of this massive boom, he’s citing “Rare Pepe Cards,” of all things, as a prime example of an interesting modern blockchain app. I mean, if that’s the state of the art…

Maybe I’m wrong; maybe Rare Pepe will be the next Pokémon Go. But on the other hand, maybe the ratio of speculation to actual value in the blockchain space has never been higher, which is saying a lot.

Some people argue that the technology is so amazing, so revolutionary, that if enough money is invested, the killer apps and protocols will come. That could hardly be more backwards. I’m not opposed to token sales, but they should follow “If you build something good enough, investors will flock to you,” not “If enough investors flock to us, we will build something good enough.”

A solid team working on an interesting project which hasn’t hit product-market fit should be able to raise a few million dollars — or, if you prefer, a couple of thousand bitcoin — and then, once their success is proven, they might sell another tranche of now-more-valuable tokens. But projects with hardly any users, and barely any tech, raising tens of millions? That smacks of a bubble made of snake oil … one all too likely to attract the heavy and unforgiving hand of the SEC.

That seductive narrative though! The Internet in 1996! I know. But hear me out. Maybe the belief that blockchains today are like the Internet in 1996 is completely wrong. Of course all analogies are flawed, but they’re useful, they’re how we think — and maybe there is another, more accurate, and far more telling, analogy here.

I propose a counter-narrative. I put it to you that blockchains today aren’t like the Internet in 1996; they’re more like Linux in 1996. That is in no way a dig — but, if true, it’s something of a death knell for those who hope to profit from mainstream usage of blockchain apps and protocols.

Decentralized blockchain solutions are vastly more democratic, and more technically compelling, than the hermetically-sealed, walled-garden, Stack-ruled Internet of today. Similarly, open-source Linux was vastly more democratic, and more technically compelling, than the Microsoft and Apple OSes which ruled computing at the time. But nobody used it except a tiny coterie of hackers. It was too clunky; too complicated; too counterintuitive; required jumping through too many hoops — and Linux’s dirty secret was that the mainstream solutions were, in fact, actually fine, for most people.

Sound familiar? Today there’s a lot of work going into decentralized distributed storage keyed on blockchain indexes; Storj, Sia, Blockstack, et al. This is amazing, groundbreaking work… but why would an ordinary person, one already comfortable with Box or Dropbox, switch over to Storj or Blockstack? The centralized solution works just fine for them, and, because it’s centralized, they know who to call if something goes wrong. Blockstack in particular is more than “just” storage … but what compelling pain point is it solving for the average user?

The similarities to Linux are striking. Linux was both much cheaper and vastly more powerful than the alternatives available at the time. It seemed incredibly, unbelievably disruptive. Neal Stephenson famously analogized 90s operating systems to cars. Windows was a rattling lemon of a station wagon; MacOS was a hermetically sealed Volkswagen Beetle; and then, weirdly … beyond weirdly … there was

Linux, which is right next door, and which is not a business at all. It’s a bunch of RVs, yurts, tepees, and geodesic domes set up in a field and organized by consensus. The people who live there are making tanks. These are not old-fashioned, cast-iron Soviet tanks; these are more like the M1 tanks of the U.S. Army, made of space-age materials and jammed with sophisticated technology from one end to the other. But they are better than Army tanks. They’ve been modified in such a way that they never, ever break down, are light and maneuverable enough to use on ordinary streets, and use no more fuel than a subcompact car. These tanks are being cranked out, on the spot, at a terrific pace, and a vast number of them are lined up along the edge of the road with keys in the ignition. Anyone who wants can simply climb into one and drive it away for free.

Customers come to this crossroads in throngs, day and night. Ninety percent of them go straight to the biggest dealership and buy station wagons … They do not even look at the other dealerships.

I put it to you that just as yesterday’s ordinary consumers wouldn’t use Linux, today’s won’t use Bitcoin and other blockchain apps, even if Bitcoin and the other apps built atop blockchains are technically and politically amazing (which some are). I put it to you that “the year of widespread consumer use of [Bitcoin | Ripple | Stellar | ZCash | decentralized ether apps | etc]” is perhaps analogous to “the year of [Ubuntu | Debian | Slackware | Red Hat | etc] on the desktop.”

Please note: this is not a dismissive analogy, or one which in any way understates the potential eventual importance of the technology! There are two billion active Android devices out there, and every single one runs the Linux kernel. When they communicate with servers, aka “the cloud,” they communicate with vast, warehouse-sized data centers … teeming with innumerable Linux boxes. Linux was immensely important and influential. Most of modern computing is arguably Linux-to-Linux.

It’s very easy to imagine a similar future for blockchains and cryptocurrencies. To quote my friend Shannon: “It [blockchain tech] definitely seems like it has a Linux-like adoption arc ahead of it: There’s going to be a bunch of doomed attempts to make it a commercially-viable consumer product while it gains dominance in vital behind-the-scenes applications.”

But if your 1996 investment thesis had been that ordinary people would adopt Linux en masse over the next decade — which would not have seemed at all crazy — then you would have been in for a giant world of hurt. Linux did not become important because ordinary people used it. Instead it became commodity infrastructure that powered the next wave of the Internet.

It’s easy to envision how and why an interwoven mesh of dozens of decentralized blockchains could slowly, over a period of years and years, become a similar category of crucial infrastructure: as a reserve/settlement currency, as replacements for huge swathes of today’s financial industry, as namespaces (such as domain names), as behind-the-scenes implementations of distributed storage systems, etc. … while ordinary people remain essentially blissfully unaware of their existence. It’s even easy to imagine them being commoditized. Does Ethereum gas cost too much? No problem; just switch your distributed system over to another, cheaper, blockchain.

So don’t tell me this is like the Internet in 1996, not without compelling evidence. Instead, wake me up when cryptocurrency prices begin to track the demonstrated underlying value of the apps and protocols built on their blockchains. Because in the interim, in the absence of that value, I’m sorry to say that we seem to be talking about decentralized digital tulips.


Disclosure, since it seems requisite: I mostly avoid any financial interest, implicit or explicit, long or short, in any cryptocurrency, so that I can write about them sans bias. I do own precisely one bitcoin, though, which I purchased a couple of years ago because I felt silly not owning any while I was advising a (since defunct) Bitcoin-based company.

Featured Image: Jorge González/Flickr, under a CC BY-SA 2.0 license

Growing a Compiler (2009)


by Bill McKeeman and Lu He

MathWorks and Dartmouth, May 2009

Contents

Abstract

Self-compiling compilers are common. The question is: How far can one go, bootstrapping a (very) small compiler-compiler into more capable compilers?

Context-free grammars are extended to accommodate output. A grammar executing machine (GEM) is introduced which accepts an input text and a grammar, and outputs another text. Both the input text and the output text can also be grammars, permitting the production of ever more powerful grammars. GEM itself can be extended to build in the capabilities of the previous grammars. The rules of the game require that changing GEM does not add to its original capability -- it merely makes the implementation more robust or faster.

The grammars and the machine have some simple symmetries that lead to actions such as backtracking and decompiling. It is also possible to directly execute bit-strings on Intel x86 hardware.
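The abstract's central idea -- a grammar whose rules carry output as well as input -- can be sketched concretely. The following is a minimal illustration under assumed conventions, not the paper's actual GEM notation: each phrase name maps to alternatives, and each alternative interleaves input characters to match, output characters to emit, and phrase names to expand, with backtracking across alternatives.

```python
# A minimal sketch of an executable grammar with output (hypothetical
# encoding, not the paper's GEM notation). Rule items are tagged tuples:
#   ('in', c)  match input character c
#   ('out', c) emit output character c
#   ('nt', n)  expand phrase name n
def run(grammar, start, text):
    """Try to derive `text` from `start`; return the emitted output, or None."""
    def derive(name, pos):
        for alt in grammar[name]:          # try alternatives in order
            p, out, ok = pos, [], True
            for kind, sym in alt:
                if kind == 'in':           # consume one input character
                    if p < len(text) and text[p] == sym:
                        p += 1
                    else:
                        ok = False
                        break
                elif kind == 'out':        # emit one output character
                    out.append(sym)
                else:                      # recurse on a phrase name
                    sub = derive(sym, p)
                    if sub is None:
                        ok = False
                        break
                    p, piece = sub
                    out.append(piece)
            if ok:
                return p, ''.join(out)
        return None                        # all alternatives failed: backtrack

    result = derive(start, 0)
    if result and result[0] == len(text):  # must consume the whole input
        return result[1]
    return None

# Infix-to-postfix for single-letter operands: "a+b" -> "ab+"
# (right-recursive here for simplicity; the paper's chapter 3 treats
# the left-associative case)
g = {
    'expr': [
        [('nt', 'term'), ('in', '+'), ('nt', 'expr'), ('out', '+')],
        [('nt', 'term')],
    ],
    'term': [[('in', c), ('out', c)] for c in 'ab'],
}
print(run(g, 'expr', 'a+b'))   # ab+
```

Feeding the machine a grammar as its input text, as the abstract describes, is what turns this toy into a compiler-compiler: the output of one run becomes the grammar for the next.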

Chapters

  1. Base GEM
    • statement of the problem
    • executable grammars
    • simple examples
  2. Robust GEM
    • pre-entered character classes
    • using nowhite, pretty, invert
  3. GEM with builtin nowhite and chars
    • using multi-character input and output symbols
    • left-associative arithmetic expressions
    • X86 floating point stack
  4. GEM with builtin multichar symbols
    • using Kleene * and + in executable grammars
  5. Running Intel X86 code
    • X86 Assembler
    • calculator
    • atoi
  6. Plenty Phrase Names

Notes

The origin of the idea is an undergraduate thesis (UC Santa Cruz, 1978) written by Doug Michels under the supervision of Bill McKeeman.

The title is inspired by Guy Steele's 1998 OOPSLA talk, Growing a Language.

Thanks to Steve Johnson for critical advice in the preparation of this presentation.

The default font sizes in Firefox are uncomfortably large for this paper. Try [view][edit][zoom][text only][zoom out][zoom out].

References

Signatures

  • Bill McKeeman, MathWorks Fellow
  • Lu He, Computer Science Department, Dartmouth

An earlier version was presented to the Computer Science Colloquium, Stanford, March 4, 2009

The US Library of Congress has put 25M items free online


Knowledge is power, the old saying goes, but it isn't much use if it's hidden away – so we're excited to learn that the US Library of Congress is making 25 million of its records available for free, for anyone to access online.

The bibliographic data sets, like digital library cards, cover music, books, maps, manuscripts, and more, and their publication online marks the biggest release of digital records in the Library's history.

"The Library of Congress is our nation's monument to knowledge and we need to make sure the doors are open wide for everyone, not just physically but digitally too," says Librarian of Congress Carla Hayden.

"Unlocking the rich data in the Library's online catalogue is a great step forward. I'm excited to see how people will put this information to use."

Researchers and analysts will get most use out of the new records, but there's plenty of potential for them to be used in apps and databases as well. The Library hosted a Hack-to-Learn workshop looking at how the data could be used.

The new mine of information covers records from 1968, and the earliest days of electronic cataloguing, right up to 2014.

"The Library of Congress catalogue is literally the gold standard for bibliographic data and we believe this treasure trove of information can be used for much more than its original purpose," says the Library's Beacher Wiggins.

Thanks to the spread of a little invention known as the internet, we're seeing more and more libraries, organisations, and agencies put their valuable data online for all to use.

Last year NASA decided to make all of the scientific research it funds available on the web for free, hoping to spark further studies and "magnify the impact" of its papers.

NASA also allows developers to download and build upon its software applications, without paying any royalty or copyright fees, so whether you're wanting to build a rocket or analyse satellite data, you can find a tool to help.

Want to know more about Darwin's iconic On the Origin of Species? Point your browser at the American Museum of Natural History website and you can digitally leaf through 16,000 high-resolution images free of charge.

Meanwhile, the Unpaywall plug-in is designed to get past scientific journal paywalls legally and easily, so inquiring minds can learn more about our world without having to stump up for a subscription.

There's lots out there. If you're eager to get your hands on as much free educational material as possible, here are 8 awesome resources you can totally get behind.

That the US Library of Congress is adding to the trend is definitely welcome news – the library is the largest in the world, having been established at the start of the 19th century as a resource for Congress.

The Library's collections include more than 38 million books and more than 70 million manuscripts, and now some of that vast pile of reference data and other resources can be accessed by anyone for free.

"We hope this data will be put to work by social scientists, data analysts, developers, statisticians and everyone else doing innovative work with large data sets to enhance learning and the formation of new knowledge," says Wiggins.

A 16th-century engineer whose work almost defeated the Ottomans

An image of the Siege of Rhodes, detail from the Süleymannâme, a chronicle of the Sultan's life.

Suleiman the Magnificent earned his epithet, at least militarily. The Sultan of the Ottoman Empire for 46 years, he spent much of his time on campaign. Hungary and Persia felt the brunt of his martial genius, but perhaps his most famous victory was the Siege of Rhodes in 1522. It was a grudge match.

The island of Rhodes was a blemish on the Ottoman Empire's record. It was held by the Order of St. John (also known as the Knights Hospitaller), and it withstood the Ottoman troops' siege in 1480. The Order of St. John had first been established to care for sick pilgrims in the Holy Land, but had been beaten back and militarized as Christians lost control over the region. At Rhodes they stood firm, but both sides knew that more conflict was inevitable.

As soon as the enemy boats had disappeared over the horizon in 1480, the Order began raising and thickening the walls around their stronghold. By 1522, their fortifications stood against the barrage laid down by the naval blockade of Suleiman's military. This did not discourage the Sultan. He knew there was another way in: underground.

Engineers had long been an integral part of warfare. They helped cities create better defenses, with high, strong walls punctuated by slits for firing arrows—or crenelations for releasing boiling oil. For their part, army engineers crafted siege engines to hurl things against or over walls. Tunneling under walls and blowing them up from beneath was also a well-established siege technique, dating back to several centuries BC.

The Siege of Rhodes, however, was one of the best-documented cases of a war won and lost by engineers, not because of the Sultan's offense, but because of Rhodes' defense. To maintain the integrity of their walls, the defenders of Rhodes brought an early genius of military engineering, Gabriele Tadini.

Tunnel warfare and surveillance

Military mining was equally hard on the body and the mind. Tunnelers worked in darkness, digging corridors that could collapse on them and knowing that, somewhere in the earth, the enemy was doing the same. If one group of miners detected another, they could dig toward the enemy tunnel and blow it up.

An image of Tadini.

Tadini, who had trained first as a doctor and then as an engineer in the Venetian army, came to Rhodes specifically to fight the Ottoman empire. The first thing he did was eliminate the haphazard nature of military tunneling. He put the citizenry to work creating a well-planned underground network in and around the walls, making a kind of grid of established tunnels.

He peopled these tunnels with specially-trained monitors, generally children or people too old to fight. Each of the monitors kept their ears close to what must have looked to them like a tambourine. This invention of Tadini's was made from a parchment membrane stretched across a drum-like frame. Tiny bells hung from the edges of the drum. The slightest vibration would set the bells jingling and allow the monitors to detect and slowly close in on an Ottoman tunnel in progress.

After a tunnel was detected, Tadini supervised a careful digging effort toward that tunnel. This was a nerve-shredding process—moving toward an opposing force, never knowing when the opposition would blow their charge. Even if the other side was nowhere near ready to lay a mine, a tunneler faced other dangers. Getting too far from the surface meant suffocation. One of the reasons Tadini was prized as a tactician was his ability to keep the tunnels well-ventilated.

Once the miners got close enough to the opposing tunnel, it was time to blow the other tunnel to smithereens. This had its own problem. Too big a blast, and the defenders would fell their own walls. Here, another Tadini innovation kept Rhodes intact. Most tunnels and vents were straight, which saved effort on the part of the miners. But that meant, once a mine was detonated, the blast came roaring outwards. Tadini directed the construction of spiraling vents, which dampened the force of each explosion and limited the damage to the earth around Rhodes.

And so Rhodes stood, behind its walls, on top of an innovative network of monitoring tunnels, vibrating with the workings of a corps of engineers seeking out and destroying the mines of the invaders.

The end of Tadini

Unfortunately for the Order of St. John, Tadini's tunnels were not a permanent solution. Even the most advanced monitoring network eventually misses a signal. As the months went on and the besieged force lost supplies and people, their only hope was for Suleiman to lift the siege. This, he did not do.

Suleiman also had a bit of luck when Tadini, who regularly scaled the walls to calculate the best way to bombard the enemy, was shot in the eye and gravely wounded. Eventually, the Sultan's forces managed to tumble the walls in two sections of the city, which led to hand-to-hand fighting devastating to both sides.

Still, the defenders hung in there until Tadini confirmed to Philippe Villiers, the Grand Master of the Order, that the city was no longer defensible. Villiers had ignored the many warriors who told him the same, but listened to the engineer.

Suleiman was respectful and generous in victory. The citizens of Rhodes were to be exempt from both taxation and conscription for the next five years. Tadini was allowed to leave. He went to the colonies of Genoa, where he again fought invading Ottoman forces and again lost.

The Order of St. John was allowed to leave in peace and build a new fortress elsewhere. They did so on Malta, where their walls could be built on stone and therefore could not be undermined. Toward the end of his life, Suleiman sent a force to conquer Malta. This time the siege failed, in part because he was not there to keep order or impose his implacable will.

Despite Tadini's losses, his methods have continued to influence combat into the present. Mining and countermining, with all their attendant surveillance and engineering, are still staples of warfare.

Hub: Wraps Git with extra features that make working with GitHub easier


README.md

hub is a command line tool that wraps git in order to extend it with extra features and commands that make working with GitHub easier.

$ hub clone rtomayko/tilt
# expands to:
$ git clone git://github.com/rtomayko/tilt.git

hub is best aliased as git, so you can type $ git <command> in the shell and get all the usual hub features. See "Aliasing" below.

Installation

Dependencies:

Homebrew

hub can be installed through Homebrew:

$ brew install hub
$ hub version
git version 1.7.6
hub version 2.2.3

Chocolatey

hub can be installed through Chocolatey on Windows.

> choco install hub
> hub version
git version 2.7.1.windows.2
hub version 2.2.3

Standalone

hub can be easily installed as an executable. Download the latest compiled binaries and put the binary anywhere in your executable path.

Source

To install hub from source:

$ git clone https://github.com/github/hub.git
$ cd hub
$ make install prefix=/usr/local

Prerequisites for compilation are:

  • make
  • Go 1.8+
  • Ruby 1.9+ with Bundler - for generating man pages

If you don't have make or Ruby, or want to skip man pages (for example, if you are on Windows), you can build only the hub binary:

$ go build -o bin/hub

You can now move bin/hub to somewhere in your PATH.

Finally, if you've done Go development before and your $GOPATH/bin directory is already in your PATH, this is an alternative installation method that fetches hub into your GOPATH and builds it automatically:

$ go get github.com/github/hub

Aliasing

Using hub feels best when it's aliased as git. This is not dangerous; your normal git commands will all work. hub merely adds some sugar.

hub alias displays instructions for the current shell. With the -s flag, it outputs a script suitable for eval.

You should place this command in your .bash_profile or other startup script:

eval "$(hub alias -s)"

PowerShell

If you're using PowerShell, you can set an alias for hub by placing the following in your PowerShell profile (usually ~/Documents/WindowsPowerShell/Microsoft.PowerShell_profile.ps1):

Set-Alias git hub

A simple way to do this is to run the following from the PowerShell prompt:

Add-Content $PROFILE "`nSet-Alias git hub"

Note: You'll need to restart your PowerShell console in order for the changes to be picked up.

If your PowerShell profile doesn't exist, you can create it by running the following:

New-Item -Type file -Force $PROFILE

Shell tab-completion

The hub repository contains tab-completion scripts for bash and zsh. These scripts complement the existing completion scripts that ship with git.

Installation instructions

Meta
