
Why read old philosophy?


I’m going to try to explain a mystery that puzzled me for years. This answer finally dawned on me in the middle of one of those occasional conversations in which non-perplexed friends patiently try to explain the issue to me. So I am not sure if mine is a novel explanation, or merely the explanation that my friends were trying to tell me, in which case my contribution is explaining it in a way that is at all comprehensible to a person like me. If it is novel, apparently some other people disagree with it and have an almost entirely satisfactory alternative, which has the one downside that it is impossible to explain to me.

The puzzle is this:

Why do people read old philosophers to learn about philosophy?

We read old physicists if we want to do original research on the history of physics. Or maybe if we are studying an aspect of physics so obscure that nobody has covered it in hundreds of years. If we want to learn physics we read a physics textbook. As far as I know, the story is similar in math, chemistry, engineering, economics, and business (though maybe some other subjects that I know less about are more like philosophy).

Yet go to philosophy grad school, and you will read original papers and books by historical philosophers. Research projects explore in great detail what it is that Aristotle actually said, thought, and meant. Scholars will learn the languages that the relevant texts were written in, because none of the translations can do the texts the necessary justice. The courses and books will be named after people like ‘Hume’ as often as they are named after topics of inquiry like ‘Causality’ and larger subject areas will be organized by the spatiotemporal location of the philosopher, rather than by the subject matter: Ancient Philosophy, Early Modern Philosophy, Chinese Philosophy, Continental Philosophy.

The physics situation makes a lot more sense to me. Hypothetically, who would I rather read an explanation of ‘The Alice Effect’ by? —Alice, the effect’s seventeenth century discoverer, or Bob, a modern day physics professor authoring a textbook?

Some salient considerations, neutrality not guaranteed:

  • Alice’s understanding of the Alice effect is probably the most confused understanding of it in all of history, being the first ‘understanding of the Alice effect’ to set itself apart from ‘confusion and ignorance about the Alice effect’.
  • In the billions of lifetimes that have passed since Alice’s time, the world has probably thought substantially more about The Alice Effect than Alice managed to in her lifetime, at least if it is important at all.
  • Alice’s very first account of the effect probably contained imperfections. Bob can write about the theory as it stood after years of adjustment.
  • Even if Alice’s account was perfectly correct, it was probably not perfectly well explained, unless she happens to have been a great explainer as well as a great physicist.
  • Physics has made many discoveries since Alice’s time, such as Claire forces, Evan motion and Roger fields. It might be easier to understand all of this by starting with the Roger fields, and explaining the Alice effect as a consequence. However literature from the likes of Alice is constrained to cover topics chronologically by date of discovery.
  • Bob speaks a similar version of English to me
  • Bob can be selected for having particular skill at writing and explanation, whereas Alice must be selected for having the scientific prowess to make the discovery.
  • Bob is actually trying to explain the thing to a 21st Century reader, while Alice is writing to pique the interest of some seventeenth century noblemen who lack modern intellectual machinery and are interested in issues like whether this is compatible with religion. An accurate impression of a 21st Century reader would probably cause Alice to fall over.

I think Bob is a solid choice.

How might philosophy be different?

Some pieces of explanations I heard, or made up while hearing other explanations:

  • You have to be smarter than the original philosopher to summarize their work well, so there are few good summaries
  • The translations are all terrible for conveying the important parts
  • Philosophy is not trying to communicate normal content that can be in explicit statements, of the kind you might be able to explain well and check the understanding of and such.
  • Philosophy is about having certain experiences which pertain to the relevant philosophy, much like reading a poem is different to reading a summary of its content.

I don’t find any of these compelling. If I understood some material well enough to make use of it, I would generally expect to be able to summarize it or describe it in a different language that I knew. So if nobody is capable of summarizing or translating the material, it is hard to believe that I am getting much out of it by reading it. ‘Some content can’t be described’ isn’t much of an explanation, and even if it was, how did the philosophers describe it? And if you found it, but then couldn’t describe it, what would be the point? And if philosophy is about having certain experiences, like poetry, then it would seem to be a kind of entertainment rather than a project to gain knowledge, which is at least not what most philosophers would tell you. So none of these explanations for why learning philosophy involves so much attention to very old philosophers seemed that plausible.

Ok, so that’s the mystery.

Here’s my explanation. Reading Aristotle describe his thoughts about the world is like watching Aristotle ride a skateboard if Aristotle were a pro skater. You are not getting value from learning about the streets he is gliding over (or the natural world that he is describing) and you should not be memorizing the set of jumps he chooses (or his particular conceptualizations of the world). You are meant to be learning about how to carry out the activity that he is carrying out: how to be Aristotle. How to do what Aristotle would do, even in a new environment.

An old work of philosophy does not describe the thing you are meant to be learning about. It was created by the thing you are meant to be learning about, much like watching a video from skater-Aristotle’s GoPro. And the value proposition is that with this high resolution Aristotle’s-eye-view, you can infer the motions.

There is not a short description  of the insights you should learn (or at least not one available), because the insights you are hopefully learning are not the insights that Aristotle is trying to share. Aristotle might have highly summarizable insights, but what you want to know is how to be Aristotle, and nobody has necessarily developed an abstract model of how to be Aristotle from which summary statements can be extracted.

So it is not that the useful content being transmitted is of a special kind that is immune to being communicated as statements. It is just not actually known in statements. Nobody knows which aspects of being Aristotle are important, and nobody has successfully made a simplified summary. What we ‘know’ is this one very detailed example. Much like if I showed you a bee because I thought I couldn’t communicate it in words—it would not be because bees are mysteriously indescribable, it would be that I haven’t developed the understanding required to describe what is important about it, so I’m just showing you the whole bee.

On this theory, if someone doesn’t realize what is going on and tries to summarize Aristotle’s writings in the way that you would usually summarize the content of a passage, they entirely lose what was valuable about it. Much as you would if you summarized a video of a skater in motion into a description of the environment that they had interacted with. I hypothesize that this is roughly what happens, and is why it feels like summaries can’t capture what is important, and probably why translations always seem bad. Whenever a person tries to do a translation, they faithfully communicate the content of the thoughts at the expense of faithfully communicating the thinking procedure.

For instance, suppose I have a sentence like this:

We have enough pieces of evidence to say that friendly banter is for counter-signaling.

If not quite the same words were available in a different language, it might get translated to:

We have seen enough evidence to know that friendly banter is for counter-signaling.

Which tells us something very similar about whether friendly banter is for counter-signaling.

But something subtle is lost about the process: in the initial statement, the author is suggesting that they are relying on the accretion of many separate pieces of evidence that may not have been independently compelling, whereas in the latter that is not clear. Over a long text, sentences like the former might give the reader an implicit understanding of how disparate and independently uncompelling evidence might be combined in the intuition of the author, without the issue ever being explicitly discussed. In the latter, this implication is entirely lost.

So I think this explains the sense that adequate summarization is impossible and translation is extremely difficult. At least, if we assume that people don’t quite know what is really going on.

As an aside, I explained my theory to Ben Hoffman, and also asked him what on earth Plato was trying to do since when I tried to read him he made some points about fashion and sports that seemed worthy of a blog post, but maybe not of historical significance. Ben had a neat answer. He said Plato is basically doing the kind of summarization that a person who knew what was going on in my theory would do. He listened to Socrates a lot and thought that Socrates had interesting methods of thought. Then instead of summarizing Socrates’ points, he wrote fictionalized accounts of conversations with Socrates that condense and highlight the important elements of thinking and talking like Socrates.

This doesn’t explain why philosophy is different to physics (and basically all of the other subjects). Why would you want to be like Socrates, and not like Newton? Especially since Newton had more to show for his thoughts than an account of what his thoughts were like. I suspect the difference is that because physicists invent explicit machinery that can be easily taught, when you learn physics you spend your time mastering these tools. And perhaps in the process, you come to think in a way that fits well with these tools. Whereas in philosophy there is much less in the way of explicit methods to learn, so the most natural thing to learn is how to do whatever mental processes produce good philosophy. And since there is not a consensus on what they are like in the abstract, emulating existing good philosophers is a plausible way to proceed.

I was in the CMU philosophy department, which focuses on more formal methods that others might not class as philosophy—logic, algorithms for determining causality, game theory—and indeed in logic class we learned a lot of logical lemmas and did a lot of proofs and did not learn much about Frege or Gödel, though we did learn a bit about their history and thought at other points in the program.

(This story would suggest that in physics students are maybe missing out on learning the styles of thought that produce progress in physics. My guess is that instead they learn them in grad school when they are doing research themselves, by emulating their supervisors, and that the helpfulness of this might partially explain why Nobel prizewinner advisors beget Nobel prizewinner students.)

The story I hear about philosophy—and I actually don’t know how much it is true—is that as bits of philosophy come to have any methodological tools other than ‘think about it’, they break off and become their own sciences. So this would explain philosophy’s lone status in studying old thinkers rather than impersonal methods—philosophy is the lone ur-discipline, with no impersonal methods to offer, only thinking.

This suggests a research project: try summarizing what Aristotle is doing rather than Aristotle’s views. Then write a nice short textbook about it.


Desktop Operating System Market Share

$
0
0
Browser    Operating System Group    Search Engine    Device Type    

The Piston image library is now pure Rust


Yesterday, oyvindln made a PR to image-png to replace flate2 with deflate, a DEFLATE and zlib encoder written in safe Rust. The PR was merged, and I am currently echoing the new version through the Piston ecosystem.

This means that the image library is now pure Rust! C is gone!

Just think of it: An image library supporting several popular image formats, written from scratch in Rust, over a period of 3 years!

To celebrate this moment, here is an overview of the people who made this possible (listing major components):

Notice! When the library was moved, some people were left out, but hopefully they are in the list of authors. There are also a lot of people who contributed indirectly by testing and giving feedback. Thanks to you all!

A special credit goes to these 2 fine people:

  • ccgn, who started the project in 2014
  • nwin, who has been the top maintainer since then

During the early start of the Piston project, in a storm of rustc breaking changes in 2014, these two stood together as pillars, keeping the project afloat on top of a build script hacked together in Make and Bash, before Cargo came and saved us all from drowning.

OK, perhaps not that dramatic, but those breaking changes were intense.

Btw, we welcome new contributors!

Dd is not a disk writing tool (2015)


If you’ve ever used dd, you’ve probably used it to read or write disk images:

# Write myfile.iso to a USB drive
dd if=myfile.iso of=/dev/sdb bs=1M

Usage of dd in this context is so pervasive that it’s being hailed as the magic gatekeeper of raw devices. Want to read from a raw device? Use dd. Want to write to a raw device? Use dd.

This belief can make simple tasks complicated. How do you combine dd with gzip? How do you use pv if the source is raw device? How do you dd over ssh?

The fact of the matter is, dd is not a disk writing tool. Neither “d” stands for “disk”, “drive” or “device”. It does not support “low level” reading or writing. It has no special dominion over any kind of device whatsoever.

dd just reads and writes files.

On UNIX, the adage goes, everything is a file. This includes raw disks. Since raw disks are files, and dd can copy files, dd can be used to copy raw disks.

But do you know what else can read and write files? Everything:

# Write myfile.iso to a USB drive
cp myfile.iso /dev/sdb

# Rip a cdrom to a .iso file
cat /dev/cdrom > myfile.iso

# Create a gzipped image
gzip -9 < /dev/sdb > /tmp/myimage.gz
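
The earlier questions answer themselves the same way, since pv and ssh also just read and write files (host and device names here are only examples):

# Watch progress while ripping a cdrom
pv /dev/cdrom > myfile.iso

# Image a drive on a remote machine over ssh, compressing on the fly
ssh root@example.com 'cat /dev/sdb' | gzip -9 > sdb.img.gz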

However, this does not mean that dd is useless! The reason why people started using it in the first place is that it does exactly what it’s told, no more and no less.

If an alias specifies -a, cp might try to create a new block device rather than a copy of the file data. If gzip is used without redirection, it may try to be helpful and skip the file for not being a regular file. Neither of them will write out a reassuring status during or after a copy.

dd, meanwhile, has one job*: copy data from one place to another. It doesn’t care about files, safeguards or user convenience. It will not try to second guess your intent, based on trailing slashes or types of files. When this is no longer a convenience, like when combining it with other tools that already read and write files, one should not feel guilty for leaving dd out entirely.

This is not to say I think dd is overrated! Au contraire! It’s one of my favorite Unix tools!

dd is the swiss army knife of the open, read, write and seek syscalls. It’s unique in its ability to issue seeks and reads of specific lengths, which enables a whole world of shell scripts that have no business being shell scripts. Want to simulate a lseek+execve? Use dd! Want to open a file with O_SYNC? Use dd! Want to read groups of three byte pixels from a PPM file? Use dd!
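
For instance (file and device names here are made up):

# Read 16 bytes starting at byte offset 100 of a file
dd if=somefile.bin bs=1 skip=100 count=16 2>/dev/null

# Write an image with O_SYNC, so every block is flushed to the device as it is written
dd if=myfile.iso of=/dev/sdb bs=1M oflag=sync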

It’s a flexible, unique and useful tool, and I love it. My only issue is that, far too often, this great tool is being relegated to and inappropriately hailed for its most generic and least interesting capability: simply copying a file from start to finish.


* dd actually has two jobs: Convert and Copy. Legend has it that the intended name, “cc”, was taken by the C compiler, so the letters were shifted by one to give “dd”. This is also why we ended up with a Window system called X.


Show HN: How to Setup an OpenVPN Server on Digital Ocean


README.md

Steps I take when setting up a VPN server on Digital Ocean

Create SSH keys on client computer

Check for existing SSH keys
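
One way to check (the exact path depends on your user and setup):

ls -al ~/.ssh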

Generate new SSH key

ssh-keygen -t rsa -b 4096 -C your_email@example.com

The public key is now located in /home/demo/.ssh/id_rsa.pub and the private key in /home/demo/.ssh/id_rsa. While creating the new droplet, add the public key.

Login after creating droplet

Login as root

ssh root@server_ip_address

Upgrade system

sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade

Create new user

Give root privileges
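
For example, assuming the new user is called demo, the name used in the rest of these notes:

adduser demo
usermod -aG sudo demo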

Add public key authentication for new user using client computer. Call new public key id_rsa_demo

ssh-keygen -t rsa -b 4096 -C your_email@example.com -f ~/.ssh/id_rsa_demo

Copy contents of public key by CTRL-C or (cat ~/.ssh/id_rsa_demo.pub)

Manually install public key on server

su - demo
mkdir .ssh
chmod 700 .ssh

Paste in public key while in nano

sudo nano .ssh/authorized_keys

chmod 600 .ssh/authorized_keys

Exit returns to root

Login as new user
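
For example, with the demo user created above:

ssh demo@server_ip_address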

Disable root login and change SSH port

It is possible to change the SSH port to anything you like as long as it doesn't conflict with other active ports. Port 22 is used below, but any port can be used. Allow the new port in the ufw rules below and restart ufw before restarting ssh.

sudo nano /etc/ssh/sshd_config

Port 22
PermitRootLogin without-password
reload ssh
sudo restart ssh

Enable UFW

ufw limit 22
ufw allow 1194/udp
ufw allow 500/udp
ufw allow 4500/udp
ufw enable

Change from DROP to ACCEPT

sudo nano /etc/default/ufw

DEFAULT_FORWARD_POLICY="ACCEPT"

Add these lines to the before.rules file

sudo nano /etc/ufw/before.rules

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

UFW rules should look similar to this

#Status: active
#Logging: on (low)
#Default: deny (incoming), allow (outgoing), allow (routed)
#New profiles: skip
#To                         Action      From
#--                         ------      ----
#22                         LIMIT IN    Anywhere
#1194/udp                   ALLOW IN    Anywhere
#500/udp                    ALLOW IN    Anywhere
#4500/udp                   ALLOW IN    Anywhere
#1194/udp (v6)              ALLOW IN    Anywhere (v6)
#22 (v6)                    LIMIT IN    Anywhere (v6)
#500/udp (v6)               ALLOW IN    Anywhere (v6)
#4500/udp (v6)              ALLOW IN    Anywhere (v6)

Install OpenVPN

#https://github.com/Nyr/openvpn-install
wget git.io/vpn --no-check-certificate -O openvpn-install.sh && bash openvpn-install.sh

Copy unified .ovpn to client computer

scp -P your_ssh_port root@server_ip_address:client.ovpn Downloads/

Install Libreswan

#https://blog.ls20.com/ipsec-l2tp-vpn-auto-setup-for-ubuntu-12-04-on-amazon-ec2/
#https://github.com/hwdsl2/setup-ipsec-vpn
wget https://github.com/hwdsl2/setup-ipsec-vpn/raw/master/vpnsetup.sh -O vpnsetup.sh
sudo nano -w vpnsetup.sh

PSK:your_private_key
Username:your_username
Password:your_password
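
After filling in those values, run the script (per the setup-ipsec-vpn project page):

sudo sh vpnsetup.sh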

Run following commands if OpenVPN doesn't work after reboot

sudo iptables -I INPUT -p udp --dport 1194 -j ACCEPT
sudo iptables -I FORWARD -s 10.8.0.0/24 -j ACCEPT
sudo iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo service ufw stop
sudo service ufw start
sudo /etc/init.d/openvpn restart

sudo iptables-save > /etc/iptables.rules
sudo nano /etc/rc.local

iptables-restore < /etc/iptables.rules

Install Dnsmasq

Check current nameserver configuration

cat /etc/resolv.conf

Install Dnsmasq

sudo apt-get install dnsmasq

Take note of query time

dig duckduckgo.com @localhost

Check again after cached

dig duckduckgo.com @localhost

Install NTP

sudo apt-get install ntp
sudo dpkg-reconfigure tzdata
sudo ntpdate pool.ntp.org
sudo service ntp start

Install send only SSMTP service

sudo apt-get install ssmtp
sudo nano /etc/ssmtp/ssmtp.conf

#root=postmaster
root=your_email@example.com
#mailhub=mail
mailhub=smtp.gmail.com:587
AuthUser=your_email@example.com
AuthPass=your_password
UseTLS=YES
UseSTARTTLS=YES
#rewriteDomain=
rewriteDomain=gmail.com
#hostname=your_hostname
hostname=your_email@example.com

Test ssmtp in terminal

ssmtp recipient_email@example.com

Format message as below

To: recipient_email@example.com
From: myemailaddress@gmail.com
Subject: test email

test email

Insert blank line after Subject:. This is the body of the email. Press CTRL-D to send message. Sometimes pressing CTRL-D a second time after about 10 seconds is needed if message is not sent.

Install Fail2ban

sudo apt-get install fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nano /etc/fail2ban/jail.local

# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Use space separator to add more than one IP
ignoreip = 127.0.0.1/8
bantime  = 600
maxretry = 3

destemail = your_email@example.com
sendername = Fail2Ban
mta = sendmail
#_mwl sends email with logs
action = %(action_mwl)s

Jails which can be initially set to true without any errors

#ssh
#dropbear
#pam-generic
#ssh-ddos
#postfix
#couriersmtp
#courierauth
#sasl
#dovecot

Restart Fail2ban

sudo service fail2ban stop
sudo service fail2ban start

Check list of banned IPs for Fail2ban

fail2ban-client status ssh
iptables --list -n | fgrep DROP

Full system backup using rsync.

Using the -aAX set of options, all attributes are preserved

rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} root@your_hostname:/ /home/demo/backup/
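
To preview what would be transferred, the same command can first be run with -n (dry run) added:

rsync -aAXvn --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} root@your_hostname:/ /home/demo/backup/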

Install TripWire

#https://www.digitalocean.com/community/tutorials/how-to-use-tripwire-to-detect-server-intrusions-on-an-ubuntu-vps
sudo apt-get install tripwire

Set the Site-Key and Local-Key passphrase

Create policy file

sudo twadmin --create-polfile /etc/tripwire/twpol.txt

Initialize database

sudo tripwire --init
sudo sh -c 'tripwire --check | grep Filename > /etc/tripwire/test_results'

Entries may look like this

less /etc/tripwire/test_results
     Filename: /etc/rc.boot
     Filename: /root/mail
     Filename: /root/Mail
     Filename: /root/.xsession-errors
     Filename: /root/.xauth
     Filename: /root/.tcshrc
     Filename: /root/.sawfish
     Filename: /root/.pinerc
     Filename: /root/.mc
     Filename: /root/.gnome_private
     Filename: /root/.gnome-desktop
     Filename: /root/.gnome
     Filename: /root/.esd_auth
     Filename: /root/.elm
     Filename: /root/.cshrc
     Filename: /root/.bash_profile
     Filename: /root/.bash_logout
     Filename: /root/.amandahosts
     Filename: /root/.addressbook.lu
     Filename: /root/.addressbook
     Filename: /root/.Xresources
     Filename: /root/.Xauthority
     Filename: /root/.ICEauthority
     Filename: /proc/30400/fd/3
     Filename: /proc/30400/fdinfo/3
     Filename: /proc/30400/task/30400/fd/3
     Filename: /proc/30400/task/30400/fdinfo/3

Edit text policy in editor

sudo nano /etc/tripwire/twpol.txt

Search for each of the files that were returned in the test_results file. Comment out lines that match.

    {
        /dev                    -> $(Device) ;
        /dev/pts                -> $(Device) ;
        #/proc                  -> $(Device) ;
        /proc/devices           -> $(Device) ;
        /proc/net               -> $(Device) ;
        /proc/tty               -> $(Device) ;
        ...

Comment out /var/run and /var/lock lines

    (
      rulename = "System boot changes",
      severity = $(SIG_HI)
    )
    {
        #/var/lock              -> $(SEC_CONFIG) ;
        #/var/run               -> $(SEC_CONFIG) ; # daemon PIDs
        /var/log                -> $(SEC_CONFIG) ;
    }

Save and close

Re-create encrypted policy file

sudo twadmin -m P /etc/tripwire/twpol.txt

Re-initialize database

sudo tripwire --init

Warnings should be gone. If there are still warnings, continue editing the /etc/tripwire/twpol.txt file until they are gone.

Check current status of warnings

sudo tripwire --check

Delete test_results file that was just created

sudo rm /etc/tripwire/test_results

Remove plain text configuration files

sudo sh -c 'twadmin --print-polfile > /etc/tripwire/twpol.txt'

Move text version to backup location and recreate it

sudo mv /etc/tripwire/twpol.txt /etc/tripwire/twpol.txt.bak
sudo sh -c 'twadmin --print-polfile > /etc/tripwire/twpol.txt'

Remove plain text files

sudo rm /etc/tripwire/twpol.txt
sudo rm /etc/tripwire/twpol.txt.bak

Send email notifications

sudo apt-get install mailutils

See if we can send email

sudo tripwire --check | mail -s "Tripwire report for `uname -n`" your_email@example.com

Check report that was sent with the email

sudo tripwire --check --interactive

Remove the x from the box if you are not OK with a change. Re-run the above command to reset the warnings after each email is received.

Automate Tripwire with Cron

Check if root already has a crontab by issuing this command

sudo crontab -l

If crontab is present, pipe into file to back it up

sudo sh -c 'crontab -l > crontab.bad'

Edit crontab

sudo crontab -e

To have tripwire run at 3:30am every day, insert this line

30 3 * * * /usr/sbin/tripwire --check | mail -s "Tripwire report for `uname -n`" your_email@example.com

Enable Automatic Upgrades

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure unattended-upgrades

Update the 10periodic file. A value of 1 means that it will upgrade every day

sudo nano /etc/apt/apt.conf.d/10periodic

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "1";
APT::Periodic::Unattended-Upgrade "1";

Autostart OpenVPN on Debian client computer

sudo nano /etc/default/openvpn

Uncomment the AUTOSTART line:

AUTOSTART="all"

Copy client.ovpn to /etc/openvpn/client.conf by renaming file

gksu -w -u root thunar

Reload openvpn configuration

/etc/init.d/openvpn reload /etc/openvpn/client.conf

Check for tun0 interface

ifconfig tun0

Allow multiple clients to connect with same ovpn file

Note: It is safer to create multiple ovpn files

sudo nano /etc/openvpn/server.conf

Uncomment the following line:

duplicate-cn

Restart OpenVPN service

sudo service openvpn restart

Maintenance Commands

#Programs holding open network socket
lsof -i

#Show all running processes
ps -ef

#Who is logged on
who -u

#Kill the process that you want
kill "pid"

#Check SSH sessions
ps aux | egrep "sshd: [a-zA-Z]+@"

#Check SSHD
ps fax

#Check last logins
last

#Check ufw status
sudo ufw status verbose

#Delete ufw rules
sudo ufw delete deny "port"

#Check logs
grep -ir ssh /var/log/*
grep -ir sshd /var/log/*
grep -ir breakin /var/log/*
grep -ir security /var/log/*

#Tree directory
#http://www.cyberciti.biz/faq/linux-show-directory-structure-command-line/

#See all files
tree -a

#List directories only
tree -d

#Colorized output
tree -C

#File management
#https://www.digitalocean.com/community/tutorials/basic-linux-navigation-and-file-management
#http://www.computerworld.com/article/2598082/linux/linux-linux-command-line-cheat-sheet.html
#http://www.debian-tutorials.com/beginners-how-to-navigate-the-linux-filesystem

#LSOF Commands
#https://stackoverflow.com/questions/106234/lsof-survival-guide

#How to kill zombie process
ps aux | grep 'Z'

#Find the parent PID of the zombie
pstree -p -s 93572

#Check IPTables traffic
sudo iptables -v -x -n -L

#Report file system disk space
df -Th

#Check trash size
sudo find / -type d -name '*Trash*' | sudo xargs du -h | sort

#Check size of packages in apt
du -h /var/cache/apt/

#Check size of log files
sudo du -h /var/log

#Check size of lost+found folder
sudo find / -name "lost+found" | sudo xargs du -h

#How to delete lots of text in nano
#Scroll to top of text, press Alt+A, Ctrl-V to bottom of text, press Ctrl-K to cut the text, Ctrl-O to save, Ctrl-X to exit

#How to scan top 8000 ports using nmap
nmap -vv --top-ports 8000 your_hostname

#Delete ufw and iptable rules by line number. In this example we use number 666
sudo ufw status numbered
sudo ufw delete 666

sudo iptables -L --line-numbers
sudo iptables -D INPUT 666

LaTeX Coffee Stains (2009)


LaTeX Coffee Stains

This package provides an essential feature to LaTeX that has been missing for too long. It adds a coffee stain to your documents. A lot of time can be saved by printing stains directly on the page rather than adding them manually.

coffee.pdf– This is an example of all included stains and also the documentation of the package.

coffee.tar.gz– This archive contains all files including the documentation (you probably want to download this).

coffee.sty– This is the actual package which contains everything including the images.

coffee.tex– This is the source code of the example file.

Update (23 November 2010):

coffee2.sty– This is an improved version that works with pdflatex. Thanks to Evan Sultanik!

Update (24 March 2011):

coffee3.tar.gz– This is another improved version that works with pdflatex. It allows you to scale, rotate and change the transparency of any coffee stain. Thanks to Professor Luis Randez!

Update (25 May 2012):

coffee4.tar.gz– Adrian Robson sent me this improved version. He writes: “I find I rarely manage to put my coffee mug down exactly in the middle of my papers. So I have amended coffee3.sty to support off centre coffee stains.”

How can existentialist philosophy help with the anxiety and dread of fatherhood?


A meeting of existential philosophers tends to be the spectacle one might expect: black berets whisper in hushed tones about death and anxiety; nervous hands and pursed lips smoke cigarettes in hotel rooms; throats are cleared to deliver scholarly papers to the chosen few. (What exactly would ‘The Patency of Art: Transubstantiation, Synesthesia, and Self-Touching Touch in Merleau-Ponty’s and Nancy’s Aesthetics’ be about?) There are, however, spectacles you will rarely see: the kind that children leave in their wake.

This is a gathering of predominately male philosophers, and male philosophers are notoriously bad fathers. Of course, there are exceptions, but think of Socrates shooing his family away in his final moments so that he can have alone time with his philosophical buddies, or, even worse, Jean-Jacques Rousseau writing Emile (1762), a tract about raising kids, while abandoning his own. Instead of being bad parents, many of the titans of European existentialism – Friedrich Nietzsche, Søren Kierkegaard, Jean-Paul Sartre – remained childless. 

We defied the odds: we are both philosophers, existentialists even, and both of us are fathers. How this happened was not exactly noble or well-considered: honeymoon babies, unexpected, but welcome, babies – that’s how we became fathers. And our tenure as parents has often been a haphazard mess, anything but thoroughly philosophical. Occasionally over the years, though, we’ve drawn on the wisdom of the fathers of philosophy, even the childless fathers of existentialism, and in so doing have become marginally better parents.

First, a word about childlessness: it would be easy to chalk up an existentialist’s avoidance of fatherhood to his guiding ideals of autonomy and freedom. We are, according to Sartre, ‘condemned to be free’, and this strange life sentence means that we must at every point choose our own path forward. This doesn’t suggest that one can’t take influence from another but, ultimately, individuals are solely responsible for the choices they make. The imperative to have children, one that remains widespread, should not therefore have the usual traction for an existentialist. He or she is wholly free in declining to procreate and raise a brood of kids. For an existentialist, there is no shame in this. None whatsoever.

Many philosophers steer clear of child-rearing because of the sheer difficulty of parenting well. ‘Raising children is an uncertain thing,’ the pre-Socratic philosopher Democritus tells us. ‘Success is reached only after a life of battle and worry.’ Many philosophers – many people – are not well-equipped for this battle. Some know it, and opt out. In our culture, it is tempting to interpret avoiding parenting as a refusal to be appropriately responsible. While there is nothing particularly wrong about this interpretation, it exerts a type of pressure that leads many to become horrible parents. Many adults become parents as a matter of course, rather than as an active choice, despite the fact that they might not be wholly prepared or willing.


‘Art thou a man entitled to desire a child?’ Nietzsche asks in his childlessness in Thus Spoke Zarathustra (1883-91). ‘Art thou the victorious one, the self-conqueror, the ruler of thy passions, the master of thy virtues? Thus do I ask thee.’ For many people, including Nietzsche, reticence and refusal is the most appropriate response to such difficult questions. In the Republic, Socrates comments that the reluctant ruler is the only one who should lead the polis, and the same might go for parenting: only those who fear and tremble in the face of fatherhood are worthy of assuming its infinite responsibility. Perhaps being scared and running away just means that you are paying attention.

But let’s pretend that an existentialist, after careful consideration or random accident, becomes a father. How can he remain a parent without jumping philosophical ship? According to his essay Anti-Semite and Jew (1946), the core of existential freedom is what Sartre terms ‘authenticity’, the courage to have ‘a true and lucid consciousness of the situation, in assuming the responsibilities and risks it involves, in accepting it in pride or humiliation, sometimes in horror and hate’.

Here is what a ‘true and lucid consciousness of the situation’ of fatherhood might resemble: you watch wide-eyed as your beloved pushes a stranger out of a bodily orifice that seems altogether too small for the labour; when the gore is cleaned up, the stranger becomes your most intimate companion and life-long dependent; existence, from that day forward, is structured around this dependency; and then, if everything goes well, the child will grow up to no longer need you. At the end of the existential day, your tenure as a father will end in one of two ways: either your child will die or you will. As Kierkegaard writes in Either/Or (1843): ‘You will regret both.’

Parenting authentically also involves coming to terms with what children are really like. They are not angels or hellions, sweethearts or monsters: they are little people who, as Kierkegaard suggests, are both angelic and beastly. This banal platitude expresses a deep truth about the human condition, namely that we are the sorts of creatures, perhaps the only ones, who possess radical freedom. Most of adult life is geared to ignoring this aspect of human nature, and modernity sets artificial constraints on behaviour, pretending that these constraints are God-given. Of course, for an existentialist, as for a child, all of this is nonsense – nothing is God-given. The boundaries that define civilised life are, more often than not, self-imposed, which is to say radically contingent. A child knows, in a way that most parents intentionally forget, that the range of life’s possibilities is always profoundly open. And the difficulty of life is to choose for oneself which possibilities should become actual.

Fatherhood has traditionally been about limiting a child’s sense of possibility. The expression ‘father knows best’ has a correlate: ‘child does not’. Obviously, there is something right about this position: a toddler rifling through a detergent cupboard should be stopped. Children occasionally explore possibilities that are harmful – physically and psychologically – and, as parents, it is our place to keep tabs on the threat that existential freedom poses to our kids. But existentialists such as Nietzsche suggest that our overblown risk-aversion doesn’t track the actual danger of a particular situation, but rather our own sense of anxiety.

Anxiety and dread – in everyday life, they are assiduously avoided. More specifically, we avoid the objects (spiders, exams, shots, clowns) that spur us to anxiety and dread. These experiences, however, have very particular meanings for European philosophers of the 19th and 20th centuries, and these thinkers generally agree that they are not the sorts of things that can or should be avoided. According to Kierkegaard, dread has no particular object or cause, but rather emanates uncomfortably from the very pit of being human. It is, in his words, the ‘sense of freedom’s possibility’. Imagine all the possibilities that you have in life, now multiply them by a power of 10, and then another power of 10, and then finally let yourself consider the many options that you have from a very young age forbidden yourself. Those are the ones that we should really talk about. Now, whatever you are feeling – that is something like a weak, attenuated sense of freedom’s infinite possibility. The routine of adulthood usually numbs us to this sort of dread, but children do their best to remind us of its force.

Why do we put limits on our children? Why is a daughter not allowed to climb that tree or jump across a river? Why is a son discouraged from wearing a dress or forced to play ice hockey? Why are neither daughters nor sons allowed to run away? Father knows best. Of course, virtually all fathers think that they are operating in their child’s best interests, but we have been at this long enough to know, if we are honest or authentic, that most of us protect our children, at least in part, because we are avoiding or coming to grips with our own Kierkegaardian anxiety. The more we argue that it is about the kids’ safety, the more obvious it is that it is all about us. Children remind us, in very delightful and painful ways, what it is to be a person. Their untethered curiosity, naïve bravery, complete lack of shame, remind their parents that they too, at one distant point, had these possibilities – and they had no small amount of trouble doing away with them.

Both of us have daughters. We remember the dread, or anxiety, of watching them climb the jungle gym. At a basic level, we thought that we were simply worried about compound fractures but, over the years, it is clear that what we really feared was losing control, relinquishing some of the mastery that we’d acquired over freedom’s dreadful reach. But what our children remind us is that we really have no mastery over freedom. Daughters and (we can only assume) sons have a proclivity for the full range of human potentialities, including the disastrous ones. And this is the truth about children, as far as we see it: parents hinge their self-conception to little beast-angels who are free to self-destruct, but we would like to think that this is not the case. Parenting a toddler is painful for a host of well-worn reasons but, at least for us, its tortures have less to do with the way our daughters defy our specific commands than with the anxiety of caring for someone who often intentionally and joyfully ignores what is obviously in her own best interest, much like the narrator of Fyodor Dostoevsky’s existentialist novella Notes from Underground (1864). 

So how do parents deal with the sense of anxiety that mounts as children hit their teenage years (surely the time when freedom’s possibility and limits are sensed most pointedly)? Infantilising or clamping down on the kids for the sake of our own sense of coherence and sanity is the best way to spurn a young adult to all-out revolt. Again, the childless philosophers have a clue in this regard. Sartre maintains that parents might do well to embrace a basic truth about dealing with adults, young and old: ‘Hell is other people.’ This isn’t a pessimistic or dire statement. (Okay, it is pessimistic, but it isn’t dire.) Coexistence is ‘hell’ because it entails the variability, vulnerability and tragedy of living with another human being, one who is wholly free to explore her own freedom exactly as she chooses. We can love her, and we surely do, but this doesn’t mean that she will act in accordance with our will or, even if she does, that this will turn out for the best. Ultimately, it won’t.

More than a century before Sartre, Arthur Schopenhauer, arguably the first existentialist philosopher, suggests that one should adjust his expectations about life or, in this case, life with children. It is best to view it, in Schopenhauer’s words ‘as an unprofitable episode, disturbing the blessed calm of non-existence’. This is not to say that one should hate parenting or think that children don’t do their share in making it bearable or even enjoyable. It’s to suggest that parenting, like the rest of life, is ‘a task to be done’, in Schopenhauer’s words. It is the very difficult journey of negotiating freedom such that, when each of us is delivered to our unceremonious end, there isn’t the nagging sense that one hasn’t lived. When we say that we want our children to be happy and safe, what we should mean is this – that they have grown up to make free decisions that are meaningful and that they are willing to stake their lives on them.

We know that all of this sounds painfully severe. Most parents will want to gloss over the difficulties of parenting and concentrate on its many joys. Existentialists, however, suggest that such optimism is often a form of ‘bad faith’: it is a way of masking the freedom that underpins parenting and being a child. When a parent emphasises only what ‘fits’ into his conception of being a father, or being a child, rather than attending to the specific nuances of day-to-day interaction, existentialists, such as Sartre, would sound the alarm. Life with children is chaos at best. Things slip through the cracks. Daughters fall off jungle gyms. Sons run away. It happens, and not always to someone else’s children. If a man presumes that fatherhood is going to go perfectly smoothly, he is either going to be upset or self-deceived.

At its core, bad faith is a form of self-deception that attempts to hide the unruly remainders of human freedom in acceptable cultural roles. The classic example that Sartre gives is the Parisian waiter who is obviously just playing at serving patrons at a café: his motions are forced and exaggerated; he smiles too broadly and bows too deeply; he embraces a phoney role rather than a form of authentic personhood. Sartre could have picked a better example of bad faith by attending a toddler’s birthday and talking to the parents for three minutes. Soccer mom, helicopter parent, sports-obsessed father, tiger mother – the roles of parenthood abound. More often than not, however, the roles converge on maintaining a single façade: flawlessness. What is at stake for a parent in maintaining the semblance of normality or perfection? It certainly isn’t the mental wellbeing of the children. Schopenhauer suggests that we forego appearances and admit, once and for all, that parenting, along with life in general, is a hell that forever deviates from the scripts we have for it.

There is something paradoxical in accepting Schopenhauer’s dark suggestion. One might think that it makes life harder but, in our experience, when a father takes Schopenhauer’s assertion – to view life as a ‘uselessly disturbing episode’ – the experience of fatherhood somehow becomes more manageable. The shame, disappointment and guilt that so many parents face are often a function of unrealistic expectations. When an existentialist father is at his wit’s end, he has already prepared himself for the experience. It might be painful, but it doesn’t come as a huge shock. In his essay ‘On the Sufferings of the World’ (1850), Schopenhauer writes:

If you accustom yourself to this view of life you will regulate your expectations accordingly, and cease to look upon all its disagreeable incidents, great and small, its sufferings, its worries, its misery, as anything unusual or irregular; nay, you will find that everything is as it should be, in a world where each of us pays the penalty of existence in his own peculiar way.

Many optimists are secretly unhappy people: their hopes and aspirations are dashed with surprising regularity. On the other hand, many pessimists – or shall we call them realists – are actually amazingly buoyant: their hopes and aspirations are well-fitted to a world that ultimately comes up short. This might sound like we are cheating our kids and ourselves out of the chance to ‘dream big’, to take risks, to reach for the stars. Nothing could be further from the truth.

Existentialists encourage their readers to take full responsibility for the course of their lives, and also to venture beyond one’s self-imposed limits. This is what Nietzsche means when he commands us to ‘give birth to a dancing star’. Nietzsche, however, holds that there is no transcendental guarantee of success when a child, or anyone else, explores the full range of human possibilities. When one gives birth to a dancing star, the labour is painful. There is no anaesthesia for the procedure. In the words of Albert Camus, our efforts in life, pitted against the indifference of the world, often resemble the frustrations of Sisyphus, who is fated to push his boulder up an endless mountain. So Nietzsche and Camus, along with existentialists of the 20th century, ultimately counsel resilience – and what better lesson for a young parent with young children.

In recent years, we have come to slowly appreciate the underlying moral of Schopenhauer’s existential worldview. Adjusting one’s unrealistic expectations is the easy part. What he intends for us to realise, or become, is considerably harder. Schopenhauer suggests that the feigned optimism – what later existentialists will call a form of ‘bad faith’ – has the strange consequence of alienating others. Have you ever tried to be real friends with a diehard helicopter parent? We have, and it doesn’t work. There is a sort of thick, glossy veneer that forbids genuine communion. Their kids aren’t allowed in, either. Existentialists ask us to strip away this barrier, to own up to the universal nature of human suffering – the fact that all of us, young and old, will encounter untold tragedies in our lives, despite the best efforts of our parents or guardians.

This truth, according to the dour Schopenhauer, should allow us to cultivate a bit of forbearance for others, including our children, even in the most trying of moments. Life really is brutally hard – for each of us in our own special ways. If we permit ourselves a moment of existential authenticity, a chance to see our kids as they really are rather than what we wish them to be, it is clear that childhood is often terrifying. So is parenthood. This realisation, as dismal as it might seem, grants a parent something that optimism typically forbids: meaningful empathy, the capacity to feel for someone else. ‘This may perhaps sound strange,’ Schopenhauer admits, ‘but it is in keeping with the facts; it puts others in a right light; and it reminds us of that which is after all the most necessary thing in life – the tolerance, patience, regard, and love of neighbour, of which everyone stands in need, and which, therefore, every man owes to his fellow’ – even the very small one.

Mathematicians bring ocean to life for Disney's 'Moana'

The hit Disney movie “Moana” features stunning visual effects, including the animation of water to such a degree that it becomes a distinct character in the film. Credit: Walt Disney Animation Studios

UCLA mathematics professor Joseph Teran, a Walt Disney consultant on animated movies since 2007, is under no illusion that artists want lengthy mathematics lessons, but many of them realize that the success of animated movies often depends on advanced mathematics.

"In general, the animators and artists at the studios want as little to do with mathematics and physics as possible, but the demands for realism in animated movies are so high," Teran said. "Things are going to look fake if you don't at least start with the correct physics and mathematics for many materials, such as water and snow. If the physics and mathematics are not simulated accurately, it will be very glaring that something is wrong with the animation of the material."

Teran and his research team have helped infuse realism into several Disney movies, including "Frozen," where they used science to animate snow scenes. Most recently, they applied their knowledge of math, physics and computer science to enliven the new 3-D computer-animated hit, "Moana," a tale about an adventurous teenage girl who is drawn to the ocean and is inspired to leave the safety of her island on a daring journey to save her people.

Alexey Stomakhin, a former UCLA doctoral student of Teran's and Andrea Bertozzi's, played an important role in the making of "Moana." After earning his Ph.D. in applied mathematics in 2013, he became a senior software engineer at Walt Disney Animation Studios. Working with Disney's effects artists, technical directors and software developers, Stomakhin led the development of the code that was used to simulate the movement of water in "Moana," enabling it to play a role as one of the characters in the film.

"The increased demand for realism and complexity in animated movies makes it preferable to get assistance from computers; this means we have to simulate the movement of the ocean surface and how the water splashes, for example, to make it look believable," Stomakhin explained. "There is a lot of mathematics, physics and computer science under the hood. That's what we do."

"Moana" has been praised for its stunning visual effects in words the mathematicians love hearing. "Everything in the movie looks almost real, so the movement of the water has to look real too, and it does," Teran said. "'Moana' has the best water effects I've ever seen, by far."


Stomakhin said his job is fun and "super-interesting, especially when we cheat physics and step beyond physics. It's almost like building your own universe with your own laws of physics and trying to simulate that universe.

"Disney movies are about magic, so magical things happen which do not exist in the real world," said the . "It's our job to add some extra forces and other tricks to help create those effects. If you have an understanding of how the real physical laws work, you can push parameters beyond physical limits and change equations slightly; we can predict the consequences of that."

To make animated movies these days, movie studios need to solve, or nearly solve, partial differential equations. Stomakhin, Teran and their colleagues build the code that solves the partial differential equations. More accurately, they write algorithms that closely approximate the equations, because they cannot be solved perfectly. "We try to come up with new algorithms that have the highest-quality metrics in all possible categories, including preserving angular momentum perfectly and preserving energy perfectly. Many algorithms don't have these properties," Teran said.

Stomakhin was also involved in creating the ocean's crashing waves that have to break at a certain place and time. That task required him to get creative with physics and use other tricks. "You don't allow physics to completely guide it," he said. "You allow the wave to break only when it needs to break."

Depicting boats on waves posed additional challenges for the scientists.

"It's easy to simulate a boat traveling through a static lake, but a boat on waves is much more challenging to simulate," Stomakhin said. "We simulated the fluid around the boat; the challenge was to blend that fluid with the rest of the ocean. It can't look like the boat is splashing in a little swimming pool—the blend needs to be seamless."

The movement of water was precisely choreographed by mathematicians who applied principles of physics and mathematics to the task. Credit: Walt Disney Animation Studios

Stomakhin spent more than a year developing the code and understanding the physics that allowed him to achieve this effect.

"It's nice to see the great visual effect, something you couldn't have achieved if you hadn't designed the algorithm to solve accurately," said Teran, who has taught an undergraduate course on scientific computing for the visual-effects industry.

While Teran loves spectacular visual effects, he said the research has many other scientific applications as well. It could be used to simulate plasmas, simulate 3-D printing or for surgical simulation, for example. Teran is using a related algorithm to build virtual livers to substitute for the animal livers that surgeons train on. He is also using the algorithm to study traumatic leg injuries.

Teran describes the work with Disney as "bread-and-butter, high-performance computing for simulating materials, as mechanical engineers and physicists at national laboratories would. Simulating water for a movie is not so different, but there are, of course, small tweaks to make the water visually compelling. We don't have a separate branch of research for computer graphics. We create new algorithms that work for simulating wide ranges of materials."

Teran, Stomakhin and three other applied mathematicians—Chenfanfu Jiang, Craig Schroeder and Andrew Selle—also developed a state-of-the-art simulation method for fluids in graphics, called APIC, based on months of calculations. It allows for better realism and stunning visual results. Jiang is a UCLA postdoctoral scholar in Teran's laboratory, who won a 2015 UCLA best dissertation prize. Schroeder is a former UCLA postdoctoral scholar who worked with Teran and is now at UC Riverside. Selle, who worked at Walt Disney Animation Studios, is now at Google.

Their newest version of APIC has been accepted for publication by the peer-reviewed Journal of Computational Physics.

"Alexey is using ideas from high-performance computing to make movies," Teran said, "and we are contributing to the scientific community by improving the algorithm."



GCC code generation for C++ Weekly Ep 43 example

Episode 43 of “C++ Weekly” talks about evaluating and eliminating code at compile time, and the example is fun as it triggers a few different deficiencies in the GCC optimization passes (using the -O3 optimization level with GCC trunk r243987 built for x86_64-linux-gnu).

The example
#include <type_traits>
#include <numeric>
#include <iterator>

template<typename ... T>
int sum(T ... t)
{
  std::common_type_t<T...> array[sizeof...(T)]{ t... };
  return std::accumulate(std::begin(array), std::end(array), 0);
}

int main()
{
  return sum(5,4,3,2,1);
}
is meant to be optimized to return a constant value, and it does
main:
    movl    $15, %eax
    ret
But the behavior varies a lot depending on how many arguments are passed to sum. 1–5 arguments work as expected, while calling sum with 6 or 7 arguments is generated as
main:
 movdqa .LC0(%rip), %xmm0
 movaps %xmm0, -40(%rsp)
 movl -32(%rsp), %edx
 movl -28(%rsp), %eax
 leal 14(%rdx,%rax), %eax
 ret
where .LC0 is an array consisting of four constants. 9–12 arguments are similar (but with more code). 

13–28 arguments are generated as a constant again
main:
    movl    $91, %eax
    ret

29–64 arguments are optimized to a constant, but with some redundant stack adjustments when the number of arguments is not divisible by four
main:
    subq    $16, %rsp
    movl    $435, %eax
    addq    $16, %rsp
    ret

Finally, 65 and more arguments are generated as a vectorized monstrosity in a separate function, called from main by pushing all the arguments to the stack, one at a time
main:
    subq    $16, %rsp
    movl    $60, %r9d
    movl    $61, %r8d
    pushq   $1
    pushq   $2
    movl    $62, %ecx
    pushq   $3
    pushq   $4
    ...
This is essentially as far from generating a constant as you can come. 😀

The rest of the blog post will look at how GCC is reasoning when trying to optimize this, by examining GCC's internal representation of the program at different points in the optimization pipeline. The IR works essentially as a restricted version of the C language, and you can get GCC to write the IR to a file after each pass by using the command-line option -fdump-tree-all.
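
For reference, a command along these lines produces both the assembly and the per-pass dump files discussed below (the file name is arbitrary; -std=c++14 is needed for std::common_type_t):

g++ -O3 -S -std=c++14 -fdump-tree-all sum.cpp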

1–5 arguments

The use of std::accumulate and iterators expands to five functions, and the compiler starts by inlining and simplifying this to
int main() ()
{
  common_type_t array[5];
  int __init;
  int * __first;
  int _3;

  <bb 2> [16.67%]:
  array[0] = 5;
  array[1] = 4;
  array[2] = 3;
  array[3] = 2;
  array[4] = 1;

  <bb 3> [100.00%]:
  # __first_2 = PHI <&array(2), __first_6(4)>
  # __init_4 = PHI <0(2), __init_5(4)>
  if (&MEM[(void *)&array + 20B] == __first_2)
    goto <bb 5>; [16.67%]
  else
    goto <bb 4>; [83.33%]

  <bb 4> [83.33%]:
  _3 = *__first_2;
  __init_5 = _3 + __init_4;
  __first_6 = __first_2 + 4;
  goto <bb 3>; [100.00%]

  <bb 5> [16.67%]:
  array ={v} {CLOBBER};
  return __init_4;
}
The loop is immediately unrolled and simplified to
int main() ()
{
  common_type_t array[5];
  int __init;
  int * __first;
  int _15;
  int _20;
  int _25;
  int _30;
  int _35;

  <bb 2> [16.70%]:
  array[0] = 5;
  array[1] = 4;
  array[2] = 3;
  array[3] = 2;
  array[4] = 1;
  _15 = MEM[(int *)&array];
  __init_16 = _15;
  _20 = MEM[(int *)&array + 4B];
  __init_21 = _15 + _20;
  _25 = MEM[(int *)&array + 8B];
  __init_26 = __init_21 + _25;
  _30 = MEM[(int *)&array + 12B];
  __init_31 = __init_26 + _30;
  _35 = MEM[(int *)&array + 16B];
  __init_36 = __init_31 + _35;
  array ={v} {CLOBBER};
  return __init_36;
}
that is then optimized to a constant by the fre3 (“Full Redundancy Elimination”) pass
int main() ()
{
  <bb 2> [16.70%]:
  return 15;
}

6–12 arguments

The early optimizations that handled the previous case are there mostly to get rid of noise before the heavier optimizations (such as the loop optimizations) kick in, and a loop doing 6 iterations is considered too big to be unrolled by the early unroller.

The “real” loop optimizers determine that the loop is not iterating enough for vectorization to be profitable, so it is just unrolled. The “SLP vectorizer” that vectorizes straight line code is run right after the loop optimizations, and it sees that we are copying constants into consecutive addresses, so it combines four of them to a vector assignment

MEM[(int *)&array] = { 6, 5, 4, 3 };
array[4] = 2;
array[5] = 1;
This is now simplified by the dom3 pass that does SSA dominator optimizations (jump threading, redundancy elimination, and const/copy propagation), but it does not understand that a scalar initialized by a constant vector is a constant, so it only propagates the constants for array[4] and array[5] that were initialized as scalars, and the code passed to the backend looks like
int main() ()
{
  common_type_t array[6];
  int __init;
  int _15;
  int _22;
  int _27;
  int _32;

  <bb 2> [14.31%]:
  MEM[(int *)&array] = { 6, 5, 4, 3 };
  _15 = MEM[(int *)&array];
  _22 = MEM[(int *)&array + 4B];
  __init_23 = _15 + _22;
  _27 = MEM[(int *)&array + 8B];
  __init_28 = __init_23 + _27;
  _32 = MEM[(int *)&array + 12B];
  __init_33 = __init_28 + _32;
  __init_43 = __init_33 + 3;
  array ={v} {CLOBBER};
  return __init_43;
}

13–28 arguments

The loop is now iterated enough times that the compiler determines that vectorization is profitable. The idea behind the vectorization is to end up with something like
tmp = { array[0], array[1], array[2], array[3] }
    + { array[4], array[5], array[6], array[7] }
    + { array[8], array[9], array[10], array[11] };
sum = tmp[0] + tmp[1] + tmp[2] + tmp[3] + array[12];
and the vectorizer generates two loops — one that consumes four elements at a time as long as possible, and one that consumes the remaining elements one at a time. The rest of the loop optimizers know how many times the loops are iterating, so the loops can then be unrolled etc. as appropriate.

The vectorizer is, unfortunately, generating somewhat strange code for the checks that there are enough elements
_35 = (unsigned long) &MEM[(void *)&array + 52B];
_36 = &array + 4;
_37 = (unsigned long) _36;
_38 = _35 - _37;
_39 = _38 /[ex] 4;
_40 = _39 & 4611686018427387903;
if (_40 <= 4)
  goto ; [10.00%]
else
  goto ; [90.00%]
that confuse the rest of the loop optimizations, with the result that the IR contains lots of conditional code of this form. This is not the first time I have seen GCC having problems with the pointer arithmetic from iterators (see bug 78847), and I believe this is the same problem (as the bitwise AND should be optimized away once the pointer arithmetic has been evaluated to a constant).

The subsequent passes mostly manage to clean up these conditionals, and dom3 optimizes the vector operations to a constant. But it does not understand the expression used to decide how many scalar elements need to be handled if the iteration count is not a multiple of four (that check is eliminated by the value range propagation pass after dom3 is run), so the scalar additions are kept in the code given to the backend
int main() ()
{
  common_type_t array[13];
  int __init;
  int _23;

  <bb 2> [7.14%]:
  array[12] = 1;
  _23 = MEM[(int *)&array + 48B];
  __init_3 = _23 + 90;
  array ={v} {CLOBBER};
  return __init_3;
}
This is, however, not much of a problem for this program, as the backend manages to optimize this to
main:
    movl    $91, %eax
    ret
when generating the code.

29–64 arguments

The backend eliminates the memory accesses to the array in the previous case, but the array itself seems to be kept on the stack. That makes sense: the code is supposed to be optimized before being passed to the backend, so the backend does not need to be able to eliminate unused variables, and there is no need to implement code doing this.

Leaf functions do not need to adjust the stack, but GCC does some stack adjustment on leaf functions too when more than 112 bytes are placed on the stack. You can see this for the meaningless function
void foo()
{
    volatile char a[113];
    a[0] = 0;
}
where the stack is adjusted when the array size is larger than 112.
foo:
    subq    $16, %rsp
    movb    $0, -120(%rsp)
    addq    $16, %rsp
    ret
I do not understand what GCC is trying to do here...

Anyway, passing 29 arguments to sum makes the array large enough that GCC adds the stack adjustments.

65– arguments

The sequence of assignments initializing the array is now large enough that sum is not inlined into main.

Deep Text Correcter


Code |Demo

While context-sensitive spell-check systems (such as AutoCorrect) are able to automatically correct a large number of input errors in instant messaging, email, and SMS messages, they are unable to correct even simple grammatical errors. For example, the message “I’m going to store” would be unaffected by typical autocorrection systems, when the user most likely intended to communicate “I’m going to the store”.

Inspired by recent advancements in NLP driven by deep learning (such as those in Neural Machine Translation by Bahdanau et al., 2014), I decided to try training a neural network to solve this problem. Specifically, I set out to construct sequence-to-sequence models capable of processing a sample of conversational written English and generating a corrected version of that sample. In this post I’ll describe how I created this “Deep Text Correcter” system and present some encouraging initial results.

All code is available on GitHub here, and a demo of this system is live here.

Correcting Grammatical Errors with Deep Learning

The basic idea behind this project is that we can generate large training datasets for the task of grammar correction by starting with grammatically correct samples and introducing small errors to produce input-output pairs. The details of how we construct these datasets, train models using them, and produce predictions for this task are described below.

Datasets

To create a dataset for training Deep Text Correcter models, I started with a large collection of mostly grammatically correct samples of conversational written English. The primary dataset considered in this project is the Cornell Movie-Dialogs Corpus, which contains over 300k lines from movie scripts. This was the largest collection of conversational written English I could find that was (mostly) grammatically correct.

Given a sample of text like this, the next step is to generate input-output pairs to be used during training. This is done by:

  1. Drawing a sample sentence from the dataset.
  2. Setting the input sequence to this sentence after randomly applying certain perturbations.
  3. Setting the output sequence to the unperturbed sentence.

where the perturbations applied in step (2) are intended to introduce small grammatical errors which we would like the model to learn to correct. Thus far, these perturbations have been limited to:

  • the subtraction of articles (a, an, the)
  • the subtraction of the second part of a verb contraction (e.g. “‘ve”, “‘ll”, “‘s”, “‘m”)
  • the replacement of a few common homophones with one of their counterparts (e.g. replacing “their” with “there”, “then” with “than”)

For example, given the sample sentence:

"And who was the enemy ?"

the following input-output pair could be generated:

("And who was enemy ?", "And who was the enemy ?")

The rates with which these perturbations are introduced are loosely based on figures taken from the CoNLL 2014 Shared Task on Grammatical Error Correction. In this project, each perturbation is randomly applied in 25% of cases where it could potentially be applied.
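As a rough sketch of the idea (not the project's actual implementation), the perturbation step might look like the following Python function, with the token lists and the 25% rate taken from the description above:

import random

ARTICLES = {"a", "an", "the"}
CONTRACTION_SUFFIXES = {"'ve", "'ll", "'s", "'m"}   # assumes contractions are split into separate tokens
HOMOPHONES = {"their": "there", "then": "than"}

def perturb(tokens, rate=0.25):
    """Randomly introduce small grammatical errors into a tokenized sentence."""
    out = []
    for tok in tokens:
        low = tok.lower()
        if low in ARTICLES and random.random() < rate:
            continue  # drop the article
        if low in CONTRACTION_SUFFIXES and random.random() < rate:
            continue  # drop the second half of a verb contraction
        if low in HOMOPHONES and random.random() < rate:
            out.append(HOMOPHONES[low])  # swap in the wrong homophone
            continue
        out.append(tok)
    return out

# A perturbed copy becomes the input; the original sentence is the target.
target = "And who was the enemy ?".split()
source = perturb(target)
print(source, target)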

Training

In order to artificially increase the size of the dataset when training a sequence-to-sequence model, I performed the sampling strategy described above multiple times over the Movie-Dialogs Corpus to arrive at a dataset 2-3x the size of the original corpus. Given this augmented dataset, training proceeded in a very similar manner to TensorFlow’s sequence-to-sequence tutorial. That is, I trained a sequence-to-sequence model consisting of LSTM encoders and decoders bridged via an attention mechanism, as described in Bahdanau et al., 2014.

Decoding

Instead of using the most probable decoding according to the sequence-to-sequence model, this project takes advantage of the unique structure of the problem to impose the prior that all tokens in a decoded sequence should either exist in the input sequence or belong to a set of “corrective” tokens. The “corrective” token set is constructed during training and contains all tokens seen in the target, but not the source, for at least one sample in the training set. The intuition here is that the errors seen during training involve the misuse of a relatively small vocabulary of common words (e.g. “the”, “an”, “their”) and that the model should only be allowed to perform corrections in this domain.

This prior is carried out through a modification to the decoding loop in TensorFlow’s seq2seq model in addition to a post-processing step that resolves out-of-vocabulary (OOV) tokens:

Biased Decoding

To restrict the decoding such that it only ever chooses tokens from the input sequence or corrective token set, this project applies a binary mask to the model’s logits prior to extracting the prediction to be fed into the next time step.

This is done by constructing a mask:

mask[i] == 1.0 if i in (input or corrective_tokens) else 0.0

and then using it during decoding in the following manner:

token_probs = tf.softmax(logits)
biased_token_probs = tf.mul(token_probs, mask)
decoded_token = math_ops.argmax(biased_token_probs, 1)

Since this mask is applied to the result of a softmax transformation (which guarantees all outputs are positive), we can be sure that only input or corrective tokens are ever selected.

Note that this logic is not used during training, as this would only serve to hide potentially useful signal from the model.

Handling OOV Tokens

Since the decoding bias described above is applied within the truncated vocabulary used by the model, we will still see the unknown token in its output for any OOV tokens. The more generic problem of resolving these OOV tokens is non-trivial (e.g. see Addressing the Rare Word Problem in NMT), but in this project we can again take advantage of the unique structure of the problem to create a fairly straightforward OOV token resolution scheme.

Specifically, if we assume the sequence of OOV tokens in the input is equal to the sequence of OOV tokens in the output sequence, then we can trivially assign the appropriate token to each “unknown” token encountered in the decoding.

For example, in the following scenario:

Input sequence: "Alex went to store"
Target sequence: "Alex went to the store"
Decoding from model: "UNK went to the store"

this logic would replace the UNK token in the decoding with Alex.

Empirically, and intuitively, this appears to be an appropriate assumption, as the relatively simple class of errors these models are being trained to address should never include mistakes that warrant the insertion or removal of a rare token.
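A minimal sketch of this positional substitution (assuming an "UNK" placeholder token and a known vocabulary; names are illustrative) might look like:

UNK = "UNK"  # assumed marker for out-of-vocabulary tokens

def resolve_oov(input_tokens, decoded_tokens, vocab):
    """Replace each UNK in the decoding with the next OOV token from the input, in order."""
    oov_queue = [t for t in input_tokens if t not in vocab]
    resolved = []
    for tok in decoded_tokens:
        if tok == UNK and oov_queue:
            resolved.append(oov_queue.pop(0))
        else:
            resolved.append(tok)
    return resolved

vocab = {"went", "to", "the", "store", UNK}
print(resolve_oov("Alex went to store".split(),
                  "UNK went to the store".split(), vocab))
# ['Alex', 'went', 'to', 'the', 'store']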

Experiments and Results

Below are some anecdotal and aggregate results from experiments using the Deep Text Correcter model with the Cornell Movie-Dialogs Corpus. The dataset consists of 304,713 lines from movie scripts, of which 243,768 lines were used to train the model and 30,474 lines each were used for the validation and testing sets. For the training set, 2 samples were drawn per line in the corpus, as described above. The sets were selected such that no lines from the same movie were present in both the training and testing sets.

The model being evaluated below is a sequence-to-sequence model, with attention, where the encoder and decoder were both 2-layer, 512 hidden unit LSTMs. The model was trained with a vocabulary consisting of the 2,000 most common words seen in the training set (note that we can use a small vocabulary here due to our OOV resolution strategy). A bucketing scheme similar to that in Bahdanau et al., 2014 is used, resulting in 4 models for input-output pairs of sizes smaller than 10, 15, 20, and 40.

Aggregate Performance

Below are reported the corpus BLEU scores (as computed by NLTK) and accuracy numbers over the test dataset for both the trained model and a baseline. The baseline used here is simply the identity function, which assumes no errors exist in the input; the motivation for this is to test whether the introduction of the trained model could add value to an existing system with no grammar-correction system in place.
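A rough sketch of how such an evaluation can be computed with NLTK's corpus_bleu is shown below; the variable names are placeholders, and accuracy is assumed here to mean exact sequence match:

from nltk.translate.bleu_score import corpus_bleu

def evaluate(targets, predictions):
    """Corpus BLEU and exact-match accuracy over tokenized sentences."""
    references = [[t] for t in targets]  # corpus_bleu expects a list of reference lists per sample
    bleu = corpus_bleu(references, predictions)
    accuracy = sum(p == t for p, t in zip(predictions, targets)) / len(targets)
    return bleu, accuracy

# Baseline: assume the input contains no errors and echo it back unchanged.
# baseline_bleu, baseline_acc = evaluate(target_token_lists, source_token_lists)
# model_bleu, model_acc = evaluate(target_token_lists, decoded_token_lists)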

Encouragingly, the trained model outperforms this baseline for all bucket sizes in terms of accuracy, and outperforms all but one in terms of BLEU score. This tells us that applying the Deep Text Correcter model to a potentially errant writing sample would, on average, result in a more grammatically correct writing sample. Anyone who tends to make errors similar to those the model has been trained on could therefore benefit from passing their messages through this model.

Bucket (seq length)   Baseline BLEU   Model BLEU   Baseline Accuracy   Model Accuracy
Bucket 1 (10)         0.8341          0.8516       0.9083              0.9384
Bucket 2 (15)         0.8850          0.8860       0.8156              0.8491
Bucket 3 (20)         0.8876          0.8880       0.7291              0.7817
Bucket 4 (40)         0.9099          0.9045       0.6073              0.6425

Examples

In addition to the encouraging aggregate performance of this model, we can see that it is capable of generalizing beyond the specific language styles present in the Movie-Dialogs corpus by testing it on a few fabricated, grammatically incorrect sentences. Below are a few examples, but you can try out your own examples using the demo here.

Decoding a sentence with a missing article:

In [31]: decode("Kvothe went to market")
Out[31]: 'Kvothe went to the market'

Decoding a sentence with then/than confusion:

In [30]: decode("the Cardinals did better then the Cubs in the offseason")
Out[30]: 'the Cardinals did better than the Cubs in the offseason'

Note that in addition to correcting the grammatical errors, the system is able to handle OOV tokens without issue.

Future Work

While these initial results are encouraging, there is still a lot of room for improvement. The biggest thing holding the project back is the lack of a large dataset – the 300k samples in the Cornell Movie-Dialogs Corpus are tiny by modern deep learning standards. Unfortunately, I am not aware of any publicly available dataset of (mostly) grammatically correct English. A close proxy could be comments in a “higher quality” online forum, such as Hacker News or certain subreddits. I may try this next.

Given a larger dataset, I would also like to try to introduce many different kinds of errors into the training samples. The perturbations used thus far are limited to fairly simple grammatical mistakes; it would be very interesting to see if the model could learn to correct somewhat more subtle mistakes, such as subject-verb agreement.

On the application front, I could see this system eventually being accessible via a “correction” API that could be leveraged in a variety of messaging applications.

The Zcash Anonymous Cryptocurrency [video]

C3TV - The Zcash anonymous cryptocurrency

pesco

Zcash is the third iteration of an extension to the Bitcoin protocol that provides true untraceability, i.e. fully anonymous transactions. It is arguably the first serious attempt to establish this extension, in the form of its own blockchain, beyond the form of an academic proposal. The talk provides an introduction to the magic that makes it work.

Download

These files contain multiple languages.

This Talk was translated into multiple languages. The files available for download contain all languages as separate audio-tracks. Most desktop video players allow you to choose between them.

Please look for "audio tracks" in your desktop video player.


Building a Billion User Load Balancer


USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

Is Your Internet Up-To-date? Test Your IPv6, DNSSEC, TLS, DKIM, SPF and DMARC


Several tests are being performed. During their execution, you can select one of the adjacent tests to get more information. When the tests are completed, you can click to the next page to see the results.

Measuring execution performance of C++ exceptions vs. error codes

In an earlier blog post we found out that C++ exceptions produce smaller executables than corresponding code using error codes. The most common comment to it was that executable size is not that important, performance is what matters. Since we have the tooling, let's do perf measurements as well.

Test setup

We generate code that calls a few other functions that call other functions and so on until we reach the bottommost function. In that function we either raise an exception or error or return success. This gives us two parameters to vary: the depth of the call hierarchy and what percentage of calls produce an error.

The code itself is simple. The C++ version looks like this:

int func0() {
    int num = random() % 5;
    if(num == 0) {
        return func1();
    }
    if(num == 1) {
        return func2();
    }
    if(num == 2) {
        return func3();
    }
    if(num == 3) {
        return func4();
    }
    return func5();
}

We generate a random number and based on that call a function deeper in the hierarchy. This means that every time we call the test it goes from the top function to the final function in a random way. We use the C random number generator and seed it with a known value so both C and C++ have the same call chain path, even though they are different binaries. Note that this function calls on average a function whose number is its own number plus three. So if we produce 100 functions, the call stack is on average 100/3 = 33 entries deep when the error is generated.

The plain C version that uses error objects is almost identical:

int func0(struct Error **error) {
    int num = random() % 5;
    int res;
    if(num == 0) {
        res = func1(error);
    } else if(num == 1) {
        res = func2(error);
    } else if(num == 2) {
        res = func3(error);
    } else if(num == 3) {
        res = func4(error);
    } else {
        res = func5(error);
    }
    /* This is a no-op in this specific code but is here to simulate
     * real code that would check the error and based on that
     * do something. We must have a branch on the error condition,
     * because that is what real world code would have as well.
     */
    if(*error) {
        return -1;
    }

    return res;
}

The code for generating this code and running the measurements can be found in this Github repo.

Results

We do not care so much about how long the executables take to run, only which one of them runs faster. Thus we generate output results that look like this (Ubuntu 16.10, GCC 6.2, Intel i7-3770):

CCeCCCCCCC
CCCCCCccec
eEcCCCeECE
cceEEeECCe
EEEcEEEeCE
EEEEEeccEE
EEEEEEEEEE
EEEEEEEEEE
EEEEEEEEEE
EEEEEEEEEE

In this diagram C means that plain C error codes are faster and E that exceptions are faster for that particular pair of parameters. For easier reading all cases where error codes win have been given a red background color. Each row represents a different number of functions. The top row has 50 functions (call depth of 16) and each row adds 50 so the bottommost row has 500 functions.

The columns represent different error rates. The leftmost column has 0% errors, the next one has 1% and so on. A capital letter means that the runtime difference was bigger than 10%. Each executable was executed five times and the fastest run was taken in each case. Full results look like the following:

GCC 6.2      Clang 3.8.1  Clang 4.0 trunk

CCeCCCCCCC  EecCcCcCcC  ECcECcEeCC
CCCCCCccec  CceEEeEcEC  EEeEECEcCC
eEcCCCeECE   EEEEEECEeE   EEEECECeCC
cceEEeECCe   EcEeEECEEE   EEEEceeEeC
EEEcEEEeCE   EEEeEEEEEe   EEEEEEEEEE
EEEEEeccEE   EEEEEEEEEE   EEEEEEEEEE
EEEEEEEEEE   EEEEEEEEEE   EEEEEEEEEE
EEEEEEEEEE   EEEEEEEEEE   EEEEEEEEEE
EEEEEEEEEE   EEEEEEEEEE   EEEEEEEEEE
EEEEEEEEEE   EEEEEEEEEE   EEEEEEEEEE


Immediately we see that once the stack depth grows above a certain size (here 200/3 = 66), exceptions are always faster. This is not very interesting, because call stacks are usually not this deep (enterprise Java notwithstanding). For lower depths there is a lot of noise, especially for GCC. However, we can see that for small error rates exceptions are sometimes faster, especially with Clang. Because of the noise we redid the measurements several times. This changed the individual measurements but produced the same overall shape, where small error rates seem to favor exceptions but error codes become faster as the error rate grows.

On a Raspberry Pi the results look like this:

GCC 4.9.2

eeccCCCCCC
EeeeccccCC
Eeeeeecccc
Eeeeeeeccc
EEeeeeeeec
EEEEEeeeee
EEEEEEEEee
EEEEEEEEEE
EEEEEEEEEE
EEEEEEEEEE

Here the results are a lot clearer and more consistent, probably due to the simpler architecture of the ARM processor. Exceptions are faster except for small call depths and large error rates. When using 50 functions, error objects only become noticeably faster at a 4% error rate.

Finally let's do the same measurements on an i5-6600K.

GCC 4.8.4    gcc 5.4.0   Clang 3.{4, 6, 8}

ECCCCCCCCCC eeccCCCCCC EeeccCCCCC
EecCCCCCCCC EEEeeeeccc EEEEEeeeee
EEecCCCCCCC EEEEEEeeee  EEEEEEEEEE
EEEeecCCCCC EEEEEEEEEe  EEEEEEEEEE
EEEEeecccCC EEEEEEEEEE  EEEEEEEEEE
EEEEEeeeccc EEEEEEEEEE  EEEEEEEEEE
EEEEEEeeeec EEEEEEEEEE  EEEEEEEEEE
EEEEEEEEeee  EEEEEEEEEE  EEEEEEEEEE
EEEEEEEEEee  EEEEEEEEEE  EEEEEEEEEE
EEEEEEEEEEe  EEEEEEEEEE  EEEEEEEEEE

Here we see that the compiler used can make a massive difference. GCC 4.8.4 favors error codes but 5.4.0 is the opposite as are all versions of Clang (which produce identical results). On this platform exceptions seem to be even more performant than on the Pi.

The top left corner seems to be the most interesting part of this entire diagram, as it represents shallow call chains with few errors. This is similar to most real world code used in, for example, XML or JSON parsing. Let's do some more measurements focusing on this area.

i5               Raspberry Pi

GCC    Clang     GCC    Clang
5.4.0  3.8       4.9.2  3.5.0

cCCCC eCCCC    eCCCC eCCCC
ecCCC EeCCC    ecCCC ecCCC
ecCCC EeccC    eccCC eccCC
eccCC EEecc    eecCC eecCC
eeccC EEEee     eeccC eeccC


In these measurements the parameters go from 10 functions at the top row to 50 in the bottom row in increments of 10, and the columns go from 0% error on the left to 4% on the right in increments of 1. Here we see that exceptions keep their performance advantage when there are no errors, especially when using Clang, but error codes get better as soon as the error rate goes up even a little.

Conclusions

Whenever a discussion on C++ exceptions occurs, there is That Guy who comes in and says "C++ exceptions are slow, don't use them". At this point the discussion usually breaks down into fighting about whether exceptions are fast or not. These discussions are ultimately pointless, because asking whether exceptions are "fast" is the wrong question.

The proper question to ask is "under which circumstances are exceptions faster". As we have seen here, the answer is surprisingly complex. It depends on many factors, including the platform, compiler, code size and error rate. Blanket statements about the performance of X as compared to Y are usually simplistic, misleading and just plain wrong. Such is the case here as well.

(Incidentally if someone can explain why i7 has those weird performance fluctuations, please post in the comments.)

Thanks to Juhani Simola for running the i5 measurements.

Chisel – A fast TCP tunnel over HTTP


README.md

Chisel is a fast TCP tunnel, transported over HTTP. Single executable including both client and server. Written in Go (Golang). Chisel is mainly useful for passing through firewalls, though it can also be used to provide a secure endpoint into your network. Chisel is very similar to crowbar, though it achieves much higher performance. Warning: Chisel is currently beta software.

[Overview diagram]

Install

Binaries

See the latest release

Docker

docker run --rm -it jpillora/chisel --help

Source

$ go get -v github.com/jpillora/chisel

Features

Demo

A demo app on Heroku is running this chisel server:

$ chisel server --port $PORT --proxy http://example.com
# listens on $PORT, proxy web requests to 'http://example.com'

This demo app is also running a simple file server on :3000, which is normally inaccessible due to Heroku's firewall. However, if we tunnel in with:

$ chisel client https://chisel-demo.herokuapp.com 3000
# connects to 'https://chisel-demo.herokuapp.com',
# tunnels your localhost:3000 to the server's localhost:3000

and then visit localhost:3000, we should see a directory listing of the demo app's root. Also, if we visit the demo app in the browser we should hit the server's default proxy and see a copy of example.com.

Usage


    Usage: chisel [command] [--help]

    Version: 0.0.0-src

    Commands:
      server - runs chisel in server mode
      client - runs chisel in client mode

    Read more:
      https://github.com/jpillora/chisel

chisel server --help


    Usage: chisel server [options]

    Options:

      --host, Defines the HTTP listening host – the network interface
      (defaults to 0.0.0.0).

      --port, Defines the HTTP listening port (defaults to 8080).

      --key, An optional string to seed the generation of a ECDSA public
      and private key pair. All communications will be secured using this
      key pair. Share this fingerprint with clients to enable detection
      of man-in-the-middle attacks.

      --authfile, An optional path to a users.json file. This file should
      be an object with users defined like:
        "<user:pass>": ["<addr-regex>","<addr-regex>"]
        when <user> connects, their <pass> will be verified and then
        each of the remote addresses will be compared against the list
        of address regular expressions for a match. Addresses will
        always come in the form "<host/ip>:<port>".

      --proxy, Specifies the default proxy target to use when chisel
      receives a normal HTTP request.

      -v, Enable verbose logging

      --help, This help text

    Read more:
      https://github.com/jpillora/chisel

chisel client --help


    Usage: chisel client [options] <server> <remote> [remote] [remote] ...

    server is the URL to the chisel server.

    remotes are remote connections tunnelled through the server, each of
    which come in the form:

        <local-host>:<local-port>:<remote-host>:<remote-port>

        * remote-port is required.
        * local-port defaults to remote-port.
        * local-host defaults to 0.0.0.0 (all interfaces).
        * remote-host defaults to 0.0.0.0 (server localhost).

        example remotes

            3000
            example.com:3000
            3000:google.com:80
            192.168.0.5:3000:google.com:80

    Options:

      --fingerprint, An optional fingerprint (server authentication)
      string to compare against the server's public key. You may provide
      just a prefix of the key or the entire string. Fingerprint 
      mismatches will close the connection.

      --auth, An optional username and password (client authentication)
      in the form: "<user>:<pass>". These credentials are compared to
      the credentials inside the server's --authfile.

      --keepalive, An optional keepalive interval. Since the underlying
      transport is HTTP, in many instances we'll be traversing through
      proxies, often these proxies will close idle connections. You must
      specify a time with a unit, for example '30s' or '2m'. Defaults
      to '0s' (disabled).

      -v, Enable verbose logging

      --help, This help text

    Read more:
      https://github.com/jpillora/chisel

See also programmatic usage.

Security

Encryption is always enabled. When you start up a chisel server, it will generate an in-memory ECDSA public/private key pair. The public key fingerprint will be displayed as the server starts. Instead of generating a random key, the server may optionally specify a key seed, using the --key option, which will be used to seed the key generation. When clients connect, they will also display the server's public key fingerprint. The client can force a particular fingerprint using the --fingerprint option. See the --help above for more information.

Authentication

Using the --authfile option, the server may optionally provide a users.json configuration file to create a list of accepted users. The client then authenticates using the --auth option. See users.json for an example authentication configuration file. See the --help above for more information.

Internally, this is done using the Password authentication method provided by SSH. Learn more about crypto/ssh here http://blog.gopheracademy.com/go-and-ssh/.

Performance

With crowbar, a connection is tunnelled by repeatedly querying the server with updates. This results in a large amount of HTTP and TCP connection overhead. Chisel overcomes this using WebSockets combined with crypto/ssh to create hundreds of logical connections, resulting in one TCP connection per client.

In this simple benchmark, we have:

                    (direct)
        .--------------->----------------.
       /    chisel         chisel         \
request--->client:2001--->server:2002---->fileserver:3000
       \                                  /
        '--> crowbar:4001--->crowbar:4002'
             client           server

Note, we're using an in-memory "file" server on localhost for these tests

direct

:3000 => 1 bytes in 1.440608ms
:3000 => 10 bytes in 658.833µs
:3000 => 100 bytes in 669.6µs
:3000 => 1000 bytes in 570.242µs
:3000 => 10000 bytes in 655.795µs
:3000 => 100000 bytes in 693.761µs
:3000 => 1000000 bytes in 2.156777ms
:3000 => 10000000 bytes in 18.562896ms
:3000 => 100000000 bytes in 146.355886ms

chisel

:2001 => 1 bytes in 1.393731ms
:2001 => 10 bytes in 1.002992ms
:2001 => 100 bytes in 1.082757ms
:2001 => 1000 bytes in 1.096081ms
:2001 => 10000 bytes in 1.215036ms
:2001 => 100000 bytes in 2.09334ms
:2001 => 1000000 bytes in 9.136138ms
:2001 => 10000000 bytes in 84.170904ms
:2001 => 100000000 bytes in 796.713039ms

~100MB in 0.8 seconds

crowbar

:4001 => 1 bytes in 3.335797ms
:4001 => 10 bytes in 1.453007ms
:4001 => 100 bytes in 1.811727ms
:4001 => 1000 bytes in 1.621525ms
:4001 => 10000 bytes in 5.20729ms
:4001 => 100000 bytes in 38.461926ms
:4001 => 1000000 bytes in 358.784864ms
:4001 => 10000000 bytes in 3.603206487s
:4001 => 100000000 bytes in 36.332395213s

~100MB in 36 seconds

See more test/

Known Issues

  • WebSockets support is required
    • IaaS providers all will support WebSockets
      • Unless an unsupporting HTTP proxy has been forced in front of you, in which case I'd argue that you've been downgraded to PaaS.
    • PaaS providers vary in their support for WebSockets
      • Heroku has full support
      • Openshift has full support though connections are only accepted on ports 8443 and 8080
      • Google App Engine has no support

Contributing

Changelog

  • 1.0.0 - Init
  • 1.1.0 - Swapped out simple symmetric encryption for ECDSA SSH

Todo

  • Better, faster tests
  • Expose a stats page for proxy throughput
  • Treat client stdin/stdout as a socket
  • Allow clients to act as an indirect tunnel endpoint for other clients
  • Keep local connections open and buffer between remote retries

MIT License

Copyright © 2015 Jaime Pillora <dev@jpillora.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


Attachment theory is having a breakout moment


By the end of our first year, we have stamped on our baby brains a pretty indelible template of how we think relationships work, based on how our parents or other primary caregivers treat us. From an evolutionary standpoint, this makes sense, because we need to figure out early on how to survive in our immediate environment.

“If you’re securely attached, that’s great, because you have the expectation that if you are distressed you will be able to turn to someone for help and feel you can be there for others,” said Miriam Steele, the co-director of the Center for Attachment Research at the New School for Social Research in New York.

It’s not so great if you are one of the 40 percent to 50 percent of babies who, a meta-analysis of research indicates, are insecurely attached because their early experiences were suboptimal (their caregivers were distracted, overbearing, dismissive, unreliable, absent or perhaps threatening). “Then you have to earn your security,” Dr. Steele said, by later forming secure attachments that help you override your flawed internal working model.

Given that the divorce rate is also 40 percent to 50 percent, it would seem that this is not an easy task. Indeed, researchers said, people who have insecure attachment models tend to be drawn to those who fit their expectations, even if they are treated badly. They may subconsciously act in ways that elicit insensitive, unreliable or abusive behavior, whatever is most familiar. Or they may flee secure attachments because they feel unfamiliar.

“Our attachment system preferentially sees things according to what has happened in the past,” said Dr. Amir Levine, a psychiatrist at Columbia University and the co-author of the book “Attached,” which explores how attachment behaviors affect the neurochemistry of the brain. “It’s kind of like searching in Google where it fills in based on what you searched before.”

But again, history is not necessarily destiny. Intervention programs at the New School and the University of Delaware are having marked success helping at-risk groups like teenage mothers change their attachment behaviors (often passed down through generations) and establish more secure relationships. Another attachment-based intervention strategy called Circle of Security, which has 19,000 trained facilitators in 20 countries, has also proved effective.

What these protocols have in common is promoting participants’ awareness of their attachment style, and their related sabotaging behaviors, as well as training on how to balance vulnerability and autonomy in relationships.

One reason attachment theory has “gained so much traction lately is its ideas and observations are so resonant with our daily lives,” said Kenneth Levy, an associate professor of psychology at Pennsylvania State University who researches attachment-oriented psychotherapy.

Indeed, if you look at the classic categories of attachment styles — secure; insecure anxious; insecure avoidant; and insecure disorganized — it’s pretty easy to figure out which one applies to you and others in your life. The categories stem from tens of thousands of observations of babies and toddlers whose caregivers leave them briefly, either alone or with a stranger, and then return, a test known as the “strange situation.” The labels can also apply to how adults behave toward loved ones in times of stress.

Secure children get upset when their caregivers leave, and run toward them with outstretched arms when they return. They fold into the caregiver and are quickly soothed. A securely attached adult similarly goes to a loved one for comfort and support when they, say, are passed over for a promotion at work or feel vulnerable or hurt. They are also eager to reciprocate when the tables are turned.

Children high on the insecure anxious end of the spectrum get upset when caregivers leave and may go to them when they return. But these children aren’t easily soothed, usually because the caregiver has proved to be an unreliable source of comfort in the past. They may kick and arch their back as if they are angry. As adults, they tend to obsess about their relationships and may be overly dramatic in order to get attention. They may hound romantic interests instead of taking it slow.

Insecure avoidant children don’t register distress when their caregivers leave (although their stress hormones and heart rate may be sky high) and they don’t show much interest when caregivers return, because they are used to being ignored or rebuffed. Alternatively, a parent may have smothered them with too much attention. Insecure avoidant adults tend to have trouble with intimacy and are more likely to leave relationships, particularly if they are going well. They may not return calls and resist talking about their feelings.

Finally, insecure disorganized children and adults display both anxious and avoidant behaviors in an illogical and erratic manner. This behavior is usually the lingering result of situations where a childhood caregiver was threatening or abusive.

Tools to determine your dominant attachment style include the Adult Attachment Interview, which is meant to be administered by a clinician, or self-report questionnaires like the Attachment Styles and Close Relationships Survey. But critics said their accuracy depends on the skill and training of the interviewer in the case of the former and the self-awareness of the test taker in the latter, which perhaps explains why you can take both tests and end up in different categories.

“It can also be possible that people should be viewed as along a continuum in all categories,” said Glenn I. Roisman, the director of the Relationships Research Lab at the University of Minnesota in Minneapolis.

It’s worth noting that just as people in the insecure categories can become more secure when they form close relationships with secure people, secure people can become less so if paired with people who are insecure. “You need social context to sustain your sense of security,” said Peter Fonagy, a professor of psychoanalysis at University College London.

He added that having secure attachments is not about being a perfect parent or partner but about maintaining communication to repair the inevitable rifts that occur. In the daily battering of any relationship, Dr. Fonagy said, “if free flow of communication is impaired, the relationship is, too.”


RustgreSQL

Hi all,

Is anyone working on porting PostgreSQL to Rust?

Corrode looks a bit limited for the task, but maybe it can be a start.
It doesn't support goto or switch, but maybe the goto patterns are not too
complicated.

My motivation is primarily that I don't want to learn all the over-complicated
details of C, but at the same time I would like to be productive in a safe
systems language, a category in which Rust seems to be alone.

Porting PostgreSQL to Rust would be a multi-year project,
and it could only be done if the process could be fully automated,
by supporting all the coding patterns used by the project,
otherwise a Rust-port would quickly fall behind the master branch.
But if all git commits could be automatically converted to Rust,
then the RustgreSQL project could pull all commits from upstream
until all development has switched over to Rust among all developers.

Is this completely unrealistic, or is it carved in stone that PostgreSQL will
always be a C project forever and ever?


Braess’ paradox

From Wikipedia, the free encyclopedia

Braess' paradox or Braess's paradox is a proposed explanation for how a seeming improvement to a road network can impede traffic through it. It was discovered in 1968 by mathematician Dietrich Braess, who noticed that adding a road to a congested road traffic network could increase overall journey time, and it has been used to explain instances of improved traffic flow when existing major roads are closed.

The paradox may have analogues in electrical power grids and biological systems. It has been suggested that in theory, the improvement of a malfunctioning network could be accomplished by removing certain parts of it.

Discovery and definition

Dietrich Braess, a mathematician at Ruhr University, Germany, noticed the flow in a road network could be impeded by adding a new road, when he was working on traffic modelling. His idea was that if each driver is making the optimal self-interested decision as to which route is quickest, a shortcut could be chosen too often for drivers to have the shortest travel times possible. More formally, the idea behind Braess' discovery is that the Nash equilibrium may not equate with the best overall flow through a network.[1]

The paradox is stated as follows:

"For each point of a road network, let there be given the number of cars starting from it, and the destination of the cars. Under these conditions one wishes to estimate the distribution of traffic flow. Whether one street is preferable to another depends not only on the quality of the road, but also on the density of the flow. If every driver takes the path that looks most favorable to him, the resultant running times need not be minimal. Furthermore, it is indicated by an example that an extension of the road network may cause a redistribution of the traffic that results in longer individual running times."

Adding extra capacity to a network when the moving entities selfishly choose their route can in some cases reduce overall performance. That is because the Nash equilibrium of such a system is not necessarily optimal. The network change induces a new game structure which leads to a (multiplayer) prisoner's dilemma. In a Nash equilibrium, drivers have no incentive to change their routes. While the system is not in a Nash equilibrium, individual drivers are able to improve their respective travel times by changing the routes they take. In the case of Braess' paradox, drivers will continue to switch until they reach Nash equilibrium despite the reduction in overall performance.

If the latency functions are linear, adding an edge can never make total travel time at equilibrium worse by a factor of more than 4/3.[2]

Possible instances of the paradox in action

Prevalence

In 1983, Steinberg and Zangwill provided, under reasonable assumptions, the necessary and sufficient conditions for Braess' paradox to occur in a general transportation network when a new route is added. (Note that their result applies to the addition of any new route, not just to the case of adding a single link.) As a corollary, they obtain that Braess' paradox is about as likely to occur as not occur; their result applies to random rather than planned networks and additions.[3]

Traffic

In 1968, Dietrich Braess showed that the "extension of the road network may cause a redistribution of the traffic that results in longer individual running times". This paradox has a counterpart in case of a reduction of the road network (which may cause a reduction of individual commuting time).[4]

In Seoul, South Korea, a speeding up in traffic around the city was seen when a motorway was removed as part of the Cheonggyecheon restoration project.[5] In Stuttgart, Germany, after investments into the road network in 1969, the traffic situation did not improve until a section of newly built road was closed for traffic again.[6] In 1990 the temporary closing of 42nd Street in New York City for Earth Day reduced the amount of congestion in the area.[7] In 2008 Youn, Gastner and Jeong demonstrated specific routes in Boston, New York City and London where that might actually occur and pointed out roads that could be closed to reduce predicted travel times.[8] In 2009, New York experimented with closures of Broadway at Times Square and Herald Square, which resulted in improved traffic flow and permanent pedestrian plazas.[9]

In 2012, Paul Lecroart, of the institute of planning and development of the Île-de-France, wrote that "Despite initial fears, the removal of main roads does not cause deterioration of traffic conditions beyond the starting adjustments. The traffic transfer are limited and below expectations".[4] He also notes that some motorized travels are not transferred on public transport and simply disappear ("evaporate").[4]

The same phenomenon was also observed when road closing was not part of an urban project but the consequence of an accident. In 2012 in Rouen, a bridge was burned by an accident; during the two following years, other bridges were more used, but the total number of cars crossing bridges was reduced.[4] Similarly, in 2015 in Warsaw, a bridge was closed; authorities observed an increased use of other roads and public transport, but half of the vehicles usually crossing the bridge "disappeared" (52,000 out of 105,000 daily).[4]

Electricity

In 2012, scientists at the Max Planck Institute for Dynamics and Self-Organization demonstrated, through computational modeling, the potential for the phenomenon to occur in power transmission networks where power generation is decentralized.[10] In 2012, an international team of researchers from Institut Néel (CNRS, France), INP (France), IEMN (CNRS, France) and UCL (Belgium) published in Physical Review Letters[11] a paper showing that Braess' paradox may occur in mesoscopic electron systems. In particular, they showed that adding a path for electrons in a nanoscopic network paradoxically reduced its conductance. That was shown both by theoretical simulations and by experiments at low temperature using scanning gate microscopy.

Biology

According to Adilson E. Motter possible Braess' paradox outcomes may exist in many biological systems. Motter suggests cutting out part of a damaged network could rescue it. For resource management of endangered species food webs, in which extinction of many species might follow sequentially, a deliberate elimination of a doomed species from the network could be used to bring about the positive outcome of preventing a series of further extinctions.[1]

Team sports strategy

It has been suggested that in basketball, a team can be seen as a network of possibilities for a route to scoring a basket, with a different efficiency for each pathway, and a star player could reduce the overall efficiency of the team, analogous to a shortcut that is overused increasing the overall times for a journey through a road network. A proposed solution for maximum efficiency in scoring is for a star player to shoot about the same number of shots as teammates.[12]

In soccer, Helenio Herrera is well known for his quote "with 10 [players] our team plays better than with 11".

Mathematical approach

Example

[Figure: Braess paradox road example]

Consider a road network as shown in the adjacent diagram, on which 4000 drivers wish to travel from point Start to End. The travel time in minutes on the Start-A road is the number of travelers (T) divided by 100, and on Start-B it is a constant 45 minutes (likewise with the roads across from them). If the dashed road does not exist (so the traffic network has 4 roads in total), the time needed to drive the Start-A-End route with a drivers would be a/100 + 45, and the time needed to drive the Start-B-End route with b drivers would be b/100 + 45. If either route were shorter, it would not be a Nash equilibrium: a rational driver would switch from the longer route to the shorter route. As there are 4000 drivers, the fact that a + b = 4000 can be used to derive the fact that a = b = 2000 when the system is at equilibrium. Therefore, each route takes 2000/100 + 45 = 65 minutes.

Now suppose the dashed line is a road with an extremely short travel time of approximately 0 minutes. In that situation, all drivers will choose the Start-A route rather than the Start-B route, because Start-A will take only 4000/100 = 40 minutes at its worst, while Start-B is guaranteed to take 45 minutes. Once at point A, every rational driver will elect to take the "free" road to B and from there continue to End, because once again A-End is guaranteed to take 45 minutes while A-B-End will take at most 0 + 4000/100 = 40 minutes. Each driver's travel time is therefore 40 + 0 + 40 = 80 minutes, an increase from the 65 minutes required when the fast A-B road did not exist. No driver has an incentive to switch, as the two original routes (Start-A-End and Start-B-End) are both now 40 + 45 = 85 minutes. If every driver were to agree not to use the A-B path, every driver would benefit by reducing their travel time by 15 minutes. However, because any single driver will always benefit by taking the A-B path, the socially optimal distribution is not stable and so Braess' paradox occurs.
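The arithmetic can be checked directly; the Python sketch below hard-codes the travel-time functions from the diagram (T/100 minutes on Start-A and B-End, a constant 45 minutes on the other two roads, roughly 0 on the shortcut):

def route_times(n_a, n_shortcut=0):
    """Travel times (minutes) on the example network.

    n_a        -- drivers entering via Start-A (including shortcut users)
    n_shortcut -- drivers taking the zero-cost A-B shortcut, who also load B-End
    """
    n_b = 4000 - n_a                  # everyone else enters via Start-B
    start_a = n_a / 100               # Start-A cost: T/100
    b_end = (n_b + n_shortcut) / 100  # B-End cost: T/100
    return {
        "Start-A-End": start_a + 45,
        "Start-B-End": 45 + b_end,
        "Start-A-B-End": start_a + b_end,  # only meaningful when the shortcut exists
    }

# Without the shortcut, 2000 drivers per route: both routes take 65 minutes.
print(route_times(2000))
# With the shortcut, all 4000 drivers take Start-A-B-End: 80 minutes each,
# while the two original routes would now take 85 minutes.
print(route_times(4000, n_shortcut=4000))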

Existence of an equilibrium

If one assumes the travel time for each person driving on an edge to be equal, an equilibrium will always exist.

Let L_e(x) be the formula for the travel time of each person traveling along edge e when x people take that edge. Suppose there is a traffic graph with x_e people driving along edge e. Let the energy of edge e, E(e), be

E(e) = L_e(1) + L_e(2) + ... + L_e(x_e)

(if x_e = 0, let E(e) = 0). Let the total energy of the traffic graph be the sum of the energies of every edge in the graph.

Take a choice of routes that minimizes the total energy. Such a choice must exist because there are finitely many choices of routes. That will be an equilibrium.

Assume, for contradiction, this is not the case. Then there is at least one driver who can switch routes and improve the travel time. Suppose the original route is e_0, e_1, ..., e_n while the new route is e'_0, e'_1, ..., e'_m. Let E be the total energy of the traffic graph, and consider what happens when the original route is removed. The energy of each of its edges e_i will be reduced by L_{e_i}(x_{e_i}), and so E will be reduced by the sum of these terms, which is simply the total travel time needed to take the original route. If the new route is then added, E will be increased by the total travel time needed to take the new route. Because the new route is shorter than the original route, E must decrease relative to the original configuration, contradicting the assumption that the original set of routes minimized the total energy.

Therefore, the choice of routes minimizing total energy is an equilibrium.

Finding an equilibrium

The above proof outlines a procedure known as best response dynamics, which finds an equilibrium for a linear traffic graph and terminates in a finite number of steps. The algorithm is termed "best response" because at each step of the algorithm, if the graph is not at equilibrium then some driver has a best response to the strategies of all other drivers and switches to that response.

Pseudocode for Best Response Dynamics:

 Let P be some traffic pattern.
 while P is not at equilibrium:
   compute the potential energy E of P
   for each driver d in P:
     for each alternate path p available to d:
       compute the potential energy n of the pattern when d takes path p
       if n < E:
         modify P so that d takes path p
         continue the topmost while

At each step, if some particular driver could do better by taking an alternate path (a "best response"), doing so strictly decreases the energy of the graph. If no driver has a best response, the graph is at equilibrium. Since the energy of the graph strictly decreases with each step, the best response dynamics algorithm must eventually halt.
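For illustration, here is a small self-contained Python sketch of best response dynamics on the four-road example network above, with the route costs hard-coded (a toy version of the general procedure, not a general implementation):

from collections import Counter

N = 4000
ROUTES = ["A", "B", "AB"]  # Start-A-End, Start-B-End, Start-A-B-End (shortcut)

def route_cost(route, counts):
    on_start_a = counts["A"] + counts["AB"]  # drivers loading the Start-A link
    on_b_end = counts["B"] + counts["AB"]    # drivers loading the B-End link
    if route == "A":
        return on_start_a / 100 + 45
    if route == "B":
        return 45 + on_b_end / 100
    return on_start_a / 100 + on_b_end / 100  # the shortcut itself costs ~0

def best_response_dynamics(choices):
    changed = True
    while changed:
        changed = False
        counts = Counter(choices)
        for i, current in enumerate(choices):
            def cost_if(route):
                # Cost of `route` if driver i alone switches to it.
                c = counts.copy()
                c[current] -= 1
                c[route] += 1
                return route_cost(route, c)
            best = min(ROUTES, key=cost_if)
            if cost_if(best) < cost_if(current) - 1e-9:
                counts[current] -= 1
                counts[best] += 1
                choices[i] = best
                changed = True
    return Counter(choices)

# Everyone starts on Start-A-End; with the shortcut available, best responses
# drive all 4000 drivers onto the shortcut at 80 minutes each, even though
# 65 minutes per driver was achievable without it.
final = best_response_dynamics(["A"] * N)
print(final, route_cost("AB", final))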

How far from optimal is traffic at equilibrium?

If the travel time functions are linear, that is L_e(x) = a_e x + b_e for some a_e, b_e ≥ 0, then at worst, traffic in the energy-minimizing equilibrium is twice as bad as socially optimal.[13]

Proof: Let Z be some traffic configuration, with associated energy E(Z) and total travel time T(Z). For each edge, the energy is the sum of an arithmetic progression, and using the formula for the sum of an arithmetic progression, one can show that E(Z) ≤ T(Z) ≤ 2E(Z). If Z_o is the socially optimal traffic flow and Z_e is the energy-minimizing traffic flow, then T(Z_e) ≤ 2E(Z_e) ≤ 2E(Z_o) ≤ 2T(Z_o).

Thus, the total travel time for the energy-minimizing equilibrium is at most twice as bad as for the optimal flow.

Dynamics analysis of Braess' paradox

In 2013, Dal Forno and Merlone [14] interpret Braess' paradox as a dynamical ternary choice problem. The analysis shows how the new path changes the problem. Before the new path is available, the dynamics is the same as in binary choices with externalities, but the new path transforms it into a ternary choice problem. The addition of an extra resource enriches the complexity of the dynamics. In fact, there can even be coexistence of cycles, and the implication of the paradox on the dynamics can be seen from both a geometrical and an analytical perspective.

See also

References

  1. ^ abNew Scientist, 42nd St Paradox: Cull the best to make things better, 16 January 2014 by Justin Mullins
  2. ^Roughgarden, Tim; Tardos, Éva. "How Bad is Selfish Routing?"(PDF). Journal of the ACM. Archived(PDF) from the original on 2016-04-09. Retrieved 2016-07-18.
  3. ^Steinberg, R.; Zangwill, W. I. (1983). "The Prevalence of Braess' Paradox". Transportation Science. 17 (3): 301. doi:10.1287/trsc.17.3.301.
  4. ^ abcde(French) Olivier Razemon, "Le paradoxe de l'« évaporation » du trafic automobile", Le monde, Thursday 25 August 2016, page 5. Published on-line as "Et si le trafic s’évaporait ?" on 24 August 2016 and updated on 25 August 2016 (page visited on 19 September 2016).
  5. ^Easley, D.; Kleinberg, J. (2008). Networks. Cornell Store Press. p. 71.
  6. ^Knödel, W. (31 January 1969). Graphentheoretische Methoden Und Ihre Anwendungen. Springer-Verlag. pp. 57–59. ISBN 978-3-540-04668-4.
  7. ^Kolata, Gina (1990-12-25). "What if They Closed 42d Street and Nobody Noticed?". New York Times. Retrieved 2008-11-16.
  8. ^Youn, Hyejin; Gastner, Michael; Jeong, Hawoong (2008). "Price of Anarchy in Transportation Networks: Efficiency and Optimality Control". Physical Review Letters. 101 (12): 128701. arXiv:0712.1598. Bibcode:2008PhRvL.101l8701Y. doi:10.1103/PhysRevLett.101.128701. ISSN 0031-9007. PMID 18851419.
  9. ^http://www.uh.edu/engines/epi2814.htm
  10. ^Staff (Max Planck Institute) (September 14, 2012), "Study: Solar and wind energy may stabilize the power grid", R&D Magazine, rdmag.com, retrieved September 14, 2012
  11. ^Pala, M. G.; Baltazar, S.; Liu, P.; Sellier, H.; Hackens, B.; Martins, F.; Bayot, V.; Wallart, X.; Desplanque, L.; Huant, S. (2012) [6 Dec 2011 (v1)]. "Transport Inefficiency in Branched-Out Mesoscopic Networks: An Analog of the Braess Paradox". Physical Review Letters. 108 (7). arXiv:1112.1170Freely accessible. Bibcode:2012PhRvL.108g6802P. doi:10.1103/PhysRevLett.108.076802. ISSN 0031-9007.
  12. ^The price of Anarchy in Basketball, Brian Skinner
  13. ^Easley, David; Kleinberg, Jon. "Networks, Crowds, and Markets: Reasoning about a Highly Connected World (8.3 Advanced Material: The Social Cost of Traffic at Equilibrium)"(PDF). Jon Kleinberg's Homepage. Jon Kleinberg. Archived(PDF) from the original on 2015-03-16. Retrieved 2015-05-30. - This is the preprint of ISBN 9780521195331
  14. ^Dal Forno, Arianna; Merlone, Ugo (2013). "Border-collision bifurcations in a model of Braess paradox". Mathematics and Computers in Simulation. 87: 1–18. doi:10.1016/j.matcom.2012.12.001. ISSN 0378-4754.

Further reading

  • D. Braess, "Über ein Paradoxon aus der Verkehrsplanung" ["On a paradox of traffic planning"], Unternehmensforschung 12, 258–268 (1968)
  • Katharina Belaga-Werbitzky, Das Paradoxon von Braess in erweiterten Wheatstone-Netzen mit M/M/1-Bedienern [The Braess Paradox in Extended Wheatstone Networks with M/M/1 Servers], ISBN 3-89959-123-2
  • A translation of the Braess 1968 article from German to English appears as "On a paradox of traffic planning", by D. Braess, A. Nagurney, and T. Wakolbinger, Transportation Science, volume 39, 2005, pp. 446–450.
  • Irvine, A. D. (1993). "How Braess' paradox solves Newcomb's problem". International Studies in the Philosophy of Science. 7 (2): 141. doi:10.1080/02698599308573460.
  • Steinberg, R.; Zangwill, W. I. (1983). "The Prevalence of Braess' Paradox". Transportation Science. 17 (3): 301. doi:10.1287/trsc.17.3.301.
  • A. Rapoport, T. Kugler, S. Dugar, and E. J. Gisches, "Choice of routes in congested traffic networks: Experimental tests of the Braess Paradox", Games and Economic Behavior 65 (2009)
  • T. Roughgarden, "The Price of Anarchy", MIT Press, Cambridge, MA, 2005.


Origins of Python's “Functional” Features

I have never considered Python to be heavily influenced by functional languages, no matter what people say or think. I was much more familiar with imperative languages such as C and Algol 68, and although I had made functions first-class objects, I didn't view Python as a functional programming language. However, early on, it was clear that users wanted to do much more with lists and functions.

A common operation on lists was that of mapping a function to each of the elements of a list and creating a new list. For example:

def square(x):
    return x*x

vals = [1, 2, 3, 4]
newvals = []
for v in vals:
    newvals.append(square(v))

In functional languages such as Lisp and Scheme, operations such as this were provided as built-in functions of the language. Thus, early users familiar with such languages found themselves implementing comparable functionality in Python. For example:
def map(f, s):
    result = []
    for x in s:
        result.append(f(x))
    return result

def square(x):
    return x*x

vals = [1, 2, 3, 4]
newvals = map(square,vals)

A subtle aspect of the above code is that many people didn't like the fact that you had to define the operation you were applying to the list elements as a completely separate function. Languages such as Lisp allowed functions to simply be defined "on-the-fly" when making the map function call. For example, in Scheme you can create anonymous functions and perform mapping operations in a single expression using lambda, like this:
(map (lambda (x) (* x x)) '(1 2 3 4))  

Although Python made functions first-class objects, it didn't have any similar mechanism for creating anonymous functions.

In late 1993, users had been throwing around various ideas for creating anonymous functions as well as various list manipulation functions such as map(), filter(), and reduce(). For example, Mark Lutz (author of "Programming Python") posted some code for a function that created functions using exec:

def genfunc(args, expr):
    exec('def f(' + args + '): return ' + expr)
    return eval('f')

# Sample usage
vals = [1, 2, 3, 4]
newvals = map(genfunc('x', 'x*x'), vals)

Tim Peters then followed up with a solution that simplified the syntax somewhat, allowing users to type the following:
vals = [1, 2, 3, 4]
newvals = map(func('x: x*x'), vals)

It was clear that there was a demand for such functionality. However, at the same time, it seemed pretty "hacky" to be specifying anonymous functions as code strings that you had to manually process through exec. Thus, in January, 1994, the map(), filter(), and reduce() functions were added to the standard library. In addition, the lambda operator was introduced for creating anonymous functions (as expressions) in a more straightforward syntax. For example:
vals = [1, 2, 3, 4]
newvals = map(lambda x:x*x, vals)

These additions represented a significant, early chunk of contributed code. Unfortunately I don't recall the author, and the SVN logs don't record this. If it's yours, leave a comment! UPDATE: As is clear from Misc/HISTORY in the repo these were contributed by Amrit Prem, a prolific early contributor.

I was never all that happy with the use of the "lambda" terminology, but for lack of a better and obvious alternative, it was adopted for Python. After all, it was the choice of the now anonymous contributor, and at the time big changes required much less discussion than nowadays, for better and for worse.

Lambda was really only intended to be a syntactic tool for defining anonymous functions. However, the choice of terminology had many unintended consequences. For instance, users familiar with functional languages expected the semantics of lambda to match that of other languages and, as a result, found Python's implementation to be sorely lacking in advanced features. One subtle problem was that the supplied expression couldn't refer to variables in the surrounding scope: in the following code, the map() call would break because the lambda would run with an undefined reference to the variable 'a'.

def spam(s):
    a = 4
    r = map(lambda x: a*x, s)

There were workarounds to this problem, but they counter-intuitively involved setting default arguments and passing hidden arguments into the lambda expression. For example:
def spam(s):
    a = 4
    r = map(lambda x, a=a: a*x, s)

The "correct" solution to this problem was for inner functions to implicitly carry references to all of the local variables in the surrounding scope that are referenced by the function. This is known as a "closure" and is an essential aspect of functional languages. However, this capability was not introduced in Python until the release of version 2.2 (though it could be imported "from the future" in Python 2.1).

Curiously, the map, filter, and reduce functions that originally motivated the introduction of lambda and other functional features have to a large extent been superseded by list comprehensions and generator expressions. In fact, the reduce function was removed from the list of built-in functions in Python 3.0. (However, it's not necessary to send in complaints about the removal of lambda, map or filter: they are staying. :-)
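For comparison, here is the earlier squaring example written with the constructs that largely replaced those functions (a small illustrative sketch; in Python 3, reduce() lives in functools):

from functools import reduce

vals = [1, 2, 3, 4]
newvals = [x * x for x in vals]             # list comprehension replacing map()
odds = [x for x in vals if x % 2]           # comprehension with a condition replacing filter()
total = sum(x * x for x in vals)            # generator expression feeding a builtin
product = reduce(lambda a, b: a * b, vals)  # reduce(), now imported from functools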

It is also worth noting that even though I didn't envision Python as a functional language, the introduction of closures has been useful in the development of many other advanced programming features. For example, certain aspects of new-style classes, decorators, and other modern features rely upon this capability.
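As one illustrative example (not from the original post), a simple decorator can use a closure to keep per-function state between calls:

def counted(func):
    count = 0                      # lives on in the closure between calls
    def wrapper(*args, **kwargs):
        nonlocal count
        count += 1
        print("call #%d to %s" % (count, func.__name__))
        return func(*args, **kwargs)
    return wrapper

@counted
def square(x):
    return x * x

square(3)   # call #1 to square
square(4)   # call #2 to square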

Lastly, even though a number of functional programming features have been introduced over the years, Python still lacks certain features found in “real” functional programming languages. For instance, Python does not perform certain kinds of optimizations (e.g., tail recursion). In general, because of Python's extremely dynamic nature, it is impossible to do the kind of compile-time optimization known from functional languages like Haskell or ML. And that's fine.
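A tiny illustration of the tail-recursion point (the function names here are made up for this sketch): even a tail call creates a new stack frame in Python, so deep recursion fails where the equivalent loop does not:

def count_down(n):
    return "done" if n == 0 else count_down(n - 1)   # a tail call, but still a new frame

def count_down_loop(n):
    while n:
        n -= 1
    return "done"

print(count_down_loop(10**6))      # fine
try:
    count_down(10**6)              # blows past the default recursion limit
except RecursionError:
    print("RecursionError: Python does not eliminate tail calls")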

For 15 Years, New Orleans Was Divided into Three Separate Cities

In 1803, when the United States bought New Orleans, along with the rest of the land in the Louisiana Purchase, the city had only about 8,000 people living in it. Planned on a tight grid, the city stretched just eleven blocks along a curve of the Mississippi River and six blocks back from the levee, to Rampart Street.

A little more than three decades later, New Orleans had become a world port, and in 1836 edged out New York City as the busiest export center in the United States. The population had grown to more than 60,000 people—many of them Anglo-Americans who, to the alarm of the city’s Francophone natives, had flocked to the port to make their fortunes.

Today, New Orleans trades on the languid charm of its French and Spanish past to lure visitors to its historic center, but in the early 19th century, American newcomers to the city had no patience for Louisiana’s old guard, known as creoles, whom they saw as Catholic, corrupt, and overly permissive to the city’s large population of slaves and free people of color.

But in New Orleans, the American upstarts lacked the political power to change the city’s ways as fast as they would like. Instead, in 1836, the city’s Anglo-Americans convinced the state legislature to split New Orleans into pieces—three semi-autonomous municipalities divided along ethnic lines. For more than 15 years, the city was divided, while the Americans consolidated their power and re-shaped the city to their own ends.

Map of New Orleans made from an 1815 survey. (Library of Congress, Geography and Map Division)

The idea of dividing New Orleans between the Francophone old guard and the Anglophone newcomers first came up in the 1820s, when rival military factions would challenge each other’s authority, edging towards but never quite exploding into violent conflict. In his book Louisiana in the Age of Jackson, Joseph G. Tregle, Jr., the Louisiana historian who has studied this period most closely, tells how American-led sections of the militia would refuse to follow orders from foreign French officers who still held military positions. When Americans held command, the French would reciprocate.

At times, different factions of the militia would march through the streets, showing off their defiance and power, and it was during one such conflict that a local American editor, R.D. Richardson, started calling for the city to be split along ethnic lines. A rival French editor, who had fought for Napoleon, responded by offering five dollars to anyone who’d give Richardson a good whap over the head.

There were two main areas in which the entrenched French creoles made the incoming Americans crazy. The first was infrastructure: when the U.S. bought the Louisiana territory, New Orleans had no paved roads, no street signs, and no colleges. Much of the population was illiterate, and justice was dispensed according to the French legal code: Tregle calls the place “a colonial backwash of French and Spanish imperialism.”

The French Quarter, later in the 19th century. (Library of Congress/LC-DIG-det-4a26981)

The second was the permissive culture: Sunday in New Orleans meant sitting at a café and going out dancing, or perhaps to a horse race. In this city, black and white people mingled more freely than elsewhere in America, and even slaves had more leeway to move about than in other cities.

All this shocked the Protestant, Puritan-minded American settlers, many of whom came from places in the South where the movement of black people was highly restricted and regulated. (Meanwhile, the native creole population was appalled by the crude Americans, whom they called Kaintucks and vulgar Yankees.) The Anglo-American settlers tried to change everything from the city’s laws to the looser culture, but even as they gained control of New Orleans’ commercial life, they did not have enough political power to mold the city as they would have liked.

After the conflicts of the 1820s, the newcomers kept trying to split the city—if they couldn’t fix the whole place, at least they could control part of it. About a decade later, in 1836, Anglo-Americans finally got their way. New Orleans was divided into three municipalities, one Anglophone and two Francophone.

New Orleans in 1845. (Library of Congress, Geography and Map Division)

The three sections of split New Orleans roughly correspond to today’s Central Business District, French Quarter, and Marigny neighborhood. The Second Municipality, which the Americans controlled, began at Canal Street and went upriver, past the place where the Pontchartrain Expressway cuts through the city today. The First Municipality included all of the French Quarter, from Canal to Esplanade Avenue, and the Third Municipality stretched downriver from there, through the Marigny towards today’s Bywater neighborhood. Each municipality had its own police force, its own schools, and its own infrastructure and services. In the Second Municipality, English was the language of commerce and government; in the other two, French dominated.

When this story is told quickly, it’s usually said that New Orleans was a place neatly divided between old and new, French and English, a more multiracial society and a white one, with Canal Street as the clear border. But according to the work of Tregle and later historians, the divide was not initially so stark as legend might have it.

When Americans started moving to New Orleans, they moved first to the French Quarter, on the upriver side, closer to Canal Street. Only once that section of town filled up did they continue building into Faubourg Ste. Marie, the newer suburb on the far side of Canal. It was always desirable to live in the French Quarter, where by the 1830s Chartres Street had become a major commercial thoroughfare, dotted with American shops selling books, jewelry, and other goods. On the other side of Canal Street, it was common for prominent Francophone creoles to make their home in Ste. Marie, too. To the extent that there was a clear dividing line between the old and new populations, it was at St. Louis Street, close to the center of the city.

New Orleans, 1852. (Library of Congress/LC-DIG-ppmsca-09333)

But once the city was divided into three political entities, the distinctions between the Second Municipality and the other two started to grow. Trade in the Second Municipality thrived, and bustling warehouses and insurance companies started going up on Camp and Magazine streets.

The growing economy led to a housing boom, where Greek revival architecture mixed with the brick-faced warehouses of a northeastern port city; one traveler noted that the American part of the city “lacked that mellowness of age and charm of the bizarre which set old New Orleans apart.” Whatever it lacked in beauty, though, it made up for in wealth and city services: the Second Municipality soon had a modern school system, as well as new wharves, gas lights, and paved streets.

On the other side of town, the area below Jackson Square was suffering, as poverty increased and the old French houses aged. The Third Municipality, sometimes called “the Dirty Third,” was in particularly bad shape, since the waste of the rest of the city floated downriver to pollute its shores, air, and water. By the time the city was reunited, in 1852, its wealth had concentrated upriver: Tregle found that by 1860 the area upriver of St. Louis Street had 76 percent of the city’s taxable property.

Bird’s eye view, 1863. (Library of Congress, Geography and Map Division)

More than any other American city at the time, New Orleans could claim to be a diverse place where people of color had more freedom than elsewhere in the South. But during the period when white newcomers divided New Orleans, the American values and laws that were shaping both the city and the new state of Louisiana were quickly reducing those freedoms. The values imported to New Orleans included not just an emphasis on better infrastructure and education, but also more legally encoded, strictly enforced racism.

“On the upper side of town…white inhabitants were often times more hostile to the very notion of free people of color,” writes Richard Campanella, a contemporary historian who studies New Orleans geography. The white Americans who were gaining power were uncomfortable with the rights that slaves and free people of color had in the city and sought to restrict their movements and freedoms. There was also a second major wave of white people coming to the city, who saw themselves as being in conflict with the black population. During the 1830s, immigrants from Ireland and Germany flocked to New Orleans and made it a majority white city for the first time: in 1835, white workers protested black employment in certain desirable jobs.

As much as the division allowed trade to thrive and infrastructure to improve in parts of New Orleans, it was mostly to the benefit of white, American-born people. When the city did merge back into one, it was only because the Anglo-Americans had enough power to control not just commerce but city politics, too. With the influx of immigrants, they outnumbered the Francophone old guard. The immigrant-heavy suburb of Lafayette was incorporated into the city, becoming today’s Garden District. Only once the Anglo-Americans could shift the whole city to their own ways did they let it become whole again.
