
PaperTTY – Python module to render a TTY on e-ink


This is an experimental command-line driven Python module to render the contents of a Linux virtual terminal (/dev/tty[1-63]) or standard input onto a Waveshare e-Paper display. See list of supported displays.

Note: Testing has been minimal and I probably forgot something, so 'caveat utilitor'.

Note: I am also not affiliated with Waveshare in any way.

Updates

  • 2018-08-31
    • Installation instructions had an issue: running with sudo didn't respect the virtualenv - fixed
  • 2018-08-26
    • Now all the SPI models listed in Waveshare's Wiki have a driver implemented - all it needs now is some testing to see if any of it works
    • Added support for:
      • EPD 2.13" D (monochrome, flexible)
        • Note: the original driver for this model seemed to have been written by someone who didn't write the rest of the drivers - it appeared to be essentially broken and written while drunk, I tried to fix it but who knows if it'll work
  • 2018-08-25
    • Fixed a little issue with --nopartial: the screen was updated twice instead of once when enabled
    • Added support for:
      • EPD 5.83" (monochrome)
      • EPD 5.83" B (black/white/red)
      • EPD 5.83" C (black/white/yellow)
    • Just one still missing:
      • EPD 2.13" D (monochrome, flexible)
  • 2018-08-24
    • Major overhaul
      • Please create an issue (or pull request) if things don't work

      • Converted code to Python 3 only

      • Install by default into a virtualenv

        • Can co-exist with the previous version
      • Replace old PIL with Pillow

      • Bundled reorganized/modified display drivers to (hopefully) support more displays

        • Created class structure to reduce code duplication in the drivers (got rid of ~60%)
          • Perhaps overkill and more complicated, but meh, I just couldn't bear the repetition and accommodating all the models would have been messy otherwise
        • Supported models (SPI)
          • EPD 1.54" (monochrome) - [probably works, with partial refresh]
          • EPD 1.54" B (black/white/red)
          • EPD 1.54" C (black/white/yellow)
          • EPD 2.13" (monochrome) - [TESTED, with partial refresh]
          • EPD 2.13" B (black/white/red)
          • EPD 2.13" C (black/white/yellow) - should work with EPD2in13b
          • EPD 2.7" (monochrome)
          • EPD 2.7" B (black/white/red)
          • EPD 2.9" (monochrome) - [probably works, with partial refresh]
          • EPD 2.9" B (black/white/red)
          • EPD 2.9" C (black/white/yellow) - should work with EPD2in9b
          • EPD 4.2" (monochrome)
          • EPD 4.2" B (black/white/red)
          • EPD 4.2" C (black/white/yellow) - should work with EPD4in2b
          • EPD 7.5" (monochrome)
          • EPD 7.5" B (black/white/red)
          • EPD 7.5" C (black/white/yellow) - should work with EPD7in5b
        • Missing models
          • EPD 2.13" D (monochrome, flexible)
          • EPD 5.83" (monochrome)
          • EPD 5.83" B (black/white/red)
          • EPD 5.83" C (black/white/yellow)
        • Special drivers
          • Dummy - no-op driver
          • Bitmap - output frames as bitmap files (for debugging)
        • Note: PaperTTY doesn't use the red/yellow colors ... yet
        • See drivers/README.md
      • Some CLI changes related to driver selection

      • I learned that my particular unit has some flaw that means it doesn't do full refreshes properly (never has)

        • Doh!
        • I'll just assume it works as expected with other people's units
      • Added new video

      • Heard that a partial refresh LUT for the 7.5" is nontrivial to do if at all possible, so best not to get your hopes up too much regarding those - there will probably be better panels available eventually

  • 2018-08-16
    • Included a very tiny bitmap font, "Tom Thumb", to use as default
  • 2018-08-14
    • Added support for PIL bitmap fonts (autodetected)
  • 2018-08-13
    • After browsing the Waveshare Wiki a bit, it seems that the smaller models support partial refresh out of the box but the bigger ones need some (hopefully minor) modification, if I ever get one I'll look into it
      • My guess is that this code should work as-is on the Black/White 1.54", 2.13" and 2.9" models - no guarantees though
      • The 2.7", 4.2", 5.83", 7.5" models have slightly different code and need LUT modification or some hacking to achieve partial refresh, and I'm not sure if it's feasible to get it to work with the color models at all
      • Modifying the code to work with full refreshes should be pretty easy if you happen to own one of the bigger monochrome displays
        • This is now done, and may work with color ones as well (2018-08-24)

Some features

  • Designed to be used with a Raspberry Pi and Raspbian.
  • It should enable you to run interactive console programs (vim, tmux, irssi, nethack ...) and display whatever you want easily with scripts.
  • Especially with a small font, it is fast enough for interactive use but could be improved to be even faster. Also, it's quite a bit snappier on the Raspberry Pi 3 than the Zero.
  • Only the changed region is updated on the display, so typing is faster than full screen scrolling.
  • The cursor is also drawn and the image updated as it moves.
  • Flicker-free.
  • Allows changing the font, font size, orientation and some other parameters.
  • Supports TrueType and bitmap fonts (in PIL format).
  • Bundled with a systemd service unit to start the service early at boot and gracefully stop it.

It isn't perfect and has only been tested with the monochrome 2.13" HAT, but it might work for other models too, and allows you to at least try.

  • The PaperTTY code is in the public domain and you run it at your own risk.
  • The driver code (in drivers/) is GPL 3.0 licensed, because it is based on Waveshare's GPL code - you still run it at your own risk.

Screenshots

Collage of running various programs in tmux

Running Nethack outside in the noon sun, powered directly by a solar panel, connected to a Bluetooth keyboard

Action video - terminal usage (Raspberry Pi Zero W)

Showcasing input feedback.

Youtube Video

Action video 2 - cacafire (Raspberry Pi 3)

The RPi3 is noticeably faster - cacafire is 3x slower on the Zero. Typical terminal usage works pretty well.

Youtube Video

Installation

All of the code was written for Raspbian Stretch and Python 3.5+. These instructions assume you're going to run this on a Raspberry Pi, otherwise you're on your own.

The code includes a reimplementation/refactoring of the Waveshare reference drivers - unlike the rest of the code which is CC0, the drivers have the GPL 3.0 license, because that's what Waveshare used. The drivers for models that aren't in the repo have been acquired from their Wiki's demo code packages.

See the driver page for details and the supported models.

The earlier, initial version of PaperTTY (tag: v0.01) did not have instructions for using virtualenv (though it would work) - you can still run it as before using the system packages and alongside this new version. Using the virtualenv means that PIL and Pillow can also coexist on the same system.

Requirements

  • Enable SPI (sudo raspi-config)
    • Interfacing Options -> SPI -> Yes
    • Reboot
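After the reboot, a quick way to confirm SPI is enabled is to check that the SPI device nodes exist (this assumes the default SPI bus; the exact names may vary):

ls /dev/spidev*
# you should see something like /dev/spidev0.0 and /dev/spidev0.1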

Steps

  1. Clone the repo somewhere and enter the directory
    • git clone https://github.com/joukos/PaperTTY.git
    • cd PaperTTY
  2. Install virtualenv and libopenjp2
    • sudo apt install virtualenvwrapper python3-virtualenv libopenjp2-7
  3. Source the wrapper to use mkvirtualenv (you may want to add this to ~/.bashrc)
    • source /usr/share/virtualenvwrapper/virtualenvwrapper.sh
  4. Create the Python 3 virtualenv and install packages in requirements.txt
    • mkvirtualenv -p /usr/bin/python3 -r requirements.txt papertty
    • This will create ~/.virtualenvs/papertty which contains the required environment
  5. After creating the virtualenv, it should become active and you should see (papertty) on your prompt
    • Note: the software needs to be run with sudo in the typical case, so you need to explicitly start the interpreter within the virtualenv - otherwise the program attempts to import system packages instead
    • You should now be able to run sudo ~/.virtualenvs/papertty/bin/python3 ./papertty.py list to see the available drivers and start using the software
  6. Not really needed, but to (de)activate the virtualenv afterwards, run:
    • source ~/.virtualenvs/papertty/bin/activate - activate the virtualenv
      • Or, workon papertty if you have sourced virtualenvwrapper.sh
    • deactivate - deactivate the virtualenv

Alternative install without virtualenv, using system packages

  • If you don't care to use the virtualenv, just install the requirements as system packages:
    • sudo apt install python3-rpi.gpio python3-spidev python3-pil python3-click
    • And run the program directly: sudo ./papertty.py list

Fonts

You can use TrueType fonts or bitmap fonts, but the bitmap fonts need to be in the right format. With bitmap fonts the --size option is ignored.

Included as the default is a very small bitmap font called Tom Thumb; it is fairly readable for its tiny size and fits 20 rows with 62 columns on the 2.13". Thanks go to Brian Swetland and Robey Pointer for their work on the font and for releasing it under CC0.

Another included font is the nanofont, an extremely tiny (3x4 pixels) font also released under CC0. Thanks go to the author, Michael Pohoreski. The conversion was done by generating the BMP, transforming it with Pillow so that everything was on one line, then saving a BDF with Fony and converting that to PIL.

Why would you use such a microscopic font, I hear you ask? One good reason is that some programs refuse to start unless the terminal size is big enough, and this font lets you reach a theoretically readable terminal size and run those programs even on the smaller displays. One example is Dungeon Crawl Stone Soup, which wouldn't otherwise start on the 2.13" display (hooray!).

Playing the game like this would be quite challenging, however...

Unless you're happy with the awesome default font, find a nice monospaced TrueType or bitmap font: Andale Mono (sudo apt install ttf-mscorefonts-installer) is pretty great for very small sizes, and on the 2.13" (128x250 pixels) it can fit 17 rows and 50 columns.

  • You can use a proportional font but the terminal will probably look horrible

Pillow includes a utility called pilfont.py; you can use it to convert a BDF/PCF font file into a .pil and a .pbm (I didn't have luck with some fonts - remember to use the pilfont.py version that's on your Pi):

# convert Terminus 
gunzip -c /usr/share/fonts/X11/misc/ter-u12b_unicode.pcf.gz > terminus-12.pcf
pilfont.py terminus-12.pcf
# you should get terminus-12.pil that you can pass with the --font option

All font options expect a path to the font file - the system font directories are not searched for them.
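For example, to point PaperTTY at a TrueType font such as DejaVu Sans Mono (the path below is a guess at a typical Raspbian location - adjust it to wherever the font lives on your system):

sudo ~/.virtualenvs/papertty/bin/python3 ./papertty.py --driver epd2in13 terminal --font /usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf --size 8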

Usage

Remember to activate the virtualenv, then run sudo ./papertty.py to get help.

  • You'll want to use sudo unless you've set things up so that SPI works without it and you've given yourself read access to /dev/vcsa* (see the sketch just below)
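A rough sketch of that setup (group names and device permissions vary between systems, and the vcsa permissions reset at boot - treat this as a starting point rather than a recipe):

sudo usermod -a -G spi,gpio pi   # let the 'pi' user access SPI/GPIO (log out and back in afterwards)
sudo chmod a+r /dev/vcsa*        # allow reading the virtual console buffers (until the next reboot)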

To do anything, you'll need to tell the script which model you're using - in my case this would be epd2in13. Use the top-level option --driver to set the desired driver.

Append --help with the subcommands to get help with their parameters.

You can just edit papertty.py to your liking - the code is very simple and commented.

Top-level options

Option            Description                                                Default
--driver NAME     Select driver to use - required                            no default
--nopartial       Disable partial refresh even if the display supports it    disabled
--encoding NAME   Select encoding to use                                     utf-8

Note: The encoding settings are a bit questionable right now - encoding/decoding is done explicitly with errors ignored, but I think this needs some more work as it's not an entirely trivial issue. If you feel like there's a big dum-dum in the code regarding these, a PR is very appreciated.

Note 2: To get Scandinavian accents to show (ä, ö etc.), try --encoding cp852.

Commands

list - List display drivers

# Example
sudo ./papertty.py list

scrub - Scrub display

This command mostly makes sense with the partial refresh models, although you can run it with full refresh too - it's just going to take a pretty long time to run. I needed this because my own unit can't handle a full refresh so it was the only way to clear the screen properly!

If you're left with "burn-in" or the display doesn't seem to work properly, this usually helps to even it out (may even need to run it twice sometimes if the display is not in a steady state).

This will slowly fill the screen with bands of black, then white.

Option      Description                                              Default
--size N    Chunk width (pixels) to fill with (valid values: 8-32)   16
# Example
sudo ./papertty.py --driver epd2in13 scrub

stdin - Render standard input

Render stdin on the display, simple as that. Leaves the image on the display until something else overwrites it. Very useful for showing script output or just about anything that updates irregularly.

Option            Description                                                                       Default
--font FILENAME   Path to a TrueType or PIL font to use - strongly recommended to use monospaced    tom-thumb.pil
--size N          Font size                                                                         8
--width N         Fit to a particular width (characters)                                            display width / font width
--portrait        Enable portrait mode                                                              disabled
--nofold          Disable folding (i.e. don't wrap to width)                                        disabled
--spacing         Set line spacing                                                                  0
# Example
cowsay "Hello World"| sudo ./papertty.py --driver epd2in13 stdin --nofold

terminal - Render a virtual terminal

The most prominent feature.

This requires read permission to the virtual console device (/dev/vcsa[1-63]) and optionally write permission to the associated terminal device (/dev/tty[1-63]) if you want to set the TTY size via ioctls.

If you're going to use terminal with a display that doesn't support partial refresh, you probably want to set --sleep a bit larger than the default, such as a few seconds, unless you enjoy blinking.

The process handles two signals:

  • SIGINT - stop and clear the screen (unless --noclear was given), same as pressing Ctrl-C
    • sudo pkill -INT -f papertty.py
    • By default, the systemd service unit attempts to stop the process using SIGINT
  • SIGUSR1 - apply scrub and keep running
    • sudo pkill -USR1 -f papertty.py

See details on how all of this works further down this document.

Option            Description                                                                       Default
--vcsa FILENAME   Virtual console device (/dev/vcsa[1-63])                                          /dev/vcsa1
--font FILENAME   Path to a TrueType or PIL font to use - strongly recommended to use monospaced    tom-thumb.pil
--size N          Font size                                                                         8
--noclear         Leave display content on exit                                                     disabled
--nocursor        Don't draw cursor                                                                 disabled
--sleep           Minimum delay between screen updates (seconds)                                    0.1
--rows            Set TTY rows (--cols required too)                                                no default
--cols            Set TTY columns (--rows required too)                                             no default
--portrait        Enable portrait mode                                                              disabled
--flipx           Mirror X axis (experimental / buggy)                                              disabled
--flipy           Mirror Y axis (experimental / buggy)                                              disabled
--spacing         Set line spacing                                                                  0
--scrub           Apply scrub when starting                                                         disabled
--autofit         Try to automatically set terminal rows/cols for the font                          disabled
# Examples

# by default the first virtual terminal (/dev/vcsa1 == /dev/tty1) is displayed
sudo ./papertty.py --driver epd2in13 terminal

# set font size to 16, update every 10 seconds, set terminal rows/cols to 10x20
sudo ./papertty.py --driver epd2in13 terminal --size 16 --sleep 10 --rows 10 --cols 20

# auto-fit terminal rows/cols for the font and use a bitmap font
# (fitting may not work for very small fonts in portrait mode because of terminal restrictions)
sudo ./papertty.py --driver epd2in13 terminal --autofit --font myfont.pil

How to use the terminal

Logging in?

After you've gotten the terminal to render, you'll want to run something there.

As the program mirrors the system virtual terminals, you can either attach a keyboard to the Pi and simply log in, or, if you already have SSH access, use the openvt program to start something there without messing around with cables.

The following commands are run over SSH.

For example, to start htop for user pi on tty1 (via sudo, twice):

# "as a sudoer, start sudo forcibly on VT 1 (tty1) to run 'htop' as the user 'pi'"
sudo openvt -fc 1 -- sudo -u pi htop

After you exit the process, agetty may go haywire though (hogging CPU). Give it a nudge to fix it:
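For example, sending agetty a HUP usually sorts it out (this assumes agetty is the getty running on that console - adjust if yours differs):

sudo pkill -HUP agetty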

And you should have the login prompt there again.

In practice, you'll want to use tmux (or screen, if you prefer) to have the most flexible control over the terminal (these are terminal multiplexers, and if you haven't used one before, now is the time to start):

# start a new tmux session (or just run 'tmux' with a connected keyboard)
sudo openvt -fc 1 -- sudo -u pi tmux
# (see the session starting up on the display)

# now, attach to the session
tmux attach

Lo and behold! You should now be attached to the tiny session visible on the display.

You can kill the papertty.py process at any time - the programs running in the TTY will be unaffected (unless they react badly to console resizing) and you can just restart the terminal to get the display back and play around with the settings.

Start up at boot

A simple systemd service unit file is included with the package, called papertty.service. It calls start.sh so that instead of editing the service file, you can edit the start script (and easily add whatever you need) without needing to run systemctl daemon-reload all the time.

  • You can simply put the command in the service file too, it's your choice
  • You probably want to set the script to be owned and writable by root only: sudo chown root:root start.sh; sudo chmod 700 start.sh
  • Remember: to run the command under the virtualenv, you need to run the python3 command from within the virtualenv's bin directory - this will ensure the environment is correct

To have the display turn on at boot, first edit the command you're happy with into start.sh:

# Remember: you probably want to set rows and cols here, because at reboot they're reset.
# Also, when booting up after a power cycle the display may have some artifacts on it, so
# you may want to add --scrub to get a clean display (during boot it's a bit slower than usual)
VENV="/home/pi/.virtualenvs/papertty/bin/python3"
${VENV} papertty.py --driver epd2in13 terminal --autofit

Then make sure you have the right paths set in the service file:

...
### Change the paths below to match yours
WorkingDirectory=/home/pi/code/PaperTTY
ExecStart=/home/pi/code/PaperTTY/start.sh
###
...

Then (read the unit file more carefully and) do the following steps:

sudo cp papertty.service /etc/systemd/system
sudo systemctl daemon-reload
sudo systemctl enable papertty

# To disable the service:
# sudo systemctl disable papertty
# sudo systemctl stop papertty

This registers the service with systemd and enables it. Before rebooting and trying it out, you may want to stop any other instances of papertty.py and then see if the service works:

sudo systemctl start papertty
# (the service should start and the terminal should appear on the display,
# if you need to edit any settings, run 'systemctl daemon-reload' again after
# saving the service file)

sudo systemctl stop papertty
# (the service should stop and the display should be cleared, unless you used --noclear)

If the service seemed to work, try rebooting and enjoy watching the bootup. If you need to scrub the display while the service is running, you can send the SIGUSR1 signal to the process.

If the service didn't work, check that the paths are correct and that start.sh has the execute bit set.
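For troubleshooting, the standard systemd tools also show what the service is doing (nothing PaperTTY-specific here):

systemctl status papertty
sudo journalctl -u papertty -e   # jump to the most recent log entries for the service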


Why?

Kindles and the like have been around for a long time already, but there have been very few attempts at a general purpose e-ink display. General purpose meaning that I can use the programs I'm used to using and can display them on the e-ink display.

Why would anyone want such a thing, anyway? Here are some reasons:

  • First of all, their power consumption is very low, making them suitable for many embedded applications where you just need to display some information periodically
  • They are beautiful and easy on the eyes
  • They are readable in direct sunlight with no glare to speak of - and could run indefinitely off solar too
  • Many of us spend most of our time reading and editing mostly static text, and this is where e-ink should excel
  • Sometimes the refresh rate does not matter at all, as long as the eventual feedback is there - you may not want a backlit, power-hungry display for something you need updated just once a day
  • You can still read your ebooks - in Vim!

The problem

Aside from digital price tags and similar special markets, there are some viable commercial offerings for mainstream computing on e-ink, such as the Onyx Boox Max2 that not only boasts a proper tablet form factor with an e-ink display, but also an HDMI input for using it as a secondary display (squee!). While it seems really cool, it's quite expensive, rare and more than just a simple display unit (and those cost just as much).

The display modules sold by Waveshare are exceptional in that they are very affordable (~15-90 USD), offer a wide range of sizes (1.54" up to 7.5") and even have "color" models (black/white/red). Earlier such offerings simply weren't there and people used to hack Kindles in very complex ways to get any of the fun.

So now that anyone can buy cheap e-ink, there is but one problem: how to get your content on it?

The display looks really cool and nifty, but all you'll get in the package is just that and some code examples to draw something on it - with a program you need to write yourself. After unboxing, how does someone browse the Internet with it? Sadly, they can't.

The solution

I've had a Waveshare 2.13" HAT for the Raspberry Pi for a while now, and from time to time I've tried to find if someone had already implemented something like this since it sounds simple enough, but at the time of writing I don't know of any programs that mirror the terminal onto an e-ink, so I had a go at it.

For my purposes I just need proper terminal program support. The next step might be implementing a VNC client which should naturally translate quite well to e-ink's partial updating, but I don't have the time.

How it works

The principle of operation is deceptively simple:

  • Reads the virtual terminal contents via /dev/vcsa* (see man vcsa)
    • For example, content of /dev/tty1 (that you get with Ctrl-Alt-F1) is available at /dev/vcsa1
    • This includes the attributes, but they are ignored (if I had a tricolor display, they could be useful)
    • The terminal size (rows and columns) and the cursor position are encoded in the first four bytes - this is used to read the rows and columns (see the quick peek sketched after this list)
  • Optionally sets the desired terminal size with ioctls (requires write access to the /dev/ttyX device)
  • Adds newlines according to the terminal width (unlike the screendump utility that reads from /dev/tty*, reading from a vcsa* does not include newlines)
  • Renders the content and the cursor on an Image object
  • Compares the newly rendered content to the previous content and updates the changed region on the display
    • Done in a very simple fashion with just one bounding box
    • This results in non-flickering updates and decent speed in typical use cases
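To see that header for yourself, here's a quick peek from the shell (just an illustration; it needs read access to the device, so typically sudo):

sudo head -c 4 /dev/vcsa1 | od -An -tu1
# prints four numbers: rows, columns, cursor column, cursor row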

Caveats, shortcomings

Some notes:

  • Hardly tested, developed for a particular model - other models may not work or may need some code tweaking first
    • If it sorta works but crashes or something else goes wrong and your display doesn't seem to work like usual anymore, don't panic, try the scrub command a couple of times first and wait for it to finish - powering off and disconnecting the module completely ought to help as a last resort
    • Turns out my particular unit is actually flawed and doesn't do full refreshes properly so implementing it for other models has been mostly guesswork and wishful thinking
      • The scrub feature may be entirely unnecessary for normally functioning units
  • The code is surely littered with bugs and could use some refactoring
  • You need to figure out the parameters, font and encodings that work for you
    • Importantly, Unicode support is lacking because the virtual terminal stores glyph indices in the buffer and the original value is lost in translation - my understanding is that there is currently development being done for the kernel to implement /dev/vcsu* which would rectify this, but it's not yet in the mainline kernel - an option to use a pseudo TTY would be welcome in the meantime
  • Not much thought given to tricolor displays - you need to modify the part where attributes are skipped and implement it yourself (or donate such a display and I might take a look...)
  • Minimal error handling
  • You can't set an arbitrary size for the terminals with ioctls - it would be better to use some pseudo terminal for this but then again, sometimes you specifically want tty1 (imagine server crashing and having the kernel log imprinted on the e-ink)
  • Cursor placement is a bit obscure - this has to do with how the imaging library handles fonts and their metrics and it's not always very clear to me how they scale with the font... it works well enough though
  • The mirroring features were just an afterthought and don't work perfectly (probably simple to fix), also arbitrary rotation is missing (but easy to add)
  • The code was written for Python 2 - there are some forks and improvements on the Waveshare code around, but I wanted to make this work on the stock offering so didn't bother incorporating that stuff here
    • The code is now for Python 3
  • While testing out some imaging library functions, I noticed that on another computer the library seemed to lack the spacing keyword argument for drawing text - this may be a problem in some environments but I didn't think much of it

Conclusion

Even with all the caveats in mind, I still think the program is very useful and fills a niche. I wish I could have tested it with more than one display model, but that's why I'm releasing it as public domain, so anyone can try it out and hopefully turn it into something better.

- Jouko Strömmer


An Introduction to Modern CMake

The page you're looking for could not be found (404).

Ten Trillion-Degree Quasar Astonishes Astronomers


A newly deployed space telescope has struck pay dirt almost immediately, discovering a quasar – a superheated region of dust and gas around a black hole – that is releasing jets at least seventy times hotter than was thought possible.

RadioAstron is unusual among space telescopes in operating at radio wavelengths. Although the telescope itself is small compared to giant ground-based dishes (10 meters or 33 feet across), it is capable of combining with ground-based instruments operating at the same wavelengths. Together they produce images with the resolution of a single telescope as wide as the distance between them, far exceeding collaborations between dishes half a world apart.

One of the first targets for this extraordinary tool was the quasar 3C 273, the first of these enormously bright objects to be identified, and one of the most luminous. Despite being 4 trillion times as bright as the Sun, 3C 273 is hard to study, located an estimated 2.4 billion light-years away at the center of a giant elliptical galaxy.

Something that bright has to be mind-bendingly hot, and the same applies to the radiating jets 3C 273 spits out. Models suggested that it was impossible for these jets’ temperatures to exceed 100 billion degrees Kelvin, at which point electrons produce radiation that should quickly cool them in what is known as the inverse Compton catastrophe. However, in The Astrophysical Journal Letters an international team has estimated the true temperature.

“We measure the effective temperature of the quasar core to be hotter than 10 trillion degrees!” said Dr. Yuri Kovalev, RadioAstron project scientist, in a statement. “This result is very challenging to explain with our current understanding of how relativistic jets of quasars radiate.”

Combining with Earth-based telescopes, including Arecibo and the Very Large Array, RadioAstron examined the radiation from 3C 273 at wavelengths of 18, 6.2, and 1.35 centimeters (7, 2.4, and 0.5 inches), providing both an overall temperature estimate, which varied from 7 to 14 trillion degrees, and a view of the substructure of the quasar’s jets.

A quasar as seen by the Hubble Telescope, but the resolution provided by space and Earth-based telescopes in combination is far greater. ESA/Hubble & NASA

“Only this space-Earth system could reveal this temperature, and now we have to figure out how that environment can reach such temperatures,” said Kovalev in a separate statement. “This result is a significant challenge to our current understanding of quasar jets.”

The resolution made possible by this collaboration was so detailed that the team was able to detect the scattering effects on their measurements of variations in the ionized interstellar medium within the Milky Way. “This is like looking through the hot, turbulent air above a candle flame,” said first author Dr. Michael Johnson, of the Harvard-Smithsonian Center for Astrophysics. “We had never been able to see such distortion of an extragalactic object before.”

The authors explained that when “averaged over long timescales – days to months – the scattering blurs compact features in the image, resulting in lower apparent brightness temperatures,” part of the reason these extraordinary temperatures had not been recognized before. Over shorter periods the scattering creates the impression of bright and dark spots known as “refractive substructure.”

RadioAstron has been in space since 2011, but it has taken time to analyze the first observations. Knowing that the maximum baseline it can provide is more than double the 171,000 kilometers (106,000 miles) used in this case, astronomers can’t wait to see what it discovers next.

Quasar 3C 273 as seen at different wavelengths showing the effect of the interference of the interstellar medium on the incoming rays. Johnson et al, The Astrophysical Journal.

Why Read the Classics? (1986)


Let us begin with a few suggested definitions.

1) The classics are the books of which we usually hear people say: “I am rereading…” and never “I am reading….”

This at least happens among those who consider themselves “very well read.” It does not hold good for young people at the age when they first encounter the world, and the classics as a part of that world.

The reiterative prefix before the verb “read” may be a small hypocrisy on the part of people ashamed to admit they have not read a famous book. To reassure them, we need only observe that, however vast any person’s basic reading may be, there still remain an enormous number of fundamental works that he has not read.

Hands up, anyone who has read the whole of Herodotus and the whole of Thucydides! And Saint-Simon? And Cardinal de Retz? But even the great nineteenth-century cycles of novels are more often talked about than read. In France they begin to read Balzac in school, and judging by the number of copies in circulation, one may suppose that they go on reading him even after that, but if a Gallup poll were taken in Italy, I’m afraid that Balzac would come in practically last. Dickens fans in Italy form a tiny elite; as soon as its members meet, they begin to chatter about characters and episodes as if they were discussing people and things of their own acquaintance. Years ago, while teaching in America, Michel Butor got fed up with being asked about Emile Zola, whom he had never read, so he made up his mind to read the entire Rougon-Macquart cycle. He found it was completely different from what he had thought: a fabulous mythological and cosmogonical family tree, which he went on to describe in a wonderful essay.

In other words, to read a great book for the first time in one’s maturity is an extraordinary pleasure, different from (though one cannot say greater or lesser than) the pleasure of having read it in one’s youth. Youth brings to reading, as to any other experience, a particular flavor and a particular sense of importance, whereas in maturity one appreciates (or ought to appreciate) many more details and levels and meanings. We may therefore attempt the next definition:

2) We use the word “classics” for those books that are treasured by those who have read and loved them; but they are treasured no less by those who have the luck to read them for the first time in the best conditions to enjoy them.

In fact, reading in youth can be rather unfruitful, owing to impatience, distraction, inexperience with the product’s “instructions for use,” and inexperience in life itself. Books read then can be (possibly at one and the same time) formative, in the sense that they give a form to future experiences, providing models, terms of comparison, schemes for classification, scales of value, exemplars of beauty—all things that continue to operate even if the book read in one’s youth is almost or totally forgotten. If we reread the book at a mature age we are likely to rediscover these constants, which by this time are part of our inner mechanisms, but whose origins we have long forgotten. A literary work can succeed in making us forget it as such, but it leaves its seed in us. The definition we can give is therefore this:

3) The classics are books that exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.

There should therefore be a time in adult life devoted to revisiting the most important books of our youth. Even if the books have remained the same (though they do change, in the light of an altered historical perspective), we have most certainly changed, and our encounter will be an entirely new thing.

Hence, whether we use the verb “read” or the verb “reread” is of little importance. Indeed, we may say:

4) Every rereading of a classic is as much a voyage of discovery as the first reading.

5) Every reading of a classic is in fact a rereading.

Definition 4 may be considered a corollary of this next one:

6) A classic is a book that has never finished saying what it has to say.

Whereas definition 5 depends on a more specific formula, such as this:

7) The classics are the books that come down to us bearing upon them the traces of readings previous to ours, and bringing in their wake the traces they themselves have left on the culture or cultures they have passed through (or, more simply, on language and customs).

All this is true both of the ancient and of the modern classics. If I read the Odyssey I read Homer’s text, but I cannot forget all that the adventures of Ulysses have come to mean in the course of the centuries, and I cannot help wondering if these meanings were implicit in the text, or whether they are incrustations or distortions or expansions. When reading Kafka, I cannot avoid approving or rejecting the legitimacy of the adjective “Kafkaesque,” which one is likely to hear every quarter of an hour, applied indiscriminately. If I read Turgenev’s Fathers and Sons or Dostoevsky’s The Possessed, I cannot help thinking how these characters have continued to be reincarnated right down to our own day.

The reading of a classic ought to give us a surprise or two vis-à-vis the notion that we had of it. For this reason I can never sufficiently highly recommend the direct reading of the text itself, leaving aside the critical biography, commentaries, and interpretations as much as possible. Schools and universities ought to help us to understand that no book that talks about a book says more than the book in question, but instead they do their level best to make us think the opposite. There is a very widespread topsyturviness of values whereby the introduction, critical apparatus, and bibliography are used as a smoke screen to hide what the text has to say, and, indeed, can say only if left to speak for itself without intermediaries who claim to know more than the text does. We may conclude that:

8) A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives a lot of pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity. From all this we may derive a definition of this type:

9) The classics are books that we find all the more new, fresh, and unexpected upon reading, the more we thought we knew them from hearing them talked about.

Naturally, this only happens when a classic really works as such—that is, when it establishes a personal rapport with the reader. If the spark doesn’t come, that’s a pity; but we do not read the classics out of duty or respect, but only out of love. Except at school. And school should enable you to know, either well or badly, a certain number of classics among which—or in reference to which—you can then choose your classics. School is obliged to give you the instruments needed to make a choice, but the choices that count are those that occur outside and after school.

It is only by reading without bias that you might possibly come across the book that becomes your book. I know an excellent art historian, an extraordinarily well-read man, who out of all the books there are has focused his special love on the Pickwick Papers; at every opportunity he comes up with some quip from Dickens’s book, and connects each and every event in life with some Pickwickian episode. Little by little he himself, and true philosophy, and the universe, have taken on the shape and form of the Pickwick Papers by a process of complete identification. In this way we arrive at a very lofty and demanding notion of what a classic is:

10) We use the word “classic” of a book that takes the form of an equivalent to the universe, on a level with the ancient talismans. With this definition we are approaching the idea of the “total book,” as Mallarmé conceived of it.

But a classic can establish an equally strong rapport in terms of opposition and antithesis. Everything that Jean-Jacques Rousseau thinks and does is very dear to my heart, yet everything fills me with an irrepressible desire to contradict him, to criticize him, to quarrel with him. It is a question of personal antipathy on a temperamental level, on account of which I ought to have no choice but not to read him; and yet I cannot help numbering him among my authors. I will therefore say:

11) Your classic author is the one you cannot feel indifferent to, who helps you to define yourself in relation to him, even in dispute with him.

I think I have no need to justify myself for using the word “classic” without making distinctions about age, style, or authority. What distinguishes the classic, in the argument I am making, may be only an echo effect that holds good both for an ancient work and for a modern one that has already achieved its place in a cultural continuum. We might say:

12) A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.

At this point I can no longer put off the vital problem of how to relate the reading of the classics to the reading of all the other books that are anything but classics. It is a problem connected with such questions as, Why read the classics rather than concentrate on books that enable us to understand our own times more deeply? or, Where shall we find the time and peace of mind to read the classics, overwhelmed as we are by the avalanche of current events?

We can, of course, imagine some blessed soul who devotes his reading time exclusively to Lucretius, Lucian, Montaigne, Erasmus, Quevedo, Marlowe, the Discourse on Method, Wilhelm Meister, Coleridge, Ruskin, Proust, and Valéry, with a few forays in the direction of Murasaki or the Icelandic sagas. And all this without having to write reviews of the latest publications, or papers to compete for a university chair, or articles for magazines on tight deadlines. To keep up such a diet without any contamination, this blessed soul would have to abstain from reading the newspapers, and never be tempted by the latest novel or sociological investigation. But we have to see how far such rigor would be either justified or profitable. The latest news may well be banal or mortifying, but it nonetheless remains a point at which to stand and look both backward and forward. To be able to read the classics you have to know “from where” you are reading them; otherwise both the book and the reader will be lost in a timeless cloud. This, then, is the reason why the greatest “yield” from reading the classics will be obtained by someone who knows how to alternate them with the proper dose of current affairs. And this does not necessarily imply a state of imperturbable inner calm. It can also be the fruit of nervous impatience, of a huffing-and-puffing discontent of mind.

Maybe the ideal thing would be to hearken to current events as we do to the din outside the window that informs us about traffic jams and sudden changes in the weather, while we listen to the voice of the classics sounding clear and articulate inside the room. But it is already a lot for most people if the presence of the classics is perceived as a distant rumble far outside a room that is swamped by the trivia of the moment, as by a television at full blast. Let us therefore add:

13) A classic is something that tends to relegate the concerns of the moment to the status of background noise, but at the same time this background noise is something we cannot do without.

14) A classic is something that persists as a background noise even when the most incompatible momentary concerns are in control of the situation.

There remains the fact that reading the classics appears to clash with our rhythm of life, which no longer affords long periods of time or the spaciousness of humanistic leisure. It also contradicts the eclecticism of our culture, which would never be capable of compiling a catalog of things classical such as would suit our needs.

These latter conditions were fully realized in the case of Leopardi, given his solitary life in his father’s house (his “paterno ostello”), his cult of Greek and Latin antiquity, and the formidable library put at his disposal by his father, Monaldo. To which we may add the entire body of Italian literature and of French literature, with the exception of novels and the “latest thing out” in general, all of which were at least swept off into the sidelines, there to comfort the leisure of his sister Paolina (“your Stendhal,” he wrote her once). Even with his intense interest in science and history, he was often willing to rely on texts that were not entirely up-to-date, taking the habits of birds from Buffon, the mummies of Frederik Ruysch from Fontenelle, the voyage of Columbus from Robertson.

In these days a classical education like the young Leopardi’s is unthinkable; above all, Count Monaldo’s library has multiplied explosively. The ranks of the old titles have been decimated, while new ones have proliferated in all modern literatures and cultures. There is nothing for it but for all of us to invent our own ideal libraries of classics. I would say that such a library ought to be composed half of books we have read and that have really counted for us, and half of books we propose to read and presume will come to count—leaving a section of empty shelves for surprises and occasional discoveries.

I realize that Leopardi is the only name I have cited from Italian literature—a result of the explosion of the library. Now I ought to rewrite the whole article to make it perfectly clear that the classics help us to understand who we are and where we stand, a purpose for which it is indispensable to compare Italians with foreigners and foreigners with Italians.

Then I ought to rewrite it yet again lest anyone believe that the classics ought to be read because they “serve any purpose” whatever. The only reason one can possibly adduce is that to read the classics is better than not to read the classics.

And if anyone objects that it is not worth taking so much trouble, then I will quote Cioran (who is not yet a classic, but will become one):

While they were preparing the hemlock, Socrates was learning a tune on the flute. “What good will it do you,” they asked, “to know this tune before you die?”

translated by Patrick Creagh
English translation copyright © 1986 Harcourt Brace Jovanovich, Inc.

The Really Big One (2015)


When the 2011 earthquake and tsunami struck Tohoku, Japan, Chris Goldfinger was two hundred miles away, in the city of Kashiwa, at an international meeting on seismology. As the shaking started, everyone in the room began to laugh. Earthquakes are common in Japan—that one was the third of the week—and the participants were, after all, at a seismology conference. Then everyone in the room checked the time.

Seismologists know that how long an earthquake lasts is a decent proxy for its magnitude. The 1989 earthquake in Loma Prieta, California, which killed sixty-three people and caused six billion dollars’ worth of damage, lasted about fifteen seconds and had a magnitude of 6.9. A thirty-second earthquake generally has a magnitude in the mid-sevens. A minute-long quake is in the high sevens, a two-minute quake has entered the eights, and a three-minute quake is in the high eights. By four minutes, an earthquake has hit magnitude 9.0.

When Goldfinger looked at his watch, it was quarter to three. The conference was wrapping up for the day. He was thinking about sushi. The speaker at the lectern was wondering if he should carry on with his talk. The earthquake was not particularly strong. Then it ticked past the sixty-second mark, making it longer than the others that week. The shaking intensified. The seats in the conference room were small plastic desks with wheels. Goldfinger, who is tall and solidly built, thought, No way am I crouching under one of those for cover. At a minute and a half, everyone in the room got up and went outside.

It was March. There was a chill in the air, and snow flurries, but no snow on the ground. Nor, from the feel of it, was there ground on the ground. The earth snapped and popped and rippled. It was, Goldfinger thought, like driving through rocky terrain in a vehicle with no shocks, if both the vehicle and the terrain were also on a raft in high seas. The quake passed the two-minute mark. The trees, still hung with the previous autumn’s dead leaves, were making a strange rattling sound. The flagpole atop the building he and his colleagues had just vacated was whipping through an arc of forty degrees. The building itself was base-isolated, a seismic-safety technology in which the body of a structure rests on movable bearings rather than directly on its foundation. Goldfinger lurched over to take a look. The base was lurching, too, back and forth a foot at a time, digging a trench in the yard. He thought better of it, and lurched away. His watch swept past the three-minute mark and kept going.

Oh, shit, Goldfinger thought, although not in dread, at first: in amazement. For decades, seismologists had believed that Japan could not experience an earthquake stronger than magnitude 8.4. In 2005, however, at a conference in Hokudan, a Japanese geologist named Yasutaka Ikeda had argued that the nation should expect a magnitude 9.0 in the near future—with catastrophic consequences, because Japan’s famous earthquake-and-tsunami preparedness, including the height of its sea walls, was based on incorrect science. The presentation was met with polite applause and thereafter largely ignored. Now, Goldfinger realized as the shaking hit the four-minute mark, the planet was proving the Japanese Cassandra right.

For a moment, that was pretty cool: a real-time revolution in earthquake science. Almost immediately, though, it became extremely uncool, because Goldfinger and every other seismologist standing outside in Kashiwa knew what was coming. One of them pulled out a cell phone and started streaming videos from the Japanese broadcasting station NHK, shot by helicopters that had flown out to sea soon after the shaking started. Thirty minutes after Goldfinger first stepped outside, he watched the tsunami roll in, in real time, on a two-inch screen.

In the end, the magnitude-9.0 Tohoku earthquake and subsequent tsunami killed more than eighteen thousand people, devastated northeast Japan, triggered the meltdown at the Fukushima power plant, and cost an estimated two hundred and twenty billion dollars. The shaking earlier in the week turned out to be the foreshocks of the largest earthquake in the nation’s recorded history. But for Chris Goldfinger, a paleoseismologist at Oregon State University and one of the world’s leading experts on a little-known fault line, the main quake was itself a kind of foreshock: a preview of another earthquake still to come.

Most people in the United States know just one fault line by name: the San Andreas, which runs nearly the length of California and is perpetually rumored to be on the verge of unleashing “the big one.” That rumor is misleading, no matter what the San Andreas ever does. Every fault line has an upper limit to its potency, determined by its length and width, and by how far it can slip. For the San Andreas, one of the most extensively studied and best understood fault lines in the world, that upper limit is roughly an 8.2—a powerful earthquake, but, because the Richter scale is logarithmic, only six per cent as strong as the 2011 event in Japan.

Just north of the San Andreas, however, lies another fault line. Known as the Cascadia subduction zone, it runs for seven hundred miles off the coast of the Pacific Northwest, beginning near Cape Mendocino, California, continuing along Oregon and Washington, and terminating around Vancouver Island, Canada. The “Cascadia” part of its name comes from the Cascade Range, a chain of volcanic mountains that follow the same course a hundred or so miles inland. The “subduction zone” part refers to a region of the planet where one tectonic plate is sliding underneath (subducting) another. Tectonic plates are those slabs of mantle and crust that, in their epochs-long drift, rearrange the earth’s continents and oceans. Most of the time, their movement is slow, harmless, and all but undetectable. Occasionally, at the borders where they meet, it is not.

Take your hands and hold them palms down, middle fingertips touching. Your right hand represents the North American tectonic plate, which bears on its back, among other things, our entire continent, from One World Trade Center to the Space Needle, in Seattle. Your left hand represents an oceanic plate called Juan de Fuca, ninety thousand square miles in size. The place where they meet is the Cascadia subduction zone. Now slide your left hand under your right one. That is what the Juan de Fuca plate is doing: slipping steadily beneath North America. When you try it, your right hand will slide up your left arm, as if you were pushing up your sleeve. That is what North America is not doing. It is stuck, wedged tight against the surface of the other plate.

Without moving your hands, curl your right knuckles up, so that they point toward the ceiling. Under pressure from Juan de Fuca, the stuck edge of North America is bulging upward and compressing eastward, at the rate of, respectively, three to four millimetres and thirty to forty millimetres a year. It can do so for quite some time, because, as continent stuff goes, it is young, made of rock that is still relatively elastic. (Rocks, like us, get stiffer as they age.) But it cannot do so indefinitely. There is a backstop—the craton, that ancient unbudgeable mass at the center of the continent—and, sooner or later, North America will rebound like a spring. If, on that occasion, only the southern part of the Cascadia subduction zone gives way—your first two fingers, say—the magnitude of the resulting quake will be somewhere between 8.0 and 8.6. That’s the big one. If the entire zone gives way at once, an event that seismologists call a full-margin rupture, the magnitude will be somewhere between 8.7 and 9.2. That’s the very big one.

Flick your right fingers outward, forcefully, so that your hand flattens back down again. When the next very big earthquake hits, the northwest edge of the continent, from California to Canada and the continental shelf to the Cascades, will drop by as much as six feet and rebound thirty to a hundred feet to the west—losing, within minutes, all the elevation and compression it has gained over centuries. Some of that shift will take place beneath the ocean, displacing a colossal quantity of seawater. (Watch what your fingertips do when you flatten your hand.) The water will surge upward into a huge hill, then promptly collapse. One side will rush west, toward Japan. The other side will rush east, in a seven-hundred-mile liquid wall that will reach the Northwest coast, on average, fifteen minutes after the earthquake begins. By the time the shaking has ceased and the tsunami has receded, the region will be unrecognizable. Kenneth Murphy, who directs FEMA’s Region X, the division responsible for Oregon, Washington, Idaho, and Alaska, says, “Our operating assumption is that everything west of Interstate 5 will be toast.”

In the Pacific Northwest, the area of impact will cover some hundred and forty thousand square miles, including Seattle, Tacoma, Portland, Eugene, Salem (the capital city of Oregon), Olympia (the capital of Washington), and some seven million people. When the next full-margin rupture happens, that region will suffer the worst natural disaster in the history of North America. Roughly three thousand people died in San Francisco’s 1906 earthquake. Almost two thousand died in Hurricane Katrina. Almost three hundred died in Hurricane Sandy. FEMA projects that nearly thirteen thousand people will die in the Cascadia earthquake and tsunami. Another twenty-seven thousand will be injured, and the agency expects that it will need to provide shelter for a million displaced people, and food and water for another two and a half million. “This is one time that I’m hoping all the science is wrong, and it won’t happen for another thousand years,” Murphy says.

In fact, the science is robust, and one of the chief scientists behind it is Chris Goldfinger. Thanks to work done by him and his colleagues, we now know that the odds of the big Cascadia earthquake happening in the next fifty years are roughly one in three. The odds of the very big one are roughly one in ten. Even those numbers do not fully reflect the danger—or, more to the point, how unprepared the Pacific Northwest is to face it. The truly worrisome figures in this story are these: Thirty years ago, no one knew that the Cascadia subduction zone had ever produced a major earthquake. Forty-five years ago, no one even knew it existed.

In May of 1804, Meriwether Lewis and William Clark, together with their Corps of Discovery, set off from St. Louis on America’s first official cross-country expedition. Eighteen months later, they reached the Pacific Ocean and made camp near the present-day town of Astoria, Oregon. The United States was, at the time, twenty-nine years old. Canada was not yet a country. The continent’s far expanses were so unknown to its white explorers that Thomas Jefferson, who commissioned the journey, thought that the men would come across woolly mammoths. Native Americans had lived in the Northwest for millennia, but they had no written language, and the many things to which the arriving Europeans subjected them did not include seismological inquiries. The newcomers took the land they encountered at face value, and at face value it was a find: vast, cheap, temperate, fertile, and, to all appearances, remarkably benign.

A century and a half elapsed before anyone had any inkling that the Pacific Northwest was not a quiet place but a place in a long period of quiet. It took another fifty years to uncover and interpret the region’s seismic history. Geology, as even geologists will tell you, is not normally the sexiest of disciplines; it hunkers down with earthly stuff while the glory accrues to the human and the cosmic—to genetics, neuroscience, physics. But, sooner or later, every field has its field day, and the discovery of the Cascadia subduction zone stands as one of the greatest scientific detective stories of our time.

The first clue came from geography. Almost all of the world’s most powerful earthquakes occur in the Ring of Fire, the volcanically and seismically volatile swath of the Pacific that runs from New Zealand up through Indonesia and Japan, across the ocean to Alaska, and down the west coast of the Americas to Chile. Japan, 2011, magnitude 9.0; Indonesia, 2004, magnitude 9.1; Alaska, 1964, magnitude 9.2; Chile, 1960, magnitude 9.5—not until the late nineteen-sixties, with the rise of the theory of plate tectonics, could geologists explain this pattern. The Ring of Fire, it turns out, is really a ring of subduction zones. Nearly all the earthquakes in the region are caused by continental plates getting stuck on oceanic plates—as North America is stuck on Juan de Fuca—and then getting abruptly unstuck. And nearly all the volcanoes are caused by the oceanic plates sliding deep beneath the continental ones, eventually reaching temperatures and pressures so extreme that they melt the rock above them.

The Pacific Northwest sits squarely within the Ring of Fire. Off its coast, an oceanic plate is slipping beneath a continental one. Inland, the Cascade volcanoes mark the line where, far below, the Juan de Fuca plate is heating up and melting everything above it. In other words, the Cascadia subduction zone has, as Goldfinger put it, “all the right anatomical parts.” Yet not once in recorded history has it caused a major earthquake—or, for that matter, any quake to speak of. By contrast, other subduction zones produce major earthquakes occasionally and minor ones all the time: magnitude 5.0, magnitude 4.0, magnitude why are the neighbors moving their sofa at midnight. You can scarcely spend a week in Japan without feeling this sort of earthquake. You can spend a lifetime in many parts of the Northwest—several, in fact, if you had them to spend—and not feel so much as a quiver. The question facing geologists in the nineteen-seventies was whether the Cascadia subduction zone had ever broken its eerie silence.

In the late nineteen-eighties, Brian Atwater, a geologist with the United States Geological Survey, and a graduate student named David Yamaguchi found the answer, and another major clue in the Cascadia puzzle. Their discovery is best illustrated in a place called the ghost forest, a grove of western red cedars on the banks of the Copalis River, near the Washington coast. When I paddled out to it last summer, with Atwater and Yamaguchi, it was easy to see how it got its name. The cedars are spread out across a low salt marsh on a wide northern bend in the river, long dead but still standing. Leafless, branchless, barkless, they are reduced to their trunks and worn to a smooth silver-gray, as if they had always carried their own tombstones inside them.

What killed the trees in the ghost forest was saltwater. It had long been assumed that they died slowly, as the sea level around them gradually rose and submerged their roots. But, by 1987, Atwater, who had found in soil layers evidence of sudden land subsidence along the Washington coast, suspected that that was backward—that the trees had died quickly when the ground beneath them plummeted. To find out, he teamed up with Yamaguchi, a specialist in dendrochronology, the study of growth-ring patterns in trees. Yamaguchi took samples of the cedars and found that they had died simultaneously: in tree after tree, the final rings dated to the summer of 1699. Since trees do not grow in the winter, he and Atwater concluded that sometime between August of 1699 and May of 1700 an earthquake had caused the land to drop and killed the cedars. That time frame predated by more than a hundred years the written history of the Pacific Northwest—and so, by rights, the detective story should have ended there.

But it did not. If you travel five thousand miles due west from the ghost forest, you reach the northeast coast of Japan. As the events of 2011 made clear, that coast is vulnerable to tsunamis, and the Japanese have kept track of them since at least 599 A.D. In that fourteen-hundred-year history, one incident has long stood out for its strangeness. On the eighth day of the twelfth month of the twelfth year of the Genroku era, a six-hundred-mile-long wave struck the coast, levelling homes, breaching a castle moat, and causing an accident at sea. The Japanese understood that tsunamis were the result of earthquakes, yet no one felt the ground shake before the Genroku event. The wave had no discernible origin. When scientists began studying it, they called it an orphan tsunami.

Finally, in a 1996 article in Nature, a seismologist named Kenji Satake and three colleagues, drawing on the work of Atwater and Yamaguchi, matched that orphan to its parent—and thereby filled in the blanks in the Cascadia story with uncanny specificity. At approximately nine o’clock at night on January 26, 1700, a magnitude-9.0 earthquake struck the Pacific Northwest, causing sudden land subsidence, drowning coastal forests, and, out in the ocean, lifting up a wave half the length of a continent. It took roughly fifteen minutes for the eastern half of that wave to strike the Northwest coast. It took ten hours for the other half to cross the ocean. It reached Japan on January 27, 1700: by the local calendar, the eighth day of the twelfth month of the twelfth year of Genroku.

Once scientists had reconstructed the 1700 earthquake, certain previously overlooked accounts also came to seem like clues. In 1964, Chief Louis Nookmis, of the Huu-ay-aht First Nation, in British Columbia, told a story, passed down through seven generations, about the eradication of Vancouver Island’s Pachena Bay people. “I think it was at nighttime that the land shook,” Nookmis recalled. According to another tribal history, “They sank at once, were all drowned; not one survived.” A hundred years earlier, Billy Balch, a leader of the Makah tribe, recounted a similar story. Before his own time, he said, all the water had receded from Washington State’s Neah Bay, then suddenly poured back in, inundating the entire region. Those who survived later found canoes hanging from the trees. In a 2005 study, Ruth Ludwin, then a seismologist at the University of Washington, together with nine colleagues, collected and analyzed Native American reports of earthquakes and saltwater floods. Some of those reports contained enough information to estimate a date range for the events they described. On average, the midpoint of that range was 1701.

It does not speak well of European-Americans that such stories counted as evidence for a proposition only after that proposition had been proved. Still, the reconstruction of the Cascadia earthquake of 1700 is one of those rare natural puzzles whose pieces fit together as tectonic plates do not: perfectly. It is wonderful science. It was wonderful for science. And it was terrible news for the millions of inhabitants of the Pacific Northwest. As Goldfinger put it, “In the late eighties and early nineties, the paradigm shifted to ‘uh-oh.’ ”

Goldfinger told me this in his lab at Oregon State, a low prefab building that a passing English major might reasonably mistake for the maintenance department. Inside the lab is a walk-in freezer. Inside the freezer are floor-to-ceiling racks filled with cryptically labelled tubes, four inches in diameter and five feet long. Each tube contains a core sample of the seafloor. Each sample contains the history, written in seafloorese, of the past ten thousand years. During subduction-zone earthquakes, torrents of land rush off the continental slope, leaving a permanent deposit on the bottom of the ocean. By counting the number and the size of deposits in each sample, then comparing their extent and consistency along the length of the Cascadia subduction zone, Goldfinger and his colleagues were able to determine how much of the zone has ruptured, how often, and how drastically.

Thanks to that work, we now know that the Pacific Northwest has experienced forty-one subduction-zone earthquakes in the past ten thousand years. If you divide ten thousand by forty-one, you get two hundred and forty-three, which is Cascadia’s recurrence interval: the average amount of time that elapses between earthquakes. That timespan is dangerous both because it is too long—long enough for us to unwittingly build an entire civilization on top of our continent’s worst fault line—and because it is not long enough. Counting from the earthquake of 1700, we are now three hundred and fifteen years into a two-hundred-and-forty-three-year cycle.

It is possible to quibble with that number. Recurrence intervals are averages, and averages are tricky: ten is the average of nine and eleven, but also of eighteen and two. It is not possible, however, to dispute the scale of the problem. The devastation in Japan in 2011 was the result of a discrepancy between what the best science predicted and what the region was prepared to withstand. The same will hold true in the Pacific Northwest—but here the discrepancy is enormous. “The science part is fun,” Goldfinger says. “And I love doing it. But the gap between what we know and what we should do about it is getting bigger and bigger, and the action really needs to turn to responding. Otherwise, we’re going to be hammered. I’ve been through one of these massive earthquakes in the most seismically prepared nation on earth. If that was Portland”—Goldfinger finished the sentence with a shake of his head before he finished it with words. “Let’s just say I would rather not be here.”

The first sign that the Cascadia earthquake has begun will be a compressional wave, radiating outward from the fault line. Compressional waves are fast-moving, high-frequency waves, audible to dogs and certain other animals but experienced by humans only as a sudden jolt. They are not very harmful, but they are potentially very useful, since they travel fast enough to be detected by sensors thirty to ninety seconds ahead of other seismic waves. That is enough time for earthquake early-warning systems, such as those in use throughout Japan, to automatically perform a variety of lifesaving functions: shutting down railways and power plants, opening elevators and firehouse doors, alerting hospitals to halt surgeries, and triggering alarms so that the general public can take cover. The Pacific Northwest has no early-warning system. When the Cascadia earthquake begins, there will be, instead, a cacophony of barking dogs and a long, suspended, what-was-that moment before the surface waves arrive. Surface waves are slower, lower-frequency waves that move the ground both up and down and side to side: the shaking, starting in earnest.

Soon after that shaking begins, the electrical grid will fail, likely everywhere west of the Cascades and possibly well beyond. If it happens at night, the ensuing catastrophe will unfold in darkness. In theory, those who are at home when it hits should be safest; it is easy and relatively inexpensive to seismically safeguard a private dwelling. But, lulled into nonchalance by their seemingly benign environment, most people in the Pacific Northwest have not done so. That nonchalance will shatter instantly. So will everything made of glass. Anything indoors and unsecured will lurch across the floor or come crashing down: bookshelves, lamps, computers, canisters of flour in the pantry. Refrigerators will walk out of kitchens, unplugging themselves and toppling over. Water heaters will fall and smash interior gas lines. Houses that are not bolted to their foundations will slide off—or, rather, they will stay put, obeying inertia, while the foundations, together with the rest of the Northwest, jolt westward. Unmoored on the undulating ground, the homes will begin to collapse.

Across the region, other, larger structures will also start to fail. Until 1974, the state of Oregon had no seismic code, and few places in the Pacific Northwest had one appropriate to a magnitude-9.0 earthquake until 1994. The vast majority of buildings in the region were constructed before then. Ian Madin, who directs the Oregon Department of Geology and Mineral Industries (DOGAMI), estimates that seventy-five per cent of all structures in the state are not designed to withstand a major Cascadia quake. FEMA calculates that, across the region, something on the order of a million buildings—more than three thousand of them schools—will collapse or be compromised in the earthquake. So will half of all highway bridges, fifteen of the seventeen bridges spanning Portland’s two rivers, and two-thirds of railways and airports; also, one-third of all fire stations, half of all police stations, and two-thirds of all hospitals.

Certain disasters stem from many small problems conspiring to cause one very large problem. For want of a nail, the war was lost; for fifteen independently insignificant errors, the jetliner was lost. Subduction-zone earthquakes operate on the opposite principle: one enormous problem causes many other enormous problems. The shaking from the Cascadia quake will set off landslides throughout the region—up to thirty thousand of them in Seattle alone, the city’s emergency-management office estimates. It will also induce a process called liquefaction, whereby seemingly solid ground starts behaving like a liquid, to the detriment of anything on top of it. Fifteen per cent of Seattle is built on liquefiable land, including seventeen day-care centers and the homes of some thirty-four thousand five hundred people. So is Oregon’s critical energy-infrastructure hub, a six-mile stretch of Portland through which flows ninety per cent of the state’s liquid fuel and which houses everything from electrical substations to natural-gas terminals. Together, the sloshing, sliding, and shaking will trigger fires, flooding, pipe failures, dam breaches, and hazardous-material spills. Any one of these second-order disasters could swamp the original earthquake in terms of cost, damage, or casualties—and one of them definitely will. Four to six minutes after the dogs start barking, the shaking will subside. For another few minutes, the region, upended, will continue to fall apart on its own. Then the wave will arrive, and the real destruction will begin.

Among natural disasters, tsunamis may be the closest to being completely unsurvivable. The only likely way to outlive one is not to be there when it happens: to steer clear of the vulnerable area in the first place, or get yourself to high ground as fast as possible. For the seventy-one thousand people who live in Cascadia’s inundation zone, that will mean evacuating in the narrow window after one disaster ends and before another begins. They will be notified to do so only by the earthquake itself—“a vibrate-alert system,” Kevin Cupples, the city planner for the town of Seaside, Oregon, jokes—and they are urged to leave on foot, since the earthquake will render roads impassable. Depending on location, they will have between ten and thirty minutes to get out. That time line does not allow for finding a flashlight, tending to an earthquake injury, hesitating amid the ruins of a home, searching for loved ones, or being a Good Samaritan. “When that tsunami is coming, you run,” Jay Wilson, the chair of the Oregon Seismic Safety Policy Advisory Commission (OSSPAC), says. “You protect yourself, you don’t turn around, you don’t go back to save anybody. You run for your life.”

The time to save people from a tsunami is before it happens, but the region has not yet taken serious steps toward doing so. Hotels and businesses are not required to post evacuation routes or to provide employees with evacuation training. In Oregon, it has been illegal since 1995 to build hospitals, schools, firehouses, and police stations in the inundation zone, but those which are already in it can stay, and any other new construction is permissible: energy facilities, hotels, retirement homes. In those cases, builders are required only to consult with DOGAMI about evacuation plans. “So you come in and sit down,” Ian Madin says. “And I say, ‘That’s a stupid idea.’ And you say, ‘Thanks. Now we’ve consulted.’ ”

These lax safety policies guarantee that many people inside the inundation zone will not get out. Twenty-two per cent of Oregon’s coastal population is sixty-five or older. Twenty-nine per cent of the state’s population is disabled, and that figure rises in many coastal counties. “We can’t save them,” Kevin Cupples says. “I’m not going to sugarcoat it and say, ‘Oh, yeah, we’ll go around and check on the elderly.’ No. We won’t.” Nor will anyone save the tourists. Washington State Park properties within the inundation zone see an average of seventeen thousand and twenty-nine guests a day. Madin estimates that up to a hundred and fifty thousand people visit Oregon’s beaches on summer weekends. “Most of them won’t have a clue as to how to evacuate,” he says. “And the beaches are the hardest place to evacuate from.”

Those who cannot get out of the inundation zone under their own power will quickly be overtaken by a greater one. A grown man is knocked over by ankle-deep water moving at 6.7 miles an hour. The tsunami will be moving more than twice that fast when it arrives. Its height will vary with the contours of the coast, from twenty feet to more than a hundred feet. It will not look like a Hokusai-style wave, rising up from the surface of the sea and breaking from above. It will look like the whole ocean, elevated, overtaking land. Nor will it be made only of water—not once it reaches the shore. It will be a five-story deluge of pickup trucks and doorframes and cinder blocks and fishing boats and utility poles and everything else that once constituted the coastal towns of the Pacific Northwest.

To see the full scale of the devastation when that tsunami recedes, you would need to be in the international space station. The inundation zone will be scoured of structures from California to Canada. The earthquake will have wrought its worst havoc west of the Cascades but caused damage as far away as Sacramento, California—as distant from the worst-hit areas as Fort Wayne, Indiana, is from New York. FEMA expects to coördinate search-and-rescue operations across a hundred thousand square miles and in the waters off four hundred and fifty-three miles of coastline. As for casualties: the figures I cited earlier—twenty-seven thousand injured, almost thirteen thousand dead—are based on the agency’s official planning scenario, which has the earthquake striking at 9:41 A.M. on February 6th. If, instead, it strikes in the summer, when the beaches are full, those numbers could be off by a horrifying margin.

Wineglasses, antique vases, Humpty Dumpty, hip bones, hearts: what breaks quickly generally mends slowly, if at all. OSSPAC estimates that in the I-5 corridor it will take between one and three months after the earthquake to restore electricity, a month to a year to restore drinking water and sewer service, six months to a year to restore major highways, and eighteen months to restore health-care facilities. On the coast, those numbers go up. Whoever chooses or has no choice but to stay there will spend three to six months without electricity, one to three years without drinking water and sewage systems, and three or more years without hospitals. Those estimates do not apply to the tsunami-inundation zone, which will remain all but uninhabitable for years.

How much all this will cost is anyone’s guess; FEMA puts every number on its relief-and-recovery plan except a price. But whatever the ultimate figure—and even though U.S. taxpayers will cover seventy-five to a hundred per cent of the damage, as happens in declared disasters—the economy of the Pacific Northwest will collapse. Crippled by a lack of basic services, businesses will fail or move away. Many residents will flee as well. OSSPAC predicts a mass-displacement event and a long-term population downturn. Chris Goldfinger didn’t want to be there when it happened. But, by many metrics, it will be as bad or worse to be there afterward.

On the face of it, earthquakes seem to present us with problems of space: the way we live along fault lines, in brick buildings, in homes made valuable by their proximity to the sea. But, covertly, they also present us with problems of time. The earth is 4.5 billion years old, but we are a young species, relatively speaking, with an average individual allotment of three score years and ten. The brevity of our lives breeds a kind of temporal parochialism—an ignorance of or an indifference to those planetary gears which turn more slowly than our own.

This problem is bidirectional. The Cascadia subduction zone remained hidden from us for so long because we could not see deep enough into the past. It poses a danger to us today because we have not thought deeply enough about the future. That is no longer a problem of information; we now understand very well what the Cascadia fault line will someday do. Nor is it a problem of imagination. If you are so inclined, you can watch an earthquake destroy much of the West Coast this summer in Brad Peyton’s “San Andreas,” while, in neighboring theatres, the world threatens to succumb to Armageddon by other means: viruses, robots, resource scarcity, zombies, aliens, plague. As those movies attest, we excel at imagining future scenarios, including awful ones. But such apocalyptic visions are a form of escapism, not a moral summons, and still less a plan of action. Where we stumble is in conjuring up grim futures in a way that helps to avert them.

That problem is not specific to earthquakes, of course. The Cascadia situation, a calamity in its own right, is also a parable for this age of ecological reckoning, and the questions it raises are ones that we all now face. How should a society respond to a looming crisis of uncertain timing but of catastrophic proportions? How can it begin to right itself when its entire infrastructure and culture developed in a way that leaves it profoundly vulnerable to natural disaster?

The last person I met with in the Pacific Northwest was Doug Dougherty, the superintendent of schools for Seaside, which lies almost entirely within the tsunami-inundation zone. Of the four schools that Dougherty oversees, with a total student population of sixteen hundred, one is relatively safe. The others sit five to fifteen feet above sea level. When the tsunami comes, they will be as much as forty-five feet below it.

In 2009, Dougherty told me, he found some land for sale outside the inundation zone, and proposed building a new K-12 campus there. Four years later, to foot the hundred-and-twenty-eight-million-dollar bill, the district put up a bond measure. The tax increase for residents amounted to two dollars and sixteen cents per thousand dollars of property value. The measure failed by sixty-two per cent. Dougherty tried seeking help from Oregon’s congressional delegation but came up empty. The state makes money available for seismic upgrades, but buildings within the inundation zone cannot apply. At present, all Dougherty can do is make sure that his students know how to evacuate.

Some of them, however, will not be able to do so. At an elementary school in the community of Gearhart, the children will be trapped. “They can’t make it out from that school,” Dougherty said. “They have no place to go.” On one side lies the ocean; on the other, a wide, roadless bog. When the tsunami comes, the only place to go in Gearhart is a small ridge just behind the school. At its tallest, it is forty-five feet high—lower than the expected wave in a full-margin earthquake. For now, the route to the ridge is marked by signs that say “Temporary Tsunami Assembly Area.” I asked Dougherty about the state’s long-range plan. “There is no long-range plan,” he said.

Dougherty’s office is deep inside the inundation zone, a few blocks from the beach. All day long, just out of sight, the ocean rises up and collapses, spilling foamy overlapping ovals onto the shore. Eighty miles farther out, ten thousand feet below the surface of the sea, the hand of a geological clock is somewhere in its slow sweep. All across the region, seismologists are looking at their watches, wondering how long we have, and what we will do, before geological time catches up to our own. ♦

*An earlier version of this article misstated the location of the area of impact.

Show HN: Designer, (novice)Developer, community essay editing tool

Ten Trillion-Degree Quasar Astonishes Astronomers (2016)

$
0
0

A newly deployed space telescope has struck pay dirt almost immediately, discovering a quasar – a superheated region of dust and gas around a black hole – that is releasing jets at least seventy times hotter than was thought possible.

RadioAstron is unusual among space telescopes in operating at radio wavelengths. Although the telescope itself is small compared to giant ground-based dishes (10 meters or 33 feet across), it is capable of combining with ground-based instruments operating at the same wavelengths. Together they produce images with the resolution of a single telescope as wide as the distance between them, far exceeding collaborations between dishes half a world apart.

One of the first targets for this extraordinary tool was the quasar 3C 273, the first of these enormously bright objects to be identified, and one of the most luminous. Despite being 4 trillion times as bright as the Sun, 3C 273 is hard to study, located an estimated 2.4 billion light-years away at the center of a giant elliptical galaxy.

Something that bright has to be mind-bendingly hot, and the same applies to the radiating jets 3C 273 spits out. Models suggested that it was impossible for these jets' temperatures to exceed 100 billion degrees Kelvin, at which point electrons produce radiation that should quickly cool them in what is known as the inverse Compton catastrophe. However, in The Astrophysical Journal Letters an international team has estimated the true temperature.

“We measure the effective temperature of the quasar core to be hotter than 10 trillion degrees!” said Dr. Yuri Kovalev, RadioAstron project scientist, in a statement. “This result is very challenging to explain with our current understanding of how relativistic jets of quasars radiate.”

Combining with Earth-based telescopes, including Arecibo and the Very Large Array, RadioAstron examined the radiation from 3C 273 at wavelengths of 18, 6.2, and 1.35 centimeters (7, 2.4, and 0.5 inches), providing both overall temperature estimates that varied from 7 to 14 trillion degrees and a view of the substructure of the quasar's jets.

A quasar as seen by the Hubble Telescope, but the resolution provided by space and Earth-based telescopes in combination is far greater. ESA/Hubble & NASA 

"Only this space-Earth system could reveal this temperature, and now we have to figure out how that environment can reach such temperatures," said Kovalev in a separate statement. "This result is a significant challenge to our current understanding of quasar jets."

The resolution made possible by this collaboration was so detailed that the team was able to detect the scattering effects on their measurements of variations in the ionized interstellar medium within the Milky Way. “This is like looking through the hot, turbulent air above a candle flame," said first author Dr. Michael Johnson, of the Harvard-Smithsonian Center for Astrophysics. "We had never been able to see such distortion of an extragalactic object before.”

The authors explained that when “averaged over long timescales – days to months – the scattering blurs compact features in the image, resulting in lower apparent brightness temperatures,” part of the reason these extraordinary temperatures had not been recognized before. Over shorter periods the scattering creates the impression of bright and dark spots known as “refractive substructure.”

RadioAstron has been in space since 2011, but it has taken time to analyze the first observations. Knowing that the maximum baseline it can provide is more than double the 171,000 kilometers (106,000 miles) used in this case, astronomers can't wait to see what it discovers next.
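
As a rough, back-of-the-envelope illustration – assuming the standard diffraction limit θ ≈ λ/D, applied to the figures quoted in this article – the 1.35-centimeter wavelength and the 171,000-kilometer baseline give θ ≈ 0.0135 m / 1.71 × 10^8 m ≈ 8 × 10^-11 radians, or roughly 16 microarcseconds: a sharpness no single dish on Earth can approach at radio wavelengths.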

Quasar 3C 273 as seen at different wavelengths showing the effect of the interference of the interstellar medium on the incoming rays. Johnson et al, The Astrophysical Journal.

NextCloud on OpenBSD

$
0
0

NextCloud is an awesome, secure and private alternative to proprietary platforms like Dropbox and Google Drive. Installing NextCloud can be achieved easily with pkg_add nextcloud - but I'd like to do it manually, for better performance and stability.

General

Setting up NextCloud manually offers some room for improvements:

  • PostgreSQL as the database engine. This benefits a multi-user instance when it comes to performance.
  • PHP 7.0 for further performance gains - it is considerably faster than PHP 5.6.
  • Caching and file locking through APCu and Redis.

Caveats:

  • PostgreSQL is more performant for a multi-user setup. It is possible to switch database engines afterwards.
  • FFS might not be the best fit for bulky data (>8TB) or when high IOPS are a requirement.
  • OpenBSD doesn't support RAID stacking (e.g., RAID 1 combined with full-disk encryption) - but NextCloud offers server- and user-level encryption.

Coffee is needed

Step 1: Preparations

This guide should work with the latest -stable and the most recent -current. You might prefer a clean and fresh OpenBSD installation to prevent existing configurations from interfering. Without further ado, brew yourself a cup of coffee, disable any distractions and let's go!

We're going to install NextCloud on the subdomain nc.h3artbl33d.nl and are using regular user johndoe. Take note of this and replace these values in commands and configuration files.

1.1: Login and set some defaults

After logging in to the target machine - the one that is going to run your NextCloud instance - we'll first set some sane defaults.

  1. Enable doas.
# echo 'permit johndoe' >> /etc/doas.conf
  2. Edit /etc/ssh/sshd_config and set these values - unless you explicitly need the features they disable:

    LogLevel VERBOSE
    PermitRootLogin no
    MaxAuthTries 2
    MaxSessions 2
    AllowAgentForwarding no
    AllowTcpForwarding no
    TCPKeepAlive no
    Compression no
    ClientAliveInterval 2
    ClientAliveCountMax 2
  3. Download the pf ruleset I've prepared for you and edit it (specifically, set the correct interface and remove the IPv6 rules if you are not using IPv6). When you are done, move it to the default config and check the syntax. A rough sketch of what such a ruleset can look like follows the commands below.

    $ ftp https://h3artbl33d.nl/pf-nc.txt
    $ doas mv pf-nc.txt /etc/pf.conf
    $ doas pfctl -nf /etc/pf.conf
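
For reference, a minimal sketch of what such a ruleset could look like - just an illustration, not the actual contents of the file above; adapt the ports and interface group to your setup:

# Minimal sketch: default deny, allow SSH and web traffic on the egress interface group
set skip on lo
block return
pass out on egress
pass in on egress proto tcp to (egress) port { 22 80 443 }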

1.2: Download/install the prerequisites

We'll need some files further on, let's make sure we have everything ready.

  1. Install the PostgreSQL and Redis servers (Redis is enabled and used later on):

    $ doas pkg_add postgresql-server redis
  2. Install PHP and some modules. If you are asked which PHP version to install, answer PHP 7 each time.

    $ doas pkg_add php php-curl php-gd php-intl php-mcrypt php-pdo_pgsql php-xml php-zip
  3. Enable the PHP modules.

    $ doas ln -s /etc/php-7.0.sample/* /etc/php-7.0/
  4. Enable these services so that they start automatically at boot. Do not start them yet.

    $ doas rcctl enable postgresql && doas rcctl enable php70_fpm && doas rcctl enable redis
  5. Copy a file into the chroot so the webserver can resolve DNS queries from inside it.

    $ doas mkdir /var/www/etc && doas cp /etc/resolv.conf /var/www/etc/resolv.conf && doas chown -R www:www /var/www/etc

Keyboard warriors prepare

Step 2: Prepare the webserver

We need to prepare the webserver. Obviously, we're using httpd(8) - see this article if you prefer nginx for some reason. We're going to kick this chapter off with httpd itself.

2.1: Enable and configure httpd

Between 6.3-stable and 6.4-current there have been some syntax changes in httpd.conf. Hence, here is the valid configuration for both versions.

6.3-stable: use this configuration as a starting point for your /etc/httpd.conf.

server "nc.h3artbl33d.nl" {
        listen on * port 80
        root "/nextcloud"
        location "/.well-known/acme-challenge/*" {
                root { "/acme", strip 2 }
        }
}

6.4-current: use this configuration as a starting point for your /etc/httpd.conf.

server "nc.h3artbl33d.nl" {
        listen on * port 80
        root "/nextcloud"
        location "/.well-known/acme-challenge/*" {
                root { "/acme" }
                request strip 2
        }
}

Check whether the configuration is deemed valid with doas httpd -n. Moving on to preparing acme-client for the SSL certificates, courtesy of Let's Encrypt. Edit /etc/acme-client.conf:

authority letsencrypt {
        api url "https://acme-v01.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
}
authority letsencrypt-staging {
        api url "https://acme-staging.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-staging-privkey.pem"
}
domain nc.h3artbl33d.nl {
        alternative names { www.nc.h3artbl33d.nl }
        domain key "/etc/ssl/private/nc.h3artbl33d.nl.key"
        domain certificate "/etc/ssl/nc.h3artbl33d.nl.crt"
        domain full chain certificate "/etc/ssl/nc.h3artbl33d.nl.pem"
        sign with letsencrypt
}

And create the corresponding directories:

$ doas mkdir -p -m 700 /etc/acme
$ doas mkdir -p -m 700 /etc/ssl/acme/private
$ doas mkdir -p -m 755 /var/www/acme

Time to fetch the certificates!

$ doas rcctl enable httpd && doas rcctl start httpd && doas acme-client -vAD nc.h3artbl33d.nl

If that was successful, grab the OCSP stapling file.

$ doas ocspcheck -N -o /etc/ssl/nc.h3artbl33d.nl.ocsp.pem /etc/ssl/nc.h3artbl33d.nl.pem

Edit the crontab to automatically renew the certificates and stapling file. Append the following in the crontab (doas crontab -e).

0 0 * * * acme-client nc.h3artbl33d.nl && rcctl reload httpd
0 * * * * ocspcheck -N -o /etc/ssl/nc.h3artbl33d.nl.ocsp.pem /etc/ssl/nc.h3artbl33d.nl.pem && rcctl reload httpd

2.2: Configuring httpd further

6.3-stable: edit /etc/httpd.conf with these values. Do not restart httpd afterwards!

server "nc.h3artbl33d.nl" {
        listen on * tls port 443
        hsts {
                preload
                subdomains
        }
        root "/nextcloud"
        directory index "index.php"
        tls {
                certificate "/etc/ssl/nc.h3artbl33d.nl.pem"
                key "/etc/ssl/private/nc.h3artbl33d.nl.key"
                ocsp "/etc/ssl/nc.h3artbl33d.nl.ocsp.pem"
                ciphers "ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA"
                dhe "auto"
                ecdhe "P-384"
        }
        connection max request body 537919488

        location "/.well-known/acme-challenge/*" {
                root { "/acme", strip 2 }
        }

        # First deny access to the specified files
        location "/db_structure.xml" { block }
        location "/.ht*"             { block }
        location "/README"           { block }
        location "/data*"            { block }
        location "/config*"          { block }

        location "/*.php*" {
                fastcgi socket "/run/php-fpm.sock"
        }
}

server "nc.h3artbl33d.nl" {
        listen on * port 80
        block return 301 "https://nc.h3artbl33d.nl$REQUEST_URI"
}

If using 6.4-current, replace the location "/.well-known/acme-challenge/*" block with

        location "/.well-known/acme-challenge/*" {
                root { "/acme" }
                request strip 2
        }

Prepare for some hard work

Step 3: PHP kung fu and SQL jiujitsu

In the first step, we installed the PostgreSQL server and PHP along with some extensions. With the preconditions met, it is time to get all of this up and running. This is the most intense step, so you might want to grab another cup of coffee. Or tea, if that is more your thing.

3.1: Kicking PostgreSQL online

Since this is the first time we're using PostgreSQL on this machine, we'll need to initialize the database.

$ doas su - _postgresql
$ mkdir /var/postgresql/data
$ initdb -D /var/postgresql/data -U postgres -A md5 -W

Switch back to our regular user, johndoe, by typing exit. Now, let's start the database by issuing a single command.

$ doas rcctl enable postgresql && doas rcctl start postgresql

If you expect to house a busy NextCloud instance, you might want to do some configuration tweaking according to the instructions in cat /usr/local/share/doc/pkg-readmes/postgresql-server-{ver}.
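
As an illustration only - the values below are assumptions to adapt to your hardware, not figures taken from the pkg-readme - such tweaking usually comes down to raising a handful of settings in /var/postgresql/data/postgresql.conf and restarting PostgreSQL afterwards:

# Illustrative starting points for a busier instance - adjust to available RAM and workload
max_connections = 100              # remember to count the php-fpm workers
shared_buffers = 256MB             # often sized around a quarter of available RAM
effective_cache_size = 768MB       # what the OS file cache is expected to hold for us
work_mem = 16MB                    # per-sort memory; keep modest with many connections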

3.2: Compiling PHP modules

As mentioned earlier, for caching and improved performance, we are going to use the APCu and Redis extensions for PHP. We need to compile these ourselves - which isn't as hard as it might sound. First, let's take care of some dependencies. When asked which autoconf version to install, select the most recent one and write the version number down, minus the patch level. E.g., if you are installing autoconf-2.69p2, write down 2.69. We'll need that later on.

$ doas pkg_add autoconf
$ doas ln -s /usr/local/bin/php-7.0 /usr/local/bin/php && doas ln -s /usr/local/bin/phpize-7.0 /usr/local/bin/phpize && doas ln -s /usr/local/bin/php-config-7.0 /usr/local/bin/php-config

Fetch both APCu and Redis for PHP.

$ ftp https://pecl.php.net/get/{apcu-5.1.12.tgz,redis-4.1.1.tgz}
$ tar zxf apcu-5.1.12.tgz && tar zxf redis-4.1.1.tgz && rm apcu-5.1.12.tgz redis-4.1.1.tgz

We'll start with APCu. Remember writing down the autoconf version? This is the part where you'll need it. In the following example, we're using 2.69.

$ cd apcu-5.1.12
$ export AUTOCONF_VERSION=2.69
$ phpize
$ ./configure
$ doas make install

That was the whole process of compiling and installing APCu. Peanuts, isn't it? Let's repeat these steps for Redis:

$ cd ../redis-4.1.1
$ export AUTOCONF_VERSION=2.69
$ phpize
$ ./configure
$ doas make install

Now we have to raise some limits in the PHP configuration - by default, only files of up to two megabytes can be uploaded. Open /etc/php-7.0.ini and edit these values.

memory_limit = 512M
max_input_time = 180
upload_max_filesize = 512M
post_max_size = 512M
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=128
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.revalidate_freq=1
opcache.save_comments=1

Next, edit /etc/php-fpm.ini.

security.limit_extensions =

And to enable both APCu and Redis, there is one more step. Create /etc/php-7.0/cache.ini with the following content.

extension=redis.so
extension=apcu.so
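
To confirm that both extensions actually load - assuming PHP scans /etc/php-7.0/ for additional ini files, which is why we placed cache.ini there - list the loaded modules:

$ php -m | grep -iE 'apcu|redis'

Both names should show up in the output; if they don't, re-check that the two make install runs completed without errors.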

Almost there, maestro

Step 4: Finally, installing NextCloud

We've come quite a long way. Fortunately, we're almost there! Grab the most current version of NextCloud and extract it to /var/www/nextcloud.

$ ftp https://download.nextcloud.com/server/releases/nextcloud-13.0.5.zip
$ doas unzip nextcloud-13.0.5.zip -d /var/www
$ doas chown -R www:www /var/www/nextcloud

Before we can use NextCloud, we need to create a database to store the data. Replace secret-password with a strong passphrase of your liking.

$ doas su - _postgresql
$ psql -d template1 -U postgres
template1=# CREATE USER nextcloud WITH PASSWORD 'secret-password';
template1=# CREATE DATABASE nextcloud;
template1=# GRANT ALL PRIVILEGES ON DATABASE nextcloud to nextcloud;
template1=# \q

Check whether you are still running as the _postgresql user with whoami - if so, just type exit to switch back to johndoe. Start the required services. We enabled them earlier, so they will also start automatically at boot.

$ doas rcctl start httpd redis php70_fpm postgresql
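
Optionally, verify that everything came up - rcctl check reports each daemon as ok or failed:

$ doas rcctl check httpd redis php70_fpm postgresql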

Fire up your browser and head to your subdomain. You should be greeted there by the installation wizard. Select PostgreSQL as your database, using 127.0.0.1:5432 as the server, nextcloud as the user and the password you've set. Take care to place the data folder outside the document root, e.g. in /var/www/ncdata.
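
You might also want to create that data folder yourself before running the wizard, since /var/www itself is owned by root and the wizard might not be able to create it on its own:

$ doas mkdir -p /var/www/ncdata
$ doas chown www:www /var/www/ncdata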

After the installation, edit /var/www/nextcloud/config/config.php and add these lines before the closing );.

  'memcache.local' => '\OC\Memcache\APCu',
  'memcache.locking' => '\OC\Memcache\Redis',
  'redis' => array(
    'host' => '127.0.0.1',
    'port' => 6379,
  ),

Step 5: Troubleshooting

If something is amiss, there are two logs you should check first and foremost: /var/www/logs/error_log and nextcloud.log in your NextCloud data directory.
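
For example - assuming the data directory suggested earlier, /var/www/ncdata; adjust the second path if yours differs:

$ doas tail -f /var/www/logs/error_log
$ doas tail -f /var/www/ncdata/nextcloud.log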

Feel free to reach out to me on Mastodon or via e-mail (hello@h3b.nl) if you require additional help.

FAQ - Frequently Asked Questions (completely made-up by the author)

Q: The error log mentions "the Redis server went away"

A: A somewhat cryptic error message, I agree. Luckily, the Redis server didn't go on holiday or anywhere else out of your reach. The message merely means that the Redis server is most likely offline. Check with rcctl check redis.

Q: I got a nasty 500/520 error while trying to access NextCloud

A: Most likely, PHP went belly up or it won't start due to an error in the configuration. Check the daemon status with doas rcctl check php70_fpm or try to restart it with doas rcctl restart php70_fpm.

Q: How about crypto, how safe is my data?

A: OpenBSD is the most secure OS out there. Having said that, nothing is one hundred percent secure. If someone says otherwise, they are flat-out lying to you. This guide follows best practice methods. There is an additional step you could take, that this guide doesn't cover: using encryption from within the NextCloud webinterface. Be cautious, losing either your passphrase or your keys will result in inaccessible data. There is a recovery method, but each user has to allow that in the individual profile. It's disabled by default - being the sanest default.

Q: Why go to all this trouble to get NextCloud up and running - when there is a NextCloud package available?

A: As mentioned before, purely for performance reasons. Furthermore, the 6.3-stable package uses PHP 5.6 by default. Having said that, you could always use the package. If you are running -current, almost everything is available through packages - thanks to the awesome Gonzalo - except for APCu.

Q: Are you mad? Spending this much time on a tutorial?

A: Whether I am mad is arguable but off-topic. I like sharing knowledge, writing and am honored that at least someone got this far. Any and all feedback is appreciated but completely optional.

Q: Can you write a tutorial about X?

A: I can always check whether it matches with my interest and whether I deem it worth my time. Feel free to drop me a message.


Military robots are getting smaller and more capable

$
0
0

ON NOVEMBER 12th a video called “Slaughterbots” was uploaded to YouTube. It is the brainchild of Stuart Russell, a professor of artificial intelligence at the University of California, Berkeley, and was paid for by the Future of Life Institute (FLI), a group of concerned scientists and technologists that includes Elon Musk, Stephen Hawking and Martin Rees, Britain’s Astronomer Royal. It is set in a near-future in which small drones fitted with face-recognition systems and shaped explosive charges can be programmed to seek out and kill known individuals or classes of individuals (those wearing a particular uniform, for example). In one scene, the drones are shown collaborating with each other to gain entrance to a building. One acts as a petard, blasting through a wall to grant access to the others.

“Slaughterbots” is fiction. The question Dr Russell poses is, “how long will it remain so?” For military laboratories around the planet are busy developing small, autonomous robots for use in warfare, both conventional and unconventional. In America, in particular, a programme called MAST (Micro Autonomous Systems and Technology), which has been run by the US Army Research Laboratory in Maryland, is wrapping up this month after ten successful years. MAST co-ordinated and paid for research by a consortium of established laboratories, notably at the University of Maryland, Texas A&M University and Berkeley (the work at Berkeley is unrelated to Dr Russell’s). Its successor, the Distributed and Collaborative Intelligent Systems and Technology (DCIST) programme, which began earlier this year, is now getting into its stride.

In 2008, when MAST began, a spy drone that you could hold in the palm of your hand was an idea from science fiction. Such drones are now commonplace. Along with flying drones, MAST’s researchers have been developing pocket-sized battlefield scouts that can hop or crawl ahead of soldiers. DCIST’s purpose is to take these autonomous robots and make them co-operate. The result, if the project succeeds, will be swarms of devices that can take co-ordinated action to achieve a joint goal.

A hop, skip and jump away

At the moment, America’s defence department is committed to keeping such swarms under human control, so that the decision to pull a trigger will always be taken by a person rather than a machine. The Pentagon is as alarmed by the prospect of freebooting killer robots as the FLI is. But, as someone said of nuclear weapons after the first one was detonated, the only secret worth keeping is now out: the damn things work. If swarms of small robots can be made to collaborate autonomously, someone, somewhere will do it.

Existing small drones are usually polycopters—helicopters that have a set of rotors (generally four or six) arranged at the vertices of a regular polygon, rather than a single one above their centre of gravity. Some MAST researchers, however, think they have alighted on something better.

Their proposed replacement is the cyclocopter. This resembles an airborne paddle steamer. Though the idea of cyclocopters has been around for a while, the strong, lightweight materials needed to make them have hitherto been unavailable and the computing tools needed to design them have only recently been created. Now that those materials and tools do exist, things are advancing rapidly. Over the course of the MAST project the researchers have shrunk cyclocopters from being behemoths weighing half a kilogram to svelte devices that tip the scales at less than 30 grams. Such machines can outperform polycopters.

Cyclocopter aerodynamics is more like that of insects than of conventional aircraft, in that lift is generated by stirring the air into vortices rather than relying on its flow over aerofoils. For small cyclocopters this helps. Vortex effects become proportionately more powerful as an aircraft shrinks, but, in the case of conventional craft, including polycopters, that makes things worse, by decreasing stability. Cyclocopters get better as they get smaller.

They are also quieter. As Moble Benedict of Texas A&M, one of the leaders of the cyclocopter project, observes, “aerodynamic noise is a strong function of the blade-tip speed”—hence the whup-whup-whup of helicopters. The blade-tip speeds of cyclocopters are much lower. That makes them ideal for spying. They also have better manoeuvrability, and are less disturbed by gusts of wind.

Dr Benedict reckons cyclocopters are about two years away from commercial production. Once that happens they could displace polycopters in many roles, not just military ones. But they are not the only novel technology in which MAST has been involved. The programme has also worked on robots that hop.

One of the most advanced is Salto, developed by the Biomimetic Millisystems Laboratory at the University of California, Berkeley. Salto (pictured) is a monopod weighing 98 grams that has a rotating tail and side-thrusters. These let it stabilise itself and reorient in mid-leap. That gives it the agility to bounce over uneven surfaces and also to climb staircases.

Salto’s speed (almost two metres a second) puts huge demands on its single leg. Ron Fearing, one of the electrical engineers developing it, puts things thus: “imagine a cheetah running at top speed using only one leg, and then cut the amount of time that leg spends on the ground in half.” As with cyclocopters, the materials and processing power needed to do this have only recently come into existence.

Dr Fearing says Salto and its kin are quieter than aerial drones and can operate in confined spaces where flying robots would be disturbed by turbulence reflected from the walls. They can also travel over terrain, such as collapsed buildings, that is off-limits to wheeled vehicles. Salto still needs work. In particular, it needs to be able to cling more effectively to what it lands on. Dr Fearing uses the analogy of a squirrel leaping from branch to branch. Arriving at the next branch is only half the battle. The other half is staying there. Once that is solved, though, which it should be in the next year or two, small non-flying robots that can go where their wheeled, or even track-laying, brethren cannot should become available for practical use.

Bouncing over the rubble of a collapsed building is not the only way to explore it. Another is to weave through the spaces between the debris. Researchers at the Biomimetic Millisystems lab are working on that, too. Their solution resembles a cockroach. Its body is broad and flat, which gives it stability but also permits it to crawl through narrow spaces—if necessary by going up on one side. Should it tip over whilst attempting this, it has wing-like extensions it can use to flip itself upright again.

Getting into a building, whether collapsed or intact, is one thing. Navigating around it without human assistance is quite another. For this purpose MAST has been feeding its results to the Defence Advanced Research Projects Agency (DARPA), America’s main federal military-research organisation. According to Brett Piekarski, who led MAST and is now in charge of DCIST, the Fast Lightweight Autonomy (FLA) programme at DARPA will continue MAST’s work with the aim of developing small drones that can “ingress and egress into buildings and navigate within those buildings at high speeds”. Some of that has already been done. In June DARPA reported that polycopters souped up by the FLA programme were able to slalom through woodlands, swerve around obstacles in a hangar and report back to their starting-point, all by themselves.

Unity is strength

The next challenge—the one that people like Dr Russell particularly worry about—is getting the robots to swarm and co-ordinate their behaviour effectively. Under the aegis of MAST, a group from the General Robotics, Automation, Sensing & Perception (GRASP) laboratory at the University of Pennsylvania did indeed manage to make drones fly together in co-ordinated formations without hitting each other. They look good when doing so—but, to some extent, what is seen is an illusion. The drones are not, as members of a swarm of bees or a flock of birds would be, relying on sensory information they have gathered themselves. Instead, GRASP’s drone swarms employ ground-based sensors to track individual drones around, and a central controller to stop them colliding.

That is starting to change. A farewell demonstration by MAST, in August, showed three robots (two on the ground and one in the air) keeping station with each other using only hardware that was on board the robots themselves. This opens the way for larger flocks of robots to co-ordinate without outside intervention.

Moreover, as that demonstration showed, when drones and other robots can routinely flock together in this way, they will not necessarily be birds of a feather. “Heterogeneous group control” is a new discipline that aims to tackle the thorny problem of managing units that consist of various robots—some as small as a postage stamp, others as large as a jeep—as well as human team members. Swarms will also need to be able to break up into sub-units to search a building and then recombine once they have done so, all in a hostile environment.

Such things are the goals of DCIST. The first tranche of grants to these ends, some $27m of them, has already been awarded to the University of Pennsylvania, the Massachusetts Institute of Technology, the Georgia Institute of Technology and the University of California, Berkeley. When DCIST itself wraps up, probably in 2022, the idea of Slaughterbots may seem a lot less fictional than it does now.

Japan starts space elevator experiments

$
0
0

Shizuoka University and contractor Obayashi aim to launch two small (10 sq cm) satellites connected by a 10m steel cable from the International Space Station.

Containers on the cable will move back and forth, recorded by a camera.

Obayashi envisages a space elevator using six oval-shaped cars, each measuring 18m x 7.2m and holding 30 people, connected by a cable from a platform on the sea to a satellite 36,000 kilometers above Earth.

The elevator would be powered by an electric motor pulley.

The cars would travel at up to 200kph and arrive at the space station eight days after departure from Earth.
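
As a quick sanity check on those figures, assuming a constant 200kph over the full 36,000 kilometers: 36,000 km ÷ 200 km/h = 180 hours, or roughly seven and a half days - broadly consistent with the stated eight-day journey.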

The total length of a cable to be used for the vehicle will be 96,000 kilometers, and the total cost is estimated at $9 billion.

The cost of transport is expected to be about one-hundredth of that of the space shuttle.

Carbon nanotubes are the most likely material to be used for the cables.

Has elegance betrayed physics?

$
0
0


A Failure of Academic Quality Control [pdf]

How parchment is made

$
0
0

Short Cuts

Mary Wellesley

The work of making parchment is unglamorous, and sometimes it smells like the inside of a boxing glove: like cheese and sweat and hard work. There is only one firm of parchment makers left in the UK. There are places elsewhere in the world where parchment is produced, but the process is partly mechanised. At William Cowley’s – located somewhat improbably near Milton Keynes – everything is still done by hand. What happens there is probably not much different from what was done in the Hellenistic city of Pergamon during the reign of Eumenes II (197-159 BCE). Eumenes was an avid bibliophile and built a library to rival that of Alexandria; at its peak it contained 200,000 volumes. According to Pliny, Ptolemy of Egypt was so enraged by his neighbour’s acquisitive habits that he banned the sale of papyrus. Eumenes instructed his subjects to find an alternative writing material, and parchment was born. Where plant-based papyrus was fibrous, brittle and liable to break, parchment was flexible, durable and milky smooth. The material gave its name to the city: pergamenum is the Latin word for parchment.

In making parchment, the first stage involves working with whole goat or calf hides (the word ‘vellum’ is often used to distinguish calf hide from other kinds of hide), which come fresh from the abattoir, covered in hair. They still bear the traces of a lopped-off head and the beginnings of a tail. I saw one that had the remnants of the animal’s castrated testicles tied up in a rubber band. In a storeroom at Cowley’s was a huge pile of these hides, folded up, like furry pillowcases waiting to be laundered. The hides are soaked for two weeks in a vat of lime (not the citrus variety). At the end of this, they come out sodden on the hair side and slippery on the flesh side. The lime breaks down the follicles and loosens the hair. I watched as a hide was fished out and thrown over a wooden stump with a wet thwack. It lay hair-side up, liquid dripping from its curled brown ends. The stump is a smooth-topped wooden block, which comes to just below chest height. Once there, the hair (known as the nap) is removed with a long, curved knife (called a scudder) which has wooden handles at both ends. I had a go at this and the hair came away like the skin from a potato – the sensation was satisfying, if disconcerting. What is exposed is unmistakably flesh: faintly translucent, with the suggestion of veins beneath. There is something glaring about such a large piece of disembodied skin; I was reminded of The Silence of the Lambs and of those statues of Saint Bartholomew, his flayed skin slung over his shoulder like a shawl.

That I was struck by the skin-ness of the hide in front of me isn’t just a reflection of my modern sensibility. Several texts from the Middle Ages show a keen awareness of the materials on which they are written. The Middle English poem The Long Charter of Christ purports to be a legal document in which Christ grants salvation to mankind. In one version of the work, in a parchment manuscript in the British Library, Christ describes the events of the Passion:

To a pilour y was py3t
I tugged and towed al a ny3t
And washen on myn owne blode
And strey3t y steyned on þe rode
Streyned to drye on a tre
As parchemyne ou3t for to be
Hyreþ now & 3e schul wyten [know]
How þis chartre was wryten
Upon my face was mad þe ynke
[ … ]
þe penne þat þe lettres was with wryten
Of scorges þat I was with smyten

Imagine the sensation for the devotional reader on encountering a text written on parchment, with the words of John 1:14 – ‘And the Word became flesh and dwelt among us’ – thrumming in their ears.

After the hair is removed, the skins are dried and stretched across a frame, known as a herse. The word comes from the French herse, meaning a harrow, and ultimately from the Latin hirpex. It is a cousin of ‘hearse’, which originally meant a frame for carrying lighted tapers over a coffin. The funereal connotation seems appropriate. Parchment skins cannot be nailed to a frame because they would rip during the drying process. Instead, the edge of the material is gathered in bunches around balls of newspaper (in the medieval period, this would have been done with pebbles known as pippins) and tied with string to pegs along the length of the frame. Treated in this way, the skin and frame take on the appearance of a rustic trampoline.

As the skin is stretched by twisting the pegs, hot water is applied and any remaining fat, especially from the meat side, is removed using a knife shaped like a crescent moon, called a lunellum or sometimes a lunellarium (at Cowley’s they simply call it a luna). The process of stretching and scraping is repeated several times before the frame is put into the oven, which is really a large drying room. I made the mistake of venturing into the oven, which was hot and milky-smelling in a heady way. My curiosity died there.

‘Pellis de carne, de pelle caro removetur: tu de carne tua carnea vota trahe,’ Conrad de Mure (c.1210-81) wrote. ‘Skin from the flesh, flesh from the skin is pulled: you pull from the flesh your fleshly desires.’ The meaning is a little obscure, as ‘vota’ is ambiguous, but de Mure’s point seems to be that the labour of preparing parchment makes one purer and closer to God. (A similar idea occurs in Piers Plowman, which compares the cleaning of parchment to the shedding of pride: ‘Of pompe and of pride þe parchemyn decourreþ [peels away]’.) I didn’t feel my fleshly desires leaving me altogether, despite the unsexiness of the process, but I did see how this was a labour of love – the kind of labour that lends itself to religious significance, with its rituals of purification and fastidious repetitions.

After the oven, the skins are ready for the final stage of their preparation. The last layer is shaved off, removing any dark patches or traces of hair, this time with a slightly larger luna. Picking up the knife, I was sure I was about to scrape too hard and break the skin. Parchment holes are a common feature of medieval manuscripts. But the skin is remarkably strong; I could have hacked at it without breaking it. One of the main uses for parchment today is drumskins. If you get the correct angle – a neat 45 degrees – the blade makes a high-pitched trill. Do it right and the blade will sing, I was told. There is something early medieval about the idea of a blade with a voice. It reminded me of several of the Anglo-Saxon riddles from the Exeter Book (one of the four manuscripts which contain the majority of surviving Old English verse). There, disgruntled tools are given voice. The opening of Riddle Five reads:

Ic eom anhaga   iserne wund,
bille gebennad,   beadoweorca sæd,
ecgum werig.

I am a lonely one, wounded by iron,
beaten by sword, burned out by battle-work,
weary of blades.

The usual solution to this is a shield, but a chopping block is also sometimes proposed. The voice of the grumpy chopping block is characteristic of an Anglo-Saxon tendency to animate objects. One of the runic inscriptions on the Franks Casket alludes to its own creation from hronæsban (whale bone) and the Alfred Jewel announces ‘AELFRED MEC HEHT GEWYRCAN’ (‘Alfred caused me to be made’). It’s a feature of a world-view that sees objects as things of value, not to be readily disposed of. In some senses the Anglo-Saxons were more materialistic than we are.

Parchment itself is an emblem of a pre-disposable culture: it is built to last. You need only look at the almost pristine pages of the Codex Sinaiticus, made sometime in the fourth century CE, to recognise this. Cheap 20th-century books with glued spines and paper that withers like an autumn leaf present a greater challenge to library conservation departments than parchment manuscripts. Unlike many of the materials we use today, parchment was often recycled. It was cut up to make new bindings, fill holes or repair damage. Sometimes it was scraped clean of its writing and used again, leaving ghostly palimpsests for scholars to uncover. Cowley’s is a place that plies an ancient trade, but also a place that prizes what is durable and recyclable. You get the sense that little there is thrown away unnecessarily. Everything shows the marks of use and reuse: the knives glossy and smooth from years of handling, the wooden blocks worked on again and again. What riddling complaints might these tools make?

The Camera Is the Lidar

It was clear when Ouster started developing the OS-1 three years ago that deep learning research for cameras was outpacing lidar research. Lidar data has incredible benefits — rich spatial information and lighting agnostic sensing to name a couple — but it lacks the raw resolution and efficient array structure of camera images, and 3D point clouds are still more difficult to encode in a neural net or process with hardware acceleration.

With the tradeoffs between both sensing modalities in mind, we set out to bring the best aspects of lidars and cameras together in a single device from the very beginning. Today we’re releasing a firmware upgrade and update to our open source driver that deliver on that goal. Our OS-1 lidar now outputs fixed resolution depth images, signal images, and ambient images in real time, all without a camera. The data layers are perfectly spatially correlated, with zero temporal mismatch or shutter effects, and have 16 bits per pixel and linear photo response. Check it out:

Simultaneous real time image layers output from the OS-1. What you see from top to bottom is ambient, intensity, range, and point cloud — ALL from just our lidar. Notice the ambient image captures the cloudy sky and shadows from trees and vehicles.

The OS-1’s optical system has a larger aperture than most DSLRs, and the photon counting ASIC we developed has extreme low light sensitivity so we’re able to collect ambient imagery even in low light conditions. The OS-1 captures both the signal and ambient data in the near infrared, so the data closely resembles visible light images of the same scenes, which gives the data a natural look and a high chance that algorithms developed for cameras translate well to the data. In the future we’ll work to remove fixed pattern noise from these ambient images, but we wanted to let customers get their hands on the data in the meantime!

We’ve also updated our open source driver to output these data layers as fixed resolution 360° panoramic frames so customers can begin using the new features immediately, and we’re providing a new cross-platform visualization tool built on VTK for viewing, recording, and playback of both the imagery and the point clouds side by side on Linux, Mac, and Windows. The data output from the sensor requires no post processing to achieve this functionality — the magic is in the hardware and the driver simply assembles streaming data packets into image frames.
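
To make the idea of assembling packets into frames concrete, here is a rough Python sketch. The packet layout, field names, and the 64 × 1024 frame size below are hypothetical placeholders for illustration; they are not the actual OS-1 packet format or the real driver’s API.

# Hypothetical sketch: scatter streaming lidar packets into fixed-resolution image frames.
# The packet structure (measurement_id plus per-channel range/signal/ambient values) is an
# assumption for illustration only.
import numpy as np

H, W = 64, 1024  # assumed number of beams (rows) and measurements per revolution (columns)

def assemble_frame(packets):
    """Place each column of measurements into H x W range, signal and ambient images."""
    rng = np.zeros((H, W), dtype=np.uint16)
    sig = np.zeros((H, W), dtype=np.uint16)
    amb = np.zeros((H, W), dtype=np.uint16)
    for pkt in packets:                      # one packet carries a few columns of the panorama
        for col in pkt["columns"]:
            m = col["measurement_id"]        # which column of the 360° frame this is
            for ch, (r, s, a) in enumerate(col["returns"]):  # one return per beam
                rng[ch, m], sig[ch, m], amb[ch, m] = r, s, a
    return rng, sig, amb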

Our new open source visualizer. Full unedited video: https://www.youtube.com/watch?v=LcnbOCBMiQM

Customers who have been given early access to the update have been blown away, and we encourage anyone curious about the OS-1 to watch our unedited videos online, or download our raw data and play it back yourself with the visualizer.

Firmware update page: https://www.ouster.io/downloads

Github & sample data: www.github.com/ouster-LIDAR

This Is Not a Gimmick.

We’ve seen multiple lidar companies market a lidar/camera fusion solution by co-mounting a separate camera with a lidar, performing a shoddy extrinsic calibration, and putting out a press release for what ultimately is a useless product. We didn’t do that, and to prove it we want to share some examples of how powerful the OS-1 sensor data can be, which brings us back to deep learning.

Because the sensor outputs fixed resolution image frames with depth, signal, and ambient data at each pixel, we’re able to feed these images directly into deep learning algorithms that were originally developed for cameras. We encode the depth, intensity, and ambient information in a vector much like a network for color images would encode the red, green, and blue channels at the input layer. The networks we’ve trained have generalized extremely well to the new lidar data types.
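
To make the RGB analogy concrete, here is a minimal sketch (not Ouster’s actual training code) that stacks depth, intensity and ambient frames into a three-channel tensor and runs it through an ordinary 2D convolution. PyTorch is used purely for illustration, and the frame size and random stand-in arrays are assumptions.

# Minimal sketch: treat depth/intensity/ambient exactly like the R/G/B channels of an image.
# Shapes and the random stand-in arrays are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

H, W = 64, 1024                                     # assumed frame resolution
depth = np.random.rand(H, W).astype(np.float32)     # stand-ins for real sensor frames
intensity = np.random.rand(H, W).astype(np.float32)
ambient = np.random.rand(H, W).astype(np.float32)

# Stack the three layers into a (batch, channels, H, W) tensor, just as one would
# stack R, G and B for a camera image.
x = torch.from_numpy(np.stack([depth, intensity, ambient], axis=0)).unsqueeze(0)

# Any image network can consume this; its first layer simply expects 3 input channels.
first_layer = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
features = first_layer(x)
print(features.shape)  # torch.Size([1, 16, 64, 1024])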

As one example, we trained a per pixel semantic classifier to identify driveable road, vehicles, pedestrians, and bicyclists on a series of depth and intensity frames from around San Francisco. We’re able to run the resultant network on an NVIDIA GTX 1060 in real time and achieved encouraging results, especially considering this is the first implementation we’ve attempted. Take a look:

Full video: https://www.youtube.com/watch?v=JxR9MasA9Yc

Because all data is provided per pixel, we’re able to seamlessly translate the 2D masks into the 3D frame for additional real time processing like bounding box estimation and tracking.
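
A rough sketch of that 2D-to-3D lift is shown below. It assumes idealized, evenly spaced beam angles; a real pipeline would use the sensor’s calibrated beam intrinsics, so treat the angles, frame size and label values here as placeholders.

# Minimal sketch: lift a per-pixel 2D class mask into 3D points via the range image.
# Beam azimuth/elevation angles are idealized assumptions, not real sensor intrinsics.
import numpy as np

H, W = 64, 1024
rng = np.random.rand(H, W).astype(np.float32) * 50.0    # stand-in range image, in metres
mask = np.random.randint(0, 4, size=(H, W))              # stand-in per-pixel class labels

azimuth = np.linspace(0.0, 2.0 * np.pi, W, endpoint=False)       # one angle per column
elevation = np.linspace(np.radians(16.6), np.radians(-16.6), H)  # one angle per row
az, el = np.meshgrid(azimuth, elevation)                          # both shaped (H, W)

# Spherical -> Cartesian: every mask pixel maps directly to a 3D point.
x = rng * np.cos(el) * np.cos(az)
y = rng * np.cos(el) * np.sin(az)
z = rng * np.sin(el)

vehicle_class = 2                                         # hypothetical label id
keep = mask == vehicle_class
points = np.stack([x[keep], y[keep], z[keep]], axis=1)    # (N, 3) points for that class
print(points.shape)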

In other cases, we’ve elected to keep the depth, signal, and ambient images separate and pass them through the same network independently. As an example, we took the pre-trained network from DeTone et al.’s SuperPoint project [link] and ran it directly on our intensity and depth images. The network is trained on a large number of generic RGB images, and has never seen depth/lidar data, but the results for both the intensity and depth images are stunning:

Full video: https://www.youtube.com/watch?v=igsJxrbaejw

On careful inspection, it’s clear that the network is picking up different key points in each image. Anyone who has worked on lidar and visual odometry will grasp the value of the redundancy embodied in this result. Lidar odometry struggles in geometrically uniform environments like tunnels and highways, while visual odometry struggles with textureless and poorly lit environments. The OS-1’s camera/lidar fusion provides a multi-modal solution to this long standing problem.

It’s results like these that make us confident that well fused lidar and camera data are much more than the sum of their parts, and we expect further convergence between lidars and cameras in the future.

Look out for more exciting developments from Ouster in the future!

LINKS

  1. Videos:

Sunny drive with ambient data:

Pixelwise semantic segmentation: https://www.youtube.com/watch?v=JxR9MasA9Yc

Superpoint: https://www.youtube.com/watch?v=igsJxrbaejw

2. Firmware update page: https://www.ouster.io/downloads

3. Github & sample data: www.github.com/ouster-LIDAR

Ask HN: Reading recommendations for understanding food allergies?

The prevalence of food allergies is increasing in the USA, at least, if not around the world. In my case, I have two daughters with serious food allergies and sensitivities. I’d like to educate myself on this issue, and I’d be grateful for some recommendations about where to start. Thanks!

How to Design for the Modern Web

Every now and then I see a page that hasn’t jumped on the bandwagon; we just can’t have that, so let’s run through how to design for the modern web.

These practices are immutable, you must follow them because I’m a developer advocate. They’re also in effect on quite a few of the top websites as ranked by Alexa, but most importantly, developer advocate.

Let Users Know About Your Mobile Application

The very first thing you must do when a user visits your website is to show them a big modal dialog telling them that they should just install the mobile application instead.

A neat little trick is to make the link to close the modal small and put it very close to the link that lets the user install the application. This will make users much happier, as it’s so easy to install the application and not accidentally close the dialog!

Let Users Know About Cookies

Should the user continue and insist on using the web version we need to let them know that we use cookies to track them.

Let Users Know They Can Sign Up

Sometimes a link is not enough, a modal can be pretty helpful here in letting the user know they can actually sign up for your website.

Research has shown that modals that cannot be closed have the best conversion rate.

Block Users from Europe

Europeans nowadays have these pesky laws which muddy the waters on what we are allowed to track. The best practice here is just to do nothing, continue tracking everything about your users and just block Europeans from accessing the site.

Denmark is a failed socialist cupcake state anyway; its users are not worth the effort.

Let Users Know About Your Mobile Application

It’s good practice to show the user the dialog again when a user clicks on any link, most likely they just accidentally clicked away the first time.

Again we need to help our users out here, so we make the cancel link very small to minimize the chances of it accidentally being tapped. Happy users everywhere!

Allow Users To Opt-Out

Now it’s very important that we are not intrusive to the user, so we must allow them to opt-out of the mobile application modals.

Best practice is to put it somewhere the user will easily spot it, like inside one of the account preferences pages.

Advertise Your Application

If the user should opt out of the prompts for the mobile application, we can still get them in the long run. Advertising the mobile application somewhere on the website works great.

Eventually the users will give in and convert!

Always Bet on JavaScript

These modals obviously require JavaScript, and of course it’s important to have endless scrolling, but make sure you future-proof yourself by using the latest framework. You may think “oh it’s only a couple of modals” today. But in the future, it may be many many more modals and oh boy! When that happens you’ll regret you did not make an isomorphic application with React and code splitting!

Conclusion

Now that you know these best practices for modern web development, make sure you apply them everywhere. You are now also certified and ready to apply for top ranking sites like Reddit and Medium as long as you remember these simple rules during the interview process.

Don’t know anything about web development at all? Don’t worry, you can just take a week-long bootcamp!

Already a web developer? Buy the C programming language book here and get out while you still can.

Show HN: Jott – A minimal tool for quickly writing and sharing notes

A minimal tool for quickly writing and sharing notes. Check out https://jott.live for a demo.

Website

Navigate to the site and set a title in the 'name' field. To set a key for editing the note, use 'name#key' in that field.

  • /note/<note-name> will return the default HTML rendering of the note.
  • /texdown/note/<note-name> will return a minimal TeXDown rendering of the note. Example
  • /raw/note/<note-name> will return the raw note. (Useful for the command line; see the example after this list.)
  • /edit/note/<note-name> will open a basic editor for the note.
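
For example, the raw endpoint makes it trivial to pull a note into a script. A minimal sketch using only the Python standard library (the base URL and note name are assumptions: the public demo instance and an example note):

# Minimal sketch: fetch the plain-text body of a note via the /raw/note/ endpoint.
import urllib.request

def read_note(name, base="https://jott.live"):
    with urllib.request.urlopen(f"{base}/raw/note/{name}") as resp:
        return resp.read().decode("utf-8")

print(read_note("my_test_note"))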

Command line

The note script in jott/scripts makes it easy to upload and read notes from the command line.

$ echo "this is a test" | note my_test_note secret_password
Success! Note "my_test_note" saved
$ note my_test_note
this is a test
$ echo "updating without the key" | note my_test_note
Note already saved with different key
$ note my_test_note
this is a test
$ note -d my_test_note secret_password
Success! Note "my_test_note" deleted

Although you can use https://jott.live to test out this project, do not rely on it for anything important.

If you find this useful, I'd recommend hosting your own instance. It is quite lightweight.

Requirements:

  • flask (pip install flask)

Run the server with

FLASK_ENV=prod python3 main.py
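
For a sense of how lightweight such a server can be, here is a hypothetical Flask sketch of just a raw-read and a save endpoint with an in-memory store. It is not the actual Jott implementation; the route behaviour and names are illustrative assumptions.

# Hypothetical sketch of a tiny note server; not the real Jott code.
import flask

app = flask.Flask(__name__)
notes = {}  # in-memory store for illustration; a real instance would persist notes

@app.route("/raw/note/<name>", methods=["GET"])
def raw_note(name):
    # Return the stored text, or a 404 if the note does not exist.
    if name not in notes:
        return "Note not found", 404
    return notes[name], 200, {"Content-Type": "text/plain"}

@app.route("/note/<name>", methods=["POST"])
def save_note(name):
    # The request body becomes the note text (no key handling in this sketch).
    notes[name] = flask.request.get_data(as_text=True)
    return 'Success! Note "%s" saved' % name

if __name__ == "__main__":
    app.run()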

Kevin McCarthy leads charge against Silicon Valley

House Majority Leader Kevin McCarthy is leading the charge against President Trump’s new favorite punching bag: big tech.

The California Republican, who hopes to replace Speaker Paul Ryan (R-Wis.) next year, has been aggressively promoting a campaign to “stop the bias,” referring to what critics say is a pattern of discrimination against conservative voices on social media.

To that end, Twitter CEO Jack Dorsey is scheduled to testify on Capitol Hill this week at a hearing requested by McCarthy.

“I’ve had many conversations with the president about how we have to stop this bias,” McCarthy, one of Trump’s closest allies on Capitol Hill, told Fox News on Thursday.

“I’ve spoken to Jack Dorsey throughout the month,” McCarthy added. “He and I philosophically disagree, but we do agree on one thing: We believe in the First Amendment. But we also believe in transparency and accountability.”

The majority leader’s recent crusade against social media comes at a time when Trump has ramped up his own rhetoric against the tech industry. His eagerness to champion the cause could endear him to both the White House and conservative lawmakers -- two constituencies that could be crucial to securing the Speakership.

Trump recently asserted that Google and other platforms are “rigged” against him, an accusation that Google rejected, and one that came on the heels of allegations from conservatives that Twitter has been “shadow banning” certain Republicans so that their accounts are less visible to users.

That controversy started after prominent conservatives aligned with Trump, including Rep. Matt Gaetz (R-Fla.), House Freedom Caucus Chairman Mark Meadows (R-N.C.) and Rep. Jim Jordan (R-Ohio), failed to appear on Twitter’s auto-populated drop-down search box when users typed in their names.

The company said that it does not “shadow ban” according to political ideology, but acknowledged that its attempts to crack down on hate speech have unintentionally reduced search results for lawmakers from both parties.

McCarthy was quick to take up the mantle on the issue, tweeting more than a dozen times last month with the hashtag “stop the bias.” He also made the media rounds to step up pressure on Dorsey to publicly testify on Capitol Hill.

But not all of McCarthy’s efforts on that front were successful. One of his tweets attempting to demonstrate censorship of conservatives drew criticism for being misleading.

After sharing a screenshot of a tweet from Fox News host Laura Ingraham that was covered by language warning of “potentially sensitive content,” Twitter users were quick to point out that Ingraham’s tweet was covered up due to settings in McCarthy’s own Twitter account, not because of a company campaign to silence conservative voices.

Still, the crusade to keep the issue of alleged anti-conservative bias in the spotlight could earn McCarthy some political capital. One of the lawmakers who Republicans say has been targeted by Twitter’s “shadow banning” practice is Jordan, a leader of the Freedom Caucus who is also running for Speaker.

While Jordan may struggle to secure the 218 votes needed to win the gavel, his far-right group has the power to veto any Speaker hopeful if Republicans retain control of the House in the midterm elections.

The House Energy and Commerce Committee on Wednesday will hear testimony from Dorsey, but it’s unclear whether Congress or the Trump administration will take any action against tech and social media companies, which lawmakers on both sides of the aisle have been reluctant to regulate further.

Proposed steps for regulation, however, could include making Facebook and other social media platforms a public utility, forcing Google to be more transparent about its algorithms and making it easier for individuals to sue technology companies.

McCarthy says all options are on the table.

“Congress is going to look at everything, because of how powerful they have become,” he told Fox News.

US Court of Appeals: An IP address isn't enough to identify a pirate

Why it matters: Judge rules that copyright trolls need more than just an IP address if they want to go after copyright infringement. An IP is not enough proof to tie a person to a crime.

In a win for privacy advocates and pirates, the Ninth Circuit Court of Appeals ruled that an IP address alone is not enough to go after someone for alleged copyright infringement. They ruled that being the registered subscriber of an infringing IP address does not create a reasonable inference that the subscriber is also the infringer.

The case dates back to 2016 and has been playing out in the legal system ever since. The creators of the film 'The Cobbler' alleged that Thomas Gonzales had illegally downloaded their movie and sued him for it.

Gonzales was a Comcast subscriber and had set up his network with an open Wi-Fi access point. At some point, someone had used his network to download the movie and the film creators captured Gonzales's IP address.

The judge stated that in order for a proper case, the copyright owners would need more than just an IP address. This is often difficult to provide since it is challenging to prove who was connected to what and when. This case is made even more challenging since Gonzales's network was open and anyone could have downloaded the movie.

The new ruling upholds a previous ruling by a lower district court on the same case. The Appeals Court issued the following statements:

In this copyright action, we consider whether a bare allegation that a defendant is the registered subscriber of an Internet Protocol (‘IP’) address associated with infringing activity is sufficient to state a claim for direct or contributory infringement. We conclude that it is not.

The direct infringement claim fails because Gonzales’s status as the registered subscriber of an infringing IP address, standing alone, does not create a reasonable inference that he is also the infringer. Because multiple devices and individuals may be able to connect via an IP address, simply identifying the IP subscriber solves only part of the puzzle. A plaintiff must allege something more to create a reasonable inference that a subscriber is also an infringer.

In addition to direct infringement, the copyright owner also attempted an indirect infringement claim. They alleged that Gonzales had encouraged users of his network to download the movie but this failed as well since they were unable to provide any proof. Finally, the judge ordered Cobbler Nevada LLC, the copyright holder, to pay more than $17,000 in legal fees for Gonzales.

Cannabis and the adolescent brain: What science knows, and doesn’t know

Jonathan Kropman, 23, shown in a rehab facility in Port Hope, Ont., took his first toke of weed when he was 11 years old. Over the years, constant cravings for cannabis and alcohol eventually led him to harder drugs like cocaine. ‘I still crave it, absolutely,’ he says. ‘I am going to give it my best shot.’

Mark Blinch/The Globe and Mail

Part of cannabis and your kids

On the April morning of 4/20, the “weed holiday” when crowds toke up in public places, Nick wrote a Grade 12 history test on the French Revolution while stoned: “I got an 82,” he says, “which is a pretty good mark.”

He’s offering this detail as evidence that, after three years of experimenting with pot in high school, his memory is intact, his brain is not fried, the life-destroying danger warned of in drug-education assemblies has not, in his experience, come true.

On a Sunday afternoon of the May long weekend, with his parents away, he’s been playing the video game Fortnite in his Toronto-area bedroom, listening to the rapper Rich the Kid, and now, on his smartphone, he’s making the case for why he thinks the adults need to chill out about marijuana. He smoked a joint an hour before the interview, which might explain the mellow amble into the merits of Fortnite. Otherwise, given his rapid-fire recall of pot research, it’s hard to tell.

“Did you know,” he asks, “that you’d have to smoke 1,500 pounds of marijuana for like, 30 minutes, to die? That’s physically impossible.” (I check: He’s quoting a 1988 ruling from the U.S. Drug Enforcement Administration, almost verbatim: The time stated is actually 15 minutes.) Yeah, he says, “I’ve done a lot of reading.”

Nick would prefer his last name not be used – he doesn’t advertise his pot smoking to his parents. When we first spoke a couple of years ago, Nick was conducting his own single-subject marijuana trial. If he binged for a few days, he’d take a week or two off. “To make sure I don’t go overboard,” he said then. Now, in his last semester of high school, he’s smoking more than ever – three or four times a week, mostly on weekends. “I’ve been paying close attention to see if there are any faults in my memory, and the truth is I haven’t seen any.”

In fact, he says, he’s getting the best grades of his high-school career. He’s made the honour roll. Until recently, he was swimming competitively. He’s holding down a part-time job. And he was accepted into both his top choices for university.

As one scientist observed, Nick might be doing even better if he weren’t getting high so often. But for a teenager with a relatively heavy drug habit, he comes across as a successful, functional 17-year-old.

And when it comes to pot research, he’s still a puzzle.

Last year, a committee of experts for the National Academies of Sciences, Engineering, and Medicine in the United States released a detailed analysis of the published studies on marijuana. They looked at more than 10,000 papers, selected the ones they deemed to be of highest quality, and assessed the strength of the science. The report is 500 pages of carefully fudged conclusions that highlight the gaping holes in cannabis research. For instance: Does pot cause lung cancer? “There is moderate evidence of no statistical association,” the reports says. Does it make you more likely to die prematurely, or from an overdose? “There is no or insufficient evidence to refute or support a statistical association.” On the possible long-term consequences that cause parents the most worry – pot’s association with psychosis, brain damage and addiction – the report also offers more caveats than conclusions.

The studies that have rolled out since have been equally confounding. A meta-analysis published in April in JAMA Psychiatry, for example, found that once teenaged pot users went off the drug – even for as little as 72 hours – their scores on cognitive tests were not significantly different from non-drug users.

Other studies have also found the “stoner” deficit to be temporary. Scientists believe that the adolescent brain is at higher risk because it’s developing rapidly in the same areas affected by the “high” from pot. But brain scans showing structural changes are often based on smaller samples of chronic, frequent users. Since they don’t have a before-picture of those same brains, researchers can’t say for certain whether marijuana is the cause, or a consequence that happens along the way. James MacKillop, co-director of the Michael G. DeGroote Centre for Medicinal Cannabis Research at McMaster University, compares the problem to a crime-scene photograph. “Pot may be present, but is it the culprit?”

One issue is that the subjects in studies tend to be people using a lot of marijuana for a long time. But it’s much more common for teenagers to dabble in pot for a while, and wean themselves when adult responsibilities arrive. And researchers know even less about this second group.

As Canada heads into its own real-world pot experiment, the best they can say is that cannabis will likely turn out to be neither devastating for most teens, nor benign for all of them.

In other words, most Nicks will probably end up just fine.

But not everyone is a Nick.

Jonathan Kropman smokes a cigarette at the Port Hope rehab centre. His neck is tattooed with his grandmother’s name.

Mark Blinch/The Globe and Mail

The cautionary tale

A week before Nick’s pot-hazed history exam, Jonathan Kropman, 23, went to rehab. Driving to a treatment facility in Port Hope, Ont., run by the Canadian Centre for Addictions, he made his father stop five times so he could smoke pot on the side of the road. At the centre, he reluctantly parted with his last three joints, after his dad persuaded him not to sneak them in.

It was a long journey to this point. In the spring, Mr. Kropman flew home a week early from a Mexican vacation with his family – booking the flight without his parents’ knowledge – because he felt he couldn’t go another seven days without pot. The previous April, he had woken up in a jail cell at Toronto’s Old City Hall with a foggy memory of the night before. He needed to be told that, while intoxicated and smoking weed at the corner of Yonge and Dundas, he’d swung a punch at the police officer who stopped to question him. This wasn’t his first pot-induced trouble – in Grade 8, he was suspended from school for possessing a joint. But it was adding up: “I really have to change my life,” he decided. For one thing, he said later, from rehab, “no woman is going to want to be with an addict.”

Mr. Kropman took his first haul on a joint in a playground after school in Grade 6, when he was 11 years old. This alone makes him an outlier: According to a 2017 Ontario student drug survey, only about 2 per cent of teens have tried pot by Grade 8, a number that rises to 37 per cent in Grade 12. But Mr. Kropman played sports with older kids, and weed was always around. “It was the cool thing to hang out with the bad crowd. I didn’t know what I was getting into,” he recalls. “From the first toke, I knew – I was addicted to the feeling. It was euphoric, an escape.”

Even before then, Mr. Kropman says he was “definitely a trouble-maker” and a risk-taker. He was sent home in Grade 4 for fighting in school. He had a special worker who sat with him in class to help him with work and monitor his behaviour. By junior high, he was using pot all the time, and drinking as well, although he preferred weed to alcohol. Pot, he explains, “is a lot easier to hide. You just put [eye drops] in your eyes.”

The warnings that he was causing damage to his brain, he says, “went in one ear and out the other.” His mother was firm: She didn’t want him using. His father was less strict – growing up, Mr. Kropman says, his dad also smoked a lot of weed. As they became more aware of his habit, they tried to talk him into cutting back: “I wasn’t listening.”

Instead, his drug use escalated – he got weed from friends with medical marijuana licences and, eventually, his own prescription, he says, given to him for help with sleeping. He couldn’t cope without a ready supply. Working construction, after high school, he smoked weed to stay focused during the tedium of painting houses. Eventually, he was drinking, smoking and, finally, using cocaine, which he tried one night when he was stumbling drunk. (Pot’s reputation as a gateway drug was also cast into doubt by the National Academies report, which stated that the examined studies “did not provide compelling evidence.”) Mr. Kropman insists it was alcohol – not cannabis – that paved his path to the hard stuff; stoned, he says, he would have been too nervous to try it.

At the time we speak, Mr. Kropman had completed more than a month of treatment, cold turkey. “This place definitely saved my life.” The physical cravings are gone, he explained, but the mental struggle is just beginning. “I still crave it, absolutely.” On a Saturday in May, he left rehab for a sober-living house, where he can stay for three months as long as he passes regular drug tests. As of this Thursday, he’s been “clean and sober” for 78 days. “I am going to give it my best shot.”

Of all the pot-induced problems that adults worry about, addiction should be at the top of the list, says Jean-Sébastien Fallu, a psychologist at the University of Montreal who has edited a book on cannabis research coming out this fall. About 9 per cent of adults who start smoking pot will become addicted, Mr. Fallu says. Among teenagers, he says, the rate is 16 per cent – nine times higher, for instance, than the risk of a pot-smoking teenager developing schizophrenia.

’The place definitely saved my life,’ Mr. Kropman says of his time in rehabilitation.

Mark Blinch/The Globe and Mail

Detective work

So why is Nick on the honour roll, and Jonathan going through rehab?

That’s the real detective work required to unravel the role that pot plays in Mr. MacKillop’s crime-scene photo. Are teens who start using cannabis at young ages different from their peers? How much do certain risk factors matter – an impulsive personality, a deviant peer group, childhood trauma – and in what combination? Is pot a supporting character in a tale of unfortunate events, or the villain that turns the tide?

Risk factors appear to be at least a partial explanation for many of the problems associated with cannabis. Teenagers who begin using early, and increase their use over time – as Mr. Kropman did – are more likely to become addicted; having a history of addiction in the family is also a contributing factor.

In some studies, accounting for risk factors significantly reduces – and even eliminates – the negative consequences associated with cannabis. A 2012 longitudinal study – one of the most well-regarded cannabis studies, and hence often cited as proof of the drug’s negative side effects – found that IQ fell among long-term, heavy users. But twin studies that compared pot-using teens with their non-using siblings didn’t find cognitive differences. When researchers adjust for criteria such as family background, parental education, intelligence and personality, the link between negative consequences, such as high-school dropouts and poor grades, often collapses. How often teenagers use pot also appears to be important: a June 2017 study found that young people who used twice a week or less performed as well as – and even slightly better than – non-users on tests measuring memory and executive control. And the National Academies report found “limited to no evidence” that, once people stopped using marijuana, there were sustained effects on learning, memory or attention.

“There is a nuance to the findings that need to be appreciated,” says Cobb Scott, a psychology professor at the University of Pennsylvania and lead author of the June 2017 study, who studies the effect of dose and abstinence on cognitive skills. “Our research suggests that these effects may be smaller than people were worried about and it’s possible they don’t reflect long-term damage to brain networks that support cognitive functioning.”

If damage does occur, however, the research suggests that early exposure is likely a key factor. A University of Montreal study published last December that tracked boys in Montreal from age six into their 30s found that, among those who smoked pot after age 17, there was no overall difference in cognitive skills – intelligence, executive function, decision-making – compared with their peers who hadn’t smoked. Those boys who started using cannabis before age 14 performed more poorly, especially in verbal cognitive tests. But even so, says Natalie Castellanos-Ryan, a psychology professor who co-authored the Montreal study, “we can’t assume it is the toxic effect of cannabis.” In many cases, other high risk factors were also present before marijuana, including poor grades and delinquent behaviour.

And even when studies do find a statistically significant negative association with cannabis use, Ms. Castellanos-Ryan says that the actual effect on an individual’s life may be small. Bottom line, she concludes: “There is a lot of missing research, and many of us are trying to fill in the gaps.”

One area where the science appears to be closer to an answer is the association between cannabis and schizophrenia. Rather than cause the serious mental illness, Ms. Castellanos-Ryan says, current research suggests that marijuana works as “a trigger” for young people already predisposed to psychosis. (In one study of pot users, for example, only those with a family history of schizophrenia developed the disease.)

It is still not clear whether not using cannabis would have prevented the disease in high-risk individuals. But those teenaged exposures appear to be key: Steven Laviolette, a biologist at the University of Western Ontario, says that while dosing juvenile rats with THC produces memory issues and psychotic behaviour, those changes are not seen in adult rats given the same dose.

Since schizophrenia develops during adolescence, and teenagers may not know their family history or whether they have a genetic sensitivity, Mr. Laviolette says it’s another argument for waiting until adulthood to use cannabis. “We haven’t figured it out yet,” he said. “But we can conclusively say it is a risky thing to do.”

Pot smokers are complicated study subjects – many of the same teens also drink alcohol and smoke cigarettes, both of which may bring even more significant harms than cannabis. (The National Academies report, for instance, found stronger evidence for tobacco as the first gateway drug than pot.) Comparisons between studies are tricky, since researchers may not ask about the same risk factors, or record data the same way.

Most studies also rely on self-reports, which aren’t particularly reliable, even for subjects not doing drugs. Aside from being present in real time, researchers can’t determine the potency of pot being ingested, or precisely how much.

Yet, one of the most important outstanding questions is whether stronger versions of cannabis – as well as different ways of using it – will produce different levels of harm. Today’s street pot is not Woodstock weed. The amount of THC, the mind-altering chemical in marijuana, has roughly tripled since the 1970s, and new concentrated versions, known by names such as shatter, wax and budder, may be 80-per-cent THC, or more than six times the level of the modern-day plant version.

“We certainly do not have strong evidence that low frequency use is associated with a lot of negative outcomes,” says Mr. MacKillop of the DeGroote Centre. But at the same time, he adds: “What was somewhat a murky science to start with is already playing catch-up to the brave new world of cannabis consumption.”

Science may eventually conclude that what makes some teenagers more susceptible to negative outcomes when they use cannabis is a complicated mix of interwoven conditions and life circumstances, both inherited and acquired.

“That is not an unlikely scenario,” says Susan Weiss, a researcher at the National Institute on Drug Abuse in the United States. “But it doesn’t mean that cannabis isn’t adding to the problem.”

Ms. Weiss is part of a groundbreaking, new study trying to get closer to the answer by following more than 10,000 American nine- and 10-year-olds through their teenage years, using regular cognitive tests, psychosocial surveys and brain scans to capture a before-and-after picture of drug use. But it won’t have findings for many years, long after pot is legal in Canada.

So while scientists are figuring things out, how do Canadian parents talk so their teens will listen?

High on the honour roll

“I figure live fast, die young,” explained Alex, a 17-year-old in St. Catharines, Ont., who smokes pot, he says, five or six times a week but, as with Nick, can also boast about being on the honour roll, with a university acceptance letter.

When it comes to the risk of marijuana, he says, “I worry about it the same way that you worry about eating at McDonald’s. It might give you a heart attack or make you fat, but people eat at McDonald’s all the time because it tastes so good.”

Would Alex really want to eat Big Macs every day? “No, probably not,” he admits. But it’s not a bad analogy. Even with the science still to be determined, the researchers interviewed for this story were unanimous in their best advice: Teenagers should wait until their brains are more developed (ideally, age 25), use infrequently, take breaks and avoid weed with high levels of THC. Adolescents with known risk factors – such as a history of schizophrenia or addiction in the family – should take special care. At the very least, like fast food, marijuana is safest consumed in small, spare doses. In the best-case scenario, researchers agreed, teenagers would never use pot.

On the positive side, says Mr. Fallu in Montreal, the fear that legalization will launch a tsunami of teenaged potheads has been grossly overstated, according to existing research. That’s not happened to date in states such as Washington and Colorado, where recreational cannabis has already been legalized.

In a 2017 Ontario student drug survey of more than 11,000 teens, 10 per cent of high-school students said they intended to try pot after it was legalized, and 4.9 per cent said they planned to use more often. By comparison, 53 per cent said they didn’t plan to use it either way.

Canadian teens already use pot at relatively high rates – a 2014 report by the World Health Organization placed Canada second behind France for 15-year-olds who said they’d used pot in the past 30 days. (Unlike in many other countries, girls use at roughly the same rates as boys.) But the Ontario student study also found that overall drug use among teenagers – including cannabis – was the lowest recorded since the survey began in 1977, especially among younger grades.

And it’s not as if weed is hard to find now, as even teenagers not using pot told The Globe and Mail. Ellen, an Alberta high-school student, said she’d only have to wander across the street at lunch hour to the stoner kids smoking outside the 7-Eleven.

As legalization approaches, researchers such as Jean-Sébastien Fallu are calling for balance, so adults don’t discredit themselves with teenagers, who check everything on Google anyway. It is particularly frustrating, Mr. Fallu says, that parents often exaggerate the risks of cannabis, while minimizing the danger of alcohol, even though there’s research to suggest that binge drinking, in particular, may be even more harmful to physical health and the teenage brain.

Mr. Fallu suggests that a rational, science-based message about cannabis should present the risks, admit what is yet unknown and make clear that using drugs when you’re young is a gamble with potentially high stakes.

“Human beings want to have black-and-white facts, but the truth is rarely that clear, and not just in drug use,” he says. “There is uncertainty, but we can educate around that.”

1. There is a lot scientists don’t know. But researchers believe that starting before age 15 and using heavily through your teenage years is associated with the highest risk of harm. The government is legalizing marijuana not because it is completely safe but to regulate it and, ideally, make the street version of the drug less available to young people.

2. If you are going to use, take breaks. Pot may not be as addictive as other drugs, but the risk of addiction is still higher for teenagers. Taking a break is also good for your developing brain (and your lungs), and a way to see how marijuana might be negatively affecting your life and your relationships.

3. Just like alcohol, there are different potencies of pot. Again, it’s not certain, but the science suggests that higher levels of THC, the mind-altering chemical in the drug, may cause more damage to a young brain. The highest level of THC – as high as 80 per cent – is found in butane hash oil extractions, called shatter, wax or budder.

4. Be careful with edibles, which are the main cause of emergency-room visits for pot. All the THC in that pot brownie will be absorbed into your body. And the effect takes longer, so people may take too much before they realize it.

A cupcake "edible" is shown at a stall at a pop-up event in Toronto.

Chris Young/The Canadian Press

5. Marijuana may not cause schizophrenia but scientists believe it is a potential trigger for teens with risk factors, such as family history. If you have a severe reaction to pot – hallucinations, for instance, or frightening paranoia – that may be a warning sign.

6. Some scientists will say that alcohol is worse than cannabis. But the harms of each drug are different. Using any drug when you’re young is more risky than when you’re an adult. Using them together appears to increase those risks.

7. Don’t drive if you have used marijuana, especially if you have been drinking too. Some people may say they drive better stoned, but research suggests otherwise – when people are high they react more slowly and think less clearly. Researchers who analyzed the existing data found a higher risk of car accidents when people drove while stoned.

8. Be skeptical of any headline that suggests a study has found the answer. Science gets misused on both sides, and legalization has also given cannabis companies a vested interest in overselling the drug’s positive effects.

9. Studies usually group a lot of people together to produce average findings. How each individual reacts to marijuana may be very different. But one thing is true for everyone: Your brain is developing until about age 25. Just like getting good sleep and eating well, avoiding alcohol and marijuana as much and for as long as possible helps ensure you’ll get the best one you can.

Erin Anderssen
