
A US university is tracking students’ locations to predict future dropouts


At the University of Arizona, school officials know when students are going to drop out before they do.

The public college in Tucson has been quietly collecting data on its first-year students’ ID card swipes around campus for the last few years. The ID cards are given to every enrolled student and can be used at nearly 700 campus locations including vending machines, libraries, labs, residence halls, the student union center, and movie theaters.

They also have an embedded sensor that can be used to track geographic history whenever the card is swiped. These data are fed into an analytics system that finds “highly accurate indicators” of potential dropouts, according to a press release last week from the university. “By getting [students’] digital traces, you can explore their patterns of movement, behavior and interactions, and that tells you a great deal about them,” Sudha Ram, a professor of management systems and director of the program, said in the release. “It’s really not designed to track their social interactions, but you can, because you have a timestamp and location information,” Ram added.

For an example of how granular those data points can get, take Ram’s explanation of social tracking: “There are several quantitative measures you can extract from these networks, like the size of [students’] social circle, and we can analyze changes in these networks to see if their social circle is shrinking or growing, and if the strength of their connections is increasing or decreasing over time.”

The University of Arizona currently generates lists of likely dropouts from 800 data points, which do not yet include Ram’s research but include details like demographic information and financial aid activity. Those lists, made several times a year, are shared with college advisers so they can intervene before it’s too late. The school says the lists are 73% accurate and Ram’s research yields 85% to 90% accuracy, though it did not give details on how those rates are measured.

The University of Arizona freshman retention rate jumped from 80.5% to 83.3% last year—so its so-called “Smart Campus” project appears to be useful. But as Gizmodo points out, algorithms are not free from bias, and relying on these sorts of predictive tools can create major blind spots.

At the end of the day, universities are businesses trying to retain customers. Other schools also keep tabs on their students’ activities—some, as Quartz found in 2015, even track the online footprints of prospective students—but the fact that University of Arizona students are not asked to opt into the project when signing up for their ID cards makes this different from, say, a person knowingly signing up for an account on Amazon or Google. The university will have to contend with these issues and others before it expands the program, particularly if it aims to use Smart Campus to actively make decisions about students instead of just spitting out predictive lists.




Deep Quaternion Networks [pdf]

Abstract: The field of deep learning has seen significant advancement in recent years. However, much of the existing work has been focused on real-valued numbers. Recent work has shown that a deep learning system using the complex numbers can be deeper for a fixed parameter budget compared to its real-valued counterpart. In this work, we explore the benefits of generalizing one step further into the hyper-complex numbers, quaternions specifically, and provide the architecture components needed to build deep quaternion networks. We go over quaternion convolutions, present a quaternion weight initialization scheme, and present algorithms for quaternion batch-normalization. These pieces are tested in a classification model by end-to-end training on the CIFAR-10 and CIFAR-100 data sets and a segmentation model by end-to-end training on the KITTI Road Segmentation data set. The quaternion networks show improved convergence compared to real-valued and complex-valued networks, especially on the segmentation task.
Comments: 8 pages, 1 figure
Subjects: Neural and Evolutionary Computing (cs.NE); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:1712.04604 [cs.NE] (or arXiv:1712.04604v2 [cs.NE] for this version)
From: Chase Gaudet [view email]
[v1] Wed, 13 Dec 2017 04:19:24 GMT (117kb,D)
[v2] Tue, 30 Jan 2018 16:08:56 GMT (117kb,D)
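
The core operation behind the quaternion convolutions the abstract mentions is the Hamilton product. As a quick illustration, here is a minimal NumPy sketch of that product; this is standard quaternion algebra, not the authors' code:

import numpy as np

def hamilton_product(q, p):
    # q and p are quaternions given as (r, x, y, z) components.
    # Quaternion layers replace the plain real multiply in a convolution
    # with this product between quaternion-valued weights and inputs.
    r1, x1, y1, z1 = q
    r2, x2, y2, z2 = p
    return np.array([
        r1 * r2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2,  # i component
        r1 * y2 - x1 * z2 + y1 * r2 + z1 * x2,  # j component
        r1 * z2 + x1 * y2 - y1 * x2 + z1 * r2,  # k component
    ])

# Sanity check: i * j = k
print(hamilton_product((0, 1, 0, 0), (0, 0, 1, 0)))  # -> [0. 0. 0. 1.]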

Making a TTS model with 1 minute of speech samples within 10 minutes



Seeing my implementations of Tacotron and DCTTS, many people have asked me "How large a speech dataset is needed for neural TTS?" or "Can you make a TTS model with X hour(s)/minute(s) of training data?" I'm fully aware of the importance of those questions. When you plan a service using TTS, it is not always feasible to collect lots of speech samples. I would like to give an answer. I really do. But unfortunately I have no answer. The only thing I know is that I could train a model successfully with five hours of speech samples I extracted from Kate Winslet's audiobook. I haven't tried less data than that. I could try it, but actually I have a better idea. Since I have a decent model trained for several days on the LJ Speech Dataset, why don't I use it? After all, we all have different voices, but the way we speak English is not totally different.

In the above two repos, I trained TTS models using all the speech samples of my two favorite celebrities, Nick Offerman and Kate Winslet, from scratch. This time, I use only one minute of speech samples. The following are the samples synthesized after 10 minutes of fine-tuning. Do you think they sound like them?
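
For readers wondering what fine-tuning means mechanically, the idea is to load weights pretrained on a large corpus and continue training briefly, at a small learning rate, on the tiny target-speaker set. Below is a minimal PyTorch sketch of that warm-start pattern (the repos themselves are TensorFlow-based; the model, checkpoint path, and data here are hypothetical stand-ins):

import torch
import torch.nn as nn

# TinyTTS is a hypothetical stand-in for a real Tacotron-style network,
# just to make the warm-start pattern concrete and runnable.
class TinyTTS(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(80, 128, batch_first=True)
        self.decoder = nn.Linear(128, 80)

    def forward(self, mels):
        h, _ = self.encoder(mels)
        return self.decoder(h)

model = TinyTTS()

# 1) Warm start: load weights pretrained on a big corpus such as LJ Speech.
#    (The checkpoint path is hypothetical.)
# model.load_state_dict(torch.load("pretrained_ljspeech.pt"))

# 2) Fine-tune briefly on the tiny target-speaker set with a small LR.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()
tiny_batch = torch.randn(4, 100, 80)  # stand-in for ~1 minute of mel frames
for step in range(300):               # minutes of training, not days
    pred = model(tiny_batch)
    loss = loss_fn(pred, tiny_batch)  # real target mels in practice
    opt.zero_grad()
    loss.backward()
    opt.step()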

Check here to see the model details, source code and the pretrained model which served as a seed.

An Open Source Tool to analyze wasted EBS capacity in your AWS environment


While AWS CloudWatch provides many useful storage statistics by default, such as the number of bytes written or read during a period of time and reads/writes per second, a common metric customers want to know about is disk utilization. How much of the provisioned EBS capacity is unused, resulting in wasted spending, is a question asked often.

As it’s very difficult to know how much storage an application will need over its lifetime, most people tend to over-provision their storage as an insurance policy.  In the AWS cloud, however, you pay for what you provision, not for what you actually use.  While we at FittedCloud solve this problem with our EBS Optimizer (which automatically and transparently provisions storage capacity for you) you may just want to simply know how much EBS space you are actually using.  Having this knowledge at your fingertips is helpful for cloud cost monitoring and cloud cost optimization.

Current approaches to determining your disk utilization involve running scripts on each instance that submit custom metrics to CloudWatch (a minimal sketch of such a script follows the list below). This process involves:

  • Deploying scripts on each of your instances
  • Submitting custom metrics to CloudWatch on each instance
  • Configuring cron jobs to run the script periodically on each instance
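
For concreteness, here is a minimal sketch of what such a per-instance script might look like, using the standard library's shutil.disk_usage and boto3's put_metric_data. The namespace and dimension names are arbitrary choices for illustration, not anything a particular tool requires:

import shutil

import boto3

def report_disk_utilization(instance_id, path="/"):
    # Measure local disk usage with the standard library...
    usage = shutil.disk_usage(path)
    percent_used = 100.0 * usage.used / usage.total
    # ...and push it to CloudWatch as a custom metric. Assumes AWS
    # credentials and a default region are already configured.
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="Custom/Disk",  # arbitrary custom namespace
        MetricData=[{
            "MetricName": "DiskUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Value": percent_used,
            "Unit": "Percent",
        }],
    )

report_disk_utilization("i-0123456789abcdef0")  # example instance ID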

Alternatively, you could use ssh to run commands remotely on Linux instances; however, this approach would require juggling key files and installing ssh servers on Windows instances.

At FittedCloud, we wanted a simpler, generic solution.

I’m pleased to announce the release of an open-source tool that you can use to fetch information about your disk utilization.  The script requires that Amazon’s Systems Manager (SSM) Agent is installed on your EC2 instances.  Among other things, the SSM Agent allows you to run shell commands remotely on your EC2 instances.

To use this tool:

1. Install AWS SSM Agent on the instances you wish to query.

If your instances are running Windows or Amazon Linux, SSM Agent is installed automatically so you can skip this step.  Otherwise, the instructions for installing the SSM Agent can be found here: https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html.

2. Enable IAM permission to run commands through SSM on your instances.

This can be done in the EC2 Management Console: select an EC2 instance, then under the Actions dropdown select Instance Settings > Attach/Replace IAM Role, add the RunCommand role, and click Apply.

3. Enable IAM permission for user to use SSM through the IAM Management Console.

Add the Actions SendCommand and GetCommandInvocations for the SSM Service for an individual user, or alternatively, create a new Role with SSM permissions (e.g., AmazonSSMFullAccess) and assign it to a user.
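
With the agent and permissions in place, the RunCommand flow the tool builds on looks roughly like this in boto3. The instance ID, region, and df invocation are illustrative; the actual script's internals may differ:

import time

import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # example instance ID
ssm = boto3.client("ssm", region_name="us-east-1")  # example region

# Ask SSM to run df on the instance via the built-in RunShellScript document.
resp = ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["df -k /"]},
)
command_id = resp["Command"]["CommandId"]

# Give the invocation a moment to register, then poll until it finishes.
time.sleep(2)
while True:
    inv = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE_ID)
    if inv["Status"] not in ("Pending", "InProgress"):
        break
    time.sleep(1)

print(inv["Status"])
print(inv["StandardOutputContent"])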

At the end of the day, using our new open-source tool means that you don’t have to write or install any third-party scripts on your instances or configure cron jobs to submit custom metric information to CloudWatch.  Furthermore, you don’t have to write any additional scripts to query CloudWatch for the new metrics.

You can download the open-source utility here: https://github.com/FittedCloud/diskanalyzer

Running diskanalyzer.py with the --help option will display its usage:

To simply query all of your instances to get their disk utilization, run:

$ python2 diskanalyzer.py -a <access-key> -s <secret-key>

Optionally, you may specify a region or comma-separated list of regions to limit the queries.  The output looks as follows:

You may also output in json format with the -j option, which includes additional fields such as the timestamp.

FittedCloud Cloud Cost Optimization Solutions
FittedCloud offers machine learning based cloud cost optimization solutions that help customers reduce AWS spend significantly. Our current solution includes machine learning driven actionable advisories with click through actions for EC2, EBS, RDS, DynamoDB, ElastiCache, ElasticSearch, AutoScale, Lambda, etc. and full/lights out automation for EC2, EBS, DynamoDB and RDS. Our solution typically can save customers up to 50% of their cost on AWS. For a 14-day Premium free trial, please visit https://www.fittedcloud.com/register-now/.

KotCity – an open source city simulation game written in Kotlin




WARNING!

This is pre-alpha software with super obvious bugs, rough edges, etc. In the spirit of "release early and release often", I am posting the code. This project is far from done, but I prefer to get the code out there to be used by whomever. Disagree with the project? Fork it :).

New in this release

  • Fire stations and fire coverage (by @sabieber)
  • Improvements to path finding
  • Tweaks to economy

KotCity Screenshot

What is KotCity?

KotCity is a city simulator written in Kotlin inspired by the statistical city simulators of old. This game aims to achieve a mark somewhere between SimCity (1989) and SC2000. Hopefully this mark will be hit and we can set our sights higher. The game will be fully supported on Windows, macOS, and Linux.

Gimme the Software!

A build for Windows, macOS and Linux is available at https://github.com/kotcity/kotcity/releases/tag/0.43.

Java 8+ is required. On Windows, the game will look for a JRE and take you to the download page if you don't have one.

(note, on Ubuntu do apt-get install openjfx)

Quick Start

Be on the lookout for a super-easy-to-install package soon... until then...

  • Install JDK 1.8+.
  • Clone the project.
  • Run Gradle using ./gradlew run.

Setting up the development environment is easy. You can use IntelliJ or any other IDE supporting Gradle; just import this as a Gradle project. Voilà, the project can be worked on.

The UI is done with FXML created with Gluon's SceneBuilder.

FAQ

Q: Why 2D?
A: This project is a lot of work already without having to worry about 3D modeling and so forth. One of my bottlenecks is art, so 2D is an easy way to sidestep that concern. Additionally, the actual "renderers" for the game are kept semi-separate from the simulation, so there's no reason why this couldn't turn into 3D later.

Q: Why Kotlin?
A: It has a lot of libraries (pick any random Java library...). It's pretty productive! Gee-whiz functional stuff baked in. Besides, if this project gets to a place where it's really awesome but just needs extra speed, we can reach for that C++ or Rust book.

Why Another City Simulator?

After many years of not seeing any satisfactory new city builders, I decided to take matters into my own hands. Why? SimCity (2013) was REALLY disappointing. Cities: Skylines is fun, but it doesn't seem to scratch the itch that SimCity 4 does. Even though there are still patches and new content coming out for SimCity 4, it's definitely on life support. I looked around at a few of the city simulators available, but it doesn't seem like anyone is really working on a modern version of SimCity.

Community

If you get stuck or want to make suggestions, you can discuss them in our topic on Simtropolis. Chat with the developers at https://gitter.im/kotcity/Lobby.

Contribution

You can contribute buildings (see the assets directory), ideas for the game, art, and so on by creating issues, or fork the repo and start making pull requests.

By submitting work to the repository, you agree that your work must be compatible with the Apache License 2.0 and the assets license (undecided for now).

Current Status

  • GPU accelerated graphics.
  • Map generation (simplex noise based).
  • "Perfect" A* pathfinding.
  • Zoning of residential, commercial, industrial, similar to SimCity.
  • Moddable buildings.
  • City saving and loading.
  • Data overlays for traffic, desirability, natural resources.
  • Multi-threaded engine that allows for speedy traffic / city calculations.
  • As-you-want map size (Can your PC handle 100km^2? Go for it!).
  • Power plants and coverage.
  • Dynamic economy where goods, services and labor are exchanged.
  • Happiness (available in separate branch).
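
As referenced in the pathfinding bullet above, here is a minimal grid-based A* sketch showing the idea. It is in Python for brevity (KotCity itself is Kotlin), and is a generic illustration of the algorithm, not KotCity's actual implementation:

import heapq

def a_star(grid, start, goal):
    # grid[y][x] == 0 is walkable, 1 is blocked; positions are (x, y).
    # Manhattan distance is an admissible heuristic on a 4-connected grid.
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f, g, position)
    came_from = {}
    g = {start: 0}
    while open_heap:
        _, _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:     # walk parents back to the start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                ng = g[cur] + 1
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)] = ng
                    came_from[(nx, ny)] = cur
                    heapq.heappush(open_heap, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (0, 2)))  # routes around the blocked middle row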

Future Plans

To make a game that "takes over" from SimCity 4. We have a loooooong way to go:

  • Implement land values.
  • Have traffic affect desirability.
  • Bus and rail and stations.
  • More types of power plant (hydro, wind, etc).
  • Create buildings that use resources under the ground (coal, etc).
  • Implement "module upgrade" system from SimCity 2013 (upgrades to power plants etc).
  • Improve graphics.
  • Obtain sound effects / music.
  • Add many, many additional types of buildings.
  • Add "mod manager" (think Steam workshop... SC4 has many mods but they really suck to obtain/install).

License

This project is licensed with Apache License 2.0. Assets license currently undecided. This project includes icons by various graphic designers from the Noun Project.

Complex Human Cultures Are Older Than Scientists Thought


When Rick Potts started digging at Olorgesailie, the now-dry basin of an ancient Kenyan lake, he figured that it would take three years to find everything there was to find. That was in 1985, and Potts is now leading his fourth decade of excavation. It’s a good thing he stayed. In recent years, his team has uncovered a series of unexpected finds, which suggest that human behavior and culture became incredibly sophisticated well before anyone suspected—almost at the very dawn of our species, Homo sapiens.

The team found obsidian tools that came from sources dozens of miles away—a sign of long-distance trade networks. They found lumps of black and red rock that had been processed to create pigments—a sign of symbolic thought and representation. They found carefully crafted stone tools that are indicative of the period known as the Middle Stone Age; that period was thought to have started around 280,000 years ago, but the Olorgesailie tools are between 305,000 and 320,000 years old.

Collectively, these finds speak to one of the most important questions in human evolution: When did anatomically modern people, with big brains and bipedal stances, become behaviorally modern, with symbolic art, advanced tools, and a culture that built on itself? Scientists used to believe that the latter milestone arrived well after the former, when our species migrated into Europe between 50,000 and 60,000 years ago, and went through a “creative explosion” that produced the evocative cave art of Lascaux and Chauvet. But this conspicuously Eurocentric idea has been overturned by a wealth of evidence showing a much earlier origin for modern human behavior—in Africa, the continent of our birth.

The new discoveries at Olorgesailie push things back even further. They suggest that many of our most important qualities—long-term planning, long-distance exploration, large social networks, symbolic representation, and innovative technology—were already in place 20,000 to 40,000 years earlier than believed. That coincides with the age of the earliest known human fossils, recently found elsewhere in Africa. “What we’re seeing in Olorgesailie is right at the root of Homo sapiens,” Potts says. “It seems that this package of cognitive and social behaviors were there from the outset.”

“They demonstrate human ways of thinking and doing that cannot be traced easily in the remains of our skeletons or genes,” says Marlize Lombard, an archeologist at the University of Johannesburg. “They provide strong indicators that by about 300,000 years ago we were well on our way to become modern humans in Africa.”

It’s a “textbook example of good archaeological practice,” adds Lyn Wadley from the University of Witwatersrand.

For the longest time, most of the tools that were uncovered at Olorgesailie were Acheulean handaxes—large, teardrop-shaped tools made by chipping away at cores of stone. Hominids like Homo erectus used these implements to butcher meat and cut wood. At Olorgesailie, they started doing this 1.2 million years ago, and continued until at least 500,000 years ago. And during all that time, the basic design of the axes changed very little. In an age where the phones in our pockets can become obsolete in a year, “the idea of a single technology lasting that long is almost inconceivable,” says Potts.

Acheulean hand axes did eventually go obsolete, giving way to the tools of the Middle Stone Age. These were smaller, more carefully shaped, more specialized, and more varied. Instead of just bulky axes and cleavers, they also included spear tips, scrapers, and awls. Potts’ team started finding these at Olorgesailie in the early 2000s, and Alan Deino from the Berkeley Geochronology Center worked out how old they are by analyzing levels of radioactive isotopes of argon and uranium in the samples. He concluded that these tools had completely replaced the Acheulean designs by at least 305,000 years ago.

Many of the tools were made from a black volcanic rock called obsidian, which was brought to the site and processed there. But from where? There aren’t any obsidian outcrops near Olorgesailie. The chemistry of the tools suggests that they came from sources up to 100 kilometers away. But “these are straight-line distances that, in some cases, go over the top of a mountain,” says Alison Brooks from George Washington University.

It’s unlikely that the residents of Olorgesailie regularly commuted to get their obsidian. Instead, they probably took part in long-distance trade networks, receiving obsidian from people who lived in distant locales presumably in exchange for other goods. “There’s an occasional piece in the Acheulean that gets transported these distances,” says Brooks. “But we have thousands of pieces in this one site that’s smaller than most people’s kitchens. There has been a really major import of raw materials.” If she’s right, then Olorgesailie’s obsidian network precedes other examples of long-distance trade by 80,000 to 100,000 years.

These networks help to explain another Olorgesailie discovery: colored rocks. One site contains 86 rounded lumps of manganese ore, which would have produced dark brown or black pigments. Another harbored two lumps of iron minerals that had clearly been deliberately ground with some sharp, chiseling tool to extract the red powder within. “Mixed with any kind of fat, or even rubbed on oily skin, it would have made a very wonderful paint,” says Brooks. “Pigments are often seen as the root of complex symbolic behavior,” says Potts. “Think of the way we use color on clothes, flags, and tattoos—all signals of social identity.”

As evidence for symbolic behavior, Brooks would give this a six on a scale of one to 10, where 10 would be something unambiguous like ochre-covered beads or cave art. Still, she notes that the colored rocks, like the obsidian, came from distant sources, which says something about their value. “Why go through all the trouble of importing pigment?” she says. “If you’re thinking about a way to signal at a distance that you’re not an enemy, having something red on your person is a good way to do it.”

“These findings mark a step forward in our understanding of the origin of complex cultures,” says Daniela Rosso from the University of Bordeaux. She notes that archeologists have found evidence of pigment use in sites from France, Kenya, and the Netherlands, all of which have been dated between 250,000 and 300,000 years ago. The Olorgesailie specimens, once again, are even older, and the earliest known examples of clearly worked pigments.

In fact, some of the innovations that the team discovered were evident even earlier. Between 500,000 and 615,000 years ago, Acheulean technology still dominated at Olorgesailie. But there are occasional signs of smaller tools, more sophisticated designs, and materials being imported from long distances. “In the late Acheulean, we see the precursors of what became crystallized in the Middle Stone Age,” says Potts. It’s almost as if humanity already had the capacity for our later leap, but was missing some kind of trigger—something that precipitated a break with hundreds of thousands of years of cultural stagnation. But what?

The animal bones at Olorgesailie provide a clue. When people were still making Acheulean hand-axes, the landscape was dominated by large grazing mammals like elephants and giant baboons. But by the time the Middle Stone Age tools appear, 85 percent of these species have disappeared, and are replaced by smaller ones like springbok. “This shows that there’s something bigger going on than just changes in the hominins,” says Potts. “The hominins are responding to something, as are the rest of the mammals.”

By analyzing the sediments at the site, Potts found that its cultural shifts took place at a time of—quite literally—great upheaval. Around 500,000 years ago, the relatively stable lake basin at Olorgesailie turned into an etch-a-sketch landscape that was continuously remodeled by earthquakes. By 360,000 years ago, the climate had become incredibly unstable, with big swings between dry and wet seasons, and large changes in the layout of rivers, lakes, and floodplains.

Perhaps it’s no coincidence that this was the world in which modern human adaptability arose—one of unpredictable weather and unreliable resources. Brooks has shown that modern hunter-gatherers also build larger networks in times of environmental turmoil. “It spreads the risk over a much wider landscape,” she says. “There’s no other way they can save for a future disaster. They don’t have crops or animals. They have friends. It’s part of a human way of life.”

“In view of this, the movement of stone and pigments could indicate increased interaction with immediately surrounding groups,” says Polly Weissner, an anthropologist at the University of Utah. And perhaps these broader networks ignited the development of new technology, allowing the culture to ratchet up from the longstanding Acheulean tradition into something more advanced.

“Could these sorts of behaviors been the leading edge of the origin of our species?” asks Potts. The old view says that Homo sapiens evolved complex cultures a long time after becoming a distinct species. But given the antiquity of the Olorgesailie finds, Potts now wonders if the emergence of complex behavior was “really the thing that distinguished the earliest members of our gene pool from other hominins?” Brooks agrees. “There was the argument that Homo sapiens came along and then developed all these things,” she says. “But now it seems that the behavior and the morphology came along together. Maybe the behavior even came first.”

“To my knowledge, the Olorgesailie studies document the context at the dawn of our species in much greater detail than any other early Middle Stone Age occurrence on the continent,” says Yonatan Sahle from Tubingen University. But he cautions that the emergence of the Middle Stone Age “was neither straightforward nor uniform across space and time.” In different parts of Africa, it varies in when it first appears, how much it overlaps with the older Acheulean tech, and whether it occurs together with Homo sapiens fossils.

Indeed, “there’s a tendency for archeologists to say that every important thing happened at my site,” says Potts, “and I don’t want to mislead and say that the Middle Stone Age originated in Olorgesailie. What’s going on there is simply representative of these changes in behavior.”

For now, Olorgesailie has nothing to say about a crucial window of time between 320,000 and 500,000 years ago. Earthquakes and erosion have destroyed the artifacts from this period, which is also when the local fauna and climate changed so radically, and when the local hominins started behaving differently. “It’s that period everyone should be looking at,” Potts says. “And we have data coming that fills in that gap.”

The town of Gujo Hachiman is the centre of Japan's replica food industry


With a gentle swish through hot water, and some deft tearing and shaping, Kurumi Kono turns a rectangular sheet of white and green wax into what, improbably, is quickly coming to resemble an iceberg lettuce.

Kono makes it look deceptively easy. “Place it in your hands and pull out the edges like this,” she says. “Then roll the remaining wax into a ball to make a small lettuce. Place it onto your hand and, starting from the back, fold the bigger leaf towards the middle. Using both hands, gently form a sphere. And there you have it.”

She then drips a yellowy liquid wax from a paper cup into the hot water. Within seconds, it forms a solid coating, in which she encases a “cooked prawn” to produce a flawless piece of tempura. It really does look good enough to eat.

A godsend to foreign tourists who, faced with a Japanese-language menu, can simply point and order, shokuhin sanpuru (food samples) have been tempting diners into Japan’s restaurants for almost a century.


Gujo Hachiman, a picturesque town tucked in the mountains more than three hours west of Tokyo, lays claim to being the home of a replica food industry now worth an estimated $90m.

According to his biography, Flowers of Wax, the father of replica food, Takizo Iwasaki, was inspired by the drops of candle wax that formed on the tatami-mat floor of the home he shared with his wife, Suzu, in Osaka.

After months of perfecting his technique, Iwasaki made Suzu a fake omelette, garnished with tomato sauce, that she initially failed to distinguish from the real thing.

While some artisans had already started making rudimentary food models in the 1920s, Iwasaki pioneered a production method that combined accuracy with volume, and opened a workshop in his hometown of Gujo Hachiman.

His omelette appeared at a department store in Osaka in 1932, and an industry was born.

The more prosaic theory is that the replica food boom grew out of demand by restaurants for models that re-created the increasingly eclectic range of Japanese and foreign dishes that appeared on menus in the postwar period.

“Eating out could be a challenge for some people in those days, so restaurateurs saw display models as a way of putting customers at ease,” says Katsuji Kaneyama, president of Sanpuru Kobo (Sample Kobo), one of several replica food firms in Gujo Hachiman, whose products account for about two-thirds of the domestic market.

“The trick is in striking a balance between realism and aestheticism – the model that looks the most delicious isn’t necessarily the most realistic,” says Kaneyama, whose 10 full-time artists produce as many as 130,000 samples a year, made from durable PVC rather than wax. “And the most realistic models might not look all that tasty.”

In the shop attached to the Sample Kobo workshop, lines of tourists fill baskets with key rings, fridge magnets, USB flash drives, pencil sharpeners and other souvenirs, and try their hand at making fake tempura and lettuces.

Plastic hamburger earrings Photograph: Toru Yamanaka/AFP/Getty Images

The replicas don’t come cheap, however. Some of the more intricate models can cost several hundred dollars, and all items cost more than the dishes they represent.

The artists at Sample Kobo, as at other replica firms, make every item by hand, painting and airbrushing each morsel until it is practically indistinguishable from the real thing.

Kaneyama, though, dismisses concern that the industry will be overtaken by 3D printing technology. “3D printers make a decent product, but it actually takes longer and costs more than you’d think.”

But his main objection is aesthetic. “I can easily tell the difference between a printed model and one that has been painstakingly created by hand,” he said. “There is something about the way a handmade replica looks and feels that I don’t think can be re-created on a 3D printer.”

In the workshop, artists paint seeds on to slices of banana and glue slices of tuna belly to oblongs of rice. On the workbench in front of them is a whimsical ramen presentation, the chopsticks and a mouthful of noodles seemingly suspended in mid-air above the bowl. Shelves heave with an eclectic mix of western and Japanese food, from ayu river fish that appear to have just been pulled from the water to glasses of lager and spider crabs.

Kaneyama believes his artists can reproduce the most obscure items of food demanded by tens of thousands of restaurants across Japan.

And the most difficult to get right? He smiles and answers: “Sushi.”

Breaking a Wine Glass in Python by Detecting the Resonant Frequency


For 50 days straight between January and March of this year, I wrote a creative programming sketch in Python every day.

On day six, I tried to write a program to detect the resonant frequency of a glass, and break it. As you can see below, my desktop computer speaker wasn’t quite loud enough to break the glass alone.

In today’s post, I walk through the journey of writing a Python program to break wine glasses on demand, by detecting their resonant frequency.

Along the way we’ll 3D print a cone, learn about resonant frequencies, and see why I needed an amplifier and compression driver. So, let’s get started.

First, the Inspiration

I first saw the video above, which goes into a lot of detail about how to actually break a wine glass with your voice. A few main points are covered, and Mike Boyd, the creator, quickly touches upon them. He recommends:

  • Having a real crystal glass
  • Preferring a thinner, longer crystal glass
  • Striking the glass to get it to resonate
  • Using a spectrum analyzer to find the resonant frequency of the glass
  • Using a straw to visualize when the glass is resonating

With that in order, it wasn’t until I tried breaking my third glass that I realized the last thing that ensures you really break a glass on command:

  • Having micro abrasions in the glass. It can’t be perfectly new and clean.

Without the micro abrasions (either caused by cleaning the glass with soap and water or a very light sanding), the glass wouldn’t break at all.

An Aside About How Resonance Works

When Mike Boyd breaks the glass in the above video, he’s using the acoustic resonance of the wine glass.

Crystal wine glasses have a natural frequency of vibration that you can hear when you strike them.

Any sounds at this frequency have the potential to make the wine glass absorb more energy than at any other frequency. Our hope is that our program and speakers will generate enough energy in the glass to break it.
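
To make that concrete, a driven, damped harmonic oscillator is a reasonable toy model of the glass rim: its steady-state response spikes sharply at the natural frequency. Here is a minimal sketch with made-up parameter values (not measurements of any real glass):

import numpy as np

# Steady-state amplitude of a driven, damped oscillator, a rough stand-in
# for the glass rim. All parameter values are invented for illustration;
# the point is the sharp peak at the natural frequency f0.
m, f0, zeta = 1e-3, 750.0, 0.001   # mass (kg), natural freq (Hz), damping ratio
w0 = 2 * np.pi * f0
F0 = 1.0                           # drive force amplitude (N)

freqs = np.linspace(600, 900, 3001)  # the 600-900 Hz band discussed later
w = 2 * np.pi * freqs
amplitude = (F0 / m) / np.sqrt((w0**2 - w**2)**2 + (2 * zeta * w0 * w)**2)

peak = freqs[np.argmax(amplitude)]
print("response peaks at %.1f Hz" % peak)  # ~750 Hz, the natural frequency
print("peak/off-peak ratio: %.0fx" % (amplitude.max() / amplitude[0]))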

Writing a Prototype Sketch

After seeing the video and being inspired to try recreating it, I needed to first get a prototype in place.

Would I be able to get a straw to resonate like in that example video?

Luckily, my Voice Controlled Flappy Bird post already does the difficult part of detecting the current pitch frequency from a microphone.
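
For a sense of what that pitch-detection step has to do, here is a minimal sketch that grabs one buffer from the microphone and reports the strongest FFT bin. The voiceController module may well use a different method; this is not its actual code:

import numpy as np
import pyaudio

RATE, CHUNK = 44100, 4096  # ~10.8 Hz frequency resolution per FFT bin

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)

# Grab one buffer, window it, and find the dominant frequency.
samples = np.frombuffer(stream.read(CHUNK), dtype=np.int16).astype(np.float32)
spectrum = np.abs(np.fft.rfft(samples * np.hanning(CHUNK)))
freqs = np.fft.rfftfreq(CHUNK, d=1.0 / RATE)
print("dominant frequency: %.1f Hz" % freqs[np.argmax(spectrum)])

stream.stop_stream()
stream.close()
p.terminate()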

So, hypothetically, striking the wine glass should produce the proper tone to break the glass itself.

Using the pitch detection code, the only thing that would be left is to play out the pitch frequency detected by a microphone. The code is straightforward enough using Pygame, Music21, and PyAudio:

from threading import Thread
import time

import pygame
import pyaudio
import numpy as np
from rtmidi.midiutil import open_midiinput

from voiceController import q, get_current_note


class MidiInputHandler(object):
    def __init__(self, port, freq):
        self.port = port
        self.base_freq = freq
        self._wallclock = time.time()
        self.freq_vals = [0 for i in range(6)]

    def __call__(self, event, data=None):
        global currentFrequency
        message, deltatime = event
        self._wallclock += deltatime
        print("[%s] @%0.6f %r" % (self.port, self._wallclock, message))
        # Each MIDI knob (controls 16-20) nudges the frequency at a
        # different level of precision.
        if message[1] == 16:
            self.freq_vals[0] = (message[2] - 62) * .5
        elif message[1] == 17:
            self.freq_vals[1] = (message[2] - 62) * .01
        elif message[1] == 18:
            self.freq_vals[2] = (message[2] - 62) * .005
        elif message[1] == 19:
            self.freq_vals[3] = (message[2] - 62) * .0001
        elif message[1] == 20:
            self.freq_vals[4] = (message[2] - 62) * .00001
        new_freq = self.base_freq
        for i in range(6):
            new_freq += self.freq_vals[i]
        currentFrequency = new_freq
        print(new_freq)


port = 1
try:
    midiin, port_name = open_midiinput(port)
except (EOFError, KeyboardInterrupt):
    exit()

midiSettings = MidiInputHandler(port_name, 940.0)
midiin.set_callback(midiSettings)

pygame.init()
screenWidth, screenHeight = 512, 512
screen = pygame.display.set_mode((screenWidth, screenHeight))
clock = pygame.time.Clock()
running = True

titleFont = pygame.font.Font("assets/Bungee-Regular.ttf", 24)
titleText = titleFont.render("Hit the Glass Gently", True, (0, 128, 0))
titleCurr = titleFont.render("", True, (0, 128, 0))
noteFont = pygame.font.Font("assets/Roboto-Medium.ttf", 55)

# Pitch detection runs in a background thread, pushing notes onto q.
t = Thread(target=get_current_note)
t.daemon = True
t.start()

low_note = ""
high_note = ""
have_low = False
have_high = True

noteHoldLength = 10    # how many samples in a row user needs to hold a note
noteHeldCurrently = 0  # string of the current note
noteHeld = ""          # keep track of how long the current note is held
centTolerance = 10     # how much deviance from proper note to tolerate


def break_the_internet(frequency, notelength=.1):
    """Play a sine wave at the given frequency for notelength seconds."""
    p = pyaudio.PyAudio()
    volume = 0.9           # range [0.0, 1.0]
    fs = 44100             # sampling rate, Hz, must be integer
    duration = notelength  # in seconds, may be float
    f = frequency          # sine frequency, Hz, may be float
    # generate samples, note conversion to float32 array
    # for paFloat32 sample values must be in range [-1.0, 1.0]
    stream = p.open(format=pyaudio.paFloat32, channels=1, rate=fs, output=True)
    # play. May repeat with different volume values (if done interactively)
    samples = (np.sin(2 * np.pi * np.arange(fs * duration) * f / fs)).astype(np.float32)
    stream.write(volume * samples)
    # clean up so repeated calls don't leak audio handles
    stream.stop_stream()
    stream.close()
    p.terminate()


newFrequency = 0
breaking = False
currentFrequency = 0
breaking_zone = False
super_breaking_zone = False
noteLength = 5.0

while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        if event.type == pygame.KEYDOWN and event.key == pygame.K_q:
            running = False
        if event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE and newFrequency != 0:
            breaking = True
            midiSettings.base_freq = newFrequency
            currentFrequency = newFrequency - 10
        if event.type == pygame.KEYDOWN and event.key == pygame.K_s:
            noteLength = 30.0
            breaking_zone = True
        if event.type == pygame.KEYDOWN and event.key == pygame.K_a:
            super_breaking_zone = True
            noteLength = 5.0

    screen.fill((0, 0, 0))
    if breaking:
        titleCurr = titleFont.render("Current Frequency: %f" % currentFrequency, True, (128, 128, 0))
    else:
        # our user should be singing if there's a note on the queue
        if not q.empty():
            b = q.get()
            pygame.draw.circle(screen, (0, 128, 0), (screenWidth // 2 + (int(b['Cents']) * 2), 300), 5)
            noteText = noteFont.render(b['Note'], True, (0, 128, 0))
            if b['Note'] == noteHeldCurrently:
                noteHeld += 1
                if noteHeld == noteHoldLength:
                    titleCurr = titleFont.render("Frequency is: %f" % b['Pitch'].frequency, True, (128, 128, 0))
                    newFrequency = b['Pitch'].frequency
            else:
                noteHeldCurrently = b['Note']
                noteHeld = 1
            screen.blit(noteText, (50, 400))

    screen.blit(titleText, (10, 80))
    screen.blit(titleCurr, (10, 120))
    pygame.display.flip()
    clock.tick(30)

    if breaking:
        break_the_internet(currentFrequency, noteLength)

Now, there are a few things to note here.

One, I use the rtmidi utility to dial in the resonant frequency using a MIDI controller. This lets me adjust the knobs in order to find the place where the straw vibrates the most inside my wine glass.

Secondly, we use pyaudio to generate our sine wave of the resonant frequency. We could have used another library with more immediate feedback, but this was easiest to get in place.

Now, the way the program works is that you strike the glass, and hopefully the voiceController library detects a resonant frequency. The resonant frequency of wine glasses should be between 600 and 900 Hz. Once you’ve got that, you press the SPACEBAR, and that starts playing the resonant frequency out of the speakers.

With this, I was able to generate the results you see from my daily sketch on my desktop computer speakers.

However, I wasn’t able to break the glass. I had a hunch it was because I wasn’t able to generate the volume necessary.

Bringing in the Big Guns

Since I couldn’t get my glass to break with my desktop speakers, I started looking around the web to see how other people have broken glass with speakers.

The best resource I found was from the University of Salford. They recommend using a 2” compression driver in order to get results. So, I went off to Amazon, and picked up a 2” compression driver.

With this, the only thing remaining was to get an audio amplifier capable of driving the speaker to an (un)reasonable volume. I used this, and I plan on using both the amplifier and driver in more audio projects in the future.

Finally, I needed to build a stand for the speaker, along with a cone to focus the energy of the wave. For this, I used Blender, along with my Prusa i3 MK2. I printed it in PLA, and it seemed to have done the job well enough.

With the speaker cone 3D printed, I was finally ready to build my setup.

It looks something like this:

It consists of a Snowball USB microphone, plugged into a laptop. The laptop runs the sketch from above, first waiting to detect a resonant frequency.

You strike the glass, and if the detected resonant frequency is in the zone you’d expect (again, 600Hz to around 900Hz), you press SPACEBAR to begin playing back that resonant frequency.

The audio output is hooked up to an amplifier, and that amplifier is hooked up to the driver with the 3D printed cone. That cone is pointed directly at the top of the wine glass, as you saw in the gif in the introduction.

With that, the glass breaks!

Where to Go From Here

The code, along with the 3D printable cone STL is at Github.

The program that finally breaks the glass used in the video is slightly different from what I’ve written above. It doesn’t adjust with the MIDI controller, because I found it wasn’t necessary in my case to adjust the detected frequency. The glass broke anyway.

If you plan on trying to recreate my experiment, please wear safety glasses and ear protection.

Ear damage is cumulative, and not wearing proper ear protection means damaging your ears permanently. So please don’t do that.

And the glass breaks all over the place, so please wear eye protection too. This project is fun, but not worth getting seriously injured over.

If you’re still learning Python and Pygame, or you want a visual introduction to programming, check out my book, Make Art with Python. The first three chapters are free.

Finally, feel free to share this post with your friends. It helps me to continue making these sorts of tutorials.


Microsoft Offers Bug Bounty to Prevent Another Spectre-Meltdown Fiasco


With critical vulnerabilities like Meltdown and Spectre having been disclosed to the public, it's clearer than ever that more eyeballs are needed when it comes to making sure that our software and hardware are secure. Not long after Intel suffered the bulk of the fallout from Meltdown and Spectre, the company bolstered its bug bounty program to encourage more people to dive in and discover bugs before they can be exploited.

Intel made great strides to improve the program overall by cutting out the invite-only requirement, allowing anyone to find, explore and report potential bugs. Clearly, Microsoft liked that idea, as it has also enhanced its bug bounty program to offer the same top quarter-million-dollar reward that Intel is coughing up.


There is a caveat, however; this particular set of bug bounty rules is exclusive to speculative execution vulnerabilities, the class of bug at the heart of Meltdown and Spectre. Microsoft lays out explicit details about what kind of bug would qualify:

  • A novel category or exploit method for a Speculative Execution Side Channel vulnerability.
  • A novel method of bypassing a mitigation imposed by a hypervisor, host or guest using a Speculative Execution Side Channel attack. For example, this could include a technique that can read sensitive memory from another guest.
  • A novel method of bypassing a mitigation imposed by Windows using a Speculative Execution Side Channel attack. For example, this could include a technique that can read sensitive memory from the kernel or another process.
  • A novel method of bypassing a mitigation imposed by Microsoft Edge using a Speculative Execution Side Channel attack. For example, this could include a technique that can read sensitive memory from Microsoft Edge content.

In order to qualify for the big "prize" of $250,000, the submission must involve a bug that is in a novel category of speculative execution attack that neither Microsoft nor industry partners are aware of. Ideally, this is the payout Microsoft would pay most often, because those bugs would clearly be the most severe. Other levels include reading sensitive memory involving virtual machines and verification of certain bugs actually being exploitable with select Microsoft products, such as Windows 10 or the Edge web browser.

Overall, this is a great move from Microsoft, and it's somewhat reassuring for normal end users to see this kind of commitment being made.


FreeBSD Speculative Execution Vulnerabilities Security Advisory


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

=============================================================================
FreeBSD-SA-18:03.speculative_execution                     Security Advisory
                                                          The FreeBSD Project

Topic:          Speculative Execution Vulnerabilities

Category:       core
Module:         kernel
Announced:      2018-03-14
Credits:        Jann Horn (Google Project Zero); Werner Haas, Thomas
                Prescher (Cyberus Technology); Daniel Gruss, Moritz Lipp,
                Stefan Mangard, Michael Schwarz (Graz University of
                Technology); Paul Kocher; Daniel Genkin (University of
                Pennsylvania and University of Maryland); Mike Hamburg
                (Rambus); Yuval Yarom (University of Adelaide and Data61)
Affects:        All supported versions of FreeBSD.
Corrected:      2018-02-17 18:00:01 UTC (stable/11, 11.1-STABLE)
                2018-03-14 04:00:00 UTC (releng/11.1, 11.1-RELEASE-p8)
CVE Name:       CVE-2017-5715, CVE-2017-5754

Special Note:   Speculative execution vulnerability mitigation is a work in
                progress. This advisory addresses the most significant issues
                for FreeBSD 11.1 on amd64 CPUs. We expect to update this
                advisory to include 10.x for amd64 CPUs. Future FreeBSD
                releases will address this issue on i386 and other CPUs.
                freebsd-update will include changes on i386 as part of this
                update due to common code changes shared between amd64 and
                i386, however it contains no functional changes for i386 (in
                particular, it does not mitigate the issue on i386).

For general information regarding FreeBSD Security Advisories, including
descriptions of the fields above, security branches, and the following
sections, please visit .

I. Background

Many modern processors have implementation issues that allow unprivileged
attackers to bypass user-kernel or inter-process memory access restrictions
by exploiting speculative execution and shared resources (for example,
caches).

II. Problem Description

A number of issues relating to speculative execution were found last year
and publicly announced January 3rd. Two of these, known as Meltdown and
Spectre V2, are addressed here.

CVE-2017-5754 (Meltdown)
- ------------------------

This issue relies on an affected CPU speculatively executing instructions
beyond a faulting instruction. When this happens, changes to architectural
state are not committed, but observable changes may be left in micro-
architectural state (for example, cache). This may be used to infer
privileged data.

CVE-2017-5715 (Spectre V2)
- --------------------------

Spectre V2 uses branch target injection to speculatively execute kernel code
at an address under the control of an attacker.

III. Impact

An attacker may be able to read secret data from the kernel or from a
process when executing untrusted code (for example, in a web browser).

IV. Workaround

No workaround is available.

V. Solution

Perform one of the following:

1) Upgrade your vulnerable system to a supported FreeBSD stable or
release / security branch (releng) dated after the correction date, and
reboot.

2) To update your vulnerable system via a binary patch:

Systems running a RELEASE version of FreeBSD on the i386 or amd64 platforms
can be updated via the freebsd-update(8) utility, followed by a reboot into
the new kernel:

# freebsd-update fetch
# freebsd-update install
# shutdown -r now

3) To update your vulnerable system via a source code patch:

The following patches have been verified to apply to the applicable FreeBSD
release branches.

a) Download the relevant patch from the location below, and verify the
detached PGP signature using your PGP utility.

[FreeBSD 11.1]
# fetch https://security.FreeBSD.org/patches/SA-18:03/speculative_execution-amd64-11.patch
# fetch https://security.FreeBSD.org/patches/SA-18:03/speculative_execution-amd64-11.patch.asc
# gpg --verify speculative_execution-amd64-11.patch.asc

b) Apply the patch. Execute the following commands as root:

# cd /usr/src
# patch

and reboot the system.

VI. Correction details

CVE-2017-5754 (Meltdown)
- ------------------------

The mitigation is known as Page Table Isolation (PTI). PTI largely separates
kernel and user mode page tables, so that even during speculative execution
most of the kernel's data is unmapped and not accessible.

A demonstration of the Meltdown vulnerability is available at
https://github.com/dag-erling/meltdown. A positive result is definitive
(that is, the vulnerability exists with certainty). A negative result
indicates either that the CPU is not affected, or that the test is not
capable of demonstrating the issue on the CPU (and may need to be modified).

A patched kernel will automatically enable PTI on Intel CPUs. The status can
be checked via the vm.pmap.pti sysctl:

# sysctl vm.pmap.pti
vm.pmap.pti: 1

The default setting can be overridden by setting the loader tunable
vm.pmap.pti to 1 or 0 in /boot/loader.conf. This setting takes effect only
at boot.

PTI introduces a performance regression. The observed performance loss is
significant in microbenchmarks of system call overhead, but is much smaller
for many real workloads.

CVE-2017-5715 (Spectre V2)
- --------------------------

There are two common mitigations for Spectre V2. This patch includes a
mitigation using Indirect Branch Restricted Speculation, a feature available
via a microcode update from processor manufacturers. The alternate
mitigation, Retpoline, is a feature available in newer compilers. The
feasibility of applying Retpoline to stable branches and/or releases is
under investigation.

The patch includes the IBRS mitigation for Spectre V2. To use the mitigation
the system must have an updated microcode; with older microcode a patched
kernel will function without the mitigation.

IBRS can be disabled via the hw.ibrs_disable sysctl (and tunable), and the
status can be checked via the hw.ibrs_active sysctl. IBRS may be enabled or
disabled at runtime. Additional detail on microcode updates will follow.

The following list contains the correction revision numbers for each
affected branch.

Branch/path                                                      Revision
- -------------------------------------------------------------------------
stable/11/                                                        r329462
releng/11.1/                                                      r330908
- -------------------------------------------------------------------------

To see which files were modified by a particular revision, run the
following command, replacing NNNNNN with the revision number, on a machine
with Subversion installed:

# svn diff -cNNNNNN --summarize svn://svn.freebsd.org/base

Or visit the following URL, replacing NNNNNN with the revision number:

VII. References

The latest revision of this advisory is available at

-----BEGIN PGP SIGNATURE-----

iQKTBAEBCgB9FiEE/A6HiuWv54gCjWNV05eS9J6n5cIFAlqon0RfFIAAAAAALgAo
aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldEZD
MEU4NzhBRTVBRkU3ODgwMjhENjM1NUQzOTc5MkY0OUVBN0U1QzIACgkQ05eS9J6n
5cKORw/+Lc5lxLhDgU1rQ0JF6sb2b80Ly5k+rJLXFWBvmEQt0uVyVkF4TMJ99M04
bcmrLbT4Pl0Csh/iEYvZQ4el12KvPDApHszsLTBgChD+KfCLvCZvBZzasgDWGD0E
JhL4eIX0wjJ4oGGsT+TAqkmwXyAMJgWW/ZgZPFVXocylZTL3fV4g52VdG1Jnd2yu
hnkViH2kVlVJqXX9AHlenIUfEmUiRUGrMh5oPPpFYDDmfJ+enZ8QLxfZtOKIliD7
u+2GP8V/bvaErkxqF5wwobybrBOMXpq9Y/fWw0EH/om7myevj/oORqK+ZmGZ17bl
IRbdWxgjc1hN2TIMVn9q9xX6i0I0wSPwbpLYagKnSnE8WNVUTZUteaj1GKGTG1rj
DFH2zOLlbRr/IXUFldM9b6VbZX6G5Ijxwy1DJzB/0KL5ZTbAReUR0pqHR7xpulbJ
eDv8SKCwYiUpMuwPOXNdVlVLZSsH5/9A0cyjH3+E+eIhM8qyxw7iRFwA0DxnGVkr
tkMo51Vl3Gl7JFFimGKljsE9mBh00m8B0WYJwknvfhdehO4WripcwI7/V5zL6cwj
s018kaW4Xm77LOz6P1iN8nbcjZ9gN2AsPYUYYZqJxjCcZ7r489Hg9BhbDf0QtC0R
gnwZWiZ/KuAy0C6vaHljsm0xPEM5nBz/yScFXDbuhEdmEgBBD6w=
=fqrI
-----END PGP SIGNATURE-----

Medieval Lucky Charms


Today is St Patrick’s Day, and to celebrate all things Irish we are exploring medieval Irish charms in the British Library's collections. The use of protective charms in Ireland can be traced back to the early medieval period, and possibly to St Patrick’s own lifetime.


St Patrick asleep, with a figure holding a book, France, 2nd quarter of the 13th century: Royal MS 20 D VI, f. 213v


Text page of the lorica of St Patrick, 15th century: Egerton MS 93, f. 19r

A lorica is a medieval Christian charm or prayer that will grant Divine protection when invoked. In classical Latin, the word ‘lorica’ refers to a protective breastplate worn as armour by Roman soldiers. The lorica of St Patrick, or St Patrick’s Breastplate, was supposedly composed by the saint himself to celebrate the victory of the Irish Church over paganism. The British Library houses one of only three surviving medieval copies of this charm, in a 15th-century manuscript containing an account in Middle Irish of St Patrick’s life (now Egerton MS 93). The lorica is also composed in Middle Irish, and is formed of seven verses beginning Attoruig indiu nert triun togairm trinoite (‘I arise today through a mighty strength, the invocation of the Trinity’). A preface accompanies the lorica in an 11th-century manuscript known as the Liber hymnorum (Trinity College Dublin MS 1441), which states that the prayer was written to safeguard St Patrick and his monks against deadly enemies and would protect anyone who read it from devils and sudden death.


Text page of the lorica of St Fursey: Add MS 30512, f. 35v

Another notable protective charm is attributed to St Fursey (d. c. 650), an Irish monk from modern-day Co. Galway and the first recorded Irish missionary to Anglo-Saxon England in c. 630. The only known copy of St Fursey’s lorica survives in a Middle Irish collection of theological works composed in the 15th and 16th centuries, known as the Leabhar Uí Maolconaire (Add MS 30512). Like the lorica of St Patrick, St Fursey’s prayer invokes the power of the Holy Trinity to protect one against evil. The text begins Robé mainrechta Dé forsind [f]ormnassa (‘The arms of God be around my shoulders’).


Head, shoulders, knees and bones: the opening of the lorica of Laidcenn to protect the body, late 8th or early 9th century: Harley MS 2965, f. 38r

Protective Irish charms also survive in medieval English manuscripts, such as the Book of Nunnaminster (Harley MS 2965), produced in Mercia in the late 8th or early 9th century. This manuscript contains the earliest known copy of the lorica of Laidcenn (d. c. 660), a monk and scholar at the monastery of Clonfert-Mulloe in modern-day Co. Laois. The text was copied in Latin, and invokes the protection of individual limbs and body parts from demons, including the eyes:

Deliver all the limbs of me a mortal

with your protective shield guarding every member,

lest the foul demons hurl their shafts

into my sides, as is their wont.

Deliver my skull, head with hair and eyes,

mouth, tongue, teeth and nostrils,

neck, breast, side and limbs,

joints, fat and two hands.

In the same manuscript, Laidcenn’s lorica is accompanied by a prayer against poison. These many surviving protective charms give new meaning to the saying, ‘the luck of the Irish’!


A charm against poison: Harley MS 2965, f. 37r

Alison Ray

Follow us on Twitter @BLMedieval

Sources:

Edition and translation of the preface and lorica of St Patrick: Whitley Stokes & John Strachan (eds.), Thesaurus palaeohibernicus (Cambridge: Cambridge University Press, 1903), vol. 2, pp. 354–58.

Translation of the lorica of St Fursey: John Ó Ríordáin, The Music of What Happens (Dublin: The Columba Press, 1996) pp. 46–47.

Edition and translation of the lorica of Laidcenn: Michael W. Herren, The Hisperica famina (Toronto: Pontifical Institute of Medieval Studies, 1987), vol. 2, pp. 76–89.

“Peak Civilization”: The Fall of the Roman Empire (2009)

This text describes the presentation that I gave at the "Peak Summit" in Alcatraz (Italy) on June 27, 2009 (the picture shows me speaking there). It is not a transcription, but something that I wrote from memory, mostly in a single stretch, while it was still fresh in my mind. The result is that my 40-minute talk became a text of more than 10,000 words, much longer than a typical internet document (but still less than Gibbon's six volumes on the same subject!). A talk, in any case, can be longer and more effective than a post, mostly because the people listening to you are not distracted by the infinite distractions of an internet connection. So, I wrote this post trying to maintain the style of an oral presentation. I don't know if it will turn out to be more easily readable than the usual style but, if you arrive at the end, you'll tell me what you think of it.

Ladies and gentlemen, first of all thank you for being here. This afternoon I'll try to say something about a subject that I am sure you are all interested in: the decline and fall of the Roman Empire. It has been discussed over and over, because we think that our civilization may follow the same destiny as the Roman one: decline and fall. So, the Roman Empire offers us some kind of model. We can say it is the paradigm of collapsing societies. And yet, we don't seem to be able to find an agreement on what caused the collapse of the Roman Empire.

Historians - and not just historians - have been debating this subject, and they have come up with literally dozens of explanations: barbarian invasions, epidemics, lead poisoning, moral decadence and what have you. Which is the right one? Or are all these explanations right? This is the point that I would like to discuss today. I'll be focusing on the interpretation of Joseph Tainter, who sees empires and civilizations as "complex" systems, and I'll try to use system dynamics to describe their collapse.

Before we go into this matter, however, let me add a disclaimer. I am not a historian and I don't pretend to be one. It is not my intention to criticize or disparage the work of historians. You see, there are several ways of making a fool of oneself: one which is very effective is to try to teach people who know more than you do. For some reason, however, it happens all the time, and not just with history; just look at the debate on climate change! So, what I am trying to do here is just to apply system dynamics to the history of the Roman Empire, which - as far as I know - has not been done so far. It is a qualitative version of system dynamics; making a complete model of the whole Roman Empire is beyond my means. But the results are very interesting; or so I believe.

The collapse seen from inside.

Let's start from the beginning, and here the beginning is with the people who were contemporary to the collapse, the Romans themselves. Did they understand what was happening to them? This is a very important point: if a society - meaning its government - can understand that collapse is coming, can it do something to avoid it? It is relevant to our own situation, today.

Of course, the ancient Romans are long gone and they didn't leave us newspapers. Today we have huge amounts of documents but, from Roman times, we have very little. Everything that has survived from those times had to be slowly hand-copied by medieval monks, and a lot has been lost. We have a number of texts by Roman historians, but none of them seems to have understood exactly what was going on. Historians of that time were more like chroniclers; they reported the facts they knew. Not that they didn't have their ideas about what they were describing, but they were not trying to make models, as we would say today. So, I think it may be interesting to take a look at documents written by people who were not historians, but who were living through the collapse of the Roman Empire. What did they think of what was going on?

Let me start with Emperor Marcus Aurelius, who lived from 121 to 180 A.D. He was probably the last Emperor who ruled a strong empire. Yet, he spent most of his life fighting to keep the Empire together; fighting barbarians. Maybe you have seen the movie "Gladiator": Marcus Aurelius appears in the first scenes. The movie is not historically accurate, of course, but it is true that Aurelius died in the field, while he was fighting invaders. He wasn't fighting for glory, he wasn't fighting to conquer new territories. He was just fighting to keep the Empire together, and he had a terribly hard time just doing that. Times had changed a lot from the times of Caesar and of Trajan.

Marcus Aurelius did what he could to keep the barbarians away but, a few decades after his death, the Empire had basically collapsed. That was what historians call "the third century crisis". It was really bad; a disaster. The empire managed to survive for a couple of centuries longer as a political entity, but it wasn't the same thing. It was no longer the Empire of Marcus Aurelius; it was something that just tried to survive as best as it could, fighting barbarians, plagues, famines, warlords and all kinds of disasters falling on it one after the other. Eventually, the Empire disappeared also as a political entity. It did that with a whimper - at least in its Western part, in the 5th century A.D. The Eastern Empire lasted much longer, but that is another story.

Here is a piece of statuary from Roman times. We know what Marcus Aurelius looked like.

Now, if it is rare that we have the portrait of a man who lived so long ago, it is even rarer that we can also read his inner thoughts. But we can do exactly that with Marcus Aurelius. He was a "philosopher-emperor" who left us his "Meditations"; a book of philosophical thoughts. For instance, you can read such things as:

Though thou shouldst be going to live three thousand years, and as many times ten thousand years, still remember that no man loses any other life than this which he now lives, nor lives any other than this which he now loses.

That is the typical tone of the book - you may find it fascinating or perhaps boring; it depends on you. Personally, I find it fascinating. The "Meditations" is a statement from a man who was seeing his world crumbling down around him and who strove nevertheless to maintain a personal balance; to keep a moral stance. Aurelius surely understood that something was wrong with the Empire: during all of their history, the Romans had almost always been on the offensive. Now, they were always defending themselves. That wasn't right, of course.

But you never find in the Meditations a single line that lets you suspect that the Emperor thought that there was something to be done other than simply fighting to keep the barbarians out. You never read that the Emperor was considering, say, things like social reform, or maybe something to redress the disastrous situation of the economy. He had no concern, apparently, that the Empire could actually fall one day or another.

Now, I'd like to show you an excerpt from another document, written perhaps in the late 4th century, probably after the battle of Adrianople - one of the last important battles fought (and lost) by the Roman Empire. This is a curious document. It is usually called "Of matters of war", because the title and the name of the author have been lost. But we have the bulk of the text, and we can say that the author was probably somebody high up in the imperial bureaucracy. Someone very creative - clearly - you can see that from the illustrations of the book. Of course, what we see now are not the original illustrations, but copies made during the Middle Ages. But the fact that the book had these illustrations was probably what made it survive: people liked these colorful illustrations and had the book copied. So it wasn't lost. The author described all sorts of curious weaponry. One that you can see here is a warship powered by oxen.

Of course, a ship like this one would never have worked. Think of how to feed the oxen. And think of how to manage the final results of feeding the oxen. Probably none of the curious weapons invented by our anonymous author would ever have worked. It all reminds me of Jeremy Rifkin and his hydrogen based economy. Rifkin understands what the problem is, but the solutions he proposes, well, are a little like the end result of feeding the oxen; but let me not go into that. The point is that our 4th century author does understand that the Roman Empire is in trouble. Actually, he seems to be scared to death by what's happening. Read this sentence; I am showing it to you in the original Latin to give you a sense of the flavor of this text.

“In primis sciendum est quod imperium romanum circumlatrantium ubique nationum perstringat insania et omne latus limitum tecta naturalibus locis appetat dolosa barbaries."

Of course you may not be able to translate from Latin on the spot. For that, being Italian gives you a definite advantage. But let me just point out a word to you: "circumlatrantium", which refers to barbarians who are, literally, "barking around" the empire's borders. They are like dogs barking and running around; and not just barking - they are trying hard to get in. It is almost a scene from a horror movie. A nightmare. So the author of "Of matters of war" is thinking of how to get rid of these monsters. But his solutions were not so good. Actually, it was just wishful thinking. None of these strange weapons was ever built. Even our 4th century author, therefore, fails completely to understand what the real problems of the Empire were.

Now, I would like to show you just one more document from the time of the Roman Empire. It is "De Reditu suo", by Rutilius Namatianus. The title means "of his return". Namatianus was a patrician who lived in the early 5th century; he was a contemporary of St. Patrick, the Irish saint. He had some kind of job with the imperial administration in Rome. It was some decades before the "official" disappearance of the Western Roman Empire; that was in 476, when the last emperor, Romulus Augustulus, was deposed. You may have seen Romulus Augustulus as the protagonist of the movie "The Last Legion". Of course that is not a movie that claims to be historically accurate, but it is fun to think that after so many years we are still interested in the last years of the Roman Empire - it is a subject of endless fascination. Even the book by Namatianus has been turned into a movie, as you can see in the figure. It is a work of fantasy, but they have tried to be faithful to the spirit of Namatianus' report. It must be an interesting movie, but it was shown only in theaters in Italy, and even there for a very short time; so I missed it. But let's move on.

Namatianus lived at a time that was very close to the last gasp of the Empire. He found that, at some point, it wasn't possible to live in Rome any longer. Everything was collapsing around him and he decided to take a boat and leave. He was born in Gallia, which we call "France" today, and apparently he had some properties there. So that is where he headed; that is the reason for the title, "of his return". He must have arrived there and survived for some time, because the document that he wrote about his travel has survived and we can still read it, even though the end is missing. So, Namatianus gives us this chilling report. Just read this excerpt:

"I have chosen the sea, since roads by land, if on the level, are flooded by rivers; if on higher ground, are beset with rocks. Since Tuscany and since the Aurelian highway, after suffering the outrages of Goths with fire or sword, can no longer control forest with homestead or river with bridge, it is better to entrust my sails to the wayward."

Can you believe that? If there was one thing that the Romans had always been proud of, it was their roads. These roads had a military purpose, of course, but everybody could use them. A Roman Empire without roads is not the Roman Empire; it is something else altogether. Think of Los Angeles without highways. "Sic transit gloria mundi", as the Romans would say; there goes the glory of the world. Namatianus tells us also of silted harbors, deserted cities, a landscape of ruins that he sees as he moves north along the Italian coast.

But what does Namatianus think of all this? Well, he sees the collapse all around him, but he can't understand it. For him, the reasons for the fall of Rome are totally incomprehensible. He can only interpret what is going on as a temporary setback. Rome had seen hard times before, but the Romans had always rebounded and eventually triumphed over their enemies. It had always been like this; Rome would become powerful and rich again.

There would be much more to say on this matter, but I think it is enough to say that the Romans did not really understand what was happening to their Empire, except in terms of military setbacks that they always saw as temporary. They always seemed to think that these setbacks could be redressed by increasing the size of the army and building more fortifications. All this gives us an idea of what it is like to live through a collapse "from the inside". Most people just don't see it happening - it is like being a fish: you don't see the water.

The situation seems to be the same with us: talking about the collapse of our civilization is reserved for a small bunch of catastrophists; you know them: ASPO members, or members of The Oil Drum - that kind of people. Incidentally, we can't rule out that at some moment at the time of the Roman Empire there was something like a "Roman ASPO"; maybe "ASPE", the "association for the study of peak empire". If it ever existed, it left no trace. That may also happen with our ASPO; actually it is very likely, but let's go on.

What destroyed the Roman Empire?

From our perspective, we can see the cycle of the Roman Empire as one that is nicely complete. We can see it from start to end; from the initial expansion to the final collapse. As I said, a lot of documents and data have been lost but, still, we have plenty of information on the Empire - much more than for other past empires and civilizations that collapsed and disappeared as well. Yet, we don't seem to be able to agree on the reasons for the collapse.

You have surely read Edward Gibbon's "Decline and Fall of the Roman Empire"; at least parts of it. Gibbon wrote a truly monumental account of the story of the Empire, but he doesn't really propose a "theory" of the causes of the fall, as most historians would do later on. On reading Gibbon's work, you understand that he thinks there was a sort of loss of moral fiber in the Romans. He attributes this loss to the negative effect of Christianity. That is, the noble virtues of the ancient Romans - he says - had been corrupted by this sect of fanatics coming from the East. This had made the Romans unable to resist the invading barbarians.

You'll probably agree that this explanation by Gibbon is a bit limited, just as the interpretations of the authors who came later are limited. Spengler and Toynbee are two examples, but if we were to discuss their work in detail it would take - well - weeks, not hours. So, let me jump forward to the historian who - I think - has given a new and original interpretation of the decline of Rome: Joseph Tainter, with his "The Collapse of Complex Societies". His book was first published in 1988.

It is a great book. I suggest you read it and ponder it. It is truly a mine of information about collapses. It doesn't deal just with the Roman Empire, but with many other civilizations. Tainter goes well beyond the simplistic interpretations of many earlier authors and identifies a key point in the question of collapse. Societies are complex entities; he understands that. And, hence, their collapse must be related to complexity. Here is an excerpt of Tainter's way of thinking. It is a transcription of an interview that Tainter gave in the film "Blind Spot" (2008):

In ancient societies that I studied, for example the Roman Empire, the great problem that they faced was when they would have to incur very high costs just to maintain the status quo. Invest very high amounts in solving problems that don't yield a net positive return, but instead simply allowed them to maintain what they already got. This decreases the net benefit of being a complex society.

Here is how Tainter describes his view in graphical form in his book.

So, you see that Tainter has one thing very clear: complexity gives a benefit, but it also has a cost. This cost is related to energy, as he makes clear in his book. And in emphasizing complexity, Tainter gives us a good definition of what we mean by collapse. Very often people have been discussing the collapse of ancient societies without specifying what they meant by "collapse". For a while, there was a school of thought that maintained that the Roman Empire had never really "collapsed". It had simply transformed itself into something else. But if you define collapse as "a rapid reduction of complexity", then you have a good definition, and that is surely what happened to the Roman Empire.

So, what was important about the collapse of the Roman Empire is not whether or not there was an emperor in Rome (or, as was the case later, in Ravenna). We might well imagine that the line of the emperors could have continued well after Romulus Augustulus - the last emperor. And even after him there remained a legitimate Roman Emperor in Byzantium, in the Eastern Empire. You could very well say that the Empire didn't disappear as long as there were emperors in Byzantium, that is, until Constantinople fell, in the 15th century. And since the Russian Czars saw themselves as Roman emperors (that is where "Czar" comes from: from "Caesar"), you could say that the Roman Empire didn't disappear until the last Czar was deposed, in 1917. But that is not the point. The point is that the Roman Empire had started undergoing a catastrophic loss of complexity already during the third century. So, that was the real collapse. What happened later on is another story.

Given that Tainter speaks of complexity, and of the energy cost of complexity, it is perhaps surprising for us that he doesn't consider resource depletion as a cause of collapse. Resource depletion, after all, is the main theme of Jared Diamond's book "Collapse". It is how he interprets the collapse of many societies. Tainter explicitly denies this in his book. He says that if such a thing as depletion appears, then society should react against it. After all, it is normal: society always reacts to all kinds of crises, so why shouldn't it react to resource depletion? This point made by Tainter may appear surprising - actually unpalatable - to people who have made resource depletion the centerpiece of their thought. Peak oilers, for instance.

The disagreement between peak oilers (and Diamond) and Tainter may not be as strong as it appears, as we'll see when we go deeper into the details. But before we do that, let me say something general about these explanations that people give for collapse. It happens all the time that people discover something that they describe as if it were the only cause of collapse. That is, they sort of get enamored of a single cause for collapse. They say, "I have the solution; it is this and nothing else."

Consider the story that the Roman Empire collapsed because the Romans used to drink wine from lead goblets, and so they died of lead poisoning. There is some truth in it: there is evidence of lead poisoning in ancient Roman skeletons, and there are descriptions of lead poisoning in ancient Roman texts. Surely it was a problem, probably even a serious one. But you can't see this story of lead poisoning in isolation; otherwise you neglect everything else: the Roman Empire was not just people drinking wine from lead goblets. Think of a historian of the future who describes the fall of the American Empire as the result of Americans eating hamburgers. That would have some truth in it and, for sure, the kind of food that most Americans eat today is - well - we know that it is doing a lot of damage to Americans in general. But you wouldn't say that hamburgers could be the cause of the fall of the American Empire. There is much more to it than that.

The same kind of reasoning holds for other "causes" that have been singled out for the fall of Rome. Think, for instance, of climate change. Here too, there is evidence that the fall of the Roman Empire was accompanied by droughts. That may surely have been a problem for the Romans. But, again, we would be making the same mistake as a future historian who attributed the fall of the American Empire - say - to Hurricane Katrina. (I have nothing special against the American Empire; it is just that it is the current empire.)

The point that Tainter makes, quite correctly, in his book is that it is hard to see the fall of such a complex thing as an empire as due to a single cause. A complex entity should fall in a complex manner, and I think that is correct. In Tainter's view, societies always face crises and challenges of various kinds. The answer to these crises and challenges is to build up structures - say, bureaucratic or military - in response. Each time a crisis is faced and solved, society finds itself with an extra layer of complexity. Now, Tainter says, as complexity increases, the benefit of this extra complexity starts going down - he calls it "the marginal benefit of complexity". That is because complexity has a cost - it costs energy to maintain complex systems. As you keep increasing complexity, this benefit becomes negative. The cost of complexity overtakes its benefit. At some moment, the burden of these complex structures is so great that the whole society crashes down - it is collapse.
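
A toy calculation can make Tainter's argument concrete. In the sketch below (the functions and numbers are my own illustrative assumptions, not anything taken from Tainter's book), the benefit of complexity is concave - each new layer helps less than the last - while the energy cost of maintaining complexity grows linearly. The net benefit rises, peaks where the marginal benefit equals the marginal cost, and eventually turns negative:

import numpy as np

complexity = np.linspace(0.0, 20.0, 2001)
benefit = np.sqrt(complexity)   # assumed concave benefit: diminishing returns
cost = 0.25 * complexity        # assumed linear maintenance (energy) cost
net = benefit - cost

print("net benefit peaks at complexity ~", complexity[np.argmax(net)])
print("net benefit turns negative at complexity ~", complexity[net < 0][0])

With these made-up numbers, the peak is at a complexity of 4 and the net benefit turns negative beyond 16: past that point, each new structure costs society more than it gives back.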

I think that Tainter has understood a fundamental point here. Societies adapt to changes. Indeed, one characteristic of complex systems is that they adapt to changing external conditions. It is called "homeostasis" and I tend to see it as the defining characteristic of a complex system (as opposed to a merely complicated one). So, in general, when you deal with complex systems, you should not think in terms of "cause and effect" but, rather, in terms of "forcing and feedback". A forcing is something that comes from outside the system. A feedback is how the system reacts to a forcing, usually attaining some kind of homeostasis. Homeostasis is a fundamental concept in system dynamics. Something acts on something else, but that something else also reacts. That is feedback. It may be positive (reinforcing) or negative (damping), and we speak of "feedback loops" which normally stabilize systems - within limits, of course.

Homeostasis has to be understood for what it is. It is not at all the same thing as "equilibrium" as it is defined in thermodynamics. For example, a human being is a complex system. When you are alive, you are in homeostasis. If you are in equilibrium, it means that you are dead. Homeostasis is a dynamical equilibrium of forces.

Also, homeostasis cannot contradict the principles of physics. It can only adapt to physical laws. Think of yourself swimming in the sea. Physics says that you should float, but you need to expend some energy to maintain a homeostatic condition in which your head stays above the water. Now, suppose that your feet get entangled with something heavy. Then, physics says that you should sink. Yet, you can expend more energy, swim harder, and still keep your head above the water - again it is homeostasis. But, if nothing changes, at some moment you'll run out of energy, you get tired and you can't keep homeostasis any more. At this point, physics takes over and you sink, and you drown. It is the typical behavior of complex systems. They can maintain homeostasis for a while, as long as they have resources to expend for this purpose.

Something similar occurs in human societies. When there is a forcing - say, an epidemic that kills a lot of people - societies react by generating more children. Look at the demographic statistics for our societies: there is a dip in numbers for the world wars, but it is rapidly compensated by more births afterward. In Roman times, too, there were epidemics and the eruption of Vesuvius, which killed a lot of people. But those were small forcings that Roman society could compensate for.

Not all forcings can be compensated for, but we know that the Romans were not destroyed by an asteroid falling into the Mediterranean Sea. It might have happened, and in that case there would have been no feedback able to keep the empire together. We would have a single cause for the disappearance of the Roman Empire and everybody would agree on it. But that has not happened, of course. Perhaps something like that happened to the Cretan civilization, destroyed by a volcanic eruption - but that's another story.

So, in Tainter's view there is this feedback relationship between complexity and energy. At least, that is the way I interpret it. Complexity feeds on energy and also strains the availability of energy. It is feedback. And not just energy; resources in general. So, I think that Tainter is right in rejecting a simple explanation like "resource depletion is the cause of the fall of the Roman Empire". But, clearly, resources are an important part of his model. I think Tainter had the Roman Empire in mind when he developed this model, but it is of quite general validity. If this is the way things stand, his model is not in conflict with the models we have that see resource depletion as the main factor that causes collapse - the main factor, but not the only cause. We must see collapse as something dynamic, and now I'll try to explain just that.

Dynamic models of collapse

Once we start reasoning in terms of complexity, we immediately see the relationship of Tainter's model with other models. I could cite John Greer's theory of "catabolic collapse", but we can go directly to the mother of all theories based on feedback: the study called "The Limits to Growth", which appeared for the first time in 1972.

As we know, "The Limits to Growth" was not about the fall of the Roman Empire. The authors tried to describe our contemporary world, but the model they used is very general and perhaps we can apply it to the Roman Empire as well. So, first of all, we need to understand how the model works. Let me show you a simplified graphic representation of the model.

This image was made by Magne Myrtveit a few years ago and I think it nicely summarizes the main elements of the world model used for the Limits to Growth studies.

There is a problem with dynamic models: they are often very complex and difficult to understand. They use a graphical formalism but, if you look at one of these models made - for instance - using the "Stella" or "Vensim" software, all you see is a jumble of boxes and arrows. If you are not trained in this kind of thing, you can't understand what the model is about. Personally, I often find that the equations are clearer than all those boxes and arrows.

So, we need something more graphical, easier to understand, especially if we have to show these things to politicians. And, as I said, I think that Myrtveit has struck the right balance here: this graphic is "mind sized" (I am using a term from Seymour Papert, who invented the "Logo" programming language). It is mind sized because I think that you can make sense of this diagram in a few minutes. There remains a problem with politicians: their attention span is more of the order of thirty seconds or less. But that is another problem.

So, Myrtveit's image shows us the major elements of the world model - the model of "The Limits to Growth" - and their relationships. You see population, agriculture, natural resources, pollution and capital: five main elements of the model, each one rather intuitive to understand. What is important is the feedback relationship that exists among these elements. Perhaps the most important feedback loop is the one between capital and natural resources. Here is how the authors of "The Limits to Growth" described this relationship:

The industrial capital stock grows to a level that requires an enormous input of resources. In the very process of that growth it depletes a large fraction of the resources available. As resource prices rise and mines are depleted, more and more capital must be used for obtaining resources, leaving less to be invested for future growth. Finally investment cannot keep up with depreciation, and the industrial base collapses, taking with it the service and agricultural systems, which have become dependent on industrial inputs.

Considering just two elements, instead of five, is not in contradiction with the more complex model. It makes sense especially when you are not considering a whole empire but something more limited, for instance the oil industry. Here are the results of this approach, this time with the equations written out explicitly.

You see that we obtain "bell shaped" curves. We do see bell shaped curves whenever a natural resource is exploited in free market conditions. The "Hubbert curve" for oil production is just one case; there are many others. The curve is the result of a phenomenon called overexploitation, or overshoot, which destroys even resources that are in principle renewable. The story of whaling in the 19th century is typical, and I wrote a paper on that - I am writing another one. It is a fascinating subject: whales are a renewable resource, of course, because they reproduce. But they were hunted so efficiently that, by the end of the 19th century, it is estimated that there were only 50 females left in the oceans of the species that was most hunted: the "right whale" (it was the "right" whale because it was easy to kill; of course nobody had asked the whales' opinion about the name).
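
This kind of curve can be reproduced with a truly "mind sized" model: just two stocks, capital and resource, coupled by feedback. Here is a minimal sketch in Python; the equations and the rate constants k1, k2, k3 are my own illustrative choices, not the actual equations of "The Limits to Growth". Capital grows by exploiting the resource and decays by depreciation; the resource is non-renewable, so it only goes down:

import matplotlib.pyplot as plt

k1, k2, k3 = 0.03, 0.03, 0.01    # assumed rate constants
R, C = 1.0, 0.01                 # initial resource and capital stocks
production, capital, resource = [], [], []

for t in range(600):
    P = k1 * C * R               # production: capital exploiting the resource
    R = R - P                    # depletion: the resource only goes down
    C = C + k2 * C * R - k3 * C  # reinvested production minus depreciation
    production.append(P); capital.append(C); resource.append(R)

plt.plot(production, label="production")
plt.plot(capital, label="capital")
plt.plot(resource, label="resource")
plt.xlabel("time (arbitrary units)"); plt.legend(); plt.show()

Run it and you get the bell shaped production curve: first growth, because capital is so good at exploiting the resource; then decline, because the resource is gone. Capital peaks somewhat later than production - a point we will come back to.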

If you consider all five elements, things become more complex, but the general approach doesn't change that much. You can play a game with the scheme in Myrtveit's figure and you can relate it to what Tainter said about human societies. You remember that Tainter says that if a crisis emerges, society will try to cope with it. From the scheme, you can see what happens as time goes by and as people do things to avoid collapse.

So, suppose that pollution becomes a serious problem. Let's imagine that fumes from smokestacks are killing people; then society will allocate some capital to reduce fumes. Say, they place filters on smokestacks. But filters need energy and natural resources to be built, and that will place some further strain on natural resources. That will put strain on capital - so, fighting pollution may accelerate collapse, but not fighting it may cause collapse as well, although for different reasons - because pollution kills people, and that makes it more difficult to generate capital, and so on. You see how it works.

Let's take another example. Suppose that population grows to the point that there is not enough food for everybody. In response, society will use a fraction of its natural resources to produce fertilizers, which will increase the yield of agriculture. That, however, will create a further increase in population, which will put further strain on agriculture and generate more pollution. That, in turn, will put new strain on capital and resources, and so on. Within limits, society can always adapt in this way - it is homeostasis, as I said. But only within limits.
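
To see this kind of trade-off concretely, we can bolt a pollution stock onto the two-stock sketch shown earlier (again, every constant here is an assumption of mine, chosen only for illustration): production generates pollution, pollution damages capital, and society can divert a fraction f of its capital to abatement - capital which is then no longer available for reinvestment:

def run(f, steps=600):
    # f = fraction of capital diverted from reinvestment to abatement
    k1, k2, k3 = 0.03, 0.03, 0.01         # as in the sketch above
    gen, abate, damage = 0.5, 0.02, 0.05  # assumed pollution constants
    R, C, POL = 1.0, 0.01, 0.0
    peak_C = 0.0
    for t in range(steps):
        P = k1 * C * R                                 # production
        POL = max(POL + gen * P - abate * f * C, 0.0)  # pollution stock
        C = C + k2 * (1 - f) * C * R - k3 * C - damage * POL * C
        R = max(R - P, 0.0)
        peak_C = max(peak_C, C)
    return peak_C, C

for f in (0.0, 0.1, 0.3):
    peak, final = run(f)
    print(f"f={f}: peak capital {peak:.3f}, final capital {final:.3f}")

Which strategy does better depends entirely on the assumed constants - and that is exactly the point: once the feedbacks are in play, you cannot guess the outcome; you have to run the model and see.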

You can play this kind of game in various ways. The five-element figure by Magne Myrtveit is a good tool for gaining a feeling of how society reacts to external interventions and how it evolves with the gradual depletion of natural resources. If these resources are non-renewable, as is the case with our mineral resources, eventually the amount of capital that can be created and maintained must go down - it is one of the possible causes of collapse, probably the most common. But in order to see how it is going to happen, you need to run the model on a computer and see what you get. Here are typical results, from the 2004 edition of "The Limits to Growth".

This is called the "standard run" or the "base case" scenario. It is a run of the model with the parameters most closely fitting the present situation. You see collapse occurring - it is when you see industrial and agricultural production crashing down. As you can see, the more complex model still produces bell shaped curves, although non-symmetric ones.

Note that the model doesn't have a "complexity" parameter built in. However, it is clear that when the industrial and agricultural systems cease to function, complexity is going down (and population as well, of course). So, in a certain sense, the "Limits to Growth" model is compatible with Tainter's model - or so I tend to see things.

Of course, you don't have to take this scenario as a prophecy. It is just a mental tool designed to amplify your understanding of the system. You can change the parameters and the assumptions - collapse can be postponed - but the model is very robust. An important point is that these bell shaped curves are typical and are almost always the result, unless you use very specific assumptions as input, usually assuming human intervention to avoid collapse.

People are very good at optimizing exploitation. The problem is that they exaggerate and take out of the system more than what the system can replace. And that is the reason for the curve: first you go up, because you are so good at exploiting the resource; then you go down, because you have exploited it too much. In the middle, there has to be a peak - it is "peak resource". In the case of crude oil, people speak of "peak oil". In the case of a whole civilization, we may speak of "peak civilization". And, as we said before, peak civilization also corresponds to "peak complexity", in the sense that Tainter described.

One last point here. Collapse is not irreversible. Society goes into overshoot, then collapses, but the collapse gives the overexploited resource time to recover, so growth can restart after a while. Homeostasis is like orbiting around an equilibrium point, without ever reaching it. It is a cycle that may keep going up and down, or may dampen out to reach an approximately stable state. That is, if the resource is renewable. If it is not, like oil or uranium, when it is used up, there is no more. In this case, there is no return from collapse. Also, from the viewpoint of a human being, even a reversible collapse that involves society as a whole tends to last much longer than a human lifetime. So, as far as we are concerned, collapse is irreversible if we are caught in it, and it is something that we don't like, clearly. So we set up things like ASPO and TOD to see if we can convince politicians to do something to avoid collapse. Whether we'll succeed is another matter, but let's not go into that now.

The dynamic fall of the Roman Empire

Now we know that we should expect to see these bell curves in the behavior of a complex civilization or an empire. So, we can try to take a look at the Roman Empire from this perspective and see if it agrees with an interpretation based on system dynamics. First of all, let me propose a simplified model based on the same scheme that Magne Myrtveit proposed for our world as described in "The Limits to Growth".

Please do not take it as anything more than a sketch, but it may help us understand the mechanisms that led the Empire to collapse.

Now, let me try to explain how this scheme could work. We know that the Roman Empire was based mainly on two kinds of resources: military and agricultural. I put the image of a legionnaire for "capital resources" because legions can be seen as the capital of the Roman Empire; military capital. This capital, the legions, was built on a natural resource that was mainly gold. The legions didn't mine gold; they took it from the people who had mined it (or had stolen it from somebody else).

This feedback between military capital and gold is a point that is very well described by Tainter in his book. You can read how military adventures played a fundamental role in the growth of the empire and, earlier on, of the Roman Republic. It was a clear case of positive feedback. The Empire would defeat a nearby kingdom, rob it of gold and take part of the population as slaves. Gold could be used to pay for more legions, to go on and conquer more lands. Positive feedback: the more legions you have, the more gold you can rob; the more gold you have, the more legions you can create. And so on.

One of the inventions of the Romans was their capability of transforming gold into legions and legions into gold - as I said, it is a very clear case of feedback. Still today we use the word "soldier", which comes from Latin and means "hired" or "salaried". It was not only gold; legionnaires were also paid in silver, but the concept remains the same. Legions paid the salaries of the legionnaires using the profit they made from looting the conquered lands.

But, as conquest proceeded, the Romans soon found themselves without easy lands to conquer. It was a problem of EROEI - energy return on energy invested; in this case, GROGI (gold return on gold invested). After the easy conquests of the 1st century B.C. - Gallia, for instance - things became difficult. The energy yield of conquering new lands went down. To the north-east, the Germans were too poor - and also warlike: conquering them was not only difficult, it didn't generate a profit. To the east, the Parthians were rich, but militarily powerful. To the west there was the Atlantic Ocean; the north was too cold, the south too dry. Negative feedback, you see?

With the legions no longer bringing in gold, gold disappeared from the Empire for various reasons. In part, it went to buy luxury items that the Empire couldn't manufacture within its borders, silk for instance. In part, it disappeared because barbarian chieftains were paid not to invade the Empire, or to fight alongside the Romans. There were other reasons but, in any case, gold was a dwindling resource for the Roman Empire - a little like our "black gold", petroleum. During the good times, the legions would bring back from foreign conquests more gold than was spent but, with time, the balance had become negative.

Of course, military conquest was not the only source of gold for Rome. As I said, we are describing a complex system, and complex systems have many facets. The Romans had gold mines in Africa and in Spain, and they also had silver mines in Spain. There are no mines in the scheme; we could add them, that wouldn't be a problem. But the problem here is that we don't have enough data to understand exactly the role of mines in the economy of the Roman Empire. We know, for instance, that silver mining in Spain declined with the decline of the empire. Did the decline of mining cause the collapse of the empire? Personally, I think not. For one thing, the Romans had started their expansion well before they conquered Spain and these mines. At the time of the wars with Carthage, it was the Carthaginians who held Spain and, I imagine, the silver mines. But this silver didn't help them much, since they lost the war and were wiped out by the Romans. So, we should be wary of single explanations for complex events. We can only say that mines are subject to the same kind of negative feedback that affects military conquest: after you exploit the easy ores (or the easy lands to conquer), you are left with difficult ores (or lands) that don't yield the same profit. It is negative feedback, again.

Then, there was agriculture. It was surely an important economic activity of the Roman Empire, as you can read, again, in Tainter's book. Agriculture is also subject to positive and negative feedbacks, as you can see in the scheme. With good agriculture, the population increases. With more population, you can have more farmers. In the case of the Roman Empire, as population grows, you can also have more legions, which will bring home slaves who can be put to work in the fields. But agriculture also has a negative feedback, and that is erosion.

You can see erosion in the scheme, listed as "pollution". It affects agriculture negatively; it reduces population and sets everything backwards: negative feedback, again. The more you try to force agriculture to support a large population (including the legions), the more strain you put on the fertile soil. Fertile soil is a non-renewable resource; it takes centuries for fertile soil to reform after it has been lost. So, erosion destroys agriculture, population falls, you have a smaller number of legions and, in the end, you are invaded by barbarians. This is another negative feedback loop that is related to the fall of the Roman Empire.

The question of agriculture during Roman times is rather complex, and the data we have are contradictory, at least in some respects. There is clear evidence of erosion and deforestation, especially during the expansion period of the early Roman Empire. Then, during and after the third century, we have famines and plagues. These two things are related; plagues are often the result of poor nutrition. At the same time, we have evidence that the Romans of the late empire were unable to exploit in full the land they had. It is reported that plenty of land was left uncultivated, apparently for lack of manpower. We also know that forests were returning by the 4th century A.D. So, various elements of the dynamic scheme connect with each other. Apparently, the emphasis on military power took resources away from agriculture and generated yet another negative feedback: not enough people (or slaves) to cultivate the land. But it may also be that some areas of land were not cultivated because erosion had ruined them.

So, I have proposed a scheme to you and I have described how it could work. But does it work? We should now compare the scheme with real data; fit the data to a model. The problem is that we don't have enough data to fit - we probably never will. So, I didn't try fitting anything; but I think I can show you some sets of data that are an impressive indication that there is something true in this dynamic model.

First of all, if the decline and fall of the Roman Empire was a case of overexploitation of resources, we should expect to see bell curves for industrial and agricultural production, for population, and for other parameters. As I said, the historical data are scant, but we have archaeological data. So, let me show a plot that summarizes several industrial and agricultural indicators, together with a graph that shows how the extension of the Empire varied in time. It is taken from "In search of Roman economic growth", by W. Scheidel, 2007. The other graph is taken from Tainter's book.

The upper graph is especially impressive. There was a "peak empire", at least in terms of production and agriculture, somewhere around the mid-1st century. Afterward, there was a clear decline - it was not just a political change; it was also a real reduction in complexity, as Tainter defines collapse. The Roman Empire really collapsed in the mid-3rd century. It had a sort of "Hubbert peak" at that time.

The other parameter shown in the figure, the extension of the empire, also shows an approximately bell shaped curve. The Empire continued to exist as a political entity even after it had been reduced to an empty shell in economic terms. If the extension of the empire is proportional to the "capital" accumulated, then this relationship makes sense in terms of the dynamic model that we saw before: capital, as we saw, should peak after production. This is a bit stretched as an interpretation, I admit. But at least we see a bell shaped curve here as well.

There is more. Do you remember the curves that the dynamic model calculates for the capital/resources relationship? You expect the production curve to peak before the capital curve. Now, I proposed that this capital/resources relationship exists between the Roman army and the gold that it looted. So, do we have data that show this relationship? Yes, we do, although only approximately. Let's first see the data for gold. We don't have data for the amount of gold circulating within the Empire, but Tainter shows us the data for the devaluation of the Roman silver coin, which we would expect to follow the same path. Here are the data (the figure is taken from this site):

Now, the amount of precious metal within a denarius is not a precise measurement of the total gold or silver in the Empire, but it is at least an indication that this amount was going down after the first century A.D. And, since the Romans had started poor, earlier on there must have been a peak at some time - "peak gold", probably in the 1st century A.D.

As for the size of the Roman army, we have this figure from Wikipedia. As you see, the data are uncertain but, if we consider the Western Empire only, there was a peak around the 3rd century A.D.

So, you see? Army and gold show the relationship that we expect to see between capital and resource: they both peak, but gold peaks before the army. The Romans kept increasing the size of their army even after the economic returns they got from military activities went down - indeed, may have become negative. It is exactly the same behavior as that of the whalers of the 19th century, who kept increasing the size of the whaling fleet even when it was clear that there weren't enough whales left to justify it. I think this is an impressive result. At least, it convinced me.
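
For what it is worth, this ordering of the peaks is built into the two-stock sketch shown earlier. Taking its toy equations at face value (with all their assumed constants), production is P = k1*C*R, so

dP/dt = k1*(R*dC/dt + C*dR/dt) = k1*C*R*(k2*R - k3 - k1*C)

which vanishes when k2*R - k3 = k1*C. Since k1*C is positive there, dC/dt = C*(k2*R - k3) is still positive at that moment: production peaks while capital is still growing, and capital can only peak later - the same pattern as gold and the army.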

There is more if we look at the Roman population curve, although for this we must rely on very uncertain data (see, e.g., the paper by Walter Scheidel). I can't show you a graph here; the data are too uncertain. But it seems that, in any case, there was a population peak in the Roman Empire around the mid-2nd century. If this is the case, the Roman population peak arrives after the production peak - just as in the "standard run" calculations for the World3 model.

So, I think we have enough data here to prove the validity of the model - at least in qualitative terms. Maybe somebody should collect good data, archaeological and historical, and make a complete dynamic model of the Roman Empire. That would be very interesting, but it is beyond my means for now. Anyway, even from these qualitative data we should be able to understand why the Empire was in trouble. One of the main causes of the trouble was that it had this big military apparatus, the legions, which needed to be paid and didn't bring in any profit. It was the start of a hemorrhage of gold that couldn't be reversed. In addition, the Empire bled itself even more by building an extensive system of fortifications - the limes - which had to be maintained and manned, besides being expensive in themselves.

The story of the fortifications is a good example of what I said earlier: the attempt of a complex system to maintain homeostasis. The Romans must have understood that legions were too expensive if you had to keep so many of them to keep the borders safe. So, they built these walls. I imagine that the walls were built by slaves, and a slave surely cost less than a legionnaire. Slaves, however, were no good as fighters - I suppose that if you gave a sword to a slave he might think of running away, or of using it against you. You know the story of Spartacus, the leader of a slave revolt in Roman times. I am sure that the Romans didn't want to risk that again. But with walls the Romans had found a way to replace legionnaires with slaves: you needed fewer legionnaires to defend a fortification than to defend an open field. That was a way to save money, to keep homeostasis. But it wasn't enough - obviously. The Romans still needed to pay for the legions and - as a further disadvantage - the walls were a rigid system of defense that couldn't be changed. The Romans were forced to man the walls along their whole length, and that must have been awfully expensive. The Empire had locked itself in a cage from which it would never be able to escape. Negative feedback kills.

Military expenses were not the only cause of the fall. With erosion gnawing at agricultural yields and mine productivity going down, we should not be surprised that the empire collapsed. It simply couldn't do otherwise. So, you see that the collapse of the Roman Empire was a complex phenomenon in which different negative factors reinforced each other. It was a cascade of negative feedbacks, not a single one, that brought down the empire. And this shows how closely related to the Romans we are. Surely there are differences: our society is more of a mining society and less of a military-based one. We don't use slaves but, rather, machines. We also have plenty of gadgets that the Romans didn't have. But, in the end, the interactions of the various elements of our economy are not that different. What brought down the Romans, and eventually will bring us down, is the overexploitation of resources. If the Romans had found a way to use their resources - agriculture, for instance - without destroying them through erosion, their society could have lasted longer. But they never found an equilibrium point; they went down always using a bit too much of what they had.

Avoiding Collapse

From our viewpoint, we can see what the history of the Roman Empire was. But, from inside, as we saw, it wasn't clear at all. But let's assume that someone had it clear, already at the time of Marcus Aurelius. I said that there might have been something like an ASPE, an "association for the study of peak empire". Or let's imagine that a wise man, a Druid from foggy Britannia, an ancestor of Merlin the wise, was smart enough to figure out what was going on. You don't really need computers to make dynamic models; maybe this druid made one using wooden cogs and wheels, the whole thing powered by slaves. So, let's say that this druid understood that the troubles of the Empire were caused by a combination of negative feedbacks, and that these feedbacks came from the cost of the army and of the bureaucracy, the overexploitation of the fertile soil, and the fact that Rome had exhausted the "easy" targets for conquest.

Now, it is a tradition of Druids (and also of ASPO) to alert kings and rulers to the dangers ahead. After all, Merlin did that for King Arthur, and we may imagine that the druid we are thinking of felt it was his duty to do the same with Emperor Marcus Aurelius. So, he decides to go to Rome and speak to the Emperor. Suppose you were that druid; what would you say to the Emperor?

Good question, right? I have asked it of myself many times. We could think of many ways of answering it. For instance, if gold is running out of the Empire's coffers, why not suggest that the Emperor mount a naval expedition to the Americas? It is what Columbus would do more than a millennium later, and the result was the Spanish Empire - also based on gold, and it didn't last long either. Maybe the Romans could have done something like that. But they didn't have the right technology to cross the oceans and, at the time of Marcus Aurelius, they had run out of the resources needed to develop it. So, they had to remain in Europe and come to terms with the limits of the area they occupied. The Empire had to bring its economy back within these limits. So, there is only one thing that you, as the wise Druid from Britannia, can tell the Emperor: you have to return within the limits that the Empire's economy can sustain.

So you walk to Rome - quite a long walk from Eburacum, in Britannia; a place that today we call "York". You are preceded by your fame as a wise man, and so the Emperor receives you in his palace. You face him, and you tell him what you have found:

"Emperor, the empire is doomed. If you don't do something now, it will collapse in a few decades"

The Emperor is perplexed, but he is a patient man. He is a philosopher after all. So he won't have your head chopped off right away, as other emperors would, but he asks you, "But why, wise druid, do you say that?"

"Emperor, " you say, "you are spending too much money for legions and fortifications. The gold accumulated in centuries of conquests is fast disappearing and you can't pay enough legionnaires to defend the borders. In addition, you are putting too much strain on agriculture: the fertile soil is being eroded and lost. Soon, there won't be enough food for the Romans. And, finally, you are oppressing people with too much bureaucracy, which is also too expensive."

Again, the Emperor considers having your head chopped off, but he doesn't order that. You have been very lucky in hitting on a philosopher-emperor. So he asks you, "Wise druid, there may be some truth in what you say, but what should I do?"

"Emperor, first you need to plant trees. the land needs rest. In time, trees will reform the fertile soil."
"But, druid, if we plant trees, we won't have enough food for the people."
"Nobody will starve if the patricians renounce to some of their luxuries!"
"Well, Druid, I see your point but it won't be easy....."
"And you must reduce the number of legions and abandon the walls!"
"But, but.... Druid, if we do that, the barbarians will invade us....."
"It is better now than later. Now you can still keep enough troops to defend the cities. Later on, it will be impossible. It is sustainable defense."
"Sustainable?"
"Yes, it means defense that you can afford. You need to turn the legions into city militias and..."
"And...?"
"You must spend less for the Imperial Bureaucracy. The Imperial taxes are too heavy! You must work together with the people, not oppress them! Plant trees, disband the army, work together!"

Now, Emperor Marcus Aurelius seriously considers whether it would be appropriate to have your head chopped off, after all. Then, since he is a good man, he sends you back to Eburacum under heavy military escort, with strict orders that you should never come to Rome again.

This is a little story about something that never happened, but it closely mirrors what happened to the modern druids who were the authors of "The Limits to Growth". They tried to tell the world's rulers of their time something not unlike what our fictional druid tried to tell Emperor Marcus Aurelius. The heads of the authors of "The Limits to Growth" weren't chopped off, but they were surely "academically decapitated", so to speak. They were completely ignored. Not just ignored: ridiculed and vituperated. It is not easy to be a druid.

So, here we find another similarity between our times and the Roman ones. We are subject to the "fish in the water" curse: we don't understand that we are surrounded by water, and we don't want to be told that water exists.

As things stand, we seem to be blithely following the same path that the Roman Empire followed. Our leaders are unable to understand complex systems and continue to implement solutions that worsen the problem. As the wise druid was trying to tell Marcus Aurelius, building walls to keep the barbarians out was a waste of resources that was worse than useless. But I can see the politicians of the time running on a platform that said, "Keep the barbarians out! More walls to defend the empire." It is the same for us. Tell a politician that we are in trouble with crude oil, and he or she will immediately say "drill deeper!" or "drill, baby, drill!" Negative feedback kills.

But I would like to point out something to you: let's go back to what our fictional druid was telling Emperor Aurelius. He had this slogan: "Plant trees, disband the army and work together". I had invented it for a post that I wrote on the collapse of Tuscan society in the 16th century; that is another story, but one that shows how all societies follow similar paths. Anyway, can you see what kind of world the Druid was proposing to the Emperor? Think about that for a moment: a world of walled cities defended by city militias, no central authority or a weak one, an economy based on agriculture.

Do you see it? Sure: it is the Middle Ages! Think about that for a moment and you'll see that you could define the Middle Ages as a solution to the problems of the Roman Empire!

So, our Druid had seen the future and was describing it to Emperor Aurelius. He had seen the solution to the problems of the Empire: the Middle Ages. It was where the Empire was going, and where it could not avoid going. What the Druid was proposing was to go there in a controlled way. Ease the transition, don't fight it! If you know where you are going, you can travel in style and comfort. If you don't, well, it will be a rough ride.

We may imagine a hypothetical "driven transition" in which the government of the Roman Empire at the time of Marcus Aurelius did exactly that: abandon the walls, reduce the number of legions and transform them into city militias, reduce bureaucracy and imperial expenses, delocalize authority, reduce the strain on agriculture - reforest the land. The transition would not have been traumatic and would have involved a smaller loss of complexity: books, skills, works of art and much more could have been saved and passed on to future generations.

All that is, of course, pure fantasy. Even for a Roman Emperor, disbanding the legions couldn't be easy. After all, the name "Emperor" comes from the Latin word "imperator", which simply means "commander". The Roman Emperor was a military commander, and the way to remain Emperor was to please the legions that the Emperor commanded. A Roman Emperor who threatened to disband the legions wouldn't have been very popular and, most likely, would have been a short-lived Emperor. So, Emperors couldn't have done much even if they had understood system dynamics. In practice, they spent most of their time trying to reinforce the army by keeping as many legions as they could. Emperors, and the whole Roman world, fought as hard as they could to keep the status quo ante, to keep things as they had always been. After the 3rd century crisis, Emperor Diocletian resurrected the Empire by transforming it into something that reminds us of the Soviet Union at the time of Brezhnev: an oppressive dictatorship complete with a suffocating bureaucracy, heavy taxes for the citizens, and a heavy military apparatus. It was such a burden for the Empire that it destroyed it utterly in little more than a century.

Our Druids may be better than those of the times of the Roman Empire; at least they have digital computers. But our leaders are no better at understanding complex systems than the military commanders who ruled the Roman Empire. Even if our leaders were better, they would face the same problems: there are no structures that can gently lead society to where it is going. We have only structures that are there to keep society where it is - no matter how difficult and uncomfortable it is to stay there. It is exactly what Tainter says: we react to problems by building structures that are more and more complex and that, in the end, produce a negative return. That's why societies collapse.

So, all our efforts go to keeping the status quo ante. For this reason we are so desperately looking for something that can replace crude oil and leave everything else the same. It has to be something that is liquid, that burns and, if possible, even smells bad. Drill more, drill deeper, boil tar sands, make biofuels even if people will starve. We do everything we can to keep things as they are.

And, yet, we are going where the laws of physics are taking us. A world with less crude oil, or with no crude oil at all, cannot be the same world we are used to, but it doesn't need to be the Middle Ages again. If we manage to deploy new sources of energy, renewable or nuclear, fast enough to replace crude oil and the other fossil fuels, we can imagine that the transition would not involve a big loss of complexity, perhaps none at all. More likely, though, a reduced flux of energy and natural resources into the economic system will entail the kind of collapse described in the simulations of "The Limits to Growth." We can't avoid going where the laws of physics are taking us.

Conclusion: showdown at Teutoburg

Two thousand years ago, three Roman legions were annihilated in the woods of Teutoburg by a coalition of tribes from the region the Romans called "Germania". Today, after so many years, the woods of the region are quiet and peaceful places.

It is hard for us to imagine what the three days of folly of the battle of Teutoburg must have been like. The legions were surprised by the ambush of the Germans; in their desperate attempt to retreat, under heavy rain and strong winds in the woods, they were never able to form a line and fight as they had been trained to. One by one, almost all of them were killed; their general, Varus, committed suicide. The Germans left the bodies rotting in the woods as a sort of sacred memorial to the battle. The ultimate disgrace for the legions was the loss of their sacred standards. It was such a disaster that it gave rise to the legend that Emperor Augustus would wander at night in his palace screaming "Varus, give me back my legions!"

I think we could pause for a moment and remember these men, Germans and Romans, who fought so hard and died. We have seen so many similarities between our world and the Roman one that we may feel something of what these men felt as well. Why did they fight, why did they die? I think that many of them fought because they were paid to fight. Others because their commander or their chieftain told them to. But, I am sure, a good number of them had some idea that they were fighting for (or against) the abstract concept that was the Roman Empire. Some of them must have felt they stood for defending civilization against barbarians, others for defending their land against evil invaders.

Two millennia after the battle of Teutoburg, we can see how useless that confrontation in the rain-soaked woods was. A few years later, the Roman general Germanicus, nephew of Emperor Tiberius, went back to Teutoburg with no fewer than eight legions. He defeated the Germans, recovered the standards of the defeated legions, and buried the bodies of the Roman dead. Arminius, the German leader who had defeated Varus, suffered a great loss of prestige and, eventually, was killed by his own people. But all that changed nothing. The Roman Empire had exhausted its resources and couldn't expand any more. Germanicus could no more conquer Germany than Varus could bring back his legions from the realm of the dead.

Civilizations and empires, in the end, are just ripples in the ocean of time. They come and go, leaving little behind except carved stones proclaiming their eternal greatness. But, from the human viewpoint, empires are vast and long-standing and, for some of us, worth fighting for or against. But those who fought at Teutoburg couldn't change the course of history, nor can we. All that we can say - today as at the time of the battle of Teutoburg - is that we are heading towards a future world that we can only dimly perceive. If we could see clearly where we are going, maybe we wouldn't like to go there; but we are going anyway. In the end, perhaps it was Emperor Marcus Aurelius who saw the future most clearly:

Nature which governs the whole will soon change all things which thou seest, and out of their substance will make other things, and again other things from the substance of them, in order that the world may be ever new.

Marcus Aurelius Verus - "Meditations" ca. 167 A.D.

The Collector’s Fallacy


There’s a tendency in all of us to gather useful stuff and feel good about it. To collect is a reward in itself. As knowledge workers, we’re inclined to look for the next groundbreaking thought, for intellectual stimulation: we pile up promising books and articles, and we store half the internet as bookmarks, just so we get the feeling of being on the cutting edge.

Let's call this "The Collector's Fallacy". Why a fallacy? Because 'to know about something' isn't the same as 'knowing something'. Just knowing about a thing is superficial at best, since knowing about something merely means being certain of its existence, nothing more. Ultimately, this fake knowledge hinders us on our road to true excellence. Until we merge the contents, the information, ideas, and thoughts of other people, into our own knowledge, we haven't really learned a thing. We don't change ourselves if we don't learn, so merely filing things away doesn't lead us anywhere.

Collections make us drown in liabilities. Photo credit: Kris Krug, cc

Preparing reading material alone doesn't get you anywhere. It's quite common for students to prepare lots and lots of photocopies of the texts they have to read — and stop right there. The copies become an alibi, says Umberto Eco: "there's a lot someone doesn't know anything about precisely because she photocopied a text; she has given herself in to the illusion of having read the text already."[162, 1] (My translation.)

The worst we can do is pile up copies until the stack grows intimidatingly high and becomes unmanageable. After that, it will be ignored in its entirety. Because taking a photocopy of a text is so much faster than actually reading it and learning what's inside, we tend to amass days' worth of deliberate reading in about half an hour standing next to the copier. We have to take care not to copy more texts than we can handle.

The same holds true when it comes to managing bookmarks. We stumble upon an interesting web page and don’t want to lose the information, thus we keep it as a bookmark. The digital pile of bookmarks isn’t any different from a tangible pile of papers we consider worth knowing. Here, too, kept isn’t read, though.

Why do we hoard stuff and clutter our lives like that?

Photocopying is potentially addictive. That's because we are rewarded with sheets of paper for pressing the 'copy' button, and we're rewarded promptly. The stack grows quickly when we use modern high-volume copiers that spit out printed pages rapidly. Moreover, accumulating photocopies is tangible: when we can see a stack growing and feel its weight, the feeling of reward is even greater. When photocopying, we condition ourselves just as Skinner conditioned pigeons:

The pigeon's behavior is reinforced by food. Pressing the 'copy' button is immediately rewarded with copied paper. These reinforcements are satisfying. From there stems the illusion that we have done something meaningful: "Look how big my pile of paper is!"

Just as photocopying is self-rewarding and addictive, I argue that we fall into the same trap of false comfort when we bookmark web pages and sort the bookmarks into folders or tagged categories. Bookmarking a web page is satisfying because it rids us of the fear of losing access to the information. I go into detail in another post.

What can we do about the addictive behavior of collecting?

Research, Read, Assimilate; rinse and repeat

Collecting, just as Eco warned us, does not magically increase our knowledge. We have to read a text effectively to assimilate its ideas and learn from it. Reading effectively means the text changes our knowledge permanently, and that only happens when we learn from it and begin to work with the ideas it presents. We need to extract what's inside and write things down.

If we read without taking notes, our knowledge increases for a short time only. Once we forget what we knew, having read the text becomes worthless. You can bet that you'll forget the text's information one day; it's guaranteed. Thus, reading without taking notes is just a waste of time in the long run. It's as if the reading never happened.

That's the reason we find ourselves picking up a reference text again and again when we work on our writing projects. We read, take the information from the text, hold it in short-term memory, get back to our own draft and pour the information in. We transfer information from one place to another but fail to increase our knowledge on the way. That's the usual, inefficient way.

It's only rational to take notes when you read a text, because a system of notes can become an extension of your mind and memory. Note-taking integrates the text's information into our own knowledge. Increasing one's knowledge is the only sustainable way of working with information. Instead of shoveling information from the source text into your own project with the help of your working memory, you can integrate it into your knowledge system once and have it available forever. We can expand our knowledge permanently only by storing notes permanently.

Taking notes thoroughly means you can rely on your notes alone and rarely need to look up a detail in the original text.

I rarely consult secondary sources again. If I have to do so, it means that I did not do the job right the first time.
MK, of “Taking Note Now”

This is the first step to conquering the Collector's Fallacy: realizing that having a text at hand does nothing to increase our knowledge. We have to work with it instead. Reading alone won't suffice: we have to take notes, too, to create real, sustainable knowledge.

Especially when we start to research something new, Eco recommends we read and highlight texts right after we make copies.[1] If we train ourselves to process photocopied texts promptly, we get a feeling for how much we can really handle.

Shorter cycles of research, reading, and knowledge assimilation are better than long ones. With every full cycle from research to knowledge assimilation, we learn more about the topic. When we know more, our decisions are more informed, and thus our research gets more efficient. If, on the other hand, we take home a big pile of material to read and process, some of it will turn out to be useless once we have finished parts of the pile. To minimize waste, both of time and of paper, it's better to immerse oneself step by step and learn along the way instead of making big up-front decisions based on guesswork.

The habit of keeping the cycle of research, reading, and knowledge assimilation short is a powerful way to circumvent our innate addiction to gather piles of stuff.

Update 2014–07–17: More recently, I wrote about this topic and included a more elaborate schedule to form a counter-habit. It's called the Knowledge Cycle.

To form a habit, you have to set yourself actionable limits and keep score.

  • To get started, do research for one hour and not a second more. Then process the collected material until the stack is empty.
  • Then do a quick review of the cycle: how well did it go? Did you learn something new? Did you find too much or too little in the amount of time?
  • Afterwards, change the time limit a bit if you think it wasn’t appropriate.

Repeat the cycle and keep track of your perceived productivity until you establish a feedback-supported routine which suits your needs.

Up next, we look at how to circumvent the Collector’s Fallacy and how to stay organized when we read online and process our RSS subscription queues.

Want to stop collecting with no end and start to get productive? Try the routine and tell me how it works for you. What time cap did you end up with?

GIMP Convolution Matrix


Here is a mathematician's domain. Most filters use a convolution matrix. With the Convolution Matrix filter, if the fancy takes you, you can build a custom filter.

What is a convolution matrix? It's possible to get a rough idea of it without using mathematical tools that only a few know. Convolution is the treatment of a matrix by another matrix, which is called the kernel.

The Convolution Matrix filter uses a first matrix, which is the image to be treated. The image is a two-dimensional collection of pixels in rectangular coordinates. The kernel used depends on the effect you want.

GIMP uses 5x5 or 3x3 matrices. We will consider only 3x3 matrices; they are the most used, and they are enough for all the effects you want. If all the border values of a kernel are set to zero, the system treats it as a 3x3 matrix.

The filter studies successively every pixel of the image. For each of them, which we will call the initial pixel, it multiplies the value of this pixel and the values of the 8 surrounding pixels by the corresponding kernel value. It then adds up the results, and the initial pixel is set to the final sum.

A simple example:

On the left is the image matrix: each pixel is marked with its value. The initial pixel has a red border. The kernel action area has a green border. In the middle is the kernel and, on the right, the convolution result.

Here is what happened: the filter read successively, from left to right and from top to bottom, all the pixels of the kernel action area. It multiplied the value of each of them by the corresponding kernel value and added the results. The initial pixel became 42: (40*0)+(42*1)+(46*0) + (46*0)+(50*0)+(55*0) + (52*0)+(56*0)+(58*0) = 42. (The filter doesn't work on the image itself but on a copy.) As a graphical result, the initial pixel moved one pixel downwards.
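To make the procedure concrete, here is a minimal Python sketch of the same 3x3 convolution step. The function name and the use of plain nested lists are illustrative assumptions, not GIMP's actual implementation:

def convolve3x3(image, kernel):
    """Apply a 3x3 convolution kernel to a grayscale image.

    `image` is a list of rows of pixel values; `kernel` is a 3x3 list
    of weights. Border pixels are left unchanged for brevity, and the
    result is computed from a copy, just as the filter does.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # work on a copy of the image
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = 0
            for ky in range(3):          # walk the 3x3 neighborhood
                for kx in range(3):
                    total += image[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            out[y][x] = min(255, max(0, int(total)))  # clamp to a valid pixel value
    return out

# The example from the text: a kernel whose only 1 sits above the center
# makes each pixel take the value of the pixel above it, shifting the
# image one pixel downwards.
image = [[40, 42, 46],
         [46, 50, 55],
         [52, 56, 58]]
kernel = [[0, 1, 0],
          [0, 0, 0],
          [0, 0, 0]]
print(convolve3x3(image, kernel)[1][1])  # -> 42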

The design of kernels is based on higher mathematics. You can find ready-made kernels on the web. Here are a few examples:



Figure 17.153. Edge enhance


Figure 17.154. Edge detect



NIH Courted Alcohol Industry to Fund Study on Benefits of Moderate Drinking


“This must have seemed like a dream come true for industry. Of course they would pay for it,” he said. “They’re admitting the trial is designed to provide a justification for moderate drinking. That’s not objective science.”

Asked about the meetings, Dr. Mukamal did not deny he had participated, but said the slides did not convey the full complexity of his presentation.

Last year, Dr. Mukamal told The New York Times that he had had “literally no contact with anyone in the alcohol industry in the planning of this.” He defended that statement saying the presentations took place long before the N.I.H. announced the funding grant in late 2015.

The description of the trial that he gave at the meetings was just a “boilerplate,” he said.

“My job there wasn’t to raise money,” Dr. Mukamal added. “It was to educate.”

A Vexing Question

The N.I.H. awards most research funds on a competitive basis, and grant applications undergo a two-tier review of a project's scientific merit and public import, as well as its scientific integrity.

At a cost of $100 million, the new trial aims to resolve a persistent medical conundrum. Though excessive drinking is harmful and problem drinking is on the rise in the United States, many observational studies have found that moderate drinkers outlive abstainers and have less heart disease.

These studies don’t prove that moderate drinking is the reason these people live longer, however. The new trial, called the Moderate Alcohol and Cardiovascular Health Trial (M.A.C.H.), is intended to answer that question.

In January, Dr. Mukamal and his colleagues started recruiting volunteers ages 50 and older who are at high risk for heart disease; eventually there are to be 7,800 participants at 16 sites worldwide. Half will be told to abstain from alcohol. The rest, including both men and women, will be told to have one serving of alcohol a day.

No other long-term trial has ever asked participants to drink, much less drink every day. Scientists will track the two groups for six years on average to see whether daily drinkers have fewer heart attacks and strokes, and lower odds of diabetes and death.

The research will attempt to track the risks of drinking, but critics say it may not fully capture the harms. For one thing, the study will be too short to detect an increase in cancers linked to alcohol consumption, which may take decades to develop.

In addition, two servings a day has long been considered moderate drinking for men. Lowering the threshold may reduce falls, car accidents and alcohol abuse among the subjects, but one drink daily also may not reflect real-life habits.

Moreover, many people whose health might be compromised by light drinking — anyone with a history of addiction, psychiatric, liver or kidney problems, certain cancers or a family history of breast cancer — will not be allowed to participate. People who have never drunk alcohol also are excluded.

“You’re picking off the people who are most likely to have the harms,” said Dr. Richard Saitz, chair of the Department of Community Health Sciences at Boston University School of Public Health, after reviewing the parameters of the study.

But if the study finds even a modest cardiovascular benefit to light drinking, he added, “You can be sure that the way it will be understood by the general public is that this applies to everybody.”

Despite its shortcomings, M.A.C.H. may well be the last word on the subject of moderate drinking, since trials like these are both expensive and logistically complicated to carry out.

Photo: No other long-term trial has ever asked participants to drink, much less drink every day, or encouraged participants to drink whatever they like. Credit: Edu Bayer for The New York Times

Dr. Mukamal, who has published nearly a hundred scientific papers on the relationship between moderate drinking and cardiovascular disease, emphasized in an interview that he was committed to reporting the results accurately, based on the data.

“If anyone has any doubt whatsoever that our intent is to provide the most accurate and precise description of our findings, they are sorely mistaken,” he said.

‘They’d Be Happy’

Private contributions for the study from the alcohol industry are being channeled through the Foundation for the N.I.H., a nongovernmental entity that raises money for N.I.H. research and manages the partnerships established to direct private donations.

In this case, the industry donors were expected to be held at “arm’s length” and not to play any role in the research or to communicate with the scientists, said Julie Wolf-Rodda, director of development for the foundation.

George Koob, the current director of the National Institute on Alcohol Abuse and Alcoholism, said the foundation constitutes an impregnable “firewall” that prevents donors from interfering with research.

In an interview, he said he was unaware of the meetings between N.I.H. officials and industry officials to rally support for the study, most of which took place before he took the helm of the institute in late January 2014. He denied that the institute had solicited funding.

Raising his voice during an interview, Dr. Koob insisted the industry’s sponsorship would not compromise the study and said that the study protocol went through several rigorous reviews. “We do things right at N.I.H.,” he said.

But his predecessor, Dr. Ken Warren, who helped organize and participated in some of the meetings as acting director of the alcohol abuse institute, acknowledged in an interview that the scientists’ presentations were meant to both “demonstrate to the industry that the study was feasible” and “to determine if they had interest in taking part” as funders.

Questions about moderate drinking and heart disease are important public health questions, Dr. Warren said, and a government trial would be more credible than research “directly funded by an entity such as the alcoholic beverage industry, which could be considered biased.”

Most of the cost of this government trial, however, is being picked up by five of the world’s largest alcoholic beverage makers — Anheuser-Busch InBev, Heineken, Diageo, Pernod Ricard and Carlsberg.

In an interview, Dr. Lorraine Gunzerath, a retired senior adviser to Dr. Warren, took credit for coming up with the idea of reaching out to the alcohol industry for funding.

Clinical trials like this one don’t fall neatly under the mission of the alcohol abuse institute, she said. “We were supposed to be preventing alcoholism, so to spend that kind of money on research for a possible good use of alcohol was something that would never fly,” she said.

But, referring to the alcohol industry, Dr. Gunzerath said, “If we had a clinical trial, and it was a positive result — which we thought it might be, you sort of think you know where it’s going — they’d be happy.”

All the N.I.H. had to do was “make a business case to the industry that it would be to their benefit, even if they couldn’t actually control the trial’s outcome,” Dr. Gunzerath said.

She arranged for the university scientists to address executives at alcohol industry meetings. “If they didn’t like the research team, they would have said no,” she said.

Photo: The study was designed not to differentiate between types of alcohol, expecting to find that one serving of beer, liquor or wine each day would be beneficial. Credit: Edu Bayer for The New York Times

It was no secret at the time that Dr. Mukamal and his collaborators “already believed that moderate alcohol is a good thing,” she said. He already had published papers suggesting as much.

After the scientists’ presentations, which were provided to The Times by Dr. Gunzerath, she would speak briefly to say that “it would be nice if we could get money from the industry,” but explain that funds would have to flow through the foundation.

On Sept. 30, 2013, Dr. Gunzerath sent an email headed “URGENT! Response needed ASAP!” to Dr. Mukamal, inviting him to Philadelphia to address the annual meeting of the Worldwide Brewing Alliance, to get the brewers’ “buy in” and “extra overall funding potential as well.”

“I can make it,” Dr. Mukamal responded. “I could do any version or part or the whole day, night before or drive down that day etc. whatever works best for you.”

Dr. Gunzerath and Dr. Warren also arranged meetings between the scientists and industry representatives at the Distilled Spirits Council’s Washington headquarters on Nov. 21, 2013, and Jan. 28, 2014.

A spokesman for the Distilled Spirits Council said that after the N.I.A.A.A. approached the trade group in 2013, the council “provided them with a forum to present the initial outline of their study.”

Representatives of Anheuser-Busch InBev, Heineken and Diageo confirmed that the scientists’ presentations played a role in the companies’ decisions to underwrite the trial.

“When Heineken was invited by the N.I.H. to partially fund the N.I.A.A.A. trial for a duration of ten years, as part of our decision making process, the scientists presented the research project to us so we would have a sound understanding of the trial,” Michael Fuchs, a company spokesman, said in an email.

Growing Trend

These days, it is not unusual for the N.I.H. to look to business to participate in public-private partnerships to fund medical research. When an N.I.H. center is seeking outside funding from the private sector, it starts by submitting a “request for collaboration” to a steering committee of the N.I.H. and the foundation based in the office of the director of the N.I.H., Dr. Francis Collins.

For the moderate drinking trial, the alcohol abuse institute signed an agreement with the foundation that said, “Under no circumstances shall N.I.A.A.A. or its representatives communicate directly with any Donor in order to raise funds for the project or to disclose to any Donor any information” about “the name and affiliation of the awardee” or “details and information relevant to the award.”

But by the time the institute submitted the request for outside funding in early 2015, its officials and outside scientists had already met with alcohol industry executives. Representatives of beer and liquor companies had already heard directly from Dr. Mukamal.

The alcohol abuse institute took an extra step to secure Dr. Mukamal’s position as top contender for the grant. While N.I.H. grants are supposed to be awarded on a competitive basis, the institute’s request for outside funding said the award would be restricted to applicants with “unique” resources and backgrounds — and specifically mentioned Dr. Mukamal, who had helped persuade the alcohol industry to fund the research.

Whether scientists studying alcohol should accept money from the industry has long been controversial. Many scientists and policymakers have publicly said that any engagement with the alcohol industry undermines the credibility of the research.

In 2016, a group representing hundreds of scientists and policymakers published a statement saying researchers should never accept direct or indirect industry funding, and that “any form of engagement with the alcohol industry may influence the independence, objectivity, integrity and credibility” of the research.

“We know that industry funding not only affects the results of studies but affects the questions that are asked, how the results are analyzed and what the answers are,” said Dr. Adriane Fugh-Berman, a professor of pharmacology at Georgetown University and director of Pharmed Out, a group that researches drug marketing.

If the health effects of moderate drinking are a priority for the N.I.H., she added, “they should fund it themselves.”


Fukushima nuclear disaster: did the evacuation raise the death toll?


Satoru Yamauchi was working in his soba noodle shop when the Tohoku earthquake struck on March 11 2011. He remembers escaping to high ground, then going home to rescue his dog, making it back in time to see a “white wall” — the tsunami — roaring in from the Pacific.

The destruction was beyond his imagination. But Mr Yamauchi, and his family, survived. Even their home in the town of Naraha was just high enough to escape the water. Then the next day, city hall ordered an evacuation: there was trouble at the Fukushima nuclear plant, and a friend at Tokyo Electric, the plant’s operator, said it might be serious.

The family headed south and spent three days in an evacuation centre. It was desperately cold. Mr Yamauchi was pressed into duty as a cook, even as the rumours surrounding the condition of the reactors grew ever more terrifying. “My children were saying: ‘We don’t want to die from radiation. Let’s go to Tokyo. Let’s go to Tokyo.’”

Satoru Yamauchi was forced out of his home by the imposition of an exclusion zone © Tokuyuki Matsubuchi/FT

So the family moved to the Japanese capital, 200km away, which is where their troubles really began. For the past seven years they have struggled with cramped conditions, money troubles, bullying at school, depression, lack of purpose and the insidious fear of a death sentence from radiation exposure. “Psychologically we were wrecked,” says Mr Yamauchi. “I’m still taking pills for high blood pressure.”

As life slowly returns to normal in Fukushima (visitors to the plant no longer need radiation suits; a face mask is sufficient), it is becoming increasingly clear, say experts, that the evacuation, not the nuclear accident itself, was the most devastating part of the disaster. It exacted a terrible toll in depression, joblessness and alcoholism among the 63,000 people who were displaced beyond the prefecture; of those, only 29,000 have since returned.

There were 2,202 disaster-related deaths in Fukushima, according to the government’s Reconstruction Agency, from evacuation stress, interruption to medical care and suicide; so far, there has not been a single case of cancer linked to radiation from the plant. That is prompting a shocking reassessment among some scholars: that the evacuation was an error. The human cost would have been far smaller had people stayed where they were, they argue. The wider death toll from the quake was 15,895, according to the National Police Agency.

Zero evacuation may be implausible. At the height of the crisis there were fears of much worse contamination. The question is rather whether people should have been kept away for weeks, not years. “With hindsight, we can say the evacuation was a mistake,” says Philip Thomas, a professor of risk management at the University of Bristol and leader of a recent research project on nuclear accidents. “We would have recommended that nobody be evacuated.”

Fukushima prompted a global turn away from nuclear power and correspondingly higher carbon emissions in countries such as Germany and Japan. Yet if much of the suffering was proved to be avoidable, it might change that calculation. The future of nuclear energy, as well as the correct response to other catastrophes that cause evacuation, may rest on learning the right lessons from the disaster.

The nuclear accident, the worst since Chernobyl in 1986, unfolded after the tsunami knocked out power supplies at the Fukushima Daiichi plant. As workers fought desperately amid the rubble and water, three of the reactors lost cooling, leading to hydrogen explosions and the release of nuclear contaminants into the atmosphere. Ultimately, those three suffered meltdowns.

The first evacuation, of those within a 2km radius of the plant, was ordered on the evening of March 11, just hours after the tsunami. The following morning the exclusion zone was expanded to 10km, but with high radiation levels recorded at the site boundary after the first explosion that day, it was further extended to 20km around the plant, taking in the Yamauchis' home in Naraha.

Chart: Disaster-related deaths by age

Photo: Elderly evacuees read a newspaper about the disaster near their homes in Fukushima in March 2011. Credit: AFP

Evacuations took place in an atmosphere of panic and disorganisation. Large buses simply turned up at town halls and people got on with whatever they could carry. The sick and vulnerable suffered most.

“If you compare nursing homes that evacuated with those that didn’t, the death rate was three times higher among those who moved,” says Sae Ochi, a doctor at the Japan agency for medical research and development who has worked in Fukushima. Of the disaster-related deaths, 1,984 were people over the age of 65.

The physical effects on evacuees living in temporary accommodation were acute. People who had previously walked had to drive. Farmers used to the outdoors were cooped up inside. Higher rates of liver dysfunction, diabetes and hypertension were recorded.

“The thing we worry about most is disaster-related suicides,” says Koichi Tanigawa, a professor at Fukushima Medical University. The impact of the disaster on people’s mental health got worse over time, with suicides peaking in 2013, when 23 Fukushima disaster victims took their own lives. “Initially, everyone was really determined, but they got tired and that’s when depression started to increase,” says Dr Ochi.

The human cost:Fumio Okubo

Mieko Okubo with a portrait of her father-in-law, Fumio Okubo © Reuters

The case of Fumio Okubo is a stark example of how the evacuation affected the elderly. Aged 102 at the time of the disaster, he lived 30km inland from the plant, in Iitate.

As people began to leave the area, his day care service shut, trapping him at home. Then at lunchtime on April 11, a month after the disaster, he learnt via television news that a complete evacuation of the village, which lay along the fallout path from the reactors, had been ordered. "I don't want to leave," his daughter-in-law (pictured above with a photograph of Mr Okubo) recalled him saying, according to court filings. "I've lived too long."

That night Mr Okubo hanged himself. Plant operator Tokyo Electric was recently ordered to pay his relatives ¥15.2m ($142,000) in damages.

The result that did not materialise was sickness from radiation. “At present, there are no cases of cancer relating to radiation, and that includes workers at the plant,” says Dr Tanigawa. Among 173 workers exposed to radiation above occupational safety limits, there may eventually be a handful of incidents of cancer, he says. But the maximum dose to Fukushima residents was below those levels. “Statistically speaking, there should be no detectable increase in cancer in the general public.”

Anti-nuclear campaigners point to more than 100 diagnoses of thyroid cancer in Fukushima children. But doctors say radiation cannot be the cause, since the disease typically takes four or five years to develop after exposure, and the cancers were found immediately. Rather, the thyroid cases were a result of screening every child in the prefecture using ultrasensitive equipment.

Detection rates in Fukushima were similar to those found using the same equipment in other Japanese prefectures. “If we go looking for thyroid cancer then we’ll find it through a screening effect,” Dr Tanigawa says.

Avoiding deaths from radiation was the whole point of the evacuation. The crucial question is how sick people would have been had they stayed. Prof Thomas has published calculations using UN radiation data from Fukushima and standard models of how it translates to disease. He found modest risks.

“The sort of dose for even the worst-affected villages was something that was accepted in the nuclear industry 30 years ago,” he says. In the worst-affected towns of Tomioka, Okuma and Futaba he found that evacuees extended their lives by an average of 82, 69 and 49 days respectively, thanks to the radiation they avoided.

In Mr Yamauchi's hometown of Naraha, the loss of life expectancy avoided by evacuating was just a couple of days. In a few places, the figure was negative because people evacuated to areas with higher levels of radiation. Evacuation makes relatively more sense for the young, who are more sensitive to radiation and have more years of life to lose.

But purely based on an economic calculation of cost and benefit, the evacuation was not worth it, says Prof Thomas. The expected compensation bill to evacuees is ¥7.9tn ($74bn). Add in the terrible health consequences of disrupting lives “and it becomes many more times not worth doing”. The lifetime risk of death from a 100 millisievert dose of radiation — more than any resident actually received — is about 0.5 per cent.

In retrospect, the evacuation looks excessive. Less clear is whether those in charge at the time could have acted any other way. Naoto Kan, the prime minister who ordered the evacuations, says his decision was correct. In the terrifying days after the accident, he was presented with nightmare scenarios of massive radioactive contamination requiring an evacuation within a 250km radius of the plant.

“There were 50m people in that area, including the entire population of Tokyo. The capital would have been a ghost town,” he says. “Given this scenario was possible, then basically we had to order an early evacuation.”

Mr Kan was not alone in that decision. Based on its own independent understanding, the US told its citizens to evacuate an even wider area of 50 miles around the stricken plant. The one mistake Mr Kan identifies is not evacuating faster in villages along the fallout path north-west of the reactors. “That was inexcusable to the victims,” he says.

Prof Thomas draws a distinction between evacuation while the disaster was continuing and relocation in its aftermath. He compares it to an evacuation last year below the Oroville Dam in California, where residents were swiftly returned to their homes once the dam was stabilised.

But Dr Ochi wonders if it was possible to keep people in place, even once the nightmare scenarios were averted. “If you look now simply at the amount of radiation then it would have been better not to evacuate,” she says. “[But] people were scared, and it wouldn’t have been possible to get food and fuel to them.”

Instead of second-guessing the decisions taken in Fukushima, she says, it is more important to think about better ways to manage an evacuation in the future. Japan’s new nuclear contingency plans include an evacuation within 5km and orders to shelter in a 30km radius in the event of a similar disaster.

Greenpeace specialists measure radiation levels opposite a school in Namie Town, Fukushima last September © EPA

"Perhaps the most crucial thing is to say — at the time of the evacuation — under what conditions you should return," she says. Safe radiation levels are a matter of dispute among scientists, but people are unlikely to trust a figure set after the accident. Dr Ochi also says it is safe to take time over evacuating the sick because only cumulative radiation exposure is dangerous.

Prof Thomas takes her arguments a step further. “The first thing to realise is that relocation is probably going to be a bad idea,” he says, suggesting that nuclear companies start providing real-time health information on the risks of living around their plants. “This is what your loss of life expectancy is from the current level of contamination,” he says. If people realise it would only be a few days, they can make an informed decision to stay.

“People understand temperature very well,” says Dr Tanigawa. “They need that understanding of radiation.”

What these approaches require, however, is a sophisticated understanding of risk and public willingness to act on it. It is indisputable that nuclear power means some risks. An accident such as Fukushima means some radioactive contamination, and staying in a contaminated area means some long-term increase in the risk of cancer.

As for Mr Yamauchi, he is returning to Fukushima to reopen his noodle shop. “Will we be able to manage there? I don’t know,” he says. The evacuation order for Naraha was lifted in 2015, but the population is still a fraction of what it was. He worries about radiation and is distrustful of the plant’s operator, Tepco, and any official suggestion that the health risks are under control. “There’s absolutely no need for nuclear power,” he says. “With just one mistake, terrible things happen.”

The human cost: Seiichi Kanno

Seiichi Kanno was at home in the city of Minamisoma when the earthquake struck. He lived with his elderly mother, who died shortly after the disaster, and initially ignored the evacuation order.

He spent weeks in the deserted town, patrolling for looters and trying to help abandoned pets. “I could hear all the animals crying,” he says. He doesn’t know if the disaster contributed to his mother’s death, but remembers that the ambulance refused to come to the house.

He is phlegmatic about radiation risks. “I spoke with some of the volunteers and realised radiation wasn’t such a lot to be frightened of. As long as you wore a mask, had long trousers and didn’t eat anything outside you were all right,” he says.

About six weeks after the accident, the exclusion zone was extended and Mr Kanno, a carpenter, moved to an evacuation centre where he lived for months before transferring to temporary housing. He has been employed on the reconstruction effort but worries about the future with so many young people having left.

“Personally, I think it would have been fine just to stay at home,” he says. “I’m already fairly old. Even if there was radiation it wouldn’t make such a difference.”

Letter in response to this article:

If it was my family, I’d want them out of there / From Dr Paul Dorfman, University College London, UK

RedisGraph: A High Performance In-Memory Graph Database as a Redis Module


Abstract

Graph-based data is everywhere nowadays. Facebook, Google, Twitter and Pinterest are only a few of the companies that have realized the power behind relationship data and are utilizing it to the fullest; as a direct result, we see a rise in both the interest in and the variety of graph data solutions.

With the introduction of Redis Modules, we saw great potential in adding a graph data structure to the Redis arsenal. A native C implementation with an emphasis on performance was developed to bring new graph database capabilities to Redis; RedisGraph is now available as an open source project on GitHub.

In this document we'll discuss the internal design and features of RedisGraph and demonstrate its current capabilities.

RedisGraph At-a-Glance

RedisGraph is a graph database developed from scratch on top of Redis, using the new Redis Modules API to extend Redis with new commands and capabilities. Its main features include:

  • Simple, fast indexing and querying
  • Data stored in RAM, using memory-efficient custom data structures
  • On-disk persistence
  • Tabular result sets
  • A simple and popular graph query language (Cypher)
  • Data filtering, aggregation and ordering

A Little Taste: RedisGraph in Action

Let’s look at some of the key concepts of RedisGraph using this example over the redis-cli tool:

Constructing a graph:

It is common to represent entities as nodes within a graph. In this example, we'll create a small graph with both actors and movies as its entities; an "act" relation will connect actors to the movies they were cast in. We use the GRAPH.QUERY command to issue a CREATE query, which introduces new entities and relations into our graph.

graph.QUERY <graph_id> 'CREATE (:<label> {<attribute_name>:<attribute_value>,...})'
graph.QUERY <graph_id> 'CREATE (<source_node_alias>)-[<relation> {<attribute_name>:<attribute_value>,...}]->(<dest_node_alias>)'

Construct our graph in one go:

graph.QUERY IMDB 'CREATE (aldis:actor {name: "Aldis Hodge", birth_year: 1986}),
                         (oshea:actor {name: "OShea Jackson", birth_year: 1991}),
                         (corey:actor {name: "Corey Hawkins", birth_year: 1988}),
                         (neil:actor {name: "Neil Brown", birth_year: 1980}),
                         (compton:movie {title: "Straight Outta Compton", genre: "Biography", votes: 127258, rating: 7.9, year: 2015}),
                         (neveregoback:movie {title: "Never Go Back", genre: "Action", votes: 15821, rating: 6.4, year: 2016}),
                         (aldis)-[act]->(neveregoback),
                         (aldis)-[act]->(compton),
                         (oshea)-[act]->(compton),
                         (corey)-[act]->(compton),
                         (neil)-[act]->(compton)'

Querying the graph:

RedisGraph exposes a subset of the openCypher graph query language. Although only some language capabilities are supported, there's enough functionality to extract valuable insights from your graphs. To execute a query, we use the GRAPH.QUERY command:

GRAPH.QUERY <graph_id> <query>

Let's execute a number of queries against our movies graph:

Find the sum, max, min and avg age of the Straight Outta Compton cast:

GRAPH.QUERY IMDB "MATCH (a:actor)-[act]->(m:movie {title:\"Straight Outta Compton\"}) RETURN m.title, SUM(a.age), MAX(a.age), MIN(a.age), AVG(a.age)"

RedisGraph will reply with:

1) "m.title, SUM(a.age), MAX(a.age), MIN(a.age), AVG(a.age)"
2) "Straight Outta Compton,123.000000,37.000000,26.000000,30.750000"

The first row is our result-set header, which names each column according to the RETURN clause. The second row contains our query result.

Let's try another query; this time we'll find how many movies each actor has played in.

GRAPH.QUERY IMDB "MATCH (actor)-[act]->(movie) RETURN actor.name, COUNT(movie.title) AS movies_count ORDER BY movies_count DESC"
1) "actor.name, movies_count"
2) "Aldis Hodge,2.000000"
3) "O'Shea Jackson,1.000000"
4) "Corey Hawkins,1.000000"
5) "Neil Brown,1.000000"

The Theory: Ideas behind RedisGraph

Different graph databases use different structures for representing a graph. Some use an adjacency list; others might use an adjacency matrix. Each structure has its advantages and disadvantages. For RedisGraph it was crucial to find a data structure that would enable fast searches on the graph, and so we decided to use a concept called a Hexastore to hold all the relationships within the graph.

Graph representation: Hexastore

A Hexastore is simply a list of triplets, where each triplet is composed of three parts:

  1. Subject
  2. Predicate
  3. Object

Here the Subject refers to a source node, the Predicate represents a relationship, and the Object refers to a destination node. For each relationship within the graph, our Hexastore will contain all six permutations of the source node, relationship edge and destination node. For example, consider the following relation:

(Aldis_Hodge)-[act]->(Straight_Outta_Compton)

where:

  • Aldis_Hodge is the source node
  • act is the relationship
  • Straight_Outta_Compton is the destination node

All six possibilities of representing this connection are as follows:

SPO:Aldis_Hodge:act:Straight_Outta_Compton
SOP:Aldis_Hodge:Straight_Outta_Compton:act
POS:act:Straight_Outta_Compton:Aldis_Hodge
PSO:act:Aldis_Hodge:Straight_Outta_Compton
OPS:Straight_Outta_Compton:act:Aldis_Hodge
OSP:Straight_Outta_Compton:Aldis_Hodge:act

With the Hexastore constructed, we can easily search our graph. Suppose I would like to find the cast of the movie Straight Outta Compton; all I have to do is search my Hexastore for all strings with the prefix: OPS:Straight_Outta_Compton:act:*

Or, if I'm interested in all the movies Aldis Hodge played in, I can search for all strings with the prefix: SPO:Aldis_Hodge:act:*
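As a rough illustration of the idea, here is a minimal Python sketch. The key format follows the triplets above; the storage, a plain sorted list scanned with a prefix test, is a stand-in for the module's actual data structure:

from itertools import permutations

def hexastore_keys(subject, predicate, obj):
    """Generate all six permutation keys for one (S, P, O) relation."""
    parts = {"S": subject, "P": predicate, "O": obj}
    keys = []
    for order in permutations("SPO"):
        label = "".join(order)                     # e.g. "OPS"
        value = ":".join(parts[c] for c in order)  # e.g. "movie:act:actor"
        keys.append(label + ":" + value)
    return keys

store = sorted(hexastore_keys("Aldis_Hodge", "act", "Straight_Outta_Compton"))

def search(prefix):
    """Return every stored key that starts with `prefix`."""
    return [k for k in store if k.startswith(prefix)]

# Who played in Straight Outta Compton?
print(search("OPS:Straight_Outta_Compton:act:"))
# Which movies did Aldis Hodge play in?
print(search("SPO:Aldis_Hodge:act:"))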

Although a Hexastore uses plenty of memory (six triplets for each relation), we use a trie data structure, which is not only fast to search but also memory efficient, as it doesn't duplicate string prefixes it has already seen.
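To see why shared prefixes save memory, here is a minimal trie sketch with insertion and prefix search. This is a simplified illustration, not the module's actual trie implementation:

class TrieNode:
    def __init__(self):
        self.children = {}   # one child per distinct next character
        self.terminal = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key):
        node = self.root
        for ch in key:       # keys with a common prefix reuse existing nodes
            node = node.children.setdefault(ch, TrieNode())
        node.terminal = True

    def keys_with_prefix(self, prefix):
        node = self.root
        for ch in prefix:    # walk down to the node that ends the prefix
            if ch not in node.children:
                return []
            node = node.children[ch]
        # collect every complete key below this node
        results, stack = [], [(node, prefix)]
        while stack:
            node, key = stack.pop()
            if node.terminal:
                results.append(key)
            for ch, child in node.children.items():
                stack.append((child, key + ch))
        return results

Keys from different relations that share a subject or predicate also share long prefixes (node and edge names), so the trie stores each common run of characters only once.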

Query language: openCypher

There are a number of graph query languages. We didn't want to reinvent the wheel and come up with our own language, so we decided to implement a subset of one of the most popular graph query languages out there: openCypher. The openCypher project provides the means to create a parser for the language; although that is convenient, we decided to create our own parser, with Lex as a tokenizer and Lemon to generate a C target parser.

As mentioned, only a subset of the language is supported, but it is our intention to keep adding new capabilities and extending the language.

Runtime: query execution

Let's review the steps our module takes when executing a query. Consider the following query, which finds all actors who've played alongside Aldis Hodge and are over 30 years old:

MATCH (aldis:actor {name:"Aldis Hodge"})-[act]->(m:movie)<-[act]-(a:actor) WHERE a.age > 30 RETURN m.title, a.name

RedisGraph will:

  • Parse the query and build an abstract syntax tree (AST)
  • Construct a query execution plan composed of:
    • A label scan operation
    • A filter operation (filter tree)
    • An expand operation
    • An expand-into operation
  • Execute the plan
  • Populate the result set with the matching entities' attributes

Query parser

Given a valid query, the parser will generate an AST containing six primary nodes, one for each clause:

  1. MATCH
  2. CREATE
  3. DELETE
  4. WHERE
  5. RETURN
  6. ORDER

Generating an abstract syntax tree is a common way of describing and structuring a language.

Filter tree

A query can filter out entities by creating predicates. In our example, we're filtering out actors aged 30 or younger. It's possible to combine predicates using the AND and OR keywords to form granular conditions. During runtime, the WHERE clause is used to construct a filter tree. Each node within the tree is either a condition, e.g. A > B, or an operation (AND/OR). Candidate entities are passed through the tree and evaluated.
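A minimal sketch of such a filter tree might look like this in Python (the node classes and the example predicate are illustrative assumptions, not the module's C structures):

class Condition:
    """Leaf node: compares one entity attribute against a constant."""
    def __init__(self, attr, op, value):
        self.attr, self.op, self.value = attr, op, value

    def evaluate(self, entity):
        ops = {">": lambda a, b: a > b,
               "<": lambda a, b: a < b,
               "=": lambda a, b: a == b}
        return ops[self.op](entity[self.attr], self.value)

class Operation:
    """Inner node: combines two subtrees with AND or OR."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right

    def evaluate(self, entity):
        if self.op == "AND":
            return self.left.evaluate(entity) and self.right.evaluate(entity)
        return self.left.evaluate(entity) or self.right.evaluate(entity)

# WHERE a.age > 30 — a tree with a single condition node
tree = Condition("age", ">", 30)
print(tree.evaluate({"name": "Aldis Hodge", "age": 31}))  # True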

Query processing

The MATCH clause describes relations between the queried entities (nodes). A node can have an alias, which allows us to refer to it at later stages of the query's lifetime (the WHERE and RETURN clauses), but every node must eventually be assigned an ID. The process of assigning IDs to nodes is referred to as the search phase.

During the search, we query the Hexastore for IDs according to the structure of the MATCH clause. For instance, in our example we'll start our search by looking for the movies Aldis Hodge played in. For each movie, we'll extend our search to find which other actors played in the movie currently being processed.

As you might imagine, the search is a recursive operation that traverses the graph. At each step a new ID is discovered. Once every node has an ID assigned to it, we can be sure the current entities have passed our filters. At this point we can extract the requested attributes (as specified in the RETURN clause) and append a new record to the final result set.
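Sketching the search phase in the same toy Python terms as the Hexastore example above (the helper names are assumptions for illustration, not RedisGraph internals):

def cast_mates(store_search, actor):
    """Toy version of the search phase: find actors who played
    alongside `actor` by expanding through the Hexastore twice.
    `store_search` is a prefix-search function like `search` above."""
    results = set()
    # Step 1: movies the actor played in (SPO:<actor>:act:<movie>).
    for key in store_search("SPO:" + actor + ":act:"):
        movie = key.split(":")[3]
        # Step 2: actors in each such movie (OPS:<movie>:act:<actor>).
        for key2 in store_search("OPS:" + movie + ":act:"):
            other = key2.split(":")[3]
            if other != actor:
                results.add((movie, other))
    return results

Each expansion narrows the candidate set; once every alias is bound to an ID, the filter tree decides whether the match reaches the result set.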

Benchmarks

Depending on the underlying hardware, results may vary. That said, inserting a new relationship is done in O(1): RedisGraph is able to create 100K new relations within one second.

Retrieving data really depends on the size of the graph and the type of query you're executing. On a small graph of ~1,000 entities and ~2,500 edges, RedisGraph is able to perform ~65K friend-of-a-friend queries every second.

It's worth mentioning that, apart from the Hexastore, entities are not indexed. It's our intention to introduce entity indexing, which should decrease query execution time dramatically.

License

Redis-Graph is published under AGPL-3.0.

Conclusion

Although RedisGraph is still a young project, it can be an alternative to other graph databases. With its subset of operations, one can use it to analyze and explore graph data. Being a Redis module, this project is accessible from every Redis client without the need for any adjustments. It's our intention to keep improving and extending RedisGraph with the help of the open source community.

Tim Berners-Lee shares his disappointment on World Wide Web’s 29th birthday




Tim Berners-Lee, also known as the father of the World Wide Web, shared his thoughts about the internet on the occasion of the World Wide Web's 29th birthday (12th March). He is definitely not happy with the way the internet has changed over the years. In an open letter appearing in The Guardian, Tim Berners-Lee expressed his fears, and his ideas on how the internet should serve the people.

Since its inception in 1989, the internet has come a long way and has undergone many changes. Access to and usage of the internet are no longer limited to a handful of people or just bits of data. Today the internet plays an important role in millions of people's lives: from navigating the roads to booking a movie or a flight ticket, video and audio calls, heavy online gaming, online shopping, online education and more. The internet has made almost everything, and almost any piece of information, available with ease.

Tim Berners-Lee is disappointed by how messed up the web is today. He wrote: "The web that many connected to years ago is not what new users will find today. What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared."

What do these dominant forces do?

As per Tim Berners-Lee, “These dominant platforms are able to lock in their position by creating barriers for competitors. They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.” He further writes:
“The fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.

"We've looked to the platforms themselves for answers. Companies are aware of the problems and are making efforts to fix them – with each change they make affecting millions of people. The responsibility – and sometimes burden – of making these decisions falls on companies that have been built to maximise profit more than to maximise social good."

How do we fix this problem?

Tim Berners-Lee suggests that a legal or regulatory framework that accounts for social objectives may help ease those tensions. "Let's assemble the brightest minds from business, technology, government, civil society, the arts and academia to tackle the threats to the web's future," he writes. "At the Web Foundation, we are ready to play our part in this mission and build the web we all want. Let's work together to make it possible."

Ask HN: Is a blog clever marketing or just a waste of time?


Your blog is one of the best sources of new traffic for your app. Search for "content marketing" if you're not doing it already. I knew a client who got at least 20 signups per day from a blog post that ranked #1 on Google.

Of course, your content has to be amazing and needs a few backlinks from industry leaders or authority sites. Also, the most important part of your blog post is the bottom section, where you have your call to action to convert the reader into a subscriber, so don't ever use a generic "click here" type of CTA.

E.g. if your blog post is about 10 ways to lose weight, then add a resource box below which says "It's time to get started, download this FREE timeline PDF I prepared for you which helped me and 1000 members of this website lose over 10 lbs in 22 days" and bam... there you have a new signup!

P.S. It goes without saying, but you must give good advice. You have to win their trust; the article and the PDF must give them what they're searching for (give them 80%, and the remaining 20% after payment).


It doesn't have to be a blog, but I agree that SEO is a huge benefit if done correctly.

Blogs that look stale or infrequently updated have a negative buying effect for me.

But create a couple of SEO pages like a how-to guide, FAQs, etc., and it's a short-term investment with long-term benefits.


It can really depend on the niche. For some niches, it's basically as simple as "write an article, and you'll be #1". For others, it's a case of "more money than you'll earn in your lifetime is spent each year fighting for this spot"

Obviously, though, the rewards in the former are far lower. But it certainly means it's worth trying one article to see how big the rewards are. Basically, just search as a user would and see if the results are satisfying. If none are, then it should be straightforward to rank #1 or near the top.


As someone who has gotten some decent traffic from articles: you may find that a "blog" isn't necessarily worthwhile, but that producing 10-20 high-quality articles is. Especially since articles can often double as a high-quality FAQ, lowering your customer support requirements.

Good blogs basically serve the same role, they generate a lot of posts which could be effective standalone articles.


Glad I could help. But I would like to caution you that it does require quite a lot of commitment and time.

You can't treat your blog posts as an afterthought, trying to be done with them in one evening and then forgetting about them (they won't show in Google or generate traffic or signups that way, yet most people think of blogging in exactly those terms).

Sometimes you may have to spend days researching and writing an article, plus the graphics and the PDF, while spending $500 on graphics, editing, proofreading, etc. It requires the same kind of effort you would put into creating a product.

Then the outreach is the worst part, where you have to convince other people in your niche to add your articles as a resource (Brian Dean has some great advice on this, btw).


There are 2 main strategies imo.

1. A blog that makes you a thought leader. People will like you, check out what you're doing, and perhaps start using your app. This is the typical way to go for b2b, or if you have a big ego :) Another pro is that if you change your app, you still (may) have your readership.

2. A blog that extends the content of your app. This is fundamentally a set of landing pages that improve your SEO. One common strategy is to talk about problems that people may have and that your app solves. Whether or not you want to be explicit about your app is a matter of taste.


Format 3: "A problem I had while running my business, and my solution." Engineering posts of this kind are very popular on HN or /r/programming; marketing, growth, or product-development posts might do better shared on Twitter or LinkedIn.

People read it because the solution is interesting to them, but get exposed to your business as a consequence. CandyJapan's posts on dealing with credit card fraud were how I found out about their service, for instance: https://www.candyjapan.com/behind-the-scenes/how-i-got-credi...


It also helps a lot, in my experience as both a reader and a writer, to have something unique or unusual to say. Most people do not and consequently don't get readers and often waste a lot of time in the process.

I'm admittedly in a somewhat unusual position, though, because I contribute to a blog about grant writing (http://www.seliger.com/blog) and basically no one else has anything useful or interesting to say about the topic. I exaggerate, but only slightly.

I don't know how well blogging works for others, but I do feel like I've seen a lot of people and companies with really lame blogs, and I have to think, "Is that really doing anything for you?" I'd be curious to see anyone with data on this subject, especially because I'm not even really sure what "data" would look like in this case.


A blog is one of the best ways to get leads, whether you are a business owner or a developer/other professional, provided you love writing.

I use my blog to promote my skills, and for the last couple of years my site and blog have helped me earn some good contracts/gigs. Even my current job came about because someone read my blog and contacted me.

<Shameless_Plug>

I am not an SEO expert, but these tools did help me in the early days. I wrote about how I used them:

http://blog.adnansiddiqi.me/3-free-seo-tools-you-should-use-...

</Shameless_Plug>


Disclaimer: I do content marketing consulting and used to own an agency that did the same.

Don't think of it as a blog. See it as your own media platform, whose only purpose is to broadcast information that drives sales.

It is also an always-ready salesperson who never sleeps and can be trained to overcome any objection to close sales.

A properly managed blog is definitely worth your time. However, the content must be clearly integrated with your marketing and advertising.

The best type of content? Anything that helps current users better use your product. Tutorials, guides, docs, videos, etc. The second best is content that shows how your product solves a specific problem.

You don't need to limit yourself to one blog either. A multi-outlet approach works really well and typically helps corner a market.


If you want to acquire users from Google search, it's absolutely required that you have fresh, frequently updated content with links from other reputable sources to even be considered for display in search results.

If you don't care about acquiring users from search because you have other means, then yes, a blog may be a complete waste of time.

If your site already has some crawlable content and you're already receiving some traffic from Google, then rather than create a blog, your time is probably better spent doing the following:

creating more content pages to help Google understand your site

creating a sitemap (see the sketch after this list)

decreasing the load time of your site

spending money on Google ads

getting high-quality backlinks
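
If "creating a sitemap" sounds abstract, here is a minimal sketch in Python of generating a basic sitemap.xml with the standard library. The domain and page list are hypothetical placeholders; real sites often use their framework's built-in sitemap support instead.

from datetime import date
from xml.etree.ElementTree import Element, SubElement, ElementTree

# Hypothetical pages -- replace with your site's real URLs.
pages = [
    "https://example.com/",
    "https://example.com/how-to-guide",
    "https://example.com/faq",
]

# Build the <urlset> document per the sitemaps.org schema.
urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for url in pages:
    entry = SubElement(urlset, "url")
    SubElement(entry, "loc").text = url
    SubElement(entry, "lastmod").text = date.today().isoformat()

# Write sitemap.xml, then reference it from robots.txt or submit it
# in Google Search Console.
ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)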


One of the things I use to evaluate a company's legitimacy and ability to succeed is its ability to be a thought leader through content marketing and influencing. Not just blogging, but a mix of blogging, YouTube videos/tutorials, meetups, etc., and above all else actually being an influencer with that content, meaning lots of people subscribe to and share that stuff.

That being said, blogging is just a medium for showing the world what you know and how good you are at teaching others. If you don't know anything and don't have anything valuable to share, then your blog will probably be a 'waste of time'.


>Is having a blog alongside my website clever marketing or just a waste of my time?

I used to blog all the time, but that was back when blogs were just a way for people to communicate things they were interested in, not primarily a means of marketing. Just do it - if you enjoy it, keep doing it, if you don't, stop.

Chances are that if it's just for marketing purposes, then unless you care enough and are a decent enough writer to be engaging, it's not going to get a lot of readers anyway, because you won't be writing anything worth reading.

So to me, the question you should be asking yourself isn't whether it would be a clever way to market the app, but whether or not you want to blog.


A lot of blog articles out there = fluffy content on topics tangentially related to the product or service.

Instead of blog articles per se, I'd recommend creating high value resource articles on topics people who'd benefit from your app may search for. For example, if you offer a goal progress app, people with interest may search for resources on 'long-term strategies to boost productivity that you won't give up after a few days'.

High value = it should provide a comprehensive breakdown of the topic and cover points the top ranking articles on the same topic don't. This will boost your article's search rankings and help it attract more backlinks and social shares.

You can also offer a PDF download of the resource in exchange for an email address so you can nurture these leads.

Examples of these resource articles:

[1] https://mailshake.com/masterclass/find-email-addresses/

[2] https://artofemails.com/sales-follow-up


It probably depends on your product.

The more expensive your service is, the more you'll probably need to offer proof that it's worth the price. If your service is going to cost me a one time purchase price of $1, I'm going to buy it regardless of what you say. If it's going to cost me $499/month, I'm going to research the heck out of it, which would probably include reading your blog and/or the best parts of your site to find the information I need.

The harder your product is to use (because some things are just complicated no matter how simple you try to make them), the more you'll need an efficient way to educate your customers. One use for a blog is as a place to tell stories about people using your product: "How Susie used getp0d.com to double her income" or "Using getp0d.com as a CRM for your motel's reservations". You can also use a blog as a place to put any ol' random thought you have until there's a better place for it. My company's old blog was mostly for making it sound like my services might be worth the price I quoted you.


It depends on your customers. If your potential customers are likely to be interested in reading things around the problem you are solving then maybe. But as others have mentioned, it needs to be "content first". If the content isn't good and just a thin excuse to talk about your product, it won't work.

I've been doing basic content marketing for my service (https://medium.com/revenuecat-blog) and it is my #1 source of new signups.

I have a lot of expertise in the subject. It is a wide domain with lots of adjacent problems I can write about. And it is something that people search for and read about. All of these together have made it worthwhile.

That said, it is very time consuming. I try to average one post a week but a good post that is worth publishing takes me probably 5-10 hours of work between research, drafting, proofing, and promoting. This is time taken away from product development, so you need to consider the ROI.


Done right, a blog is a very cheap and powerful marketing tool.

As @superasn mentioned, one huge win (again, if done right) is that you can keep getting recurring signups/new users if one or more of your blog posts ranks at the top for keywords relating to your web app/product offering.

I've successfully grown a blog to over 100K monthly visitors in less than 6 months, all using organic SEO techniques. Too bad I didn't (and still don't) have a product to market on that blog. It ranks on the first page for several tech-related keywords, incl. a couple related to Apple and Steve Jobs.

What's your web app/domain? Without knowing much about it, I can't give you domain- or topic-specific advice. My email is in my bio if you want to chat more.


Is having a blog alongside my website clever marketing or just a waste of my time?

The only real way to know is to try it and see if it works.

I will note that you can pay for writing. You typically give someone a topic to write about, and maybe a link to include or some keywords. There are freelancers who do this, and there are services that do it.

It doesn't completely get you off the hook, but it can reduce the workload involved. If it makes more money than it costs you, it can be well worth it.

Edit: You also might be interested in reading this:

http://www.doreenmicheletraylor.com/2018/02/actionable-conte...


Another strategy (I can't find the post, but HN regular mbuckbee evangelizes this) is to create small free tools / indexes to get inbound traffic. They might be small byproducts you made while creating your business.

Examples: AWS in plain english, foragoodstrftime.com, everytimezone.com, the various 'awesome' lists, etc. I've personally thought about writing batch scripts for dumb little tasks. Books that lead into your business are also common.

Make it something your audience will value. The general idea is that you can then share it with your mailing list (in Amy Hoy parlance, an 'ebomb').


It's worth it for developer marketing. People are constantly googling for anything that solves their problem, so if you can post useful solutions that tie into your product, preferably with some sort of freemium hook, it's great.

I have no idea how often people google specific keywords in other verticals, so YMMV.


That wouldn't provide any SEO benefits for his website, which I imagine is one of the primary goals.

Medium is good if you care only about having your articles read.

Your own website is good if you want to boost your website's Google rank.


For a mobile app: absolutely not worthwhile.

For a product like a "food supplement": yes, totally.

The model is: write something useful that will get very relevant traffic via Google, and be able to convert this traffic immediately into money.


The fact that you're asking shows me that it is a waste of time for you. You can't fake or brute-force your way to a great blog. Unless you're really into writing it, it's not going to be enjoyable to write or to read.

You can get some value out of it, but odds are you’ll be frustrated and hate doing it.


I personally love writing but don't have the time for a real blog. If you want to add content without being strapped into regular updates, well-written articles added to your site never need to be labeled as blog posts.

For example, I decided to write to my mailing list once every 6 weeks this year (I hate getting more than that, so I'm not sending what I wouldn't want to receive). Once I have a few built up, I will add them to my site, and I won't hesitate to rewrite the content if I have feedback to integrate.

I think you need an internal metric for regularity, or your content writing will just slip away from you and won't happen.


This is a good perspective. We have been conditioned to think of content marketing as only one thing: regularity. But since most posters agree that one great article can pull in all the business, it might be worth asking whether we should go for one big article versus the steady-stream approach.

I think I would prefer the one big article, every month or so, to the pressure to put out "something" every week...

Tech Giants Set to Face 3% Tax on Revenue Under New EU Plan


Large digital companies operating in the European Union, such as Alphabet Inc. or Twitter Inc., could face a 3 percent tax on their gross revenues based on where their users are located, according to a draft proposal by the European Commission.

The draft, seen by Bloomberg, was circulated on Friday and outlines how a targeted levy on gross revenues would increase the tax bill digital giants face, as the bloc seeks to raise money from an industry it says provides less than it should to public coffers. EU countries have been looking into methods to tax digital companies, including Amazon.com Inc. and Facebook Inc., in a way that captures the true value created in the region.

The commission’s planned revenue tax, which is expected to be proposed on March 21, would only represent a targeted, short-term solution. The bloc also plans to propose a more comprehensive, longer-term approach that will focus on a digital permanent establishment.

The scope of the planned tax would cover companies offering services such as advertising or the sale of user data, according to the draft prepared by the EU’s executive arm. It would also cover services provided by multi-sided digital platforms, which let users find and interact with each other and where users supply goods and services directly to each other.

Digital Revenue

The levy would cover companies that have annual worldwide total revenue exceeding 750 million euros ($920 million) and total taxable annual revenue from offering digital services in the EU above 50 million euros, according to the draft. The parameters may change before the proposal is approved.

The levy, which would be charged annually based on gross revenues, would be at a single rate across the EU of 3 percent, according to the draft proposal, although the rate, too, could change in the final version. Earlier drafts envisaged the rate somewhere between 1 percent and 5 percent.
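
To make the draft's arithmetic concrete, here is a minimal illustrative sketch in Python. It assumes the levy base is the in-scope EU digital revenue; the thresholds and rate come from the draft described above and may change in the final version.

# Illustrative only: thresholds and rate as described in the draft proposal.
WORLDWIDE_THRESHOLD_EUR = 750_000_000   # annual worldwide total revenue
EU_DIGITAL_THRESHOLD_EUR = 50_000_000   # taxable annual EU digital revenue
RATE = 0.03                             # single 3 percent rate across the EU

def draft_levy(worldwide_revenue_eur, eu_digital_revenue_eur):
    """Return the annual levy in euros, or 0.0 if the company is out of scope.

    Assumes the base of the levy is the in-scope EU digital revenue.
    """
    in_scope = (worldwide_revenue_eur > WORLDWIDE_THRESHOLD_EUR
                and eu_digital_revenue_eur > EU_DIGITAL_THRESHOLD_EUR)
    return RATE * eu_digital_revenue_eur if in_scope else 0.0

# Example: EUR 2bn worldwide, EUR 300m in-scope EU digital revenue
# -> 0.03 * 300,000,000 = EUR 9,000,000.
print(draft_levy(2_000_000_000, 300_000_000))  # 9000000.0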

The commission’s proposal comes as traditional taxation practices have so far failed to capture business proceeds from an industry where value added tends to be virtual rather than material, and as digital companies have sought to take advantage of loopholes created by uncoordinated European regulation.

Even as national governments accept that the current taxation system needs to be altered, the path forward is fraught with difficulties, with some countries warning that a new levy could discourage digital use and push customers to products outside of Europe.

Any tax proposal will need the unanimous approval of all 28 current members of the EU before turning into law, so one country alone could block it.

Other countries have argued that discussions and decisions on this issue should be tackled at a global level and with the help of the Organisation for Economic Cooperation and Development, a group that advises its 35 members on tax policy.

But a report by the OECD published on March 16 indicated that there is still no global consensus on how best to proceed with the taxation of the digital economy or on the merits of an interim solution.
