Well, this is huge, so I'd like to draw your attention to what's happening right now. This is a very alarming case, and it concerns every ad blocker user.
Brief introduction to ad blocking
To understand what happened, you should first learn a bit about how ad blocking works. Every ad blocker relies on so-called filter lists, which are maintained (mostly) by volunteers. That said, whichever ad blocker you use, credit for the actual ad blocking belongs to the filter list maintainers. The most popular filter list is called EasyList, and this is what this story is about.
Got it, so what happened?
Yesterday a strange commit landed in the EasyList repo. The "functionalclam.com" domain was removed with a comment "Removed due to DMCA takedown request".
An ad server was unblocked by all ad blockers due to a DMCA request. Let that sink in for a moment...
The community did some digging in the comments section of that commit. It appears that the story began 23 days ago with a comment from a freshly registered GitHub account on the commit that added "functionalclam.com" to EasyList. @dmcahelper threatened "the file or repository disruption," but the threats were not taken seriously at the time.
The domain in question hosts an image describing its work as "used by digital publishers to control access to copyrighted content in accordance with the DMCA and understand how visitors are accessing their copyrighted content".
However, further research showed that this domain hosts code belonging to the anti-adblocking startup Admiral, so we can assume that this is the company to blame. Where did they get this glorious idea? The wording of the original comment from 23 days ago is awfully reminiscent of this post claiming that the DMCA can be applied to ad blockers.
Why should I care?
This might set a very dangerous precedent: an advertising company exploiting the DMCA to force people to see its ads. It could lead to ridiculous consequences if left unchallenged.
EasyList is a community project and may not be able to protect itself from such an attack. I am calling on other ad blocker developers, on you, and on everybody else concerned about people's rights (EFF, please) to stand up to this threat and protect ad blocking.
UPD (11 Aug, 8:09 GMT): An EFF representative has offered their help to the EasyList maintainers.
We received a DMCA request from Github, as the server in question may've been used as Anti-Adblock Circumvention/Warning on some websites. To keep transparency with the Easylist community,
the commit showed this filter was removed due to DMCA.
We had no option but to remove the filter without putting the Easylist repo in jeopardy. If it is a Circumvention/Adblock-Warning adhost, it should be removed from Easylist even without the need for a DMCA request.
In regards to Adblock-Warning/Anti-adblock, the amount of filters being added recently to Easylist has been greatly limited due to issues like this. As list authors we have to be careful in what we add.
We'll certainly look at our legal options and it will be contested if we get DMCA requests for any legit adservers or websites that use DMCA as a way to limit Easylist's ability to block ads.
In the 1930s, the U.K. built a massive network of state-of-the-art bike trails. Now the challenge is to revive them.
Imagine having a 300-mile network of segregated, long-distance bike lanes that provided an alternative to a highway system. In the U.K., it’s not just an urbanist’s daydream—it already exists, but everyone forgot about it.
The nation is currently rediscovering its large-scale grid of bike lanes, built across the country in the 1930s and still substantially in place, if largely neglected over the years. The old lanes are now at the heart of a crowdfunding campaign, one designed to promote these lanes’ rehabilitation as part of Britain’s contemporary cycling infrastructure. What makes the find all the more remarkable is that the network was essentially rediscovered by accident.
The lanes came to light while transit historian Carlton Reid was doing research for his recent book Bike Boom. Reading up on the Dutch Ministry of Transport’s early forays into bike infrastructure, Reid discovered that the Dutch had in fact advised Britain’s transport ministry in the 1930s on the creation of its own cycle network. Some of these lanes were already known in cycling advocate circles—one well-known example was opened in West London in 1934 by Britain’s transport minister Leslie Hore-Belisha. Elsewhere, their sheer extent had been forgotten.
In fact, as Reid discovered, Britain went through something of a cycle lane boom in the late 1930s. Between 1937 and 1940, Britain’s government demanded that any state-funded scheme to build an arterial road must also include a 9-foot-wide cycle track running the length of the road. Just as surprising as the government’s enlightened attitude is how cycling groups responded: with deep suspicion.
“At the time, the feeling among cycling groups was, ‘We deserve our place on the road. We don't want to be relegated to a secondary system,’” says John Dales, an engineer working with Carlton Reid on the campaign. By contrast, Dales says, government thinking on the issue sounds strikingly modern. “Immediately after the war, the Ministry of Transport published a road design guide for built-up areas that insisted that ‘segregation should be the keynote of modern design.’” As a reflection of this attitude, the bike routes’ construction was taken quite seriously. They were solidly built, with broad 9-foot-wide concrete surfaces often rimmed with granite curbs.
This enlightened official approach chimed with the times. Cycling was still a vital means of transit in a country where car ownership only became common in the late 1950s. Many of the new, broader roads that would ultimately take the burden of Britain’s car boom were still being planned and constructed between the wars. The cycle network grew up as part of this new road network, rather than by scraping existing lane space away from motor vehicles. It thus avoided the controversies that cluster around such projects today.
Still, if these routes had been heavily used, we’d probably have known more about them. Contemporary references to the network are scant, but it’s possible that actual usage was light because the lanes sprung up along new roads and in newly laid-out districts where traffic was still pretty low. They may have been laid out to plan for future demand rather than to cater to needs that already existed. Wartime priorities saw an end to the cycle lane growth spurt and, once the austerity of Britain’s immediate postwar years started clearing toward the close of the 1940s, bike use dropped off sharply.
Many of the routes nonetheless remain in place. They’re just not expressly marked as bike lanes and have faded into usage as roadside pathways or parking, or have been grassed over. To date, Reid and Dales have discovered more than 90 protected bike lane plans (mapped below) covering more than 300 miles. It’s possible—indeed likely, given the considerable scope of British 1930s road building—that there are still more out there waiting to be rediscovered. Clearly there are plenty of people keen to know exactly what else is out there. The crowdfunding campaign has already exceeded its modest £7,000 initial target threefold, with still almost a week before it closes.
The money will fund further investigations locating neglected bikeways, and the project will then approach local authorities to discuss ways to reopen them and mesh them with existing infrastructure. There’s admittedly a large jump between this step and the reopening of a full national network, but the project could still score major victories for cycling advocates. Uncovering and reusing these old tracks could prove far cheaper than constructing new lanes. The network provides a readymade answer to claims that there isn’t enough space for bike highways between towns. And it makes clear that, if Britain managed to find money to produce state of the art bike lanes during the Great Depression, it can definitely do so again.
How do you get to the city of the future? Seven cities have spent six months racing to answer that question. They’re vying for a $40 million start-up prize from the U.S. Department of Transportation, which asked them to juggle three big ideas: automation, climate change and urban inequality. But even without the federal cash or “Smart City” crown, city officials say big changes are coming faster than people realize.
I’ve been ordering used lenses for years and have never had a problem with any purchases. That is, until now. I recently ordered a $1,500 used camera lens from Amazon. The lens never showed up, and Amazon is refusing to return my money because they claim the tracking number shows that it was delivered to my address.
Here’s the story of how I fell victim to this used lens scam.
On June 29th, I ordered a used lens through Amazon Marketplace, and the seller’s name was “Lana’s Store”. The lens is a Canon 70-200mm f/2.8 II, which normally runs around between $1,500 and $1,600 used. The description of the used lens indicated the lens is in excellent condition and the price seemed very good… maybe too good:
The seller had one good review but didn't appear to be selling anything else. I figured it was just a person selling their used items.
On July 8th, I received a status update for the order. It was marked as “shipped” along with a USPS tracking number. Finally, it shipped! I had waited a week and a half patiently since sometimes private sellers take some time to ship. Unfortunately, after looking at the package tracking information, my happiness was immediately wiped away:
It appears the package was delivered to the door and signed for by someone I don't know! At first, I panicked, went looking for the package, and asked neighbors if they knew this Mr. "R" person who had signed for my package. After calming down a little, I realized that there is this section of the tracking page:
I filled out the form and within a minute received a PDF file with the "proof of delivery":
So the package was delivered and signed for by "L", who has the same last name as "R". Also, it's a good thing that USPS makes you write the first line of your address. Apparently, my package was delivered to the wrong address, which explains why I didn't get it and why someone else would sign for it.
I sent a few messages to the seller to see if the package was sent to the wrong address but never received any replies.
Eventually, I decided to visit the address on the proof of delivery — a house located on the other side of my town. The people living at the address appeared very nice and helpful. They did confirm that they received the package and did sign for it. Here are some pictures of the package they received:
The package appears to be addressed to “Mr. S or Current Resident”. Mrs. “L” informed me that Mr. “S” is her late father who passed away at the end of May, which is why she signed for it. So here’s what I found out at this point:
They didn’t order or expect this package.
The package contained two baking mats.
Package weight was around 8 oz.
The sender’s address is some apartment complex in Minnesota.
Who pays extra to get a signature for something like this and then writes “or Current Resident”?
Afterward, I did some digging and found out that the late Mr. “S”’s obituary with his address is available on the Internet and can be easily found by typing his very common last name and “Gilberts, IL” into Google, which is most likely how the seller found that address. I won’t be posting the obituary for privacy reasons, but it’s amazing how much personal information is in it. Also, notice the package weight is 8 oz, while the lens (according to specs) is 3.2 lbs.
The seller knew Amazon's system. When I made the order, they shipped a box of baking mats to Mr. "S"'s address in Gilberts, Illinois, and then used that tracking number for my order.
I contacted Amazon customer service on July 10th because the seller didn't respond and I already knew the package hadn't come to my address. Each contact was a phone call with Customer Service, who would then forward a short message to the A-to-Z claims people. The case was closed on July 11th because:
We have closed your claim for order 111-5490204-451xxxx because the tracking information below shows that someone at your address signed for the package. This order is not eligible for the A-to-z Guarantee because it arrived at your address.
I appealed the case, explaining once more that I didn't sign for it and it didn't go to my address, but the case was kept closed because apparently they just check the tracking number, which indicates it was signed by someone (it doesn't appear to matter who) and delivered in Gilberts, Illinois. This is the key: that's all they check!
I appealed again on July 27, asking them to get proof of delivery from USPS and open an investigation, but to no avail:
I then finally figured out that I can reply directly to these emails with attachments myself, so I submitted the proof of delivery proving it was sent to the wrong address. But Amazon then told me:
Before we can cover this item under our guarantee, we need to confirm that it was delivered to the incorrect address.
Please send a copy of a statement from the shipper to us and the seller at the email addresses below. To help us match the statement to the correct claim, be sure to include your order number in the email.
We will not be replying to your mail if the proof provided is insufficient or invalid.
At this point, I felt like this whole thing was driving me insane. There was no lens in the box, and the package wasn’t even addressed to me. But still, I went to USPS and got a letter from them that explained the package was never addressed in my name or to my address (which starts with the number 4).
I even added “jeff@amazon.com” to the email thread, hoping Amazon CEO Jeff Bezos would receive it. On July 29th, I got this response:
I submitted one final appeal, and here’s what Amazon said:
The seller has provided us with the USPS proof of delivery for this order which confirms that this package was delivered at your address with signature confirmation.
This is when I felt like I hit an “Amazon wall,” with nothing left to do or try. I’m disputing the charge with my credit card right now. If the credit card dispute doesn’t work out in my favor, I am left with only one option, and that’s small claims court. According to Amazon’s Customer Agreement, I am exempt from Arbitration if my dispute qualifies for small claims court.
So that’s the story of how I found myself in a customer service nightmare after getting tricked by a $1,500 Canon camera lens. Hopefully, my story serves as a warning to be careful when buying used gear, even if it’s through an online retailer as trustworthy as Amazon.
About the author: Ziemowit Pierzycki is a photographer based in Gilberts, Illinois. The opinions expressed in this article are solely those of the author.
Hollywood celebrities have been known to deduct a few years when declaring their ages. Turns out Silicon Valley startups do the same thing.
In a business where everyone is searching for the next big disruptive concept, old age is rarely considered an asset. As such, some companies make their stated dates of founding subject to change.
Chris O’Neill, chief executive of Evernote Corp., uses June 24, 2008—the day the...
SoundCloud has just closed the funding round it needed to keep the struggling music service afloat. CEO Alex Ljung will step aside, though he will remain chairman, as former Vimeo CEO Kerry Trainor replaces him. Mike Weissman will become COO as SoundCloud co-founder and CTO Eric Wahlforss stays on as chief product officer. New York investment bank Raine Group and Singapore's sovereign wealth fund Temasek have stepped in to lead the new $169.5 million Series F funding round.
SoundCloud laid off 40 percent of its staff last month, with 173 employees departing in an effort to cut costs. The company only had enough runway left to last into Q4, and today’s investor decision was viewed as a do-or-die moment for the company. Now it will have the opportunity to try to right the ship, or sail into an established port via acquisition.
SoundCloud declined to share the valuation or size of the new funding round. Yesterday, Axios reported the company was raising $169.5 million at a $150 million pre-money valuation. That's a steep decline from the $700 million it was valued at in previous funding rounds. The new Series F round supposedly gives Raine and Temasek liquidation preferences that override those of all previous investors, and the Series E investors are getting their preferences reduced by 40 percent. They're surely not happy about that, but it's better than their investment vaporizing.
Raine will get two board seats for bailing out SoundCloud, with partner and former music industry attorney Fred Davis, and the vice president who leads music investments, Joe Puthenveetil, taking those seats.
While abdicating the CEO role probably wasn’t exactly what Ljung had hoped for, at least he gets to stay on with the company as chairman of the board. “This financing means SoundCloud remains strong, independent and here to stay,” he wrote.
SoundCloud says its total revenue is now at a $100 million annual run-rate. If it can keep costs low and grow that number, it may eventually get to break even and no longer need infusions of investor capital.
TechCrunch broke news about the magnitude of the SoundCloud crisis last month. Sources from the company told us the layoffs had been planned for months, but SoundCloud still recklessly hired employees up until the last minute, with some being let go within weeks of starting. Employees told TechCrunch that the company was “a shitshow” with inconsistent product direction and dwindling cash. Ljung was seen as reluctant to be honest with the team, and unfocused as he partied around the world like a rock star.
Our report led to a flurry of follow-on coverage, prompting fans and artists to speak up in favor of the service. The rally was reminiscent of the love shown to Vine after Twitter announced it would shut down. Popular musician Chance The Rapper tried to get involved to save the company. He, like many other indie hip-hop artists, made their name on the platform as part of a genre that came to be called “SoundCloud Rap.” In the end, SoundCloud was saved when Vine wasn’t.
For now, music and other audio saved on SoundCloud is safe. But the company will need to find a way to make its subscription tiers more appealing and scale up its advertising despite having a much smaller staff to drive the changes. If it can't, SoundCloud could be back begging for cash in a year.
The new management should provide some additional confidence. I’ve interviewed both Ljung and Wahlforss in the past, and neither had answers to the big questions facing SoundCloud about its product direction, business model and the spurious copyright takedowns that have eroded its trust with musicians.
Trainor may be able to institute some more discipline at the startup. He was the CEO of Vimeo from 2012 to 2016, and has poached his former COO there to help run SoundCloud. They helped Vimeo fend off bigger rivals like YouTube by doubling down on what was special about the service: a focus on high-quality artful film rather than amateur viral videos. That experience makes Trainor a great fit to lead SoundCloud, which is fending off bigger rivals like Spotify and Apple Music.
SoundCloud’s best bet isn’t to battle them directly, but double down on the user-uploaded indie music scene, including garage demos, DJ sets, unofficial remixes and miscellaneous audio you can’t find elsewhere. Whether it stays independent long term or tries to seduce an acquirer, SoundCloud will benefit from spotlighting its unique community of creators and hardcore listeners.
It was easy to miss, with the impending end of civilization burning up the headlines, but a beyond-belief financial story recently crept into public view.
A Bloomberg headline on the story was a notable achievement in the history of understatement. It read: LIBOR'S UNCERTAIN FUTURE TRIGGERS $350 TRILLION SUCCESSION HEADACHE
The casual news reader will see the term "LIBOR" and assume this is just a postgame wrapup to the LIBOR scandal of a few years back, in which many of the world's biggest banks were caught manipulating interest rates.
It isn't. This is a new story, featuring twin bombshells from a leading British regulator – one about our past, the other our future. To wit:
Going back twenty years or more, the framework for hundreds of trillions of dollars worth of financial transactions has been fictional.
We are zooming toward a legal and economic clusterfuck of galactic proportions – the "uncertain future" Bloomberg humorously referenced.
LIBOR stands for the London Interbank Offered Rate. It measures the rate at which banks lend to each other. If you have any kind of consumer loan, it's a fair bet that it's based on LIBOR.
A 2009 study by the Cleveland Fed found that 60 percent of all mortgages in the U.S. were based on LIBOR. Buried somewhere in your home, you probably have a piece of paper that outlines the terms of your credit card, student loan, or auto loan, and if you peek in the fine print, you have a good chance of seeing that the rate you pay every month is based on LIBOR.
Years ago, we found out that the world's biggest banks were manipulating LIBOR. That sucked.
Now, the news is worse: LIBOR is made up.
Actually it's worse even than that. LIBOR is probably both manipulated and made up. The basis for a substantial portion of the world's borrowing is a bent fairy tale.
The admission comes by way of Andrew Bailey, head of Britain's Financial Conduct Authority. He said recently (emphasis mine): "The absence of active underlying markets raises a serious question about the sustainability of the LIBOR benchmarks. If an active market does not exist, how can even the best run benchmark measure it?"
As a few Wall Street analysts have quietly noted in the weeks since those comments, an "absence of underlying markets" is a fancy way of saying that LIBOR has not been based on real trading activity, which is a fancy way of saying that LIBOR is bullshit.
LIBOR is generally understood as a measure of market confidence. If LIBOR rates are high, it means bankers are nervous about the future and charging a lot to lend. If rates are low, worries are fewer and borrowing is cheaper.
It therefore makes sense in theory to use LIBOR as a benchmark for borrowing rates on car loans or mortgages or even credit cards. But that's only true if LIBOR is actually measuring something.
Here's how it's supposed to work. Every morning at 11 a.m. London time, twenty of the world's biggest banks tell a committee in London how much they estimate they'd have to pay to borrow cash unsecured from other banks.
The committee takes all 20 submissions, throws out the highest and lowest four numbers, and then averages out the remaining 12 to create LIBOR rates.
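Stripped of the mystique, that daily calculation is nothing more than a trimmed average. Here's a sketch of the procedure as just described - the numbers are made up, and this is obviously not the benchmark administrator's actual code:

// Take the 20 submissions, throw out the four highest and four lowest,
// and average the remaining 12.
function libor(submissions) {
  var sorted = submissions.slice().sort(function (a, b) { return a - b; });
  var middle = sorted.slice(4, sorted.length - 4);
  return middle.reduce(function (sum, rate) { return sum + rate; }, 0) / middle.length;
}

// Twenty hypothetical estimates, in percent:
libor([1.28, 1.29, 1.29, 1.30, 1.30, 1.31, 1.31, 1.31, 1.32, 1.32,
       1.32, 1.33, 1.33, 1.34, 1.34, 1.35, 1.36, 1.38, 1.40, 1.45]);
// the average of the middle twelve submissions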
Theoretically, a fine system. Measuring how scared banks are to lend to each other should be a good way to gauge market stability. Except for one thing: banks haven't been lending to each other for decades.
Up through the Eighties and early Nineties, as global banks grew bigger and had greater demand for dollars, trading between banks was heavy. That robust interbank lending market was why LIBOR became such a popular benchmark in the first place.
But beginning in the mid-nineties, banks began to discover that other markets provided easier and cheaper sources of funding, like the commercial paper or treasury repurchase markets. Trading between banks fell off.
Ironically, as trading between banks declined, the use of LIBOR as a benchmark for mortgages, credit cards, swaps, etc. skyrocketed. So as LIBOR reflected reality less and less, it became more and more ubiquitous, burying itself, tick-like, into the core of the financial system.
The flaw in the system is that banks don't have to report to the LIBOR committee what they actually paid to borrow from each other. Instead, they only have to report what they estimate they'd have to pay.
The LIBOR scandal of a few years ago came about when it was discovered that the banks were intentionally lying about these estimates. In some cases, they were doing it with the assent of regulators.
In the most infamous instance, the Bank of England appeared to encourage Barclays to lower its LIBOR submissions, as a way to quell panic after the 2008 crash.
It later came out that banks had not only lied about their numbers during the crisis to make the financial system look safer, but had been doing it generally just to rip people off, pushing the number to and fro to help their other bets pay off.
Written exchanges between bank employees revealed hilariously monstrous activity, with traders promising champagne and sushi and even sex to LIBOR submitters if they fudged numbers.
"It's just amazing how LIBOR fixing can make you that much money!" one trader gushed. In writing.
Again, this was bad. But it paled in comparison to the fact that the numbers these nitwits were manipulating were fake to begin with. The banks were supposed to be estimating how much it would cost them to borrow cash. But they weren't borrowing cash from anyone.
For decades now, the world's biggest banks have been dutifully reporting a whole range of numbers every morning at 11 a.m. London time – the six-month Swiss franc rate, the three-month yen, the one-month dollar, etc. And none of it seems to have been real.
These numbers, even when sociopathic lunatics weren't fixing them, were arbitrary calculations based on previous, similarly arbitrary calculations – a rolling fantasy that has been gathering speed for decades.
When regulators dug into the LIBOR scandal of a few years ago, they realized that any interbank lending rate that depended upon the voluntary reports of rapacious/amoral banks was inherently problematic.
But these new revelations tell us forcing honesty won't work, either. There could be a team of regulators sitting in the laps of every LIBOR submitter in every bank, and it wouldn't help, because there is no way to honestly describe a nonexistent market.
The FCA's Bailey put it this way (emphasis mine): "I don't rule out that you could have another benchmark that would measure what Libor is truly supposed to measure, which is bank credit risk in the funding market," he said. "But that would be – and I use this term carefully – a synthetic rate because there isn't a funding market."
There isn't a funding market! This is absurdity beyond satire. It's Chris Morris' "Cake is a made-up drug!" routine, only in life. LIBOR is a made-up number!
Think about this. Millions of people have been taking out mortgages and credit cards and auto loans, and countless towns and municipalities have all been buying swaps and other derivatives, all based on a promise buried in the fine print that the rate they will pay is based on reality.
Since we now know those rates are not based on reality – there isn't a funding market – that means hundreds of trillions of dollars of transactions have been based upon a fraud. Some canny law firm somewhere is going to figure this out, sooner rather than later, and devise the world's largest and most lucrative class-action lawsuit: Earth v. Banks.
In the meantime, there is the question of how this gets fixed. The Brits and Bailey have announced a plan to replace LIBOR with "viable risk-free alternatives by 2021."
This means that within five years, something has to be done to reconfigure a Nepalese mountain range of financial contracts – about $350 trillion worth, according to Bloomberg. A 28 Days Later style panic is not out of the question. At best, it's going to be a logistical nightmare.
"It's going to be a feast for financial lawyers," Bill Blain, head of capital markets and alternative assets at Mint Partners, told Bloomberg.
With Donald Trump in office, most other things are not worth worrying about. But global finance being a twenty-year psychedelic delusion is probably worth pondering for a few minutes. Man, do we live in crazy times.
Immutable JS data structures which are backwards-compatible with normal Arrays and Objects.
Use them in for loops, pass them to functions expecting vanilla JavaScript data structures, etc.
var array = Immutable(["totally", "immutable", {hammer: "Can’t Touch This"}]);

array[1] = "I'm going to mutate you!"
array[1] // "immutable"

array[2].hammer = "hm, surely I can mutate this nested object..."
array[2].hammer // "Can’t Touch This"

for (var index in array) { console.log(array[index]); }
// "totally"
// "immutable"
// { hammer: 'Can’t Touch This' }

JSON.stringify(array) // '["totally","immutable",{"hammer":"Can’t Touch This"}]'
This level of backwards compatibility requires ECMAScript 5 features like Object.defineProperty and Object.freeze to exist and work correctly, which limits the browsers that can use this library to the ones shown in the test results below. (tl;dr IE9+)
Performance
Whenever you deeply clone large nested objects, it should typically go much faster with Immutable data structures. This is because the library reuses the existing nested objects rather than instantiating new ones.
In the development build, objects are frozen. (Note that Safari is relatively slow to iterate over frozen objects.) The development build also overrides unsupported methods (methods that ordinarily mutate the underlying data structure) to throw helpful exceptions.
The production (minified) build does neither of these, which significantly improves performance.
We generally recommend using the "development" build that enforces immutability (and this is the default in Node.js). Only switch to the production build when you encounter performance problems. (See #50 for how to do that in Node or with a build tool - essentially, refer explicitly to the production build.)
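For reference, a rough sketch of what "explicitly refer to the production build" can look like in Node; the production bundle's filename below is an assumption and may differ between versions, so check issue #50 and your installed package before relying on it:

var Immutable = require("seamless-immutable"); // development build, the Node default

// Assumed path to the prebuilt production bundle - verify it against your version:
var ImmutableProd = require("seamless-immutable/seamless-immutable.production.min");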
Intentional Abstraction Leaks
By popular demand, functions, errors, dates, and React
components are treated as immutable even though technically they can be mutated.
(It turns out that trying to make these immutable leads to more bad things
than good.) If you call Immutable() on any of these, be forewarned: they will
not actually be immutable!
Add-ons
seamless-immutable is tightly focused on the mechanics of turning existing JavaScript data structures into immutable variants.
Additional packages are available to build on this capability and enable additional programming models:
Compact Cursor Library built on top of the excellent seamless-immutable. Cursors can be used to manage transitions and manipulations of immutable structures in an application.
API Overview
Immutable() returns a backwards-compatible immutable representation of whatever you pass it, so feel free to pass it absolutely anything that can be serialized as JSON. (As is the case with JSON, objects containing circular references are not allowed. Functions are allowed, unlike in JSON, but they will not be touched.)
Since numbers, strings, undefined, and null are all immutable to begin with, the only unusual things it returns are Immutable Arrays and Immutable Objects. These have the same ES5 methods you’re used to seeing on them, but with these important differences:
All the methods that would normally mutate the data structures instead throw ImmutableError.
All the methods that return a relevant value now return an immutable equivalent of that value.
Attempting to reassign values to their elements (e.g. foo[5] = bar) will not work. Browsers other than Internet Explorer will throw a TypeError if use strict is enabled, and in all other cases it will fail silently.
A few additional methods have been added for convenience.
For example:
Immutable([3, 1, 4]).sort()
// This will throw an ImmutableError, because sort() is a mutating method.

Immutable([1, 2, 3]).concat([10, 9, 8]).sort()
// This will also throw ImmutableError, because an Immutable Array's methods
// (including concat()) are guaranteed to return other immutable values.

[1, 2, 3].concat(Immutable([6, 5, 4])).sort()
// This will succeed, and will yield a sorted mutable array containing
// [1, 2, 3, 4, 5, 6], because a vanilla array's concat() method has
// no knowledge of Immutable.

var obj = Immutable({all: "your base", are: {belong: "to them"}});

Immutable.merge(obj, {are: {belong: "to us"}})
// This will return the following:
// Immutable({all: "your base", are: {belong: "to us"}})
Static or instance syntax
Seamless-immutable supports both static and instance syntaxes:
var Immutable = require("seamless-immutable").static;
Immutable.setIn(obj, 'key', data)
var Immutable = require("seamless-immutable");
obj.setIn('key', data)
Although the latter is shorter and is the current default, it can lead to
collisions, and some users may dislike polluting object properties when it comes
to debugging. As such, the first syntax is recommended, but both are supported.
Immutable.from
If your linter cringes at the use of Immutable without a preceding new
(e.g. ESLint's new-cap rule),
use Immutable.from:
Immutable.from([1, 2, 3]);
// is functionally the same as calling:
Immutable([1, 2, 3])
Immutable Array
Like a regular Array, but immutable! You can construct these by passing
an array to Immutable():
Immutable([1, 2, 3]) // An immutable array containing 1, 2, and 3.
flatMap
Effectively performs a map over the elements in the array, except that whenever the provided iterator function returns an Array, that Array's elements are each added to the final result.
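A small sketch of what that means in practice (the sample values are made up):

Immutable(["dog", "cat"]).flatMap(function(word) {
  return [word, word.toUpperCase()];
});
// returns Immutable(["dog", "DOG", "cat", "CAT"]), because each returned
// Array's elements are spliced into the final result.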
asObject
Effectively performs a map over the elements in the array, expecting that the iterator function will return an array of two elements - the first representing a key, the other a value. Then returns an Immutable Object constructed of those keys and values.
You can also call .asObject without passing an iterator, in which case it will proceed assuming the Array is already organized as desired.
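For example (a sketch with made-up values, assuming the iterator receives (value, index), as map does):

Immutable(["a", "b", "c"]).asObject(function(value, index) {
  return [value, index]; // a [key, value] pair
});
// returns Immutable({a: 0, b: 1, c: 2})

// Without an iterator, the Array is assumed to already hold [key, value] pairs:
Immutable([["hey", "you"], ["are", "joe"]]).asObject();
// returns Immutable({hey: "you", are: "joe"})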
asMutable
Returns a mutable copy of the array. For a deeply mutable copy, in which any instances of Immutable contained in nested data structures within the array have been converted back to mutable data structures, call Immutable.asMutable(obj, {deep: true}) instead.
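For example (a short sketch; the contents are arbitrary):

var array = Immutable(["hello", "world"]);
var mutableArray = Immutable.asMutable(array);

mutableArray.push("!"); // works, because the copy is a plain Array
mutableArray // ["hello", "world", "!"]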
Immutable Object
Like a regular Object, but immutable! You can construct these by passing an
object to Immutable().
Immutable({foo: "bar"}) // An immutable object containing the key "foo" and the value "bar".
To construct an Immutable Object with a custom prototype, simply specify the
prototype in options (while useful for preserving prototypes, please note
that custom mutator methods will not work as the object will be immutable):
function Square(length) { this.length = length };
Square.prototype.area = function() { return Math.pow(this.length, 2) };

Immutable(new Square(2), {prototype: Square.prototype}).area();
// An immutable object, with prototype Square,
// containing the key "length" and method `area()` returning 4
Currently you can't construct Immutable from an object with circular references. To protect from ugly stack overflows, we provide a simple protection during development. We stop at a suspiciously deep stack level and show an error message.
If your objects are deep, but not circular, you can increase this level from default 64. For example:
Immutable(deepObject, null, 256);
This check is not performed in the production build.
merge
var obj = Immutable({status: "good", hypothesis: "plausible", errors: 0});
Immutable.merge(obj, {status: "funky", hypothesis: "confirmed"});
// returns Immutable({status: "funky", hypothesis: "confirmed", errors: 0})

var obj = Immutable({status: "bad", errors: 37});
Immutable.merge(obj, [
  {status: "funky", errors: 1}, {status: "groovy", errors: 2}, {status: "sweet"}]);
// returns Immutable({status: "sweet", errors: 2})
// because passing an Array is shorthand for
// invoking a separate merge for each object in turn.
Returns an Immutable Object containing the properties and values of both
this object and the provided object, prioritizing the provided object's
values whenever the same key is present in both objects.
Multiple objects can be provided in an Array in which case more merge
invocations will be performed using each provided object in turn.
A third argument can be provided to perform a deep merge: {deep: true}.
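A sketch of the difference {deep: true} makes, reusing the obj shape from the example above (the extra nested key is made up):

var obj = Immutable({all: "your base", are: {belong: "to them", others: "intact"}});

Immutable.merge(obj, {are: {belong: "to us"}});
// shallow merge: the whole nested "are" object is replaced, so "others" is lost
// returns Immutable({all: "your base", are: {belong: "to us"}})

Immutable.merge(obj, {are: {belong: "to us"}}, {deep: true});
// deep merge: nested objects are merged key by key, so "others" survives
// returns Immutable({all: "your base", are: {belong: "to us", others: "intact"}})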
replace
var obj1 = Immutable({a: {b: 'test'}, c: 'test'});
var obj2 = Immutable.replace(obj1, {a: {b: 'test'}}, {deep: true});
// returns Immutable({a: {b: 'test'}});

obj1 === obj2     // returns false
obj1.a === obj2.a // returns true because the child .a objects were identical
Returns an Immutable Object containing the properties and values of the
second object only. With deep merge, all child objects are checked for
equality and the original immutable object is returned when possible.
A second argument can be provided to perform a deep merge: {deep: true}.
without
Returns an Immutable Object excluding the given keys, or the keys/values satisfying
the given predicate, from the existing object.
Multiple keys can be provided, either in an Array or as extra arguments.
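A sketch of both forms, using the static syntax; the predicate is assumed to receive (value, key), per the changelog entry that added predicate support:

var obj = Immutable({the: "forests", will: "echo", with: "laughter"});

Immutable.without(obj, "with");
// returns Immutable({the: "forests", will: "echo"})

Immutable.without(obj, ["will", "with"]);   // keys in an Array...
Immutable.without(obj, "will", "with");     // ...or as extra arguments
// both return Immutable({the: "forests"})

Immutable.without(obj, function(value, key) { return key === "will"; });
// returns Immutable({the: "forests", with: "laughter"})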
asMutable
var obj = Immutable({when: "the", levee: "breaks"});
var mutableObject = Immutable.asMutable(obj);

mutableObject.have = "no place to go";
mutableObject // {when: "the", levee: "breaks", have: "no place to go"}
Returns a mutable copy of the object. For a deeply mutable copy, in which any instances of Immutable contained in nested data structures within the object have been converted back to mutable data structures, call Immutable.asMutable(obj, {deep: true}) instead.
Releases
7.1.2
Treat Error instances as immutable.
7.1.1
Fix .npmignore
7.1.0
Add getIn and assumption that Promises are immutable.
7.0.0
Add Immutable.static as the preferred API. Default to development build in webpack.
6.3.0
Adds optional deep compare for .set, .setIn and .replace
6.2.0
Adds static alternatives to methods, e.g. Immutable.setIn
Fixes bug with setting a new object on an existing leaf array.
6.1.2
Fixes bug where on some systems arrays are treated as plain objects.
6.1.1
without now handles numeric keys the same way as string keys.
6.1.0
Alias Immutable.from() to Immutable() for linters.
6.0.1
React components are now considered immutable.
6.0.0
Add cycle detection.
5.2.0
Add update and updateIn.
5.1.1
Immutable(Object.create(null)) now works as expected.
5.1.0
Add predicate support to without()
5.0.1
Fix missing dev/prod builds for 5.0.0
5.0.0
In development build, freeze Dates and ban mutating methods. (Note: dev and prod builds were mistakenly
not generated for this, so to get this functionality in those builds, use 5.0.1)
4.1.1
Make setIn more null safe.
4.1.0
Adds set and setIn
4.0.1
Now when you require("seamless-immutable"), you get the development build by default.
4.0.0
main now points to src/seamless-immutable.js so you can more easily build with envify yourself.
3.0.0
Add support for optional prototyping.
2.4.2
Calling .asMutable({deep: true}) on an Immutable data structure with a nested Date no longer throws an exception.
2.4.1
Arrays with nonstandard prototypes no longer throw exceptions when passed to Immutable.
2.4.0
Custom mergers now check for reference equality and abort early if there is no more work needed, allowing improved performance.
2.3.2
Fixes a bug where indices passed into iterators for flatMap and asObject were strings instead of numbers.
2.3.1
Fixes an IE and Firefox bug related to cloning Dates while preserving their prototypes.
2.3.0
Dates now retain their prototypes, the same way Arrays do.
2.2.0
Adds a minified production build with no freezing or defensive unsupported methods, for a ~2x performance boost.
2.1.0
Adds optional merger function to #merge.
2.0.2
Bugfix: #merge with {deep: true} no longer attempts (unsuccessfully) to deeply merge arrays as though they were regular objects.
2.0.1
Minor documentation typo fix.
2.0.0
Breaking API change: #merge now takes exactly one or exactly two arguments. The second is optional and allows specifying deep: true.
1.3.0
Don't bother returning a new value from #merge if no changes would result.
1.2.0
Make error message for invalid #asObject less fancy, resulting in a performance improvement.
1.1.0
Adds #asMutable
1.0.0
Initial stable release
Development
Run npm install -g grunt-cli, npm install and then grunt to build and test it.
How to Obscure Any URL
How Spammers And Scammers Hide and Confuse
Last Updated Sunday, 13 January 2002
NOTICE: the IP address of this site has changed of late, and I've been unable to set aside time for the rather large task of revising this page. Its numerous links to the old IP address won't work. It'll be updated soon!
Since this page was first written in 1999, Internet Explorer and Netscape
have both begun dealing with URLs differently, particularly in versions 6 and above.
Some of the examples here will no longer work with those browser versions.
The weird-looking address above takes advantage of several
things many people don't know about the structure of a valid URL.
There's a little more to Internet addressing than commonly
meets the eye; there are conventions which allow for some
interesting variations in how an Internet address is expressed.
These tricks are known to the spammers and scammers, and
they're used freely in unsolicited mails. You'll also see them in
ad-related URLs and occasionally on web pages where the writer
hopes to avoid recognition of a linked address for whatever
reason. Now, I'm making these tricks known to you. Read
on, and you'll soon be very hard to fool.
(Note: Depending on your browser type and its version, some of the
oddly-formatted URLs on this page may not work. Also if you're on a LAN
and using a proxy [gateway] for Internet access, many of them are
unlikely to work. Also, fear not; this page does not exploit the "Dotless
IP Address" vulnerability of some IE versions.)
First take note of the "@" symbol that appears amid
all those numbers. In actual fact, everything between
"http://" and "@" is completely irrelevant!
Just about anything can go in there and it makes
no difference whatsoever to the final result. Here are two
examples:
Go ahead and use the links. If they work at all with your browser,
you'll be back to this page again.
This feature is actually used for authentication. If a
login name and/or password is required to access a web page,
it can be included here and login will be automatic.
If you didn't know better, you might think this page were at
playboy.com!
By the way, the @ symbol can be
represented by its hex code %40 to further
confuse things; this works for the IE browser, but not for Netscape. (Thanks to The Webskulker
for this.)
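If you'd like to see the split for yourself with tools newer than the browsers discussed here, the WHATWG URL parser built into current browsers and Node.js makes it explicit (a quick sketch):

// Everything between "http://" and "@" is treated as login credentials,
// not as the destination; only the part after "@" is the real host.
var u = new URL("http://www.playboy.com@www.pc-help.org/obscure.htm");
console.log(u.username); // "www.playboy.com" - just decorative authentication text
console.log(u.hostname); // "www.pc-help.org" - where the request actually goes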
All right, so what about that long number after the
"@"? How does 3468664375 get you to www.pc-help.org?
In actual fact, the two are equivalent to one another.
This takes a little explaining so follow me carefully here.
The first thing you need to know (most Net users know this),
is that Internet names translate to numbers called IP addresses.
An IP address is normally seen in "dotted decimal"
format. www.pc-help.org translates to 206.191.158.55.
So of course, this page's address can be expressed as: http://206.191.158.55/obscure.htm.
Numeric IP addresses are generally unrecognizable to people, and not
easily remembered. That's why we use names for network locations in the
first place.
Merely using an IP address, in its usual dotted-decimal format, in
place of the name is commonly done and can be quite effective at leaving
the human reader in the dark about which website he's visiting.
But there are other ways to express that same number.
The alternate formats are:
"dword" - meaning double word, because it consists essentially of two binary "words" of 16 bits, but expressed in decimal (base 10);
octal - base 8; and
hexadecimal - base 16.
The dword equivalent of 206.191.158.55 is 3468664375. Its octal and
hexadecimal equivalents are also illustrated below.
Why obscure names in the first place? Most often it's because by
publicly-available registration records, the owners of domain names
can often be identified. Even if the owner isn't traceable by that
record, his service provider is. The last thing any scammer or
spammer wants is to be found by his victims, or to have his service
provider alerted to his abuses. Although the use of obscured URLs is
far from their only means of avoiding retribution, it's been a favorite.
It's beginning to make some sense, isn't
it? But what's all that gibberish on the right? Here's how that works:
Individual characters of a URL's path and filename can be represented
by their numbers in hexadecimal form. Each hex number is
preceded by a "%" symbol to identify the following two
numbers/letters as a hexadecimal representation of the character. The
practical use for this is to make it possible to include spaces and
unusual characters in a URL. But it works for all characters and
can render perfectly readable text into a complete hash.
In my example, I have interspersed hex representations with the real
letters of the URL. It simply spells out
"/obscure.htm" in the final analysis:
/ o %62 s %63 ur %65 %2e %68 t %6D
/ o b s c ur e . h t m
The letters used in the hex numbers can be either upper or lower
case. The "slashes" in the address cannot be represented in
hex; nor can the IP address be rendered this particular way. But
everything else can be.
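If you'd rather not decode these by hand, JavaScript's built-in decodeURIComponent() will unscramble a hex-spattered path, and charCodeAt() plus toString(16) will produce the codes; a short sketch:

decodeURIComponent("/o%62s%63ur%65%2e%68t%6D"); // "/obscure.htm"

// Going the other way - the hex code for any single character:
"b".charCodeAt(0).toString(16); // "62", i.e. the %62 used above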
Hex character codes are simply the hexadecimal (base 16) numbers for
the ASCII character set; that is, the number-to-letter representations
which comprise virtually all computer text.
To find the numeric value for an ASCII character, I often use a
little batchfile I wrote for the purpose years ago; and then if I
want the hex equivalent I usually do the math in my head. It just
requires familiarity with the multiples of 16 up to 256.
For most people, the conversion is probably best done with a chart.
The best ASCII-to-hex chart I have ever seen is on the website of Jim
Price: http://www.jimprice.com/jim-asc.htm.
Jim explains the ASCII character set wonderfully well, and provides a
wealth of handy charts.
I can't improve on Jim's excellent work! Print out Jim's
ASCII-to-hex chart and you're in business. If Jim's site ever
disappears, let me know and I'll do a chart of my own.
IP addresses are most commonly written in the dotted-decimal format.
A dotted-decimal IP number normally has 4 numeric segments, each
separated by a period. The numbers must range from 0 to 255.
Translation of a network name to its IP address is usually done in
the background by your network software, invisible to the user. Given a
name, your browser queries a name server, a machine somewhere on
the Net which performs this basic network addressing function; it
thereby obtains the numeric IP address and then uses that address to
direct its requests to the right computer, somewhere out there on the
Net.
There is a standard utility which allows the user to perform
these name server lookups directly and see the results. It's
called NSLOOKUP.
A wide variety of nslookup utilities is available on the Net,
often for free download. Some provide a graphical interface under
Windows, but the original and most basic nslookup is run from a
textual command line. One such command-line utility is included
in my free Network Tracer.
Please download it if you're interested.
Place NSLOOKUP.EXE
in your Windows directory and you can use it from a DOS window. A
simple nslookup query is structured as follows:
nslookup [name or IP address] [name server]
A name server has to be specified if you're using Windows 9x/ME,
either by name or IP address. Find out the address of your ISP's
Primary DNS Server -- it can usually be found in your Dial-Up
Networking setup or in the documents provided for setup of your
Internet connection.
If you're using XP or NT, the name server need not be specified.
A valid query for my ISP's web server address would be:
nslookup www.nwi.net [name server]
Here's what that command puts out in response, with my comments:
Non-authoritative answer:
Name:    sundance.nwinternet.com
Address: 206.159.40.2
Aliases: www.nwi.net     (not the primary name given to that address, but a valid one)
It's a powerful utility; it can find names for known addresses,
addresses for known names, and a variety of other information relevant
to an Internet address. But doing some of the fancier things with NSLOOKUP is difficult if
you're not already technically savvy. For the technically inclined,
there is a manual; and several examples of
its use can be found in TRACE.BAT, the primary component of my Network Tracer.
If you're determined to avoid the DOS command line, and want a tool
that will do most of the thinking for you, I recommend NetScanTools, a
reasonably-priced network utility toolbox. It's available as a 30-day
shareware demo and a bargain at just $25. NetScanTools is not merely an
address-lookup utility; it can do a great many things. For a Windows
user trying to comprehend the nuts and bolts of the Net, it's a whole
world of discovery.
If you're using Internet Explorer, this address should work (It
doesn't work with at least some versions of Netscape): http://462.447.414.311/obscure.htm
Normally, the four IP numbers in a standard dotted-decimal address
will all be between 0 and 255. In fact they must translate to an
8-bit binary number (ones and zeroes), which can represent a quantity no
higher than 255.
But the way this number is handled by some software often allows for
a value higher than 255. The program uses only the 8 right-hand digits
of the binary number, and will drop the rest if the number is too large.
This means you can add multiples of 256 to any or all of the 4
segments of an IP address, and it will often still work. In my tests, it
was limited to 3 digits per number; values over 999 didn't work.
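You can check the example address above with one line of JavaScript: keeping only the 8 right-hand bits of each segment is the same as reducing it modulo 256.

[462, 447, 414, 311].map(function (n) { return n % 256; }).join(".");
// "206.191.158.55"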
I could create a math lesson about this, and tell
you all about bits and bytes and base 16. But it's not really
necessary. Anyone with a Windows system has a handy calculator
that makes it simple to convert decimal numbers to hex, and to
find the dword equivalent of any dotted-decimal IP number. You
should find it by selecting Start ... Programs ... Accessories
... Calculator. It will look like this:
or, in Scientific mode, it looks like this:
I suggest Scientific mode for this
purpose.
Start with an IP address. In this example we'll use 206.191.158.55.
Enter the following keystrokes into the calculator exactly as shown:
206 * 256 + 191 = * 256 + 158 = * 256 + 55 =
The dword equivalent of the IP address will be the
result. In this case, 3468664375.
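The same recipe in JavaScript, for anyone who would rather skip the calculator, along with the trip back to dotted-decimal:

// The calculator keystrokes above, as one expression:
var dword = ((206 * 256 + 191) * 256 + 158) * 256 + 55;
console.log(dword); // 3468664375

// And back again, peeling off 8 bits at a time:
[dword >>> 24, (dword >>> 16) & 255, (dword >>> 8) & 255, dword & 255].join(".");
// "206.191.158.55"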
Now, there is a further step that can make this address even more
obscure. You can add to this dword number any multiple of the quantity 4294967296 (that is, 256^4) -- and it will
still work. This is because when the sum is converted to its basic digital
form, the last 8 hexadecimal digits will remain the same. Everything
to the left of those 8 hex digits is discarded by the IP software and
therefore irrelevant.
Thus, the following URLs will also lead to this page:
There now exist a handful of utilities that will do dword (and other)
conversions of IP addresses and URLs. When time permits, I'll be sure
to list them on this page. Meanwhile, there's a handy script on
Matthias Fichtner's website which will quickly convert any IP address to
its dword value and vice-versa: http://www.fichtner.net/tools/ip2dword/.
The PING utility that's in every Windows system can decipher dword
IPs. In fact, it deals with every method of expressing an IP
address that's described on this page. (My thanks to Steven,
who pointed this out on the NTBugTraq list.)
Just open a DOS window and type:
ping [IPAddress]
PING will then do its usual job, in which it contacts the remote
system (if any) at that address and gauges its response times. In the
process, it displays the ordinary dotted-decimal equivalent of the IP
address you entered.
As if all this weren't enough, an IP address can also be represented
in octal form -- base 8.
The URL for this page with its IP address in octal form looks like
this: http://0316.0277.0236.067/obscure.htm Go ahead, try it. You'll be right back here once again.
Note the leading zeroes. They're necessary to convey to your
browser the fact that this is an octal number.
Octal numbers are easily derived with the Windows calculator in
Scientific mode. Enter a decimal number, then select the "Oct" button
at upper left. The octal number will appear. The reverse operation
translates octal to decimal.
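Or let JavaScript do it: toString(8) produces the octal digits, and the leading zero is what tells the browser to read them as octal.

[206, 191, 158, 55].map(function (n) { return "0" + n.toString(8); }).join(".");
// "0316.0277.0236.067"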
(Those who find all this unwieldy can always
use the handy URLomatic at www.samspade.org. It will reveal the
dotted-decimal IP address of a dword- or octal-formatted URL, as well as
decode hex character codes.
This link to the URLomatic will completely decipher my original
example address. Many thanks to Dan Renner
of R&B Computerhelp).
You thought that was all? Well, so did I, until one Daniel Dočekal
informed me otherwise. There is yet another obscure way to
express an IP address.
Starting with the method outlined above, you can readily calculate
the hexadecimal number for 206.191.158.55. In Scientific mode,
calculate the dword value. Then select the Hex button. The resulting
hexadecimal number (CEBF9E37) can be
expressed as an IP address in this manner:
0xCE.0xBF.0x9E.0x37
The "0x" designates each
number as a hex quantity.
The dots can be omitted, and the entire hex number preceded by 0x: 0xCEBF9E37
And, additional arbitrary hex digits can be added to the left of the
"real" number: 0x9A3F0800CEBF9E37
Ah, you thought you had it all nailed down? Well, it's mix-and-match time!
Believe it or not, the following URL, which uses hex, decimal, and octal numbers in the IP address, actually works: http://0xCE.191.0236.0x37/obscure.htm
A variation on the dword IP address is one where a portion of the IP address is similarly converted. This only works with the rightmost two or three numbers, not the leftmost.
Let's start again with 206.191.158.55. Leaving the "206" as it is, we do the same calculation with the last three numbers:
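Carrying that arithmetic out (the same keystrokes as before, applied only to the last three numbers) gives the partly-collapsed form of the address:

var partial = (191 * 256 + 158) * 256 + 55;
console.log("206." + partial); // "206.12557879"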
Meaningless or deceptive text can be added after
"http://" and before an "@" symbol.
The domain name can be expressed as an IP address, in dotted-decimal, dword, octal or hexadecimal format;
and all of these formats have variants.
Characters in the URL can be expressed as hexadecimal
(base 16) numbers.
An Increasingly Common Exception
As IP address space becomes more valuable, web hosting services
increasingly use systems that place many websites at one
IP address. The server differentiates between sites by means of the
domain name portion of the URL. Sites on such a server cannot be
addressed using the IP address alone.
Some Notes on Compatibility
I've been getting a lot of feedback about this page from people who
are running various browsers and proxies. So far, reports and my own
rather limited tests seem to indicate that:
hex-coded IPs and values over 255 in dotted-decimal IPs don't
work with Netscape;
most, perhaps all of the dword-coded IPs don't work with some
versions of IE; this could be an effect of the MS patch
for the "dotless IP" exploit.
Later IE versions seem to reject any hex-coded IP that's not
broken up by dots as in my first example above;
Opera 3.60 doesn't allow non-dotted hexadecimal IPs.
Netscape won't allow the following characters in the
authentication text: /?
IE won't allow the following characters in the authentication
text: /\# and it exhibits problems or inconsistencies with: %'"<>
MS-Proxy reportedly rejects almost any IP address that's not in
dotted-decimal IP format, as may some other proxies. Reports
indicate that most proxies handle them all just fine.
If you notice anything more you think I should know about URL formats
or the behavior of some particular software, feel free to drop me a note.
Then in CMD, cd into the directory netrunner is extracted to, then run: netrunner http://example.com/
OR
Use the installer (doesn't add netrunner to path yet)
Crown shyness (also canopy disengagement,[1]canopy shyness,[2] or intercrown spacing[3]) is a phenomenon observed in some tree species, in which the crowns of fully stocked trees do not touch each other, forming a canopy with channel-like gaps.[4][5] The phenomenon is most prevalent among trees of the same species, but also occurs between trees of different species.[6][7] There exist many hypotheses as to why crown shyness is an adaptive behavior, though research suggests that it might inhibit spread of leaf-eating insect larvae.[8]
Possible physiological explanations
The exact physiological basis of crown shyness is not certain.[6] The phenomenon has been discussed in scientific literature since the 1920s.[9] The variety of hypotheses and experimental results might suggest that there are multiple mechanisms across different species, an example of convergent evolution.
Some hypotheses contend that the interdigitation of canopy branches leads to “reciprocal pruning” of adjacent trees. Trees in windy areas suffer physical damage as they collide with each other during winds, and the resulting abrasions and collisions induce a crown shyness response. Studies suggest that lateral branch growth is largely uninfluenced by neighbors until disturbed by mechanical abrasion.[10] If the crowns are artificially prevented from colliding in the winds, they gradually fill the canopy gaps.[11] This explanation also accounts for instances of crown shyness between branches of the same organism. Proponents of this idea note that shyness is particularly seen in conditions conducive to this pruning, including windy forests, stands of flexible trees, and early-succession forests where branches are flexible and limited in lateral movement.[6][12] By this explanation, variable flexibility in lateral branches has a large bearing on the degree of crown shyness.
Similarly, some research suggests that constant abrasion at growth nodules disrupts bud tissue such that it is unable to continue with lateral growth. Australian forester M.R. Jacobs, who studied the crown shyness patterns in eucalyptus in 1955, believed that the trees' growing tips were sensitive to abrasion, resulting in canopy gaps.[13] Miguel Franco (1986) observed that the branches of Picea sitchensis (Sitka spruce) and Larix kaempferi (Japanese larch) suffered physical damage due to abrasion, which killed the leading shoots.[14][15]
A prominent hypothesis is that canopy shyness has to do with mutual light sensing by adjacent plants. The photoreceptor-mediated shade avoidance response is a well-documented behavior in a variety of plant species.[16] Neighbor detection is thought to be a function of several unique photoreceptors. Plants are able to sense the proximity of neighbors by sensing backscattered far-red (FR) light, a task largely thought to be accomplished by the activity of the phytochrome photoreceptors.[17] Many species of plant respond to an increase in FR light (and, by extension, encroaching neighbors) by directing growth away from the FR stimulus and by increasing the rate of elongation.[18] Similarly, blue (B) light is used by plants to induce the shade-avoidance response, likely playing a role in the recognition of neighboring plants,[19] though this modality is just beginning to be characterized.[20]
The characterization of these behaviors might suggest that crown shyness is simply the result of mutual shading based on well-understood shade avoidance responses.[6][21] Malaysian scholar Francis S.P. Ng, who studied Dryobalanops aromatica in 1977, suggested that the growing tips were sensitive to light levels and stopped growing when nearing the adjacent foliage due to the induced shade.[6][21]
A recent study has suggested that Arabidopsis shows different leaf placement strategies when grown amongst kin and unrelated conspecifics, shading dissimilar neighbors and avoiding kin. This response was shown to be contingent on the proper functioning of multiple photosensory modalities.[22] Studies have proposed similar systems of photoreceptor-mediated inhibition of growth as explanations of crown shyness,[6][21] though a causal link between photoreceptors and crown asymmetry has yet to be experimentally proven. This might explain instances of intercrown spacing that are only exhibited between conspecifics.[6][7]
Species
Trees that display crown shyness patterns include:
^ Ballaré, CL; Scopel, AL; Sánchez, RA (19 January 1990). "Far-red radiation reflected from adjacent leaves: an early signal of competition in plant canopies". Science. 247 (4940): 329–32. PMID 17735851. doi:10.1126/science.247.4940.329.
^ Ballare, C. L.; Sanchez, R. A.; Scopel, Ana L.; Casal, J. J.; Ghersa, C. M. (September 1987). "Early detection of neighbour plants by phytochrome perception of spectral changes in reflected sunlight". Plant, Cell and Environment. 10 (7): 551–557. doi:10.1111/1365-3040.ep11604091.
^ Ballare, C. L.; Scopel, A. L.; Sanchez, R. A. (June 1997). "Foraging for light: photosensory ecology and agricultural implications". Plant, Cell and Environment. 20 (6): 820–825. doi:10.1046/j.1365-3040.1997.d01-112.x.
^ Jansen, Marcel A. K.; Gaba, Victor; Greenberg, Bruce M. (April 1998). "Higher plants and UV-B radiation: balancing damage, repair and acclimation". Trends in Plant Science. 3 (4): 131–135. doi:10.1016/S1360-1385(98)01215-1.
^ Christie, JM; Reymond, P; Powell, GK; Bernasconi, P; Raibekas, AA; Liscum, E; Briggs, WR (27 November 1998). "Arabidopsis NPH1: a flavoprotein with the properties of a photoreceptor for phototropism". Science. 282 (5394): 1698–701. PMID 9831559. doi:10.1126/science.282.5394.1698.
^ a b c Ng, F.S.P. (1997). "Shyness in trees". Nature Malaysiana. 2: 34–37.
^ Crepy, María A.; Casal, Jorge J. (January 2015). "Photoreceptor-mediated kin recognition in plants". New Phytologist. 205 (1): 329–338. PMID 25264216. doi:10.1111/nph.13040.
Annual note to self – most of the world exists outside the tech bubble.
—–
We have a summer home in New England in a semi-rural area, just ~10,000 people in town, with a potato farm across the street. Drive down the road and you can see the tall stalks of corn waving on other farms. Most people aren’t in tech or law or teaching in universities; they fall solidly into what is called the working class. They work as electricians, carpenters, plumbers, in hospitals, restaurants, as clerks, office managers, farmers, etc. They have solid middle-class values of work, family, education and country – work hard, own a home, have a secure job, and save for their kids’ college and their retirement.
This summer I was sitting in the Delekta Pharmacy in the nearby town of Warren having a Coffee Cabinet (a coffee milkshake). It’s one of the last drugstores with a real soda fountain. The summer tourists mostly come through on the weekend but during the week the locals come by to gab with the guy behind the counter. There are four small wooden booths along the wall in front of the fountain, and as I drank my Cabinet I got to overhear townie conversations from the other three booths.
Unlike in the cafes I sit in back in the valley or in San Francisco, their conversations were not about tech.
While they own tech, smartphones and computers, most can’t tell you who the ex-CEO of Uber is, or the details of the diversity blowup at Google. More important issues dominate their daily lives.
I was listening to one guy talk about how much his mortgage and kid’s college expenses were increasing while he hadn’t gotten a raise in three years and was worried about paying the bills. A woman talked about her husband, and how after 21 years as an electrician in the local hospital, he had just been laid off. Others chimed in with their stories, best summarized by a feeling of economic anxiety. Of being squeezed with no real exit.
It was a long time ago, but I knew the feeling well.
I grew up in New York in a single-parent household that teetered on the bottom end of what today we’d call working class. My parents were immigrants and when they were divorced my mother supported us on the $125 a week she made as a bookkeeper. The bills got paid, and we had food in the house, but there was nothing extra left. No vacations. New clothes were bought once a year before school.
Years later when I got out of the Air Force, I installed broadband process control systems in automobile assembly plants and steel mills across the industrial heart of the Midwest. I got to see the peak of America’s manufacturing prowess in the 1970s, when we actually made things – before we shipped the factories and jobs overseas. I hung out with the guys who worked there, went bowling and shooting with them, complained about the same things, wives, girlfriends, jobs and bosses, and shared their same concerns.
Listening to these conversations in the Pharmacy, and the other stories I have heard as I explored the small towns here, reminded me that people I grew up with, served with and others I worked with, still live in this world. In fact, more than half of Americans fall into the working class. And the conversations I was listening to were a real-life narrative of the “middle-class squeeze.” While the economy has continued to grow, in the name of corporate efficiency and profitability we’ve closed the shipyards and factories and moved those jobs overseas. The bulk of those gains have ended up in the pockets of the very affluent. Income inequality stares you in the face here. The level of despair is high. The small city next to us has been hard hit in the opioid crisis: 63 people died last year.
My annual trek out here reminds me that I live in a Silicon Valley bubble—and that a good part of the country is not reading what we read, caring about what we care about or thinking about what we think about. They have a lot more immediate concerns.
It’s good to spend time outside the bubble – but I get to go back. My neighbors here, people in that pharmacy and the many others like them can’t. In the U.S. people used to move to where the jobs are. But today, Americans are less mobile. Some are rooted, embedded in their communities; and some are trapped — because housing is unaffordable where the better paying jobs are. And the jobs that are high paying are not the jobs they built their lives on. Likely their circumstances won’t have changed much by the time I return next year.
I don’t know how the people I listened to and talked to voted, but it’s easy to see why they might feel as if no one in Washington is living their lives. And that the tech world is just as distant as Hollywood or Wall Street.
In his book 1984, George Orwell detailed a dystopian world wherein a person or persona called “Big Brother” saw everything that people did and where the central government pushed its agenda through propaganda, spying, monitoring, and thought controls.
That book was published in 1949. It is now 2017, and while we do not exactly have a Big Brother persona governing us, the Orwellian scenario feels all too familiar. And it has not come about by means of some ultra-fascist government or political party. Rather, our loss of privacy and Big Brother’s influence on us are brought about by none other than our penchant for sharing on social media.
What privacy?
In 2013 Vint Cerf, who is widely considered a father of the internet, said that “privacy may actually be an anomaly.” Throughout history, people preferred communal settings in just about everything — the concept of solitude and privacy was largely limited to monasticism.
Greg Ferenstein outlined the history of 3,000 years of privacy through 46 images. You might notice that history agrees with Cerf — the artworks and imagery at least show how people did things communally. It was only during the industrial revolution that we started to develop a preference for privacy.
And with the rise of social media, that cycle means we are now moving again toward loss of privacy.
Imagine how much people have been sharing online, with friends and even the public. This includes photos, status updates, locations, all that while tagging friends who may not be aware they are being connected with photos, events, and places.
It’s not even limited to Facebook. No matter how little you share, all the metadata involved in just about anything you do online can constitute your digital persona.
All of these digital crumbs, so to speak, paint a digital picture of us, one that bots, machines, and even data scientists can assemble into a profile of who we are. Add to that the evolving technologies of facial recognition and machine learning, and tech companies might know more about us than we do.
And this is extremely useful to anyone who needs to do any customer targeting. Ask advertising agencies and marketers.
In fact, ask Facebook.
Did you know that the social network may have the capability to listen in even when we are not actively sharing information or using the mobile app?
Facebook may be listening
You heard that right.
Given the amount of permissions we give social networks when we install apps on our mobile devices, we might as well just hand them over privileged access to our personal lives. With passive listening technologies, for instance, Facebook might be able to eavesdrop on conversations.
In 2016, a University of South Florida mass communications professor, Kelli Burns, shared her observations that the Facebook app delivered content based on things she mentioned in a conversation.
The idea that Facebook is passively spying has since been debunked, and Burns herself said her comments may have been taken out of context. However, Facebook itself has acknowledged that it uses smartphone microphones in certain situations. “We only access your microphone if you have given our app permission and if you are actively using a specific feature that requires audio,” it said in a statement.
This refers to app features like song recognition and video capture, among others. Still, it does not explain away Burns’s observation that a few relevant ads were pushed into her feed after she mentioned certain keywords.
It all boils down to the permissions you have granted Facebook when you install the app. In most cases, granting permission is an all-or-nothing affair. This means you cannot cherry-pick the permissions to grant or deny when installing an app. You either accept or decline.
What you can do
This does not mean we can simply let Facebook or other applications get away with potentially being able to eavesdrop on our conversations in order to “serve better ads”.
The choices here involve four things:
Uninstall the social networking apps
Find alternative ways to run the applications or use alternative ones
Find apps that can block social networking apps from recording audio
Switch to secured and private decentralized social networks
My first solution is quite drastic, but for people who are really paranoid about their privacy, it is a good choice. Facebook is known to be a resource hog anyway — it reportedly shortens battery life significantly, due to its need to refresh and update on a regular basis. Thus, uninstalling apps like Facebook can help both improve your privacy and reduce battery drain.
On the second solution, there are alternative applications to Facebook and Messenger. For example, you can easily access the social network through your mobile web browser and set it to “desktop mode”.
There are also alternative apps like Fast and Friendly that give you access to the social network without the added overhead and features that the official app has. You can also use Telegram, which offers encrypted messaging.
Thirdly, you can take control of your smartphone’s mic through another means. On Android, an app called RYL or “record your life” lets you do just that.
It compiles a live recording, letting you keep track of your daily activities (at least the audio) for up to seven days. The added benefit? Because RYL locks the mic for its own use, other eavesdropping apps (Facebook, Google Assistant, etc.) will not be able to listen in.
There is a fourth alternative that is even more drastic – it might not exactly please avid Facebook users. You can leave Facebook in favor of other social networking services. But what exactly can we look forward to in social networking, when Facebook seems to be the apex of social networking apps today?
Perhaps a technology that offers more transparency, security, and assurance of user benefit would be a better alternative. Case in point: with the rise of blockchain technology as a distributed means of accomplishing transactions, exchanges, and digital contracts, social networking is a natural extension of information exchange, one that could offer both privacy and transparency. This is what Nexus, a new secured and decentralized cross-platform social network, is planning.
By running its platform on top of blockchain technology, Nexus integrates social networking, crowdfunding, and even e-commerce features. Nexus aims to “eliminate all invasion of privacy that large corporation are currently performing”, according to its founder, Jade Mulholland.
Toward the future of social networking
We are in an era in which we have chosen to give up our privacy for the convenience of being able to share and engage with our friends online. The more we share, the less privacy we have. Social networking does not have to stop there, however. And we do not need to entrust all our social connectivity needs to a single company.
Perhaps decentralization is key to ensuring a secure social network built for the long run. Nexus is actually launching its initial coin offering (ICO), which aims to raise resources and give users the chance to own a part of the social network through cryptographic tokens. Where blockchain-powered social networks will lead us, only time will tell. But it seems that the wave of the future is focused on decentralization, auditability, and flexibility offered by the blockchain.
Your privacy should still be considered sacred, especially if it pertains to what you say to another person face-to-face. If you are worried about your smartphone or other device listening in via its apps, you should act now, in order to further protect yourself against eavesdropping, which is very real.
ON CEDROS ISLAND IN MEXICO—Matthew Des Lauriers got the first inkling that he had stumbled on something special when he pulled over on a dirt road here, seeking a place for his team to use the bathroom. While waiting for everyone to return to the car, Des Lauriers, then a graduate student at the University of California, Riverside, meandered across the landscape, scanning for stone tools and shell fragments left by the people who had lived on the island in the past 1500 years.
As he explored, his feet crunched over shells of large Pismo clams—bivalves that he hadn't seen before on the mountainous island, 100 kilometers off the Pacific coast of Baja California. The stone tools littering the ground didn't fit, either. Unlike the finely made arrow points and razor-sharp obsidian that Des Lauriers had previously found on the island, these jagged flakes had been crudely knocked off of chunky beach cobbles.
"I had no idea what it meant," says Des Lauriers, now a professor at California State University (Cal State) in Northridge. Curiosity piqued, he returned for a test excavation and sent some shell and charcoal for radiocarbon dating. When Des Lauriers's adviser called with the results, he said, "You should probably sit down." The material dated from nearly 11,000 to more than 12,000 years ago—only a couple thousand years after the first people reached the Americas.
That discovery, in 2004, proved to be no anomaly; since then, Des Lauriers has discovered 14 other early sites and excavated two, pushing back the settlement of Cedros Island to nearly 13,000 years ago. The density of early coastal sites here "is unprecedented in North America," says archaeologist Loren Davis of Oregon State University in Corvallis, who joined the project in 2009.
The Cedros Island sites add to a small but growing list that supports a once-heretical view of the peopling of the Americas. Whereas archaeologists once thought that the earliest arrivals wandered into the continent through a gap in the ice age glaciers covering Canada, most researchers today think the first inhabitants came by sea. In this view, maritime explorers voyaged by boat out of Beringia—the ancient land now partially submerged under the waters of the Bering Strait—about 16,000 years ago and quickly moved down the Pacific coast, reaching Chile by at least 14,500 years ago.
Findings such as those on Cedros Island bolster that picture by showing that people were living along the coast practically as early as anyone was in the Americas. But these sites don't yet prove the coastal hypothesis. Some archaeologists argue that the first Americans might have entered via the continental interior and turned to a maritime way of life only after they arrived. "If they came down an interior ice-free corridor, they could have turned right, saw the beaches of California, and said, ‘To hell with this,’" says archaeologist David Meltzer of Southern Methodist University in Dallas, Texas.
The evidence that might settle the question has been mostly out of reach. As the glaciers melted starting about 16,500 years ago, global sea level rose by about 120 meters, drowning many coasts and any settlements they held. "We are decades into the search for coastal dispersers, and we're still waiting for solid evidence or proof," says Gary Haynes, an archaeologist at the University of Nevada in Reno, who thinks the first Americans likely took an inland route.
The hunt for that evidence is now in high gear. A dedicated cadre of archaeologists is searching for maritime sites dating to between 14,000 and 16,000 years ago, before the ice-free corridor became fully passable. They're looking at the gateway to the Americas, along stretches of the Alaskan and Canadian coasts that were spared the post–ice age flooding. They are even looking underwater. And on Cedros Island, Des Lauriers is helping fill in the picture of how early coastal people lived and what tools they made, details that link them to maritime cultures around the Pacific Rim and imply that they were not landlubbers who later turned seaward. "All eyes are on the coast," Meltzer says.
On a sunny June day, Des Lauriers crouches in a gully here, bracing himself against the wind blowing off the ocean. He leans over to examine what could be a clue to how people lived here 12,000 years ago: a delicate crescent of shell glinting in the sun. A few centimeters away, a sharply curved shell point lies broken in two pieces. Des Lauriers knows he's looking at the remains of an ancient fishhook. He has already found four others on the island. One of those, at about 11,500 years old, is the oldest fishhook discovered in the Americas, as reported this summer in American Antiquity.
Des Lauriers wasn't planning to collect artifacts on this trip, but the shell fishhook is too precious to leave to the elements. His team scrambles for anything they can use to package the delicate artifact. Someone produces a roll of toilet paper, and Des Lauriers scoops up the fragments with his trowel and eases them onto the improvised padding. Each fragment is wrapped snugly and slipped into a plastic bag.
Twenty years ago, most archaeologists believed the first Americans were not fishermen, but rather big-game hunters who had followed mammoths and bison through the ice-free corridor in Canada. The distinctive Clovis spear points found at sites in the lower 48 states starting about 13,500 years ago were thought to be their signature. But bit by bit, the Clovis-first picture has crumbled.
The biggest blow came in 1997, when archaeologists confirmed that an inland site at Monte Verde in Chile was at least 14,500 years old—1000 years before Clovis tools appeared. Since then, several more pre-Clovis sites have come to light, and the most recent date from Monte Verde stretches back to 18,500 years ago, although not all researchers accept it. Genetic evidence from precontact South American skeletons now suggests that the earliest Americans expanded out of Beringia about 16,000 years ago.
Not only were the Clovis people not the first to arrive, but many researchers also doubt the first Americans could have made it by land. Glaciers likely covered the land route through western Canada until after 16,000 years ago, according to recent research that dated minerals in the corridor's oldest sand dunes. Another study showed that bison from Alaska and the continental United States didn't mingle in the corridor until about 13,000 years ago, implying that the passage took at least 2000 years to fully open and transform into a grassland welcoming to megafauna and their human hunters.
That makes the coastal route the first Americans' most likely—or perhaps only—path. It would have been inviting, says Knut Fladmark, a professor emeritus of archaeology at Simon Fraser University in Burnaby, Canada, one of the first to propose a coastal migration into the Americas back in 1979. "The land-sea interface is one of the richest habitats anywhere in the world," he says. Early Americans apparently knew how to take full advantage of its abundant resources. At Monte Verde, once 90 kilometers from the coast, archaeologist Tom Dillehay of Vanderbilt University in Nashville found nine species of edible and medicinal seaweed dated to about 14,000 years ago.
On Cedros Island, artifacts suggest that people found diverse ways to make a living from the sea. That isn't a given because 13,000 years ago, the island was connected to the mainland, hanging off the Baja peninsula like a hitchhiker's outstretched thumb; early sites cluster around freshwater springs that would have been several kilometers inland back then. But Des Lauriers's work reveals that the Cedros Islanders ate shellfish, sea lions, elephant seals, seabirds, and fish from all sorts of ocean environments, including deep-water trenches accessible only by boat.
In addition to making fishhooks, the island's inhabitants fashioned beach cobbles into crude scrapers and hammers—"disposable razors," as Des Lauriers, a stone tool expert, calls them. Such tools are best for scraping and cutting plant fibers, suggesting that the islanders were processing agave into fishing lines and nets. Researchers have found a similar suite of tools at other early sites along the Pacific coast, hinting that fishing technologies were widespread even though the organic nets, lines, and boats likely decayed long ago.
Certain tool types found here suggest even more distant connections. Des Lauriers often finds stemmed points, a style of spear point found from Japan to Peru and perhaps used on the island to hunt sea mammals and native pygmy deer. The shell fishhooks even resemble the world's oldest known fishhooks, which were crafted from the shells of sea snails on Okinawa in Japan about 23,000 years ago.
Although the evidence of a widespread, sophisticated maritime way of life along the ancient Pacific coast—what Meltzer calls "Hansel and Gretel leaving a trail of artifacts"—is provocative, it can't prove the coastal migration theory, he says. The oldest sites on Cedros Island are younger than the first Clovis spear points used to bring down big game on the mainland.
But older coastal sites are beginning to turn up. This year Dillehay announced the discovery of a nearly 15,000-year-old site at Huaca Prieta, about 600 kilometers north of Lima. Its earliest residents lived in an estuary 30 kilometers from the Pacific shoreline but still ate mostly shark, seabirds, marine fish, and sea lions, and their artifacts resemble those at other coastal sites. "I was stunned how similar [the tools of Huaca Prieta] were to [those of] Cedros Island," Davis says.
Still, pinning down the coastal migration theory will take a string of well-dated sites beginning before 15,000 years ago in southwestern Alaska or British Columbia in Canada and extending through time down the coast. To find them, archaeologists will have to take the plunge.
Loren Davis tries to stay steady as he makes his way into a laboratory aboard the research vessel Pacific Storm. The archaeologist was desperately seasick in his cabin for 2 days in late May as the 25-meter-long ship fought rough seas more than 35 kilometers off the Oregon coast. With Davis laid low, his team members scanned the ocean floor with sound waves.
They are seeking the now-flooded landscape ancient maritime explorers would have followed on their journey south, when today's coastlines were dozens of kilometers inland. Some coastal travelers did eventually turn landward, as shown by early inland sites such as Oregon's Paisley Caves, which yielded a 14,200-year-old human coprolite. But the earliest chapters of any coastal migration are almost certainly underwater.
Sixteen thousand years later, it's tempting to envision such a migration as a race from beach to beach. But as people expanded into the uninhabited Americas, they had no destination in mind. They stopped, settled in, ventured beyond what they knew, and backtracked into what they did. So the first step for archaeologists is to figure out where, exactly, those early mariners would have chosen to stick around.
The decision likely came down to one resource: freshwater. "Water is the lifeblood of everything," Davis says. So he has been painstakingly mapping the probable courses of ancient rivers across the now-drowned coastline, hoping that those channels are still detectable, despite now being filled with sediment and covered by deep ocean.
As team members pulled up early results to show Davis during May's cruise, a black line representing the present-day sea floor squiggled horizontally across the screen. Then it diverged into two lines, a gap like a smile opening across the image: An ancient river channel lay below the modern sea floor, right where Davis's model had predicted. "If I hadn't been so sick—and if there had been alcohol on the ship—that would have been a champagne moment," he says. "We can [now] begin to visualize where the hot spots [of human occupation] are probably going to be."
This summer, Davis's colleague Amy Gusick, an archaeologist at Cal State in San Bernardino, used one of his maps to take the first sample from another probable hot spot: a drowned river off the coast of California's Channel Islands. Terrestrial sites on the islands have already yielded 13,000-year-old human bones as well as characteristically coastal stone tools. But since then, the rising sea has inundated 65% of the islands' ancient area. Gusick and her colleagues are confident that submerged sites, possibly even older than the ones on land, exist off today's coast.
In June, she used a 5-meter sampling tube to pierce what Davis's map told her was the ancient riverbank. The muck she collected will reveal whether ancient soil, perhaps including plant remains, pollen, animal bones, or human artifacts, can still be recovered from deep underwater. Eventually, Gusick hopes to understand the drowned landscape well enough to pick out anomalies on the sonar map—possible shell middens or houses—and target them for coring that might bring up artifacts and the organic material needed to date them. A date of 15,000 years or older would show that before the ice-free corridor fully opened, adept mariners had explored the Channel Islands, which were never connected to the mainland and could be reached only by boat.
"This is the biggest scientific effort to move us down the road to answering this question" of how and when people settled the Americas, says Todd Braje, an archaeologist at San Diego State University in California, one of the leaders of the coring project. "Those submerged landscapes are really the last frontier for American archaeology," says Jon Erlandson, an anthropologist at the University of Oregon in Eugene who has excavated on the Channel Islands for decades and also is part of the project.
All the same, to make a definitive case for the coastal route, researchers must find pre-Clovis coastal sites in the doorway to the Americas itself: on the shores of southwestern Alaska or British Columbia. Luckily, archaeologists working there may not even have to go underwater to do it.
About 13,200 years ago, someone strolled through the intertidal zone just above the beach on Calvert Island, off the coast of British Columbia, leaving footprints in the area's wet, dense clay. When high tide rolled in, sand and gravel filled the impressions, leaving a raised outline. Layers of sediment built up over the millennia, preserving the barely eroded footprints under half a meter of earth.
Daryl Fedje, an archaeologist at the University of Victoria (UVic) and the Hakai Institute on Quadra Island in Canada, spotted that outline while excavating on the beach in 2014. Since then, he and his UVic and Hakai colleague Duncan McLaren have documented 29 of those footprints beneath Calvert's beaches. A piece of wood embedded in a footprint's fill provided the radiocarbon date. "It raises the hairs on the back of your neck," says McLaren, who in April presented the footprints at the annual meeting of the Society for American Archaeology in Vancouver, Canada.
Such an intimate view of early coastal Americans is possible on Calvert Island because of a geological quirk. The melting ice sheets flooded coastlines elsewhere. But when the coasts of British Columbia and southwestern Alaska were suddenly freed from the weight of the nearby glaciers, parts of the underlying crust began to rebound, lifting some islands high enough to largely escape the flood.
To maximize their chances of finding ancient sites, McLaren, Fedje, and their UVic colleague Quentin Mackie have spent decades mapping the local sea level changes along the coast of British Columbia. On Calvert Island, where the footprints were discovered, sea level rose only 2 meters. Around nearby Quadra Island, local sea level actually fell, stranding ancient shorelines in forests high above modern beaches. There, "potentially the entire history of occupation is on dry land," Mackie says.
The painstaking work required to identify and search those ancient coastlines is paying off with a march of increasingly older dates from the British Columbia coast. The remains of an ancient bear hunt—spear points lying in a cluster of bear bones—in Gaadu Din cave on the Haida Gwaii archipelago date to 12,700 years ago. The Calvert footprints stretch back 13,200 years. And a cluster of stone tools next to a hearth on Triquet Island is 14,000 years old—the region's oldest artifact so far, according to radiocarbon dates from the hearth's charcoal. Although reports about the footprints and the Triquet tools have yet to be peer reviewed, several archaeologists say they are impressed by the British Columbia team's approach. "They're looking in exactly the right place," Erlandson says.
Despite the proliferating evidence for the coastal route, not everyone is ready to discount the ice-free corridor entirely. The region has barely been studied and is ripe for "interesting surprises," says John Ives, an archaeologist at the University of Alberta in Edmonton, Canada. For example, the corridor may not have been a welcoming grassland until 14,000 years ago, but Haynes says it is naïve to assume that people couldn't have ventured into the corridor as soon as the ice was gone. Before grass took root, "the inland corridor route would have been full of freshwater sources, seasonally migrating or resident waterfowl by the millions, and large and small mammals exploring new ranges," he says. "Eastern Beringia's inland foragers of 14,000 years ago were descendants of expert pioneers and could have traveled far south on foot."
And so the hunt continues. Before breakfast one morning on Cedros Island, Des Lauriers spreads out satellite images of the island's southern edge. Most of the land appears as brown pixels, as one would expect from a desert island. But here and there, clusters of blue pixels appear—signs of moisture in the ground. Find the springs, Des Lauriers knows, and he'll find the people.
Davis and the rest of the team pile into the back of a pickup truck, and Des Lauriers follows a dirt path to a spring he hasn't visited before. The patch of green lies at the bottom of a steep-sided arroyo, which is otherwise bone dry. Algae cover the surface of a meter-deep pool. The dark soil is rich with organic matter, unusual for arid Cedros Island and possibly indicating an ancient settlement. Stone tools characteristic of the earliest islanders dot the surface. "There's a lot of stuff here, Matt," Davis calls to Des Lauriers. "It's punching all the boxes."
Interspersed with the recognizably early tools are things neither of them has seen on the island before: large, striated scallop shells belonging to a species known as mano de león (lion's paw). Today those scallops live in lagoons east of here, on the coast of the Baja peninsula. Des Lauriers says he suspects that similar lagoons connected Cedros Island to the mainland before 13,000 years ago. Were people here early enough to visit such lagoons? Could those shells be hinting at a phase of settlement even older than the one signaled by the Pismo clams 13 years ago?
To find out, Des Lauriers will have to wait until the team excavates and takes samples for radiocarbon dating. He records the site's GPS coordinates and then, just as people have done here for millennia, sets off up the arroyo in search of the next source of freshwater.
The Soul of The Sims, by Will Wright --
Macintosh HD:XmotiveHarness:src/Motive.c --
Tuesday, January 28, 1997 / 9:25 AM
This is the prototype for the soul of The Sims, which Will Wright
wrote on January 23, 1997.
I had just started working at the Maxis Core Technology Group on
"Project X" aka "Dollhouse", and Will Wright brought this code
in one morning, to demonstrate his design for the motives,
feedback loop and failure conditions of the simulated people.
While going through old papers, I ran across this print-out that
I had saved, so I scanned it and cleaned the images up, and got
permission from Will to publish it.
This code is an interesting example of game design, programming
and prototyping techniques. The Sims code has certainly changed
a lot since Will wrote this original prototype code. For
example, there is no longer any "stress" motive. And the game
doesn't store motives in global variables, of course.
My hope is that this code will give you a glimpse of how Will
Wright designs games, and what was going on in his head at the
time!
November 9, 2016
This article was contributed by Neil Brown
For a little longer than a year now, I have been using Notmuch as my primary means
of reading email. Though the experience has not been without some
annoyances, I feel that it has been a net improvement and expect to keep
using Notmuch for quite some time. Undoubtedly it is not a tool suitable
for everyone though; managing email is a task where individual needs and
preferences play an important role and different tools will each fill a
different niche. So before I discuss what I have learned about Notmuch,
it will be necessary to describe my own peculiar preferences and
practices.
Notmuch context
I can identify three driving forces in my attitude to email. First
there is a desire to be in control. I want my email to be stored primarily
on my hardware, preferably in my home. For this reason, various popular
hosted email services are of little interest to me. Second is the
difficulty I have with throwing things away; I'm sure I'll never want to
look at 99.999% of my email a second time, but I don't know which 0.001%
will interest me again, and I don't want to risk deleting something I may
someday want. Finally, I am somewhat obsessive about categorization.
"A place for everything, and everything in its place" is a goal,
but rarely a reality, for me. Email is one area where that goal seems
achievable, so I have a broad collection of categories for filing incoming
mail, possibly more than I need.
My most recent experience before committing to Notmuch was to use Claws Mail as an IMAP client that
accessed email using the Dovecot IMAP
server running on a machine in my home. This was sufficient for many years,
mostly because I work at home with good network connectivity between
client and server. On those rare occasions when I traveled to the other
side of the world, latency worsened and the upstream bandwidth over my
home ADSL connection was not sufficient to provide acceptable service; it
was bearable, but that is all. Using procmail to filter my email into
different folders met most of my obsessive need to categorize, but when an
email sensibly belonged in multiple categories, there wasn't really any
good solution.
I think the frustration that finally pushed me to commit to the rather
large step of transitioning to Notmuch was the difficulty of searching.
Claws doesn't have a very good search interface and Dovecot (at least
in the default configuration) doesn't have very good performance. I knew
Notmuch could do better, so I made the break.
A close second in the
frustration stakes was that I had to use a different editor for composing
email than I used for writing code. Prior to choosing Claws, I used the
Emacs View Mail
mode; it had been difficult giving up that
seamless integration between code editor and email editor. Notmuch offered
a chance to recover that uniformity.
Notmuch of a mail system
Notmuch has been introduced and reviewed (twice) in these pages
previously so only a brief recap will be provided here. Notmuch describes
itself as "not much of an email program"; it doesn't aim to
provide a friendly user interface, just a back-end engine that indexes and
retrieves email messages. Most of the user-interface tasks are left to a
separate tool such as an Emacs mode that I use. In this
vein of self-deprecation, the web site states that
even "for what it does do, that work is
provided by an external library, Xapian". This is a little unfair as
Notmuch does contain other functionality. It decodes MIME messages in
order to index the decoded text with the help of libgmime. It manages a
configuration file with the help of g_key_file
from GLib. And it will decrypt encrypted messages, using GnuPG. It even
has some built-in functionality for managing tags and tracking message
threads.
The genius of Notmuch is really the way it combines all these various
libraries together into a useful whole that can then be used to build a
user interface. That interface can run the Notmuch tool separately, or
can link with the libnotmuch library to perform searches and
access email messages.
Notmuch need for initial tagging
Notmuch provides powerful functionality but, quite appropriately, does
not impose any particular policy for how this functionality should be used.
It quickly became clear to me that there is a tension between using tags
and using saved searches as the primary means of categorizing incoming
email. Tags are simple words, such as "unread",
"inbox", "spam", or "list-lkml", that can be
associated with individual
messages. Saved searches were not natively supported by
Notmuch before version 0.23, which was released in early October (and which
calls them "named queries"), but are easily supported by
user-interface implementations.
Using tags as a primary categorization is the idea behind the "Approaches to initial
tagging" section of the Notmuch documentation. This page provides
some examples of how a "hook" can be run when new mail arrives to
test each message against a number of rules and then to possibly add a
selection of tags to that message. The user interface can then be asked to
display all messages with a particular tag.
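As a rough illustration only (this is not taken from the Notmuch documentation or the author's setup), a post-new hook along those lines might look like the following Python script; the tag names, addresses, and rules are purely hypothetical:

#!/usr/bin/env python3
# Hypothetical notmuch post-new hook: retag freshly indexed mail.
# Assumes new mail is initially tagged "new" (via new.tags in the config).
import subprocess

def tag(query, *tags):
    # "notmuch tag" adds (+) or removes (-) the given tags on every message
    # matching the query that follows "--".
    subprocess.run(["notmuch", "tag", *tags, "--", query], check=True)

tag("tag:new and to:linux-kernel@vger.kernel.org", "+list-lkml")   # mailing-list mail
tag("tag:new and from:orders@example.com", "+commercial")          # hypothetical sender
tag("tag:new", "-new")                                             # drop the temporary tag once the rules have run

The appeal of this style is that classification happens exactly once, at delivery time.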
I chose not to pursue this approach, primarily because I want to be able
to change the rules and have the new rule apply equally to old emails,
which doesn't work when rules are applied at the moment of mail delivery.
The alternative is to use fairly complex saved searches.
This ran into a problem when I wanted one saved search to make reference to
another, as neither the Emacs interface nor the Notmuch backend had a
syntax including one saved search in another search. For example, I have one saved
search to identify email from businesses (that I am happy to receive email
from) whose mail otherwise looks like spam.
So my "spam" saved search is something like:
tag:spam and not saved:commercial
The new "named
queries" support should make this easy to handle but, until I upgrade
my Notmuch installation, I have a wrapper script around the
"notmuch" tool that performs simple substitutions to interpolate
saved searches as required.
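The wrapper itself is not shown in the article, but the idea is simple enough to sketch. The following Python sketch (the saved-search definitions are made up, apart from the "commercial" and "spam" examples above) expands saved:NAME tokens, recursively, before handing everything to the real notmuch binary:

#!/usr/bin/env python3
# Illustrative wrapper that interpolates saved searches into notmuch queries.
import os, re, sys

# Example saved searches; in practice these would live in a config file.
SAVED = {
    "commercial": "from:orders@example.com or from:news@example.net",
    "spam": "tag:spam and not saved:commercial",
}

def expand(text, depth=0):
    # Replace each saved:NAME with its (parenthesized) definition, recursively,
    # so that one saved search can refer to another.
    if depth > 10:
        raise RuntimeError("saved-search expansion loop?")
    return re.sub(r"saved:(\w+)",
                  lambda m: "( " + expand(SAVED[m.group(1)], depth + 1) + " )",
                  text)

os.execvp("notmuch", ["notmuch"] + [expand(arg) for arg in sys.argv[1:]])

The real wrapper presumably differs in detail, but the substitution idea is the same.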
It also causes a minor problem in that I have several saved searches
that are intermediaries that I'm not directly interested in, but which
still appear in my list of saved searches. Those tend to clutter up the
main screen in the Emacs interface.
Unfortunately, the indexing that Notmuch performs is not quite complete,
so some information is not directly accessible to saved searches, resulting
in the need for some limited handling at mail delivery time. Notmuch does
not index all headers; two missed headers that are of interest to me are
"X-Bogosity" and "References".
I use bogofilter to
detect spam, which adds the "X-Bogosity" header to
messages to indicate their status. Further, when someone replies to an
email that I sent out, I like that reply to be treated differently from
regular email, and particularly to get a free pass through my spam filter.
I can detect replies by simple pattern matching on the References or In-reply-to headers. While Notmuch
does include these values in the index so that threads can be tracked, it
does not index them in a way that allows pattern matching, so there is no
way for Notmuch to directly find replies to my emails.
To address this need, I have a small procmail
filter that runs bogofilter and then files email in one of the folders
"spam", "reply", or "recv" depending on which
headers are found. Notmuch supports "folder:" queries for
searches, so that my saved search can now differentiate based on these
headers that Notmuch cannot directly see.
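The recipe itself isn't shown, but a minimal procmail sketch of that kind of filter could look roughly like this (the example.org address stands in for the author's own domain, and the trailing slashes deliver to maildir folders named as in the text):

:0fw
| bogofilter -e -p

:0
* ^(References|In-Reply-To):.*@example\.org
reply/

:0
* ^X-Bogosity: Spam
spam/

:0
recv/

The first recipe passes each message through bogofilter, which adds the X-Bogosity header; replies to the author's own messages are then filed before the spam test, giving them the free pass described above.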
I find that tags still are useful, but that use is largely
orthogonal to classification based on content. When new mail arrives, it is
automatically tagged as both "unread" and "inbox".
When I read a message, the "unread" tag is cleared;
when I archive
it, the "inbox" tag is cleared. I would like an extra tag,
"new", which would be cleared as soon as I see the
subject in a list of
new email, but the Emacs interface I use doesn't yet support that.
There are other uses for tags, such as marking emails that I should
review when submitting my tax return or that need to be reported to
bogofilter because it guessed wrongly about their spam status, but they all
reflect decisions that I consciously make rather than decisions that are
made automatically.
Notmuch remote access
Remote access via IMAP can be slow, but that is still faster than not
having remote access at all, which is the default situation when the
mail store only provides local access. I have two
mechanisms for remote access that work well enough for me.
When I am in my home city, I only need occasional remote access; this
is easily achieved by logging in remotely with SSH and running
"emacsclient -t" in a terminal window. This connects to my
running Emacs instance and gives me a new window through which I can access
Notmuch almost as easily as on my desktop. A few things don't work
transparently, viewing PDF files and other non-text attachments in particular,
but as this is only an occasional need, lack of access to non-text content
is not a real barrier. Here we
see again the genius of Notmuch in making use of existing technology rather
than inventing everything itself. Notmuch isn't contributing at all to
this remote access but, since it supports Emacs as a user-interface, all the
power of Emacs is immediately available.
For times when I am away from home and need more regular and complete
remote access, there is muchsync,
a tool that synchronizes two Notmuch mail stores. All email
messages are stored one per file, so synchronizing those simply requires
determining which files have been added or removed since the last
synchronization and copying or deleting them. Tags are stored in the Xapian database,
so a little more effort is required there but, again, muchsync just looks to
see what has changed since the last sync and copies the relevant tags. I
don't know yet if muchsync will synchronize the named queries and other
configuration that can be stored in the database in the latest Notmuch
release. Confirming that is a major prerequisite to upgrading.
Before discovering muchsync, I had used rsync to synchronize mail
stores; I was happy to find that muchsync was significantly faster. While rsync
is efficient when there are small changes to large files, it is not so
efficient when there are small changes to a large list of files. The first
step in an rsync transaction is to exchange a complete list of file names,
which can be slow when there are tens of thousands of them. Muchsync doesn't waste
time on this step as it remembers what is known to be on the replica, so it
can deduce changes locally.
With muchsync, reading email on my notebook is much like reading email
on my desktop. Unfortunately, I cannot yet read email on my phone, though I
don't personally find that to be a big cost. There is a web interface for Notmuch
written in Haskell, but I have not put enough effort into that to get
it working so I don't know if it would be a usable interface for me.
When Notmuch mail is too much
As noted above, I don't like deleting email because I'm never quite sure what
I want to keep. Notmuch allows me to simply clear the inbox
flag; thereafter I'll never see the message again unless I explicitly
search for older messages,
as my saved searches all include that tag. As a result, I haven't deleted
email since I started using Notmuch and have over 600,000 messages at
present (528,000 in the last year, over half of that total from the
linux-kernel mailing list). The mail store and associated index consume nearly ten
gigabytes. I'm hoping that Moore's law will save me from ever having to
delete any of this. This large store also lets me see whether a very large amount
of email really is too much or whether, as the
program claims, "that's not much mail".
As far as I can tell, the total number of messages has no effect on
operations that don't try to access all of those messages, so extracting a
message by message ID, listing messages with a particular tag, or adding or
clearing a tag, for example, are just as fast in a mail store with 100,000
messages as in one with 100 messages. The times when a lot of
mail can seem to be too much are when a search matches thousands of messages
or more. There are two particular times when I find this
noticeable.
As you might imagine, given my need for categorization, I have quite a
few saved searches. The Emacs front end for Notmuch has a
"hello" page that helpfully lists all the saved searches
together with the number of matching messages. Some of these searches are
quite complex and, while the complexity doesn't seem to be a particular
problem, the number of matches does. Counting the 217,952 linux-kernel messages
still marked as in my inbox takes
four to eight seconds, depending on the hardware. It only takes a few saved
searches that take more than a couple of
seconds for there to be an irritating lag when Emacs wants to update the
"hello" page. Similarly, generating the list of matches for a
large search can take a couple of seconds just to start producing the list,
and much longer to create the whole list.
None of these delays need to be a problem. Having precise
up-to-the-moment counts for each search is not really necessary, so updating
those counts asynchronously would be perfectly satisfactory and rarely
noticeable. Unfortunately, the Notmuch Emacs mode updates them all
synchronously and (in the default configuration) does so every time the
"hello" window is displayed. This delay can become tiresome.
When displaying the summary lines for a saved search, the Emacs
interface is not synchronous, so there is no need to wait for the full list
to be generated, but one still needs to wait the second or two for the
first few entries in a large list to be displayed. If the condition
"date:-1month.." is added to a search, only messages that
arrived
in the last month will be displayed, but they will normally be displayed
without any noticeable delay as there are far fewer of them. The
user interface could then collect earlier months asynchronously so they
can be displayed quickly if the user scrolls down. The Emacs interface
doesn't yet support this approach.
Notmuch locking
As a general rule, those Notmuch operations that have the potential to
be slow can usually be run asynchronously, thus removing much of the cost
of the slowness. Putting this principle into practice causes one to
quickly run up against the somewhat interesting approach to locking that
Xapian uses for the indexing database.
When Xapian tries to open the database for write access and finds that it
is already being written to, its response is to return an error.
As I run "notmuch new" periodically in the background to
incorporate new mail, attempts to, for example, clear the "inbox"
flag sometimes fail because the database cannot be updated, and I have to
wait a moment and try again. I'd much rather Notmuch did the waiting for
me transparently.
If one process has the database open for read access and another process
wants write access, the writer gets the access it wants and the reader
will get an error the next time that it tries to retrieve data. This may
be an appropriate approach for the original use case for Xapian but seems
poorly suited for email access. It was sufficient to drive me to extend my
wrapper script to take a lock on a file before calling the real Notmuch
program, so that it would never be confronted with unsupported
concurrency.
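Extending a wrapper like the one sketched earlier to serialize access is straightforward; one possible shape (the lock-file path is arbitrary, and the background "notmuch new" job would have to go through the same wrapper) is:

# Possible locking extension for the wrapper: hold an exclusive lock on a
# file for the duration of every notmuch invocation.
import fcntl, os, subprocess, sys

LOCKFILE = os.path.expanduser("~/.notmuch-wrapper.lock")   # illustrative path

with open(LOCKFILE, "w") as lock:
    fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until no other wrapper holds the lock
    result = subprocess.run(["notmuch"] + sys.argv[1:])
sys.exit(result.returncode)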
The most recent version of Xapian, the 1.4 series released in recent
months, adds support for blocking locks, and Notmuch 0.23 makes use of
these to provide a more acceptable experience when running Notmuch
asynchronously.
Working with threads
One feature of Notmuch that I cannot quite make my mind up about is the
behavior of threads. In a clear contrast to my finding with JMAP, the
problem is not that the threads are too simplistic, but that they are
rich and I'm not sure how best to tame them.
As I never delete
email, every message in a thread remains in the mail store indefinitely.
When Notmuch performs a search against the mail store it will normally list
all the threads in which any message matches the search criteria. The
information about the thread includes the parent/child relationship between
messages, flags indicating which messages matched the search query, and
what tags each individual message has.
The Emacs interface uses the parent/child information to
display a tree structure using indenting. It uses the "matched"
flag to de-emphasize the non-matching messages, either greying them out in
the message summary list or collapsing them to a single line in the default
thread display, which concatenates all messages in a thread into a single
text buffer. It uses some of the tags to adjust the color or font, for example to
highlight unread messages.
This all makes perfect sense and I cannot logically fault it, yet
working with threads sometimes feels a little clumsy and I cannot say why.
The most probable answer is that I haven't made the effort to learn all the
navigation commands that are available; a rich structure will naturally
require more subtle navigation and I'm too lazy to learn more than the
basics until they prove insufficient. Maybe a focus on some self-education
will go a long way here. Certainly I like the power provided by Notmuch
threads, I just don't feel that I personally have tamed that power yet.
Notmuch of a wish list
Though I am sufficiently happy with Notmuch to continue using it, I
always seem to want more. The need for sensible locking and for native
saved searches should be addressed once I upgrade to the latest release, so
I expect to be able to cross them off my wish list soon.
Asynchronous updates of the match-counts for saved searches and for the
messages in a summary is the wish that is at the top of my list, but my
familiarity with Emacs Lisp is not sufficient to even begin to address
that, so I expect to have to live without it for a while yet.
One feature that is close to the sweet spot for being both desirable and
achievable is to support an outgoing mail queue. Usually when I send email
it is delivered quite promptly, though not instantly, to the server for my
email provider. Sometimes it takes longer, possibly due to a network
outage, or possibly due to a configuration problem. I would like outgoing
email to be immediately stored in the Notmuch database with a tag to say
that it is queued. Then some Notmuch hook could periodically try to send
any queued messages, and update the tag once the transmission was
successful. This would mean that I never have to wait while mail is sent,
but can easily see if there is anything in the outgoing queue, and can
investigate at my leisure.
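None of that exists today, but a rough sketch of the kind of hook I have in mind, run periodically from cron, might look like the following; the queued and sent tags and the use of sendmail are assumptions for illustration, not an existing Notmuch feature:
#!/bin/sh
# Hypothetical outgoing-queue flusher: try to deliver every message tagged
# "queued", and retag it "sent" only if delivery succeeds.
notmuch search --output=messages tag:queued | while read -r id; do
    file=$(notmuch search --output=files "$id" | head -n 1)
    if sendmail -t < "$file"; then
        notmuch tag -queued +sent -- "$id"
    fi
done
Anything that fails to send simply keeps its queued tag and is retried on the next run.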
There are plenty of other little changes I would like to see in the user
interface, but none really interesting enough to discuss here. The
important aspect of Notmuch is that the underlying indexing model is sound
and efficient and suits my needs. It is a good basis on which to
experiment with different possibilities in the user interface.
When dinosaurs ruled the earth, there was a young kid who had just installed Red Hat Linux 5 on his computer. Fascinated by the “OS for the elite”, he was about to experience a devouring frustration. The kind that gives rise to dark divinities that come to haunt you in your sleep for years to come:
Trying to exit vim.
After a few unsuccessful attempts, the computer's power switch was the only option left. Just like that, the dream of a promising career in elite-land was shattered for that young boy.
That young boy was me.
High tech editor, low tech coder
My stubbornness always got the better of my reason, and vim was my editor for years to come. Since rough starts are the perfect driver for a good love story, I then ditched it to take a stroll in NetBeans land. After a while a pardon was due and I got back to it. Then I ditched it again for Visual Studio. Got back to it again. And so on.
Code editors today try to do everything and a pair of boots. Even vim, known for its minimalist looks, is very prone to config-pr0n worthy of the most wild and nasty fantasies.
I’ll show you mine if you show me yours
At one point I got myself in the middle of a config discussion between developers. They were arguing about good .vimrc config options and plugins for vim.
I decided to participate and show my ~60-line .vimrc file. They laughed. It was so small. Almost useless.
That got me thinking:
What do I really need in a code editor?
After giving some hard thought to my usage/preferences and some possible optimizations, I took a look at the market of code editors and decided to upgrade from vim to vi (by vi I mean nvi 1.81.6).
The rest of this blog post is about the good parts of nvi (vi, not vim) and how a minimal approach can actually free you from the clutches of bloatware and help you perform better.
Minimal config file (.exrc)
The nvi config file is named .exrc, here is my current setup:
set showmode
set showmatch
set ruler
set shiftwidth=2
set tabstop=2
set ts=2
set verbose
set leftright
set cedit=\
set filec=\
10 lines only. No maintenance. No BS.
These options are mostly similar to their .vimrc counterparts. A special note for cedit and filec: both are set to =\<TAB> (the <TAB> is the actual tab character after the \). The cedit option sets the character that triggers command expansion in colon-command mode. The filec option sets the character for file name expansion (auto-complete) when opening a file inside vi (e.g. with :e).
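For example, with filec set to the tab character, a partial name on the colon command line can be completed by hitting tab (the file name here is just an illustration):
:e sr<TAB>      ask nvi to complete the name, e.g. to src/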
All of these are well documented in the man page.
$ man nvi
No unicode support
There are some vi implementations, like nvi2, that support multibyte characters. But for this blog post I am assuming your vi is plain nvi.
No multibyte, wide-char, wtf-8, or extended codepoints. Although those are very important to learn and master, I prefer to keep code in plain, strict, single-byte ASCII (which UTF-8 supersedes).
Why?
It keeps the language coherent with the programming reserved words (more than logic constructs, while, for and if are English words).
It makes it perfectly visible when an accidental wide-char is inserted (particularly helpful if you are using a keyboard layout other than the US key-mapping). This helps make sure your code is readable on every system, regardless of locale (it even works if someone opens your code in an editor that defaults to UTF-8 encoding).
Another good thing is that it works with a wider variety of fonts. Some monospace terminal fonts can’t correctly display all UTF-8 characters.
But I sometimes need to write documents with strange characters
That is one of the scenarios where I would use another text editor. nvi is strictly a code/config editor.
No syntax highlighting
This is another personal preference. It has been a long time since I had to worry about syntax when producing code. If you still struggle with syntax, then please use syntax highlighting; it will help those special words stand out. Otherwise, why not give it a try without syntax highlighting for a while (a few weeks, to be slightly above the habituation threshold) and measure how you perform?
It does help to keep your functions small and easy to read.
Comments are shown with their true weight and your commented code is promoted to the same importance as your production code.
Your focus will be on semantics, and it is easier to get into it without the syntax aggressively jumping in your face.
Fast undo
Like in vim, the undo in vi is very convenient, but it operates slightly differently. Instead of pressing u multiple times to go through the various undo levels and then ctrl-r to redo, in vi you press u once to undo and then ‘.’ to step through the remaining undo levels.
To redo, you press u twice (undoing the undo) and then ‘.’ to step through the multiple redo levels.
I think this is slightly more coherent and it also makes use of the ‘.’ (repeat action) operator in an arguably more logical way.
Like in vim, you can use the U command to restore a line to the state it was in before the cursor was placed on it (undoing all the changes made since then).
No visual mode
There is no visual mode in vi (visual mode as in pressing the v command and using your movement keys to select an area of text).
At first this might seem like a big handicap, but vi shares a few commands with vim that, once mastered, can make you more productive than visual mode.
Marks m
You can set up marks in text: typing mx defines the mark x at the current cursor position. Typically marks are used to move quickly through the file to positions you have marked. For that, use the ’ command followed by the mark name (moves to the marked line) or the ` command (moves to the exact cursor position of the mark). To move to mark x do ’x (assuming you did mx somewhere before).
Marks can also be used with other common commands like yank, delete, etc…, so instead of using visual mode you can yank or delete to a mark set at some position. For line-wise operations use the ’ form, as in y’x (I read it as: yank to x), which yanks every line from the current one through the line holding mark x; the ` form (y`x) works character-wise, up to the exact position of the mark.
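A small example of that flow, copying a block of lines without visual mode (the movements are just for illustration):
ma      set mark a on the first line of the block
5j      move down to the last line of the block
y'a     yank every line from here back to the line of mark a
G       go to the end of the file
p       put the copied lines below the last line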
No tabs
Tabs were a feature I used a lot in vim. Unfortunately they only exist inside vim, so if you also want to run a shell command or keep a terminal open to read compiler output, you end up combining them with some other kind of tab system.
On my journey to the limits of usefulness I found myself using vim tabs together with tmux tabs and terminal (macOS) tabs.
After those crazy days (months?) I finally decided to ditch all tabs and stick to a single tool for that purpose, one versatile enough to cover my use cases for tabs.
Do one thing and do it well
Tmux is my current tool of choice; nowadays I don't use terminal tabs or text editor tabs. I leave that work to tmux, an amazing productivity tool that lets me work fullscreen, in zen mode, with the code.
Panes in vi
Like in vim, you can split your window into panes; these are useful for keeping another section of code in view as a quick reference, or for doing some quick yank/paste between files or sections.
The pane system in vi works slightly differently from vim's. To split the window vertically you can still use :vs, but :sp will not split it horizontally. In vi the command for a horizontal split is :E, as in Edit (the same as :e, but opening the file in a new horizontal pane).
To switch between panes, ctrl-w immediately moves your cursor to the next window pane. Instead of pressing it twice as in vim, you only need to press it once. This is hard to get used to at first, but also hard to live without once you are.
My nvi with tmux on macOS
No macros
One feature that I consistently used in vim was macros (vim's q command). It was easy to create bundles of commands and run them to do your repeated text tasks or grunt work. Very useful when replicating huge data files or setting up a big JSON test file.
In vi you can also use the power of macros through buffers.
Buff it up
Buffers are easy to use: you can direct your common text manipulation commands (dd, yy, etc…) into a named buffer. To do that, start the command with "x, where x is any letter from a-z.
Here are some examples:
To delete 4 lines and place them into buffer a: "a4dd
To paste buffer a at the current cursor position: "ap
To view all your buffers, go to ex mode (:) and do :di b (short for display buffers).
Who needs macros when you have buffers
Write a few commands to be executed in your file, place them in a buffer, and tell vi that you want to execute the buffer:
2)
4dd
G
Line by line:
move two lines down
delete 4 lines
go to the end of the file
Put these into a buffer named c with "c3yy (yank the 3 lines into buffer c), and whenever you want to run it just do @c. Undo (u) also works nicely with buffered commands.
Upgrading from vim to vi
There are some cool benefits to using vi, like how fast it is and how it handles huge files without a problem.
The case for vi is the same as for any other editor: a matter of getting used to it and of perfecting your skills at what really matters: writing great code.
Conclusion
Use whatever you are most comfortable with. There are tradeoffs in every editor (even the mighty Visual Studio Code does not let you work with syntax highlighting turned off…), and today nvi fits nicely with my way of working.
I hope this post serves as an initial reference and motivation for those looking for a more minimalistic way of doing things. I don't intend it to start flame wars.
Unlike the United States and the Soviet Union, which originally sought scientific advances in space exploration for military purposes, India looked to the stars on a quest for self-sufficiency.
Professor Rao worked alongside Vikram Sarabhai and Satish Dhawan, the first leaders of the Indian Space Research Organization, the country’s equivalent of NASA, to establish a complex in Bangalore and secure a budget from the Indian government. By the time he took over as chairman in 1984, he was overseeing 14,000 employees.
But critics in other countries would ask why, with all its problems of poverty and overpopulation, of malnutrition and poor health and illiteracy, India should be spending millions of rupees to go into space.
The question would send Professor Rao into a passionate discourse about the practical benefits of space satellites to ordinary villagers. He told The New York Times in 1983 that satellites would bring television signals to even the most rural parts of India. Meteorological data about weather and floods would help farmers manage their crops. Long-distance calls from one Indian city to another would take seconds instead of (with poor connections) hours.
The economy would grow, he said. Communication would be better. It would “change the face of rural India.”
In 1975, Professor Rao led the team that built India’s first satellite, Aryabhata, named for an ancient Indian astronomer and mathematician. The satellite, launched in the Soviet Union aboard a Soviet-made rocket, conducted experiments to detect low-energy X‐rays, gamma rays and ultraviolet rays in the ionosphere.
Professor Rao was credited with sending 20 more satellites into space, including some of the first to combine communication and meteorological capabilities. Other satellites took crop inventories and looked for signs of underground water reserves. Soil erosion and snow runoff were monitored to help forecast floods.
He was also present in March 1984 when India’s first astronaut was launched into space. The mission: to practice yoga.
The astronaut, Rakesh Sharma, 35, was tasked with seeing if yoga exercises could help astronauts tolerate motion sickness and muscle fatigue, problems that come with weightlessness.
The cosmonauts he traveled with, aboard a Soviet Soyuz T-10 spacecraft, were amused with his plan — to practice three yoga exercises several times a day for five days over the eight-day journey. The results were inconclusive.
There was a lot for India to learn. But Professor Rao was known as a methodical visionary who was realistic about India’s constraints. He found himself having to navigate a tricky path between the competing interests of the United States and the Soviet Union as India looked to both countries for help.
“It was a delicate balance between the two,” Rudrapatna Ramnath, a senior lecturer at the Massachusetts Institute of Technology who knew Professor Rao during his fellowship there, said in an interview. “He navigated this troublesome tide quite skillfully.”
Udupi Ramachandra Rao was born on March 10, 1932, to Laxhminarayana Rao and his wife, Krishnaveni Amma, in a small village near Udupi, in the southern Indian state of Karnataka.
He received his education in India: a bachelor’s from Madras University in 1951, a master’s from Banaras Hindu University in 1953 and a Ph.D. from Gujarat University in 1960.
He was director of Indian Space Research Organization’s satellite program from 1972 to 1984 and chairman of the organization from 1984 until 1994.
Professor Rao helped India develop its rocket technology, including the first Polar Satellite Launch Vehicle, which is still used today to send its own equipment into space. In February, India launched 104 satellites from a single rocket, breaking Russia’s record of 37, set in 2014.
In 2014, India sent a spacecraft to Mars for $74 million to prove that the country could succeed in such a highly technical endeavor. The mission cost a fraction of the $671 million the United States spent on a Mars mission that year, but it showed up a regional rival, China, whose own Mars mission failed in 2012.
Professor Rao published 360 scientific papers on cosmic rays, space physics, and satellite and rocket technology. He was the recipient of prestigious civilian awards given by India in 1976 and earlier this year.
He was elected chairman of the United Nations Committee on Peaceful Uses of Outer Space in 1997 and served three years. He was inducted into the Satellite Hall of Fame in Washington and the International Astronautical Federation’s Hall of Fame in Paris.
His survivors include his wife, Yashoda, and a son, Madan.
Throughout his career, Professor Rao encouraged international collaboration among space scientists, a relatively new phenomenon.
“In the late ′60s and ’70s, collaborations were really hard to do unless you gathered up all your papers, put them in your briefcase and took them to another place,” said Professor Roderick Heelis, who works in the physics department at the University of Texas at Dallas, where Professor Rao worked as an assistant visiting professor in the 1960s.
At the Indian space organization, he was known for instituting a culture of respect and optimism among young scientists. As a tribute on the organization’s website noted, he would tell them, “If others can do, we can do better.”