Microsoft Co-Founder Paul Allen died from complications of non-Hodgkin's lymphoma on Monday afternoon.
Allen's Vulcan Inc. announced that he died in Seattle at the age of 65.
Allen's sister, Jody, said he was "a remarkable individual on every level."
"While most knew Paul Allen as a technologist and philanthropist, for us he was a much-loved brother and uncle, and an exceptional friend. Paul's family and friends were blessed to experience his wit, warmth, his generosity and deep concern," she said in a statement. "For all the demands on his schedule, there was always time for family and friends. At this time of loss and grief for us β and so many others β we are profoundly grateful for the care and concern he demonstrated every day."
Allen ranked among the world's wealthiest individuals. As of Monday afternoon, he ranked 44th on Forbes' 2018 list of billionaires with an estimated net worth of more than $20 billion.
Through Vulcan, Allen's network of philanthropic efforts and organizations, the Microsoft co-founder supported research in artificial intelligence and new frontier technologies. The group also invested in Seattle's cultural institutions and the revitalization of parts of the city.
Allen owned two professional sports teams, the NFL's Seattle Seahawks and the NBA's Portland Trail Blazers. He was also a huge music fan and an electric guitarist who occasionally jammed with celebrity musicians including Bono and Mick Jagger. He founded and funded the Experience Music Project in Seattle, devoted to the history of rock music and dedicated to his musical hero Jimi Hendrix. (It has since been re-christened the Museum of Pop Culture.) The building was designed by architect Frank Gehry to resemble a melted electric guitar.
Vulcan CEO Bill Hilf said, "All of us who had the honor of working with Paul feel inexpressible loss today."
"He possessed a remarkable intellect and a passion to solve some of the world's most difficult problems, with the conviction that creative thinking and new approaches could make profound and lasting impact," Hilf said in a statement.
Earlier this month, Allen revealed that he had started treatment for non-Hodgkin's lymphoma, the same type of cancer he was treated for in 2009. In 1983, Allen left the company he founded with Bill Gates when he was first diagnosed with Hodgkin's disease, which he defeated.
Bill Gates, who co-founded Microsoft with Allen, said that "personal computing would not have existed without him":
"I am heartbroken by the passing of one of my oldest and dearest friends, Paul Allen. From our early days together at Lakeside School, through our partnership in the creation of Microsoft, to some of our joint philanthropic projects over the years, Paul was a true partner and dear friend. Personal computing would not have existed without him.
But Paul wasn't content with starting one company. He channeled his intellect and compassion into a second act focused on improving people's lives and strengthening communities in Seattle and around the world. He was fond of saying, "If it has the potential to do good, then we should do it." That's the kind of person he was.
Paul loved life and those around him, and we all cherished him in return. He deserved much more time, but his contributions to the world of technology and philanthropy will live on for generations to come. I will miss him tremendously."
Current Microsoft CEO Satya Nadella said Allen made "indispensable" contributions to Microsoft and the technology industry. Nadella also said he learned a lot from Allen and will continue to be inspired by him.
"As co-founder of Microsoft, in his own quiet and persistent way, he created magical products, experiences and institutions, and in doing so, he changed the world," Nadella said in a statement.
Former Microsoft CEO Steve Ballmer called Allen a "truly wonderful, bright and inspiring person."
Steven Sinofsky, former president of Microsoft's Windows division, said Allen "did so much to shape lives with computing and his later work in science, community, and research."
Seahawks Coach Pete Carroll said he was deeply saddened by Allen's death.
NFL Commissioner Roger Goodell said Allen was "the driving force behind keeping the NFL in the Pacific Northwest." Goodell said he valued Allen's advice on a wide range of subjects and sent his condolences.
"His passion for the game, combined with his quiet determination, led to a model organization on and off the field. He worked tirelessly alongside our medical advisers to identify new ways to make the game safer and protect our players from unnecessary risk" Goodell said in a statement.
The Trail Blazers tweeted, "We miss you. We thank you. We love you."
Allen's death was met with an outpouring of condolences from tech leaders. Google CEO Sundar Pichai said with Allen's death, the world has "lost a great technology pioneer today."
Apple CEO Tim Cook called him a "pioneer" and a "force for good."
Salesforce CEO Marc Benioff said he was saddened by Allen's passing.
Amazon CEO Jeff Bezos praised his "relentless" push forward in technology.
-- CNBC's Matt Rosoff, Ryan Ruggiero and Reuters contributed to this report.
Accelerates Twilio's Mission to Fuel the Future of Communications
Brings Together the Two Leading Communication Platforms for Developers
The Combination to Create One, Best-in-Class Cloud Communications Platform for Companies to Communicate with Customers Across Every Channel
Twilio & SendGrid Together Serve Millions of Developers, Have 100,000+ Customers, and Have a Greater than $700 Million Annualized Revenue Run Rate*
Twilio (NYSE:TWLO) and SendGrid today announced that they have entered into a definitive agreement for Twilio to acquire SendGrid in an all-stock transaction valued at approximately $2 billion. At the exchange ratio of 0.485 shares of Twilio Class A common stock per share of SendGrid common stock, this price equates to approximately $36.92 per share based on today's closing prices. The transaction is expected to close in the first half of 2019.
Adding the leading email API platform to the leading cloud communications platform can drive tremendous value to the combined customer bases. The resulting company would offer developers a single, best-in-class platform to manage all of their important communication channels -- voice, messaging, video, and now email as well. Together, the companies currently drive more than half a trillion customer interactions annualized*, and growing rapidly.
"Increasingly, our customers are asking us to solve all of their strategic communications challenges - regardless of channel. Email is a vital communications channel for companies around the world, and so it was important to us to include this capability in our platform," said Jeff Lawson, Twilio's co-founder and chief executive officer. "The two companies share the same vision, the same model, and the same values. We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement."
"This is a tremendous day for all SendGrid customers, employees and shareholders," said Sameer Dholakia, SendGrid's chief executive officer. "Our two companies have always shared a common goal - to create powerful communications experiences for businesses by enabling developers to easily embed communications into the software they are building. Our mission is to help our customers deliver communications that drive engagement and growth, and this combination will allow us to accelerate that mission for our customers."
Details Regarding the Proposed SendGrid Acquisition
The boards of directors of Twilio and SendGrid have each approved the transaction.
Under the terms of the transaction, Twilio Merger Subsidiary, Inc., a Delaware corporation and a wholly-owned subsidiary of Twilio, will be merged with and into SendGrid, with SendGrid surviving as a wholly-owned subsidiary of Twilio. At closing, each outstanding share of SendGrid common stock will be converted into the right to receive 0.485 shares of Twilio Class A common stock, which represents a per share price for SendGrid common stock of $36.92 based on the closing price of Twilio Class A common stock on October 15, 2018. The exchange ratio represents a 14% premium over the average exchange ratio for the ten calendar days ending October 15, 2018.
The transaction is expected to close in the first half of 2019, subject to the satisfaction of customary closing conditions, including shareholder approvals by each of SendGrid's and Twilio's respective stockholders and the expiration of the applicable waiting period under the Hart-Scott-Rodino Antitrust Improvements Act. Certain stockholders of SendGrid owning approximately 6% of the outstanding SendGrid shares have entered into voting agreements and certain stockholders of Twilio who control approximately 33% of total Twilio voting power have entered into voting agreements, or proxies, pursuant to which they have agreed, among other things, and subject to the terms and conditions of the agreements, to vote in favor of the SendGrid acquisition and the issuance of Twilio shares in connection with the SendGrid acquisition, respectively.
Goldman Sachs & Co. LLC is serving as exclusive financial advisor to Twilio and Goodwin Procter LLP is acting as legal counsel to Twilio. Morgan Stanley & Co. LLC is serving as exclusive financial advisor to SendGrid, and Cooley LLP and Skadden, Arps, Slate, Meagher & Flom LLP are acting as legal counsel to SendGrid.
Q3 2018 Results and Guidance
Both companies will report their respective financial results for the three months ended September 30, 2018 on November 6, 2018. However, both Twilio and SendGrid are announcing that they have exceeded the guidance provided on August 6 and July 31, respectively, for their third fiscal quarters.
Guidance for the combined company will be provided after the proposed transaction has closed.
Conference Call Information
Twilio will host a conference call today, October 15, 2018, at 2:30 p.m. Pacific Time (5:30 p.m. Eastern Time) to discuss the SendGrid acquisition. A live webcast of the conference call, as well as a replay of the call, will be available at https://investors.Twilio.com. The conference call can also be accessed by dialing (844) 453-4207, or +1 (647) 253-8638 (outside the U.S. and Canada). The conference ID is 6976357. Following the completion of the call, and through 11:59 p.m. Eastern Time on October 22, 2018, a replay will be available by dialing (800) 585-8367 or +1 (416) 621-4642 (outside the U.S. and Canada) and entering passcode 6976357. Twilio has used, and intends to continue to use, its investor relations website as a means of disclosing material non-public information and for complying with its disclosure obligations under Regulation FD.
About SendGrid
SendGrid is a leading digital communications platform enabling businesses to engage with their customers via email reliably, effectively and at scale. A leader in email deliverability, SendGrid processes more than 45 billion emails each month for internet and mobile-based customers as well as more traditional enterprises.
Additional Information and Where To Find It
In connection with the proposed transaction between Twilio and SendGrid, Twilio will file a Registration Statement on Form S-4 and joint proxy statement/prospectus forming a part thereof. BEFORE MAKING ANY VOTING DECISION, TWILIO'S AND SENDGRID'S RESPECTIVE INVESTORS AND STOCKHOLDERS ARE URGED TO READ THE REGISTRATION STATEMENT AND JOINT PROXY STATEMENT/PROSPECTUS (INCLUDING ANY AMENDMENTS OR SUPPLEMENTS THERETO) REGARDING THE PROPOSED TRANSACTION WHEN THEY BECOME AVAILABLE BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION. Investors and security holders will be able to obtain free copies of the Registration Statement, the joint proxy statement/prospectus (when available) and other relevant documents filed or that will be filed by Twilio or SendGrid with the SEC through the website maintained by the SEC at http://www.sec.gov. They may also be obtained for free by contacting Twilio Investor Relations by email at ir@twilio.com or by phone at 415-801-3799 or by contacting SendGrid Investor Relations by email at ir@sendgrid.com or by phone at 720-588-4496, or on Twilio's and SendGrid's websites at www.investors.twilio.com and www.investors.sendgrid.com, respectively.
No Offer or Solicitation
This communication does not constitute an offer to sell or the solicitation of an offer to buy any securities nor a solicitation of any vote or approval with respect to the proposed transaction or otherwise. No offering of securities shall be made except by means of a prospectus meeting the requirements of Section 10 of the U.S. Securities Act of 1933, as amended, and otherwise in accordance with applicable law.
Participants in the Solicitation
Each of Twilio and SendGrid and their respective directors and executive officers may be deemed to be participants in the solicitation of proxies from their respective shareholders in connection with the proposed transaction. Information regarding the persons who may, under the rules of the SEC, be deemed participants in the solicitation of Twilio and SendGrid shareholders in connection with the proposed transaction, and a description of their direct and indirect interests, by security holdings or otherwise, will be set forth in the Registration Statement and joint proxy statement/prospectus when filed with the SEC. Information regarding Twilio's executive officers and directors is included in Twilio's Proxy Statement for its 2018 Annual Meeting of Stockholders, filed with the SEC on April 27, 2018, and information regarding SendGrid's executive officers and directors is included in SendGrid's Proxy Statement for its 2018 Annual Meeting of Stockholders, filed with the SEC on April 20, 2018. Additional information regarding the interests of the participants in the solicitation of proxies in connection with the proposed transaction will be included in the joint proxy statement/prospectus and other relevant materials Twilio and SendGrid intend to file with the SEC.
Use of Forward-Looking Statements
This communication contains "forward-looking statements" within the meaning of federal securities laws. Words such as "believes", "anticipates", "estimates", "expects", "intends", "aims", "potential", "will", "would", "could", "considered", "likely" and words and terms of similar substance used in connection with any discussion of future plans, actions or events identify forward-looking statements. All statements, other than historical facts, including statements regarding the expected timing of the closing of the proposed transaction and the expected benefits of the proposed transaction, are forward-looking statements. These statements are based on management's current expectations, assumptions, estimates and beliefs. While Twilio believes these expectations, assumptions, estimates and beliefs are reasonable, such forward-looking statements are only predictions, and are subject to a number of risks and uncertainties that could cause actual results to differ materially from those described in the forward-looking statements. The following factors, among others, could cause actual results to differ materially from those described in the forward-looking statements: (i) failure of Twilio or SendGrid to obtain stockholder approval as required for the proposed transaction; (ii) failure to obtain governmental and regulatory approvals required for the closing of the proposed transaction, or delays in governmental and regulatory approvals that may delay the transaction or result in the imposition of conditions that could reduce the anticipated benefits from the proposed transaction or cause the parties to abandon the proposed transaction; (iii) failure to satisfy the conditions to the closing of the proposed transaction; (iv) unexpected costs, liabilities or delays in connection with or with respect to the proposed transaction; (v) the effect of the announcement of the proposed transaction on the ability of SendGrid or Twilio to retain and hire key personnel and maintain relationships with customers, suppliers and others with whom SendGrid or Twilio does business, or on SendGrid's or Twilio's operating results and business generally; (vi) the outcome of any legal proceeding related to the proposed transaction; (vii) the challenges and costs of integrating, restructuring and achieving anticipated synergies and benefits of the proposed transaction and the risk that the anticipated benefits of the proposed transaction may not be fully realized or take longer to realize than expected; (viii) competitive pressures in the markets in which Twilio and SendGrid operate; (ix) the occurrence of any event, change or other circumstances that could give rise to the termination of the merger agreement; and (x) other risks to the consummation of the proposed transaction, including the risk that the proposed transaction will not be consummated within the expected time period or at all. Additional factors that may affect the future results of Twilio and SendGrid are set forth in their respective filings with the SEC, including each of Twilio's and SendGrid's most recently filed Annual Report on Form 10-K, subsequent Quarterly Reports on Form 10-Q, Current Reports on Form 8-K and other filings with the SEC, which are available on the SEC's website at www.sec.gov.
See in particular Part II, Item 1A of Twilio's Quarterly Report on Form 10-Q for the quarter ended June 30, 2018 under the heading "Risk Factors" and Part II, Item 1A of SendGrid's Quarterly Report on Form 10-Q for the quarter ended June 30, 2018 under the heading "Risk Factors." The risks and uncertainties described above and in Twilio's most recent Quarterly Report on Form 10-Q and SendGrid's most recent Quarterly Report on Form 10-Q are not exclusive, and further information concerning Twilio and SendGrid and their respective businesses, including factors that potentially could materially affect their respective businesses, financial condition or operating results, may emerge from time to time. Readers are urged to consider these factors carefully in evaluating these forward-looking statements, and not to place undue reliance on any forward-looking statements. Readers should also carefully review the risk factors described in other documents that Twilio and SendGrid file from time to time with the SEC. The forward-looking statements in these materials speak only as of the date of these materials. Except as required by law, Twilio and SendGrid assume no obligation to update or revise these forward-looking statements for any reason, even if new information becomes available in the future.
* Annualized data for the quarterly period ended June 30, 2018.
For decades many researchers have tended to view astrobiology as the underdog of space science. The field, which focuses on the investigation of life beyond Earth, has often been criticized as more philosophical than scientific, because it lacks tangible samples to study.
Now that is all changing. Whereas astronomers once knew of no planets outside our solar system, today they have thousands of examples. And although organisms were previously thought to need the relatively mild surface conditions of our world to survive, new findings about life's ability to persist in the face of extreme darkness, heat, salinity and cold have expanded researchers' acceptance that it might be found anywhere from Martian deserts to the ice-covered oceans of Saturn's moon Enceladus.
Highlighting astrobiology's increasing maturity and clout, a new Congressionally mandated report from the National Academy of Sciences (NAS) urges NASA to make the search for life on other worlds an integral, central part of its exploration efforts. The field is now well placed to be a major motivator for the agency's future portfolio of missions, which could one day let humanity know whether or not we are alone in the universe. "The opportunity to really address this question is at a critically important juncture," says Barbara Sherwood Lollar, a geologist at the University of Toronto and chair of the committee that wrote the report.
The astronomy and planetary science communities are currently gearing up to each perform their decadal surveys: once-every-10-year efforts that identify a field's most significant open questions and present a wish list of projects to help answer them. Congress and government agencies such as NASA look to the decadal surveys to plan research strategies; the decadals, in turn, look to documents such as the new NAS report for authoritative recommendations on which to base their findings. Astrobiology's reception of such full-throated encouragement now may boost its odds of becoming a decadal priority.
Another NAS study released last month could be considered a second vote in astrobiology's favor. This "Exoplanet Science Strategy" report recommended NASA lead the effort on a new space telescope that could directly gather light from Earth-like planets around other stars. Two concepts, the Large Ultraviolet/Optical/Infrared (LUVOIR) telescope and the Habitable Exoplanet Observatory (HabEx), are current contenders for a multibillion-dollar NASA flagship mission that would fly as early as the 2030s. Either observatory could use a coronagraph or a "starshade" (instruments that selectively block starlight but allow planetary light through) to search for signs of habitability and of life in distant atmospheres. But either would need massive and sustained support from outside astrobiology to succeed in the decadal process and beyond.
There have been previous efforts to back large, astrobiologically focused missions, such as NASA's Terrestrial Planet Finder concepts: ambitious space telescope proposals in the mid-2000s that would have spotted Earth-size exoplanets and characterized their atmospheres, if they had ever made it off the drawing board. Instead, they suffered ignominious cancellations that taught astrobiologists several hard lessons. There was still too little information at the time about the number of planets around other stars, says Caleb Scharf, an astrobiologist at Columbia University, meaning advocates could not properly estimate such a mission's odds of success. His community had yet to realize that in order to do large projects it needed to band together and show how its goals aligned with those of astronomers less professionally interested in finding alien life, he adds. "If we want big toys," he says, "we need to play better with others."
There has also been tension in the past between the astrobiological goals of solar system exploration and the more geophysics-steeped goals that traditionally underpin such efforts, says Jonathan Lunine, a planetary scientist at Cornell University. Missions to other planets or moons have limited capacity for instruments, and those specialized for different tasks often end up in ferocious competitions for a slot onboard. Historically, because the search for life was so open-ended and difficult to define, associated instrumentation lost out to hardware with clearer, more constrained geophysical research priorities. Now, Lunine says, a growing understanding of all the ways biological and geologic evolution are interlinked is helping to show that such objectives do not have to be at odds. "I hope that astrobiology will be embedded as a part of the overall scientific exploration of the solar system," he says. "Not as an add-on, but as one of the essential disciplines."
Above and beyond the recent NAS reports, NASA is arguably already demonstrating more interest in looking for life in our cosmic backyard than it has for decades. This year the agency released a request for experiments that could be carried to another world in our solar system to directly hunt for evidence of living organisms, the first such solicitation since the 1976 Viking missions that looked for life on Mars. "The Ladder of Life Detection," a paper written by NASA scientists and published in Astrobiology in June, outlined ways to clearly determine if a sample contains extraterrestrial creatures, a goal mentioned in the NAS report. The document also suggests NASA partner with other agencies and organizations working on astrobiological projects, as the space agency did last month when it hosted a workshop with the nonprofit SETI Institute on the search for "techno-signatures," potential indicators of intelligent aliens. "I think astrobiology has gone from being something that seemed fringy or distracting to something that seems to be embraced at NASA as a major touchstone for why we're doing space exploration and why the public cares," says Ariel Anbar, a geochemist at Arizona State University in Tempe.
All of this means that astrobiology's growing influence is helping to bring what were once considered outlandish ideas into reality. Anbar recalls attending a conference in the early 1990s, when then-NASA Administrator Dan Goldin displayed an Apollo-era image of Earth from space and suggested the agency try to do the same thing for a planet around another star.
"That was pretty out there 25 years ago," he says. "Now it's not out there at all."
After Sydney native Peter Vogel graduated from high school in 1975, his classmate Kim Ryrie approached him with the idea of creating a microprocessor-driven electronic musical synthesiser. Ryrie was frustrated with his attempts at building an analogue synth, feeling that the sounds it could produce were extremely limited.
Vogel agreed, and the pair spent the next six months working on potential designs in the basement of the house they rented to serve as Fairlight's headquarters. However, it wasn't until they met Motorola consultant Tony Furse that they made a breakthrough.
In 1972 Furse had worked with the Canberra School of Electronic Music to build a digital synthesiser using two 8-bit Motorola 6800 microprocessors, called the Qasar. It had a monitor for displaying simple graphical representations of music, and a light pen for manipulating them.
However, Furse's synthesiser lacked the ability to create harmonic partials (complementary frequencies created in addition to the "root" frequency of a musical note in acoustic instruments, for example when the string of a piano or guitar is struck), and the sounds it emitted lacked fullness and depth. Ryrie and Vogel thought they could solve the problem, and licensed the Qasar from Furse. They worked on the problem for a year without really getting anywhere.
Late one night in 1978, Vogel proposed they take a sample (digital recording) of an acoustic instrument and extract the harmonics using Fourier analysis. Then they could recreate the harmonics using oscillators. But after sampling a piano, Vogel decided to see what would happen if he simply routed the sample back through the Qasar's oscillators verbatim. It sounded like a piano! And by varying the speed of playback, he could control the pitch.
It wasn't perfect, but it was better than anything else they had come up with, and off they went.
They continued to work on the idea of digital sampling while selling computers to offices in order to keep the lights on. They added the ability to mask the digitised sounds with an ADSR (Attack Decay Sustain Release) programmable envelope, allowing for some variation.
They also added a QWERTY keyboard to go with the monitor and light pen (a light-sensing "pen" which can tell its location on the surface of a CRT by synchronising with the video signal), and an 8-inch floppy diskette drive for storing sample data, which was loaded into the CMI's 208KB of memory. It really wasn't much room: at 24kHz (a CD-quality recording is typically 44.1kHz) a sample could only last for one-half to one second, which is not very long.
Longer sounds needed to be recorded at even lower sample rates, but Vogel credited this low fidelity (think landline telephone) with giving the CMI a certain sound. However, despite its deficiencies, Australian distributors and consumers were interested, so much so that the Musicians' Union warned that such devices posed a "lethal threat" to its members, afraid that the humans in orchestras could be replaced!
In the summer of 1979, Vogel visited the home of English singer-songwriter Peter Gabriel, who was in the process of recording his third solo album. Vogel demonstrated the CMI and Gabriel was instantly engrossed with it, using it over the following week to "play" sounds such as a glass and bricks on songs on the album. He was so happy with it that he volunteered to start a UK distributor for the CMI, which went on to sell it to other British music artists such as Kate Bush, Alan Parsons and Thomas Dolby.
The Americans soon caught on as well, with Stevie Wonder, Herbie Hancock and Todd Rundgren, amongst many others, all taking a shine to the CMI. But they weren't interested in using it for reproducing real instruments; rather, it was the surreal quality of its sounds, combined with the built-in sequencer, that made it an attractive addition to their musical toolbox.
Over the following decade, three generations of CMI, with upgrades such as MIDI support, higher sampling rates and more memory, would contribute heavily to the sound of 1980s popular music, spawning new musical styles such as techno, hip hop and drum and bass.
The Page R sequencer in the Fairlight CMI Series II inspired a great many music software developers to create versions for 1980s-era home computers, including the Atari 400/800, the Apple II and the Commodore 64.
While these 8-bit machines were limited to simple waveform-based sound synthesis and couldn't (generally) play back digital samples the way the CMI could, note-based sequencers provided a simple way both to learn music notation and to create 3-voice arrangements of original and popular tunes (and Christmas carols!), with the noise channels in most sound chips providing primitive drums.
Considering the contemporary equivalent was the repetitive (and cheesy) accompaniment available in the common household electronic organ, this was considered to be an improvement!
Atari and Commodore both released note-based music software for their respective computers; Commodore's included a musical-keyboard overlay that fit over the alphanumeric keyboard of the Commodore 64. A number of third-party software programs were also produced, and 8-year-old music composers flourished.
Bank Street Music Writer was a typical music application of the time. Written by Glen Clancy and published by Mindscape, the Atari version was released in 1985. Like competitors such as Music Construction Set, it let users place graphical representations of notes onto a musical staff, making the creation of computer-generated music feel much more like traditional composition than step-entry or piano-roll methods did.
This was only practical due to the visual nature of a computer monitor, which wouldn't itself have been possible without the cathode-ray tube, the work of A.A. Campbell Swinton, Philo Farnsworth and many others. This sort of interactive music editing highlights the varied artistic software applications the CRT made possible, not just in visual arenas such as video, photography and digital art, but also in literature and music, where digital composition is a standard practice today.
8-bit music notation software led to the rise of the first "bedroom musicians": amateurs who were now able to compose coherent, sequenced tunes without the need for expensive equipment. Many of them would go on to write music for video games, and/or became professional musicians as they grew older, much like many of today's bedroom EDM producers, who use descendants of that software.
The higher video resolutions available in 16-bit computers such as the Atari ST (640×400) and the Apple Macintosh (512×342) led to an improvement in the graphical quality of music software. The crispness of their monochrome CRT displays made musical notes more readable, and thus more of them were legible on screen at one time than had been on their lower-resolution 8-bit predecessors.
The Atari ST also featured a built-in MIDI interface, which allowed for the connection of external keyboards (for both input and output) and digitally sampled "sound banks" such as the Roland MT-32, which set the standard for MIDI instrument assignments and allowed for greater portability of MIDI files between different electronic musical instruments and devices.
As they had with the Fairlight CMI, professional musicians began to take notice as complex music-notation and sequencing software arrived on consumer-grade computers. Paired with MIDI instruments capable of outputting dozens of voices simultaneously, these consumer computers began to overtake dedicated musical computer systems such as the Fairlight CMI, with the Atari ST (commonly paired with Steinberg's Cubase music sequencing application) becoming fairly standard in music studios around the world for much of the 1990s.
These days, most music is sequenced using an off-the-shelf MacBook Pro!
Game Off is our annual game jam, where participants spend one month creating games based on a theme that we provide. Everyone around the world is welcome to participate, from newbies to professional game developers, and your game can be as simple or complex as you want. It's a great excuse to learn a new technology, collaborate on something over the weekends with friends, or create a game for the first time!
We're announcing this year's theme on Thursday, November 1, at 13:37 (PDT). From that point, you have 30 days to create a game loosely based on (or inspired by) the theme.
Using open source game engines, libraries, and tools is encouraged, but you're free to use any technology you want. Have you been wanting an excuse to experiment with something new? Now's your chance to take on a new engine you'd like to try.
As always, we'll highlight some of our favorite games on the GitHub Blog, and the world will get to enjoy (and maybe even contribute to or learn from) your creations.
Help! I've never created a game before!
With so many free, open source game engines and tutorials available online, there's never been an easier (or more exciting!) time to try out game development.
Are you…
Into JavaScript? You might be interested in Phaser.
Comfortable with C++ or C#? Godot might be a good match for you.
In love with Lua (and/or retrogames)? Drop everything and check out LIKO-12.
Do you really like retro games? Maybe you can…
Whatever genre of game you're interested in and language you want to use, you're bound to find a GitHub project that will help you take your game from idea to launch in only a month.
Have a repository or tutorial you'd like to share? Tag us with #GitHubGameOff.
Help! I've never used version control, Git, or GitHub before!
Don't worry, we have tons of resources for you. From how to use Git, to all things GitHub, you'll "git" it in no time.
GitHub Help offers tons of information about GitHub, from basics like creating an account, to advanced topics, such as resolving merge conflicts
Git documentation has everything you need to know to start using Git (including version control)
Did you know? You don't have to use Git on the command line. You can use GitHub Desktop (our client for macOS and Windows), or bring Git and GitHub to your favorite editors.
GLHF! We can't wait to see what you build!
A Binary Heap is the simplest implementation of a Priority Queue (used, for instance, for order books). It "partially sorts" data so that the highest-priority item can always be found instantly at the root.
Why?
Block-Gas-Limit and the iteration problem
Allowing users to insert data into a contract can result in a data set that costs too much gas to iterate through in a single transaction. This opens the door to a gas-limit attack.
If a contract directly uses an array, an attacker can fill the array to the point where iterating through it costs more gas than is allowed in a single transaction (the block gas limit, currently around 8 million). When such a contract is worth attacking, it will be attacked. Don't write contracts this way. It's not safe.
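To make the failure mode concrete, here is a minimal sketch of the unsafe pattern (the contract and names here are hypothetical, not part of this library):

pragma solidity ^0.4.24;

// Anyone can grow `orders`; anything that loops over it pays gas per element.
contract NaiveOrderBook {
    uint[] public orders;

    function addOrder(uint price) public {
        orders.push(price);                         // unbounded, user-controlled growth
    }

    function bestOrder() public view returns (uint best) {
        for (uint i = 0; i < orders.length; i++) {
            if (orders[i] > best) best = orders[i]; // O(n) walk; run on-chain, this eventually exceeds the block gas limit
        }
    }
}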
A Heap mitigates these issues because the structure does not require iterating through all the elements. Instead, its operations only walk the height of the tree.
Data structures to the rescue
Unfortunately, even though many tree structures have O(log n) costs under normal circumstances, they are not safe to use in public Ethereum contracts, because attackers can find conditions that degenerate the tree toward O(n) costs. Degenerating a tree means making one branch grow very long, so operations approach a linear walk.
Self-balancing trees solve this issue, because they cannot degenerate. They rotate or swap nodes during insertion to stay balanced, thus preserving their O(log n) costs even under worst-case conditions.
Options
A Binary Heap is a partially sorted, self-balancing tree with worst-case costs proportional to O(log n).
If you need a fully sorted self balancing tree, you can use a 2-3-4 Tree, Red Black Tree, or an AVL Tree. Piper Merriam wrote an AVL Tree in Solidity that he's used for the Ethereum Alarm Clock.
Fully-sorted vs Partially-sorted?
A Heap allows you to quickly find the largest element by some property. It is not as quick, however, as the other trees at iterating from largest to smallest.
For example:
The Heap was built to accommodate the order book for a decentralized exchange where:
Users can make (and remove) as many orders as they wish
The contract has to automatically match the highest order
When someone creates a sell-order, the contract must find the highest price buy-order to see if it matches (and vice-versa). If there is not a match, we do not need to find the next highest price buy-order, so a heap will suffice. If there is a match, we extractMax(), and the heap will re-adjust so the new highest-price order is at the top.
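As a rough sketch of that matching flow (assuming a contract that embeds the library as shown below in the Initialize section, with buyOrders as an initialized Heap.Data in storage; the function and variable names here are illustrative):

function handleSellOrder(int128 askPrice) internal {
    Heap.Node memory best = buyOrders.getMax();
    if (Heap.isNode(best) && best.priority >= askPrice) {
        buyOrders.extractMax();        // heap re-adjusts; the next-best bid rises to the root
        // ... settle the trade against the buy order identified by best.id ...
    }
    // no match: record the sell order and wait for a suitable bid instead
}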
The more I think about it, the more problems I think you can solve on Ethereum using this Heap. Remember, the cost-reduction requirement only applies to logic that's executed on-chain. Off-chain we can easily iterate through all the data and cache it locally however we see fit. There is a dump() function for doing just that. There is also an index.js file that can rebuild the heap in JavaScript and print it visually.
const TestHeap = artifacts.require("TestHeap");
const Helpers = require("../index") // or `require("eth-heap")` from a project using npm
const Heap = Helpers.Heap
const Node = Helpers.Node

// create a testHeap contract and fill it with data
let dumpSig = "0xe4330545" // keccak("dump()")[0-8]
let response = await web3.eth.call({to: heap.address, data: dumpSig})
new Heap(response).print()
The only benefit of a fully sorted tree is that you can iterate through it from greatest to least... but that just brings back the block-gas-limit attack problem. I can't think of an application that would require an AVL Tree or a Red-Black Tree but wouldn't run into the gas-limit attack problem.
How? (to use)
npm install eth-heap --save
Then from a truffle contract, import the library
import"eth-heap/contracts/Heap.sol";
Initialize
Call init() once on the library before use
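A minimal sketch of wiring the library into a contract (the OrderBook contract and addBid function are illustrative names, not part of the package):

pragma solidity ^0.4.24;

import "eth-heap/contracts/Heap.sol";

contract OrderBook {
    using Heap for Heap.Data;
    Heap.Data internal data;

    constructor() public {
        data.init();                   // one-time setup before any insert/extract
    }

    function addBid(int128 price) public returns (int128) {
        return data.insert(price).id;  // keep the id to look this order up later
    }
}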
Data Store
Heaps allow for insertion and extraction of the maximum element.
This particular heap also supports getById() and extractById(), which solve race conditions. Node structs have only id and priority properties (packed into one storage slot), but you can extend this to arbitrary data by pointing to a struct that you define in a separate mapping, keyed by the same id as in the heap.
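Continuing the OrderBook sketch above, one hypothetical way to attach richer order data to a heap node:

struct BuyOrder {
    address maker;
    uint amount;
}

mapping (int128 => BuyOrder) internal buyOrdersById;  // keyed by the heap node's id

function placeBid(int128 price, uint amount) public {
    int128 id = data.insert(price).id;                // the heap stores only (id, priority)
    buyOrdersById[id] = BuyOrder(msg.sender, amount); // everything else lives in this mapping
}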
Think of it simply as a data store: insert things into it, extract them, or find/remove the largest element. Don't manipulate the heap structure except through the API, or you risk corrupting its integrity.
Max-heap / Min-heap.
This is a max-heap. If you would like to use it as a min-heap, simply reverse the sign of the priority before inserting (multiply by -1), although I haven't tested this yet.
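A sketch of that (untested) min-heap trick, with price standing in for an int128 value of your own:

data.insert(-price);                               // the smallest value becomes the "largest" priority
int128 lowest = -data.extractMax().priority;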
Error Handling
Bad input will result in the (default) zero node, Node(0,0), being returned. For the most part, the functions will not throw any errors. This allows you to handle errors in your own way. If you'd like to throw an error in these situations, perform require(Heap.isNode(myNode)); on the returned node.
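For example (orderId being an illustrative variable):

Heap.Node memory n = data.getById(orderId);
require(Heap.isNode(n));                           // revert instead of silently working with Node(0,0)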
API.
Note that if you want to return the Heap.Node data types from a public function, you have to use the experimental ABIEncoderV2 for now.
struct Data {
    int128 idCount;
    Node[] nodes;                      // root is index 1; index 0 not used
    mapping (int128 => uint) indices;  // unique id => node index
}

struct Node {
    int128 id;        // use with a mapping to store arbitrary object types
    int128 priority;
}

function init(Data storage self) internal {}
function insert(Data storage self, int128 priority) internal returns (Node) {}
function extractMax(Data storage self) internal returns (Node) {}
function extractById(Data storage self, int128 id) internal returns (Node) {}
function dump(Data storage self) internal view returns (Node[]) {}
function getById(Data storage self, int128 id) internal view returns (Node) {}
function getByIndex(Data storage self, uint i) internal view returns (Node) {}
function getMax(Data storage self) internal view returns (Node) {}
function size(Data storage self) internal view returns (uint) {}
Bounty
It is extremely important for Ethereum code to be bullet-proof. ETH, ETC and BTC are the most hostile programming environments ever created. We are in a paradigm shift, and bounties are an important part of the solution. This bounty will start at 10 ETH and increase over time for at least a month.
Welcome. This is different from many other bounties, where you would "report" a bug and hope to be reimbursed fairly. This bounty has the ETH locked right into the smart contract, ready to be withdrawn instantly upon exploitation of any bug.
In fact: if you find a potential attack vector you should tell no one until you successfully exploit it yourself (securing the ETH to your account). You could even do this anonymously, but I would prefer you find a way to document the bug after-the-fact (it would really save me some time). Open a Github issue after executing your exploit.
First I wrote the Heap.sol library. Then, I wrote a second contract BountyHeap.sol (utilizing the library), which exposes all the operations to a single "public" heap that anyone can send transactions to. In this second contract, I took the definitions of what makes a heap a heap, and wrote public functions that release funds iff these properties are broken.
The Heap Property
In a heap, all child nodes should have a value less than or equal to their parent's. If you are able to get the contract into any state where this is untrue, simply call the corresponding function, and the contract will release its full bounty.
There are many other subtle properties that must stay intact for the heap to be secure. I've made corresponding functions that each release the entire bounty if exploited. I will describe the others below.
A Binary Heap is a complete tree. This means it can be, and in this case is, implemented using a dynamically sized array (no pointers). The array should contain no empty spots (even as nodes are inserted and extracted from any position). This architecture actually allowed for a significant gas cost reduction! If this property is broken, the heap is sure to be corrupted.
ID Maintenance Properties
The rest of the functions have to do with a design decision I made to give each node a unique id. This id allows the heap to organize data of any type. For example, if you want the buyOrder struct with the highest price, find it using the heap's getMax(), and then look up your buyOrder in a separate mapping using the returned id. The id also allows a user to remove a specific node, whereas another value (like its index) could change unpredictably due to other users' transactions being mined first.
To support these use cases, a mapping from id to index (in the nodes array) is used. It is carefully updated behind the scenes whenever a node is inserted, deleted, or moved.
If there is more than one node with the same id, something has gone terribly wrong. Take your ETH using:
function breakIdUniqueness(uint index1, uint index2, address recipient)
Furthermore, there should never be an id in the mapping that points to an empty or differing node in the array or vice-versa. Use the following to prove otherwise:
function breakIdMaintenance(int128 id, address recipient)
function breakIdMaintenance2(uint index, address recipient)
Gas Usage
All gas costs rise logarithmically at worst, but the simplicity of a binary heap makes it considerably cheaper than the alternatives. Because the heap is a complete tree, it can be implemented using an array, which makes navigating the structure much cheaper. Instead of pointers to children and parent nodes (requiring the most expensive thing: storage space), it uses simple arithmetic to move from a child to its parent (index/2) and from a parent to its left or right child (index*2 or index*2+1).
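In code, that navigation is just integer arithmetic on array positions (a sketch; index is a node's position in the nodes array, with the root at index 1):

uint parentIndex = index / 2;     // integer division
uint leftChild   = index * 2;
uint rightChild  = index * 2 + 1;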
Performed on 500-item sets:
extractById() Average Gas Costs: 69461
insert() Average Gas Costs: 101261
extractMax() Average Gas Costs: 170448
Heuristic: The cost of these functions can go up by about 20,000 gas every time you double the number of data items.
[Gas cost charts: insert() in one chart and extractMax()/extractById() in another, plotted against data set size, with worst-case data in red, best-case data in green, and randomized data as blue/brown points.]
Growth this slow will never exceed the block gas limit and "lock up" the contract, given Ethereum's current architecture.
I want to tell you about a brouhaha in my field over a "new" medical discipline three hundred years ago. Half my fellow doctors thought it weighed them down and wanted nothing to do with it. The other half celebrated it as a means for medicine to finally become modern, objective, and scientific. The discipline was thermometry, and its controversial tool a glass tube used to measure body temperature, called a thermometer.
This all began in 1717, when Daniel Fahrenheit moved to Amsterdam and offered his newest temperature sensor to the German physician Hermann Boerhaave. Boerhaave tried it out and liked it. He proposed using measurements with this device to guide diagnosis and therapy.
Boerhaave's innovation was not embraced. Doctors were all for detecting fevers to guide diagnosis and treatment, but their determination of whether fever was present was qualitative. "There is, for example, that acrid, irritating quality of feverish heat," the French physician Jean Charles Grimaud said as he scorned the thermometer's reducing his observations down to numbers. "These [numerical] differences are the least important in practice."
Grimaud captured the prevailing view of the time when he argued that the physicianβs touch captured information richer than any tool, and for over a hundred years doctors were loath to use the glass tube. Researchers among them, however, persevered. They wanted to discover reproducible laws in medicine, and the verbal descriptions from doctors were not getting them there. Words were idiosyncratic; they varied from doctor to doctor and even for the same doctor from day to day. Numbers never wavered.
In 1851 at the Leipzig university hospital in Germany, Carl Reinhold Wunderlich started recording temperatures of his patients. 100,000 cases and several million readings later, he published the landmark work "On the Temperature in Diseases: a manual of medical thermometry." His text established an average body temperature of 37 degrees, the variation from this mean which could be considered normal, and the cutoff of 38 degrees as a bona fide fever. Wunderlich's data were compelling; he could predict the course of illness better when he defined fever by a number than when fever had been defined by feel alone. The qualitative status quo would have to change.
Using a thermometer had previously suggested incompetence in a doctor. By 1886, not using one did. "The information obtained by merely placing the hand on the body of the patient is inaccurate and unreliable," remarked the American physician Austin Flint. "If it be desirable to count the pulse and not trust to the judgment to estimate the number of beats per minute, it is far more desirable to ascertain the animal heat by means of a heat measurer."
Evidence that temperature signaled disease made patient expectations change too. After listening to the doctor's exam and evaluations, a patient in England asked, "Doctor, you didn't try the little glass thing that goes in the mouth? Mrs Mc__ told me that you would put a little glass thing in her mouth and that would tell just where the disease was…"
Thermometry was part of a seismic shift in the nineteenth century, along with blood tests, microscopy, and eventually the x-ray, to what we now know as modern medicine. From impressionistic illnesses that went unnamed and thus had no systematized treatment or cure, modern medicine identified culprit bacteria, trialled antibiotics and other drugs, and targeted diseased organs or even specific parts of organs.
Imagine being a doctor at this watershed moment, trained in an old model and staring a new one in the face. Your patients ask for blood tests and measurements, not for you to feel their skin. Would you use all the new technology even if you didn't understand it? Would you continue feeling skin, or let the old ways fall by the wayside? And would it trouble you, as the blood tests were drawn and temperatures taken by the nurse, that these tools didn't need you to report their results? That if those results dictated future tests and prescriptions, doctors may as well be replaced completely?
The original thermometers were a foot long, available only in academic hospitals, and took twenty minutes to get a reading. How wonderful that they are now cheap and ubiquitous, and that pretty much anyone can use one. It's hard to imagine a medical technology whose diffusion has been more successful. Even so, the thermometer's takeover has hardly done away with our use for doctors. If we have a fever we want a doctor to tell us what to do about it, and if we don't have a fever but feel lousy we want a doctor anyway, to figure out what's wrong.
Still, the same debate about technology replacing doctors rages on. Today patients want not just the doctor's opinion, but everything from their microbiome array and MRI to tests for their testosterone and B12 levels. Some doctors celebrate this millimeter and microliter resolution inside patients' bodies. They proudly brandish their arsenal of tests and say technology has made medicine the best it's ever been.
The other camp thinks Grimaud was on to something. They resent all these tests because they miss things that listening to and touching the patient would catch. They insist there is more to health and disease than what quantitative testing shows, and try to limit the tests that are ordered. But even if a practiced touch detects things tools miss, it is hard to deny that tools also detect things we would miss but don't want to.
Modern CT scans, for example, perform better than even the best surgeons' palpation of a painful abdomen in detecting appendicitis. As CT scans become cheaper, faster, and able to use lower doses of radiation, they will become even more accurate. The same will happen with genome sequences and other up-and-coming tests that detect what overwhelms our human senses. There is no hope in trying to rein in their ascent, nor is it right to. Medicine is better off with them around.
What's keeping some doctors from celebrating this miraculous era of medicine is the nagging concern that we have nothing to do with its triumphs. We are told the machines' autopilot outperforms us, so we sit quietly and get weaker, yawning and complacent like a mangy tiger in captivity. We wish we could do as Grimaud said: "distinguishing in feverish heat qualities that may be perceived only by a highly practiced touch, and which elude whatever means physics may offer."
A children's hospital in Philadelphia tried just that. Children often have fevers, as anyone who has had children around them well knows. Usually, they have a simple cold and there's not much to fuss about. But about once in a thousand cases, feverish kids have deadly infections and need antibiotics, ICU care, all that modern medicine can muster.
An experienced doctor's judgment picks out the one-in-a-thousand very sick child about three-quarters of the time. To catch the children being missed, hospitals started using quantitative algorithms drawing on their electronic health records to choose which fevers were dangerous based on hard facts alone. And indeed, the computers did better, catching the serious infections nine times out of ten, albeit with ten times the false alarms.
The Philadelphia hospital accepted the computer-based list of worrisome fevers, but then deployed their best doctors and nurses to apply Grimaud's "highly practiced touch" and look over the children before declaring the infection was deadly and bringing them into the hospital for intravenous medications. Their teams were able to weed out the algorithm's false alarms with high accuracy, and in addition find cases the computer missed, bringing their detection rate of deadly infections from 86.2 percent by the algorithm alone, to 99.4 percent by the algorithm in combination with human perception.
Too many doctors have resigned themselves to having nothing to add in a world of advanced technology. They thoughtlessly order tests and thoughtlessly obey the results. When, inevitably, the tests give unsatisfying answers, they shrug their shoulders. I wish more of them knew about the Philadelphia pediatricians, whose close human attention caught mistakes a purely numerical, rules-driven system would miss.
It's true that a doctor's eyes and hands are slower, less precise, and more biased than modern machines and algorithms. But these technologies can count only what they have been programmed to count: human perception is not so constrained.
Our distractible, rebellious, infinitely curious eyes and hands decide moment by moment what deserves attention. While this leeway can lead us astray, with the best of training and judgment it can also lead us to as-yet-undiscovered phenomena that no existing technology knows to look for. My profession and other increasingly automated fields would do better to focus on finding new answers than on fettering old algorithms.
One of the web app projects I'm working on had an interesting requirement recently - it needed to provide a save/load feature without relying on cookies, local storage or server side storage (no accounts or logins). My first pass at the save feature implementation was to take my data, serialise it as JSON, dynamically create a new link element with a data URL and the download attribute set and trigger a click event on this link. That worked pretty well on desktop browsers. It failed miserably on mobile Safari.
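For reference, a sketch of that first pass (the function and variable names are mine, not from the project):

function saveAsJson(data, filename) {
  var json = JSON.stringify(data);
  var link = document.createElement('a');
  link.href = 'data:application/json;charset=utf-8,' + encodeURIComponent(json);
  link.download = filename;              // the attribute mobile Safari ignores (see below)
  document.body.appendChild(link);
  link.click();
  document.body.removeChild(link);
}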
Problem - Mobile Safari ignores the download attribute in the link element. This leads to the serialised JSON data being displayed in the browser window without any way of storing it on the user's device. There was no way to disable this.
Solution - Present the user with something that stores data and that they can save to their device. An image is an obvious choice here. This doesn't create the same save/load experience but is close enough to be workable.
I did try using QR codes for this and found them incredibly easy to generate but the decoding side was not so simple and required rather large libraries to be included, so I quickly discarded the idea of using them.
The challenge then was to work out how to store arbitrary text data in a PNG. This was not a new idea and has been done previously; however, I didn't want to have a completely generic storage container and was happy to impose some constraints to make my job easier.
Constraints/Requirements
The generated image had to be easy to save and should have preset dimensions.
The save/load data I was dealing with was in the order of several dozen kilobytes.
I wanted to store my data as JSON.
I didn't want to deal with the details of saving/loading in any particular image format.
Sounds simple enough, right? Well, there were a few catches. But first let's see the general approach.
Images are fundamentally 2D arrays of pixels. Each pixel is a tuple of 3 bytes, one for each colour component - RGB. Each of the colour components has a range of 0 to 255. This lends itself to storing byte/character arrays naturally. For example a single pixel can be used to store the array of ASCII characters ['F', 'T', 'W'] by encoding their ASCII codes as a colour intensity like so...
The result is a rather grey and boring pixel but it stores the data we want. Whole sentences can be encoded in the same manner - "The quick brown fox jumps over the lazy dog" - is a sequence of these ASCII codes...
Which ends up as 15 pixels like so...
The last 3-tuple only has one character code so it is padded with two zero values to produce the resulting pixel.
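A hedged sketch of that packing-and-padding step (the textToPixels helper is mine, not the post's code):

    // Sketch: split a string's character codes into 3-byte (R, G, B) tuples,
    // zero-padding the final tuple when the length is not a multiple of 3.
    function textToPixels(text) {
      const codes = Array.from(text, c => c.charCodeAt(0));
      const pixels = [];
      for (let i = 0; i < codes.length; i += 3) {
        pixels.push([codes[i] || 0, codes[i + 1] || 0, codes[i + 2] || 0]);
      }
      return pixels;
    }
    console.log(textToPixels('The quick brown fox jumps over the lazy dog').length); // 15 pixels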
That was the basic approach. Then I had to address my requirements:
Though storing and generating an image that was a 1-pixel line would have been the easiest to implement, this is not easy to tap to save so I had to use a square image of sufficient size. Using a preset maximum size (256 x 256 pixels) of the image worked well towards this but it required keeping track of the size of the actual encoded data. This encoded size was the length of a square and had to be stored in the generated image. Using a single colour of the first pixel would let me have a square of up to 255 x 255 in size - the first line is forfeited to store this size value and since it's a square the last column in the image is also forfeited. The size of the byte/character array being encoded also had to be preserved somehow, this would require more than a byte of storage to store but I had the remainder of the first line worth of pixels to deal with this (which I didn't end up needing due to a fortunate issue I encountered with the alpha channel).
Since the maximum size of the available pixel data was 255 x 255 pixels, this gave me 65025 pixels to play with. In turn this translated to 195075 bytes (190kB) of text data. This was well above what I actually needed.
Using TextEncoder I could convert my serialised JSON data into a byte array (Uint8Array in JavaScript).
Using an off-screen canvas would allow me to manipulate pixel data at will and then convert to an image data URL in my desired format.
Converting objects to a byte array
So now I had the general approach worked out and had a container for my byte array. The next step was to convert my objects into a form that could be stored in a byte array. This was easy, using JSON.stringify() and TextEncoder.encode() I could get a Uint8Array. I could also then work out the size of the square image that would be big enough to store this data.
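Something along these lines (a sketch; the saveData object and the 3-bytes-per-pixel size calculation are illustrative assumptions, not the post's exact code):

    // Sketch: serialise an object to JSON, encode it as a byte array, and work out
    // the side length of a square of pixels big enough to hold it (3 bytes per pixel).
    const saveData = { level: 12, score: 34500, inventory: ['sword', 'potion'] };
    const json = JSON.stringify(saveData);
    const bytes = new TextEncoder().encode(json); // Uint8Array of UTF-8 bytes

    const pixelCount = Math.ceil(bytes.length / 3);
    const side = Math.ceil(Math.sqrt(pixelCount)); // side length of the data square
    console.log(bytes.length, pixelCount, side);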
Converting the byte array to image data
Then it was time to take my byte array data and convert it into an ImageData object that could be used with a canvas. That's where I came across the first issue - ImageData expected a Uint8ClampedArray and I had a Uint8Array. Fundamentally though, since my data was already 'clamped' in a sense by the TextEncoder conversion, I didn't really have to worry too much.
Since I needed a lossless format to store my image data I went for PNG as the output format. This also meant that instead of storing data as RGB, it would be stored as RGBA. There was an additional Alpha channel per pixel and therefore an extra byte to play with. However after some experimentation I ran into an issue that had to do with RGB corruption when the alpha channel was set to zero.
That threw a spanner in the works and I had to write code to convert my 3-tuple byte array into a 4-tuple array with the 4th (alpha) component being set to full opacity (255). This turned out to be an advantage for decoding later since I could skip all zero-padded data easily. It wasn't the most efficient code but it did the trick.
As a bonus I now had the correctly typed Uint8ClampedArray byte array and could finally construct my ImageData object.
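Roughly like this (a sketch under the constraints above; the toRGBA helper name and loop are mine):

    // Sketch: expand 3-byte RGB tuples into 4-byte RGBA, forcing alpha to 255 so the
    // browser's alpha handling cannot corrupt the RGB bytes, then build the ImageData.
    function toRGBA(bytes, side) {
      const rgba = new Uint8ClampedArray(side * side * 4);
      for (let i = 0, p = 0; p < side * side; p++) {
        rgba[p * 4]     = bytes[i++] || 0; // R
        rgba[p * 4 + 1] = bytes[i++] || 0; // G
        rgba[p * 4 + 2] = bytes[i++] || 0; // B
        rgba[p * 4 + 3] = 255;             // A: full opacity, never zero
      }
      return new ImageData(rgba, side, side);
    }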
Drawing the image
With the ImageData object available I could now create a canvas and draw the image data that was holding my encoded JSON. First the canvas was created 'off screen' and its context retrieved and the background set to a solid colour (actual colour doesn't matter here).
Then I could 'draw' the pixel that represented the size of the square image that encoded my data.
Then I could draw the image data...
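A combined sketch of those three steps, reusing side, bytes and the toRGBA helper from the earlier sketches (again an assumption of how it might look, not the post's code):

    // Sketch: draw the encoded data onto a 256 x 256 off-screen canvas.
    const canvas = document.createElement('canvas'); // never attached to the DOM
    canvas.width = 256;
    canvas.height = 256;
    const ctx = canvas.getContext('2d');

    // Solid background; the actual colour doesn't matter for decoding.
    ctx.fillStyle = '#ffffff';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    // One colour channel of the first pixel records the side length of the data square.
    ctx.putImageData(new ImageData(new Uint8ClampedArray([side, 0, 0, 255]), 1, 1), 0, 0);

    // The encoded JSON occupies a side x side square starting on the second row.
    ctx.putImageData(toRGBA(bytes, side), 0, 1);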
Saving the image
The image could now be saved from the canvas to the file system (or in the case of mobile Safari displayed in a new tab) with a bit of jQuery code...
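The post's actual snippet isn't reproduced above; a sketch of the kind of thing it describes, using jQuery (the file name is an arbitrary example):

    // Sketch: export the canvas as a PNG data URL and trigger a download via a
    // temporary link. Mobile Safari ignores the download attribute, so there the
    // image ends up displayed instead and the user saves it manually.
    const dataUrl = canvas.toDataURL('image/png');
    const $link = $('<a>')
      .attr('href', dataUrl)
      .attr('download', 'savegame.png')
      .appendTo('body');
    $link[0].click(); // native click so the download attribute is honoured
    $link.remove();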
When programmers discuss the relative merits of different programming languages, they often talk about them in prosaic terms as if they were so many tools in a tool belt - one might be more appropriate for systems programming, another might be more appropriate for gluing together other programs to accomplish some ad hoc task. This is as it should be. Languages have different strengths and claiming that a language is better than other languages without reference to a specific use case only invites an unproductive and vitriolic debate.
But there is one language that seems to inspire a peculiar universal reverence: Lisp. Keyboard crusaders that would otherwise pounce on anyone daring to suggest that some language is better than any other will concede that Lisp is on another level. Lisp transcends the utilitarian criteria used to judge other languages, because the median programmer has never used Lisp to build anything practical and probably never will, yet the reverence for Lisp runs so deep that Lisp is often ascribed mystical properties. Everyone's favorite webcomic, xkcd, has depicted Lisp this way at least twice: In one comic, a character reaches some sort of Lisp enlightenment, which appears to allow him to comprehend the fundamental structure of the universe. In another comic, a robed, senescent programmer hands a stack of parentheses to his padawan, saying that the parentheses are "elegant weapons for a more civilized age," suggesting that Lisp has all the occult power of the Force.
Another great example is Bob Kanefsky's parody of a song called "God Lives on Terra." His parody, written in the mid-1990s and called "Eternal Flame", describes how God must have created the world using Lisp. The following is an excerpt, but the full set of lyrics can be found in the GNU Humor Collection:
For God wrote in Lisp code When he filled the leaves with green. The fractal flowers and recursive roots: The most lovely hack I've seen. And when I ponder snowflakes, never finding two the same, I know God likes a language with its own four-letter name.
I can only speak for myself, I suppose, but I think this "Lisp Is Arcane Magic" cultural meme is the most bizarre and fascinating thing ever. Lisp was concocted in the ivory tower as a tool for artificial intelligence research, so it was always going to be unfamiliar and maybe even a bit mysterious to the programming laity. But programmers now urge each other to "try Lisp before you die" as if it were some kind of mind-expanding psychedelic. They do this even though Lisp is now the second-oldest programming language in widespread use, younger only than Fortran, and even then by just one year. Imagine if your job were to promote some new programming language on behalf of the organization or team that created it. Wouldn't it be great if you could convince everyone that your new language had divine powers? But how would you even do that? How does a programming language come to be known as a font of hidden knowledge?
How did Lisp get to be this way?
The cover of Byte Magazine, August, 1979.
Theory A: The Axiomatic Language
John McCarthy, Lisp's creator, did not originally intend for Lisp to be an elegant distillation of the principles of computation. But, after one or two fortunate insights and a series of refinements, that's what Lisp became. Paul Graham - we will talk about him some more later - has written that, with Lisp, McCarthy "did for programming something like what Euclid did for geometry." People might see a deeper meaning in Lisp because McCarthy built Lisp out of parts so fundamental that it is hard to say whether he invented it or discovered it.
McCarthy began thinking about creating a language during the 1956 Dartmouth Summer Research Project on Artificial Intelligence. The Summer Research Project was in effect an ongoing, multi-week academic conference, the very first in the field of artificial intelligence. McCarthy, then an assistant professor of Mathematics at Dartmouth, had actually coined the term "artificial intelligence" when he proposed the event. About ten or so people attended the conference for its entire duration. Among them were Allen Newell and Herbert Simon, two researchers affiliated with the RAND Corporation and Carnegie Mellon who had just designed a language called IPL.
Newell and Simon had been trying to build a system capable of generating proofs in propositional calculus. They realized that it would be hard to do this while working at the level of the computer's native instruction set, so they decided to create a language - or, as they called it, a "pseudo-code" - that would help them more naturally express the workings of their "Logic Theory Machine." Their language, called IPL for "Information Processing Language", was more of a high-level assembly dialect than a programming language in the sense we mean today. Newell and Simon, perhaps referring to Fortran, noted that other "pseudo-codes" then in development were "preoccupied" with representing equations in standard mathematical notation. Their language focused instead on representing sentences in propositional calculus as lists of symbolic expressions. Programs in IPL would basically leverage a series of assembly-language macros to manipulate and evaluate expressions within one or more of these lists.
McCarthy thought that having algebraic expressions in a language, Fortran-style, would be useful. So he didn't like IPL very much. But he thought that symbolic lists were a good way to model problems in artificial intelligence, particularly problems involving deduction. This was the germ of McCarthy's desire to create an algebraic list processing language, a language that would resemble Fortran but also be able to process symbolic lists like IPL.
Of course, Lisp today does not resemble Fortran. Over the next few years, McCarthy's ideas about what an ideal list processing language should look like evolved. His ideas began to change in 1957, when he started writing routines for a chess-playing program in Fortran. The prolonged exposure to Fortran convinced McCarthy that there were several infelicities in its design, chief among them the awkward IF statement. McCarthy invented an alternative, the "true" conditional expression, which returns sub-expression A if the supplied test succeeds and sub-expression B if the supplied test fails, and which also only evaluates the sub-expression that actually gets returned.
During the summer of 1958, when McCarthy worked to design a program that could perform differentiation, he realized that his "true" conditional expression made writing recursive functions easier and more natural. The differentiation problem also prompted McCarthy to devise the maplist function, which takes another function as an argument and applies it to all the elements in a list. This was useful for differentiating sums of arbitrarily many terms.
None of these things could be expressed in Fortran, so, in the fall of 1958, McCarthy set some students to work implementing Lisp. Since McCarthy was now an assistant professor at MIT, these were all MIT students. As McCarthy and his students translated his ideas into running code, they made changes that further simplified the language. The biggest change involved Lisp's syntax. McCarthy had originally intended for the language to include something called "M-expressions," which would be a layer of syntactic sugar that made Lisp's syntax resemble Fortran's. Though M-expressions could be translated to S-expressions - the basic lists enclosed by parentheses that Lisp is known for - S-expressions were really a low-level representation meant for the machine. The only problem was that McCarthy had been denoting M-expressions using square brackets, and the IBM 026 keypunch that McCarthy's team used at MIT did not have any square bracket keys on its keyboard. So the Lisp team stuck with S-expressions, using them to represent not just lists of data but function applications too. McCarthy and his students also made a few other simplifications, including a switch to prefix notation and a memory model change that meant the language only had one real type.
In 1960, McCarthy published his famous paper on Lisp called "Recursive Functions of Symbolic Expressions and Their Computation by Machine." By that time, the language had been pared down to such a degree that McCarthy realized he had the makings of "an elegant mathematical system" and not just another programming language. He later wrote that the many simplifications that had been made to Lisp turned it "into a way of describing computable functions much neater than the Turing machines or the general recursive definitions used in recursive function theory." In his paper, he therefore presented Lisp both as a working programming language and as a formalism for studying the behavior of recursive functions.
McCarthy explained Lisp to his readers by building it up out of only a very small collection of rules. Paul Graham later retraced McCarthy's steps, using more readable language, in his essay "The Roots of Lisp". Graham is able to explain Lisp using only seven primitive operators, two different notations for functions, and a half-dozen higher-level functions defined in terms of the primitive operators. That Lisp can be specified by such a small sequence of basic rules no doubt contributes to its mystique. Graham has called McCarthy's paper an attempt to "axiomatize computation." I think that is a great way to think about Lisp's appeal. Whereas other languages have clearly artificial constructs denoted by reserved words like while or typedef or public static void, Lisp's design almost seems entailed by the very logic of computing. This quality and Lisp's original connection to a field as esoteric as "recursive function theory" should make it no surprise that Lisp has so much prestige today.
Theory B: Machine of the Future
Two decades after its creation, Lisp had become, according to the famous Hacker's Dictionary, the "mother tongue" of artificial intelligence research. Early on, Lisp spread quickly, probably because its regular syntax made implementing it on new machines relatively straightforward. Later, researchers would keep using it because of how well it handled symbolic expressions, important in an era when so much of artificial intelligence was symbolic. Lisp was used in seminal artificial intelligence projects like the SHRDLU natural language program, the Macsyma algebra system, and the ACL2 logic system.
By the mid-1970s, though, artificial intelligence researchers were running out of computer power. The PDP-10, in particular - everyone's favorite machine for artificial intelligence work - had an 18-bit address space that increasingly was insufficient for Lisp AI programs. Many AI programs were also supposed to be interactive, and making a demanding interactive program perform well on a time-sharing system was challenging. The solution, originally proposed by Peter Deutsch at MIT, was to engineer a computer specifically designed to run Lisp programs. These Lisp machines, as I described in my last post on Chaosnet, would give each user a dedicated processor optimized for Lisp. They would also eventually come with development environments written entirely in Lisp for hardcore Lisp programmers. Lisp machines, devised in an awkward moment at the tail of the minicomputer era but before the full flowering of the microcomputer revolution, were high-performance personal computers for the programming elite.
For a while, it seemed as if Lisp machines would be the wave of the future. Several companies sprang into existence and raced to commercialize the technology. The most successful of these companies was called Symbolics, founded by veterans of the MIT AI Lab. Throughout the 1980s, Symbolics produced a line of computers known as the 3600 series, which were popular in the AI field and in industries requiring high-powered computing. The 3600 series computers featured large screens, bit-mapped graphics, a mouse interface, and powerful graphics and animation software.
These were impressive machines that enabled impressive programs. For example,
Bob Culley, who worked in robotics research and contacted me via Twitter, was
able to implement and visualize a path-finding algorithm on a Symbolics 3650
in 1985. He explained to me that bit-mapped graphics and object-oriented
programming (available on Lisp machines via the Flavors
extension) were
very new in the 1980s. Symbolics was the cutting edge.
Bob Culley's path-finding program.
As a result, Symbolics machines were outrageously expensive. The Symbolics 3600 cost $110,000 in 1983. So most people could only marvel at the power of Lisp machines and the wizardry of their Lisp-writing operators from afar. But marvel they did. Byte Magazine featured Lisp and Lisp machines several times from 1979 through to the end of the 1980s. In the August, 1979 issue, a special on Lisp, the magazine's editor raved about the new machines being developed at MIT with "gobs of memory" and "an advanced operating system." He thought they sounded so promising that they would make the two prior years - which saw the launch of the Apple II, the Commodore PET, and the TRS-80 - look boring by comparison. A half decade later, in 1985, a Byte Magazine contributor described writing Lisp programs for the "sophisticated, superpowerful Symbolics 3670" and urged his audience to learn Lisp, claiming it was both "the language of choice for most people working in AI" and soon to be a general-purpose programming language as well.
I asked Paul McJones, who has done lots of Lisp preservation work for the Computer History Museum in Mountain View, about when people first began talking about Lisp as if it were a gift from higher-dimensional beings. He said that the inherent properties of the language no doubt had a lot to do with it, but he also said that the close association between Lisp and the powerful artificial intelligence applications of the 1960s and 1970s probably contributed too. When Lisp machines became available for purchase in the 1980s, a few more people outside of places like MIT and Stanford were exposed to Lisp's power and the legend grew. Today, Lisp machines and Symbolics are little remembered, but they helped keep the mystique of Lisp alive through to the late 1980s.
Theory C: Learn to Program
In 1985, MIT professors Harold Abelson and Gerald Sussman, along with Sussman's wife, Julie Sussman, published a textbook called Structure and Interpretation of Computer Programs. The textbook introduced readers to programming using the language Scheme, a dialect of Lisp. It was used to teach MIT's introductory programming class for two decades. My hunch is that SICP (as the title is commonly abbreviated) about doubled Lisp's "mystique factor." SICP took Lisp and showed how it could be used to illustrate deep, almost philosophical concepts in the art of computer programming. Those concepts were general enough that any language could have been used, but SICP's authors chose Lisp. As a result, Lisp's reputation was augmented by the notoriety of this bizarre and brilliant book, which has intrigued generations of programmers (and also become a very strange meme).
Lisp had always been "McCarthy's elegant formalism"; now it was also "that language that teaches you the hidden secrets of programming."
It's worth dwelling for a while on how weird SICP really is, because I think the book's weirdness and Lisp's weirdness get conflated today. The weirdness starts with the book's cover. It depicts a wizard or alchemist approaching a table, prepared to perform some sort of sorcery. In one hand he holds a set of calipers or a compass, in the other he holds a globe inscribed with the words "eval" and "apply." A woman opposite him gestures at the table; in the background, the Greek letter lambda floats in mid-air, radiating light.
The cover art for SICP.
Honestly, what is going on here? Why does the table have animal feet? Why is the woman gesturing at the table? What is the significance of the inkwell? Are we supposed to conclude that the wizard has unlocked the hidden mysteries of the universe, and that those mysteries consist of the "eval/apply" loop and the Lambda Calculus? It would seem so. This image alone must have done an enormous amount to shape how people talk about Lisp today.
But the text of the book itself is often just as weird. SICP is unlike most other computer science textbooks that you have ever read. Its authors explain in the foreword to the book that the book is not merely about how to program in Lisp - it is instead about "three foci of phenomena: the human mind, collections of computer programs, and the computer." Later, they elaborate, describing their conviction that programming shouldn't be considered a discipline of computer science but instead should be considered a new notation for "procedural epistemology." Programs are a new way of structuring thought that only incidentally get fed into computers. The first chapter of the book gives a brief tour of Lisp, but most of the book after that point is about much more abstract concepts. There is a discussion of different programming paradigms, a discussion of the nature of "time" and "identity" in object-oriented systems, and at one point a discussion of how synchronization problems may arise because of fundamental constraints on communication that play a role akin to the fixed speed of light in the theory of relativity. It's heady stuff.
All this isn't to say that the book is bad. It's a wonderful book. It discusses important programming concepts at a higher level than anything else I have read, concepts that I had long wondered about but didn't quite have the language to describe. It's impressive that an introductory programming textbook can move so quickly to describing the fundamental shortfalls of object-oriented programming and the benefits of functional languages that minimize mutable state. It's mind-blowing that this then turns into a discussion of how a stream paradigm, perhaps something like today's RxJS, can give you the best of both worlds. SICP distills the essence of high-level program design in a way reminiscent of McCarthy's original Lisp paper. The first thing you want to do after reading it is get your programmer friends to read it; if they look it up, see the cover, but then don't read it, all they take away is that some mysterious, fundamental "eval/apply" thing gives magicians special powers over tables with animal feet. I would be deeply impressed in their shoes too.
But maybe SICP's most important contribution was to elevate Lisp from curious oddity to pedagogical must-have. Well before SICP, people told each other to learn Lisp as a way of getting better at programming. The 1979 Lisp issue of Byte Magazine is testament to that fact. The same editor that raved about MIT's new Lisp machines also explained that the language was worth learning because it "represents a different point of view from which to analyze problems." But SICP presented Lisp as more than just a foil for other languages; SICP used Lisp as an introductory language, implicitly making the argument that Lisp is the best language in which to grasp the fundamentals of computer programming. When programmers today tell each other to try Lisp before they die, they arguably do so in large part because of SICP. After all, the language Brainfuck presumably offers "a different point of view from which to analyze problems." But people learn Lisp instead because they know that, for twenty years or so, the Lisp point of view was thought to be so useful that MIT taught Lisp to undergraduates before anything else.
Lisp Comes Back
The same year that SICP was released, Bjarne Stroustrup published the first
edition of The C++ Programming Language, which brought object-oriented
programming to the masses. A few years later, the market for Lisp machines
collapsed and the AI winter began. For the next decade and change, C++ and then
Java would be the languages of the future and Lisp would be left out in the
cold.
It is of course impossible to pinpoint when people started getting excited about Lisp again. But that may have happened after Paul Graham, Y-Combinator co-founder and Hacker News creator, published a series of influential essays pushing Lisp as the best language for startups. In his essay "Beating the Averages," for example, Graham argued that Lisp macros simply made Lisp more powerful than other languages. He claimed that using Lisp at his own startup, Viaweb, helped him develop features faster than his competitors were able to. Some programmers at least were persuaded. But the vast majority of programmers did not switch to Lisp. What happened instead is that more and more Lisp-y features have been incorporated into everyone's favorite programming languages. Python got list comprehensions. C# got Linq. Ruby got... well, Ruby is a Lisp. As Graham noted even back in 2001, "the default language, embodied in a succession of popular languages, has gradually evolved toward Lisp."
Though other languages are gradually becoming like Lisp, Lisp itself somehow manages to retain its special reputation as that mysterious language that few people understand but everybody should learn. In 1980, on the occasion of Lisp's 20th anniversary, McCarthy wrote that Lisp had survived as long as it had because it occupied "some kind of approximate local optimum in the space of programming languages." That understates Lisp's real influence. Lisp hasn't survived for over half a century because programmers have begrudgingly conceded that it is the best tool for the job decade after decade; in fact, it has survived even though most programmers do not use it at all. Thanks to its origins and use in artificial intelligence research and perhaps also the legacy of SICP, Lisp continues to fascinate people. Until we can imagine God creating the world with some newer language, Lisp isn't going anywhere.
If you enjoyed this post, more like it come out every two weeks! Follow
@TwoBitHistory on Twitter or subscribe to the
RSS feed
to make sure you know when a new post is out.
Datasets used for deep learning may reach millions of files. The well-known image dataset ImageNet contains 1m images, and its successor, the Open Images dataset, has over 9m images. Google and Facebook have published papers on datasets with 300m and 2bn images, respectively. Typically, developers want to store and access these datasets as image files, stored in a distributed file system. However, according to Uber, there's a problem:
"multiple round-trips to the filesystem are costly. It is hard to implement at large scale, especially using modern distributed file systems such as HDFS and S3 (these systems are typically optimized for fast reads of large chunks of data)."
Uberβs proposed solution is to pack image files into larger Apache Parquet files. Parquet is a columnar database file format, and thousands of individual image files can be packed into a single Parquet file, typically 64-256MB in size. For many image and NLP datasets, however, this introduces costly complexity. Existing tools for processing/viewing/indexing files/text need to be rewritten. An alternative approach would be to use HopsFS.
HopsFS solves this problem by transparently integrating NVMe disks into its HDFS-compatible file system; see our peer-reviewed paper to be published at ACM Middleware 2018. HDFS (and S3) are designed around large blocks (optimized to overcome slow random I/O on disks), while new NVMe hardware supports fast random disk I/O (and potentially small block sizes). However, as NVMe disks are still expensive, it would be prohibitively expensive to store tens of terabytes or petabyte-sized datasets on NVMe hardware alone. In Hops, our hybrid solution involves storing files smaller than a configurable threshold (default: 64KB, but it scales up to around 1MB) on NVMe disks in our metadata layer. On top of this, files under a smaller threshold, typically 1KB, are stored replicated in-memory in the metadata layer (due to their minimal overhead). This design choice was informed by our collaboration with Spotify, where we observed that most of their filesystem operations are on small files (around 64% of file read operations are performed on files less than 16 KB). Similar file size distributions have been reported at Yahoo!, Facebook, and others.
The result is that when clients read and write small files, they can do so with an order of magnitude higher throughput (number of files per second) and with massively reduced latency (>90% of file writes on the Spotify workload completed in less than 10ms, compared to >100ms for Apache HDFS). Remarkably, the small files stored at the metadata layer in HopsFS are not cached and are replicated at more than one host. That is, the scale-out metadata layer in HopsFS can be scaled out like a key-value store and provides file read/write performance for small files comparable with get/put performance for a modern key-value store.
NVMe Disk Performance
As we can see from Google Cloud disk performance figures, NVMe disks support more than two orders of magnitude more IOPs than magnetic disks, and over one order of magnitude more IOPs than standard SATA SSD disks.
Key-Value Store Performance for Small Files
In our Middleware paper, we observed up to 66X throughput improvements for writing small files and up to 4.5X throughput improvements for reading small files, compared to HDFS. For latency, we saw that operational latencies on Spotify's Hadoop workload were 7.39 times lower for writing small files and 3.15 times lower for reading small files. For real-world datasets, like the Open Images 9m images dataset, we saw 4.5X improvements for reading files and 5.9X improvements for writing files. These figures were generated using only 6 NVMe disks, and we are confident that we can scale to much higher numbers with more NVMe disks.
We also discuss in the paper how we solved the problem of maintaining full HDFS compatibility: the changes for handling small files do not break HopsFS' compatibility with HDFS clients. We also address the problem of migrating data between different storage types: when the size of a small file stored in the metadata layer exceeds the threshold, the file is reliably and safely moved to the HopsFS datanodes.
Running HopsFS (on-premise or in the cloud)
You can already benefit from our small files solution for HopsFS that has been available since HopsFS 2.8.2, released in 2017. We have been running www.hops.site using small files since October 2017, and we are very happy with its stability in production.
Since early 2018, NVMe disks have been available at Google Cloud, AWS, and Azure. Logical Clocks can help with providing support for running HopsFS in the cloud, including running HopsFS in an availability-zone fault-tolerant configuration, available only in Enterprise Hops.
Aeromovel train at Taman Mini Indonesia Indah, Jakarta, Indonesia, opened in 1989. The girder under the train forms an air duct. The vehicle is connected to a propulsion plate in the duct which is then driven by air pressure.
An atmospheric railway uses differential air pressure to provide power for propulsion of a railway vehicle. A static power source can transmit motive power to the vehicle in this way, avoiding the necessity of carrying mobile power generating equipment. The air pressure, or partial vacuum (i.e. negative relative pressure) can be conveyed to the vehicle in a continuous pipe, where the vehicle carries a piston running in the tube. Some form of re-sealable slot is required to enable the piston to be attached to the vehicle. Alternatively the entire vehicle may act as the piston in a large tube.
Several variants of the principle were proposed in the early 19th century, and a number of practical forms were implemented, but all were beset by unforeseen disadvantages and discontinued within a few years.
A modern proprietary system has been developed and is in use for short-distance applications. Porto Alegre Metro airport connection is one of them.
In the early days of railways, single vehicles or groups were propelled by human power, or by horses. As mechanical power came to be understood, locomotive engines were developed: the iron horse. These had serious limitations: in particular, being much heavier than the wagons in use, they broke the rails; and adhesion at the iron-to-iron wheel-rail interface was a limitation, for example in trials on the Kilmarnock and Troon Railway.
Many engineers turned their attention to transmitting power from a static power source, a stationary engine, to a moving train. Such an engine could be more robust and, with more available space, potentially more powerful. The solution to transmitting the power, before the days of practical electricity, was the use of either a cable system or air pressure.
Medhurst
In 1799 George Medhurst of London discussed the idea of moving goods pneumatically through cast iron pipes, and in 1812 he proposed blowing passenger carriages through a tunnel.[1]
Medhurst proposed two alternative systems: either the vehicle itself was the piston, or the tube was relatively small with a separate piston. He never patented his ideas and they were not taken further by him.[2]
19th century
Vallance
In 1824 a man called Vallance took out a patent and built a short demonstration line; his system consisted of a 6-ft diameter cast iron tube with rails cast in to the lower part; the vehicle was the full size of the tube and bear skin was used to seal the annular space. To slow the vehicle down, doors were opened at each end of the vehicle. Vallance's system worked, but was not adopted commercially.[2]
Pinkus
Arriving at Kingstown on the Dalkey Atmospheric Railway in 1844
In 1835 Henry Pinkus patented a system with a large (9 sq ft) square section tube with a low degree of vacuum, limiting leakage loss.[3] He later changed to a small-bore vacuum tube. He proposed to seal the slot that enabled the piston to connect with the vehicle with a continuous rope; rollers on the vehicle lifted the rope in front of the piston connection and returned it afterwards.
He built a demonstration line alongside the Kensington Canal, and issued a prospectus for his National Pneumatic Railway Association. He was unable to interest investors, and his system failed when the rope stretched. However, his concept, a small-bore pipe with a resealable slot, was the prototype for many successor systems.[2]
Samuda and Clegg
Developing a practical scheme
Jacob and Joseph Samuda were shipbuilders and engineers, and owned the Southwark Ironworks; they were both members of the Institution of Civil Engineers. Samuel Clegg was a gas engineer and they worked in collaboration on their atmospheric system. About 1835 they read Medhurst's writings, and developed a small bore vacuum pipe system. Clegg worked on a longitudinal flap valve, for sealing the slot in the pipe.
In 1838 they took out a patent "for a new improvement in valves" and built a full-scale model at Southwark. In 1840 Jacob Samuda and Clegg leased half a mile of railway line on the West London Railway at Wormholt Scrubs (later renamed Wormwood Scrubs), where the railway had not yet been opened to the public. In that year Clegg left for Portugal, where he was pursuing his career in the gas industry.
Samuda's system involved a continuous (jointed) cast iron pipe laid between the rails of a railway track; the pipe had a slot in the top. The leading vehicle in a train was a piston carriage, which carried a piston inserted in the tube. It was held by a bracket system that passed through the slot, and the actual piston was on a pole ahead of the point at which the bracket left the slot. The slot was sealed from the atmosphere by a continuous leather flap that was opened immediately in advance of the piston bracket and closed again immediately behind it. A pumping station ahead of the train would pump air from the tube, and air pressure behind the piston would push it forward.
The Wormwood Scrubbs demonstration ran for two years. The traction pipe was of 9 inches diameter, and a 16 hp stationary engine was used for power. The gradient on the line was a steady 1 in 115. In his treatise, described below, Samuda implies that the pipe would be used in one direction only, and the fact that only one pumping station was erected suggests that trains were gravitated back to the lower end of the run after the atmospheric ascent, as was later done on the Dalkey line (below). Many of the runs were public. Samuda quotes the loads and degree of vacuum and speed of some of the runs; there seems to be little correlation; for example:
11 June 1840; 11 tons 10 cwt; maximum speed 22.5 mph; 15 inches of vacuum
10 August 1840: 5 tons 0 cwt; maximum speed 30 mph; 20 inches of vacuum.[4]
Competing solutions
There was enormous public interest in the ideas surrounding atmospheric railways, and at the same time as Samuda was developing his scheme, other ideas were put forward. These included:
Nickels and Keane; they were to propel trains by pumping air into a continuous canvas tube; the train had a pair of pinch rollers squeezing the outside of the tube, and the air pressure forced the vehicle forward; the effect was the converse of squeezing a toothpaste tube. They claimed a successful demonstration in a timber yard in Waterloo Road.
James Pilbrow; he proposed a loose piston fitted with a toothed rack; cog wheels would be turned by it, and they were on spindles passing through glands to the outside of the tube; the leading carriage of the train would have a corresponding rack and be impelled forward by the rotation of the cog wheels. Thus the vehicle would keep pace with the piston exactly, without any direct connection to it.
Henry Lacey conceived a wooden tube, made by barrelmakers as a long, continuous barrel with the opening slot and a timber flap retained by an india-rubber hinge;
Clarke and Varley proposed sheet iron tubes with a continuous longitudinal slit. If the tubes were made to precision standards, the vacuum would keep the slit closed, but the piston bracket on the train would spring the slit open enough to pass; the elasticity of the tube would close it again behind the piston carriage.
Joseph Shuttleworth suggested a hydraulic tube; water pressure rather than a partial atmospheric vacuum, would propel the train. In mountainous areas where plentiful water was available, a pumping station would be unnecessary: the water would be used directly. Instead of the flap to seal the slot in the tube, a continuous shaped sealing rope, made of cloth impregnated with india-rubber would be within the pipe. Guides on the piston would lift it into position and the water pressure would hold it in place behind the train. Use of a positive pressure enabled a greater pressure differential than a vacuum system. However the water in the pipe would have to be drained manually by staff along the pipe after every train.
Samuda's treatise
Illustration from A Treatise on the Adaptation of Atmospheric Pressure to the Purposes of Locomotion on Railways, Samuda
In 1841 Joseph Samuda published A Treatise on the Adaptation of Atmospheric Pressure to the Purposes of Locomotion on Railways.[4]
It ran to 50 pages, and Samuda described his system; first the traction pipe:
The moving power is communicated to the train through a continuous pipe or main, laid between the rails, which is exhausted by air pumps worked by stationary steam engines, fixed on the road side, the distance between them varying from one to three miles, according to the nature and traffic of the road. A piston, which is introduced into this pipe, is attached to the leading carriage in each train, through a lateral opening, and is made to travel forward by means of the exhaustion created in front of it. The continuous pipe is fixed between the rails and bolted to the sleepers which carry them; the inside of the tube is unbored, but lined or coated with tallow 1/10th of an inch thick, to equalize the surface and prevent any unnecessary friction from the passage of the travelling piston through it.
The operation of the closure valve was to be critical:
Along the upper surface of the pipe is a continuous slit or groove about two inches wide. This groove is covered by a valve, extending the whole length of the railway, formed of a strip of leather riveted between iron plates, the top plates being wider than the groove and serving to prevent the external air forcing the leather into the pipe when the vacuum is formed within it; and the lower plates fitting into the groove when the valve is shut, makes up the circle of the pipe, and prevents the air from passing the piston; one edge of this valve is securely held down by iron bars, fastened by screw bolts to a longitudinal rib cast on the pipe, and allows the leather between the plates and the bar to act as a hinge, similar to a common pump valve; the other edge of the valve falls into a groove which contains a composition of beeswax and tallow: this composition is solid at the temperature of the atmosphere, and becomes fluid when heated a few degrees above it. Over this valve is a protecting cover, which serves to preserve it from snow or rain, formed of thin plates of iron about five feet long hinged with leather, and the end of each plate underlaps the next in the direction of the piston's motion,[note 1] thus ensuring the lifting of each in succession.
The piston carriage would open and then close the valve:
To the underside of the first carriage in each train is attached the piston and its appurtenances; a rod passing horizontally from the piston is attached to a connecting arm, about six feet behind the piston. This connecting arm passes through the continuous groove in the pipe, and being fixed to the carriage, imparts motion to the train as the tube becomes exhausted; to the piston rod are also attached four steel wheels, (two in advance and two behind the connecting arm,) which serve to lift the valve, and form a space for the passage of the connecting arm, and also for the admission of air to the back of the piston; another steel wheel is attached to the carriage, regulated by a spring, which serves to ensure the perfect closing of the valve, by running over the top plates immediately after the arm has passed. A copper tube or heater, about ten feet long, constantly kept hot by a small stove, also fixed to the underside of the carriage, passes over and melts the surface of the composition (which has been broken by lifting the valve,) which upon cooling becomes solid, and hermetically seals the valve. Thus each train in passing leaves the pipe in a fit state to receive the next train.
Entering and leaving the pipe was described:
The continuous pipe is divided into suitable sections (according to the respective distance of the fixed steam engines) by separating valves, which are opened by the train as it goes along: these valves are so constructed that no stoppage or diminution of speed is necessary in passing from one section to another. The exit separating valve, or that at the end of the section nearest to its steam engine, is opened by the compression of air in front of the piston, which necessarily takes place after it has passed the branch which communicates with the air-pump; the entrance separating valve, (that near the commencement of the next section of pipe,) is an equilibrium or balance valve, and opens immediately the piston has entered the pipe. The main pipe is put together with deep socket joints, in each of which an annular space is left about the middle of the packing, and filled with a semi-fluid: thus any possible leakage of air into the pipe is prevented.[5]
At that time railways were developing rapidly, and solutions to the technical limitations of the day were eagerly sought, and not always rationally evaluated. Samuda's treatise put forward the advantages of his system:
transmission of power to trains from static (atmospheric) power stations; the static machinery could be more fuel efficient;
the train would be relieved of the necessity of carrying the power source, and fuel, with it;
power available to the train would be greater so that steeper gradients could be negotiated; in building new lines this would hugely reduce construction costs by reducing the earthworks and tunnels required;
elimination of a heavy locomotive from the train would enable lighter and cheaper track materials to be used;
passengers, and lineside residents, would be spared the nuisance of smoke emission from passing trains; this would be especially useful in tunnels;
collisions between trains would be impossible, because only one train at a time could be handled on any section between two pumping stations; collisions were at the forefront of the mind of the general public in those days before modern signalling systems, when a train was permitted to follow a preceding train after a defined time interval, with no means of detecting whether that train had stalled somewhere ahead on the line;
the piston travelling in the tube would hold the piston carriage down and, Samuda claimed, prevent derailments, enabling curves to be negotiated safely at high speed;
persons on the railway would not be subjected to the risk of steam engine boiler explosions (then a very real possibility[2]).
Samuda also rebutted criticisms of his system that had obviously become widespread:
that if a pumping station failed the whole line would be closed because no train could pass that point; Samuda explained that a pipe arrangement would enable the next pumping station ahead to supply that section; if this was at reduced pressure, the train would nonetheless be able to pass, albeit with a small loss of time;
that leakage of air at the flap or the pipe joints would critically weaken the vacuum effect; Samuda pointed to experience and test results on his demonstration line, where this was evidently not a problem;
the capital cost of the engine houses was a huge burden; Samuda observed that the capital cost of steam locomotives was eliminated, and running costs for fuel and maintenance could be expected to be lower.[4]
A patent
In April 1844 Jacob and Joseph Samuda took out a patent for their system. Soon after this Joseph Samuda died, and it was left to his brother Jacob to continue the work. The patent was in three parts: the first describing the atmospheric pipe and piston system, the second describing how in areas of plentiful water supply, the vacuum might be created by using tanks of water at differing levels; and the third section dealt with level crossings of an atmospheric railway.[2]
The Dublin and Kingstown Railway opened in 1834, connecting the port of Dún Laoghaire (then called Kingstown) to Dublin; it was a standard gauge line. In 1840 it was desired to extend the line to Dalkey, a distance of about two miles. A horse tramway on the route was acquired and converted: it had been used to bring stone from a quarry for the construction of the harbour. It was steeply graded (at 1 in 115 with a 440-yard stretch of 1 in 57) and heavily curved, the sharpest being 570 yards radius. This presented significant difficulties to the locomotives then in use. The treasurer of the company, James Pim, was visiting London and, hearing of Samuda's project, he viewed it. He considered it to be perfect for the requirements of his company, and after petitioning government for a loan of £26,000,[6] it was agreed to install it on the Dalkey line. Thus was born the Dalkey Atmospheric Railway.
A 15-inch traction pipe was used, with a single pumping station at Dalkey, at the upper end of the 2,400-yard run. The engine created 110 ihp and had a flywheel of 36 feet diameter. Five minutes before the scheduled departure of a train from Kingstown, the pumping engine started work, creating a 15-inch vacuum in two minutes. The train was pushed manually to the position where the piston entered the pipe, and the train was held on the brakes until it was ready to start. When that time came, the brakes were released and the train moved off. (The electric telegraph was later installed, obviating reliance on the timetable for engine operation.)
On 17 August 1843 the tube was exhausted for the first time, and the following day a trial run was made. On Saturday 19 August the line was opened to the public.[note 2] In service a typical speed of 30 mph was attained; return to Kingstown was by gravitation down the gradient, and slower. By March 1844, 35 train movements operated daily, and 4,500 passengers a week travelled on the line, mostly simply for the novelty.
It is recorded that a young man called Frank Elrington was on one occasion on the piston carriage, which was not attached to the train. On releasing the brake, the light vehicle shot off at high speed, covering the distance in 75 seconds, averaging 65 mph.
The line continued to operate successfully for ten years, outliving the atmospheric system on British lines, although the Paris - St Germain line continued until 1860.[8]
When the system was abolished in 1855 a 2-2-2 steam locomotive called Princess was employed, incidentally the first steam engine to be manufactured in Ireland. Although a puny mechanism, the steam engine successfully worked the steeply graded line for some years.[2]
Paris - Saint Germain
Saint Germain piston carriage
In 1835 the brothers Pereire obtained a concession from the Compagnie du Chemin de fer de Paris à Saint-Germain. They opened their 19 km line in 1837, but only as far as Le Pecq, a river quay on the left bank of the Seine, as a daunting incline would have been necessary to reach Saint-Germain-en-Laye, and locomotives of the day were considered incapable of climbing the necessary gradient, adhesion being considered the limiting factor.
It was through his interest that the Pereire brothers came to adopt the system for an extension to St Germain itself, and construction started in 1845, with a wooden bridge crossing the Seine followed by a twenty-arch masonry viaduct and two tunnels under the castle. The extension was opened on 15 April 1847; it was 1.5 km in length on a gradient of 1 in 28 (35 mm/m).
The traction pipe was laid between the rails; it had a diameter of 63 cm (25 inches) with a slot at the top. The slot was closed by two leather flaps. The pumps were powered by two steam engines with a capacity of 200 hp, located between the two tunnels at Saint-Germain. Train speed on the ascent was 35 km/h (22 mph). On the descent the train ran by gravity as far as Pecq, where the steam locomotive took over for the run to Paris.
The system was technically successful, but the development of more powerful steam locomotives led to its abandonment from 3 July 1860, when steam locomotives ran throughout from Paris to St Germain, being assisted by a pusher locomotive up the gradient. This arrangement continued for more than sixty years until the electrification of the line.[10]
A correspondent of the Ohio State Journal described some details; there seem to have been two tube sections:
An iron tube is laid down in the centre of the track, which is sunk about one-third of its diameter in the bed of the road. For a distance of 5,500 yards the tube has a diameter of only 1¾ feet [i.e. 21 inches], the ascent here being so slight as not to require the same amount of force as is required on the steep grade to St Germain, where the pipe, for a distance of 3,800 yards, is 2 feet 1 inch [i.e. 25 inches] in diameter.
The steam engines had accumulators:
To each engine is adapted two large cylinders, which exhaust fourteen cubic feet of air per second. The pressure in the air cauldron (claudieres) attached to the exhausting machines is equal to six absolute atmospheres.
He described the valve:
Throughout the entire length of the tube, a section is made in the top, leaving an open space of about five inches. In each cut edge of the section there is an offset, to catch the edges of a valve which fits down upon it. The valve is made of a piece of sole leather half an inch thick, having plates of iron attached to it on both the upper and corresponding under side to give it strength ... which are perhaps one-fourth of an inch in thickness ... The plates are about nine inches long, and their ends, above and below, are placed three quarters of an inch apart, forming joints, so as to give the leather valve pliability, and at the same time firmness.[11]
Clayton records the name of the engineer, Mallet, who had been Inspector general of Public Works, and gives a slightly different account: Clayton says that Mallet used a plaited rope to seal the slot. He also says that vacuum was created by condensing steam in a vacuum chamber between runs, but that may have been a misunderstanding of the pressure accumulators.[2]
London and Croydon Railway
A steam railway at first
Jolly-sailor station on the London and Croydon Railway in 1845, showing the pumping station, and the locomotive-less train
The London and Croydon Railway (L&CR) obtained its authorising Act of Parliament in 1835, to build its line from a junction with the London and Greenwich Railway (L&GR) to Croydon. At that time the L&GR line was under construction, and Parliament resisted the building of two railway termini in the same quarter of London, so that the L&CR would have to share the L&GR's London Bridge station. The line was built for ordinary locomotive operation. A third company, the London and Brighton Railway (L&BR) was promoted and it too had to share the route into London by running over the L&CR.
When the lines opened in 1839 it was found that congestion arose due to the frequent stopping services on the local Croydon line; this was particularly a problem on the 1 in 100 ascent from New Cross to Dartmouth Arms.[3] The L&CR engineer, William Cubitt proposed a solution to the problem: a third track would be laid on the east side of the existing double track main line, and all the local trains in both directions would use it. The faster Brighton trains would be freed of the delay following a stopping train. Cubitt had been impressed during his visit to the Dalkey line, and the new L&CR third track would use atmospheric power. The local line would also be extended to Epsom, also as a single track atmospheric line. These arrangements were adopted and Parliamentary powers obtained on 4 July 1843, also authorising a line to a terminal at Bricklayers Arms. Arrangements were also made with the L&GR for them to add an extra track on the common section of their route. On 1 May 1844 the Bricklayers Arms terminus opened, and a frequent service was run from it, additional to the London Bridge trains.[2][3][12]
Now atmospheric as well
The L&CR line diverged to the south-west at Norwood Junction (then called Jolly Sailor, after an inn), and needed to cross the L&BR line. The atmospheric pipe made this impossible on the flat, and a flyover was constructed to enable the crossing: this was the first example in the railway world.[13] This was in the form of a wooden viaduct with approach gradients of 1 in 50. A similar flyover was to be built at Corbetts Lane Junction, where the L&CR additional line was to be on the north-east side of the existing line, but this was never made.
A 15-inch diameter traction pipe was installed between Forest Hill (then called Dartmouth Arms, also after a local inn) and West Croydon. Although Samuda supervised the installation of the atmospheric apparatus, a weather flap, a hinged iron plate that covered the leather slot valve in the Dalkey installation, was omitted. The L&CR had an Atmospheric Engineer, James Pearson. Maudsley, Son and Field supplied the three 100 hp steam engines and pumps at Dartmouth Arms, Jolly Sailor and Croydon (later West Croydon), and elaborate engine houses had been erected for them. They were designed in a gothic style by W H Brakespear, and had tall chimneys which also exhausted the evacuated air at high level.[note 4]
A two-needle electric telegraph system was installed on the line, enabling station staff to indicate to the remote engine house that a train was ready to start.
This section, from Dartmouth Arms to Croydon started operation on the atmospheric system in January 1846.
The traction pipe slot and the piston bracket were handed; that is the slot closure flap was continuously hinged on one side, and the piston support bracket was cranked to minimise the necessary opening of the flap. This meant that the piston carriage could not simply be turned on a turntable at the end of a trip. Instead it was double ended, but the piston was manually transferred to the new leading end. The piston carriage itself had to be moved manually (or by horse power) to the leading end of the train. At Dartmouth Arms the station platform was an island between the two steam operated lines. Cubitt designed a special system of pointwork that enabled the atmospheric piston carriage to enter the ordinary track.[note 5]
The Board of Trade inspector, General Pasley, visited on 1 November 1845 to approve the whole line for opening. The Times newspaper reported the event; a special train left London Bridge hauled by a steam locomotive; at Forest Hill the locomotive was detached and:
the piston carriage substituted and the train thence became actuated by atmospheric pressure. The train consisted of ten carriages (including that to which the piston is attached) and its weight was upward of fifty tons. At seven and a half minutes past two the train left the point of rest at the Dartmouth Arms, and at eight and three-quarter minutes past, the piston entered the valve,[note 6] when it immediately occurred to us that one striking advantage of the system was the gentle, the almost imperceptible, motion on starting. On quitting the station on locomotive lines we have frequently experienced a "jerk" amounting at times to an absolute "shock" and sufficient to alarm the nervous and timid passenger. Nothing of the sort, however, was experienced here. Within a minute and a quarter of the piston entering the pipe, the speed attained against a strong headwind was at the rate of twelve miles an hour; in the next minute, viz. at eleven minutes past two, twenty-five miles an hour; at thirteen minutes past two, thirty-four miles an hour; fourteen minutes past two, forty miles an hour; and fifteen minutes past two, fifty-two miles an hour, which was maintained until sixteen minutes past two, when the speed began to diminish, and at seventeen and a half minutes past two, the train reached the Croydon terminus, thus performing the journey from Dartmouth Arms, five miles, in eight minutes and three-quarters. The barometer in the piston carriage indicated a vacuum of 25 inches and that in the engine house a vacuum of 28 inches.[note 7][14]
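As a rough cross-check of the reported figures, the short calculation below reproduces the quoted average speed and converts the barometer readings into pressure differentials. It assumes standard atmospheric pressure of 29.92 inHg (as in note 7); it is a back-of-envelope sketch, not part of the contemporary report.

```python
# Back-of-envelope check of the figures quoted in the report.
# Assumes standard atmospheric pressure of 29.92 inHg (see note 7).

distance_miles = 5.0    # Dartmouth Arms to Croydon
time_minutes = 8.75     # "eight minutes and three-quarters"
avg_speed_mph = distance_miles / (time_minutes / 60.0)
print(f"Average speed: {avg_speed_mph:.1f} mph")  # about 34 mph, against the 52 mph peak

psi_per_inhg = 14.696 / 29.92   # pressure exerted by one inch of mercury, in psi

for location, vacuum_inhg in [("piston carriage", 25.0), ("engine house", 28.0)]:
    diff_psi = vacuum_inhg * psi_per_inhg
    print(f"{vacuum_inhg} inHg of vacuum at the {location}: "
          f"pressure differential of about {diff_psi:.1f} psi")
```

The gap between the 28 inHg read at the engine house and the 25 inHg read on the train gives an idea of the leakage and friction losses along the pipe.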
The successful official public run was widely reported and immediately new schemes for long-distance railways on the atmospheric system were being promoted; the South Devon Railway's shares appreciated overnight.
Opening
Pasley's report of 8 November was favourable, and the line was clear to open. The directors hesitated, desiring to gain a little more experience beforehand. On 19 December 1845 the crankshaft of the Forest Hill stationary engine fractured, and the engine was unusable. However, the part was quickly replaced and on 16 January 1846 the line opened.
At 11:00 that morning the crankshaft of one of the Croydon engines broke. Two engines had been provided, so traffic was able to continue using the other,[note 8] until at 7:20 p.m. that engine suffered the same fate. Repairs were again made, but on 10 February 1846 both the Croydon engines failed.
This was a bitter blow for the adherents of the atmospheric system; shortcomings in the manufacture of the stationary engines procured from a reputable engine-maker said nothing about the practicality of the atmospheric system itself, but as Samuda said to the Board:
"The public cannot discriminate (because it cannot know) the cause of the interruptions, and every irregularity is attributed to the atmospheric system."[15]
Two months later the beam of one of the Forest Hill engines fractured. At this time the directors were making plans for the Epsom extension; they quickly revised their intended purchase of engines from Maudsley, and invited tenders; Boulton and Watt of Birmingham were awarded the contract, their price having been considerably less than their competitors'.
Amalgamation
The London and Brighton Railway amalgamated with the L&CR on 6 July 1846, forming the London, Brighton and South Coast Railway (LB&SCR). For the time being the directors of the larger company continued with the L&CR's intentions to use the atmospheric system.
Technical difficulties
The summer of 1846 was exceptionally hot and dry, and serious difficulties with the traction pipe flap valve started to show themselves. It was essential to make a good seal when the leather flap was closed, and the weather conditions made the leather stiff. As for the tallow and beeswax compound that was supposed to seal the joint after every train, Samuda had originally said "this composition is solid at the temperature of the atmosphere, and becomes fluid when heated a few degrees above it"[4] and the hot weather had that effect. Samuda's original description of his system had included a metal weather valve that closed over the flap, but this had been omitted on the L&CR, exposing the valve to the weather, and also encouraging the ingestion of debris, including, an observer reported, a handkerchief dropped by a lady on to the track. Any debris lodging in the seating of the flap could only have reduced its effectiveness.
Moreover the tallow, that is, rendered animal fat, was attractive to the rat population; their bodies, drawn into the traction pipe at the start of pumping each morning, told their own story. Delays became frequent owing to the inability to create enough vacuum to move the trains, and stoppages on the steep approach inclines at the flyover were commonplace and widely reported in the press.
The Directors now began to feel uneasy about the atmospheric system, and in particular the Epsom extension, which was to have three engines. In December 1846 they asked Boulton and Watt about cancelling the project, and were told that suspending the supply contract for a year would cost £2,300. The Directors agreed to this.
The winter of 1846/7 brought new meteorological difficulties: unusually cold weather made the leather flap stiff, and snow got into the tube,[note 9] resulting in more cancellations of the atmospheric service. A track worker was killed in February 1847 while steam substitution was in operation. This was tragically unfortunate, and it led to widespread reporting that the atmospheric system was, yet again, out of action.[16]
Sudden end
Through this long period, the Directors must have become less and less committed to pressing on with the atmospheric system, even as money was being spent on extending it towards London Bridge. (It opened from Dartmouth Arms to New Cross in January 1847, using gravitation northbound and the Dartmouth Arms pumping station southbound.) In a situation in which public confidence was important, the Directors could not express their doubts publicly, at least until a final decision had been taken. On 4 May 1847[17] the directors announced "that the Croydon Atmospheric pipes were pulled up and the plan abandoned".
The reason seems not to have been made public at once, but the trigger appears to have been the insistence of the Board of Trade inspector on a second junction at the divergence of the Brighton and Epsom lines. It is not clear what this refers to, and it may simply have been a rationalisation of the timing of a painful decision. Whatever the reason, there was to be no more atmospheric work on the LB&SCR.[2]
South Devon Railway
The Great Western Railway (GWR) and the Bristol and Exeter Railway, working collaboratively, had reached Exeter on 1 May 1844, with a broad gauge railway connecting the city to London. Interested parties in Devonshire considered it important to extend the connection to Plymouth, but the terrain posed considerable difficulties: there was high ground with no easy route through.
After considerable controversy, the South Devon Railway Company (SDR) obtained its Act of Parliament authorising a line, on 4 July 1844.
Determining the route
The company's engineer was Isambard Kingdom Brunel. He had visited the Dalkey line and had been impressed by the capabilities of the atmospheric system there. Samuda had always put forward the advantages of his system, which (he claimed) included much better hill-climbing ability and lighter weight on the track. This would enable a line in hilly terrain to be planned with steeper than usual gradients, saving substantial construction cost.
If Brunel had decided definitely to use the atmospheric system at the planning stage, it would have allowed him to strike a route that would have been impossible with the locomotive technology of the day. The route of the South Devon Railway, still in use today, has steep gradients and is generally considered "difficult". Commentators often blame this on it being designed for atmospheric traction; for example:
Sekon, describing the topography of the line, says that beyond Newton Abbot,
the conformation of the country is very unsuitable for the purpose of constructing a railway with good gradients. This drawback did not at the time trouble Mr. Brunel, the engineer to the South Devon Railway Company, since he proposed to work the line on the atmospheric principle, and one of the advantages claimed for the system being that steep banks were as easy to work as a level.[18]
The line "was left with a legacy of a line built for atmospheric working with the consequent heavy gradients and sharp curves".[19]
Brunel "seriously doubted the ability of any engine to tackle the kind of gradients which would be necessary on the South Devon".[20]
In fact the decision to consider the adoption of the atmospheric system came after Parliamentary authorisation, and the route must have been finalised before submission to Parliament.
Eight weeks after passage of the Act, the shareholders heard that "Since the passing of the Act, a proposal has been received ... from Messrs. Samuda Brothers ... to apply their system of traction to the South Devon Line." Brunel and a deputation of the directors had been asked to visit the Dalkey line. The report went on that as a result,
In view of the fact that at many points of the line both the gradients and curves will render the application of this principle particularly advantageous, your directors have resolved that the atmospheric system, including an electric telegraph, should be adopted on the whole line of the South Devon Railway.[21]
Construction started at once on the section from Exeter to Newton Abbot (at first called Newton); this first part is broadly level: it was the section onwards from Newton that was hilly. Contracts for the supply of the 45 horsepower (34 kW) pumping engines and machinery were concluded on 18 January 1845, to be delivered by 1 July in the same year. Manufacture of the traction pipes ran into difficulties: they were to be cast with the slot formed,[note 10] and distortion was a serious problem at first.
Delivery of the machinery and laying of the pipes was much delayed, but on 11 August 1846, with that work still in progress, a contract was let for the engines required over the hilly section beyond Newton. These were to be more powerful, at 64 horsepower (48 kW), and 82 horsepower (61 kW) in one case, and the traction pipe was to be of a larger diameter.
The train service started between Exeter and Teignmouth on 30 May 1846, but this was operated by steam engines, hired in from the GWR. At length, on 13 September 1847[note 11] the first passenger trains started operating on the atmospheric system.[22][23] Atmospheric goods trains may have operated a few days previously.
Four atmospheric trains ran daily in addition to the advertised steam service, but after a time they replaced the steam trains. At first the atmospheric system was used as far as Teignmouth only, from where a steam engine hauled the train including the piston carriage to Newton, where the piston carriage was removed, and the train continued on its journey. From 9 November some atmospheric working to Newton took place, and from 2 March 1848 all trains on the section were atmospheric.
Through that winter of 1847-8 a regular service was maintained to Teignmouth. The highest speed recorded was an average of 64 mph (103 km/h) over 4 miles (6.4 km) hauling 28 long tons (28 t), and 35 mph (56 km/h) when hauling 100 long tons (100 t).[citation needed]
Two significant limitations of the atmospheric system were overcome at this period. The first was that an auxiliary traction pipe was provided at stations; it was laid outside the track, therefore not obstructing pointwork. The piston carriage connected to it by a rope (the pipe must have had its own piston) and the train could be hauled into a station and on to the start of the onward main pipe. The second development was a level crossing arrangement for the pipe: a hinged cover plate lay across the pipe for road usage, but when the traction pipe was exhausted, a branch pipe actuated a small piston which raised the cover, enabling the piston carriage to pass safely and acting as a warning to road users. Contemporary technical drawings show the traction pipe considerably lower than normal, with its top about level with the rail heads, and with its centre at the level of the centre of the transoms. No indication is shown as to how track gauge was maintained.
Underpowered traction system
Starcross pumping house.
Although the trains were running ostensibly satisfactorily, there had been technical miscalculations. It seems[24] that Brunel originally specified 12-inch (300 mm) pipes for the level section to Newton and 15-inch (380 mm) pipes for the hilly part of the route, and in specifying the stationary engine power and vacuum pumps, he considerably underpowered them. The 12-inch (300 mm) pipes seem to have been scrapped, and 15-inch (380 mm) pipes installed in their place, and 22-inch (560 mm) pipes started to be installed on the hilly sections. Changes to the engine control governors were made to uprate them to run 50% faster than designed. It was reported that coal consumption was much heavier than forecast, at 3s 1½d per train mile instead of 1s 0d (and instead of 2s 6d, which was the hire charge for the leased GWR steam locomotives). This may have been partly due to the electric telegraph not yet having been installed, necessitating pumping according to the timetable even though a train might be running late. When the telegraph was ready, on 2 August, coal consumption in the following weeks fell by 25%.[25]
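The significance of the pipe diameter follows from the basic relationship the system obeys: tractive force equals the pressure differential multiplied by the piston's cross-sectional area. The sketch below compares the three pipe sizes mentioned above at an assumed working vacuum of 20 inHg; that vacuum figure is illustrative only, not a recorded South Devon value.

```python
import math

# Tractive force on the piston: F = pressure differential x piston cross-sectional area.
# The 20 inHg working vacuum is an illustrative assumption, not a recorded figure.
vacuum_inhg = 20.0
diff_psi = vacuum_inhg * (14.696 / 29.92)   # pressure differential, psi

for diameter_in in (12, 15, 22):
    area_sq_in = math.pi * diameter_in ** 2 / 4.0
    force_lbf = diff_psi * area_sq_in
    print(f"{diameter_in}-inch pipe: about {force_lbf:,.0f} lbf "
          f"({force_lbf / 2240:.1f} long tons of tractive effort)")
```

At the same vacuum, moving from a 15-inch to a 22-inch pipe increases the available effort by a factor of (22/15)², roughly 2.2, which is consistent with the larger pipe being chosen for the steep sections beyond Newton.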
Problems with the slot closure
During the winter of 1847–1848 the leather flap valve that sealed the traction pipe slot began to give trouble. During the cold days of winter the leather froze hard after saturation in rain. This resulted in its failing to seat properly after the passage of a train, allowing air into the pipe and reducing the effectiveness of pumping. In the following spring and summer the weather was hot and dry, and the leather valve dried out, with much the same result. Brunel had the leather treated with whale oil in an attempt to maintain flexibility. There was said to be a chemical reaction between the tannin in the leather and iron oxide on the pipe. There were also difficulties with the leather cup seal on the pistons.
Commentators observe that the South Devon system omitted the iron weather flap that was used on the Dalkey line to cover the flap valve. On that line iron plates were turned away immediately ahead of the piston bracket. It is not recorded why this was omitted in South Devon, but at speed that arrangement must have involved considerable mechanical force, and generated environmental noise.
In May and June even more serious trouble was experienced when sections of the flap tore away from their fixings and had to be quickly replaced. Samuda had a contract with the company to maintain the system, and he advised installation of a weather cover, but this was not adopted. It would not have rectified the immediate problem in any case: complete replacement of the leather flap was required, estimated to cost £32,000, a very large sum of money then, and Samuda declined to act.
Abandonment
With a contractual impasse during struggles to keep a flawed system in operation, it was inevitable that the end was near. At a shareholders' meeting on 29 August 1848 the directors were obliged to report all the difficulties, and that Brunel had advised abandonment of the atmospheric system; arrangements were being made with the Great Western Railway to provide steam locomotives, and the atmospheric system would be abandoned from 9 September 1848.
Brunel's report to the Directors, now shown to the meeting, was comprehensive, and he was also mindful of his own delicate position and of the contractual obligations of Samuda. He described the stationary engines, obtained from three suppliers: "These engines have not, on the whole, proved successful; none of them have as yet worked very economically, and some are very extravagant in the use of fuel." As to the difficulties with the leather valve in extremes of weather, heat, frost and heavy rain,
The same remedies apply to all three, keeping the leather of the valve oiled and varnished, and rendering it impervious to the water, which otherwise soaks through it in wet weather, or which freezes it in cold, rendering it too stiff to shut down; and the same precaution prevents the leather being dried up and shrivelled by the heat; for this, and not the melting of the composition, is the principal inconvenience resulting from heat. A little water spread on the valve from a tank in the piston carriage has also been found to be useful in very dry weather, showing that the dryness, and not the heat, was the cause of the leakage.
But there was a much more serious problem: "A considerable extent of longitudinal valve failed by the tearing of the leather at the joints between the plates. The leather first partially cracked at these points, which caused a considerable leakage, particularly in dry weather. After a time it tears completely through."
Maintenance of the traction pipe and the valve was Samuda's contractual responsibility, but Brunel indicated that he was blaming the company for careless storage, and for the fact that the valve had been installed for some time before being used by trains; Brunel declined to go into the liability question, alluding to possible palliative measures, but concluded:
The cost of construction has far exceeded our expectations, and the difficulty of working a system so totally different from that to which everybody, traveller as well as workmen, is accustomed, have (sic) proved too great; and therefore, although, no doubt, after some further trial, great reductions may be made in the cost of working the portion now laid, I cannot anticipate the possibility of any inducement to continue the system beyond Newton.[26]
Huge hostility was generated among some shareholders, and Samuda and Brunel in particular were heavily criticised, but the atmospheric system on the line was finished.
Retention recommended
Thomas Gill had been Chairman of the South Devon board and wished to continue with the atmospheric system. In order to press for this he resigned his position, and in November 1848 published a pamphlet urging retention of the system. He created enough support that an Extraordinary General Meeting of the Company was held on 6 January 1849. Lengthy technical discussion took place, in which Gill stated that Clark and Varley were prepared to contract to complete the atmospheric system and maintain it over a section of the line. There were, Gill said, twenty-five other inventors anxious to have their creations tried out on the line. The meeting lasted for eight hours, but finally a vote was taken: a majority of shareholders present were in favour of continuing with the system, by 645 shares to 567. However, a large block of proxies was held by shareholders who did not wish to attend the meeting, and with their votes abandonment was confirmed by 5,324 to 1,230.
That was the end of the atmospheric system on the South Devon Railway.
Rats
It is often asserted among enthusiasts' groups that the primary cause of the failure of the leather flap was rats, attracted to the tallow, gnawing at it. Although rats are said to have been drawn into the traction pipe in the early days, there was no reference to this at the crisis meeting described above.
Technical details
Wormwood Scrubs demonstration line
The piston carriage on the demonstration line was an open four-wheeled truck. No controls of any kind are shown on a drawing. The beam that carried the piston was called the "perch"; it was attached directly to the axles and pivoted at its centre point, and it had a counterweight to the rear of the attachment bracket (called a "coulter").
Dalkey line
The customary train consist was two coaches, the piston carriage, which included a guard's compartment and third class accommodation, and a first class carriage, with end observation windows at the rear. The guard had a screw brake, but no other control. Returning (descending) was done under gravity, and the guard had a lever which enabled him to swing the piston assembly to one side, so that the descent was made with the piston outside the tube.
Saint Germain line
The section put into service, Le Pecq to Saint Germain, was almost exactly the same length as the Dalkey line, and was operated in a similar way except that the descent by gravity was made with the piston in the tube so that air pressure helped retard speed. The upper terminal had sidings, with switching managed by ropes.[27]
London and Croydon
The piston carriages were six-wheeled vans with a driver's platform at each end, as they were double ended. The driver's position was within the carriage, not in the open. The centre axle was unsprung, and the piston assembly was directly connected to it. The driver had a vacuum gauge (a mercury manometer) connected by a metal tube to the head of the piston. Some vehicles were fitted with speedometers, an invention of Moses Ricardo. As well as a brake, the driver had a by-pass valve which admitted air to the partially exhausted traction tube ahead of the piston, reducing the tractive force exerted. This seems to have been used on the 1 in 50 descent from the flyover. The lever and valve arrangement are shown in a diagram in Samuda's Treatise.
Variable size piston
Part of Samuda's patent included the variable diameter piston, enabling the same piston carriage to negotiate route sections with different traction tube sizes. Clayton describes it: the change could be controlled by the driver while in motion; a lever operated a device rather like an umbrella at the rear of the piston head; it had hinged steel ribs. To accommodate the bracket for the piston, the traction tube slot, and therefore the top of the tube, had to be at the same level whatever the diameter of the tube, so that all of the additional space to be sealed was downwards and sideways; the "umbrella" arrangement was asymmetrical. In fact this was never used on the South Devon Railway, as the 22 inch tubes there were never brought into use; and the change at Forest Hill only lasted four months before the end of the atmospheric system there.[28] A variable diameter piston was also intended to be used on the Saint-Germain railway, where a 15 inch pipe was to be used from Nanterre to Le Pecq, and then a 25 inch pipe on the three and a half per cent grade up to Saint-Germain. Only the 25 inch section was completed, so a simple piston was used.[27]
Engine house locations, South Devon Railway
Exeter; south end of St Davids station, up side of the line
Countess Wear; south of Turnpike bridge, at 197m 22c, down side[note 12]
Turf; south of Turf level crossing, down side
Starcross; south of station, up side
Dawlish; east of station, up side
Teignmouth; adjacent to station, up side
Summer House; at 212m 38c, down side
Newton; east of station, down side
Dainton; west of tunnel, down side
Totnes; adjacent to station, up side
Rattery; 50.43156,-3.78313; building never completed
Torquay; 1 mile north of Torre station (the original terminal, called Torquay), up side
In the Dainton engine house a vacuum receiver was to be installed in the inlet pipe to the pumps. This was apparently an interceptor for debris that might be ingested into the traction pipe; it had an openable door for staff to clear the debris from time to time.[29]
Surviving pipe sections
Didcot Railway Centre, Didcot, Oxfordshire: three unused sections of South Devon 22 inch pipe, found under sand in 1993 at Goodrington Sands, near Paignton, mounted in 2000 with GWR rails recovered from another source.
Newton Abbot Town and GWR Museum, Newton Abbot, Devon: a very short cut section of unused South Devon 22 inch pipe, possibly from the 1993 discovery.
Being Brunel, Bristol: one section of unused South Devon 22 inch pipe, possibly from the 1993 discovery.
Museum of Croydon, Croydon: one section of London and Croydon 15 inch pipe with iron and leather valve intact, found in ground in 1933 at West Croydon station.
Modern systems
The nineteenth century attempts to make a practical atmospheric system (described above) were defeated by technological shortcomings. In the present day, modern materials have enabled a practical system to be implemented.
Towards the end of the twentieth century the Aeromovel Corporation of Brazil developed an automated people mover that is atmospherically powered. Lightweight trains ride on rails mounted on an elevated hollow concrete box girder that forms the air duct. Each car is attached to a square plate (the piston) within the duct, connected by a mast running through a longitudinal slot that is sealed with rubber flaps. Stationary electric air pumps are located along the line either to blow air into the duct to create positive pressure or to exhaust air from the duct to create a partial vacuum. The pressure differential acting on the piston plate causes the vehicle to move.
Electric power for lighting and braking is supplied to the train by a low voltage (50 V) current through the track the vehicles run on; this is used to charge onboard batteries. The trains have conventional brakes for accurate stopping at stations; these brakes are automatically applied if there is no pressure differential acting on the plate. Fully loaded vehicles have a ratio of payload to dead-weight of about 1:1, which is up to three times better than conventional alternatives.[30] The vehicles are driverless, with motion determined by lineside controls.[31] Aeromovel was designed in the late 1970s by the Brazilian Oskar H. W. Coester.[32]
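The same pressure-times-area principle sets the duct pressures a modern system needs. The figures in the sketch below (vehicle mass, plate area, resistance coefficient, acceleration) are illustrative assumptions rather than published Aeromovel specifications; they are included only to indicate the order of magnitude of the pressure differential involved.

```python
# Illustrative sizing sketch; the vehicle mass, plate area, resistance coefficient
# and acceleration below are assumptions, not published Aeromovel figures.
vehicle_mass_kg = 20_000      # assumed loaded vehicle mass
plate_area_m2 = 1.0           # assumed area of the piston plate in the duct
rolling_resistance = 0.005    # assumed resistance (N of drag per N of weight)
accel_mps2 = 1.0              # assumed service acceleration
g = 9.81

drag_n = vehicle_mass_kg * g * rolling_resistance
cruise_pressure_pa = drag_n / plate_area_m2
accel_pressure_pa = (drag_n + vehicle_mass_kg * accel_mps2) / plate_area_m2

print(f"Steady running on level track: about {cruise_pressure_pa / 1000:.1f} kPa "
      f"({cruise_pressure_pa / 101325:.1%} of one atmosphere)")
print(f"Accelerating at {accel_mps2} m/s^2: about {accel_pressure_pa / 1000:.0f} kPa "
      f"({accel_pressure_pa / 101325:.0%} of one atmosphere)")
```

On these rough numbers the required differential is a small fraction of an atmosphere, consistent with the use of low-pressure air pumps rather than deep vacuum plant.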
The system was first implemented in 1989 at Taman Mini Indonesia Indah, Jakarta, Indonesia. It was constructed to serve a theme park; it is a 2-mile (3.22 km) loop with six stations and three trains.[33]
The Aeromovel system is in operation at Porto Alegre Airport, Brazil. A line connecting the Estação Aeroporto (Airport Station) on the Porto Alegre Metro and Terminal 1 of Salgado Filho International Airport began operation on Saturday 10 August 2013.[34] The single line is 0.6 miles (1 km) long with a travel time of 90 seconds. The first 150-passenger vehicle was delivered in April 2013, with a 300-passenger second vehicle delivered later.
In 2016 construction commenced on a 4.7 km single line with seven stations in the city of Canoas. Construction was due to be completed in 2017, but in March 2018 the new city administration announced that the project had been suspended pending endorsement from central government and that equipment already purchased had been placed in storage. The new installation is part of a planned 18 km, two-line, twenty-four-station system in the city.[35][36][37]
Concept
Flight Rail Corp. in the USA has developed the concept of a high-speed atmospheric train that uses vacuum and air pressure to move passenger modules along an elevated guideway. Stationary power systems create vacuum (ahead of the piston) and pressure (behind the piston) inside a continuous pneumatic tube located centrally below rails within a truss assembly. The free piston is magnetically coupled to the passenger modules above; this arrangement allows the power tube to be closed, avoiding leakage. The transportation unit operates above the power tube on a pair of parallel steel rails.
The company currently has a 1/6 scale pilot model operating on an outdoor test guideway. The guideway is 2,095 feet (639 m) long and incorporates 2%, 6% and 10% grades. The pilot model operates at speeds up to 25 mph (40 km/h). The Corporation claims that a full-scale implementation would be capable of speeds in excess of 200 mph (322 km/h).[38]
See also
Cable railway – a more successful albeit slow way of overcoming steep grades.
Funicular – a system of overcoming steep grades using the force of gravity on downward cars to raise upward cars.
Steam catapult – used for launching aircraft from ships: the arrangement of seal and traveller is similar, although positive pressure is used.
Vactrain – a futuristic concept in which vehicles travel in an evacuated tube to minimise air resistance; the suggested propulsion system is not atmospheric.
Notes
^This may mean that the exhaust air was used to create a draught for the fires.
^It is not known exactly what form these points took, but some early engineers used switches in which the lead rails move together to form a butt joint with the approach rails, and it is likely Cubitt used this. The traction pipe can hardly have crossed the ordinary track and trains may have been moved by horses.
^75 seconds in moving the train by human or horse power to the pipe.
^These values are much higher than Samuda achieved during the Wormwood Scrubs demonstrations; standard atmospheric pressure is taken as 29.92 in Hg.
^The Maudsley engines consisted of two engines driving the same shaft; either could be disconnected if required.
^Snow inside the tube itself might not have been serious; it is likely that compacted snow in the valve seating was the real problem.
^In the Dalkey case the pipes were cast as complete cylinders, and the slot was then machined in.
^Kay states (page 25) that MacDermot and Hadfield wrongly say that Countess Wear house was on the up side of the line.
References
^R. A. Buchanan, The Atmospheric Railway of I. K. Brunel, Social Studies of Science, Vol. 22, No. 2, Symposium on 'Failed Innovations' (May 1992), pp. 231–2.
^ Howard Clayton, The Atmospheric Railways, self-published, Lichfield, 1966
^ Charles Hadfield, Atmospheric Railways, Alan Sutton Publishing Limited, Gloucester, 1985 (reprint of 1967), ISBN 0-86299-204-4
^ J d'A Samuda, A Treatise on the Adaptation of Atmospheric Pressure to the Purposes of Locomotion on Railways, John Weale, London, 1841
^Samuda's treatise; references to parts on diagrams omitted.
^ "Report on the railroad constructed from Kingstown to Dalkey, in Ireland, upon the atmospheric system, and on the application of this system to railroads in general (Abridged Translation)", Mons. Mallet, The Practical Mechanic and Engineer's Magazine, in 4 parts commencing May 1844, p. 279
Adrian Vaughan, Railway Blunders, Ian Allan Publishing, Hersham, 2008, ISBN 978-0-7110-3169-2; page 21 shows a photograph of L&CR traction tubes unearthed in 1933.
Arthur R Nicholls, The London & Portsmouth Direct Atmospheric Railway, Fonthill Media, 2013, ISBN 978 1 78155244 5; the story of an unsuccessful attempt at a trunk route
Recently, I made an idle threat on Twitter. I was thinking of creating some content along the lines of how to go from being a software developer to a software consultant. People ask me about this all the time, and it makes for an interesting subject. I was also flattered and encouraged by the enthusiastic response to the tweet.
I'm still mulling over the best delivery mechanism for such a thing. I could do another book, but I could also do something like a video course or perhaps a series of small courses. But whatever route I decide to go, I need to chart out the content before doing anything else. I could go a mile wide and a mile deep on that, but I'd say there's one sort of fundamental, philosophical key to becoming a software consultant. So today I'd like to speak about that.
Software Consultant, Differentiated
I won't bury the lede any further here. The cornerstone piece of advice I'll offer is the one upon which I'd build all of the rest of my content. You probably won't like it. Or, you'll at least probably think it should take a back seat to other pieces of advice like "be sympathetic" or "ask a lot of questions" or something. But, no.
Don't ever let would-be consulting clients pay you for code that you write.
Seriously. That's the most foundational piece of your journey from software developer to software consultant. And the reason has everything to do with something that successful consultants come to understand well: positioning. Now, usually, people talk about positioning in the context of marketing as differentiating yourself from competitors. Here, I'm talking about differentiating yourself from what you're used to doing (and thus obliquely from competitors you should stop bothering to compete with: software developers).
Let me explain, as I'm wont to do, with narrative.
Leonardo Da Vinci: Renaissance Plumber
By any reckoning, Leonardo Da Vinci was one of the most impressive humans ever to walk the planet. Among his diverse achievements, he painted the Mona Lisa, designed a tank, and made important strides in human anatomy. But let's say that, in a Bill and Ted-like deus ex machina, someone transported him 500 years into the future and brought him to the modern world.
Even someone as impressive as Leonardo would, no doubt, need a bit of time to get his bearings. So assume that, as he learned modern language, technology, and culture, he took a job as a plumber.
Let's assume that you happened to have a leaky sink faucet, and you called Leonardo's plumbing company for help. They dispatched him forthwith to take a look and to help you out.
So Leonardo comes over and, since he's Leonardo, figures out almost immediately that your supply line has come slightly loose. He tightens it, and you couldn't be more pleased with the result.
Leonardo, Ignored
Encouraged by your praise, Leonardo then gets a bit of a twinkle in his eye. He does some mental arithmetic and tells you that you could actually cut down on your water bill by about 15% if you adopted a counter-intuitive way of cleaning off your dishes after meals. He proceeds to tell you how that would work. And, while he's at it, he points out that the painting print on your kitchen wall isn't a good match for the paint in the room.
And do you know what you do in response to the genius of Leonardo Da Vinci teaching you a better dish washing scheme and helping you with your art? You smile, humor him, and think to yourself, "just fix the sink and get out of here." And you're absolutely right to do that.
Why? Because you have no way of knowing that he's Leonardo Freakin' Da Vinci. You just understand that you hired someone to fix a sink, and that someone is now giving you unsolicited advice about washing your dishes and decorating your house. You hired him to perform labor, and instead (or in addition), you're getting his opinions about your life.
Software Developer, Ignored
Anyone reading this who has spent time as a professional software developer can probably relate to my channeling of Da Vinci. You understand the better outcomes your company would have if you'd ramp down tech debt. You can easily see that management's waterfall "methodology" is ineffective and misery-inducing. And, while you're no expert, you even know that the company's branding and sales strategies are ineffective.
You offer constructive feedback, but nobody listens. You're Da Vinci, ignored. And I'm not patronizing you. You have to be smart to write software for a living, and in every shop I've ever visited, the software developers had good ideas that extended far beyond the boundaries of an IDE. And, usually, management humored them or else flat out ignored them. You could chalk this up to familiarity breeding contempt, but it's really the positioning that I mentioned earlier.
Management hired you to perform the labor of writing code. Your unsolicited opinions are not part of that equation. Because you're in high demand, people smile, nod, and humor you. But they don't care. That's the life of a software developer. File your suggestions in the little bin on the ground next to my desk, with a "suggestions" label taped over the "trash" label.
Would-Be Software Consultant, Ignored
Let's say that you tire of this at some point. You decide you love the industry and that you love software, but you want more influence. Management isn't for you, so "consulting" it is. Maybe you hang out your shingle to freelance, or maybe you go work for a software "consulting" firm. Now, it'll be different. Now, people will listen.
And then, to your intense frustration, it doesn't turn out that way. Even though you have "consultant" in your title and charter, people still humor you and say, "whatever, buddy, just code up the spec."
What gives? Well, a big part of the problem lies in the dilution of the term "consultant" in our line of work. Everyone at your client's site who doesn't have a W2 arrangement with that client is a "consultant," whether they personally advise the CIO or whether they lock themselves in a broom closet and write stored procedures.
And, to make matters worse, every firm that does custom app dev and calls it consulting positions itself in an entirely predictable way. "Oh, heavens no. Our consultants aren't just coders; they write code AND provide thought leadership and advice."
That's so utterly expected that some clients would probably find it refreshing to hear one of these shops or people say, "nope, we just turn specs into code." Thank goodness. I finally don't have to listen to the plumber talk about my choice of wall decorations.
Positioning Like You Mean It
So let's take an honest look at the software consultant's situation. All that "consultant" really tells anyone for sure is that you do work for someone that doesn't send you a W2. But, if they play the odds, it tells them that you write code for someone that doesn't send you a W2 and will offer a lot of opinions, whether anyone wants them or not. The stock software consultant persona, and thus default positioning, is then "opinionated, above average developer."
Now the people interested in my prospective book or course are people that actually want to be consultants. Firms don't pay consultants for labor (or code); they pay consultants for their opinions. So, here's the rub. If you introduce yourself as a software consultant, or someone else introduces you that way, your default positioning is "coder." But, to achieve your objective, you need to position yourself as an actual consultant, getting paid for advice.
While many subtle options exist to nudge yourself in that direction, you have one foundational one. Don't let your clients pay you to write code.
In a world where every software developer writing code for another company is a "consultant," you can position yourself as an actual consultant by not writing code for pay. Nobody confuses you with a pro coder then.
Medicine as a Metaphor
Jonathan Stark, of Ditching Hourly and the Freelancer's Show, has a great metaphor to help you understand the positioning concept. And I'll use it here to drive home a major differentiation between consulting and laboring (i.e. writing code).
He talks about four phases of solving problems for companies. Those include diagnosis, prescribing a cure, application of the cure, and re-application of the cure. Software developers and most so-called software consultants involve themselves almost exclusively in phase three: application. But that's a pretty low leverage place to be. Consultants exist almost exclusively in phases one and two: diagnosing and prescribing. They let laborers take care of phase three and even lower status laborers take care of phase four.
Think of it in terms of other knowledge workers. You go to the doctor with an ailment, and the doctor figures out the ailment and prescribes medicine. But if that medicine involves rubbing stuff on the bottom of your feet 5 times per day, he doesn't also handle that; it's below his pay grade. You do that yourself, or you hire a masseuse or something.
When you write code as a software "consultant," you tell people that you're in the business of diagnosis and prescription. But when the rubber meets the road, you spend almost all of your time slathering stuff on people's feet and talking at length about ("consulting on") the best ways to slather.
Now, imagine an industry in which diagnosticians and slatherers alike all called themselves doctors. When you needed a diagnosis, you'd reflexively start to look for people without goop on their hands in order to tell the difference.
Caveats
First of all, let me clear something up immediately. I can practically write the comment myself. Someone is going to read this and say, "well, I'm a consultant that writes code for my clients and they actually asked me whether they should adopt Scrum or not and then listened." Yes, I believe that, in the same way I believe that management does sometimes listen to staff software developers' opinions. It happens. But it's a far cry from you being there only to tell them whether or not to adopt Scrum.
Secondly, you can write code in a consultative capacity. Coaches and trainers make excellent examples. Notice that I said not to let people pay you for code that you write. Companies don't pay trainers for the code that they write, but rather for the service of showing their team how to write code. As a rule of thumb for differentiating, ask yourself whether the client depends on you to code something intended for production. If the answer is yes, you're slathering foot goop and not diagnosing.
And, finally, I won't dispute that some people may walk this line with more than ephemeral success. Maybe everywhere they go, they roll up their sleeves and crank code all morning, only to then go into the CIO's office and provide strategy advice. I've never actually seen that or anything close to it, but it could happen. Or, maybe even more likely, someone consults for some clients and codes for others. Whatever the arrangement, some people might succeed in perpetually walking the consultant-coder line. And, good for them. But what I can tell you is that this is the exception and not the rule.
There's an Awful Lot More to Consulting, But Here's Your Start
As I mentioned early in the post, I could fill a book or course(s) with information about how to succeed as a software consultant. Going from software developer to software consultant seems kind of straightforward, but that's really fools' gold. It's superficially easy if you accept the extremely loose definition of consulting, but not if you seriously want to get paid for expert opinions, diagnoses and prescriptions instead of for writing code. Then you have a good bit of learning and skill acquisition ahead of you.
So why, of all things, do I pick avoiding writing code as foundational? Well, as I've said all along, positioning yourself is critical, and that's your single best piece of positioning. In order to get paid for diagnosing, you need someone asking you for a diagnosis and not asking you to slather stuff on their foot and call that diagnosing.
But there's an even subtler reason for the emphasis on not coding, as well. Writing code is satisfying, fun, and extremely marketable. And so finding people to pay you to write code is tantalizingly easy. They need your programming skills so badly, they'll probably even call you the "CEO of teh codez" if that's what you want to be called. You have a ready-made crutch.
Refusing to write code for clients means forcibly removing the crutch. Doing this, you're like a non-native language speaker who flies to a foreign country and practices learning by immersion. You have no crutch, and no choice but to figure it out. You can write code for fun, in your spare time, and to support your practice. But if you want to get serious about consulting, stop slathering so that you can start diagnosing. Don't let 'em pay you for your code.
"},{"title":"Amazon raises minimum wage to $15 an hour","duration":"03:00","sourceName":"CNN Business","sourceLink":"","videoCMSUrl":"/video/data/3.0/video/business/2018/10/02/amazon-minimum-wage.cnn-business/index.xml","videoId":"business/2018/10/02/amazon-minimum-wage.cnn-business","videoImage":"//cdn.cnn.com/cnnnext/dam/assets/181002115644-01-us-amazon-employees-file-restricted-large-169.jpg","videoUrl":"/videos/business/2018/10/02/amazon-minimum-wage.cnn-business/video/playlists/business-amazon/","description":"Dave Clark, Amazon's SVP of Global Operations, tells CNN's Christine Romans that the minimum wage increase will \"help us hire and retain the best people over the course of time.\"","descriptionText":"Dave Clark, Amazon's SVP of Global Operations, tells CNN's Christine Romans that the minimum wage increase will \"help us hire and retain the best people over the course of time.\""},{"title":"See Amazon's new Prime delivery initiative","duration":"01:35","sourceName":"CNN Business","sourceLink":"","videoCMSUrl":"/video/data/3.0/video/business/2018/09/21/amazon-prime-delivery-partners-orig.cnn-business/index.xml","videoId":"business/2018/09/21/amazon-prime-delivery-partners-orig.cnn-business","videoImage":"//cdn.cnn.com/cnnnext/dam/assets/180921124933-amazon-prime-partners-cnn-business-large-169.jpg","videoUrl":"/videos/business/2018/09/21/amazon-prime-delivery-partners-orig.cnn-business/video/playlists/business-amazon/","description":"Amazon has announced a new program to create small businesses that can deliver its packages in branded vans and uniforms.","descriptionText":"Amazon has announced a new program to create small businesses that can deliver its packages in branded vans and uniforms."},{"title":"Thanks to Amazon you can talk to your microwave","duration":"01:13","sourceName":"CNNMoney","sourceLink":"http://money.cnn.com","videoCMSUrl":"/video/data/3.0/video/cnnmoney/2018/09/20/amazon-alexa-microwave.cnnmoney/index.xml","videoId":"cnnmoney/2018/09/20/amazon-alexa-microwave.cnnmoney","videoImage":"//cdn.cnn.com/cnnnext/dam/assets/180920184423-amazon-alexa-microwave-large-169.jpg","videoUrl":"/videos/cnnmoney/2018/09/20/amazon-alexa-microwave.cnnmoney/video/playlists/business-amazon/","description":"Amazon's Alexa-enabled microwave can respond to your commands and figure out how long to cook your dinner.","descriptionText":"Amazon's Alexa-enabled microwave can respond to your commands and figure out how long to cook your dinner."}],'js-video_headline-featured-gbmiqd','',"js-video_source-featured-gbmiqd",true,true,'business-amazon');if (typeof configObj.context !== 'string' || configObj.context.length
Windows has long had a reputation for slow file operations and slow process creation. Have you ever wanted to make these operations even slower? This week's blog post covers a technique you can use to make process creation on Windows grow slower over time (with no limit), in a way that will be untraceable for most users!
And, of course, this post will also cover how to detect and avoid this problem.
This issue is a real one that I encountered earlier this year, and this post explains how I uncovered the problem and found a workaround. It joins my previous posts on ways to make Windows slower.
Noticing that something is wrong
I don't go looking for trouble, but I sure seem to find it. Maybe it's because I build Chrome from source hundreds of times over the weekend, or maybe I'm just born with it. I guess we'll never know. For whatever reason, this post documents the fifth major problem that I have encountered on Windows while building Chrome.
And this one involves an odd design decision that makes process creation grow slower over time.
Tracking a rare crash
Computers should be reliable and predictable and I get annoyed when they aren't. If I build Chrome a few hundred times in a row then I would like every build to succeed. So, when our distributed compiler process (gomacc.exe) would crash occasionally I wanted to investigate. I have automatic recording of crash dumps configured so I could see that the crashes happened when heap corruption was detected. A simple way of investigating that is to turn on pageheap so that the Windows heap puts each allocation on a separate page. This means that use-after-free and buffer overruns become instant crashes instead of hard to diagnose corruption. I've written about enabling pageheap using App Verifier before.
App Verifier causes your program to run more slowly, both because allocations are now more expensive and because the page-aligned allocations mean that your CPU's cache is mostly neutered. So, I expected my builds to run a bit slower, but not too much, and indeed the build seemed to be running fine.
But when I checked in later the build seemed to have stopped. After about 7,000 build steps there was no apparent sign of progress.
O(n^2) is usually not okay
It turns out that Application Verifier likes to create log files. Never mind that nobody ever looks at these log files, it creates them just in case. And these log files need to have unique names. And I'm sure it seemed like a good idea to just give these log files numerically ascending names like gomacc.exe.0.dat, gomacc.exe.1.dat, and so on.
To get numerically ascending names you need to find what number you should use next, and the simplest way to do that is to just try the possible names/numbers until you find something that hasn't been used. That is, try to create a new file called gomacc.exe.0.dat and if that already exists then try gomacc.exe.1.dat, and so on.
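As a rough illustration (this is a Python sketch of the search pattern described above, not App Verifier's actual code; the directory and prefix names are just placeholders):

import os

def next_log_name(log_dir, prefix="gomacc.exe"):
    """Linear search for the first unused log number, as described above.
    Every probe is a file-existence check, so the cost of naming the
    Nth log file grows with N."""
    n = 0
    while os.path.exists(os.path.join(log_dir, f"{prefix}.{n}.dat")):
        n += 1
    return os.path.join(log_dir, f"{prefix}.{n}.dat")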
What's the worst that could happen?
Actually, the worst is pretty bad
It turns out that if you do a linear search for an unused file name whenever you create a process then launching N processes takes O(N^2) operations. A good rule of thumb is that O(N^2) algorithms are too slow unless you can guarantee that N always stays quite small.
Exactly how bad this will be depends on how long it takes to see if a file name already exists. I've since done measurements that show that in this context Windows seems to take about 80 microseconds (80 µs, or 0.08 ms) to check for the existence of a file. Launching the first process is fast, but launching the 1,000th process requires scanning through the 1,000 log files that have already been created, and that takes 80 ms, and it keeps getting worse.
A typical build of Chrome requires running the compiler about 30,000 times. Each launch of the compiler requires scanning over the previously created N log files, at 0.08 ms for each existence check. The linear search for the next available log file name means that launching N processes takes (N^2)/2 file existence checks, so 30,000 * 30,000 / 2 which is 450 million. Since each file existence check takes 0.08 ms thatβs 36 million ms, or 36,000 seconds. That means that my Chrome build, which normally takes five to ten minutes, was going to take an additional ten hours.
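The arithmetic is simple enough to check directly; this little Python snippet just reproduces the estimate above:

launches = 30_000            # compiler launches in a typical Chrome build
check_ms = 0.08              # ~80 microseconds per file-existence check
total_checks = launches * launches / 2   # linear search costs roughly N^2/2 probes
extra_hours = total_checks * check_ms / 1000 / 3600
print(f"{total_checks:,.0f} checks -> about {extra_hours:.0f} extra hours")
# prints: 450,000,000 checks -> about 10 extra hours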
Darn.
When writing this blog post I reproduced the bug by launching an empty executable about 7,000 times and I saw a nice O(n^2) curve like this:
Oddly enough, if you grab an ETW trace and just look at the average time to call CreateFile on these many different file names then the result, from beginning to end, suggests that it takes less than five microseconds per file (an average of 4.386 microseconds in the example below):
It looks like this just reveals a limitation of ETW's file I/O tracing. The file I/O events only track the very lowest level of the file system, and there are many layers above Ntfs.sys, including FLTMGR.SYS and ntoskrnl.exe. However the cost can't hide entirely: the CPU time all shows up in the CPU Usage (Sampled) graph. The screenshot below shows a 548 ms time period, representing the creation of one process, mostly just scanning over about 6,850 possible log file names:
Would a faster disk help?
No.
The amount of data being dealt with is tiny, and the amount being written to disk is even tinier. During my tests to repro this behavior my disk was almost completely idle. This is a CPU bound problem because all of the relevant disk data is cached. And, even if the overhead was reduced by an order of magnitude it would still be too slow. You can't make an O(N^2) algorithm be good.
Detection
You can detect this specific problem by looking in %userprofile%\appverifierlogs for .dat files. You can detect process creation slowdowns more generally by grabbing an ETW trace, and now you know one more thing to look for.
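If you want a quick check of a machine, a few lines of Python will count the accumulated log files (a minimal sketch; the directory is the one named above):

import os

log_dir = os.path.expandvars(r"%userprofile%\appverifierlogs")
if os.path.isdir(log_dir):
    dat_files = [f for f in os.listdir(log_dir) if f.lower().endswith(".dat")]
    print(f"{len(dat_files)} App Verifier log files in {log_dir}")
else:
    print("No App Verifier log directory found")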
The solution
The simplest solution is to disable the generation of the log files. This also stops your disk from filling up with GB of log files. You can do that with this command:
appverif.exe -logtofile disable
With log file creation disabled I found that my tracked processes started about three times faster (!) than at the beginning of my test, and the slowdown is completely avoided. This allows 7,000 Application Verifier monitored processes to be spawned in 1.5 minutes, instead of 40 minutes. With my simple test batch file and simple process I see these process-creation rates:
200 per second normally (5 ms per process)
75 per second with Application Verifier enabled but logging disabled (13 ms per process)
40 per second with Application Verifier enabled and logging enabled, initially (25 ms per process, increasing without limit)
0.4 per second after building Chrome once
Microsoft could fix this problem by using something other than a monotonically increasing log-file number. If they used the current date and time (to millisecond or higher resolution) as part of the file name then they would get log file names that were more semantically meaningful, and could be created extremely quickly with virtually no unique-file-search logic.
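As an illustration of that suggestion (a sketch only, not Microsoft's code), a timestamp-plus-PID name can be generated in constant time, with no scan over existing files:

import os
from datetime import datetime

def timestamped_log_name(log_dir, prefix="gomacc.exe"):
    """Constant-time alternative: embed a microsecond-resolution timestamp
    and the process ID instead of searching for the next free number."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S-%f")
    return os.path.join(log_dir, f"{prefix}.{stamp}.{os.getpid()}.dat")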
But, Application Verifier is not being maintained anymore, and the log files are worthless anyway, so just disable them.
Supporting information
The batch files and script to recreate this after enabling Application Verifier for empty.exe can be found here.
An ETW trace from around the end of the experiment can be found here.
The raw timing data used to generate the graph can be found here.
"TNCs and Congestion" report provides the first comprehensive analysis of how Transportation Network Companies Uber and Lyft collectively have affected roadway congestion in San Francisco.
Key findings in the report:
The report found that Transportation Network Companies accounted for approximately 50 percent of the rise in congestion in San Francisco between 2010 and 2016, as indicated by three congestion measures: vehicle hours of delay, vehicle miles travelled, and average speeds.
Employment and population growth were primarily responsible for the remainder of the worsening congestion.
Major findings of the TNCs & Congestion report show that collectively the ride-hail services accounted for:
51 percent of the increase in daily vehicle hours of delay between 2010 and 2016;
47 percent of the increase in vehicle miles travelled during that same time period; and
55 percent of the average speed decline on roadways during that same time period.
On an absolute basis, TNCs comprise an estimated 25 percent of total vehicle congestion (as measured by vehicle hours of delay) citywide and 36 percent of delay in the downtown core.
Consistent with prior findings from the Transportation Authority's 2017 TNCs Today report, TNCs also caused the greatest increases in congestion in the densest parts of the city - up to 73 percent in the downtown financial district - and along many of the city's busiest corridors. TNCs had little impact on congestion in the western and southern San Francisco neighborhoods.
The report also found that changes to street configuration (such as when a traffic lane is converted to a bus-only lane) contributed less than 5 percent to congestion.
If you have questions about "TNCs Today," or are interested in a research collaboration, please contact Joe Castiglione, Deputy Director for Technology, Data and Analysis via email or Drew Cooper, Planner, via email.
This is the most important column I've ever written. The message is quite complex: dozens of new health parameters to test for and to optimize, all of them interacting in ways that will require new training for MDs. The message is also as simple as it can be: There is a cure for Alzheimer's disease. You can stop reading right here, and buy two copies of Dale Bredesen's book, one for you and one for your doctor: The End of Alzheimer's.
Dr Bredesen's spectacular success is easily lost in a flood of overly optimistic, early hype about any number of magic cures. This is an excuse for the New York Times, the Nobel Prize committee, and the mainstream of medical research, but it's no excuse for me. I've known Bredesen for 14 years, and I've written about his work in the past. His book has been out for a year, and I should have written this column earlier.
I suspect you're waiting for the punch line: what is Bredesen's cure? That's exactly what I felt when I read about his work three years ago. But there isn't a short answer. That's part of the frustration, but it's also a reason that Bredesen's paradigm may be a template for novel research approaches to cancer, heart disease, and aging itself.
The Bredesen protocol consists of a battery of dozens of lab tests, combined with interviews, consideration of lifestyle, home environment, social factors, dentistry, leaky gut, mineral imbalances, hormone imbalances, sleep and more. This leads to an individual diagnosis: Which of 36 factors known to affect APP cleavage are most important in this particular case? How can they be addressed for this individual patient?
Brain cells have on their surface a protein called APP, which is a dependence receptor. It is like a self-destruct switch whose default is in the ON position. The protein that binds to the receptor is a neurotrophin ligand, and in the absence of the neurotrophin ligand, the receptor signals the cell to die.
APP cleavage is the core process that led Bredesen down a path to his understanding of the etiology of AD 16 years ago. APP is Amyloid Precursor Protein, and it is sensitive to dozens of kinds of signals, adding up the pros and the cons to make a decision, to go down one of two paths. It can be cleaved in two, creating signal molecules that cause formation of new synapses and formation of new brain cells; or it can be cleaved in four, creating signal molecules that lead to trimming back of existing synapses, and eventually, to apoptosis, cell suicide of neurons.
In a healthy brain, these two processes are balanced so we can learn new things and we can forget what is unimportant. But in the Alzheimer's brain, destruction (synaptoclastic) dominates creation (synaptoblastic), and the brain withers away.
When the APP molecule goes down the branch where it is split in four, one of the four fragments is beta amyloid. Beta amyloid blocks the dependence receptor, so the receptor cannot receive the neurotrophin ligand that gives it permission to go on living.
One of the signals that determines whether APP splits in 2 or in 4 is beta amyloid itself. This implies a positive feedback loop: beta amyloid leads to even more beta amyloid, and in the Alzheimer's patient this is a runaway process. But positive feedback loops work in both directions, and that is a boon to Bredesen's clinical approach. If the balance in signaling can be tipped from the destructive pathway to the creative one, this can lead to self-reinforcing progress in the healing direction. In the cases where Bredesen's approach has led to stunning reversals of cognitive loss, this is the underlying mechanism that explains the success.
Amyloid has been identified with AD for decades, and for most of that time the mainstream hypothesis was that beta-amyloid plaques cause the disease. (Adherents to this view have been referred to jokingly as BAPtists.) But success in dissolving the plaques has not led to restored cognitive function. In Bredesen's narrative, generation of large quantities of beta amyloid is a symptom of the body's attempts to triage a dying brain.
To tip the balance back toward growing new synapses
Having identified the focal point that leads to AD, Bredesen went to work first in the lab, then in the clinic, to identify processes that tend to tip the balance one way or the other. He has compiled quite a list.
Reduce APP β-cleavage
Reduce γ-cleavage
Reduce caspase-6 cleavage
Reduce caspase-3 cleavage (All the above are cleavage in 4)
Reduce NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells)
Increase telomere length
Reduce glial scarring
Enhance stem-cell-mediated brain repair
This explains why no single drug can have much effect on AD; it's because the primary decision point depends on a balance among so many pro-AD (synaptoclastic) and anti-AD (synaptoblastic) signals. Addressing them all may be impractical in any given patient, so the Bredesen protocol is built around a detailed diagnostic process that identifies the factors that are most important in each individual case.
Three primary types of AD
Bredesen's diagnosis begins with classifying each case of AD into one of three broad constellations of symptoms, with associated causes.
Type I is inflammatory. It is found more often in people who carry one or two ApoE4 alleles (a gene long associated with Alzheimer's) and runs in families. Laboratory testing will often demonstrate an increase in C-reactive protein, interleukin-2, tumor necrosis factor, and insulin resistance, and a decrease in the albumin:globulin ratio.
Type II is atrophic. It also occurs more often in those who carry one or two copies of ApoE4, but about a decade later. Here we do not see evidence of inflammatory markers (they may be decreased), but rather deficiencies of support for our brain synapses. These include decreased hormonal levels of thyroid, adrenal, testosterone, progesterone and/or estrogen, low levels of vitamin D, and elevated homocysteine.
Type III is toxic. This occurs more often in those who carry the ApoE3 allele rather than ApoE4, so it does not tend to run in families. This type tends to affect more brain areas, which may show neuroinflammation and vascular leaks on a type of MRI called FLAIR, and it is associated with low zinc levels, high copper, low cortisol, high reverse T3, and elevated levels of mercury or mycotoxins, or infections such as Lyme disease with its associated coinfections.
There's also a Type 1.5, associated with diabetes and sugar toxicity, a Type IV, which is vascular dementia, and a Type V, which is traumatic damage to the brain. These categories are just a start. The patient will work closely with an expert physician to determine, first, which are the most important imbalances to address, and, second, which of the changes that can address them are most accessible given the lifestyle of this particular patient.
Success
Bredesen wrote a paper in 2014 about successes in reversing cognitive decline with his first ten patients. As of this writing, he has treated over 3,000 patients with the protocol called RECODE (for REversal of COgnitive DEcline), and he claims success with all of them, in the sense of measurable improvement in cognitive performance. This contrasts with the utter failure of all previous methods, which claim, at best, to slow cognitive decline.
Translation to the millions of Alzheimer's patients will require training of local practitioners all across the country. A few doctors have already learned parts of the Bredesen protocol, and Bredesen's website can help you find someone to guide your program, but you will probably have to travel. The first training for doctors is being organized now through the Institute for Functional Medicine.
Implications
This is a new paradigm for how to study chronic, debilitating diseases. Type 2 diabetes comes to mind as the next obvious candidate for reversal through an individualized, comprehensive program. Terry Wahls has pioneered a similar approach with MS. Cancer and heart disease may be in the future.
I'll go out on a limb and say I think Bredesen's protocol is the most credible generalized anti-aging program we have. (Blame me for the hyperbole, not Dr Bredesen; he has never made any such claim.) Could we adopt Bredesen's research method to accelerate research in anti-aging medicine? Perhaps biomarkers for aging (especially methylation age) are approaching a point where they could be used as feedback for an individualized program, but Horvath's PhenoAge clock will probably have to be 10 times more accurate to be used for individuals. Averaging over ~100 individuals can give this factor of 10 in a clinical trial. Still, we don't have the kind of mechanistic understanding of aging that Bredesen himself developed for AD before bringing his findings to the clinic; and this is probably because the causes of aging are more complex and varied than those of AD.
Disclaimers: I'm predisposed to think highly of Dale Bredesen and his ideas for three reasons. He was a friend to me, and gave me a platform when I was new to the field of aging. He believes that aging is programmed. And his multi-factorial approach parallels the approach I have advocated for research into other aspects of aging.
The tightly knit team of a half dozen coders converged in July in a 17th floor hotel suite with a panoramic view of the Las Vegas strip. They had picked the center of gambling in America for a symbolic reason: the team had just spent three years working together to build a new kind of prediction market called Augur, to make it possible for bettors anywhere to place wagers on anything.
Now they were about to unleash their creation on the world by uploading the code to the Ethereum network. Where else could they have gone for this historic moment but to the Strip?
Under a glitzy chandelier in a sky suite strewn with pizza boxes at the Aria Resort, it took the programmers a full day to finish the code and upload all the smart contracts for the decentralized application. They didn't mind: after three years of working together, mostly in a ratty little house on the edge of San Francisco, it was unclear when or if they might see each other again.
That had been the plan all along. They weren't creating a business where they hoped they all might work. This new kind of enterprise was more like a group of people making an indie movie. They were releasing a protocol: a piece of software that would live forever on the Net. And after that? They would all simply walk away and find something else to work on.
Was it thrilling? "It was not thrilling," Jack Peterson, Augur's co-founder, recalls.
Oh, but it was. A lot was at stake. In fact, thousands of people had been betting on Augur for three years, including the 2,500 investors who had bought $5.3 million worth of highly speculative digital "REP" tokens to sponsor its development. Though its creators insist it was a "presale of software licenses," that event in 2015 was in effect one of the first initial coin offerings.
In the days after the Vegas launch, Augur did not disappoint. Thousands of users had traded upwards of $1.5 million on Augur, and the value of the REP digital tokens grew to the mid-$30 range. Hundreds of betting markets proliferated on Augur's interface. Would U.S. President Trump be re-elected? Who would win the France-Belgium semi-final in the 2018 World Cup? Would the price of ether exceed $500 by the year's end?
But then, within a week, Augur fell to earth. Only a few dozen people were trading it daily. Users complained about the clunky interface, and started to notice the abundance of dud markets ("Does God exist?"). Worse, morally challenged "assassination markets" emerged, which some observers believed might actually encourage bettors to kill celebrities if the jackpot got high enough. Some news outlets declared Augur a joke. One publication lamented the "hype, the horror, and the letdown of prediction market Augur."
The truth lies somewhere in between.
If you want to understand what's happening on the Internet right now and why thousands of developers and billions of dollars are being focused on the promise of web3, Augur is a pretty good place to start. Obviously, it is not a Facebook, Google or Twitter, certainly not at this point in history, and maybe never.
But it represents one of those moments in technology that signals the start of something potentially huge. A better analogy might be to consider one of the early Wright Brothers flights at Kitty Hawk. The Wright Flyer wasn't much to look at, took you 120 feet and only stayed airborne for seconds. But it was flying, and that was something that might lead somewhere huge, wasn't it?
Vitalik Buterin, the person most people consider the godfather of Ethereum, certainly thought so. Most people don't know that he was a seed investor, consultant and muse to Augur. Augur, he says, is a success. "Even if it ends up with only 45 users, creating an application of this level of complexity and turning it into an actually working system is still a huge achievement."
Fortune's Children
Krug's old gaming computer, maxed out with Radeon GPUs to mine bitcoin.
Augur's story starts with Joey Krug, who was born in Knoxville, Illinois in 1995 to an ER nurse and a doctor. He grew up to love betting, business, and bitcoin, and by age 13, he was already winning "thousands" playing the ponies and the stock market, carefully filing the results in an Excel spreadsheet.
He learned about bitcoin in 2011, after reading an article on GPU mining on overclock.net, a hardware site. It ignited his business instincts: by simply hitching a few Radeon GPU units to a gaming computer and letting the whole thing whirr, he was able to earn money, right in the comfort of his childhood bedroom.
It was bitcoin that prompted him to drop out of Pomona College, California, where he studied computer science, after his freshman year. He left to write third-party bitcoin applications, including an app for buying things in bitcoin, which he abandoned once he realised "nobody wanted to buy things in bitcoin."
Nevertheless, his marginal interests connected him to a Skype group, around 2014, with Buterin, who would soon co-create Ethereum, as well as Peterson, a then 32-year-old engineer and biophysicist working on his own abortive blockchain project, a startup called Dyffy.
Peterson was born in 1982 and grew up in Atlanta, Georgia. Unlike Krug, he had never been much of a gambler. He wasn't even into money that much. He had had a stash of 100 bitcoins, which he accidentally wiped when reformatting his hard drive. Though he narrowly escaped great riches, he has no regrets.
But he thought Intrade, an early prediction market that had been abruptly shut down in 2013, was "incredibly cool." It was unclear what caused Intrade's shutdown; the company claimed it had been forced to shut down due to discovering "financial irregularities." Others pointed to a U.S. government lawsuit that prohibited people in the U.S. from using it, which had cut it off from its American market.
Whatever it was, Peterson remembers wishing that "Intrade could be like bitcoin." By distributing a prediction market's administration across a global, independent network of computers, he reasoned, it would have no single point of failure. Traders could go on trading, no matter what.
Ethereum Savant
Buterin, whom Krug describes, with mathematical precision, as a "value-added person," would provide the spark. It was 2014, and his invention, Ethereum, was emerging as a programmable alternative to the bitcoin network. He was mulling over how to resolve a problem with Ethereum's "smart contracts," digital agreements that fulfill themselves algorithmically, without human intervention. There was just one problem: what would verify that the conditions of a contract had been met, if not humans?
"Blockchain doesn't know things from the outside world," Buterin explains. "It doesn't know what time it is, what the temperature is." For complex smart contracts to work, "you need to source that info somewhere," he says. That's known as the "Oracle Problem."
During his research, Buterin came across a widely circulated Princeton treatise, "On decentralizing prediction markets and limit order books," as well as a paper by Paul Sztorc, a statistician at Yale University, detailing a protocol called "Truthcoin." Both papers, loosely, advocated deferring the smart contracts' truth-finding duties to a decentralized network of "reporters," thereby solving the Oracle Problem by establishing a human link with the code. In their vision, a new kind of prediction market could then run on these contracts, which would dispense payouts automatically, without need for a middleman (the algorithmic equivalent of bookies). Thus stimulated, Buterin drafted a blueprint for "Schellingcoin," which would largely do the same.
Motivated by divisions within the bitcoin community, which Buterin says was then "spiralling into civil war" over technical differences, he published the paper, hoping it might both resolve the Oracle Problem and foster a new kind of "on-chain," betting-based governance model that could be adopted by the burgeoning Ethereum network. Such a system, he speculated, would encourage his users to put aside their differences and put their money where their mouths were.
The blueprints for Schellingcoin and Truthcoin found their way to Peterson and Krug, as well as to Joe Costello, an angel investor supporting Peterson's startup Dyffy. Costello, bored on a holiday in the Maldives, read and re-read Truthcoin so many times that he became "obsessed" with the idea of building an advanced version of it; he figured, in time, that Augur would support third-party apps he could profit from. Peterson and Krug were willing to take on the project, which Costello helped kick off with a seed fund of "around $1 million." (Though Peterson puts it at half a million.) Buterin, as well as "bitcoin billionaire" Jeremy Gardner, would invest too.
Pennies from Heaven
Thus funded, Krug and Peterson, as well as two advisors, wrote a whitepaper detailing how the protocol would run. They named the project Augur, after the Roman seers, or "augurs," who predicted the future by observing the flight patterns of birds. The logo was a pyramid, with three points converging upon an all-seeing eye.
To start, they built an alpha version on the bitcoin network. Buterin, however, urged them to change tack and build it on Ethereum, which would be easier; so they did, and it was. Within months, they had a working prototype on the Ethereum "testnet" (the Jornada del Muerto desert of decentralized software) that they could use to sell the project to public investors.
For a 45-day period between August and September 2015, they sold off 11 million "REP" tokens at 60 cents apiece, saving 20 percent for themselves. The idea was that these tokens would provide minor financial incentives for their holders.
These holders, in their role as "reporters," would then collectively act as Augur's "Oracle" by voting on the outcome of events in exchange for more REP, if they were truthful. If they voted out of line, they would incur a penalty and lose REP. REP holders would also be entitled to 50 percent of all Augur's trading fees, with the other half going to market makers. (It wasn't until bitcoin's vertiginous rise in 2017 that they realised the REP could run up in price and, conceivably, generate supplementary incomes for its holders.)
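In rough terms (this is a toy Python sketch of the incentive scheme as described here, not Augur's actual contract logic; all names and numbers are illustrative), a reporting round works something like this:

def settle_reports(reporters, consensus_outcome, penalty_rate=0.2, reward_pool=100.0):
    # Toy model: reporters whose vote matches the consensus split a REP reward;
    # reporters who voted out of line forfeit a slice of their stake.
    truthful = [r for r in reporters if r["vote"] == consensus_outcome]
    for r in reporters:
        if r["vote"] == consensus_outcome and truthful:
            r["rep"] += reward_pool / len(truthful)
        else:
            r["rep"] *= (1 - penalty_rate)
    return reporters

reporters = [
    {"name": "alice", "rep": 1000.0, "vote": "France"},
    {"name": "bob",   "rep": 500.0,  "vote": "France"},
    {"name": "carol", "rep": 800.0,  "vote": "Belgium"},
]
settle_reports(reporters, consensus_outcome="France")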
The auction went astonishingly well. That same month, the Chinese stock market had tanked, and traditional IPOs were struggling to raise money. Yet Augur, on the strength of 19 pages' worth of relatively untested ideas, managed to crowdsource $5.3 million without the support of venture capitalists, banks, or any kind of institutional middleman.
Krug remembers being "pleasantly surprised."
Pleasantly surprised?
"I was pleasantly shocked," he says.
Peterson plugging Augur at Mountain View, California's CryptoEconomicon, 2014.
Buterin, meanwhile, was pleasantly nothing.
"Hmm," he reminisces.
Did he at least feel proud? His underlying algorithm churns, searching for the correct variant. "Hmm," he concludes.
Bingo.
The Price of Immortality
That money, along with the cut of the REP tokens the team had reserved for themselves, would prop up the Forecast Foundation, a not-for-profit.
The Foundation would write the team's checks, support Augur's development, and, in Costello's words, generally "keep it alive and well." Yet at the same time, it would be functionally powerless.
That's the rub of the decentralized web. Companies, necessarily, must relinquish control over their products, and willingly withdraw themselves as middlemen. Indeed, the Forecast Foundation had, and has, no central power to either shut Augur down or even forcibly upgrade it. The updates to Augur's interface, like those on the Bitcoin network, would only be optional downloads for its users. What's more, to protect the Foundation from regulation and culpability in the event of Augur's misuse (see "assassination markets"), the Foundation would take no profit from Augur's markets.
Yet this swings both ways. With no chance of the Foundation generating revenue from the Augur platform itself, these funds, plus the initial seed investments, would have to carry the project through to the end. If those funds were to diminish, Peterson says, the Foundation would be unable to fund its employees and would have to outsource to "volunteer developers."
Still, the Augur protocol itself would survive any collapse of the Foundation, albeit in an incomplete form. "If the Forecast Foundation disappears," Buterin says, "then there'll be no more future updates." It's like if BMW collapsed, but somehow kept rolling out half-built cars.
Costello, however, still thinks it's possible to turn a buck: Augur could provide valuable speculative information to "any marketplace where people are trying to make a prediction," he says, citing sports and finance as two lucrative areas to mine.
Third-party apps that scrape and resell reliable predictions to gamblers, and people trading equities, he says, could generate handsome returns; prediction markets, drawing as they do on vast reserves of crowd knowledge, have an uncanny tendency to hit the nail on the head when guessing the future.
The Long Code To Prediction
It would take three years for the core development team, which consisted of Krug, Peterson, and two others, Chris Calderon and Scott Leonard, to get the final product off the ground.
Most of the team, including Krug, lived and worked in what they called the Bitcoin Basement, a San Francisco crypto hotspot where their advisor, Gardner, also lived. The Basement, Peterson recalls, was "gross." Three bedrooms (and one toilet) served six guys, none of whom cleaned. "At least not regularly," says Peterson. He lived in Oregon at the time and, when visiting, would politely refuse board and sleep in his car. They kept the Crypto Castle, their next (above-ground) home, in "better order," he says. But not much better.
The team programmed. They pushed out a beta in 2016, but it was far from complete. In the summer of 2017, they found a vulnerability that hackers could exploit to lock users' money in the smart contracts Augur relied on to dispense payouts automatically. "Anyone who had money in one of those smart contracts would have lost it forever," says Krug. The team had to transfer the code from Serpent to Solidity, a programming language that didn't contain the vulnerability. It delayed launch by another year. Investors were growing restless. Why was it taking so long?
But slowly, Augur came to life.
Growing Pains
Pizza before launchβwhich would come much later in the evening. From left to right: Tom Kysar, Joey Krug, Alex Chapman (developer), Jack Peterson, Paul Gebheim, and Scott Bigelow.
Augur's launch was widely covered. Thousands of users flocked to the network, numbers not seen since CryptoKitties crashed the Ethereum network a year before. Most of the $1.5 million traded was wagered on World Cup-related markets, and other markets proliferated at breakneck speed. But the fireworks soon proved a damp squib.
Detractors lined up to blast the software's clunky, unintuitive interface, the proliferation of dud markets, the high trading fees, and the dismally low value of the REP token. The cryptocurrency, which traded at $32 and was an enticement to Augur's reporters, has been hovering consistently around $12.
These are all valid points, says Krug. "We said it was going to be expensive, slow to use, and have a terrible UX," he explains. Peterson adds: "People had unrealistic expectations for what that first iteration would look like." Augur is a 1.0 product. Augur 2.0, expected to drop soon, will address technical issues, largely drawn from a Reddit "wishlist." (One upgrade will integrate Augur with DAI, a dollar-backed stablecoin that will stop traders' funds from dropping in value.)
There are other, more fundamental problems. Paul Sztorc, the Yale statistician who authored the seminal Truthcoin paper that so inspired Buterin, had ripped into the project in a massive takedown he published at the end of 2015. The paper lays out every conceivable objection. (The Foundation is also in the midst of a $152 million legal battle with Costello's former partner, the developer Matt Liston, whom Sztorc supports.)
Will Jennings, media director of PredictIt, a competitor, asserts that Augur never even fully solved the Oracle Problem. People are easily co-opted, he says, and are otherwise unreliable sources of truth. What if, for instance, fundamentalist religionists hijacked the network and collectively voted that "God is real," ensuring that payouts went only to the faithful? (Though the $9.83 on that market suggests this won't happen anytime soon.)
Krug concedes that Augur could indeed be prone to manipulation, and he readily admits the system isn't perfect. This is a young technology. People thought the first-generation iPhone was a "useless piece of tech," he says. Now look at it.
And anyway, he and Peterson have since parted ways with Augur.
Though Krug retains his role as "advisor," he's now working full time as a hedge fund manager at Pantera Capital. He doesn't even use his Augur email address anymore. Peterson, meanwhile, has returned to his old love, biophysics.
And why wouldn't they leave? They created a business whose explicit end state is that no one runs it. So they can jump ship, scot-free, leaving the rest of the development team to subsist on the diminishing winnings of that original, sepia-tinged token sale. And in a way, their departure is a final flourish, a last act of total decentralization: the creators, Augur's original middlemen, dutifully removing themselves.
And Augur's muse, Buterin? He frames the whole thing as a massive, cosmic gamble whose payout has yet to be delivered: "We're staking a claim that if we make blockchain apps more usable, people will use them," he says, with recursive simplicity.
So will the gamble pay off? Give it some time: it was a decade after Kitty Hawk before the first commercial flight took to the skies. Augur's small base of users aren't especially bullish on its value. But they're still placing bets.
33 Concepts Every JavaScript Developer Should Know
Introduction
This repository was created with the intention of helping developers master core concepts in JavaScript. It is not a requirement, but a guide for future studies. It is based on an article written by Stephen Curtis and you can read it here. Feel free to contribute.