Channel: Hacker News

Paul Allen has died


Microsoft Co-Founder Paul Allen died from complications of non-Hodgkin's lymphoma on Monday afternoon.

Allen's Vulcan Inc. announced that he died in Seattle at the age of 65.

Allen's sister, Jody, said he was "a remarkable individual on every level."

"While most knew Paul Allen as a technologist and philanthropist, for us he was a much-loved brother and uncle, and an exceptional friend. Paul's family and friends were blessed to experience his wit, warmth, his generosity and deep concern," she said in a statement. "For all the demands on his schedule, there was always time for family and friends. At this time of loss and grief for us – and so many others – we are profoundly grateful for the care and concern he demonstrated every day."

Allen ranked among the world's wealthiest individuals. As of Monday afternoon, he ranked 44th on Forbes' 2018 list of billionaires with an estimated net worth of more than $20 billion.

Through Vulcan, Allen's network of philanthropic efforts and organizations, the Microsoft co-founder supported research in artificial intelligence and new frontier technologies. The group also invested in Seattle's cultural institutions and the revitalization of parts of the city.

Allen owned two professional sports teams, the NFL's Seattle Seahawks and the NBA's Portland Trail Blazers. He was also an electric guitarist who occasionally jammed with celebrity musicians including Bono and Mick Jagger, and a huge music fan. He funded and designed the Experience Music Project in Seattle, devoted to the history of rock music and dedicated to his musical hero Jimi Hendrix. (It has since been re-christened the Museum of Pop Culture.) The building was designed by architect Frank Gehry to resemble a melted electric guitar.

Vulcan CEO Bill Hilf said, "All of us who had the honor of working with Paul feel inexpressible loss today."

"He possessed a remarkable intellect and a passion to solve some of the world's most difficult problems, with the conviction that creative thinking and new approaches could make profound and lasting impact," Hilf said in a statement.

Earlier this month, Allen revealed that he had started treatment for non-Hodgkin's lymphoma, the same type of cancer he was treated for in 2009. In 1983, Allen left the company he co-founded with Bill Gates after being diagnosed with Hodgkin's disease, which he overcame.

Bill Gates, who co-founded Microsoft with Allen, said that "personal computing would not have existed without him":

"I am heartbroken by the passing of one of my oldest and dearest friends, Paul Allen. From our early days together at Lakeside School, through our partnership in the creation of Microsoft, to some of our joint philanthropic projects over the years, Paul was a true partner and dear friend. Personal computing would not have existed without him.

But Paul wasn't content with starting one company. He channeled his intellect and compassion into a second act focused on improving people's lives and strengthening communities in Seattle and around the world. He was fond of saying, "If it has the potential to do good, then we should do it." That's the kind of person he was.

Paul loved life and those around him, and we all cherished him in return. He deserved much more time, but his contributions to the world of technology and philanthropy will live on for generations to come. I will miss him tremendously."

Current Microsoft CEO Satya Nadella said Allen made "indispensable" contributions to Microsoft and the technology industry. Nadella also said he learned a lot from Allen and will continue to be inspired by him.

"As co-founder of Microsoft, in his own quiet and persistent way, he created magical products, experiences and institutions, and in doing so, he changed the world," Nadella said in a statement.

Former Microsoft CEO Steve Ballmer called Allen a "truly wonderful, bright and inspiring person."

Steven Sinofsky, former president of Microsoft's Windows division, said Allen "did so much to shape lives with computing and his later work in science, community, and research."

Seahawks Coach Pete Carroll said he was deeply saddened by Allen's death.

NFL Commissioner Roger Goodell said Allen was "the driving force behind keeping the NFL in the Pacific Northwest." Goodell said he valued Allen's advice on a wide range of subjects and sent his condolences.

"His passion for the game, combined with his quiet determination, led to a model organization on and off the field. He worked tirelessly alongside our medical advisers to identify new ways to make the game safer and protect our players from unnecessary risk," Goodell said in a statement.

The Trail Blazers tweeted, "We miss you. We thank you. We love you."

Allen's death was met with an outpouring of condolences from tech leaders. Google CEO Sundar Pichai said with Allen's death, the world has "lost a great technology pioneer today."

Apple CEO Tim Cook called him a "pioneer" and a "force for good."

Salesforce CEO Marc Benioff said he was saddened by Allen's passing.

Amazon CEO Jeff Bezos praised his "relentless" push forward in technology.

— CNBC's Matt Rosoff, Ryan Ruggiero and Reuters contributed to this report.


Twilio to Acquire Sendgrid


Accelerates Twilio’s Mission to Fuel the Future of Communications

Brings Together the Two Leading Communication Platforms for Developers

The Combination to Create One, Best-in-Class Cloud Communications Platform for Companies to Communicate with Customers Across Every Channel

Twilio & SendGrid Together Serve Millions of Developers, Have 100,000+ Customers, and Have a Greater than $700 Million Annualized Revenue Run Rate*

Twilio (NYSE:TWLO) and SendGrid today announced that they have entered into a definitive agreement for Twilio to acquire SendGrid in an all-stock transaction valued at approximately $2 billion. At the exchange ratio of 0.485 shares of Twilio Class A common stock per share of SendGrid common stock, this price equates to approximately $36.92 per share based on today’s closing prices. The transaction is expected to close in the first half of 2019.

Adding the leading email API platform to the leading cloud communications platform can drive tremendous value to the combined customer bases. The resulting company would offer developers a single, best-in-class platform to manage all of their important communication channels -- voice, messaging, video, and now email as well. Together, the companies currently drive more than half a trillion annualized customer interactions*, a figure that is growing rapidly.

“Increasingly, our customers are asking us to solve all of their strategic communications challenges - regardless of channel. Email is a vital communications channel for companies around the world, and so it was important to us to include this capability in our platform," said Jeff Lawson, Twilio's co-founder and chief executive officer. "The two companies share the same vision, the same model, and the same values. We believe this is a once-in-a-lifetime opportunity to bring together the two leading developer-focused communications platforms to create the unquestioned platform of choice for all companies looking to transform their customer engagement.”

“This is a tremendous day for all SendGrid customers, employees and shareholders,” said Sameer Dholakia, SendGrid’s chief executive officer. “Our two companies have always shared a common goal - to create powerful communications experiences for businesses by enabling developers to easily embed communications into the software they are building. Our mission is to help our customers deliver communications that drive engagement and growth, and this combination will allow us to accelerate that mission for our customers.”

Details Regarding the Proposed SendGrid Acquisition
The boards of directors of Twilio and SendGrid have each approved the transaction.

Under the terms of the transaction, Twilio Merger Subsidiary, Inc., a Delaware corporation and a wholly-owned subsidiary of Twilio, will be merged with and into SendGrid, with SendGrid surviving as a wholly-owned subsidiary of Twilio. At closing, each outstanding share of SendGrid common stock will be converted into the right to receive 0.485 shares of Twilio Class A common stock, which represents a per share price for SendGrid common stock of $36.92 based on the closing price of Twilio Class A common stock on October 15, 2018. The exchange ratio represents a 14% premium over the average exchange ratio for the ten calendar days ending October 15, 2018.
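As a sanity check, the stated per-share value is just the exchange ratio multiplied by Twilio's closing price; the arithmetic below (an illustrative sketch, not part of the release) backs out the Twilio close implied by the stated figures.

```python
# Illustrative deal arithmetic using only figures stated in the release.
exchange_ratio = 0.485      # Twilio Class A shares per SendGrid share
sendgrid_value = 36.92      # stated value per SendGrid share, USD

# Twilio closing price implied by the stated per-share value:
implied_twilio_close = sendgrid_value / exchange_ratio
print(round(implied_twilio_close, 2))  # -> 76.12
```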

The transaction is expected to close in the first half of 2019, subject to the satisfaction of customary closing conditions, including shareholder approvals by each of SendGrid’s and Twilio’s respective stockholders and the expiration of the applicable waiting period under the Hart-Scott-Rodino Antitrust Improvements Act. Certain stockholders of SendGrid owning approximately 6% of the outstanding SendGrid shares have entered into voting agreements and certain stockholders of Twilio who control approximately 33% of total Twilio voting power have entered into voting agreements, or proxies, pursuant to which they have agreed, among other things, and subject to the terms and conditions of the agreements, to vote in favor of the SendGrid acquisition and the issuance of Twilio shares in connection with the SendGrid acquisition, respectively.

Goldman Sachs & Co. LLC is serving as exclusive financial advisor to Twilio and Goodwin Procter LLP is acting as legal counsel to Twilio. Morgan Stanley & Co. LLC is serving as exclusive financial advisor to SendGrid and Cooley LLP and Skadden, Arps, Slate, Meagher & Flom LLP are acting as legal counsel to SendGrid.

Q3 2018 Results and Guidance
Both companies will report their respective financial results for the three months ended September 30, 2018 on November 6, 2018. However, both Twilio and SendGrid are announcing that they have exceeded the guidance provided on Aug. 6th and July 31st, respectively, for their third fiscal quarters.

Guidance for the combined company will be provided after the proposed transaction has closed.

Conference Call Information
Twilio will host a conference call today, October 15, 2018, to discuss the SendGrid acquisition, at 2:30 p.m. Pacific Time, 5:30 p.m. Eastern Time. A live webcast of the conference call, as well as a replay of the call, will be available at https://investors.Twilio.com. The conference call can also be accessed by dialing (844) 453-4207, or +1 (647) 253-8638 (outside the U.S. and Canada). The conference ID is 6976357. Following the completion of the call through 11:59 p.m. Eastern Time on Oct. 22, 2018, a replay will be available by dialing (800) 585-8367 or +1 (416) 621-4642 (outside the U.S. and Canada) and entering passcode 6976357. Twilio has used, and intends to continue to use, its investor relations website as a means of disclosing material non-public information and for complying with its disclosure obligations under Regulation FD.

About SendGrid
SendGrid is a leading digital communications platform enabling businesses to engage with their customers via email reliably, effectively and at scale. A leader in email deliverability, SendGrid processes more than 45 billion emails each month for internet and mobile-based customers as well as more traditional enterprises.

Additional Information and Where To Find It
In connection with the proposed transaction between Twilio and SendGrid, Twilio will file a Registration Statement on Form S-4 and joint proxy statement/prospectus forming a part thereof. BEFORE MAKING ANY VOTING DECISION, TWILIO’S AND SENDGRID’S RESPECTIVE INVESTORS AND STOCKHOLDERS ARE URGED TO READ THE REGISTRATION STATEMENT AND JOINT PROXY STATEMENT/PROSPECTUS (INCLUDING ANY AMENDMENTS OR SUPPLEMENTS THERETO) REGARDING THE PROPOSED TRANSACTION WHEN THEY BECOME AVAILABLE BECAUSE THEY WILL CONTAIN IMPORTANT INFORMATION. Investors and security holders will be able to obtain free copies of the Registration Statement, the joint proxy statement/prospectus (when available) and other relevant documents filed or that will be filed by Twilio or SendGrid with the SEC through the website maintained by the SEC at http://www.sec.gov. They may also be obtained for free by contacting Twilio Investor Relations by email at ir@twilio.com or by phone at 415-801-3799 or by contacting SendGrid Investor Relations by email at ir@sendgrid.com or by phone at 720-588-4496, or on Twilio’s and SendGrid’s websites at www.investors.twilio.com and www.investors.sendgrid.com, respectively.

No Offer or Solicitation
This communication does not constitute an offer to sell or the solicitation of an offer to buy any securities nor a solicitation of any vote or approval with respect to the proposed transaction or otherwise. No offering of securities shall be made except by means of a prospectus meeting the requirements of Section 10 of the U.S. Securities Act of 1933, as amended, and otherwise in accordance with applicable law.

Participants in the Solicitation
Each of Twilio and SendGrid and their respective directors and executive officers may be deemed to be participants in the solicitation of proxies from their respective shareholders in connection with the proposed transaction. Information regarding the persons who may, under the rules of the SEC, be deemed participants in the solicitation of Twilio and SendGrid shareholders in connection with the proposed transaction and a description of their direct and indirect interests, by security holdings or otherwise will be set forth in the Registration Statement and joint proxy statement/prospectus when filed with the SEC. Information regarding Twilio’s executive officers and directors is included in Twilio’s Proxy Statement for its 2018 Annual Meeting of Stockholders, filed with the SEC on April 27, 2018 and information regarding SendGrid’s executive officers and directors is included in SendGrid’s Proxy Statement for its 2018 Annual Meeting of Stockholders, filed with the SEC on April 20, 2018.
Additional information regarding the interests of the participants in the solicitation of proxies in connection with the proposed transaction will be included in the joint proxy statement/prospectus and other relevant materials Twilio and SendGrid intend to file with the SEC.

Use of Forward-Looking Statements
This communication contains “forward-looking statements” within the meaning of federal securities laws. Words such as “believes”, “anticipates”, “estimates”, “expects”, “intends”, “aims”, “potential”, “will”, “would”, “could”, “considered”, “likely” and words and terms of similar substance, used in connection with any discussion of future plans, actions or events, identify forward-looking statements. All statements, other than historical facts, including statements regarding the expected timing of the closing of the proposed transaction and the expected benefits of the proposed transaction, are forward-looking statements. These statements are based on management’s current expectations, assumptions, estimates and beliefs. While Twilio believes these expectations, assumptions, estimates and beliefs are reasonable, such forward-looking statements are only predictions, and are subject to a number of risks and uncertainties that could cause actual results to differ materially from those described in the forward-looking statements.
The following factors, among others, could cause actual results to differ materially from those described in the forward-looking statements: (i) failure of Twilio or SendGrid to obtain stockholder approval as required for the proposed transaction; (ii) failure to obtain governmental and regulatory approvals required for the closing of the proposed transaction, or delays in governmental and regulatory approvals that may delay the transaction or result in the imposition of conditions that could reduce the anticipated benefits from the proposed transaction or cause the parties to abandon the proposed transaction; (iii) failure to satisfy the conditions to the closing of the proposed transaction; (iv) unexpected costs, liabilities or delays in connection with or with respect to the proposed transaction; (v) the effect of the announcement of the proposed transaction on the ability of SendGrid or Twilio to retain and hire key personnel and maintain relationships with customers, suppliers and others with whom SendGrid or Twilio does business, or on SendGrid’s or Twilio’s operating results and business generally; (vi) the outcome of any legal proceeding related to the proposed transaction; (vii) the challenges and costs of integrating, restructuring and achieving anticipated synergies and benefits of the proposed transaction and the risk that the anticipated benefits of the proposed transaction may not be fully realized or take longer to realize than expected; (viii) competitive pressures in the markets in which Twilio and SendGrid operate; (ix) the occurrence of any event, change or other circumstances that could give rise to the termination of the merger agreement; and (x) other risks to the consummation of the proposed transaction, including the risk that the proposed transaction will not be consummated within the expected time period or at all.
Additional factors that may affect the future results of Twilio and SendGrid are set forth in their respective filings with the SEC, including each of Twilio’s and SendGrid’s most recently filed Annual Report on Form 10-K, subsequent Quarterly Reports on Form 10-Q, Current Reports on Form 8-K and other filings with the SEC, which are available on the SEC’s website at www.sec.gov. See in particular Part II, Item 1A of Twilio’s Quarterly Report on Form 10-Q for the quarter ended June 30, 2018 under the heading β€œRisk Factors” and Part II, Item 1A of SendGrid’s Quarterly Report on Form 10-Q for the quarter ended June 30, 2018 under the heading β€œRisk Factors.” The risks and uncertainties described above and in Twilio’s most recent Quarterly Report on Form 10-Q and SendGrid’s most recent Quarterly Report on Form 10-Q are not exclusive and further information concerning Twilio and SendGrid and their respective businesses, including factors that potentially could materially affect their respective businesses, financial condition or operating results, may emerge from time to time. Readers are urged to consider these factors carefully in evaluating these forward-looking statements, and not to place undue reliance on any forward-looking statements. Readers should also carefully review the risk factors described in other documents that Twilio and SendGrid file from time to time with the SEC. The forward-looking statements in these materials speak only as of the date of these materials. Except as required by law, Twilio and SendGrid assume no obligation to update or revise these forward-looking statements for any reason, even if new information becomes available in the future.

* Annualized data for the quarterly period ended June 30, 2018.

Source: Twilio Inc.


Search for Alien Life Should Be a Fundamental Part of NASA, New Report Urges


For decades many researchers have tended to view astrobiology as the underdog of space science. The field—which focuses on the investigation of life beyond Earth—has often been criticized as more philosophical than scientific because it lacks tangible samples to study.

Now that is all changing. Whereas astronomers once knew of no planets outside our solar system, today they have thousands of examples. And although organisms were previously thought to need the relatively mild surface conditions of our world to survive, new findings about life’s ability to persist in the face of extreme darkness, heat, salinity and cold have expanded researchers’ acceptance that it might be found anywhere from Martian deserts to the ice-covered oceans of Saturn’s moon Enceladus.

Highlighting astrobiology’s increasing maturity and clout, a new Congressionally mandated report from the National Academy of Sciences (NAS) urges NASA to make the search for life on other worlds an integral, central part of its exploration efforts. The field is now well positioned to be a major motivator for the agency’s future portfolio of missions, which could one day let humanity know whether or not we are alone in the universe. “The opportunity to really address this question is at a critically important juncture,” says Barbara Sherwood Lollar, a geologist at the University of Toronto and chair of the committee that wrote the report.

The astronomy and planetary science communities are currently gearing up to each perform their decadal surveys—once-every-10-year efforts that identify a field’s most significant open questions—and present a wish list of projects to help answer them. Congress and government agencies such as NASA look to the decadal surveys to plan research strategies; the decadals, in turn, look to documents such as the new NAS report for authoritative recommendations on which to base their findings. Astrobiology’s reception of such full-throated encouragement now may boost its odds of becoming a decadal priority.

Another NAS study released last month could be considered a second vote in astrobiology’s favor. This “Exoplanet Science Strategy” report recommended NASA lead the effort on a new space telescope that could directly gather light from Earth-like planets around other stars. Two concepts, the Large Ultraviolet/Optical/Infrared (LUVOIR) telescope and the Habitable Exoplanet Observatory (HabEx), are current contenders for a multibillion-dollar NASA flagship mission that would fly as early as the 2030s. Either observatory could use a coronagraph or a “starshade”—devices that selectively block starlight but allow planetary light through—to search for signs of habitability and of life in distant atmospheres. But either would need massive and sustained support from outside astrobiology to succeed in the decadal process and beyond.

There have been previous efforts to back large, astrobiologically focused missions such as NASA’s Terrestrial Planet Finder concepts—ambitious space telescope proposals in the mid-2000s that would have spotted Earth-size exoplanets and characterized their atmospheres, had they ever made it off the drawing board. Instead, they suffered ignominious cancellations that taught astrobiologists several hard lessons. There was still too little information at the time about the number of planets around other stars, says Caleb Scharf, an astrobiologist at Columbia University, meaning advocates could not properly estimate such a mission’s odds of success. His community had yet to realize that in order to do large projects it needed to band together and show how its goals aligned with those of astronomers less professionally interested in finding alien life, he adds. “If we want big toys,” he says, “we need to play better with others.”

There has also been tension in the past between the astrobiological goals of solar system exploration and the more geophysics-steeped goals that traditionally underpin such efforts, says Jonathan Lunine, a planetary scientist at Cornell University. Missions to other planets or moons have limited capacity for instruments, and those specialized for different tasks often end up in ferocious competitions for a slot onboard. Historically, because the search for life was so open-ended and difficult to define, associated instrumentation lost out to hardware with clearer, more constrained geophysical research priorities. Now, Lunine says, a growing understanding of all the ways biological and geologic evolution are interlinked is helping to show that such objectives do not have to be at odds. “I hope that astrobiology will be embedded as a part of the overall scientific exploration of the solar system,” he says. “Not as an add-on, but as one of the essential disciplines.”

Above and beyond the recent NAS reports, NASA is arguably already demonstrating more interest in looking for life in our cosmic backyard than it has for decades. This year the agency released a request for experiments that could be carried to another world in our solar system to directly hunt for evidence of living organisms—the first such solicitation since the 1976 Viking missions that looked for life on Mars. “The Ladder of Life Detection,” a paper written by NASA scientists and published in Astrobiology in June, outlined ways to clearly determine if a sample contains extraterrestrial creatures—a goal mentioned in the NAS report. The document also suggests NASA partner with other agencies and organizations working on astrobiological projects, as the space agency did last month when it hosted a workshop with the nonprofit SETI Institute on the search for “techno-signatures,” potential indicators of intelligent aliens. “I think astrobiology has gone from being something that seemed fringy or distracting to something that seems to be embraced at NASA as a major touchstone for why we’re doing space exploration and why the public cares,” says Ariel Anbar, a geochemist at Arizona State University in Tempe.

All of this suggests that astrobiology’s growing influence is helping bring what were once considered outlandish ideas into reality. Anbar recalls attending a conference in the early 1990s, when then–NASA Administrator Dan Goldin displayed an Apollo-era image of Earth from space and suggested the agency try to do the same thing for a planet around another star.

“That was pretty out there 25 years ago,” he says. “Now it’s not out there at all.”


The Wonder from Down Under: The Fairlight CMI Digital Sampling Synthesiser


After Sydney native Peter Vogel graduated from high school in 1975, his classmate Kim Ryrie approached him with the idea of creating a microprocessor-driven electronic music synthesiser. Ryrie was frustrated with his attempts at building an analogue synth, feeling that the sounds it could produce were extremely limited.

Vogel agreed, and the pair spent the next six months working on potential designs in the basement of the house they rented as Fairlight’s headquarters. However, it wasn’t until they met Motorola consultant Tony Furse that they made a breakthrough.

In 1972 Furse had worked with the Canberra School of Electronic Music to build a digital synthesiser using two 8-bit Motorola 6800 microprocessors, called the Qasar. It had a monitor for displaying simple graphical representations of music, and a light pen for manipulating them.

However, Furse’s synthesiser lacked the ability to create harmonic partials (complementary frequencies created in addition to the “root” frequency of a musical note in acoustic instruments, for example when the string of a piano or guitar is struck), and the sounds it emitted lacked fullness and depth. Ryrie and Vogel thought they could solve the problem, and licensed the Qasar from Furse. They worked on the problem for a year without really getting anywhere.

Late one night in 1978, Vogel proposed taking a sample (a digital recording) of an acoustic instrument and extracting its harmonics using Fourier analysis. They could then recreate the harmonics using oscillators. But after sampling a piano, Vogel decided to see what would happen if he simply routed the sample back through the Qasar’s oscillators verbatim. It sounded like a piano! And by varying the playback speed, he could control the pitch.
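Both ideas can be sketched in a few lines of modern Python with NumPy (an illustration of the technique, not the Qasar's actual code): Fourier analysis pulls the harmonic partials out of a sampled note, and replaying the raw samples at a different speed shifts the pitch.

```python
import numpy as np

rate = 24_000                        # samples per second
t = np.arange(rate) / rate           # one second of audio

# Stand-in for a sampled note: a 220 Hz fundamental plus two partials.
note = (1.00 * np.sin(2 * np.pi * 220 * t)
        + 0.50 * np.sin(2 * np.pi * 440 * t)
        + 0.25 * np.sin(2 * np.pi * 660 * t))

# Fourier analysis: with a 1-second window, FFT bin index equals frequency
# in Hz, so the three loudest bins are the three partials.
spectrum = np.abs(np.fft.rfft(note))
partials = np.argsort(spectrum)[-3:]
print(sorted(partials.tolist()))     # -> [220, 440, 660]

# Vogel's shortcut: skip resynthesis and just replay the sample faster.
# Reading every 2nd sample doubles the playback speed -- one octave up.
octave_up = note[::2]
```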

It wasn’t perfect, but it was better than anything else they had come up with, and off they went.

They continued to work on the idea of digital sampling while selling computers to offices in order to keep the lights on. They added the ability to shape the digitised sounds with a programmable ADSR (Attack, Decay, Sustain, Release) envelope, allowing for some variation.
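As a rough illustration of what such an envelope does (a generic piecewise-linear ADSR in Python/NumPy; the CMI's actual envelope hardware differed in its details, and the segment times here are made up), each stage scales the amplitude of the raw sample over time:

```python
import numpy as np

def adsr(n, rate, attack=0.01, decay=0.1, sustain=0.6, release=0.2):
    """Piecewise-linear ADSR envelope of n samples.

    attack/decay/release are in seconds; sustain is a 0-1 level.
    """
    a, d, r = (int(x * rate) for x in (attack, decay, release))
    s = max(n - a - d - r, 0)                        # remainder sustains
    return np.concatenate([
        np.linspace(0, 1, a, endpoint=False),        # attack: 0 -> peak
        np.linspace(1, sustain, d, endpoint=False),  # decay: peak -> sustain
        np.full(s, sustain),                         # sustain: hold level
        np.linspace(sustain, 0, r),                  # release: fade out
    ])[:n]

rate = 24_000
raw = np.random.uniform(-1, 1, rate)   # stand-in for a 1-second digitised sound
shaped = raw * adsr(len(raw), rate)    # envelope shapes the raw sample
```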

They also added a QWERTY keyboard to go with the monitor and light pen (a light-sensing “pen” that can tell its location on the surface of a CRT by synchronising with the video signal), and an 8-inch floppy diskette for storing sample data, which was loaded into the CMI’s 208KB of memory. It really wasn’t much room – at 24 kilohertz (a CD-quality recording is typically 44.1 kHz), a sample could only last for one-half to one second – not very long.
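The duration ceiling follows from simple arithmetic: seconds = buffer bytes / (sample rate * bytes per sample). The sketch below assumes 8-bit (one-byte) samples and a hypothetical 16 KB per-voice buffer; the per-voice figure is an assumption for illustration, not a number from the text.

```python
def sample_seconds(buffer_bytes, rate_hz, bytes_per_sample=1):
    """How many seconds of audio fit in a buffer at a given sample rate."""
    return buffer_bytes / (rate_hz * bytes_per_sample)

# Hypothetical 16 KB per-voice buffer, 8-bit samples, at the CMI's 24 kHz:
print(round(sample_seconds(16 * 1024, 24_000), 2))   # -> 0.68

# Halving the sample rate doubles the duration -- at the cost of fidelity:
print(round(sample_seconds(16 * 1024, 12_000), 2))   # -> 1.37
```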

Longer sounds needed to be recorded at even lower sample rates, but Vogel credited this low fidelity (think landline telephone) with giving the CMI a certain sound. Despite its deficiencies, Australian distributors and consumers were interested – so much so that the Musicians’ Union warned that such devices posed a “lethal threat” to its members, afraid that human orchestra players could be replaced!

In the summer of 1979, Vogel visited the home of English singer-songwriter Peter Gabriel, who was in the process of recording his third solo album. Vogel demonstrated the CMI and Gabriel was instantly engrossed with it, using it over the following week to “play” sounds such as a glass and bricks on songs in the album. He was so happy with it that he volunteered to start a UK distributor for the CMI, which went on to sell it to other British music artists such as Kate Bush, Alan Parsons and Thomas Dolby.

The Americans soon caught on as well, with Stevie Wonder, Herbie Hancock and Todd Rundgren, amongst many others, all taking a shine to the CMI. But they weren’t interested in using it to reproduce real instruments – rather, it was the surreal quality of its sounds combined with the built-in sequencer that made it an attractive addition to their musical toolbox.

Over the following decade, three generations of CMI, with upgrades such as MIDI support, higher sampling rates and more memory, would contribute heavily to the sound of 1980s popular music, spawning new musical styles such as techno, hip hop and drum and bass.

The Page R sequencer in the Fairlight CMI Series II inspired a great many music software developers to create versions for 1980s-era home computers, including the Atari 400/800, the Apple II and the Commodore 64.

While these 8-bit machines were limited to simple waveform-based sound synthesis, and couldn’t (generally) play back digital samples the way the CMI could, note-based sequencers provided a simple way both to learn music notation and to create 3-voice arrangements of original and popular tunes (and Christmas carols!), with the noise channels in most sound chips providing primitive drums.

Considering that the contemporary equivalent was the repetitive (and cheesy) accompaniment of the common household electronic organ, this was quite an improvement!

Atari and Commodore both released note-based music software for their respective computers; Commodore’s included a musical-keyboard overlay that fitted over the alphanumeric keyboard of its Commodore 64. A number of third-party software programs were also produced, and 8-year-old music composers flourished.

Bank Street Music Writer was a typical music application of the time. Written by Glen Clancy and published by Mindscape, the Atari version was released in 1985. Like competitors such as Music Construction Set, it let users place graphical representations of notes onto a musical staff, making the creation of computer-generated music feel much more traditional than step-entry, piano-roll-style methods.

This was only practical due to the visual nature of a computer monitor, which wouldn’t itself have been possible without the cathode-ray tube, the work of A.A. Campbell Swinton, Philo Farnsworth and many others. This sort of interactive music editing highlights the varied artistic software applications the CRT made possible, not just in visual arenas such as video, photography and digital art, but also in literature and music, where digital composition is a standard practice today.

8-bit music notation software led to the rise of the first “bedroom musicians”: amateurs who could now compose coherent, sequenced tunes without the need for expensive equipment. Many of them would go on to write music for video games, or became professional musicians later in life – much like many of today’s bedroom EDM producers, who use descendants of that software.

The higher video resolutions available in 16-bit computers such as the Atari ST (640×400) and the Apple Macintosh (512×342) led to an improvement in the graphical quality of music software. The crispness of their monochrome CRT displays made musical notes more readable, and more of them fit legibly on screen at one time than on their lower-resolution 8-bit predecessors.

The Atari ST also featured a built-in MIDI interface, which allowed the connection of external keyboards (for both input and output) and digitally sampled sound modules such as the Roland MT-32, which set the standard for MIDI instrument assignments and allowed for greater portability of MIDI files between different electronic musical instruments and devices.
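What a MIDI port like the ST's actually carries is a stream of small byte messages; most channel messages are three bytes, a status byte (message type plus channel) followed by two data bytes. A sketch of building a Note On/Note Off pair (the helper names are illustrative):

```python
# Sketch: raw MIDI 1.0 channel messages. Status byte = message type
# in the high nibble, channel (0-15) in the low nibble; data bytes
# are 7-bit values (0-127).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Note On: status 0x90 | channel, then note number and velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Note Off: status 0x80 | channel, with velocity 0."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

msg = note_on(0, 60, 100)  # middle C on channel 1
print(msg.hex())           # "903c64"
```

Standardized instrument assignments (which patch number means "piano", which channel carries drums) are what made such byte streams portable between devices, which is why the MT-32's sound set mattered.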

As they had with the Fairlight CMI, professional musicians began to take notice as consumer-grade computers gained complex music-notation and sequencing software. Paired with MIDI instruments capable of outputting dozens of voices simultaneously, these consumer computers began to overtake dedicated musical computer systems such as the Fairlight CMI, with the Atari ST (commonly paired with Steinberg’s Cubase music sequencing application) becoming fairly standard in music studios around the world for much of the 1990s.

These days, most music is sequenced using an off-the-shelf MacBook Pro!


GitHub's game jam returns next month


Game Off 2018

Game Off is our annual game jam, where participants spend one month creating games based on a theme that we provide. Everyone around the world is welcome to participate, from newbies to professional game developers, and your game can be as simple or complex as you want. It’s a great excuse to learn a new technology, collaborate on something over the weekends with friends, or create a game for the first time!

Last year, the theme was “throwback” and over 200 games were created: everything from old-school LCD games and retro flight simulators to squirrel-infested platformers.

We’re announcing this year’s theme on Thursday, November 1, at 13:37 PDT. From that point, you have 30 days to create a game loosely based on (or inspired by) the theme.

Join the jam on itch.io now

Using open source game engines, libraries, and tools is encouraged, but you’re free to use any technology you want. Have you been wanting an excuse to experiment with something new? Now’s your chance to take on a new engine you’d like to try.

As always, we’ll highlight some of our favorite games on the GitHub Blog, and the world will get to enjoy (and maybe even contribute to or learn from) your creations.

Help: I’ve never created a game before!

With so many free, open source game engines and tutorials available online, there’s never been an easier (or more exciting!) time to try out game development.

Are you…

  • Into JavaScript? You might be interested in Phaser.
  • Comfortable with C++ or C#? Godot might be a good match for you.
  • Proficient with Python? Check out Pygame.
  • Dangerous with Java? Take a look at libGDX.
  • In love with Lua (and/or retrogames)? Drop everything and check out LIKO-12.

Do you really like retro games? Maybe you can…

Whatever genre of game you’re interested in and language you want to use, you’re bound to find a GitHub project that will help you take your game from idea to launch in only a month.

Have a repository or tutorial you’d like to share? Tag us with #GitHubGameOff.

Help: I’ve never used version control, Git, or GitHub before!

Don’t worry, we have tons of resources for you. From how to use Git, to all things GitHub, you’ll “git” it in no time.

  • GitHub Help offers tons of information about GitHub, from basics like creating an account, to advanced topics, such as resolving merge conflicts
  • Git documentation has everything you need to know to start using Git (including version control)

Did you know? You don’t have to use Git on the command line. You can use GitHub Desktop (our client for macOS and Windows), or bring Git and GitHub to your favorite editors.

GLHF! We can’t wait to see what you build! 💙❤️

Octocat pixel art animation


Animals that are currently monitored using facial recognition technology

See https://github.com/prebid/Prebid.js/issues/1087 for more details."),y.logInfo("Invoking pbjs.addBidResponse",arguments),h.addBidResponse(e,t)},m.loadScript=function(e,t,r){y.logInfo("Invoking pbjs.loadScript",arguments),(0,f.loadScript)(e,t,r)},m.enableAnalytics=function(e){e&&!y.isEmpty(e)?(y.logInfo("Invoking pbjs.enableAnalytics for: ",e),_.enableAnalytics(e)):y.logError("pbjs.enableAnalytics should be called with option {}")},m.aliasBidder=function(e,t){y.logInfo("Invoking pbjs.aliasBidder",arguments),e&&t?_.aliasBidAdapter(e,t):y.logError("bidderCode and alias must be passed as arguments","pbjs.aliasBidder")},m.setPriceGranularity=function(e){y.logWarn("pbjs.setPriceGranularity will be removed in Prebid 1.0. Use pbjs.setConfig({ priceGranularity: }) instead."),y.logInfo("Invoking pbjs.setPriceGranularity",arguments),g.config.setConfig({priceGranularity:e})},m.enableSendAllBids=function(){g.config.setConfig({enableSendAllBids:!0})},m.getAllWinningBids=function(){return m._winningBids},m.buildMasterVideoTagFromAdserverTag=function(e,t){y.logWarn("pbjs.buildMasterVideoTagFromAdserverTag will be removed in Prebid 1.0. 
Include the dfpVideoSupport module in your build, and use the pbjs.adservers.dfp.buildVideoAdUrl function instead"),y.logInfo("Invoking pbjs.buildMasterVideoTagFromAdserverTag",arguments);var r=(0,u.parse)(e);if(0===m._bidsReceived.length)return e;if("dfp"===t.adserver.toLowerCase()){var n=E.dfpAdserver(t,r);return n.verifyAdserverTag()||y.logError("Invalid adserverTag, required google params are missing in query string"),n.appendQueryParams(),(0,u.format)(n.urlComponents)}y.logError("Only DFP adserver is supported")},m.setBidderSequence=_.setBidderSequence,m.getHighestCpmBids=function(e){return w.getWinningBids(e)},m.setS2SConfig=function(e){if(y.contains(Object.keys(e),"accountId"))if(y.contains(Object.keys(e),"bidders")){var t=i({enabled:!1,endpoint:b.S2S.DEFAULT_ENDPOINT,timeout:1e3,maxBids:1,adapter:b.S2S.ADAPTER,syncEndpoint:b.S2S.SYNC_ENDPOINT,cookieSet:!0,bidders:[]},e);_.setS2SConfig(t)}else y.logError("bidders missing in Server to Server config");else y.logError("accountId missing in Server to Server config")},m.getConfig=g.config.getConfig,m.setConfig=g.config.setConfig,m.que.push(function(){return(0,c.listenMessagesFromCreative)()}),m.cmd.push=function(e){if("function"==typeof e)try{e.call()}catch(e){y.logError("Error processing command :",e.message,e.stack)}else y.logError("Commands written into pbjs.cmd.push must be wrapped in a function")},m.que.push=m.cmd.push,m.processQueue=function(){P(m.que),P(m.cmd)}},function(e,t,r){r(404),r(417),r(419),r(422),Number.isInteger=Number.isInteger||function(e){return"number"==typeof e&&isFinite(e)&&Math.floor(e)===e}},function(e,t,r){r(405),e.exports=r(14).Array.find},function(e,t,r){var n=r(22),i=r(49)(5),o="find",a=!0;o in[]&&Array(1)[o](function(){a=!1}),n(n.P+n.F*a,"Array",{find:function(e){return i(this,e,arguments.length>1?arguments[1]:void 0)}}),r(33)(o)},function(e,t,r){var n=r(407),i=r(408),o=r(410),a=Object.defineProperty;t.f=r(29)?Object.defineProperty:function(e,t,r){if(n(e),t=o(t,!0),n(r),i)try{return 
a(e,t,r)}catch(e){}if("get"in r||"set"in r)throw TypeError("Accessors not supported!");return"value"in r&&(e[t]=r.value),e}},function(e,t,r){var n=r(23);e.exports=function(e){if(!n(e))throw TypeError(e+" is not an object!");return e}},function(e,t,r){e.exports=!r(29)&&!r(30)(function(){return 7!=Object.defineProperty(r(409)("div"),"a",{get:function(){return 7}}).a})},function(e,t,r){var n=r(23),i=r(16).document,o=n(i)&&n(i.createElement);e.exports=function(e){return o?i.createElement(e):{}}},function(e,t,r){var n=r(23);e.exports=function(e,t){if(!n(e))return e;var r,i;if(t&&"function"==typeof(r=e.toString)&&!n(i=r.call(e)))return i;if("function"==typeof(r=e.valueOf)&&!n(i=r.call(e)))return i;if(!t&&"function"==typeof(r=e.toString)&&!n(i=r.call(e)))return i;throw TypeError("Can't convert object to primitive value")}},function(e,t){e.exports=function(e,t){return{enumerable:!(1&e),configurable:!(2&e),writable:!(4&e),value:t}}},function(e,t,r){var n=r(16),i=r(28),o=r(47),a=r(31)("src"),s="toString",d=Function[s],u=(""+d).split(s);r(14).inspectSource=function(e){return d.call(e)},(e.exports=function(e,t,r,s){var d="function"==typeof r;d&&(o(r,"name")||i(r,"name",t)),e[t]!==r&&(d&&(o(r,a)||i(r,a,e[t]?""+e[t]:u.join(String(t)))),e===n?e[t]=r:s?e[t]?e[t]=r:i(e,t,r):(delete e[t],i(e,t,r)))})(Function.prototype,s,function(){return"function"==typeof this&&this[a]||d.call(this)})},function(e,t){e.exports=function(e){if("function"!=typeof e)throw TypeError(e+" is not a function!");return e}},function(e,t,r){var n=r(415);e.exports=function(e,t){return new(n(e))(t)}},function(e,t,r){var n=r(23),i=r(416),o=r(55)("species");e.exports=function(e){var t;return i(e)&&("function"!=typeof(t=e.constructor)||t!==Array&&!i(t.prototype)||(t=void 0),n(t)&&null===(t=t[o])&&(t=void 0)),void 0===t?Array:t}},function(e,t,r){var n=r(50);e.exports=Array.isArray||function(e){return"Array"==n(e)}},function(e,t,r){r(418),e.exports=r(14).Array.findIndex},function(e,t,r){var 
n=r(22),i=r(49)(6),o="findIndex",a=!0;o in[]&&Array(1)[o](function(){a=!1}),n(n.P+n.F*a,"Array",{findIndex:function(e){return i(this,e,arguments.length>1?arguments[1]:void 0)}}),r(33)(o)},function(e,t,r){r(420),e.exports=r(14).Array.includes},function(e,t,r){var n=r(22),i=r(57)(!0);n(n.P,"Array",{includes:function(e){return i(this,e,arguments.length>1?arguments[1]:void 0)}}),r(33)("includes")},function(e,t,r){var n=r(54),i=Math.max,o=Math.min;e.exports=function(e,t){return(e=n(e))u;)for(var f,p=s(arguments[u++]),g=c?n(p).concat(c(p)):n(p),v=g.length,m=0;v>m;)l.call(p,f=g[m++])&&(r[f]=p[f]);return r}:d},function(e,t,r){var n=r(426),i=r(428);e.exports=Object.keys||function(e){return n(e,i)}},function(e,t,r){var n=r(47),i=r(58),o=r(57)(!1),a=r(427)("IE_PROTO");e.exports=function(e,t){var r,s=i(e),d=0,u=[];for(r in s)r!=a&&n(s,r)&&u.push(r);for(;t.length>d;)n(s,r=t[d++])&&(~o(u,r)||u.push(r));return u}},function(e,t,r){var n=r(56)("keys"),i=r(31);e.exports=function(e){return n[e]||(n[e]=i(e))}},function(e,t){e.exports="constructor,hasOwnProperty,isPrototypeOf,propertyIsEnumerable,toLocaleString,toString,valueOf".split(",")},function(e,t){t.f=Object.getOwnPropertySymbols},function(e,t){t.f={}.propertyIsEnumerable},function(e,t,r){Object.defineProperty(t,"__esModule",{value:!0}),t.listenMessagesFromCreative=function(){addEventListener("message",s,!1)};var n,i=(n=r(12))&&n.__esModule?n:{default:n},o=r(15),a=r(2).EVENTS.BID_WON;function s(e){var t,r,n,s,d,u,c,l,f,p,g,v,m,b=e.message?"message":"data",y={};try{y=JSON.parse(e[b])}catch(e){return}if(y.adId){var h=pbjs._bidsReceived.find(function(e){return e.adId===y.adId});"Prebid Request"===y.message&&(t=h,r=y.adServerDomain,n=e.source,s=t.adId,d=t.ad,u=t.adUrl,c=t.width,l=t.height,s&&(p=(f=t).adUnitCode,g=f.width,v=f.height,(m=document.getElementById(window.googletag.pubads().getSlots().find(function(e){return 
e.getAdUnitPath()===p||e.getSlotElementId()===p}).getSlotElementId()).querySelector("iframe")).width=""+g,m.height=""+v,n.postMessage(JSON.stringify({message:"Prebid Response",ad:d,adUrl:u,adId:s,width:c,height:l}),r)),pbjs._winningBids.push(h),i.default.emit(a,h)),"Prebid Native"===y.message&&((0,o.fireNativeTrackers)(y,h),pbjs._winningBids.push(h),i.default.emit(a,h))}}},function(e,t,r){var n=r(13),i=r(27);t.dfpAdserver=function(e,t){var r=new function(e){this.name=e.adserver,this.code=e.code,this.getWinningBidByCode=function(){return(0,i.getWinningBids)(this.code)[0]}}(e);r.urlComponents=t;var o={env:"vp",gdfp_req:"1",impl:"s",unviewed_position_start:"1"},a=["output","iu","sz","url","correlator","description_url","hl"];return r.appendQueryParams=function(){var e,t=r.getWinningBidByCode();t&&(this.urlComponents.search.description_url=encodeURIComponent(t.vastUrl),this.urlComponents.search.cust_params=(e=t.adserverTargeting,encodeURIComponent((0,n.formatQS)(e))),this.urlComponents.search.correlator=Date.now())},r.verifyAdserverTag=function(){for(var e in o)if(!this.urlComponents.search.hasOwnProperty(e)||this.urlComponents.search[e]!==o[e])return!1;for(var t in a)if(!this.urlComponents.search.hasOwnProperty(a[t]))return!1;return!0},r}}]),pbjsChunk([119],{138:function(e,t,r){e.exports=r(139)},139:function(e,t,r){var n,i,o=Object.assign||function(e){for(var t=1;t0&&(p="size="+g[0],v>1)){p+="&promo_sizes=";for(var m=1;m\n let win = window;\n for (const i=0; i'):(i.width=e.width,i.height=e.height,i.ad=e.creative)):i=c(),i}return{callBids:function(t){!window.criteo_pubtag||window.criteo_pubtag instanceof Array?(d(t),o.loadScript(e,function(){},!0)):d(t)}}};a.registerBidAdapter(new d,"criteo"),e.exports=d}},[178]),pbjsChunk([99],{218:function(e,t,r){e.exports=r(219)},219:function(e,t,r){var n=Object.assign||function(e){for(var t=1;t=0&&(this.timeoutDelay=r),this.siteID=e,this.impressions=[],this._parseFnName=void 0,this.sitePage=void 
0;try{this.sitePage=d.getTopWindowUrl()}catch(e){}if(void 0!==this.sitePage&&""!==this.sitePage||(top===self?this.sitePage=location.href:this.sitePage=document.referrer),top===self?this.topframe=1:this.topframe=0,void 0!==t){if("function"!=typeof t)throw"Invalid jsonp target function";this._parseFnName="cygnus_index_args.parseFn"}void 0===_IndexRequestData.requestCounter?_IndexRequestData.requestCounter=Math.floor(256*Math.random()):_IndexRequestData.requestCounter=(_IndexRequestData.requestCounter+1)%256,this.requestID=String((new Date).getTime()%2592e3*256+_IndexRequestData.requestCounter+256),this.initialized=!0}i.prototype.serialize=function(){var e='{"id":"'+this.requestID+'","site":{"page":"'+n(this.sitePage)+'"';"string"==typeof document.referrer&&""!==document.referrer&&(e+=',"ref":"'+n(document.referrer)+'"'),e+='},"imp":[';for(var t=0;t0&&(e+=',"ext": {'+i.join()+"}"),t+1===this.impressions.length?e+="}":e+="},"}return e+"]}"},i.prototype.setPageOverride=function(e){return"string"==typeof e&&!e.match(/^\s*$/)&&(this.sitePage=e,!0)},i.prototype.addImpression=function(e,t,r,n,i,o){var a={id:String(this.impressions.length+1)};if("number"!=typeof e||e=0))return null;a.siteID=o}return this.impressions.push(a),a.id},i.prototype.buildRequest=function(){if(0!==this.impressions.length&&!0===this.initialized){var e,t=encodeURIComponent(this.serialize());(function(e){for(var t=window,r="",n=0;n=0&&(e+="&t="+this.timeoutDelay),e}};try{if("undefined"==typeof cygnus_index_args||void 0===cygnus_index_args.siteID||void 0===cygnus_index_args.slots)return;var o,a,s=new i(cygnus_index_args.siteID,cygnus_index_args.parseFn,cygnus_index_args.timeout);cygnus_index_args.url&&"string"==typeof cygnus_index_args.url&&s.setPageOverride(cygnus_index_args.url),_IndexRequestData.impIDToSlotID[s.requestID]={},_IndexRequestData.reqOptions[s.requestID]={};for(var u=0;u 0. 
Got: "+r),0))&&(void 0!==(t=e.params.video.playerType)&&d.isStr(t)?(t=t.toUpperCase(),_[t]||(d.logError("Player type is invalid, must be one of: "+Object.keys(_)),0)):(d.logError("Player type is invalid, must be one of: "+Object.keys(_)),0))&&function(e){if(!d.isArray(e)||d.isEmpty(e))return d.logError("Protocol array is not an array. Got: "+e),!1;for(var t=0;t0)return e;var t,r,n}(e)){e=function(e){e.params.video.siteID=+e.params.video.siteID,e.params.video.maxduration=+e.params.video.maxduration,e.params.video.protocols=e.params.video.protocols.reduce(function(e,t){return e.concat(I[t])},[]);var t=e.params.video.minduration;void 0!==t&&O(t)||(d.logInfo("Using default value for 'minduration', default: "+h.minduration),e.params.video.minduration=h.minduration);var r=e.params.video.startdelay;void 0!==r&&function(e){if(void 0===w[e]){var t=+e;if(isNaN(t)||!d.isNumber(t)||t= -2. Got: "+e),!1}return!0}(r)||(d.logInfo("Using default value for 'startdelay', default: "+h.startdelay),e.params.video.startdelay=h.startdelay);var n,i=e.params.video.linearity;void 0!==i&&(E[n=i]||(d.logInfo("Linearity is invalid, must be one of: "+Object.keys(E)+". Got: "+n),0))||(d.logInfo("Using default value for 'linearity', default: "+h.linearity),e.params.video.linearity=h.linearity);var o=e.params.video.mimes,a=e.params.video.playerType.toUpperCase();void 0!==o&&function(e){if(!d.isArray(e)||d.isEmpty(e))return d.logError("MIMEs array is not an array. 
Got: "+e),!1;for(var t=0;t0&&function(e,t){var r,n,i,o={id:e,imp:t,site:{page:d.getTopWindowUrl()}};if(!d.isEmpty(o.imp)){var a=(r=o.imp[0].ext.siteID,n=o,(i="https:"===window.location.protocol?c.parse("https://as-sec.casalemedia.com/cygnus?v=8&fn=pbjs.handleCygnusResponse"):c.parse("http://as.casalemedia.com/cygnus?v=8&fn=pbjs.handleCygnusResponse")).search.s=r,i.search.r=encodeURIComponent(JSON.stringify(n)),c.format(i));l.default.loadScript(a)}}(e.bidderRequestId,o),cygnus_index_args.slots.length>20&&d.logError("Too many unique sizes on slots, will use the first 20.",v),cygnus_index_args.slots.length>0&&l.default.loadScript(C());var u=!1;window.cygnus_index_ready_state=function(){if(!u){u=!0;try{var e=_IndexRequestData.targetIDToBid;for(var r in t){var n=t[r].placementCode,o=[];for(var c in e){var l=/^(T\d_)?(.+)_(\d+)$/.exec(c);if(l){var p=l[1]||"",b=l[2],y=l[3],h=j(cygnus_index_args,p+b);if(b===r){var _=a.default.createBid(1);_.cpm=y/100,_.ad=e[c][0],_.bidderCode=m,_.width=h.width,_.height=h.height,_.siteID=h.siteID,"object"===i(_IndexRequestData.targetIDToResp)&&"object"===i(_IndexRequestData.targetIDToResp[c])&&void 0!==_IndexRequestData.targetIDToResp[c].dealID?(void 0===_IndexRequestData.targetAggregate.private[n]&&(_IndexRequestData.targetAggregate.private[n]=[]),_.dealId=_IndexRequestData.targetIDToResp[c].dealID,_IndexRequestData.targetAggregate.private[n].push(b+"_"+_IndexRequestData.targetIDToResp[c].dealID)):(void 0===_IndexRequestData.targetAggregate.open[n]&&(_IndexRequestData.targetAggregate.open[n]=[]),_IndexRequestData.targetAggregate.open[n].push(b+"_"+y)),o.push(_)}}else d.logError("Unable to parse "+c+", skipping slot",v)}if(o.length>0)for(var I=0;I0&&r.length>0&&a.push({code:t.id,sizes:r,bids:n})}),a}(),r=new 
Event("nymPrebidCleared"),n.length>0?window.pbjs.que.push(function(){window.pbjs.addAdUnits(n),window.pbjs.requestBids({bidsBackHandler:h})}):a=0,setTimeout(function(){window.prebid.cleared=!0,h(),window.dispatchEvent(r)},a))},10),setTimeout(function(){c&&(i.clearInterval(c),i.googletag.pubads().refresh(),i.location.href.indexOf("pbjs_debug=true")>-1&&console.log("MESSAGE: Timeout for prebid load exceeded, aborting"))},500)),r=function(e){var t,a,n=e.data;return n.loaded?e:(n.loaded=!0,t=null,(t=n.sizes?l.defineSlot(n.name,n.sizes,n.id).addService(l.pubads()):l.defineOutOfPageSlot(n.name,n.id).addService(l.pubads())).setTargeting("adid",n.id),b.hasOwnProperty("utm_campaign")&&t.setTargeting("utmcamp",b.utm_campaign),a=p.getAdCount(n.label),t.setTargeting("label",n.label+"_"+n.site+"-"+a),l.display(n.id),l.pubads().addEventListener("slotOnload",function(){i.NYM.analytics.firstAdLoadTime||(i.NYM.analytics.firstAdLoadTime=i.performance.now(),i.NYM.analytics.firstAdLoadLabel=e.data.label)}),window.prebid&&window.prebid.cleared&&(window.pbjs.setTargetingForGPTAsync(),l.pubads().refresh([t],{changeCorrelator:!1})),e.slot=t,e)},o=function(e){var a,i,n,r=t(),o=e.getAttribute("data-name"),d=e.getAttribute("data-sizes"),s=e.getAttribute("data-label"),u=e.getAttribute("data-site");r=e.id,d&&d.length?(d=d.split(","),a=[],_map(d,function(e){e=e.split("x"),i=parseInt(e[0]),n=parseInt(e[1]),a.push([i,n])})):(e.classList.add("oop"),a=!1),this.data={id:r,name:o,sizes:a,loaded:!1,label:s,site:u},g[r]=this},s=function(e){l.cmd.push(function(){var t=r(e);g[e.data.id]=t})},d=function(e){e.slot?(window.pbjs.setTargetingForGPTAsync(),l.pubads().refresh([e.slot],{changeCorrelator:!1})):e&&s(e)},l.cmd.push(function(){var 
e,t,n,r=page.getMeta("article:tag"),o=page.getMeta("author"),d=i.location.href,s=(e=a.head.querySelector(".head-gtm"),t=a.body.querySelector(".gtm"),e&&"top"===e.getAttribute("data-gtm")?"gtmtop":t&&"bottom"===t.getAttribute("data-gtm")?"gtmbottom":"");n=[],_forEach([r,o,s],function(e){_forEach(e.split(","),function(e){(e=e.trim().toLowerCase().replace(/\s/g,"-").replace(/\'|\’/g,"")).length&&n.push(e)})}),l.pubads().setTargeting("kw",n),d=d.slice(d.lastIndexOf("/")+1),l.pubads().setTargeting("pn",d),l.companionAds().setRefreshUnfilledSlots(!0),l.pubads().enableAsyncRendering(),l.enableServices()}),this.load=s,this.create=function(e){return new o(e)},this.refresh=function(e){var t;_isString(e)?(t=this.getById(e),d(t)):d(e)},this.remove=function(e){var t=e.data.id;a.getElementById(t).innerHTML=""},this.getAdCount=function(e){var t,a=0,i=Object.keys(g);return _each(i,function(i){(t=g[i]).data.loaded&&t.data.label===e&&a++}),a},this.getById=function(e){return g[e]},u=_debounce(function(){var e={TopLeaderboard:1,RightColTopMPU:2,outOfPage:99,"homepageTakeover/TopLeaderboard":1},t=_sortBy(f,function(t){return e[t.data.label]||10});_forEach(t,function(e){return s(e)}),f=[]},10),this.addToPageLoadQueue=function(e){f.push(e),u()}}]); }, {"7":7,"15":15,"26":26,"39":39,"63":63,"67":67,"116":116,"120":120,"125":125,"129":129,"165":165}]; window.modules["chartbeat.legacy"] = [function(require,module,exports){"use strict";var page=require(125);DS.service("chartbeat",["$document","$window","login",function(t,e,a){var n,o,c,s,i,r,g,p=t.getElementById("cb-sponsor-data");e._sf_async_config={uid:19989,useCanonical:!0,domain:"nymag.com",sections:(n=page.getSiteBase(),o=page.getSiteName(),c=o||n,"The Cut"!==c&&"Vulture"!==c||(c+=[",",c,page.getChannel()].join(" ")),c)},(s=page.getPrimaryPageComponent())&&"Sponsored 
Story"===s.getAttribute("data-type")?(e._sf_async_config.sponsorName=p&&p.getAttribute("data-sponsor"),e._sf_async_config.type="Sponsored"):e._sf_async_config.sponsorName=void 0,(i=t.querySelectorAll(".article-author")).length&&(e._sf_async_config.authors=i[0].textContent.trim()),e._cbq=e._cbq||[],e._cbq.push(["_acct",a.isLoggedIn()?"lgdin":"anon"]),r=function(){var a=t.createElement("script");e._sf_endpt=(new Date).getTime(),a.setAttribute("language","javascript"),a.setAttribute("type","text/javascript"),a.setAttribute("src","//static.chartbeat.com/js/chartbeat.js"),t.body.appendChild(a)},"complete"===t.readyState?r():(g="function"==typeof e.onload?e.onload:function(){},e.onload=function(){g(),r()})}]),setTimeout(function(){DS.get("chartbeat")},0); }, {"125":125}]; window.modules["cid.legacy"] = [function(require,module,exports){"use strict";DS.service("$cid",function(){var r=Math.floor(100*Math.random());return function(){return"cid-"+ ++r}}); }, {}]; window.modules["cookie.legacy"] = [function(require,module,exports){"use strict";DS.service("cookie",["$document",function(t){var e=this;this.set=function(t,i,n){var s,r="";n&&((s=new Date).setTime(s.getTime()+864e5*n),r="; expires="+s.toGMTString()),e.nativeSet(t+"="+i+r+"; path=/")},this.get=function(t){var i,n,s,r=e.nativeGet().split(";");for(t+="=",i=0,n=r.length;i0&&(t.dataLayer.push.apply(t.dataLayer,u),u=[]),d=!1)}function L(t,i){var n=(i||e.body).querySelectorAll("["+l+"]");_each(n,function(e){var i=e.getAttribute(l),n=f[i];n&&n.init&&n.init(e,i,t)})}t.dataLayer=t.dataLayer||[],this.init=function(u){var d=!!e.head.querySelector(".head-gtm");r||(r=!0,i.onceReady(function(i){n=i,d||function(i,n){a.reportNow({event:"dataLayer-initialized",userDetails:{newYorkMediaUserID:n.clientId,loyaltyLevel:n.userLoyalty},pageDetails:{pageUri:n.pageUri,vertical:o,pageType:c,author:s}}),function(t,e,i,n,a){t[n]=t[n]||[],t[n].push({"gtm.start":(new Date).getTime(),event:"gtm.js"});var 
r=e.getElementsByTagName(i)[0],o=e.createElement(i);o.async=!0,o.src="https://www.googletagmanager.com/gtm.js?id="+a,r.parentNode.insertBefore(o,r)}(t,e,"script","dataLayer",i)}(u,i),L(i),a.reportNow(),e.addEventListener("mouseleave",function(t){t.clientY=_size(o)}function m(){var e;e=_mapValues(o,n.get),a=_omitBy(e,function(e){return""===e||_isUndefined(e)||_isNull(e)})}function h(){return f()||m()||f()}function g(){return h()&&a}function _(e){s.redirect("login",function(e){return{userId:e.id,session_id:e.session,username:e.username,remember_me:!!e.rememberMe}}(e))}function v(e,n){d(c+"/account/email/"+(e||""),n)}function w(e,n){d(c+"/account/username/"+(e||""),n)}function b(e,n){v(e,function(t,i){w(e,function(e,s){var a=i||s;n(a?null:t||e,a)})})}function p(e,n){var t,i=(t=[e.first_name,e.last_name,n],_filter(t,_isString).join(".").toLowerCase().replace(/[^.'A-Za-z]+/g,""));w(i,function(t,a){a||t?p(e,n?n+1:1):(e.username=i,s.trigger("open:register-fb-account",e))})}function k(e){ajax.sendJsonReceiveJson({method:"POST",url:c+"/account/social/login/facebook",data:u(r,e)},function(n,t,i){if(n||200!==i.status)switch(_get(t,"reasonCode")){case 0:p(e);break;case 2:s.trigger("open:link-fb-account",e)}else _(l(t))})}function q(e){var n=window.location.search.match(new RegExp("[?&]"+e+"=([^]*)"));return n&&n[1]}e.enable(s),this.isLoggedIn=h,this.logOut=function(){s.redirect("logout")},this.logIn=_,this.nymAuth=function(e,n){var t;b(e.id,function(i,s){i?n({form:"request failed"}):s?ajax.sendJsonReceiveJson({method:"POST",url:c+"/account/login",data:e},function(e,i){e&&422===e.status?n({password:"invalid"}):e?n({form:"request failed"}):(_(t=l(i)),n(null,t))}):n({id:"invalid"})})},this.getCred=g,this.getProfile=function(e){var n=g()&&a.session;n?ajax.sendReceiveJson(c+"/account/profile?session_id="+n,e):e("User not logged in")},this.getActivationStatus=function(){var 
e;switch(q("account_activation")){case"0":e="activated";break;case"1":e="activation-expired";break;case"2":e="activation-used"}return e},this.getResetRequest=function(){if(!_isNull(q("reset-password")))return{id:q("email"),oldP:q("code")}},this.requestReset=function(e,n){ajax.sendJsonReceiveJson({method:"POST",url:c+"/account/password/reset",data:e},function(e,t){var i;switch(_get(t,"status")){case"1":i="nym";break;case"2":i="fb";break;case"0":default:i=null}n(e,i)})},this.resetPassword=function(e,n){ajax.sendJson({method:"POST",url:c+"/account/password/update",data:e},function(e,t){422===t.status?n("Your password has already been changed."):e||200!==t.status?n("Error"):n(null)})},this.register=function(e,n){var t;ajax.sendJsonReceiveJson({method:"POST",url:c+"/account",data:e},function(e,i,s){e||200!==s.status?(t=(_get(s,"responseText")||"").indexOf("Invalid email")>-1,n(t?{email:"Please enter a valid email address"}:{form:i.message||"Request failed, please try again"})):n(null,!0)})},this.checkUserEmail=v,this.checkUsername=w,this.checkUserId=b,this.connectFacebookUser=k,this.linkFbNymUser=function(e){ajax.sendJson({method:"POST",url:c+"/account/social/facebook/link",data:e},function(n){var t;n?console.warn(n):k((t=e,u(_invert(r),t)))})},this.redirect=function(e,n){window.location.href=c+"/account/"+e+"/cookie?"+_reduce(n,function(e,n,t){return e+t+"="+n+"&"},"")+"origin="+window.location.href.split("?")[0]}}]),DS.get("facebook"); }, {"11":11,"26":26,"51":51,"82":82,"111":111,"116":116,"135":135,"167":167,"168":168,"169":169,"170":170,"171":171}]; window.modules["visit.legacy"] = [function(require,module,exports){"use strict";var _reduce=require(82),_get=require(11),_includes=require(10),_assign=require(35),_clone=require(33),_remove=require(165);DS.service("visit",["$window","$document","cookie","login","Fingerprint2",function(e,i,t,r,n){var 
o,s=e.localStorage,c="data-uri",l="visitServiceCount",u="visitServicePreviousTimestamp",d="visitServiceFirstVisitTimestamp",a="visitServiceVisitStartTimestamp",f="visitServiceInitialRefferer",h="nyma",_="visitDates",m=864e5,v=18e5,I=30,g={},p=Object.create(Object.prototype,{_processQ:{value:function(){var e,i=this._q.slice(0),t=i.length;for(this._q=[],e=0;tS(u),t=e-m>S(a);return i||t}function M(){var e=Date.now();!B(e)&&O(e)}function F(e){p.clientId=t.get(h),p.clientId||function(e){new n({excludeJsFonts:!0,excludeFlashFonts:!0,excludeCanvas:!0,excludeWebGL:!0,excludePixelRatio:!0}).get(function(i){p.clientId=i+"."+e})}(e)}function q(i){return i=Array.isArray(i)?i:[],_reduce(_get(e,"location.search","").substr(1).split("&"),function(e,t){var r=t.split("="),n=r[0],o=r[1];return o&&_includes(i,n)&&(e[n]=decodeURIComponent(o)),e},{})}function C(){var i=e.navigator,t=i.userAgent,r=function(e,i,t){return t||_includes(e," OPR/")?_includes(e,"Mini")?"Opera Mini":"Opera":/(BlackBerry|PlayBook|BB10)/i.test(e)?"BlackBerry":_includes(e,"IEMobile")||_includes(e,"WPDesktop")?"Internet Explorer Mobile":_includes(e,"Edge")?"Microsoft Edge":_includes(e,"FBIOS")?"Facebook Mobile":_includes(e,"Chrome")?"Chrome":_includes(e,"CriOS")?"Chrome iOS":_includes(e,"FxiOS")?"Firefox iOS":_includes(i,"Apple")?_includes(e,"Mobile")?"Mobile Safari":"Safari":_includes(e,"Android")?"Android Mobile":_includes(e,"Konqueror")?"Konqueror":_includes(e,"Firefox")?"Firefox":_includes(e,"MSIE")||_includes(e,"Trident/")?"Internet Explorer":_includes(e,"Gecko")?"Mozilla":""}(t,i.vendor,e.opera);return{browser:r,browserVersion:function(e,i){var t={"Internet Explorer Mobile":/rv:(\d+(\.\d+)?)/,"Microsoft Edge":/Edge\/(\d+(\.\d+)?)/,Chrome:/Chrome\/(\d+(\.\d+)?)/,"Chrome iOS":/CriOS\/(\d+(\.\d+)?)/,Safari:/Version\/(\d+(\.\d+)?)/,"Mobile Safari":/Version\/(\d+(\.\d+)?)/,Opera:/(Opera|OPR)\/(\d+(\.\d+)?)/,Firefox:/Firefox\/(\d+(\.\d+)?)/,"Firefox 
iOS":/FxiOS\/(\d+(\.\d+)?)/,Konqueror:/Konqueror:(\d+(\.\d+)?)/,BlackBerry:/BlackBerry (\d+(\.\d+)?)/,"Android Mobile":/android\s(\d+(\.\d+)?)/,"Internet Explorer":/(rv:|MSIE )(\d+(\.\d+)?)/,Mozilla:/rv:(\d+(\.\d+)?)/}[e],r=t&&i.match(t);return r?parseFloat(r[r.length-2]):null}(r,t),os:function(e){return/Windows/i.test(e)?/Phone/.test(e)||/WPDesktop/.test(e)?"Windows Phone":"Windows":/(iPhone|iPad|iPod)/.test(e)?"iOS":/Android/.test(e)?"Android":/(BlackBerry|PlayBook|BB10)/i.test(e)?"BlackBerry":/Mac/i.test(e)?"Mac OS X":/Linux/.test(e)?"Linux":""}(t)}}function k(t,n){var o,l;g=_assign({clientId:t,currentUrl:e.location.href,firstVisitTimestamp:w(),initialReferrer:s.getItem(f),isNewVisit:p.isNewVisit,isLoggedIn:r.isLoggedIn(),pageUri:(l=i.querySelector("["+c+'*="/_pages/"]'),l&&l.getAttribute(c)),referrer:i.referrer,screenHeight:e.screen.height,screenWidth:e.screen.width,timestamp:n,visitCount:b(),userLoyalty:(o=(s.getItem(_)||"").split(","),o.lengthr}),(0===t.length||n-t[0]>=864e5)&&t.unshift(n),s.setItem(_,t.join(","));try{s.setItem(f,i.referrer)}catch(e){}}var t,r,n;O(o),k(e,o)}),e.document.addEventListener("click",M),this.onceReady=function(e){p.onceClientIdIsReady(function(){e.call(null,_clone(g))})},this.getQueryParamsObject=q,this.getBrowserInfo=C}]); }, {"10":10,"11":11,"33":33,"35":35,"82":82,"165":165}]; require=(function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o typeof key === 'string' && key.match(/\.legacy$/)) .forEach((key) => window.require(key)); } function tryToMount(fn, el, name) { try { fn(el); // init the controller } catch (e) { const elementTag = el.outerHTML.slice(0, 
el.outerHTML.indexOf(el.innerHTML)); console.error(`Error initializing controller for "${name}" on "${elementTag}"`, e); } } /** * mount client.js component controllers */ function mountComponentModules() { Object.keys(window.modules) .filter((key) => typeof key === 'string' && key.match(/\.client$/)) .forEach((key) => { let controllerFn = window.require(key); if (typeof controllerFn === 'function') { const name = key.replace('.client', ''), instancesSelector = `[data-uri*="_components/${name}/"]`, defaultSelector = `[data-uri$="_components${name}"]`, instances = document.querySelectorAll(instancesSelector), defaults = document.querySelectorAll(defaultSelector); for (let el of instances) { tryToMount(controllerFn, el, name); } for (let el of defaults) { tryToMount(controllerFn, el, name); } } }); } // note: legacy controllers that require legacy services (e.g. dollar-slice) must // wait for DOMContentLoaded to initialize themselves, as the files themselves must be mounted first mountLegacyServices(); mountComponentModules(); // ]]
↧

How to Write a Technical Paper [pdf]


Priority Queue on Ethereum with a 15 ETH Bug Bounty


What?

A Binary Heap is the simplest data structure that implements a Priority Queue (used, for instance, in order-books). It "partially sorts" its data so that the highest-priority item can always be found instantly at the root.

Why?

Block-Gas-Limit and the iteration problem

Allowing users to insert data into a contract can lead to a situation where iterating over that data costs more gas than a transaction is allowed to use. This is a gas-limit attack.

If an array is used directly, an attacker can fill it to the point where iterating through it costs more gas than is allowed in a single transaction (the block gas limit, currently 8 million). When such a contract is worth attacking, it will be attacked. Don't write contracts this way. It's not safe.

A Heap mitigates this because its operations never iterate over all the elements; they walk only the height of the tree, which grows logarithmically with the number of elements.
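To make that concrete, here is a minimal JavaScript sketch of an array-backed max-heap (an illustration of the idea, not the eth-heap Solidity code); notice that insert and extractMax each loop once per tree level, never once per element:

```javascript
// Minimal array-backed max-heap (illustrative sketch, not the eth-heap library).
// Index 0 is unused so the parent/child arithmetic stays simple:
// parent(i) = i >> 1, leftChild(i) = 2*i, rightChild(i) = 2*i + 1.
class MaxHeap {
  constructor() { this.nodes = [null]; }

  size() { return this.nodes.length - 1; }

  insert(priority) {
    this.nodes.push(priority);
    let i = this.size();
    // Bubble up: at most one swap per tree level, i.e. O(log n).
    while (i > 1 && this.nodes[i >> 1] < this.nodes[i]) {
      [this.nodes[i >> 1], this.nodes[i]] = [this.nodes[i], this.nodes[i >> 1]];
      i >>= 1;
    }
  }

  extractMax() {
    if (this.size() === 0) return null;
    const max = this.nodes[1];
    const last = this.nodes.pop();
    if (this.size() > 0) {
      this.nodes[1] = last;
      // Bubble down: again at most one swap per level.
      let i = 1;
      for (;;) {
        let largest = i;
        const l = 2 * i, r = 2 * i + 1;
        if (l <= this.size() && this.nodes[l] > this.nodes[largest]) largest = l;
        if (r <= this.size() && this.nodes[r] > this.nodes[largest]) largest = r;
        if (largest === i) break;
        [this.nodes[i], this.nodes[largest]] = [this.nodes[largest], this.nodes[i]];
        i = largest;
      }
    }
    return max;
  }
}

const h = new MaxHeap();
[3, 41, 7, 26].forEach(p => h.insert(p));
console.log(h.extractMax()); // 41
console.log(h.extractMax()); // 26
```

However long the attacker makes the queue, each operation touches only a root-to-leaf path.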

Data structures to the rescue

Unfortunately, even though many tree structures have O(log n) costs under normal circumstances, they are not safe to use in public Ethereum contracts, because attackers can craft insertion orders that degenerate the tree toward O(n) costs. A degenerate tree is one in which a single branch grows very long, effectively becoming a linked list.

Self-balancing trees solve this issue because they cannot degenerate. They rotate or swap nodes during insertion to stay balanced, preserving their O(log n) costs even under worst-case conditions.

Binary Heap

Options

A Binary Heap is a partially sorted, self-balancing tree with worst-case costs of O(log n).

If you need a fully sorted, self-balancing tree, you can use a 2-3-4 tree, a red-black tree, or an AVL tree. Piper Merriam wrote an AVL tree in Solidity that he's used for the Ethereum Alarm Clock.

Fully-sorted vs Partially-sorted?

A Heap allows you to quickly find the element with the largest value of some property. It is not as quick as the fully sorted trees, however, at iterating from largest to smallest.

For example:

The Heap was built to accommodate the order-book for a decentralized exchange where:

  • Users can make (and remove) as many orders as they wish
  • The contract has to automatically match the highest order

When someone creates a sell-order, the contract must find the highest price buy-order to see if it matches (and vice-versa). If there is not a match, we do not need to find the next highest price buy-order, so a heap will suffice. If there is a match, we extractMax(), and the heap will re-adjust so the new highest-price order is at the top.

The more I think about it, the more problems I think you can solve on Ethereum using this Heap. Remember, the cost reduction requirement is only relevant to logic that's executed on-chain. Off-chain we can easily iterate through all the data and cache it locally however is appropriate. There is a dump() function for doing just that. There is also an index.js file that can rebuild the heap in JavaScript and print it visually.

const TestHeap = artifacts.require("TestHeap");
const Helpers = require("../index"); // or require("eth-heap") from a project using npm
const Heap = Helpers.Heap;
const Node = Helpers.Node;

// create a testHeap contract and fill it with data
let dumpSig = "0xe4330545"; // keccak("dump()")[0-8]
let response = await web3.eth.call({to: heap.address, data: dumpSig});
new Heap(response).print();

The only benefit of a fully-sorted tree is that you can iterate through it from greatest to least... but that just brings back the block-gas-limit attack problem. I can't think of an application that would require an AVL tree or a red-black tree but wouldn't run into the gas-limit attack problem.

How? (to use)

npm install eth-heap --save

Then from a truffle contract, import the library

import "eth-heap/contracts/Heap.sol";

Initialize

Call init() once on the library before use

Data Store

Heaps allow for insertion, extraction by id, and extraction of the maximum.

This particular heap also supports getById() and extractById(), which avoids race conditions. Node structs have only id and priority properties (packed into one storage slot), but you can extend this to arbitrary data by pointing to a struct that you define in a separate mapping, keyed by the same id used in the heap.
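As a minimal sketch of that id-indirection pattern (illustrative JavaScript, not the library's API; the names orders and makeOrder are made up), the heap tracks only (id, priority) while a side mapping holds the full struct:

```javascript
// Sketch of the id-indirection pattern: the "heap" stores only (id, priority),
// and the full order data lives in a separate map keyed by the same id.
// Names (orders, makeOrder) are illustrative, not the eth-heap API.
const heapNodes = [];     // stand-in for the heap's packed nodes
const orders = new Map(); // id => arbitrary order struct
let idCount = 0;

function makeOrder(price, details) {
  const id = ++idCount;
  heapNodes.push({ id, priority: price });
  orders.set(id, details);
  return id;
}

// getMax is a linear scan here purely for brevity;
// the real heap reads this off the root in O(1).
function getMax() {
  return heapNodes.reduce((a, b) => (b.priority > a.priority ? b : a));
}

makeOrder(100, { trader: "0xabc", amount: 3 });
makeOrder(250, { trader: "0xdef", amount: 1 });
const top = getMax();
console.log(orders.get(top.id)); // full struct of the highest-price order
```

The heap stays small and cheap to update, while the side mapping can hold fields of any shape.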

Think of it simply as a data store: insert things into it, extract them, or find and remove the largest element. Don't manipulate the heap structure except through the API, or you risk corrupting its integrity.

Max-heap / Min-heap.

This is a max-heap. If you would like to use it as a min-heap, simply reverse the sign before inserting (multiply by -1), although I haven't tested this yet.
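A quick sketch of the sign trick (hedged, as above, since the author notes it is untested in the contract):

```javascript
// Using a max-heap as a min-heap by negating priorities.
const priorities = [5, 1, 9];
const negated = priorities.map(p => -p); // insert these into the max-heap
const root = Math.max(...negated);       // what extractMax would hand back
const min = -root;                       // reverse the sign on the way out
console.log(min); // 1
```

One caveat for the Solidity version: priorities are int128, so the most negative value has no positive counterpart and should be avoided.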

Error Handling

Bad input will result in the (default) zero node, Node(0,0), being returned. For the most part, the functions will not throw any errors. This allows you to handle errors in your own way. If you'd like to throw an error in these situations, perform require(Heap.isNode(myNode)); on the returned node.

API.

Note that if you want to return the Heap.Node data types from a public function, you have to use the experimental ABIEncoderV2 for now.

struct Data {
    int128 idCount;
    Node[] nodes;                       // root is index 1; index 0 not used
    mapping (int128 => uint) indices;   // unique id => node index
}
struct Node {
    int128 id;        // use with a mapping to store arbitrary object types
    int128 priority;
}
function init(Data storage self) internal {}
function insert(Data storage self, int128 priority) internal returns (Node) {}
function extractMax(Data storage self) internal returns (Node) {}
function extractById(Data storage self, int128 id) internal returns (Node) {}
function dump(Data storage self) internal view returns (Node[]) {}
function getById(Data storage self, int128 id) internal view returns (Node) {}
function getByIndex(Data storage self, uint i) internal view returns (Node) {}
function getMax(Data storage self) internal view returns (Node) {}
function size(Data storage self) internal view returns (uint) {}

Bounty

It is extremely important for Ethereum code to be bullet-proof. ETH, ETC, and BTC are the most hostile programming environments ever created. We are in a paradigm shift, and bounties are an important part of the solution. This bounty will start at 10 ETH, and increase over time for at least a month.

Welcome. This is different from many other bounties, where you would "report" a bug and hope to be reimbursed fairly. This bounty has the ETH locked right into the smart contract, ready to be withdrawn instantly upon exploitation of any bug.

In fact: if you find a potential attack vector, you should tell no one until you successfully exploit it yourself (securing the ETH to your account). You could even do this anonymously, but I would prefer you find a way to document the bug after the fact (it would really save me some time). Open a GitHub issue after executing your exploit.

Bounty Rules

Mainnet Address: 0xd01c0bd7f22083cfc25a3b3e31d862befb44deeb

First I wrote the Heap.sol library. Then I wrote a second contract, BountyHeap.sol (utilizing the library), which exposes all the operations of a single "public" heap that anyone can send transactions to. In this second contract, I took the definitions of what makes a heap a heap and wrote public functions that release funds iff these properties are broken.

The Heap Property

In a heap, every child node should have a value less than or equal to its parent's. If you are able to get the contract into any state where this is untrue, simply call the

breakParentsHaveGreaterPriority(uint indexChild, address recipient)

function, and the contract will release its full bounty.

There are many other subtle properties that must stay intact for the heap to be secure. I've made corresponding functions that each release the entire bounty if exploited. I will describe the others below.

Completeness Property

breakCompleteness(uint holeIndex, uint filledIndex, address recipient)

A Binary Heap is a complete tree. This means it can be, and in this case is, implemented using a dynamically sized array (no pointers). The array should contain no empty spots (even as nodes are inserted and extracted from arbitrary positions). This architecture actually allowed for a significant gas cost reduction! If this property is broken, the heap is sure to be corrupted.

ID Maintenance Properties

The rest of the functions have to do with a design decision I made to give each node a unique id. This id allows the heap to organize data of any type. For example, if you want the buyOrder struct with the highest price, find it using the heap's getMax(), and then look up your buyOrder in a separate mapping using the returned id. The id also allows a user to remove a specific node, whereas another value (like its index) could change unpredictably as other users' transactions are mined first.

To support these use cases, a mapping from id to index (in the nodes array) is used. It is carefully updated behind the scenes whenever a node is inserted, deleted, or moved.

If there is more than one node with the same id, something has gone terribly wrong. Take your ETH using:

function breakIdUniqueness(uint index1, uint index2, address recipient)

Furthermore, there should never be an id in the mapping that points to an empty or differing node in the array or vice-versa. Use the following to prove otherwise:

function breakIdMaintenance(int128 id, address recipient)
function breakIdMaintenance2(uint index, address recipient)

Gas Usage

All gas costs rise logarithmically at worst, but the simplicity of a binary heap makes it notably cheaper than the alternatives. Because the heap is a complete tree, it can be implemented using an array. This makes navigating the structure much cheaper: instead of pointers to child and parent nodes (requiring the most expensive resource, storage space), it uses simple arithmetic to move from child to parent (index/2) and from parent to leftChild or rightChild (index*2 or index*2+1).
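That index arithmetic can be written down directly (a trivial JavaScript illustration, with the root at index 1):

```javascript
// Navigating a complete tree stored in an array: no pointers needed.
const parent = i => i >> 1;       // integer division by 2
const leftChild = i => 2 * i;
const rightChild = i => 2 * i + 1;

console.log(parent(6));     // 3
console.log(leftChild(3));  // 6
console.log(rightChild(3)); // 7
```

Three cheap arithmetic operations replace three storage reads per hop.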

Array Tree

performed on 500 item sets

  • extractById() Average Gas Costs: 69461
  • insert() Average Gas Costs: 101261
  • extractMax() Average Gas Costs: 170448

Heuristic: The cost of these functions can go up by about 20,000 gas every time you double the number of data items.

  • red lines => worst-case data
  • green lines => best-case data
  • blue dots (insert) => randomized data

Insert Stats

  • red lines => worst-case data
  • green lines => best-case data
  • blue dots (extractMax) => randomized data
  • brown dots (extractById) => randomized data

Extract Stats

Given Ethereum's current architecture, these operations will never exceed the block gas limit and "lock up."


Why Doctors Reject Tools That Make Their Jobs Easier


I want to tell you about a brouhaha in my field over a "new" medical discipline three hundred years ago. Half my fellow doctors thought it weighed them down and wanted nothing to do with it. The other half celebrated it as a means for medicine to finally become modern, objective, and scientific. The discipline was thermometry, and its controversial tool a glass tube used to measure body temperature called a thermometer.

This all began in 1717, when Daniel Fahrenheit moved to Amsterdam and offered his newest temperature sensor to the German physician Hermann Boerhaave. Boerhaave tried it out and liked it. He proposed using measurements with this device to guide diagnosis and therapy.

Boerhaave's innovation was not embraced. Doctors were all for detecting fevers to guide diagnosis and treatment, but their determination of whether fever was present was qualitative. "There is, for example, that acrid, irritating quality of feverish heat," the French physician Jean Charles Grimaud said as he scorned the thermometer's reducing his observations down to numbers. "These [numerical] differences are the least important in practice."

Grimaud captured the prevailing view of the time when he argued that the physician's touch captured information richer than any tool, and for over a hundred years doctors were loath to use the glass tube. Researchers among them, however, persevered. They wanted to discover reproducible laws in medicine, and the verbal descriptions from doctors were not getting them there. Words were idiosyncratic; they varied from doctor to doctor and even for the same doctor from day to day. Numbers never wavered.

In 1851 at the Leipzig university hospital in Germany, Carl Reinhold Wunderlich started recording temperatures of his patients. 100,000 cases and several million readings later, he published the landmark work "On the Temperature in Diseases: a manual of medical thermometry." His text established an average body temperature of 37 degrees, the variation from this mean which could be considered normal, and the cutoff of 38 degrees as a bona fide fever. Wunderlich's data were compelling; he could predict the course of illness better when he defined fever by a number than when fever had been defined by feel alone. The qualitative status quo would have to change.

Using a thermometer had previously suggested incompetence in a doctor. By 1886, not using one did. "The information obtained by merely placing the hand on the body of the patient is inaccurate and unreliable," remarked the American physician Austin Flint. "If it be desirable to count the pulse and not trust to the judgment to estimate the number of beats per minute, it is far more desirable to ascertain the animal heat by means of a heat measurer."

Evidence that temperature signaled disease made patient expectations change too. After listening to the doctor's exam and evaluations, a patient in England asked, "Doctor, you didn't try the little glass thing that goes in the mouth? Mrs Mc__ told me that you would put a little glass thing in her mouth and that would tell just where the disease was..."

Thermometry was part of a seismic shift in the nineteenth century, along with blood tests, microscopy, and eventually the x-ray, to what we now know as modern medicine. From impressionistic illnesses that went unnamed and thus had no systematized treatment or cure, modern medicine identified culprit bacteria, trialled antibiotics and other drugs, and targeted diseased organs or even specific parts of organs.

Imagine being a doctor at this watershed moment, trained in an old model and staring a new one in the face. Your patients ask for blood tests and measurements, not for you to feel their skin. Would you use all the new technology even if you didn't understand it? Would you continue feeling skin, or let the old ways fall to the wayside? And would it trouble you, as the blood tests were drawn and temperatures taken by the nurse, that these tools didn't need you to report their results? That if those results dictated future tests and prescriptions, doctors might as well be replaced completely?

The original thermometers were a foot long, available only in academic hospitals, and took twenty minutes to get a reading. How wonderful that they are now cheap and ubiquitous, and that pretty much anyone can use one. It's hard to imagine a medical technology whose diffusion has been more successful. Even so, the thermometer's takeover has hardly done away with our use for doctors. If we have a fever we want a doctor to tell us what to do about it, and if we don't have a fever but feel lousy we want a doctor anyway, to figure out what's wrong.

Still, the same debate about technology replacing doctors rages on. Today patients want not just the doctor's opinion, but everything from their microbiome array and MRI to tests for their testosterone and B12 levels. Some doctors celebrate this millimeter and microliter resolution inside patients' bodies. They proudly brandish their arsenal of tests and say technology has made medicine the best it's ever been.

The other camp thinks Grimaud was on to something. They resent all these tests because they miss things that listening to and touching the patient would catch. They insist there is more to health and disease than what quantitative testing shows, and try to limit the tests that are ordered. But even if a practiced touch detects things tools miss, it is hard to deny that tools also detect things we would miss, and don't want to.

Modern CT scans, for example, perform better than even the best surgeons' palpation of a painful abdomen in detecting appendicitis. As CT scans become cheaper, faster, and deliver less radiation, they will become even more accurate. The same will happen with genome sequences and other up-and-coming tests that detect what overwhelms our human senses. There is no hope trying to rein in their ascent, nor is it right to. Medicine is better off with them around.

What's keeping some doctors from celebrating this miraculous era of medicine is the nagging concern that we have nothing to do with its triumphs. We are told the machines' autopilot outperforms us, so we sit quietly and get weaker, yawning and complacent like a mangy tiger in captivity. We wish we could do as Grimaud said: "distinguishing in feverish heat qualities that may be perceived only by a highly practiced touch, and which elude whatever means physics may offer."

A children's hospital in Philadelphia tried just that. Children often have fevers, as anyone who has had children around them well knows. Usually, they have a simple cold and there's not much to fuss about. But about once in a thousand cases, feverish kids have deadly infections and need antibiotics, ICU care, all that modern medicine can muster.

An experienced doctor's judgment picks out the one-in-a-thousand very sick child about three quarters of the time. To try to capture the remainder of these children being missed, hospitals started using quantitative algorithms from their electronic health records to choose which fevers were dangerous based on hard facts alone. And indeed, the computers did better, catching the serious infections nine times out of ten, albeit also with ten times the false alarms.

The Philadelphia hospital accepted the computer-based list of worrisome fevers, but then deployed their best doctors and nurses to apply Grimaud's "highly practiced touch" and look over the children before declaring the infection was deadly and bringing them into the hospital for intravenous medications. Their teams were able to weed out the algorithm's false alarms with high accuracy, and in addition find cases the computer missed, bringing their detection rate of deadly infections from 86.2 percent by the algorithm alone, to 99.4 percent by the algorithm in combination with human perception.

Too many doctors have resigned themselves to the idea that they have nothing to add in a world of advanced technology. They thoughtlessly order tests and thoughtlessly obey the results. When, inevitably, the tests give unsatisfying answers they shrug their shoulders. I wish more of them knew about the Philadelphia pediatricians, whose close human attention caught mistakes a purely numerical rules-driven system would miss.

It's true that a doctor's eyes and hands are slower, less precise, and more biased than modern machines and algorithms. But these technologies can count only what they have been programmed to count: human perception is not so constrained.

Our distractible, rebellious, infinitely curious eyes and hands decide moment-by-moment what deserves attention. While this leeway can lead us astray, with the best of training and judgment, it can also lead us to the as-yet-undiscovered phenomena that no existing technology knows to look for. My profession and other increasingly automated fields would do better to focus on finding new answers than on fettering old algorithms.


Hijacking HTML canvas and PNG images to store arbitrary text data

One of the web app projects I'm working on had an interesting requirement recently - it needed to provide a save/load feature without relying on cookies, local storage or server side storage (no accounts or logins). My first pass at the save feature implementation was to take my data, serialise it as JSON, dynamically create a new link element with a data URL and the download attribute set and trigger a click event on this link. That worked pretty well on desktop browsers. It failed miserably on mobile Safari.

Problem - Mobile Safari ignores the download attribute in the link element. This leads to the serialised JSON data being displayed in the browser window without any way of storing it on the user's device. There was no way to disable this.

Solution - Present the user with something that stores data and that they can save to their device. An image is an obvious choice here. This doesn't create the same save/load experience but is close enough to be workable.

I did try using QR codes for this and found them incredibly easy to generate but the decoding side was not so simple and required rather large libraries to be included, so I quickly discarded the idea of using them.

The challenge then was to work out how to store arbitrary text data in a PNG. This was not a new idea and has been done previously, however I didn't want to have a completely generic storage container and was happy to impose some constraints to make my job easier.

Constraints/Requirements

  1. The generated image had to be easy to save and should have preset dimensions.
  2. The save/load data I was dealing with was in the order of several dozen kilobytes.
  3. I wanted to store my data as JSON.
  4. I didn't want to deal with the details of saving/loading in any particular image format.

Sounds simple enough right? Well there were a few catches. But first let's see the general approach.

Images are fundamentally 2D arrays of pixels. Each pixel is a tuple of 3 bytes, one for each colour component - RGB. Each of the colour components has a range of 0 to 255. This lends itself to storing byte/character arrays naturally. For example a single pixel can be used to store the array of ASCII characters ['F', 'T', 'W'] by encoding their ASCII codes as a colour intensity like so...

The result is a rather grey and boring pixel but it stores the data we want. Whole sentences can be encoded in the same manner - "The quick brown fox jumps over the lazy dog" - is a sequence of these ASCII codes...

Which ends up as 15 pixels like so...

The last 3-tuple only has one character code so it is padded with two zero values to produce the resulting pixel.
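A small JavaScript sketch of this encoding scheme (my own reconstruction, not the article's exact code):

```javascript
// Encode an ASCII string as RGB 3-tuples, zero-padding the final pixel,
// mirroring the scheme described above (illustrative reconstruction).
function toPixels(text) {
  const bytes = Array.from(text, c => c.charCodeAt(0));
  while (bytes.length % 3 !== 0) bytes.push(0); // pad the last pixel
  const pixels = [];
  for (let i = 0; i < bytes.length; i += 3) {
    pixels.push(bytes.slice(i, i + 3));
  }
  return pixels;
}

console.log(toPixels("FTW"));
// [[70, 84, 87]] -> one grey-ish pixel
console.log(toPixels("The quick brown fox jumps over the lazy dog").length);
// 15 pixels, the last padded with zeros
```

Decoding is the mirror image: read bytes back out, drop trailing zeros, and rebuild the string.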

That was the basic approach. Then I had to address my requirements:

  1. Though storing and generating an image that was a 1-pixel line would have been the easiest to implement, such an image is not easy to tap to save, so I had to use a square image of sufficient size. Using a preset maximum size (256 x 256 pixels) for the image worked well towards this, but it required keeping track of the size of the actual encoded data. This encoded size was the side length of a square and had to be stored in the generated image. Using a single colour component of the first pixel would let me have a square of up to 255 x 255 in size - the first line is forfeited to store this size value, and since it's a square the last column in the image is also forfeited. The size of the byte/character array being encoded also had to be preserved somehow; this would require more than a byte of storage, but I had the remainder of the first line's worth of pixels to deal with it (which I didn't end up needing, due to a fortunate issue I encountered with the alpha channel).
  2. Since the maximum size of the available pixel data was 255 x 255 pixels, this gave me 65025 pixels to play with. In turn this translated to 195075 bytes (190kB) of text data. This was well above what I actually needed.
  3. Using TextEncoder I could convert my serialised JSON data into a byte array (Uint8Array in JavaScript).
  4. Using an off-screen canvas would allow me to manipulate pixel data at will and then convert to an image data URL in my desired format.

Converting objects to a byte array

So now I had the general approach worked out and had a container for my byte array. The next step was to convert my objects into a form that could be stored in a byte array. This was easy, using JSON.stringify() and TextEncoder.encode() I could get a Uint8Array. I could also then work out the size of the square image that would be big enough to store this data.
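Sketching those two steps (the payload object here is a made-up example; the 255-pixel cap comes from storing the side length in one colour byte):

```javascript
// Serialise an object, measure its byte length, and work out the side
// length of the square of pixels needed to hold it (3 bytes per RGB pixel).
const payload = JSON.stringify({ score: 12345, items: ["a", "b", "c"] });
const bytes = new TextEncoder().encode(payload); // Uint8Array of UTF-8 bytes
const pixelsNeeded = Math.ceil(bytes.length / 3);
const side = Math.ceil(Math.sqrt(pixelsNeeded));
console.log(bytes.length, side);
if (side > 255) throw new Error("payload too large for a 255 x 255 data square");
```

TextEncoder always emits UTF-8, so multi-byte characters are handled for free; the byte length, not the string length, is what sizes the square.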

Converting byte array to an image data

Then it was time to take my byte array data and convert it into an ImageData object that could be used with a canvas. That's where I came across the first issue - ImageData expected a Uint8ClampedArray and I had a Uint8Array. Fundamentally though, since my data was already 'clamped' in a sense by the TextEncoder conversion, I didn't really have to worry too much.

Since I needed a lossless format to store my image data I went for PNG as the output format. This also meant that instead of storing data as RGB, it would be stored as RGBA. There was an additional Alpha channel per pixel and therefore an extra byte to play with. However after some experimentation I ran into an issue that had to do with RGB corruption when the alpha channel was set to zero.

That threw a spanner in the works and I had to write code to convert my 3-tuple byte array into a 4-tuple array with the 4th (alpha) component being set to full opacity (255). This turned out to be an advantage for decoding later since I could skip all zero-padded data easily. It wasn't the most efficient code but it did the trick.
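A sketch of that conversion (my own reconstruction; the real code feeds the result into an ImageData on a canvas, which isn't shown here):

```javascript
// Expand RGB 3-tuples into RGBA with alpha forced to 255, so browsers
// don't corrupt the colour bytes of fully transparent pixels.
function rgbToRgba(rgbBytes) {
  const out = new Uint8ClampedArray((rgbBytes.length / 3) * 4);
  for (let i = 0, j = 0; i < rgbBytes.length; i += 3, j += 4) {
    out[j] = rgbBytes[i];
    out[j + 1] = rgbBytes[i + 1];
    out[j + 2] = rgbBytes[i + 2];
    out[j + 3] = 255; // full opacity
  }
  return out;
}

const rgba = rgbToRgba(Uint8Array.from([70, 84, 87])); // the "FTW" pixel
console.log(Array.from(rgba)); // [70, 84, 87, 255]
```

The input length is assumed to be a multiple of 3, which the zero-padding step already guarantees.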

As a bonus I now had the correctly typed Uint8ClampedArray byte array and could finally construct my ImageData object.

Drawing the image

With the ImageData object available I could now create a canvas and draw the image data that was holding my encoded JSON. First the canvas was created 'off screen' and its context retrieved and the background set to a solid colour (actual colour doesn't matter here).

Then I could 'draw' the pixel that represented the size of the square image that encoded my data.

Then I could draw the image data...

Saving the image

The image could now be saved from the canvas to the file system (or in the case of mobile Safari displayed in a new tab) with a bit of jQuery code...

The end result was something like this...

Of course the next step is decoding an image and getting the original JSON back out of it, that will have to wait until the next article however, which is available now - Retrieving data from hijacked PNG images using HTML canvas and Javascript!

-i


How Lisp Became God's Own Programming Language


When programmers discuss the relative merits of different programming languages, they often talk about them in prosaic terms as if they were so many tools in a tool belt: one might be more appropriate for systems programming, another might be more appropriate for gluing together other programs to accomplish some ad hoc task. This is as it should be. Languages have different strengths, and claiming that a language is better than other languages without reference to a specific use case only invites an unproductive and vitriolic debate.

But there is one language that seems to inspire a peculiar universal reverence: Lisp. Keyboard crusaders that would otherwise pounce on anyone daring to suggest that some language is better than any other will concede that Lisp is on another level. Lisp transcends the utilitarian criteria used to judge other languages, because the median programmer has never used Lisp to build anything practical and probably never will, yet the reverence for Lisp runs so deep that Lisp is often ascribed mystical properties. Everyone's favorite webcomic, xkcd, has depicted Lisp this way at least twice: In one comic, a character reaches some sort of Lisp enlightenment, which appears to allow him to comprehend the fundamental structure of the universe. In another comic, a robed, senescent programmer hands a stack of parentheses to his padawan, saying that the parentheses are "elegant weapons for a more civilized age," suggesting that Lisp has all the occult power of the Force.

Another great example is Bob Kanefsky's parody of a song called "God Lives on Terra." His parody, written in the mid-1990s and called "Eternal Flame", describes how God must have created the world using Lisp. The following is an excerpt, but the full set of lyrics can be found in the GNU Humor Collection:

For God wrote in Lisp code
When he filled the leaves with green.
The fractal flowers and recursive roots:
The most lovely hack I've seen.
And when I ponder snowflakes,
never finding two the same,
I know God likes a language
with its own four-letter name.

I can only speak for myself, I suppose, but I think this "Lisp Is Arcane Magic" cultural meme is the most bizarre and fascinating thing ever. Lisp was concocted in the ivory tower as a tool for artificial intelligence research, so it was always going to be unfamiliar and maybe even a bit mysterious to the programming laity. But programmers now urge each other to "try Lisp before you die" as if it were some kind of mind-expanding psychedelic. They do this even though Lisp is now the second-oldest programming language in widespread use, younger only than Fortran, and even then by just one year. Imagine if your job were to promote some new programming language on behalf of the organization or team that created it. Wouldn't it be great if you could convince everyone that your new language had divine powers? But how would you even do that? How does a programming language come to be known as a font of hidden knowledge?

How did Lisp get to be this way?

The cover of Byte Magazine, August 1979.

Theory A: The Axiomatic Language

John McCarthy, Lisp's creator, did not originally intend for Lisp to be an elegant distillation of the principles of computation. But, after one or two fortunate insights and a series of refinements, that's what Lisp became. Paul Graham (we will talk about him some more later) has written that, with Lisp, McCarthy "did for programming something like what Euclid did for geometry." People might see a deeper meaning in Lisp because McCarthy built Lisp out of parts so fundamental that it is hard to say whether he invented it or discovered it.

McCarthy began thinking about creating a language during the 1956 Dartmouth Summer Research Project on Artificial Intelligence. The Summer Research Project was in effect an ongoing, multi-week academic conference, the very first in the field of artificial intelligence. McCarthy, then an assistant professor of Mathematics at Dartmouth, had actually coined the term "artificial intelligence" when he proposed the event. About ten or so people attended the conference for its entire duration. Among them were Allen Newell and Herbert Simon, two researchers affiliated with the RAND Corporation and Carnegie Mellon who had just designed a language called IPL.

Newell and Simon had been trying to build a system capable of generating proofs in propositional calculus. They realized that it would be hard to do this while working at the level of the computer's native instruction set, so they decided to create a language (or, as they called it, a "pseudo-code") that would help them more naturally express the workings of their "Logic Theory Machine." Their language, called IPL for "Information Processing Language", was more of a high-level assembly dialect than a programming language in the sense we mean today. Newell and Simon, perhaps referring to Fortran, noted that other "pseudo-codes" then in development were "preoccupied" with representing equations in standard mathematical notation. Their language focused instead on representing sentences in propositional calculus as lists of symbolic expressions. Programs in IPL would basically leverage a series of assembly-language macros to manipulate and evaluate expressions within one or more of these lists.

McCarthy thought that having algebraic expressions in a language, Fortran-style, would be useful. So he didn't like IPL very much. But he thought that symbolic lists were a good way to model problems in artificial intelligence, particularly problems involving deduction. This was the germ of McCarthy's desire to create an algebraic list processing language, a language that would resemble Fortran but also be able to process symbolic lists like IPL.

Of course, Lisp today does not resemble Fortran. Over the next few years, McCarthy's ideas about what an ideal list processing language should look like evolved. His ideas began to change in 1957, when he started writing routines for a chess-playing program in Fortran. The prolonged exposure to Fortran convinced McCarthy that there were several infelicities in its design, chief among them the awkward IF statement. McCarthy invented an alternative, the "true" conditional expression, which returns sub-expression A if the supplied test succeeds and sub-expression B if the supplied test fails, and which also only evaluates the sub-expression that actually gets returned. During the summer of 1958, when McCarthy worked to design a program that could perform differentiation, he realized that his "true" conditional expression made writing recursive functions easier and more natural. The differentiation problem also prompted McCarthy to devise the maplist function, which takes another function as an argument and applies it to all the elements in a list. This was useful for differentiating sums of arbitrarily many terms.

None of these things could be expressed in Fortran, so, in the fall of 1958, McCarthy set some students to work implementing Lisp. Since McCarthy was now an assistant professor at MIT, these were all MIT students. As McCarthy and his students translated his ideas into running code, they made changes that further simplified the language. The biggest change involved Lisp's syntax. McCarthy had originally intended for the language to include something called "M-expressions," which would be a layer of syntactic sugar that made Lisp's syntax resemble Fortran's. Though M-expressions could be translated to S-expressions (the basic lists enclosed by parentheses that Lisp is known for), S-expressions were really a low-level representation meant for the machine. The only problem was that McCarthy had been denoting M-expressions using square brackets, and the IBM 026 keypunch that McCarthy's team used at MIT did not have any square bracket keys on its keyboard. So the Lisp team stuck with S-expressions, using them to represent not just lists of data but function applications too. McCarthy and his students also made a few other simplifications, including a switch to prefix notation and a memory model change that meant the language only had one real type.

In 1960, McCarthy published his famous paper on Lisp called "Recursive Functions of Symbolic Expressions and Their Computation by Machine." By that time, the language had been pared down to such a degree that McCarthy realized he had the makings of "an elegant mathematical system" and not just another programming language. He later wrote that the many simplifications that had been made to Lisp turned it "into a way of describing computable functions much neater than the Turing machines or the general recursive definitions used in recursive function theory." In his paper, he therefore presented Lisp both as a working programming language and as a formalism for studying the behavior of recursive functions.

McCarthy explained Lisp to his readers by building it up out of only a very small collection of rules. Paul Graham later retraced McCarthy's steps, using more readable language, in his essay "The Roots of Lisp." Graham is able to explain Lisp using only seven primitive operators, two different notations for functions, and a half-dozen higher-level functions defined in terms of the primitive operators. That Lisp can be specified by such a small sequence of basic rules no doubt contributes to its mystique. Graham has called McCarthy's paper an attempt to "axiomatize computation." I think that is a great way to think about Lisp's appeal. Whereas other languages have clearly artificial constructs denoted by reserved words like while or typedef or public static void, Lisp's design almost seems entailed by the very logic of computing. This quality, and Lisp's original connection to a field as esoteric as "recursive function theory," should make it no surprise that Lisp has so much prestige today.
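
To give a flavor of how small that basis is, here is a sketch of the primitives modeled in Python, with nested lists standing in for S-expressions and strings for atoms. This follows the spirit of Graham's essay rather than any particular Lisp; there is no reader or evaluator here, and the helper names are mine.

```python
# The seven primitives, modeled over Python lists. quote is implicit
# (Python literals are already unevaluated data), and cond corresponds
# to a chain of if/elif expressions, so five need actual definitions.
def atom(x):    return not isinstance(x, list)          # (atom x)
def eq(x, y):   return atom(x) and atom(y) and x == y   # (eq x y)
def car(x):     return x[0]                             # (car x)
def cdr(x):     return x[1:]                            # (cdr x)
def cons(x, y): return [x] + y                          # (cons x y)

# Higher-level functions defined purely in terms of the primitives:
def null(x):
    # True only for the empty list.
    return not atom(x) and len(x) == 0

def append(x, y):
    # Recursive definition, in the style of McCarthy's paper.
    return y if null(x) else cons(car(x), append(cdr(x), y))

print(append(["a", "b"], ["c"]))  # ['a', 'b', 'c']
```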

Theory B: Machine of the Future

Two decades after its creation, Lisp had become, according to the famous Hacker's Dictionary, the "mother tongue" of artificial intelligence research. Early on, Lisp spread quickly, probably because its regular syntax made implementing it on new machines relatively straightforward. Later, researchers would keep using it because of how well it handled symbolic expressions, important in an era when so much of artificial intelligence was symbolic. Lisp was used in seminal artificial intelligence projects like the SHRDLU natural language program, the Macsyma algebra system, and the ACL2 logic system.

By the mid-1970s, though, artificial intelligence researchers were running out of computer power. The PDP-10 in particular (everyone's favorite machine for artificial intelligence work) had an 18-bit address space that was increasingly insufficient for Lisp AI programs. Many AI programs were also supposed to be interactive, and making a demanding interactive program perform well on a time-sharing system was challenging. The solution, originally proposed by Peter Deutsch at MIT, was to engineer a computer specifically designed to run Lisp programs. These Lisp machines, as I described in my last post on Chaosnet, would give each user a dedicated processor optimized for Lisp. They would also eventually come with development environments written entirely in Lisp for hardcore Lisp programmers. Lisp machines, devised in an awkward moment at the tail of the minicomputer era but before the full flowering of the microcomputer revolution, were high-performance personal computers for the programming elite.

For a while, it seemed as if Lisp machines would be the wave of the future. Several companies sprang into existence and raced to commercialize the technology. The most successful of these companies was called Symbolics, founded by veterans of the MIT AI Lab. Throughout the 1980s, Symbolics produced a line of computers known as the 3600 series, which were popular in the AI field and in industries requiring high-powered computing. The 3600 series computers featured large screens, bit-mapped graphics, a mouse interface, and powerful graphics and animation software. These were impressive machines that enabled impressive programs. For example, Bob Culley, who worked in robotics research and contacted me via Twitter, was able to implement and visualize a path-finding algorithm on a Symbolics 3650 in 1985. He explained to me that bit-mapped graphics and object-oriented programming (available on Lisp machines via the Flavors extension) were very new in the 1980s. Symbolics was the cutting edge.

Bob Culley's path-finding program.

As a result, Symbolics machines were outrageously expensive. The Symbolics 3600 cost $110,000 in 1983. So most people could only marvel at the power of Lisp machines and the wizardry of their Lisp-writing operators from afar. But marvel they did. Byte Magazine featured Lisp and Lisp machines several times from 1979 through to the end of the 1980s. In the August 1979 issue, a special on Lisp, the magazine's editor raved about the new machines being developed at MIT with "gobs of memory" and "an advanced operating system." He thought they sounded so promising that they would make the two prior years (which saw the launch of the Apple II, the Commodore PET, and the TRS-80) look boring by comparison. A half decade later, in 1985, a Byte Magazine contributor described writing Lisp programs for the "sophisticated, superpowerful Symbolics 3670" and urged his audience to learn Lisp, claiming it was both "the language of choice for most people working in AI" and soon to be a general-purpose programming language as well.

I asked Paul McJones, who has done lots of Lisp preservation work for the Computer History Museum in Mountain View, about when people first began talking about Lisp as if it were a gift from higher-dimensional beings. He said that the inherent properties of the language no doubt had a lot to do with it, but he also said that the close association between Lisp and the powerful artificial intelligence applications of the 1960s and 1970s probably contributed too. When Lisp machines became available for purchase in the 1980s, a few more people outside of places like MIT and Stanford were exposed to Lisp's power and the legend grew. Today, Lisp machines and Symbolics are little remembered, but they helped keep the mystique of Lisp alive through to the late 1980s.

Theory C: Learn to Program

In 1985, MIT professors Harold Abelson and Gerald Sussman, along with Sussman's wife, Julie Sussman, published a textbook called Structure and Interpretation of Computer Programs. The textbook introduced readers to programming using the language Scheme, a dialect of Lisp. It was used to teach MIT's introductory programming class for two decades. My hunch is that SICP (as the title is commonly abbreviated) about doubled Lisp's "mystique factor." SICP took Lisp and showed how it could be used to illustrate deep, almost philosophical concepts in the art of computer programming. Those concepts were general enough that any language could have been used, but SICP's authors chose Lisp. As a result, Lisp's reputation was augmented by the notoriety of this bizarre and brilliant book, which has intrigued generations of programmers (and also become a very strange meme). Lisp had always been "McCarthy's elegant formalism"; now it was also "that language that teaches you the hidden secrets of programming."

It's worth dwelling for a while on how weird SICP really is, because I think the book's weirdness and Lisp's weirdness get conflated today. The weirdness starts with the book's cover. It depicts a wizard or alchemist approaching a table, prepared to perform some sort of sorcery. In one hand he holds a set of calipers or a compass, in the other he holds a globe inscribed with the words "eval" and "apply." A woman opposite him gestures at the table; in the background, the Greek letter lambda floats in mid-air, radiating light.

The cover art for SICP.

Honestly, what is going on here? Why does the table have animal feet? Why is the woman gesturing at the table? What is the significance of the inkwell? Are we supposed to conclude that the wizard has unlocked the hidden mysteries of the universe, and that those mysteries consist of the "eval/apply" loop and the Lambda Calculus? It would seem so. This image alone must have done an enormous amount to shape how people talk about Lisp today.

But the text of the book itself is often just as weird. SICP is unlike most other computer science textbooks that you have ever read. Its authors explain in the foreword that the book is not merely about how to program in Lisp; it is instead about "three foci of phenomena: the human mind, collections of computer programs, and the computer." Later, they elaborate, describing their conviction that programming shouldn't be considered a discipline of computer science but instead should be considered a new notation for "procedural epistemology." Programs are a new way of structuring thought that only incidentally get fed into computers. The first chapter of the book gives a brief tour of Lisp, but most of the book after that point is about much more abstract concepts. There is a discussion of different programming paradigms, a discussion of the nature of "time" and "identity" in object-oriented systems, and at one point a discussion of how synchronization problems may arise because of fundamental constraints on communication that play a role akin to the fixed speed of light in the theory of relativity. It's heady stuff.

All this isn't to say that the book is bad. It's a wonderful book. It discusses important programming concepts at a higher level than anything else I have read, concepts that I had long wondered about but didn't quite have the language to describe. It's impressive that an introductory programming textbook can move so quickly to describing the fundamental shortfalls of object-oriented programming and the benefits of functional languages that minimize mutable state. It's mind-blowing that this then turns into a discussion of how a stream paradigm, perhaps something like today's RxJS, can give you the best of both worlds. SICP distills the essence of high-level program design in a way reminiscent of McCarthy's original Lisp paper. The first thing you want to do after reading it is get your programmer friends to read it; if they look it up, see the cover, but then don't read it, all they take away is that some mysterious, fundamental "eval/apply" thing gives magicians special powers over tables with animal feet. I would be deeply impressed in their shoes too.
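
The stream idea mentioned above can be hinted at with Python generators, which, like SICP's streams, describe a whole (even infinite) sequence functionally but compute elements only on demand:

```python
from itertools import count, islice

# An infinite stream of squares, described declaratively: nothing is
# computed until a consumer asks for elements.
squares = (n * n for n in count(1))

# Pull the first five elements on demand.
print(list(islice(squares, 5)))  # [1, 4, 9, 16, 25]
```

This is only an analogy, not SICP's Scheme stream machinery, but it captures the reconciliation the book describes: the definition reads like pure functional code, while evaluation stays incremental and efficient.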

But maybe SICP's most important contribution was to elevate Lisp from curious oddity to pedagogical must-have. Well before SICP, people told each other to learn Lisp as a way of getting better at programming. The 1979 Lisp issue of Byte Magazine is testament to that fact. The same editor that raved about MIT's new Lisp machines also explained that the language was worth learning because it "represents a different point of view from which to analyze problems." But SICP presented Lisp as more than just a foil for other languages; SICP used Lisp as an introductory language, implicitly making the argument that Lisp is the best language in which to grasp the fundamentals of computer programming. When programmers today tell each other to try Lisp before they die, they arguably do so in large part because of SICP. After all, the language Brainfuck presumably offers "a different point of view from which to analyze problems." But people learn Lisp instead because they know that, for twenty years or so, the Lisp point of view was thought to be so useful that MIT taught Lisp to undergraduates before anything else.

Lisp Comes Back

The same year that SICP was released, Bjarne Stroustrup published the first edition of The C++ Programming Language, which brought object-oriented programming to the masses. A few years later, the market for Lisp machines collapsed and the AI winter began. For the next decade and change, C++ and then Java would be the languages of the future and Lisp would be left out in the cold.

It is of course impossible to pinpoint when people started getting excited about Lisp again. But that may have happened after Paul Graham, Y Combinator co-founder and Hacker News creator, published a series of influential essays pushing Lisp as the best language for startups. In his essay "Beating the Averages," for example, Graham argued that Lisp macros simply made Lisp more powerful than other languages. He claimed that using Lisp at his own startup, Viaweb, helped him develop features faster than his competitors were able to. Some programmers at least were persuaded. But the vast majority of programmers did not switch to Lisp.

What happened instead is that more and more Lisp-y features have been incorporated into everyone's favorite programming languages. Python got list comprehensions. C# got LINQ. Ruby got… well, Ruby is a Lisp. As Graham noted even back in 2001, "the default language, embodied in a succession of popular languages, has gradually evolved toward Lisp." Though other languages are gradually becoming like Lisp, Lisp itself somehow manages to retain its special reputation as that mysterious language that few people understand but everybody should learn. In 1980, on the occasion of Lisp's 20th anniversary, McCarthy wrote that Lisp had survived as long as it had because it occupied "some kind of approximate local optimum in the space of programming languages." That understates Lisp's real influence. Lisp hasn't survived for over half a century because programmers have begrudgingly conceded that it is the best tool for the job decade after decade; in fact, it has survived even though most programmers do not use it at all. Thanks to its origins and use in artificial intelligence research, and perhaps also the legacy of SICP, Lisp continues to fascinate people. Until we can imagine God creating the world with some newer language, Lisp isn't going anywhere.



Integrating NVMe Disks in HopsFS (HDFS)


Published by dowlingj on

Datasets used for deep learning may reach millions of files. The well-known image dataset ImageNet contains 1m images, and its successor, the Open Images dataset, has over 9m images. Google and Facebook have published papers on datasets with 300m and 2bn images, respectively. Typically, developers want to store and access these datasets as image files, stored in a distributed file system. However, according to Uber, there's a problem:

"multiple round-trips to the filesystem are costly. It is hard to implement at large scale, especially using modern distributed file systems such as HDFS and S3 (these systems are typically optimized for fast reads of large chunks of data)."

Uber's proposed solution is to pack image files into larger Apache Parquet files. Parquet is a columnar database file format, and thousands of individual image files can be packed into a single Parquet file, typically 64-256MB in size. For many image and NLP datasets, however, this introduces costly complexity. Existing tools for processing/viewing/indexing files/text need to be rewritten. An alternative approach would be to use HopsFS.
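
The packing idea is simple to sketch. Uber's real pipeline uses Parquet; the stdlib-only toy below (with invented function names) just shows why packing helps: many small records live in one blob with an offset index, so a read is a single slice instead of one filesystem round-trip per file.

```python
# A toy version of the "pack many small files into one big file" idea.
def pack(files):
    """files: dict of name -> bytes. Returns (blob, index)."""
    blob, index, offset = bytearray(), {}, 0
    for name, data in files.items():
        index[name] = (offset, len(data))  # where each record lives
        blob.extend(data)
        offset += len(data)
    return bytes(blob), index

def read_record(blob, index, name):
    # One lookup and one slice -- no per-file filesystem round-trip.
    offset, length = index[name]
    return blob[offset:offset + length]

blob, idx = pack({"img1.jpg": b"JPEG1", "img2.jpg": b"JPEG2"})
print(read_record(blob, idx, "img2.jpg"))  # b'JPEG2'
```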

HopsFS solves this problem by transparently integrating NVMe disks into its HDFS-compatible file system; see our peer-reviewed paper to be published at ACM Middleware 2018. HDFS (and S3) are designed around large blocks (optimized to overcome slow random I/O on disks), while new NVMe hardware supports fast random disk I/O (and potentially small block sizes). However, as NVMe disks are still expensive, it would be prohibitively expensive to store tens of terabytes or petabyte-sized datasets on NVMe hardware alone. In Hops, our hybrid solution stores files smaller than a configurable threshold (default: 64KB, configurable up to around 1MB) on NVMe disks in our metadata layer. On top of this, files under a smaller threshold, typically 1KB, are stored replicated in-memory in the metadata layer (due to their minimal overhead). This design choice was informed by our collaboration with Spotify, where we observed that most of their filesystem operations are on small files (≈64% of file read operations are performed on files less than 16 KB). Similar file size distributions have been reported at Yahoo!, Facebook, and others.
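
The hybrid placement policy described above boils down to a size-based decision rule. A sketch, using the default thresholds quoted in the text (the function and tier names are mine, not HopsFS configuration keys):

```python
# Where a file lands under the tiered policy described above:
# <= 1 KB -> replicated in memory in the metadata layer,
# <= 64 KB (configurable) -> NVMe disks in the metadata layer,
# everything larger -> ordinary large-block datanode storage.
IN_MEMORY_MAX = 1 * 1024    # 1 KB default
NVME_MAX = 64 * 1024        # 64 KB default, raisable to ~1 MB

def placement(size_bytes):
    if size_bytes <= IN_MEMORY_MAX:
        return "in-memory (metadata layer)"
    if size_bytes <= NVME_MAX:
        return "NVMe (metadata layer)"
    return "datanodes (large-block storage)"

for size in (512, 16 * 1024, 10 * 1024 * 1024):
    print(size, "->", placement(size))
```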

The result is that when clients read and write small files, they can do so with an order of magnitude higher throughput (number of files per second) and with massively reduced latency (>90% of file writes on the Spotify workload completed in less than 10ms, compared to >100ms for Apache HDFS). Remarkably, the small files stored at the metadata layer in HopsFS are not cached; they are replicated at more than one host. That is, the scale-out metadata layer in HopsFS can be scaled out like a key-value store and provide file read/write performance for small files comparable with get/put performance for a modern key-value store.

NVMe Disk Performance

As we can see from Google Cloud disk performance figures, NVMe disks support more than two orders of magnitude more IOPS than magnetic disks, and over one order of magnitude more IOPS than standard SATA SSD disks.

Key-Value Store Performance for Small Files

In our Middleware paper, we observed up to 66X throughput improvements for writing small files and up to 4.5X throughput improvements for reading small files, compared to HDFS. For latency, operational latencies on Spotify's Hadoop workload were 7.39 times lower for writing small files and 3.15 times lower for reading small files. For real-world datasets, like the Open Images 9m-image dataset, we saw 4.5X improvements for reading files and 5.9X improvements for writing files. These figures were generated using only 6 NVMe disks, and we are confident that performance will scale to much higher numbers with more NVMe disks.

We also discuss in the paper how we solved the problem of maintaining full HDFS compatibility: the changes for handling small files do not break HopsFS' compatibility with HDFS clients. We also address the problem of migrating data between different storage types: when the size of a small file that is stored in the metadata layer exceeds the threshold, the file is reliably and safely moved to the HopsFS datanodes.

Running HopsFS (on-premise or in the cloud)

You can already benefit from our small files solution for HopsFS, which has been available since HopsFS 2.8.2, released in 2017. We have been running www.hops.site using small files since October 2017, and we are very happy with its stability in production.

As of early 2018, NVMe disks are available at Google Cloud, AWS, and Azure. Logical Clocks can help with providing support for running HopsFS in the cloud, including running HopsFS in an availability-zone fault-tolerant configuration, available only in Enterprise Hops.



Atmospheric railway

From Wikipedia, the free encyclopedia

Aeromovel train at Taman Mini Indonesia Indah, Jakarta, Indonesia, opened in 1989. The girder under the train forms an air duct. The vehicle is connected to a propulsion plate in the duct which is then driven by air pressure.

An atmospheric railway uses differential air pressure to provide power for propulsion of a railway vehicle. A static power source can transmit motive power to the vehicle in this way, avoiding the necessity of carrying mobile power generating equipment. The air pressure, or partial vacuum (i.e. negative relative pressure) can be conveyed to the vehicle in a continuous pipe, where the vehicle carries a piston running in the tube. Some form of re-sealable slot is required to enable the piston to be attached to the vehicle. Alternatively the entire vehicle may act as the piston in a large tube.

Several variants of the principle were proposed in the early 19th century, and a number of practical forms were implemented, but all were beset by unforeseen disadvantages and discontinued within a few years.

A modern proprietary system has been developed and is in use for short-distance applications; the Porto Alegre Metro airport connection is one example.

History

In the early days of railways, single vehicles or groups of vehicles were propelled by human power or by horses. As mechanical power came to be understood, locomotive engines were developed: the "iron horse." These had serious limitations: being much heavier than the wagons in use, they broke the rails, and adhesion at the iron-to-iron wheel-rail interface was a limitation, for example in trials on the Kilmarnock and Troon Railway.

Many engineers turned their attention to transmitting power from a static power source, a stationary engine, to a moving train. Such an engine could be more robust and with more available space, potentially more powerful. The solution to transmitting the power, before the days of practical electricity, was the use of either a cable system or air pressure.

Medhurst

In 1799 George Medhurst of London discussed the idea of moving goods pneumatically through cast iron pipes, and in 1812 he proposed blowing passenger carriages through a tunnel.[1]

Medhurst proposed two alternative systems: either the vehicle itself was the piston, or the tube was relatively small with a separate piston. He never patented his ideas and they were not taken further by him.[2]

19th century

Vallance

In 1824 a man called Vallance took out a patent and built a short demonstration line; his system consisted of a 6-ft diameter cast iron tube with rails cast into the lower part; the vehicle was the full size of the tube, and bearskin was used to seal the annular space. To slow the vehicle down, doors were opened at each end of the vehicle. Vallance's system worked, but was not adopted commercially.[2]

Pinkus

Arriving at Kingstown on the Dalkey Atmospheric Railway in 1844

In 1835 Henry Pinkus patented a system with a large (9 sq ft) square section tube with a low degree of vacuum, limiting leakage loss.[3] He later changed to a small-bore vacuum tube. He proposed to seal the slot that enabled the piston to connect with the vehicle with a continuous rope; rollers on the vehicle lifted the rope in front of the piston connection and returned it afterwards.

He built a demonstration line alongside the Kensington Canal, and issued a prospectus for his National Pneumatic Railway Association. He was unable to interest investors, and his system failed when the rope stretched. However, his concept, a small-bore pipe with a resealable slot, was the prototype for many successor systems.[2]

Samuda and Clegg

Developing a practical scheme

Jacob and Joseph Samuda were shipbuilders and engineers, and owned the Southwark Ironworks; they were both members of the Institution of Civil Engineers. Samuel Clegg was a gas engineer and they worked in collaboration on their atmospheric system. About 1835 they read Medhurst's writings, and developed a small bore vacuum pipe system. Clegg worked on a longitudinal flap valve, for sealing the slot in the pipe.

In 1838 they took out a patent "for a new improvement in valves" and built a full-scale model at Southwark. In 1840 Jacob Samuda and Clegg leased half a mile of railway line on the West London Railway at Wormholt Scrubs (later renamed Wormwood Scrubs), where the railway had not yet been opened to the public. In that year Clegg left for Portugal, where he was pursuing his career in the gas industry.

Samuda's system involved a continuous (jointed) cast iron pipe laid between the rails of a railway track; the pipe had a slot in the top. The leading vehicle in a train was a piston carriage, which carried a piston inserted in the tube. It was held by a bracket system that passed through the slot, and the actual piston was on a pole ahead of the point at which the bracket left the slot. The slot was sealed from the atmosphere by a continuous leather flap that was opened immediately in advance of the piston bracket and closed again immediately behind it. A pumping station ahead of the train would pump air from the tube, and air pressure behind the piston would push it forward.

The Wormwood Scrubs demonstration ran for two years. The traction pipe was of 9 inches diameter, and a 16 hp stationary engine was used for power. The gradient on the line was a steady 1 in 115. In his treatise, described below, Samuda implies that the pipe would be used in one direction only, and the fact that only one pumping station was erected suggests that trains were gravitated back to the lower end of the run after the atmospheric ascent, as was later done on the Dalkey line (below). Many of the runs were public. Samuda quotes the loads, degree of vacuum, and speed of some of the runs; there seems to be little correlation; for example:

  • 11 June 1840: 11 tons 10 cwt; maximum speed 22.5 mph; 15 inches of vacuum
  • 10 August 1840: 5 tons 0 cwt; maximum speed 30 mph; 20 inches of vacuum.[4]
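
Samuda's quoted figures allow a rough estimate of the tractive force on the piston: pressure differential times piston area. This is a back-of-envelope calculation from the numbers above, not a figure from the treatise:

```python
import math

# Tractive force from the 11 June 1840 run: a 9-inch traction pipe
# and 15 inches of mercury of vacuum.
PIPE_DIAMETER_M = 9 * 0.0254   # 9 in, converted to metres
IN_HG_TO_PA = 3386.39          # pascals per inch of mercury

area = math.pi * (PIPE_DIAMETER_M / 2) ** 2  # piston area, m^2
dp = 15 * IN_HG_TO_PA                        # pressure differential, Pa
force = dp * area                            # force on the piston, N

print(f"piston area ~ {area:.4f} m^2, force ~ {force / 1000:.2f} kN")
# roughly 2 kN, i.e. about 0.2 tonne-force pushing the train
```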

Competing solutions

There was enormous public interest in the ideas surrounding atmospheric railways, and at the same time as Samuda was developing his scheme, other ideas were put forward. These included:

  • Nickels and Keane; they were to propel trains by pumping air into a continuous canvas tube; the train had a pair of pinch rollers squeezing the outside of the tube, and the air pressure forced the vehicle forward; the effect was the converse of squeezing a toothpaste tube. They claimed a successful demonstration in a timber yard in Waterloo Road.
  • James Pilbrow; he proposed a loose piston fitted with a toothed rack; cog wheels would be turned by it, and they were on spindles passing through glands to the outside of the tube; the leading carriage of the train would have a corresponding rack and be impelled forward by the rotation of the cog wheels. Thus the vehicle would keep pace with the piston exactly, without any direct connection to it.
  • Henry Lacey conceived a wooden tube, made by barrelmakers as a long, continuous barrel with the opening slot and a timber flap retained by an india-rubber hinge;
  • Clarke and Varley proposed sheet iron tubes with a continuous longitudinal slit. If the tubes were made to precision standards, the vacuum would keep the slit closed, but the piston bracket on the train would spring the slit open enough to pass; the elasticity of the tube would close it again behind the piston carriage.
  • Joseph Shuttleworth suggested a hydraulic tube: water pressure, rather than a partial atmospheric vacuum, would propel the train. In mountainous areas where plentiful water was available, a pumping station would be unnecessary: the water would be used directly. Instead of the flap to seal the slot in the tube, a continuous shaped sealing rope, made of cloth impregnated with india-rubber, would sit within the pipe. Guides on the piston would lift it into position and the water pressure would hold it in place behind the train. Use of positive pressure enabled a greater pressure differential than a vacuum system. However, the water in the pipe would have to be drained manually by staff along the pipe after every train.

Samuda's treatise

Illustration from A Treatise on the Adaptation of Atmospheric Pressure to the Purposes of Locomotion on Railways, Samuda

In 1841 Joseph Samuda published A Treatise on the Adaptation of Atmospheric Pressure to the Purposes of Locomotion on Railways.[4]

It ran to 50 pages, and Samuda described his system; first the traction pipe:

The moving power is communicated to the train through a continuous pipe or main, laid between the rails, which is exhausted by air pumps worked by stationary steam engines, fixed on the road side, the distance between them varying from one to three miles, according to the nature and traffic of the road. A piston, which is introduced into this pipe, is attached to the leading carriage in each train, through a lateral opening, and is made to travel forward by means of the exhaustion created in front of it. The continuous pipe is fixed between the rails and bolted to the sleepers which carry them; the inside of the tube is unbored, but lined or coated with tallow 1/10th of an inch thick, to equalize the surface and prevent any unnecessary friction from the passage of the travelling piston through it.

The operation of the closure valve was to be critical:

Along the upper surface of the pipe is a continuous slit or groove about two inches wide. This groove is covered by a valve, extending the whole length of the railway, formed of a strip of leather riveted between iron plates, the top plates being wider than the groove and serving to prevent the external air forcing the leather into the pipe when the vacuum is formed within it; and the lower plates fitting into the groove when the valve is shut, makes up the circle of the pipe, and prevents the air from passing the piston; one edge of this valve is securely held down by iron bars, fastened by screw bolts to a longitudinal rib cast on the pipe, and allows the leather between the plates and the bar to act as a hinge, similar to a common pump valve; the other edge of the valve falls into a groove which contains a composition of beeswax and tallow: this composition is solid at the temperature of the atmosphere, and becomes fluid when heated a few degrees above it. Over this valve is a protecting cover, which serves to preserve it from snow or rain, formed of thin plates of iron about five feet long hinged with leather, and the end of each plate underlaps the next in the direction of the piston's motion,[note 1] thus ensuring the lifting of each in succession.

The piston carriage would open and then close the valve:

To the underside of the first carriage in each train is attached the piston and its appurtenances; a rod passing horizontally from the piston is attached to a connecting arm, about six feet behind the piston. This connecting arm passes through the continuous groove in the pipe, and being fixed to the carriage, imparts motion to the train as the tube becomes exhausted; to the piston rod are also attached four steel wheels, (two in advance and two behind the connecting arm,) which serve to lift the valve, and form a space for the passage of the connecting arm, and also for the admission of air to the back of the piston; another steel wheel is attached to the carriage, regulated by a spring, which serves to ensure the perfect closing of the valve, by running over the top plates immediately after the arm has passed. A copper tube or heater, about ten feet long, constantly kept hot by a small stove, also fixed to the underside of the carriage, passes over and melts the surface of the composition (which has been broken by lifting the valve,) which upon cooling becomes solid, and hermetically seals the valve. Thus each train in passing leaves the pipe in a fit state to receive the next train.

Entering and leaving the pipe was described:

The continuous pipe is divided into suitable sections (according to the respective distance of the fixed steam engines) by separating valves, which are opened by the train as it goes along: these valves are so constructed that no stoppage or diminution of speed is necessary in passing from one section to another. The exit separating valve, or that at the end of the section nearest to its steam engine, is opened by the compression of air in front of the piston, which necessarily takes place after it has passed the branch which communicates with the air-pump; the entrance separating valve, (that near the commencement of the next section of pipe,) is an equilibrium or balance valve, and opens immediately the piston has entered the pipe. The main pipe is put together with deep socket joints, in each of which an annular space is left about the middle of the packing, and filled with a semi-fluid: thus any possible leakage of air into the pipe is prevented.[5]

At that time railways were developing rapidly; solutions to the technical limitations of the day were eagerly sought, and not always rationally evaluated. Samuda's treatise put forward the advantages of his system:

  • transmission of power to trains from static (atmospheric) power stations; the static machinery could be more fuel efficient;
  • the train would be relieved of the necessity of carrying the power source, and fuel, with it;
  • power available to the train would be greater so that steeper gradients could be negotiated; in building new lines this would hugely reduce construction costs by reducing earthworks and tunnels;
  • elimination of a heavy locomotive from the train would enable lighter and cheaper track materials to be used;
  • passengers, and lineside residents, would be spared the nuisance of smoke emission from passing trains; this would be especially useful in tunnels;
  • collisions between trains would be impossible, because only one train at a time could be handled on any section between two pumping stations; collisions were at the forefront of the mind of the general public in those days before modern signalling systems, when a train was permitted to follow a preceding train after a defined time interval, with no means of detecting whether that train had stalled somewhere ahead on the line;
  • the piston travelling in the tube would hold the piston carriage down and, Samuda claimed, prevent derailments, enabling curves to be negotiated safely at high speed;
  • persons on the railway would not be subjected to the risk of steam engine boiler explosions (then a very real possibility[2]).

Samuda also rebutted criticisms of his system that had obviously become widespread:

  • that if a pumping station failed the whole line would be closed because no train could pass that point; Samuda explained that a pipe arrangement would enable the next pumping station ahead to supply that section; if this was at reduced pressure, the train would nonetheless be able to pass, albeit with a small loss of time;
  • that leakage of air at the flap or the pipe joints would critically weaken the vacuum effect; Samuda pointed to experience and test results on his demonstration line, where this was evidently not a problem;
  • the capital cost of the engine houses was a huge burden; Samuda observed that the capital cost of steam locomotives was eliminated, and running costs for fuel and maintenance could be expected to be lower.[4]

A patent

In April 1844 Jacob and Joseph Samuda took out a patent for their system. Soon after this Jacob Samuda died, and it was left to his brother Joseph to continue the work. The patent was in three parts: the first describing the atmospheric pipe and piston system; the second describing how, in areas of plentiful water supply, the vacuum might be created by using tanks of water at differing levels; and the third dealing with level crossings of an atmospheric railway.[2]

Dalkey Atmospheric Railway

The Dublin and Kingstown Railway opened in 1834 connecting the port of Dún Laoghaire (then called Kingstown) to Dublin; it was a standard gauge line. In 1840 it was desired to extend the line to Dalkey, a distance of about two miles. A horse tramway on the route was acquired and converted: it had been used to bring stone from a quarry for the construction of the harbour. It was steeply graded (at 1 in 115 with a 440-yard stretch of 1 in 57) and heavily curved, the sharpest curve being of 570 yards radius. This presented significant difficulties to the locomotives then in use. The treasurer of the company, James Pim, was visiting London and, hearing of Samuda's project, he viewed it. He considered it perfect for the requirements of his company, and after petitioning the government for a loan of £26,000,[6] it was agreed to install it on the Dalkey line. Thus the line became the Dalkey Atmospheric Railway.

A 15-inch traction pipe was used, with a single pumping station at Dalkey, at the upper end of the 2,400-yard run. The engine created 110 ihp and had a flywheel of 36 feet diameter. Five minutes before the scheduled departure of a train from Kingstown, the pumping engine started work, creating a 15-inch vacuum in two minutes. The train was pushed manually to the position where the piston entered the pipe, and the train was held on the brakes until it was ready to start. When that time came, the brakes were released and the train moved off. (The electric telegraph was later installed, obviating reliance on the timetable for engine operation.)
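These figures can be cross-checked with a short calculation: a 15-inch vacuum (15 inches of mercury) acting across a 15-inch piston gives the approximate pull available at the piston. The sketch below is a rough estimate only, ignoring leakage and friction; the conversion factor for inches of mercury is standard.

```python
import math

PSI_PER_INHG = 0.4912        # pressure exerted by one inch of mercury, lbf/sq in

pipe_diameter_in = 15.0      # Dalkey traction pipe diameter
vacuum_inhg = 15.0           # vacuum reported after two minutes' pumping

piston_area = math.pi * (pipe_diameter_in / 2) ** 2      # sq in
pressure_diff = vacuum_inhg * PSI_PER_INHG               # lbf/sq in
tractive_force_lbf = piston_area * pressure_diff

print(f"piston area    : {piston_area:.0f} sq in")
print(f"tractive force : {tractive_force_lbf:.0f} lbf "
      f"(about {tractive_force_lbf / 2240:.2f} long tons)")
```

This works out at roughly 1,300 lbf, or a little over half a long ton of force, which is consistent with moving a light train up the modest Dalkey gradients.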

On 17 August 1843 the tube was exhausted for the first time, and the following day a trial run was made. On Saturday 19 August the line was opened to the public.[note 2] In service a typical speed of 30 mph was attained; return to Kingstown was by gravitation down the gradient, and slower. By March 1844, 35 train movements operated daily, and 4,500 passengers a week travelled on the line, mostly simply for the novelty.

It is recorded that a young man called Frank Elrington was on one occasion on the piston carriage, which was not attached to the train. On releasing the brake, the light vehicle shot off at high speed, covering the distance in 75 seconds, averaging 65 mph.
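The reported speed is consistent with the length of the line: 2,400 yards covered in 75 seconds does average about 65 mph. A one-line check, assuming the runaway trip covered the full length of the run:

```python
distance_yd = 2400   # length of the Dalkey atmospheric run
time_s = 75          # reported duration of Elrington's runaway trip

speed_mph = (distance_yd / 1760) / (time_s / 3600)
print(f"average speed: {speed_mph:.1f} mph")
```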

As this was the first commercially operating atmospheric railway, it attracted the attention of many eminent engineers of the day, including Isambard Kingdom Brunel, Robert Stephenson, and Sir William Cubitt.[2][7]

The line continued to operate successfully for ten years, outliving the atmospheric systems on the British lines, although the Paris - St Germain line continued until 1860.[8]

When the system was abolished in 1855 a 2-2-2 steam locomotive called Princess was employed, incidentally the first steam engine to be manufactured in Ireland. Although a puny machine, it successfully worked the steeply graded line for some years.[2]

Paris - Saint Germain

Saint Germain piston carriage

In 1835 the brothers Pereire obtained a concession for the Compagnie du Chemin de fer de Paris à Saint-Germain. They opened their 19 km line in 1837, but only as far as Le Pecq, a river quay on the left bank of the Seine, as a daunting incline would have been necessary to reach Saint-Germain-en-Laye, and locomotives of the day were considered incapable of climbing the necessary gradient, adhesion being considered the limiting factor.

On hearing of the success of the Dalkey railway, the French minister of public works (M. Teste) and under-secretary of state (M. Le Grande) dispatched M. Mallet,[note 3] inspecteur général honoraire des Ponts et Chaussées, to Dalkey. He wrote an exhaustive technical evaluation of the system installed there, and its potential, which included the results of measurements made with Joseph Samuda.[3][6][9]

It was through his interest that the Pereire brothers decided to adopt the system for an extension to St Germain itself, and construction started in 1845, with a wooden bridge crossing the Seine followed by a twenty-arch masonry viaduct and two tunnels under the castle. The extension was opened on 15 April 1847; it was 1.5 km in length on a gradient of 1 in 28 (35 mm/m).

The traction pipe was laid between the rails; it had a diameter of 63 cm (25 inches) with a slot at the top. The slot was closed by two leather flaps. The pumps were powered by two steam engines with a capacity of 200 hp, located between the two tunnels at Saint-Germain. Train speed on the ascent was 35 km/h (22 mph). On the descent the train ran by gravity as far as Pecq, where the steam locomotive took over for the run to Paris.
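A back-of-the-envelope calculation shows why atmospheric traction suited such a gradient. On a 1-in-28 incline the grade resistance is 1/28 of the train's weight; taking the working vacuum as 15 inches of mercury (an assumption borrowed from the Dalkey figures, since the value here is not quoted), the 25-inch piston alone could balance a train of over 40 long tons on the bank, before allowing for rolling resistance or leakage:

```python
import math

PSI_PER_INHG = 0.4912
LBF_PER_LONG_TON = 2240

piston_diameter_in = 25.0    # Saint-Germain traction pipe (63 cm)
vacuum_inhg = 15.0           # assumed working vacuum
gradient = 1 / 28            # the incline to Saint-Germain (about 35.7 mm/m)

force_lbf = math.pi * (piston_diameter_in / 2) ** 2 * vacuum_inhg * PSI_PER_INHG
grade_resistance = gradient * LBF_PER_LONG_TON   # lbf per long ton of train weight

# Load the pressure difference alone could hold stationary on the incline,
# ignoring rolling resistance, curve resistance and leakage:
balanced_load_tons = force_lbf / grade_resistance

print(f"piston force  : {force_lbf:.0f} lbf")
print(f"balanced load : {balanced_load_tons:.0f} long tons on 1 in 28")
```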

The system was technically successful, but the development of more powerful steam locomotives led to its abandonment from 3 July 1860, when steam locomotives ran throughout from Paris to St Germain, assisted by a pusher locomotive up the gradient. This arrangement continued for more than sixty years until the electrification of the line.[10]

A correspondent of the Ohio State Journal described some details; there seem to have been two tube sections:

An iron tube is laid down in the centre of the track, which is sunk about one-third of its diameter in the bed of the road. For a distance of 5,500 yards the tube has a diameter of only 1¾ feet [i.e. 21 inches], the ascent here being so slight as not to require the same amount of force as is required on the steep grade to St Germain, where the pipe, for a distance of 3,800 yards, is 2 feet 1 inch [i.e. 25 inches] in diameter.

The steam engines had accumulators:

To each engine is adapted two large cylinders, which exhaust fourteen cubic feet of air per second. The pressure in the air cauldron (chaudières) attached to the exhausting machines is equal to six absolute atmospheres.

He described the valve:

Throughout the entire length of the tube, a section is made in the top, leaving an open space of about five inches. In each cut edge of the section there is an offset, to catch the edges of a valve which fits down upon it. The valve is made of a piece of sole leather half an inch thick, having plates of iron attached to it on both the upper and corresponding under side to give it strength ... which are perhaps one-fourth of an inch in thickness ... The plates are about nine inches long, and their ends, above and below, are placed three quarters of an inch apart, forming joints, so as to give the leather valve pliability, and at the same time firmness.[11]

Clayton records the name of the engineer, Mallet, who had been Inspector general of Public Works, and gives a slightly different account: Clayton says that Mallet used a plaited rope to seal the slot. He also says that vacuum was created by condensing steam in a vacuum chamber between runs, but that may have been a misunderstanding of the pressure accumulators.[2]
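Taken at face value, the correspondent's figures allow a rough pump-down estimate for the pipe on the steep section. For an ideal constant-displacement pump, isothermal pump-down time follows t = (V/Q)·ln(p0/p). The target of half an atmosphere (roughly a 15-inch vacuum) is an assumption, as is the reading of the quoted rate, shown below both as a total and as a per-cylinder figure for all four cylinders working together:

```python
import math

# Steep-section pipe, from the correspondent's figures.
length_ft = 3800 * 3          # 3,800 yards
diameter_ft = 25 / 12         # 2 ft 1 in
volume_cuft = math.pi * (diameter_ft / 2) ** 2 * length_ft

pressure_ratio = 2.0          # pump down to half an atmosphere (assumed)

for rate_cuft_s in (14.0, 4 * 14.0):
    t_s = (volume_cuft / rate_cuft_s) * math.log(pressure_ratio)
    print(f"{rate_cuft_s:4.0f} cu ft/s -> about {t_s / 60:.0f} minutes to exhaust")
```

The second reading, about eight minutes, sits much closer to the five minutes of advance pumping described for Dalkey, which hints that the quoted rate was per cylinder.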

London and Croydon Railway

A steam railway at first

Jolly-sailor station on the London and Croydon Railway in 1845, showing the pumping station, and the locomotive-less train

The London and Croydon Railway (L&CR) obtained its authorising Act of Parliament in 1835, to build its line from a junction with the London and Greenwich Railway (L&GR) to Croydon. At that time the L&GR line was under construction, and Parliament resisted the building of two railway termini in the same quarter of London, so that the L&CR would have to share the L&GR's London Bridge station. The line was built for ordinary locomotive operation. A third company, the London and Brighton Railway (L&BR) was promoted and it too had to share the route into London by running over the L&CR.

When the lines opened in 1839 it was found that congestion arose due to the frequent stopping services on the local Croydon line; this was particularly a problem on the 1 in 100 ascent from New Cross to Dartmouth Arms.[3] The L&CR engineer, William Cubitt, proposed a solution to the problem: a third track would be laid on the east side of the existing double track main line, and all the local trains in both directions would use it. The faster Brighton trains would be freed of the delay caused by following a stopping train. Cubitt had been impressed during his visit to the Dalkey line, and the new L&CR third track would use atmospheric power. The local line would also be extended to Epsom, also as a single track atmospheric line. These arrangements were adopted and Parliamentary powers obtained on 4 July 1843, also authorising a line to a terminal at Bricklayers Arms. Arrangements were also made with the L&GR for them to add an extra track on the common section of their route. On 1 May 1844 the Bricklayers Arms terminus opened, and a frequent service was run from it, additional to the London Bridge trains.[2][3][12]

Now atmospheric as well

The L&CR line diverged to the south-west at Norwood Junction (then called Jolly Sailor, after an inn), and needed to cross the L&BR line. The atmospheric pipe made this impossible on the flat, and a flyover was constructed to enable the crossing: this was the first example in the railway world.[13] This was in the form of a wooden viaduct with approach gradients of 1 in 50. A similar flyover was to be built at Corbetts Lane Junction, where the L&CR additional line was to be on the north-east side of the existing line, but this was never made.

A 15-inch diameter traction pipe was installed between Forest Hill (then called Dartmouth Arms, also after a local inn) and West Croydon. Although Samuda supervised the installation of the atmospheric apparatus, a weather flap, a hinged iron plate that covered the leather slot valve in the Dalkey installation, was omitted. The L&CR had an Atmospheric Engineer, James Pearson. Maudslay, Sons and Field supplied the three 100 hp steam engines and pumps at Dartmouth Arms, Jolly Sailor and Croydon (later West Croydon), and elaborate engine houses had been erected for them. They were designed in a gothic style by W H Brakespear, and had tall chimneys which also exhausted the evacuated air at high level.[note 4]

A two-needle electric telegraph system was installed on the line, enabling station staff to indicate to the remote engine house that a train was ready to start.

This section, from Dartmouth Arms to Croydon, started operation on the atmospheric system in January 1846.

The traction pipe slot and the piston bracket were handed; that is, the slot closure flap was continuously hinged on one side, and the piston support bracket was cranked to minimise the necessary opening of the flap. This meant that the piston carriage could not simply be turned on a turntable at the end of a trip. Instead it was double ended, but the piston had to be manually transferred to the new leading end, and the piston carriage itself had to be moved manually (or by horse power) to the leading end of the train. At Dartmouth Arms the station platform was an island between the two steam operated lines. Cubitt designed a special system of pointwork that enabled the atmospheric piston carriage to enter the ordinary track.[note 5]

The Board of Trade inspector, General Pasley, visited the line on 1 November 1845 to approve it for opening of the whole line. The Times newspaper reported the event; a special train left London Bridge hauled by a steam locomotive; at Forest Hill the locomotive was detached and:

the piston carriage substituted and the train thence became actuated by atmospheric pressure. The train consisted of ten carriages (including that to which the piston is attached) and its weight was upward of fifty tons. At seven and a half minutes past two the train left the point of rest at the Dartmouth Arms, and at eight and three-quarter minutes past, the piston entered the valve,[note 6] when it immediately occurred to us that one striking advantage of the system was the gentle, the almost imperceptible, motion on starting. On quitting the station on locomotive lines we have frequently experienced a "jerk" amounting at times to an absolute "shock" and sufficient to alarm the nervous and timid passenger. Nothing of the sort, however, was experienced here. Within a minute and a quarter of the piston entering the pipe, the speed attained against a strong headwind was at the rate of twelve miles an hour; in the next minute, viz. at eleven minutes past two, twenty-five miles an hour; at thirteen minutes past two, thirty-four miles an hour; fourteen minutes past two, forty miles an hour; and fifteen minutes past two, fifty-two miles an hour, which was maintained until sixteen minutes past two, when the speed began to diminish, and at seventeen and a half minutes past two, the train reached the Croydon terminus, thus performing the journey from Dartmouth Arms, five miles, in eight minutes and three-quarters. The barometer in the piston carriage indicated a vacuum of 25 inches and that in the engine house a vacuum of 28 inches.[note 7][14]
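The Times account is internally consistent: five miles in eight and three-quarter minutes is an average of just over 34 mph, even with the peak of 52 mph held only briefly. The difference between the 25-inch vacuum read in the piston carriage and the 28 inches at the engine house reflects the pressure gradient needed to draw air along the pipe, plus leakage. A quick check of the average, using the figures as printed:

```python
distance_miles = 5.0     # Dartmouth Arms to Croydon
time_min = 8.75          # eight and three-quarter minutes

avg_mph = distance_miles / (time_min / 60)
print(f"average speed: {avg_mph:.1f} mph")
```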

The successful official public run was widely reported and immediately new schemes for long-distance railways on the atmospheric system were being promoted; the South Devon Railway's shares appreciated overnight.

Opening

Pasley's report of 8 November was favourable, and the line was clear to open. The directors hesitated, desiring to gain a little more experience beforehand. On 19 December 1845 the crankshaft of the Forest Hill stationary engine fractured, and the engine was unusable. However the part was quickly replaced and on 16 January 1846 the line opened.

At 11:00 that morning the crankshaft of one of the Croydon engines broke. Two engines had been provided, so traffic was able to continue using the other,[note 8] until at 7:20 p.m. that engine suffered the same fate. Repairs were again made, until on 10 February 1846 both the Croydon engines failed.

This was a bitter blow for the adherents of the atmospheric system; shortcomings in the manufacture of the stationary engines procured from a reputable engine-maker said nothing about the practicality of the atmospheric system itself, but as Samuda said to the Board:

"The public cannot discriminate (because it cannot know) the cause of the interruptions, and every irregularity is attributed to the atmospheric system."[15]

Two months later the beam of one of the Forest Hill engines fractured. At this time the directors were making plans for the Epsom extension; they quickly revised their intended purchase of engines from Maudslay, and invited tenders; Boulton and Watt of Birmingham were awarded the contract, their price having been considerably less than their competitors'.

Amalgamation

The London and Brighton Railway amalgamated with the L&CR on 6 July 1846, forming the London, Brighton and South Coast Railway (LB&SCR). For the time being the directors of the larger company continued with the L&CR's intentions to use the atmospheric system.

Technical difficulties

The summer of 1846 was exceptionally hot and dry, and serious difficulties with the traction pipe flap valve started to show themselves. It was essential to make a good seal when the leather flap was closed, and the weather conditions made the leather stiff. As for the tallow and beeswax compound that was supposed to seal the joint after every train, Samuda had originally said "this composition is solid at the temperature of the atmosphere, and becomes fluid when heated a few degrees above it"[4] and the hot weather had that effect. Samuda's original description of his system had included a metal weather valve that closed over the flap, but this had been omitted on the L&CR, exposing the valve to the weather, and also encouraging the ingestion of debris, including, an observer reported, a handkerchief dropped by a lady on to the track. Any debris lodging in the seating of the flap could only have reduced its effectiveness.

Moreover the tallow (that is, rendered animal fat) was attractive to the rat population; their bodies, drawn into the traction pipe when pumping began in the morning, told their own story. Delays became frequent, due to inability to create enough vacuum to move the trains, and stoppages on the steep approach inclines at the flyover were commonplace, and widely reported in the press.

The Directors now began to feel uneasy about the atmospheric system, and in particular the Epsom extension, which was to have three engines. In December 1846 they asked Boulton and Watt about cancelling the project, and were told that suspending the supply contract for a year would cost £2,300. The Directors agreed to this.

The winter of 1846/7 brought new meteorological difficulties: unusually cold weather made the leather flap stiff, and snow got into the tube[note 9] resulting in more cancellations of the atmospheric service. A track worker was killed in February 1847 while steam substitution was in operation. This was tragically unfortunate, but it had the effect of widespread reporting that the atmospheric was, yet again, non-operational.[16]

Sudden end

Through this long period, the Directors must have become less and less committed to pressing on with the atmospheric system, even as money was being spent on extending it towards London Bridge. (It opened from Dartmouth Arms to New Cross in January 1847, using gravitation northbound and the Dartmouth Arms pumping station southbound.) In a situation in which public confidence was important, the Directors could not express their doubts publicly, at least until a final decision had been taken. On 4 May 1847[17] the directors announced "that the Croydon Atmospheric pipes were pulled up and the plan abandoned".

The reason seems not to have been made public at once, but the trigger seems to have been the insistence of the Board of Trade inspector on a second junction at the divergence of the Brighton and Epsom lines. It is not clear what this refers to, and it may simply have been a rationalisation of the timing of a painful decision. Whatever the reason, there was to be no more atmospheric work on the LB&SCR.[2]

South Devon Railway

Getting authorisation

A section of the SDR's atmospheric railway pipe at Didcot Railway Centre

The Great Western Railway (GWR) and the Bristol and Exeter Railway working collaboratively had reached Exeter on 1 May 1844, with a broad gauge railway connecting the city to London. Interested parties in Devonshire considered it important to extend the connection to Plymouth, but the terrain posed considerable difficulties: there was high ground with no easy route through.

After considerable controversy, the South Devon Railway Company (SDR) obtained its Act of Parliament authorising a line, on 4 July 1844.

Determining the route

The Company's engineer was Isambard Kingdom Brunel. He had visited the Dalkey line and been impressed with the capabilities of the atmospheric system there. Samuda had always put forward the advantages of his system, which (he claimed) included much better hill climbing abilities and lighter weight on the track. This would enable a line in hilly terrain to be planned with steeper than usual gradients, saving substantial cost of construction.

If Brunel had decided definitely to use the atmospheric system at the planning stage, it would have allowed him to strike a route that would have been impossible with the locomotive technology of the day. The route of the South Devon Railway, still in use today, has steep gradients and is generally considered "difficult". Commentators often blame this on it being designed for atmospheric traction; for example:

Sekon, describing the topography of the line, says that beyond Newton Abbot,

the conformation of the country is very unsuitable for the purpose of constructing a railway with good gradients. This drawback did not at the time trouble Mr. Brunel, the engineer to the South Devon Railway Company, since he proposed to work the line on the atmospheric principle, and one of the advantages claimed for the system being that steep banks were as easy to work as a level.[18]

  • The line "was left with a legacy of a line built for atmospheric working with the consequent heavy gradients and sharp curves".[19]
  • Brunel "seriously doubted the ability of any engine to tackle the kind of gradients which would be necessary on the South Devon".[20]

In fact the decision to consider the adoption of the atmospheric system came after Parliamentary authorisation, and the route must have been finalised before submission to Parliament.

Eight weeks after passage of the Act, the shareholders heard that "Since the passing of the Act, a proposal has been received ... from Messrs. Samuda Brothers ... to apply their system of traction to the South Devon Line." Brunel and a deputation of the directors had been asked to visit the Dalkey line. The report went on that as a result,

In view of the fact that at many points of the line both the gradients and curves will render the application of this principle particularly advantageous, your directors have resolved that the atmospheric system, including an electric telegraph, should be adopted on the whole line of the South Devon Railway.[21]

Construction and opening

Construction started at once on the section from Exeter to Newton Abbot (at first called Newton); this first part is broadly level: it was the section onwards from Newton that was hilly. Contracts for the supply of the 45 horsepower (34 kW) pumping engines and machinery were concluded on 18 January 1845, to be delivered by 1 July in the same year. Manufacture of the traction pipes ran into difficulties: they were to be cast with the slot formed,[note 10] and distortion was a serious problem at first.

Delivery of the machinery and laying of the pipes was much delayed, but on 11 August 1846, with that work still in progress, a contract was let for the engines required over the hilly section beyond Newton. These were to be more powerful, at 64 horsepower (48 kW), and 82 horsepower (61 kW) in one case, and the traction pipe was to be of a larger diameter.

The train service started between Exeter and Teignmouth on 30 May 1846, but this was operated by steam engines, hired in from the GWR. At length, on 13 September 1847[note 11] the first passenger trains started operating on the atmospheric system.[22][23] Atmospheric goods trains may have operated a few days previously.

Four atmospheric trains ran daily in addition to the advertised steam service, but after a time they replaced the steam trains. At first the atmospheric system was used as far as Teignmouth only, from where a steam engine hauled the train including the piston carriage to Newton, where the piston carriage was removed, and the train continued on its journey. From 9 November some atmospheric working to Newton took place, and from 2 March 1848 all trains on the section were atmospheric.

Through that winter of 1847-8 a regular service was maintained to Teignmouth. The highest speed recorded was an average of 64 mph (103 km/h) over 4 miles (6.4 km) hauling 28 long tons (28 t), and 35 mph (56 km/h) when hauling 100 long tons (100 t).[citation needed]

Two significant limitations of the atmospheric system were overcome at this period. The first was an auxiliary traction pipe provided at stations; it was laid outside the track, and therefore did not obstruct pointwork. The piston carriage connected to it by a rope (the pipe must have had its own piston), and the train could be hauled into a station and on to the start of the onward main pipe. The second development was a level crossing arrangement for the pipe: a hinged cover plate lay across the pipe for road usage, but when the traction pipe was exhausted, a branch pipe actuated a small piston which raised the cover, enabling the piston carriage to pass safely, and acting as a warning to road users. Contemporary technical drawings show the traction pipe considerably lower than normal, with its top about level with the rail heads, and with its centre at the level of the centre of the transoms. No indication is shown as to how track gauge was maintained.

Underpowered traction system

Starcross pumping house.

Although the trains were running ostensibly satisfactorily, there had been technical miscalculations. It seems[24] that Brunel originally specified 12-inch (300 mm) pipes for the level section to Newton and 15-inch (380 mm) pipes for the hilly part of the route, and in specifying the stationary engine power and vacuum pumps, he considerably underpowered them. The 12-inch (300 mm) pipes seem to have been scrapped, and 15-inch (380 mm) pipes installed in their place, and 22-inch (560 mm) pipes started to be installed on the hilly sections. Changes to the engine control governors were made to uprate them to run 50% faster than designed. It was reported that coal consumption was much heavier than forecast, at 3s 1½d per train mile instead of 1s 0d (and instead of 2s 6d, which was the hire charge for the leased GWR steam locomotives). This may have been partly due to the electric telegraph not yet having been installed, necessitating pumping according to the timetable, even though a train might be running late. When the telegraph was ready, on 2 August, coal consumption in the following weeks fell by 25%.[25]
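The running-cost comparison is in pre-decimal currency (12 pence to the shilling), which obscures the scale of the overrun. Converted to pence per train mile, a simple sketch, with the 25% fall applied directly to the quoted actual cost:

```python
def to_pence(shillings, pence=0):
    """Convert pre-decimal shillings and pence to pence (12d = 1s)."""
    return 12 * shillings + pence

actual = to_pence(3, 1.5)    # 3s 1.5d actual coal cost per train mile
forecast = to_pence(1)       # 1s 0d forecast
gwr_hire = to_pence(2, 6)    # 2s 6d hire charge for GWR locomotives

print(f"actual : {actual:.1f}d per train mile ({actual / forecast:.1f}x forecast)")
print(f"after telegraph (25% fall): about {actual * 0.75:.1f}d, "
      f"just under the {gwr_hire:.0f}d locomotive hire rate")
```

So even after the telegraph cut pumping waste, coal alone cost a little over twice the original forecast per train mile.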

Problems with the slot closure

During the winter of 1847–1848 the leather flap valve that sealed the traction pipe slot began to give trouble. On cold days the leather, saturated by rain, froze hard in frost. This resulted in its failing to seat properly after the passage of a train, allowing air into the pipe and reducing the effectiveness of pumping. The following spring and summer brought hot and dry weather, and the leather valve dried out, with much the same outcome. Brunel had the leather treated with whale oil in an attempt to maintain flexibility. There was said to be a chemical reaction between the tannin in the leather and iron oxide on the pipe. There were also difficulties with the leather cup seal on the pistons.

Commentators observe that the South Devon system omitted the iron weather flap that was used on the Dalkey line to cover the flap valve. On that line the iron plates were turned away immediately ahead of the piston bracket. It is not recorded why this was omitted in South Devon, but at speed that arrangement must have involved considerable mechanical force, and generated environmental noise.

In May and June even more serious trouble was experienced when sections of the flap tore away from their fixings and had to be quickly replaced. Samuda had a contract with the company to maintain the system, and he advised installation of a weather cover, but this was not adopted. It would not have rectified the immediate problem in any case: complete replacement of the leather flap was required. This was estimated to cost £32,000 (a very large sum of money then) and Samuda declined to act.

Abandonment

With a contractual impasse during struggles to keep a flawed system in operation, it was inevitable that the end was near. At a shareholders' meeting on 29 August 1848 the directors were obliged to report all the difficulties, and that Brunel had advised abandonment of the atmospheric system; arrangements were being made with the Great Western Railway to provide steam locomotives, and the atmospheric system would be abandoned from 9 September 1848.

Brunel's report to the Directors, now shown to the meeting, was comprehensive, and he was also mindful of his own delicate position, and of the contractual obligations of Samuda. He described the stationary engines, obtained from three suppliers: "These engines have not, on the whole, proved successful; none of them have as yet worked very economically, and some are very extravagant in the use of fuel." As to the difficulties with the leather valve in extremes of weather, heat, frost and heavy rain,

The same remedies apply to all three, keeping the leather of the valve oiled and varnished, and rendering it impervious to the water, which otherwise soaks through it in wet weather, or which freezes it in cold, rendering it too stiff to shut down; and the same precaution prevents the leather being dried up and shrivelled by the heat; for this, and not the melting of the composition, is the principal inconvenience resulting from heat. A little water spread on the valve from a tank in the piston carriage has also been found to be useful in very dry weather, showing that the dryness, and not the heat, was the cause of the leakage.

But there was a much more serious problem: "A considerable extent of longitudinal valve failed by the tearing of the leather at the joints between the plates. The leather first partially cracked at these points, which caused a considerable leakage, particularly in dry weather. After a time it tears completely through."

Maintenance of the traction pipe and the valve was Samuda's contractual responsibility, but Brunel indicated that he was blaming the company for careless storage, and for the fact that the valve had been installed for some time before being used by trains; Brunel declined to go into the liability question, alluding to possible palliative measures, but concluded:

The cost of construction has far exceeded our expectations, and the difficulty of working a system so totally different from that to which everybody – traveller as well as workmen – is accustomed, have (sic) proved too great; and therefore, although, no doubt, after some further trial, great reductions may be made in the cost of working the portion now laid, I cannot anticipate the possibility of any inducement to continue the system beyond Newton.[26]

Huge hostility was generated among some shareholders; Samuda, and Brunel in particular, were heavily criticised, but the atmospheric system on the line was finished.

Retention recommended

Thomas Gill had been Chairman of the South Devon board and wished to continue with the atmospheric system. In order to press for this he resigned his position, and in November 1848 published a pamphlet urging retention of the system. He created enough support that an Extraordinary General Meeting of the Company was held on 6 January 1849. Lengthy technical discussion took place, in which Gill stated that Clark and Varley were prepared to contract to complete the atmospheric system and maintain it over a section of the line. There were, Gill said, twenty-five other inventors anxious to have their creations tried out on the line. The meeting lasted for eight hours, but finally a vote was taken: a majority of shareholders present were in favour of continuing with the system, by 645 shares to 567. However, a large block of proxies was held by shareholders who did not wish to attend the meeting, and with their votes abandonment was confirmed by 5,324 to 1,230.

That was the end of the atmospheric system on the South Devon Railway.

Rats

It is often asserted among enthusiasts' groups that the primary cause of the failure of the leather flap was rats, attracted to the tallow, gnawing at it. Although rats are said to have been drawn into the traction pipe in the early days, there was no reference to this at the crisis meeting described above.

Technical details

Wormwood Scrubs demonstration line

The piston carriage on the demonstration line was an open four-wheeled truck. No controls of any kind are shown on a drawing. The beam that carried the piston was called the "perch"; it was attached directly to the axles and pivoted at its centre point, and it had a counterweight to the rear of the attachment bracket (called a "coulter").

Dalkey line

The customary train consist was two coaches: the piston carriage, which included a guard's compartment and third class accommodation, and a first class carriage with end observation windows at the rear. The guard had a screw brake but no other control. Returning (descending) was done under gravity; the guard had a lever which enabled him to swing the piston assembly to one side, so that the descent was made with the piston outside the tube.

Saint Germain line

The section put into service, Le Pecq to Saint Germain, was almost exactly the same length as the Dalkey line, and was operated in a similar way except that the descent by gravity was made with the piston in the tube so that air pressure helped retard speed. The upper terminal had sidings, with switching managed by ropes.[27]

London and Croydon

The piston carriages were six-wheeled vans, double-ended, with a driver's platform at each end. The driver's position was within the carriage, not in the open. The centre axle was unsprung, and the piston assembly was directly connected to it. The driver had a vacuum gauge (a mercury manometer) connected by a metal tube to the head of the piston. Some vehicles were fitted with speedometers, an invention of Moses Ricardo. As well as a brake, the driver had a by-pass valve which admitted air to the partially exhausted traction tube ahead of the piston, reducing the tractive force exerted. This seems to have been used on the 1 in 50 descent from the flyover. The lever and valve arrangement are shown in a diagram in Samuda's Treatise.

Variable size piston

Part of Samuda's patent included the variable diameter piston, enabling the same piston carriage to negotiate route sections with different traction tube sizes. Clayton describes it: the change could be controlled by the driver while in motion; a lever operated a device rather like an umbrella at the rear of the piston head, with hinged steel ribs. To accommodate the bracket for the piston, the traction tube slot, and therefore the top of the tube, had to be at the same level whatever the diameter of the tube, so that all of the additional space to be sealed was downwards and sideways; the "umbrella" arrangement was asymmetrical. In fact this was never used on the South Devon Railway, as the 22 inch tubes there were never opened; and the change at Forest Hill only lasted four months before the end of the atmospheric system there.[28] A variable diameter piston was also intended to be used on the Saint-Germain railway, where a 15 inch pipe was to be used from Nanterre to Le Pecq, and then a 25 inch pipe on the three and a half per cent grade up to Saint-Germain. Only the 25 inch section was completed, so a simple piston was used.[27]

Engine house locations, South Devon Railway

  • Exeter; south end of St Davids station, up side of the line
  • Countess Wear; south of Turnpike bridge, at 197m 22c, down side[note 12]
  • Turf; south of Turf level crossing, down side
  • Starcross; south of station, up side
  • Dawlish; east of station, up side
  • Teignmouth; adjacent to station, up side
  • Summer House; at 212m 38c, down side
  • Newton; east of station, down side
  • Dainton; west of tunnel, down side
  • Totnes; adjacent to station, up side
  • Rattery; 50.43156,-3.78313; building never completed
  • Torquay; 1 mile north of Torre station (the original terminal, called Torquay), up side

In the Dainton engine house a vacuum receiver was to be installed in the inlet pipe to the pumps. This was apparently an interceptor for debris that might be ingested into the traction pipe; it had an openable door for staff to clear the debris from time to time.[29]

Displays of atmospheric railway tube

  • Didcot Railway Centre, Didcot, Oxfordshire: three unused sections of South Devon 22 inch pipe, found under sand in 1993 at Goodrington Sands, near Paignton, mounted in 2000 with GWR rails recovered from another source.
  • Newton Abbot Town and GWR Museum, Newton Abbot, Devon: a very short cut section of unused South Devon 22 inch pipe, possibly from the 1993 discovery.
  • Being Brunel, Bristol: one section of unused South Devon 22 inch pipe, possibly from the 1993 discovery.
  • Museum of Croydon, Croydon: one section of London and Croydon 15 inch pipe with iron and leather valve intact, found in ground in 1933 at West Croydon station.

Other early applications

Aeromovel

Aeromovel train in Porto Alegre, seen in 2013

The nineteenth-century attempts to make a practical atmospheric system (described above) were defeated by technological shortcomings. Today, modern materials have enabled a practical system to be implemented.

Towards the end of the twentieth century the Aeromovel Corporation of Brazil developed an automated people mover that is atmospherically powered. Lightweight trains ride on rails mounted on an elevated hollow concrete box girder that forms the air duct. Each car is attached to a square plate (the piston) within the duct, connected by a mast running through a longitudinal slot that is sealed with rubber flaps. Stationary electric air pumps are located along the line to either blow air into the duct to create positive pressure or to exhaust air from the duct to create a partial vacuum. The pressure differential acting on the piston plate causes the vehicle to move.
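The tractive effort available from such a system is simply the pressure differential across the plate multiplied by the plate's area. A minimal sketch of that arithmetic follows; the plate size and operating pressure used here are illustrative assumptions, not published Aeromovel specifications:

```python
# Rough tractive-force estimate for a duct-and-piston system of this kind.
# The plate area and pressure differential are assumed, illustrative values.

def tractive_force(delta_p_kpa: float, plate_area_m2: float) -> float:
    """Force in newtons = pressure differential (in Pa) x plate area (in m^2)."""
    return delta_p_kpa * 1000 * plate_area_m2

# A 1 m^2 piston plate with a modest 10 kPa (~0.1 atm) differential:
force = tractive_force(10, 1.0)
print(f"{force:.0f} N")  # 10000 N, roughly a tonne-force
```

This illustrates why a lightweight vehicle needs only a small fraction of atmospheric pressure to accelerate briskly, and why duct leakage (the downfall of the leather-flap systems) directly erodes the available force.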

Electric power for lighting and braking is supplied to the train by a low-voltage (50 V) current through the track the vehicles run on; this is used to charge onboard batteries. The trains have conventional brakes for accurate stopping at stations; these brakes are automatically applied if there is no pressure differential acting on the plate. Fully loaded vehicles have a ratio of payload to dead-weight of about 1:1, which is up to three times better than conventional alternatives.[30] The vehicles are driverless, with motion determined by lineside controls.[31] Aeromovel was designed in the late 1970s by the Brazilian Oskar H. W. Coester.[32]

The system was first implemented in 1989 at Taman Mini Indonesia Indah, Jakarta, Indonesia. It was constructed to serve a theme park; it is a 2-mile (3.22 km) loop with six stations and three trains.[33]

The Aeromovel system is in operation at Porto Alegre Airport, Brazil. A line connecting the Estação Aeroporto (Airport Station) on the Porto Alegre Metro and Terminal 1 of Salgado Filho International Airport began operation on Saturday 10 August 2013.[34] The single line is 0.6 miles (1 km) long with a travel time of 90 seconds. The first 150-passenger vehicle was delivered in April 2013, with a second, 300-passenger vehicle delivered later.

In 2016 construction commenced on a 4.7 km single line with seven stations in the city of Canoas. Construction was due to be completed in 2017, but in March 2018 the new city administration announced that the project had been suspended pending endorsement from central government and that equipment already purchased had been placed in storage. The new installation is part of a planned 18 km, two-line, twenty-four-station system in the city.[35][36][37]

Concept

Flight Rail Corp. in the USA has developed the concept of a high-speed atmospheric train that uses vacuum and air pressure to move passenger modules along an elevated guideway. Stationary power systems create vacuum (ahead of the piston) and pressure (behind the piston) inside a continuous pneumatic tube located centrally below rails within a truss assembly. The free piston is magnetically coupled to the passenger modules above; this arrangement allows the power tube to be closed, avoiding leakage. The transportation unit operates above the power tube on a pair of parallel steel rails.

The company currently has a 1/6-scale pilot model operating on an outdoor test guideway. The guideway is 2,095 feet (639 m) long and incorporates 2%, 6% and 10% grades. The pilot model operates at speeds up to 25 mph (40 km/h). The Corporation claims that a full-scale implementation would be capable of speeds in excess of 200 mph (322 km/h).[38]

See also

  • Cable railway – a more successful, albeit slow, way of overcoming steep grades
  • Funicular – a system of overcoming steep grades using the force of gravity on downward cars to raise upward cars
  • Steam catapult – used for launching aircraft from ships; the arrangement of seal and traveller is similar, although positive pressure is used
  • Vactrain – a futuristic concept in which vehicles travel in an evacuated tube, to minimise air resistance; the suggested propulsion system is not atmospheric
  • Hyperloop

Notes

  1. ^ Yet as single line operation was envisaged, this seems to be impossible.
  2. ^ Kingstown station was not ready and the runs started from Glasthule Bridge.
  3. ^ Possibly C.-F. Mallet
  4. ^ This may mean that the exhaust air was used to create a draught for the fires.
  5. ^ It is not known exactly what form these points took, but some early engineers used switches in which the lead rails move together to form a butt joint with the approach rails, and it is likely Cubitt used this. The traction pipe can hardly have crossed the ordinary track and trains may have been moved by horses.
  6. ^ 75 seconds in moving the train by human or horse power to the pipe.
  7. ^ These values are much higher than Samuda arranged during the Wormwood Scrubs demonstrations; standard atmospheric pressure is taken as 29.92 in Hg.
  8. ^ The Maudsley engines consisted of two engines driving the same shaft; either could be disconnected if required.
  9. ^ Snow inside the tube itself might not have been serious; it is likely that compacted snow in the valve seating was the real problem.
  10. ^ In the Dalkey case the pipes were cast as complete cylinders, and the slot was then machined in.
  11. ^ Clayton says 14 September.
  12. ^ Kay states (page 25) that MacDermot and Hadfield wrongly say that Countess Wear house was on the up side of the line.

References

  1. ^ R. A. Buchanan, The Atmospheric Railway of I. K. Brunel, Social Studies of Science, Vol. 22, No. 2, Symposium on 'Failed Innovations' (May 1992), pp. 231–2
  2. ^ Howard Clayton, The Atmospheric Railways, self-published by Howard Clayton, Lichfield, 1966
  3. ^ Charles Hadfield, Atmospheric Railways, Alan Sutton Publishing Limited, Gloucester, 1985 (reprint of 1967), ISBN 0-86299-204-4
  4. ^ J. d'A. Samuda, A Treatise on the Adaptation of Atmospheric Pressure to the Purposes of Locomotion on Railways, John Weale, London, 1841
  5. ^ Samuda's treatise; references to parts on diagrams omitted.
  6. ^ "Report on the railroad constructed from Kingstown to Dalkey, in Ireland, upon the atmospheric system, and on the application of this system to railroads in general (Abridged Translation)", Mons. Mallet, The Practical Mechanic and Engineer's Magazine, in 4 parts commencing May 1844, p. 279
  7. ^ Industrial Heritage of Ireland website (archived)[dead link]
  8. ^ K. H. Vignoles, Charles Blacker Vignoles: Romantic Engineer, Cambridge University Press, 2010, ISBN 978-0-521-13539-9
  9. ^ Mallet, Rapport sur le chemin de fer établi suivant le système atmosphérique de Kingstown à Dalkey, en Irlande, et sur l'application de ce système aux chemins de fer en général, Carillan-Goeury et Ve Dalmont, Paris, 1844, accessible online
  10. ^ Jean Robert, Notre métro, Omens & Cie, Paris, 1967, ASIN B0014IR65O, page 391
  11. ^ Article in the New York Times, 10 November 1852
  12. ^ Charles Howard Turner, The London Brighton and South Coast Railway, volume 1, Batsford Books, London, 1977, ISBN 978-0-7134-0275-9, pages 239–256
  13. ^ Clayton, page 39
  14. ^ The Times newspaper, contemporary report, quoted in Clayton. Note: the Times digital archive does not appear to carry this article.
  15. ^ Samuda, letter to L&CR Board, quoted in Clayton
  16. ^ The Times newspaper, quoted in Clayton
  17. ^ Railway Chronicle (periodical), 10 May 1847, quoted in Clayton, stated that this was announced "last Tuesday"
  18. ^ G. A. Sekon (pseudonym), A History of the Great Western Railway, Digby Long & Co., London, 1895, reprinted by Forgotten Books, 2012
  19. ^ Clayton, page 75
  20. ^ Clayton, page 76
  21. ^ Report to Shareholders' meeting 28 August 1844, quoted in Clayton
  22. ^ R. H. Gregory, The South Devon Railway, Oakwood Press, Salisbury, 1982, ISBN 0-85361-286-2
  23. ^ Peter Kay, Exeter – Newton Abbot: A Railway History, Platform 5 Publishing, Sheffield, 1991, ISBN 978-1-872524-42-9
  24. ^ Clayton, page 91
  25. ^ Clayton, page 92
  26. ^ Brunel's report to the Directors, reproduced in Clayton
  27. ^ Paul Smith, Les chemins de fer atmosphériques, In Situ, October 2009
  28. ^ Clayton, pages 113–199
  29. ^ Clayton, page 110
  30. ^ "Aeromovel – Technology". Retrieved 30 April 2013.
  31. ^ "US Patent 5,845,582: Slot sealing system for a pneumatic transportation system guideway". Retrieved 30 April 2013.
  32. ^ Aeromovel described
  33. ^ "Aeromovel: History".
  34. ^ Aeromovel inaugurated at airport. Archived 17 August 2013 at the Wayback Machine.
  35. ^ http://www.aeromovel.com.br/en/projeto/canoas/
  36. ^ http://www.diariodecanoas.com.br/_conteudo/2016/08/noticias/regiao/376207-aeromovel-vai-transportar-211-mil-passageiros.html
  37. ^ "Aeromóvel de Canoas (RS) segue indefinido". Diário do Transporte (in Portuguese). 26 March 2018. Retrieved 5 August 2018.
  38. ^ Flight Rail Corp

Further reading

  • Adrian Vaughan, Railway Blunders, Ian Allan Publishing, Hersham, 2008, ISBN 978-0-7110-3169-2; page 21 shows a photograph of L&CR traction tubes unearthed in 1933.
  • Arthur R Nicholls, The London & Portsmouth Direct Atmospheric Railway, Fonthill Media, 2013, ISBN 978 1 78155244 5; the story of an unsuccessful attempt at a trunk route.
  • Winchester, Clarence, ed. (1936), "The Atmospheric railway", Railway Wonders of the World, pp. 586–588.

To become a software consultant, avoid letting clients pay you for code (2017)


Recently, I made an idle threat on Twitter. I was thinking of creating some content along the lines of how to go from being a software developer to a software consultant. People ask me about this all the time, and it makes for an interesting subject. I was also flattered and encouraged by the enthusiastic response to the tweet.

I’m still mulling over the best delivery mechanism for such a thing. I could do another book, but I could also do something like a video course or perhaps a series of small courses. But whatever route I decide to go, I need to chart out the content before doing anything else. I could go a mile wide and a mile deep on that, but I’d say there’s one sort of fundamental, philosophical key to becoming a software consultant. So today I’d like to speak about that.

Software Consultant, Differentiated

I won’t bury the lede any further here. The cornerstone piece of advice I’ll offer is the one upon which I’d build all of the rest of my content. You probably won’t like it. Or, you’ll at least probably think it should take a back seat to other pieces of advice like, “be sympathetic” or “ask a lot of questions” or something. But, no.

Don’t ever let would-be consulting clients pay you for code that you write.

Seriously. That’s the most foundational piece of your journey from software developer to software consultant. And the reason has everything to do with something that successful consultants come to understand well: positioning. Now, usually, people talk about positioning in the context of marketing as differentiating yourself from competitors. Here, I’m talking about differentiating yourself from what you’re used to doing (and thus obliquely from competitors you should stop bothering to compete with: software developers).

Let me explain, as I’m wont to do, with narrative.

Leonardo Da Vinci: Renaissance Plumber

By any reckoning, Leonardo Da Vinci was one of the most impressive humans ever to walk the planet. Among his diverse achievements, he painted the Mona Lisa, designed a tank, and made important strides in human anatomy. But let’s say that, in a Bill and Ted-like deus ex machina, someone transported him 500 years into the future and brought him to the modern world.

Even someone as impressive as Leonardo would, no doubt, need a bit of time to get his bearings. So assume that, as he learned modern language, technology, and culture, he took a job as a plumber.

Leonardo Da Vinci as a plumber to help you understand the difference between software developer and software consultant

Let’s assume that you happened to have a leaky sink faucet, and you called Leonardo’s plumbing company for help. They dispatched him forthwith to take a look and to help you out.

So Leonardo comes over and, since he’s Leonardo, figures out almost immediately that your supply line has come slightly loose. He tightens it, and you couldn’t be more pleased with the result.

Leonardo, Ignored

Encouraged by your praise, Leonardo then gets a bit of a twinkle in his eye. He does some mental arithmetic and tells you that you could actually cut down on your water bill by about 15% if you adopted a counter-intuitive way of cleaning off your dishes after meals. He proceeds to tell you how that would work. And, while he’s at it, he points out that the painting print on your kitchen wall isn’t a good match for the paint in the room.

And do you know what you do in response to the genius of Leonardo Da Vinci teaching you a better dish washing scheme and helping you with your art? You smile, humor him, and think to yourself, “just fix the sink and get out of here.” And you’re absolutely right to do that.

Why? Because you have no way of knowing that he’s Leonardo Freakin’ Da Vinci. You just understand that you hired someone to fix a sink, and that someone is now giving you unsolicited advice about washing your dishes and decorating your house. You hired him to perform labor, and instead (or in addition), you’re getting his opinions about your life.

Software Developer, Ignored

Anyone reading this who has spent time as a professional software developer can probably relate to my channeling of Da Vinci. You understand the better outcomes your company would have if you’d ramp down tech debt. You can easily see that management’s waterfall ‘methodology’ is ineffective and misery-inducing. And, while you’re no expert, you even know that the company’s branding and sales strategies are ineffective.

You offer constructive feedback, but nobody listens. You’re Da Vinci, ignored. And I’m not patronizing you. You have to be smart to write software for a living, and in every shop I’ve ever visited, the software developers had good ideas that extended far beyond the boundaries of an IDE. And, usually, management humored them or else flat out ignored them. You could chalk this up to familiarity breeding contempt, but it’s really the positioning that I mentioned earlier.

Management hired you to perform the labor of writing code. Your unsolicited opinions are not part of that equation. Because you’re in high demand, people smile, nod, and humor you. But they don’t care. That’s the life of a software developer. File your suggestions in the little bin on the ground next to my desk, with a “suggestions” label taped over the “trash” label.

Would-Be Software Consultant, Ignored

Let’s say that you tire of this at some point. You decide you love the industry and that you love software, but you want more influence. Management isn’t for you, so “consulting” it is. Maybe you hang out your shingle to freelance, or maybe you go work for a software “consulting” firm. Now, it’ll be different. Now, people will listen.

And then, to your intense frustration, it doesn’t turn out that way. Even though you have “consultant” in your title and charter, people still humor you and say, “whatever, buddy, just code up the spec.”

What gives? Well, a big part of the problem lies in the dilution of the term “consultant” in our line of work. Everyone at your client’s site that doesn’t have a W2 arrangement with that client is a “consultant,” whether they personally advise the CIO or whether they lock themselves in a broom closet and write stored procedures.

And, to make matters worse, every firm that does custom app dev and calls it consulting positions themselves in an entirely predictable way. “Oh, heavens no. Our consultants aren’t just coders – they write code AND provide thought leadership and advice.”

That’s so utterly expected that some clients would probably find it refreshing to hear one of these shops or people say, “nope, we just turn specs into code.” Thank goodness. I finally don’t have to listen to the plumber talk about my choice of wall decorations.

Positioning Like You Mean It

So let’s take an honest look at the software consultant’s situation. All that “consultant” really tells anyone for sure is that you do work for someone that doesn’t send you a W2. But, if they play the odds, it tells them that you write code for someone that doesn’t send you a W2 and will offer a lot of opinions, whether anyone wants them or not. The stock software consultant persona, and thus default positioning, is then “opinionated, above average developer.”

Now the people interested in my prospective book or course are people that actually want to be consultants. Firms don’t pay consultants for labor (or code); they pay consultants for their opinions. So, here’s the rub. If you introduce yourself as a software consultant or someone else introduces you that way, your default positioning is “coder.” But, to achieve your objective, you need to position yourself as an actual consultant, getting paid for advice.

While many subtle options exist to nudge yourself in that direction, you have one foundational one. Don’t let your clients pay you to write code.

In a world where every software developer writing code for another company is a “consultant,” you can position yourself as an actual consultant by not writing code for pay. Nobody confuses you with a pro coder then.

Medicine as a Metaphor

Jonathan Stark, of Ditching Hourly and the Freelancer’s Show, has a great metaphor to help you understand the positioning concept. And I’ll use it here to drive home a major differentiation between consulting and laboring (i.e. writing code).

He talks about four phases of solving problems for companies. Those include diagnosis, prescribing a cure, application of the cure, and re-application of the cure. Software developers and most so-called software consultants involve themselves almost exclusively in phase three: application. But that’s a pretty low leverage place to be. Consultants exist almost exclusively in phases one and two: diagnosing and prescribing. They let laborers take care of phase three, and even lower status laborers take care of phase four.

Think of it in terms of other knowledge workers. You go to the doctor with an ailment, and the doctor figures out the ailment and prescribes medicine. But if that medicine involves rubbing stuff on the bottom of your feet 5 times per day, he doesn’t also handle that; it’s below his pay grade. You do that yourself, or you hire a masseuse or something.

When you write code as a software “consultant,” you tell people that you’re in the business of diagnosis and prescription. But when the rubber meets the road, you spend almost all of your time slathering stuff on people’s feet and talking at length about (“consulting on”) the best ways to slather.

Now, imagine an industry in which diagnosticians and slatherers alike all called themselves doctors. When you needed a diagnosis, you’d start to look reflexively for people without goop on their hands in order to tell the difference.

Caveats

First of all, let me clear something up immediately. I can practically write the comment myself. Someone is going to read this and say, “well, I’m a consultant that writes code for my clients and they actually asked me whether they should adopt Scrum or not and then listened.” Yes, I believe that, in the same way I believe that management does sometimes listen to staff software developers’ opinions. It happens. But it’s a far cry from you being there only to tell them whether or not to adopt Scrum.

Secondly, you can write code in a consultative capacity. Coaches and trainers make excellent examples. Notice that I said not to let people pay you for code that you write. Companies don’t pay trainers for the code that they write, but rather for the service of showing their team how to write code. As a rule of thumb for differentiating, ask yourself whether the client depends on you to code something intended for production. If the answer is yes, you’re slathering foot goop and not diagnosing.

And, finally, I won’t dispute that some people may walk this line with more than ephemeral success. Maybe everywhere they go, they roll up their sleeves and crank code all morning, only to then go into the CIO’s office and provide strategy advice. I’ve never actually seen that or anything close to it, but it could happen. Or, maybe even more likely, someone consults for some clients and codes for others. Whatever the arrangement, some people might succeed in perpetually walking the consultant-coder line. And, good for them. But what I can tell you is that this is the exception and not the rule.

There’s an Awful Lot More to Consulting, But Here’s Your Start

As I mentioned early in the post, I could fill a book or course(s) with information about how to succeed as a software consultant. Going from software developer to software consultant seems kind of straightforward, but that’s really fool’s gold. It’s superficially easy if you accept the extremely loose definition of consulting, but not if you seriously want to get paid for expert opinions, diagnoses and prescriptions instead of for writing code. Then you have a good bit of learning and skill acquisition ahead of you.

So why, of all things, do I pick avoiding writing code as foundational? Well, as I’ve said all along, positioning yourself is critical, and that’s your single best piece of positioning. In order to get paid for diagnosing, you need someone asking you for a diagnosis, and not asking you to slather stuff on their foot and just call that diagnosing.

But there’s an even subtler reason for the emphasis on not coding, as well. Writing code is satisfying, fun, and extremely marketable. And so finding people to pay you to write code is tantalizingly easy. They need your programming skills so badly, they’ll probably even call you the “CEO of teh codez” if that’s what you want to be called. You have a ready-made crutch.

Refusing to write code for clients means forcibly removing the crutch. Doing this, you’re like a non-native language speaker who flies to a foreign country and practices learning by immersion. You have no crutch, and no choice but to figure it out. You can write code for fun, in your spare time, and to support your practice. But if you want to get serious about consulting, stop slathering so that you can start diagnosing. Don’t let ’em pay you for your code.



The high-risk, high-reward world of selling random stuff on Amazon

↧

Making Windows Slower Part 2: Process Creation


Windows has long had a reputation for slow file operations and slow process creation. Have you ever wanted to make these operations even slower? This week's blog post covers a technique you can use to make process creation on Windows grow slower over time (with no limit), in a way that will be untraceable for most users!

And, of course, this post will also cover how to detect and avoid this problem.

This issue is a real one that I encountered earlier this year, and this post explains how I uncovered the problem and found a workaround. The previous post in this series was Making Windows Slower Part 1: File Access.

Noticing that something is wrong

I don’t go looking for trouble, but I sure seem to find it. Maybe it’s because I build Chrome from source hundreds of times over the weekend, or maybe I’m just born with it. I guess we’ll never know. For whatever reason, this post documents the fifth major problem that I have encountered on Windows while building Chrome.

  1. Unplanned serialization that led to full-system UI hangs: 24-core CPU and I can’t move my mouse
  2. Process handle leak in one of Microsoft’s add-ons to Windows: Zombie Processes are Eating your Memory
  3. A long-standing correctness bug in the Windows file cache: Compiler bug? Linker bug? Windows Kernel bug?
  4. A performance glitch if you misuse file notifications: Making Windows Slower Part 1: File Access
  5. And this one – an odd design decision that makes process creation slower over time

Tracking a rare crash

Computers should be reliable and predictable and I get annoyed when they aren’t. If I build Chrome a few hundred times in a row then I would like every build to succeed. So, when our distributed compiler process (gomacc.exe) would crash occasionally I wanted to investigate. I have automatic recording of crash dumps configured so I could see that the crashes happened when heap corruption was detected. A simple way of investigating that is to turn on pageheap so that the Windows heap puts each allocation on a separate page. This means that use-after-free and buffer overruns become instant crashes instead of hard to diagnose corruption. I’ve written about enabling pageheap using App Verifier before.

App Verifier causes your program to run more slowly, both because allocations are now more expensive and because the page-aligned allocations mean that your CPU’s cache is mostly neutered. So, I expected my builds to run a bit slower, but not too much, and indeed the build seemed to be running fine.

But when I checked in later the build seemed to have stopped. After about 7,000 build steps there was no apparent sign of progress.

O(n^2) is usually not okay

It turns out that Application Verifier likes to create log files. Never mind that nobody ever looks at these log files, it creates them just in case. And these log files need to have unique names. And I’m sure it seemed like a good idea to just give these log files numerically ascending names like gomacc.exe.0.dat, gomacc.exe.1.dat, and so on.

To get numerically ascending names you need to find what number you should use next, and the simplest way to do that is to just try the possible names/numbers until you find something that hasn’t been used. That is, try to create a new file called gomacc.exe.0.dat and if that already exists then try gomacc.exe.1.dat, and so on.
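A minimal Python sketch of that naive scheme (the function name `next_log_name` is hypothetical; App Verifier's actual implementation is native code, but the shape of the algorithm is the same):

```python
import os

def next_log_name(log_dir, exe_name):
    """Linear search for the first unused log file name, as described above.
    Probing name N requires N failed existence checks first, so creating N
    log files in sequence costs O(N^2) file-system probes in total."""
    n = 0
    while True:
        candidate = os.path.join(log_dir, f"{exe_name}.{n}.dat")
        if not os.path.exists(candidate):  # one file-system round trip per probe
            return candidate
        n += 1
```

Each call is cheap when the directory is empty, and gets linearly more expensive with every log file that accumulates.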

What’s the worst that could happen?

Actually, the worst is pretty bad

It turns out that if you do a linear search for an unused file name whenever you create a process then launching N processes takes O(N^2) operations. A good rule of thumb is that O(N^2) algorithms are too slow unless you can guarantee that N always stays quite small.

Exactly how bad this will be depends on how long it takes to see if a file name already exists. I’ve since done measurements that show that in this context Windows seems to take about 80 microseconds (80 µs or 0.08 ms) to check for the existence of a file. Launching the first process is fast, but launching the 1,000th process requires scanning through the 1,000 log files that have already been created, and that takes 80 ms, and it keeps getting worse.

A typical build of Chrome requires running the compiler about 30,000 times. Each launch of the compiler requires scanning over the previously created N log files, at 0.08 ms for each existence check. The linear search for the next available log file name means that launching N processes takes (N^2)/2 file existence checks, so 30,000 * 30,000 / 2 which is 450 million. Since each file existence check takes 0.08 ms that’s 36 million ms, or 36,000 seconds. That means that my Chrome build, which normally takes five to ten minutes, was going to take an additional ten hours.

Darn.
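The arithmetic above is easy to verify with a few lines of Python, using only the figures already given:

```python
# Back-of-the-envelope cost of the linear name scan, per the numbers above.
launches = 30_000        # compiler launches in one Chrome build
check_ms = 0.08          # measured cost of one file-existence probe

total_checks = launches * launches // 2          # ~N^2/2 probes in total
total_seconds = total_checks * check_ms / 1000   # ms -> s
total_hours = total_seconds / 3600

print(total_checks, total_seconds, total_hours)  # 450000000 36000.0 10.0
```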

When writing this blog post I reproduced the bug by launching an empty executable about 7,000 times and I saw a nice O(n^2) curve like this:

Oddly enough, if you grab an ETW trace and just look at the average time to call CreateFile on these many different file names then the result – from beginning to end – suggests that it takes less than five microseconds per file (an average of 4.386 microseconds in the example below):

[image: ETW file I/O summary showing an average CreateFile time of 4.386 µs]

It looks like this just reveals a limitation of ETW’s file I/O tracing. The file I/O events only track the very lowest level of the file system, and there are many layers above Ntfs.sys, including FLTMGR.SYS and ntoskrnl.exe. However, the cost can’t hide entirely – the CPU time all shows up in the CPU Usage (Sampled) graph. The screenshot below shows a 548 ms time period, representing the creation of one process, mostly just scanning over about 6,850 possible log file names:

[image: CPU Usage (Sampled) data for the 548 ms creation of a single process]

Would a faster disk help?

No.

The amount of data being dealt with is tiny, and the amount being written to disk is even tinier. During my tests to repro this behavior my disk was almost completely idle. This is a CPU bound problem because all of the relevant disk data is cached. And, even if the overhead was reduced by an order of magnitude it would still be too slow. You can’t make an O(N^2) algorithm be good.

Detection

You can detect this specific problem by looking in %userprofile%\appverifierlogs for .dat files. You can detect process creation slowdowns more generally by grabbing an ETW trace, and now you know one more thing to look for.
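For instance, a few lines of Python can count the accumulated log files (a sketch; the path is the one given above, and on a machine where App Verifier logging is off the count will simply be zero):

```python
import glob
import os

# Count App Verifier's log files. Thousands of .dat files here mean every
# new monitored process is paying for the linear name scan described above.
# (Windows-only path; %userprofile% expands to the user's profile directory.)
log_dir = os.path.expandvars(r"%userprofile%\appverifierlogs")
dat_files = glob.glob(os.path.join(log_dir, "*.dat"))
print(f"{len(dat_files)} App Verifier log files in {log_dir}")
```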

The solution

The simplest solution is to disable the generation of the log files. This also stops your disk from filling up with GB of log files. You can do that with this command:

appverif.exe -logtofile disable

With log file creation disabled I found that my tracked processes started about three times faster (!) than at the beginning of my test, and the slowdown is completely avoided. This allows 7,000 Application Verifier monitored processes to be spawned in 1.5 minutes, instead of 40 minutes. With my simple test batch file and simple process I see these process-creation rates:

  • 200 per second normally (5 ms per process)
  • 75 per second with Application Verifier enabled but logging disabled (13 ms per process)
  • 40 per second with Application Verifier enabled and logging enabled, initially… (25 ms per process, increasing to arbitrarily high limits)
  • 0.4 per second with logging enabled after building Chrome once (2.5 s per process)

Microsoft could fix this problem by using something other than a monotonically increasing log-file number. If they used the current date and time (to millisecond or higher resolution) as part of the file name then they would get log file names that were more semantically meaningful, and could be created extremely quickly with virtually no unique-file-search logic.
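A sketch of what such a timestamp-based scheme could look like (the function and naming pattern here are hypothetical illustrations, not Microsoft's):

```python
import datetime
import os

def timestamped_log_name(log_dir, exe_name):
    """Pick a log file name in O(1): date and time to microsecond resolution,
    plus the process id for extra uniqueness. No directory scan is needed,
    and the name is more semantically meaningful than a bare counter."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S-%f")
    return os.path.join(log_dir, f"{exe_name}.{stamp}.{os.getpid()}.dat")
```

Picking a name this way costs the same for the 30,000th process as for the first.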

But, Application Verifier is not being maintained anymore, and the log files are worthless anyway, so just disable them.

Supporting information

The batch files and script to recreate this after enabling Application Verifier for empty.exe can be found here.

An ETW trace from around the end of the experiment can be found here.

The raw timing data used to generate the graph can be found here.

↧

First analysis of how Uber and Lyft have affected roadway congestion in SF


[image: Rider enters a TNC vehicle]

Overview And Key Findings

The "TNCs and Congestion" report provides the first comprehensive analysis of how Transportation Network Companies Uber and Lyft collectively have affected roadway congestion in San Francisco.

Key findings in the report:

The report found that Transportation Network Companies accounted for approximately 50 percent of the rise in congestion in San Francisco between 2010 and 2016, as indicated by three congestion measures: vehicle hours of delay, vehicle miles travelled, and average speeds.

Employment and population growth were primarily responsible for the remainder of the worsening congestion.

Major findings of the TNCs & Congestion report show that collectively the ride-hail services accounted for:

  • 51 percent of the increase in daily vehicle hours of delay between 2010 and 2016;
  • 47 percent of the increase in vehicle miles travelled during that same time period; and
  • 55 percent of the average speed decline on roadways during that same time period.
  • On an absolute basis, TNCs comprise an estimated 25 percent of total vehicle congestion (as measured by vehicle hours of delay) citywide and 36 percent of delay in the downtown core.

Consistent with prior findings from the Transportation Authority’s 2017 TNCs Today report, TNCs also caused the greatest increases in congestion in the densest parts of the city - up to 73 percent in the downtown financial district - and along many of the city’s busiest corridors. TNCs had little impact on congestion in the western and southern San Francisco neighborhoods.

The report also found that changes to street configuration (such as when a traffic lane is converted to a bus-only lane) contributed less than 5 percent to congestion.

Resources

Download a copy of "TNCs and Congestion" report.

Download a copy of the press release.

Dynamic Map

TNC Congestion Explorer: Explore a dynamic map of TNCs and Congestion.

Data Files

Download a copy of the data file used to prepare the report:

Data set 2010

Data set 2016

Connect With Us

If you have questions about "TNCs Today," or are interested in a research collaboration, please contact Joe Castiglione, Deputy Director for Technology, Data and Analysis via email or Drew Cooper, Planner, via email.

↧

The Bredesen protocol for treating Alzheimer’s


This is the most important column I’ve ever written. The message is quite complex–dozens of new health parameters to test for and to optimize, all of them interacting in ways that will require new training for MDs. The message is also as simple as it can be: There is a cure for Alzheimer’s disease. You can stop reading right here, and buy two copies of Dale Bredesen’s book, one for you and one for your doctor: The End of Alzheimer’s.


Dr Bredesen’s spectacular success is easily lost in a flood of overly-optimistic, early hype about any number of magic cures. This is an excuse for the New York Times, the Nobel Prize committee, and the mainstream of medical research, but it’s no excuse for me. I’ve known Bredesen for 14 years, and I’ve written about his work in the past. His book has been out for a year, and I should have written this column earlier.

I suspect you’re waiting for the punch line: what is Bredesen’s cure? That’s exactly what I felt when I read about his work three years ago. But there isn’t a short answer. That’s part of the frustration, but it’s also a reason that Bredesen’s paradigm may be a template for novel research approaches to cancer, heart disease, and aging itself.

The Bredesen protocol consists of a battery of dozens of lab tests, combined with interviews, consideration of life style, home environment, social factors, dentistry, leaky gut, mineral imbalances, hormone imbalances, sleep and more. This leads to an individual diagnosis: Which of 36 factors known to affect APP cleavage are most important in this particular case? How can they be addressed for this individual patient?

Brain cells have on their surface a protein called APP, which is a dependence receptor. It is like a self-destruct switch whose default is in the ON position. The protein that binds to the receptor is a neurotrophin ligand, and in the absence of the neurotrophin ligand, the receptor signals the cell to die.

APP cleavage is the core process that led Bredesen down a path to his understanding of the etiology of AD 16 years ago. APP is Amyloid Precursor Protein, and it is sensitive to dozens of kinds of signals, adding up the pros and the cons to make a decision, to go down one of two paths. It can be cleaved in two, creating signal molecules that cause formation of new synapses and formation of new brain cells; or it can be cleaved in four, creating signal molecules that lead to trimming back of existing synapses, and eventually, to apoptosis, cell suicide of neurons.

In a healthy brain, these two processes are balanced so we can learn new things and we can forget what is unimportant. But in the Alzheimer’s brain, destruction (synaptoclastic) dominates creation (synaptoblastic), and the brain withers away.

In the destructive branch, one of the fragments is beta amyloid. Beta amyloid blocks the dependence receptor, so the receptor cannot receive the neurotrophin ligand that gives it permission to go on living. Beta amyloid is one of the 4 pieces created when the APP molecule goes down the branch where it is split in 4.

One of the signals that determines whether APP splits in 2 or in 4 is beta amyloid itself. This implies a positive feedback loop: beta amyloid leads to even more beta amyloid, and in the Alzheimer’s patient, this is a runaway process. But positive feedback loops work in both directions, and that is a boon to Bredesen’s clinical approach. If the balance in signaling can be tipped from the destructive pathway back to the creative one, this can lead to self-reinforcing progress in the healing direction. In the cases where Bredesen’s approach has led to stunning reversals of cognitive loss, this is the underlying mechanism that explains the success.
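The two-way behavior of such a loop can be illustrated with a toy model (my own sketch for intuition, not anything from Bredesen’s work, and all parameters are made up): self-reinforcing production plus steady clearance yields two stable states, so a small push across the threshold tips the system one way or the other.

```python
# Toy bistable feedback loop (illustrative only): amyloid promotes its own
# production (sigmoidal positive feedback) while being cleared at a constant
# per-unit rate. Below a threshold the level decays to zero; above it, the
# loop runs away to a high stable level.

def simulate(a0, steps=20000, dt=0.01, k=1.0, K=1.0, clearance=0.4):
    a = a0
    for _ in range(steps):
        production = k * a * a / (K * K + a * a)  # self-reinforcing term
        a += dt * (production - clearance * a)    # net change this step
    return a

low = simulate(0.4)   # starts below the tipping point: decays toward zero
high = simulate(0.6)  # starts above it: runs away to the high fixed point
```

With these made-up parameters the unstable threshold sits at a = 0.5 and the high fixed point at a = 2.0; nudging the system from just above the threshold to just below it is the model’s analogue of tipping the signaling balance back toward the healthy pathway.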

Amyloid has been identified with AD for decades, and for most of that time the mainstream hypothesis was that beta-amyloid plaques cause the disease. (Adherents to this view have been referred to jokingly as BAPtists.) But success in dissolving the plaques has not led to restored cognitive function. In Bredesen’s narrative, generation of large quantities of beta amyloid is a symptom of the body’s attempts to triage a dying brain.

To tip the balance back toward growing new synapses

Having identified the focal point that leads to AD, Bredesen went to work first in the lab, then in the clinic, to identify processes that tend to tip the balance one way or the other. He has compiled quite a list.

  • Reduce APP β-cleavage
  • Reduce γ-cleavage
  • Reduce caspase-6 cleavage
  • Reduce caspase-3 cleavage
    (All the above are cleavage in 4)
  • Increase α-cleavage (cleavage in 2)
  • Prevent amyloid-beta oligomerization
  • Increase neprilysin
  • Increase IDE (insulin-degrading enzyme)
  • Increase microglial clearance of Aβ
  • Increase autophagy
  • Increase BDNF (brain-derived neurotrophic factor)
  • Increase NGF (nerve growth factor)
  • Increase netrin-1
  • Increase ADNP (activity-dependent neuroprotective protein)
  • Increase VIP (vasoactive intestinal peptide)
  • Reduce homocysteine
  • Increase PP2A (protein phosphatase 2A) activity
  • Reduce phospho-tau
  • Increase phagocytosis index
  • Increase insulin sensitivity
  • Enhance leptin sensitivity
  • Improve axoplasmic transport
  • Enhance mitochondrial function and biogenesis
  • Reduce oxidative damage and optimize ROS (reactive oxygen species) production
  • Enhance cholinergic neurotransmission
  • Increase synaptoblastic signaling
  • Reduce synaptoclastic signaling
  • Improve LTP (long-term potentiation)
  • Optimize estradiol
  • Optimize progesterone
  • Optimize E2:P (estradiol to progesterone) ratio
  • Optimize free T3
  • Optimize free T4
  • Optimize TSH (thyroid-stimulating hormone)
  • Optimize pregnenolone
  • Optimize testosterone
  • Optimize cortisol
  • Optimize DHEA (dehydroepiandrosterone)
  • Optimize insulin secretion and signaling
  • Activate PPAR-γ (peroxisome proliferator-activated receptor gamma)
  • Reduce inflammation
  • Increase resolvins
  • Enhance detoxification
  • Improve vascularization
  • Increase cAMP (cyclic adenosine monophosphate)
  • Increase glutathione
  • Provide synaptic components
  • Optimize all metals
  • Increase GABA (gamma-aminobutyric acid)
  • Increase vitamin D signaling
  • Increase SirT1 (silent information regulator T1)
  • Reduce NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells)
  • Increase telomere length
  • Reduce glial scarring
  • Enhance stem-cell-mediated brain repair

This explains why no single drug can have much effect on AD; it’s because the primary decision point depends on a balance among so many pro-AD (synaptoclastic) and anti-AD (synaptoblastic) signals. Addressing them all may be impractical in any given patient, so the Bredesen protocol is built around a detailed diagnostic process that identifies the factors that are most important in each individual case.

Three primary types of AD

Bredesen’s diagnosis begins with classifying each case of AD into one of three broad constellations of symptoms, with associated causes.

Type I is inflammatory. It is found more often in people who carry one or two ApoE4 alleles (a gene long associated with Alzheimer’s) and runs in families. Laboratory testing will often demonstrate an increase in C-reactive protein, interleukin-2, tumor necrosis factor, and insulin resistance, and a decrease in the albumin:globulin ratio.

Type II is atrophic. It also occurs more often in those who carry one or two copies of ApoE4, but occurs about a decade later. Here we do not see evidence of inflammatory markers (they may be decreased), but rather deficiencies of support for our brain synapses. These include decreased hormonal levels of thyroid, adrenal, testosterone, progesterone and/or estrogen, low levels of vitamin D, and elevated homocysteine.

Type III is toxic. This occurs more often in those who carry the ApoE3 allele rather than ApoE4, so it does not tend to run in families. This type tends to affect more brain areas, which may show neuroinflammation and vascular leaks on a type of MRI called FLAIR, and is associated with low zinc levels, high copper, low cortisol, high reverse T3, elevated levels of mercury or mycotoxins, or infections such as Lyme disease with its associated coinfections.

(This box quoted from Dr Neil Nathan’s book review)

There’s also a Type 1.5, associated with diabetes and sugar toxicity; a Type IV, which is vascular dementia; and a Type V, which is traumatic damage to the brain.

These categories are just a start. The patient will work closely with an expert physician to determine, first, which are the most important imbalances to address, and, second, which of the changes that can address them are most accessible given the lifestyle of this particular patient.

Success

Bredesen wrote a paper in 2014 about successes in reversing cognitive decline with his first ten patients. As of this writing, he has treated over 3,000 patients with the protocol called RECODE (for REversal of COgnitive DEcline), and he claims success with all of them, in the sense of measurable improvement in cognitive performance. This contrasts with the utter failure of all previous methods, which claim, at best, to slow cognitive decline.

Translation to the millions of Alzheimer’s patients will require training of local practitioners all across the country. A few doctors have already learned parts of the Bredesen protocol, and Bredesen’s website can help you find someone to guide your program, but you will probably have to travel. The first training for doctors is being organized now through the Institute for Functional Medicine.

Implications

This is a new paradigm for how to study chronic, debilitating diseases. Type 2 diabetes comes to mind as the next obvious candidate for reversal through an individualized, comprehensive program. Terry Wahls has pioneered a similar approach with MS. Cancer and heart disease may be in the future.

I’ll go out on a limb and say I think Bredesen’s protocol is the most credible generalized anti-aging program we have. (Blame me for the hyperbole, not Dr Bredesen; he has never made any such claim.) Could we adopt Bredesen’s research method to accelerate research in anti-aging medicine? Perhaps biomarkers for aging (especially methylation age) are approaching a point where they could be used as feedback for an individualized program, but Horvath’s PhenoAge clock will probably have to be 10 times more accurate to be used for individuals. Averaging over ~100 individuals can give this factor of 10 in a clinical trial. Still, we don’t have the kind of mechanistic understanding of aging that Bredesen himself developed for AD before bringing his findings to the clinic; this is probably because the causes of aging are more complex and varied than those of AD.
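That factor of 10 is just the 1/√n scaling of the standard error of the mean: averaging 100 independent measurements shrinks random error tenfold. A quick simulation makes the point (the 5-year per-person error is an illustrative assumption, not a published figure for any particular clock):

```python
import math
import random

random.seed(1)
SIGMA = 5.0  # assumed per-individual clock error in years (illustrative)

def mean_error_spread(n, trials=4000):
    """Empirical std. dev. of the average clock error across n-person groups."""
    means = [sum(random.gauss(0.0, SIGMA) for _ in range(n)) / n
             for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

single = mean_error_spread(1)    # ~5 years of noise for one individual
group = mean_error_spread(100)   # ~0.5 years for a 100-person trial arm
```

So a clock far too noisy to guide one person’s program can still resolve small group-level effects in a trial, which is exactly the gap the column describes.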

Disclaimers: I’m predisposed to think highly of Dale Bredesen and his ideas for 3 reasons. He was a friend to me, and gave me a platform when I was new to the field of aging. He believes that aging is programmed. And his multi-factorial approach parallels the approach I have advocated for researching other aspects of aging.

Rhonda Patrick interviews Dale Bredesen on FoundMyFitness


The story of Augur, an Ethereum prediction market


The tightly knit team of a half dozen coders converged in July in a 17th floor hotel suite with a panoramic view of the Las Vegas strip. They had picked the center of gambling in America for a symbolic reason: the team had just spent three years working together to build a new kind of prediction market called Augur, to make it possible for bettors anywhere to place wagers on anything.

Now they were about to unleash their creation on the world by uploading the code to the Ethereum network. Where else could they have gone for this historic moment but to the Strip?

Under a glitzy chandelier in a sky suite strewn with pizza boxes at the Aria Resort, it took the programmers a full day to finish the code and upload all the smart contracts for the decentralized application. They didn’t mind: after three years of working together, mostly in a ratty little house on the edge of San Francisco, it was unclear when or if they might see each other again.

That had been the plan all along. They weren’t creating a business where they hoped they all might work. This new kind of enterprise was more like a group of people making an indie movie. They were releasing a protocolβ€”a piece of software that would live forever on the Net. And after that? They would all simply walk away and find something else to work on.

Was it thrilling? “It was not thrilling,” Jack Peterson, Augur’s co-founder, recalls.

Oh, but it was. A lot was at stake. In fact, thousands of people had been betting on Augur for three years, including the 2,500 investors who had bought $5.3 million-worth of highly speculative digital “REP” tokens to sponsor its development. Though its creators insist it was a “presale of software licenses,” that event in 2015 was in effect one of the first initial coin offerings.

In the days after the Vegas launch, Augur did not disappoint. Thousands of users had traded upwards of $1.5 million on Augur, and the value of the REP digital tokens grew to the mid-$30 range. Hundreds of betting markets proliferated on Augur’s interface. Would U.S. President Trump be re-elected? Who would win the France–Belgium semi-final in the 2018 World Cup? Would the price of ether exceed $500 by the year’s end?

But then, within a week, Augur fell to earth. Only a few dozen people were trading it daily. Users complained about the clunky interface, and started to notice the abundance of dud markets (“Does God exist?”). Worse, morally challenged “assassination markets” emerged, which some observers believed might actually encourage bettors to kill celebrities if the jackpot got high enough. Some news outlets declared Augur a joke. One publication lamented the “hype, the horror, and the letdown of prediction market Augur.”

The truth lies somewhere in between.

If you want to understand what’s happening on the Internet right now and why thousands of developers and billions of dollars are being focused on the promise of web3, Augur is a pretty good place to start. Obviously, it is not a Facebook, Google or Twitter, certainly not at this point in history, and maybe never.

But it represents one of those moments in technology that signals the start of something potentially huge. A better analogy might be to consider one of the early Wright Brothers flights at Kitty Hawk. The Wright Flyer wasn’t much to look at, took you 120 feet, and only stayed airborne for seconds. But it was flying, and that was something that might lead somewhere huge, wasn’t it?

Vitalik Buterin, the person most people consider the godfather of Ethereum, certainly thought so. Most people don’t know that he was a seed investor, consultant, and muse to Augur. Augur, he says, is a success. “Even if it ends up with only 45 users, creating an application of this level of complexity and turning it into an actually working system is still a huge achievement.”

Fortune’s Children

Krug’s old gaming computer, maxed out with Radeon GPUs to mine bitcoin.

Augur’s story starts with Joey Krug, who was born in Knoxville, Illinois in 1995 to an ER nurse and a doctor. He grew up to love betting, business, and bitcoin, and by age 13, he was already winning “thousands” playing the ponies and the stock market, carefully filing the results in an Excel spreadsheet.

He learned about bitcoin in 2011, after reading an article on GPU mining on overclock.net, a hardware site. It ignited his business instincts: by simply hitching a few Radeon GPU units to a gaming computer and letting the whole thing whirr, he was able to earn money, right in the comfort of his childhood bedroom.

It was bitcoin that prompted him to drop out of Pomona College in California, where he studied computer science, after his freshman year. He left to write third-party bitcoin applications, including an app for buying things in bitcoin, which he abandoned once he realised “nobody wanted to buy things in bitcoin.”

Nevertheless, his marginal interests connected him to a Skype group, around 2014, with Buterin, who would soon co-create Ethereum, as well as Peterson, a then 32-year-old engineer and biophysicist working on his own abortive blockchain project, a startup called Dyffy.

Peterson was born in 1982 and grew up in Atlanta, Georgia. Unlike Krug, he had never been much of a gambler. He wasn’t even into money that much. He had had a stash of 100 bitcoins, which he accidentally wiped when reformatting his hard drive. Though he narrowly escaped great riches, he has no regrets.

But he thought Intrade, an early prediction market that had been abruptly shut down in 2013, was “incredibly cool.” It was unclear what caused Intrade’s shutdown; the company claimed it had been forced to close after discovering “financial irregularities.” Others pointed to a U.S. government lawsuit that prohibited people in the U.S. from using it, which had cut it off from its American market.

Whatever it was, Peterson remembers wishing that “Intrade could be like bitcoin.” By distributing a prediction market’s administration across a global, independent network of computers, he reasoned, it would have no single point of failure. Traders could go on trading, no matter what.

Ethereum Savant

Buterin, whom Krug describes, with mathematical precision, as a “value-added person,” would provide the spark. It was 2014, and his invention, Ethereum, was emerging as a programmable alternative to the bitcoin network. He was mulling over how to resolve a problem with Ethereum’s “smart contracts,” digital agreements that fulfill themselves algorithmically, without human intervention. There was just one problem: what would verify that the conditions of a contract had been met, if not humans?

“Blockchain doesn’t know things from the outside world,” Buterin explains. “It doesn’t know what time it is, what the temperature is.” For complex smart contracts to work, “you need to source that info somewhere,” he says. That’s known as the “Oracle Problem.”

During his research, Buterin came across a widely circulated Princeton treatise, “On decentralizing prediction markets and limit order books,” as well as a paper by Paul Sztorc, a statistician at Yale University, detailing a protocol called “Truthcoin.” Both papers, loosely, advocated deferring the smart contracts’ truth-finding duties to a decentralized network of “reporters,” thereby solving the Oracle Problem by establishing a human link with the code. In their vision, a new kind of prediction market could then run on these contracts, which would dispense payouts automatically, with no need for middlemen, the bookies of the betting world. Thus stimulated, Buterin drafted a blueprint for “Schellingcoin,” which would largely do the same.

Motivated by divisions within the bitcoin community, which Buterin says was then “spiralling into civil war” over technical differences, he published the paper, hoping it might both resolve the Oracle Problem and foster a new kind of “on-chain,” betting-based governance model that could be adopted by the burgeoning Ethereum network. Such a system, he speculated, would encourage his users to put aside their differences and put their money where their mouths were.

The blueprints for Schellingcoin and Truthcoin found their way to Peterson and Krug, as well as to Joe Costello, an angel investor supporting Peterson’s startup Dyffy. Costello, bored on a holiday in the Maldives, read and re-read Truthcoin so many times that he became “obsessed” with the idea of building an advanced version of it; he figured, in time, that Augur would support third-party apps he could profit from. Peterson and Krug were willing to take on the project, which Costello helped kick off with a seed fund of “around $1 million.” (Though Peterson puts it at half a million.) Buterin, as well as “bitcoin billionaire” Jeremy Gardner, would invest too.

Pennies from Heaven

Thus funded, Krug and Peterson, as well as two advisors, wrote a whitepaper detailing how the protocol would run. They named the project Augur, after the Roman seers, or “augurs,” who predicted the future by observing the flight patterns of birds. The logo was a pyramid, with three points converging upon an all-seeing eye.

To start, they built an alpha version on the bitcoin network. Buterin, however, urged them to change tack and build it on Ethereum, which would be easier; so they did, and it was. Within months, they had a working prototype on the Ethereum “testnet” (the Jornada del Muerto desert of decentralized software) that they could use to sell the project to public investors.

For a 45-day period between August and September 2015, they sold off 11 million “REP” tokens at 60 cents apiece, saving 20 percent for themselves. The idea was that these tokens would provide modest financial incentives for their holders.

These holders, in their role as “reporters,” would then collectively act as Augur’s “Oracle” by voting on the outcome of events in exchange for more REP, if they were truthful. If they voted out of line, they would incur a penalty and lose REP. REP holders would also be entitled to 50 percent of all Augur’s trading fees, with the other half going to market makers. (It wasn’t until bitcoin’s vertiginous rise in 2017 that they realised REP could run up in price and, conceivably, generate supplementary income for its holders.)
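The reward-and-penalty scheme just described can be sketched in a few lines. This is a simplified illustration, not Augur’s actual on-chain logic: the flat stake and the plurality rule are my assumptions.

```python
# Schelling-point style oracle: reporters stake REP and report an outcome.
# The plurality report is taken as truth; dissenters forfeit their stake,
# which is split among the reporters who matched consensus.

from collections import Counter

def resolve(reports, stake=10.0):
    """reports: dict of reporter -> reported outcome. Returns (outcome, REP deltas)."""
    outcome, _ = Counter(reports.values()).most_common(1)[0]
    winners = [r for r, v in reports.items() if v == outcome]
    losers = [r for r, v in reports.items() if v != outcome]
    reward = stake * len(losers) / len(winners)  # slashed stake, split evenly
    deltas = {r: reward for r in winners}
    deltas.update({r: -stake for r in losers})
    return outcome, deltas

outcome, deltas = resolve({"alice": "YES", "bob": "YES", "carol": "NO"})
```

The point of the design is that honest reporting is a Schelling point: with no way for strangers to coordinate on a lie, the safest way to keep your stake is to report what everyone else is likely to report, which is the truth.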

The auction went astonishingly well. That same month, the Chinese stock market had tanked, and traditional IPOs were struggling to raise money. Yet Augur, on its 19 pages’ worth of relatively untested ideas, managed to crowdsource $5.3 million without the support of venture capitalists, banks, or any kind of institutional middleman.
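The sale’s arithmetic is easy to check: 80 percent of the 11 million tokens, at 60 cents each, comes to $5.28 million, matching the reported $5.3 million raise.

```python
TOTAL_REP = 11_000_000   # tokens minted for the crowdsale
TEAM_SHARE = 0.20        # fraction the team kept for themselves
PRICE_USD = 0.60         # crowdsale price per REP token

raised = TOTAL_REP * (1 - TEAM_SHARE) * PRICE_USD  # proceeds in dollars
```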

Krug remembers being “pleasantly surprised.”

Pleasantly surprised?

“I was pleasantly shocked,” he says.

Peterson plugging Augur at Mountain View, California’s CryptoEconomicon, 2014.

Buterin, meanwhile, was pleasantly nothing.

“Hmm,” he reminisces.

Did he at least feel proud? His underlying algorithm churns, searching for the correct variant. “Hmm,” he concludes.

Bingo.

The Price of Immortality

That money, along with the cut of the REP tokens the team had reserved for themselves, would prop up the Forecast Foundation, a not-for-profit.

The Foundation would write the team’s checks, support Augur’s development, and, in Costello’s words, generally “keep it alive and well.” Yet at the same time, it would be functionally powerless.

That’s the rub of the decentralized web. Companies, necessarily, must relinquish control over their products, and willingly withdraw themselves as middlemen. Indeed, the Forecast Foundation had, and has, no central power to either shut Augur down or even forcibly upgrade it. Updates to Augur’s interface, like those on the bitcoin network, would only be optional downloads for its users. What’s more, to protect the Foundation from regulation and culpability in the event of Augur’s misuse (see “assassination markets”), the Foundation would take no profit from Augur’s markets.

Yet this swings both ways. With no chance of the Foundation generating revenue from the Augur platform itself, these funds, plus the initial seed investments, would have to carry the project through to the end. If those funds were to diminish, Peterson says, the Foundation would be unable to fund its employees and would have to outsource to “volunteer developers.”

Still, the Augur protocol itself would survive any collapse of the Foundation, albeit in an incomplete form. “If the Forecast Foundation disappears,” Buterin says, “then there’ll be no more future updates.” It’s as if BMW collapsed but somehow kept rolling out half-built cars.

Costello, however, still thinks it’s possible to turn a buck: Augur could provide valuable speculative information to “any marketplace where people are trying to make a prediction,” he says, citing sports and finance as two lucrative areas to mine.

Third-party apps that scrape and resell reliable predictions to gamblers, and people trading equities, he says, could generate handsome returns; prediction markets, drawing as they do on vast reserves of crowd knowledge, have an uncanny tendency to hit the nail on the head when guessing the future.

The Long Code To Prediction

It would take three years for the core development team, which consisted of Krug, Peterson, and two others, Chris Calderon and Scott Leonard, to get the final product off the ground.

Most of the team, including Krug, lived and worked in what they called the Bitcoin Basement, a San Francisco crypto hotspot where their advisor, Gardner, also lived. The Basement, Peterson recalls, was “gross.” Three bedrooms, and one toilet, served six guys, none of whom cleaned. “At least not regularly,” says Peterson. He lived in Oregon at the time and, when visiting, would politely refuse board and sleep in his car. They kept the Crypto Castle, their next (above-ground) home, in “better order,” he says. But not much better.

The team programmed. They pushed out a beta in 2016, but it was far from complete. In the summer of 2017, they found a vulnerability that hackers could exploit to lock users’ money in the smart contracts Augur relied on to dispense payouts automatically. “Anyone who had money in one of those smart contracts would have lost it forever,” says Krug. The team had to port the code from Serpent to Solidity, a programming language that didn’t contain the vulnerability. That delayed launch by another year. Investors were growing restless. Why was it taking so long?

But slowly, Augur came to life.

Growing Pains

Pizza before launch, which would come much later in the evening. From left to right: Tom Kysar, Joey Krug, Alex Chapman (developer), Jack Peterson, Paul Gebheim, and Scott Bigelow.

Augur’s launch was widely covered. Thousands of users flocked to the network, numbers not seen since CryptoKitties crashed the Ethereum network a year before. Most of the $1.5 million traded was wagered on World Cup-related markets, and other markets proliferated at breakneck speed. But the launch soon proved a damp squib.

Detractors lined up to blast the software’s clunky, unintuitive interface, the proliferation of dud markets, the high trading fees, and the dismally low value of the REP token. The cryptocurrency, which once traded at $32 and was meant as an enticement to Augur’s reporters, has since hovered around $12.

These are all valid points, says Krug. “We said it was going to be expensive, slow to use, and have a terrible UX,” he explains. Peterson adds: “People had unrealistic expectations for what that first iteration would look like.” Augur is a 1.0 product. Augur 2.0, expected to drop soon, will address technical issues, largely drawn from a Reddit “wishlist.” (One upgrade will integrate Augur with DAI, a dollar-backed stablecoin that will stop traders’ funds from dropping in value.)

There are other, more fundamental problems. Paul Sztorc, the Yale statistician who authored the seminal Truthcoin paper that so inspired Buterin, had ripped into the project in a massive takedown he published at the end of 2015. The paper lays out every conceivable objection. (The Foundation is also in the midst of a $152 million legal battle with Costello’s former partner, the developer Matt Liston, whom Sztorc supports.)


Will Jennings, media director of PredictIt, a competitor, asserts that Augur never even fully solved the Oracle Problem. People are easily co-opted, he says, and are otherwise unreliable sources of truth. What if, for instance, religious fundamentalists hijacked the network and collectively voted that “God is real,” ensuring that payouts went only to the faithful? (Though the $9.83 riding on that market suggests this won’t happen anytime soon.)

Krug concedes that Augur could indeed be prone to manipulation. But he readily admits the system isn’t perfect. This is a young technology. People thought the first-generation iPhone was a “useless piece of tech,” he says. Now look at it.

And anyway, he and Peterson have since parted ways with Augur.

Though Krug retains his role as “advisor,” he’s now working full time as a hedge fund manager at Pantera Capital. He doesn’t even use his Augur email address anymore. Peterson, meanwhile, has returned to his old love, biophysics.

And why wouldn’t they leave? They created a business whose explicit end state is that no one runs it. So they can jump ship, scot-free, leaving the rest of the development team to subsist on the diminishing winnings of that original, sepia-tinged token sale. And in a way, their departure is a final flourish, a last act of total decentralization: the creators, Augur’s original middlemen, dutifully removing themselves.

And Augur’s muse, Buterin? He frames the whole thing as a massive, cosmic gamble whose payout has yet to be delivered: “We’re staking a claim that if we make blockchain apps more usable, people will use them,” he says, with recursive simplicity.

So will the gamble pay off? Give it some time: it was a decade after Kitty Hawk before the first commercial flight took to the skies. Augur’s small base of users isn’t especially bullish on Augur’s value. But they’re still placing bets.


Concepts to help developers master JavaScript


33 Concepts Every JavaScript Developer Should Know


Introduction

This repository was created with the intention of helping developers master their concepts in JavaScript. It is not a requirement, but a guide for future studies. It is based on an article written by Stephen Curtis and you can read it here. Feel free to contribute.


Table of Contents


1. Call Stack
2. Primitive Types
3. Value Types and Reference Types
4. Implicit, Explicit, Nominal, Structuring and Duck Typing
5. == vs === vs typeof
6. Function Scope, Block Scope and Lexical Scope
7. Expression vs Statement
8. Hoisting
9. IIFE, Modules and Namespaces
10. Message Queue and Event Loop
11. setTimeout, setInterval and requestAnimationFrame
12. Expensive Operation and Big O Notation
13. JavaScript Engines
14. Binary, Hex, Dec, Scientific Notation
15. Bitwise Operators, Type Arrays and Array Buffers
16. DOM and Layout Trees
17. new, Constructor, instanceof and Instances
18. Prototype Inheritance and Prototype Chain
19. Object.create and Object.assign
20. Factories and Classes
21. Memoization
22. Pure Functions, Side Effects and State Mutation
23. map, reduce, filter
24. Closures
25. High Order Functions
26. Abstract Data Structures
27. Recursion
28. Algorithms
29. Inheritance, Polymorphism and Code Reuse
30. Design Patterns
31. Partial Functions, Currying, Compose and Pipe
32. this, call, apply and bind
33. Clean Code

