
Quebec reaches lodging tax deal with Airbnb

The Quebec government has reached an agreement with Airbnb that will see the home-rental website collect a lodging tax on behalf of its hosts.

Under the deal, which goes into effect on Oct. 1, Airbnb will automatically collect and remit the 3.5 per cent tax.

"The agreement announced today is a positive step toward the future and development of tourism in Quebec, since it will make it possible to adapt the taxation system to the new collaborative and digital economy," Julie Boulet, the province's tourism minister, said in a statement. 

Boulet said the agreement aims to address concerns voiced by the hotel industry, who argued Airbnb wasn't operating on a level playing field by not paying the lodging tax.

The money raised will be used to help fund the province's 22 regional tourism offices.

Alex Dagg, Airbnb's public policy manager for Canada, called the deal a "landmark" and a "defining moment for Airbnb in Canada."

"The agreement in Quebec is an example of how Airbnb and government officials can work together as partners," she said in a statement. 

Representatives from Airbnb and the province made the deal official at a joint news conference Tuesday in Montreal.

There are more than 22,000 Airbnb hosts in the province, with an average of 38 nights hosted per year.

In 2016, the Quebec government would have collected $3.7 million in taxes if the new agreement had been in place.

A provincial law that went into effect last year was meant to ensure hosts obtain a permit and pay a hotel tax, but the majority of hosts didn't register with the province.

New legislation will be tabled this fall to ensure Airbnb hosts comply with the new agreement, Boulet said.

"We have to make a distinction, draw a line between what the sharing economy is and what is a business," she told the news conference.


My Institutional Review Board Nightmare

[Epistemic status: Pieced together from memory years after the event. I may have mis-remembered some things or gotten them in the wrong order. Aside from that – and the obvious jokes – this is all true. I’m being deliberately vague in places because I don’t want to condemn anything specific without being able to prove anything.]

September 2014

There’s a screening test for bipolar disorder. You ask patients a bunch of things like “Do you ever feel really happy, then really sad?”. If they say ‘yes’ to enough of these questions, you start to worry.

Some psychiatrists love this test. I hate it. Patients will say “Yes, that absolutely describes me!” and someone will diagnose them with bipolar disorder. Then if you ask what they meant, they’ll say something like “Once my local football team made it to the Super Bowl and I was really happy, but then they lost and I was really sad.” I don’t even want to tell you how many people get diagnosed bipolar because of stuff like this.

There was a study that supposedly proved this test worked. But parts of it confused me, and it was done on a totally different population that didn’t generalize to hospital inpatients. Also, it said in big letters THIS IS JUST A SCREENING TEST IT IS NOT INTENDED FOR DIAGNOSIS, and everyone was using it for diagnosis.

So I complained to some sympathetic doctors and professors, and they asked “Why not do a study?”

Why not do a study? Why not join the great tradition of scientists, going back to Galileo and Newton, and make my mark on the world? Why not replace my griping about bipolar screening with an experiment about bipolar screening, an experiment done to the highest standards of the empirical tradition, one that would throw the entire weight of the scientific establishment behind my complaint? I’d been writing about science for so long, even doing my own informal experiments, why not move on to join the big leagues?

For (it would turn out) a whole host of excellent reasons that I was about to learn.

A spring in my step, I journeyed to my hospital’s Research Department, hidden in a corner office just outside the orthopaedic ward. It was locked, as always. After enough knocking, a lady finally opened the door and motioned for me to sit down at a paperwork-filled desk.

“I want to do a study,” I said.

She looked skeptical. “Have you done the Pre-Study Training?”

I had to admit I hadn’t, so off I went. The training was several hours of videos about how the Nazis had done unethical human experiments. Then after World War II, everybody met up and decided to only do ethical human experiments from then on. And the most important part of being ethical was to have all experiments monitored by an Institutional Review Board (IRB) made of important people who could check whether experiments were ethical or not. I dutifully parroted all this back on the post-test (“Blindly trusting authority to make our ethical decisions for us is the best way to separate ourselves from the Nazis!”) and received my Study Investigator Certification.

I went back to the corner office, Study Investigator Certification in hand.

“I want to do a study,” I said.

The lady still looked skeptical. “Do you have a Principal Investigator?”

Mere resident doctors weren’t allowed to do studies on their own. They would probably screw up and start building concentration camps or something. They needed an attending (high-ranking doctor) to sign on as Principal Investigator before the IRB would deign to hear their case.

I knew exactly how to handle this: one by one, I sought out the laziest attendings in the hospital and asked “Hey, would you like to have your name on a study as Principal Investigator for free while I do all the actual work?” Yet one by one, all of the doctors refused, as if I was offering them some kind of plague basket full of vermin. It was the weirdest thing.

Finally, there was only one doctor left – Dr. W, the hardest-working attending I knew, the one who out of some weird masochistic impulse took on every single project anyone asked of him and micromanaged it to perfection, the one who every psychiatrist in the whole hospital (including himself) had diagnosed with obsessive-compulsive personality disorder.

“Sure Scott,” he told me. “I’d be happy to serve as your Principal Investigator”.

A feeling of dread in my stomach, I walked back to the tiny corner office.

“I want to do a study,” I said.

The lady still looked skeptical. “Have you completed the New Study Application?” She gestured to one of the stacks of paperwork filling the room.

It started with a section on my research question. Next was a section on my proposed methodology. A section on possible safety risks. A section on recruitment. A section on consent. A section on…wow. Surely this can’t all be the New Study Application? Maybe I accidentally picked up the Found A New Hospital Application?

I asked the lady who worked in the tiny corner office whether, since I was just going to be asking bipolar people whether they ever felt happy and then sad, maybe I could get the short version of the New Study Application?

She told me that was the short version.

“But it’s twenty-two pages!”

“You haven’t done any studies before, have you?”

Rather than confess my naivete, I started filling out the twenty-two pages of paperwork. It started by asking about our study design, which was simple: by happy coincidence, I was assigned to Dr. W’s inpatient team for the next three months. When we got patients, I would give them the bipolar screening exam and record the results. Then Dr. W. would conduct a full clinical interview and formally assess them. We’d compare notes and see how often the screening test results matched Dr. W’s expert diagnosis. We usually got about twenty new patients a week; if half of them were willing and able to join our study, we should be able to gather about a hundred data points over the next three months. It was going to be easy-peasy.
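(To make the planned comparison concrete: a validation study like this ultimately boils down to tabulating each patient’s screening result against the expert diagnosis. The sketch below is purely illustrative; the field names, cutoff, and metrics are assumptions for orientation, not the study’s actual analysis plan.)

```typescript
// Illustrative only: how a screening-test validation is typically tabulated.
// Field names and the cutoff are hypothetical, not taken from the study.
interface StudyRecord {
  screeningScore: number;    // patient's score on the bipolar screening test
  expertDiagnosis: boolean;  // Dr. W's clinical diagnosis (true = bipolar)
}

function evaluateScreen(records: StudyRecord[], cutoff: number) {
  let tp = 0, fp = 0, tn = 0, fn = 0;
  for (const r of records) {
    const positive = r.screeningScore >= cutoff;
    if (positive && r.expertDiagnosis) tp++;
    else if (positive && !r.expertDiagnosis) fp++;
    else if (!positive && r.expertDiagnosis) fn++;
    else tn++;
  }
  return {
    sensitivity: tp / (tp + fn), // fraction of true cases the screen catches
    specificity: tn / (tn + fp), // fraction of non-cases it correctly clears
  };
}
```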

That was the first ten pages or so of the Application. The rest was increasingly bizarre questions such as “Will any organs be removed from participants during this study?” (Look, I promise, I’m not a Nazi).

And: “Will prisoners be used in the study?” (COME ON, I ALREADY SAID I WASN’T A NAZI).

And: “What will you do if a participant dies during this research?” (If somebody dies while I’m asking them whether they sometimes feel happy and then sad, I really can’t even promise so much as “not freaking out”, let alone any sort of dignified research procedure).

And more questions, all along the same lines. I double-dog swore to give everybody really, really good consent forms. I tried my best to write a list of the risks participants were taking upon themselves (mostly getting paper cuts on the consent forms). I argued that these compared favorably to the benefits (maybe doctors will stop giving people strong psychiatric medications just because their football team made the Super Bowl).

When I was done, I went back to the corner office and submitted everything to the Institutional Review Board. Then I sat back and hoped for the best. Like an idiot.

October 2014

The big day arrived. The IRB debated the merits of my study, examined the risks, and…sent me a letter pointing out several irregularities in my consent forms.

IRREGULARITY #1: Consent forms traditionally included the name of the study in big letters where the patient could see it before signing. Mine didn’t. Why not?

Well, because in questionnaire-based psychological research, you never tell the patient what you’re looking for before they fill out the questionnaire. That’s like Methods 101. The name of my study was “Validity Of A Screening Instrument For Bipolar Disorder”. Tell the patient it’s a study about bipolar disorder, and the gig is up.

The IRB listened patiently to my explanation, then told me that this was not a legitimate reason not to put the name of the study in big letters on the consent form. Putting the name of the study on the consent form was important. You know who else didn’t put the name of the study on his consent forms? Hitler.

IRREGULARITY #2: Consent forms traditionally included a paragraph about the possible risks of the study and a justification for why we believed that the benefits were worth the risks. Everyone else included a paragraph about this on their consent forms, and read it to their patients before getting their consent. We didn’t have one. Why not?

Well, for one thing, because all we were doing was asking them whether they felt happy and then sad sometimes. This is the sort of thing that goes on every day in a psychiatric hospital. Heck, the other psychiatrists were using this same screening test, except for real, and they never had to worry about whether it had risks. In the grand scheme of things, this just wasn’t a very risky procedure.

Also, psychiatric patients are sometimes…how can I put this nicely?…a little paranoid. Sometimes you can offer them breakfast and they’ll accuse you of trying to poison them. I had no illusions that I would get every single patient to consent to this study, but I felt like I could at least avoid handing them a paper saying “BY THE WAY, THIS STUDY IS FULL OF RISKS”.

The IRB listened patiently to my explanation, then told me that this was not a legitimate reason not to have a paragraph about risks. We should figure out some risks, then write a paragraph explaining how those were definitely the risks and we took them very seriously. The other psychiatrists who used this test every day didn’t have to do that because they weren’t running a study.

IRREGULARITY #3: Signatures are traditionally in pen. But we said our patients would sign in pencil. Why?

Well, because psychiatric patients aren’t allowed to have pens in case they stab themselves with them. I don’t get why stabbing yourself with a pencil is any less of a problem, but the rules are the rules. We asked the hospital administration for a one-time exemption, to let our patients have pens just long enough to sign the consent form. Hospital administration said absolutely not, and they didn’t care if this sabotaged our entire study, it was pencil or nothing.

The IRB listened patiently to all this, then said that it had to be in pen. You know who else had people sign consent forms in pencil…?

I’m definitely not saying that these were the only three issues the IRB sprung on Dr. W and me. I’m saying these are a representative sample. I’m saying I spent several weeks relaying increasingly annoyed emails and memos from myself to Dr. W to the IRB to the lady in the corner office to the IRB again. I began to come home later in the evening. My relationships suffered. I started having dreams about being attacked by giant consent forms filled out in pencil.

I was about ready to give up at this point, but Dr. W insisted on combing through various regulations and talking to various people, until he discovered some arcane rule that certain very safe studies with practically no risk were allowed to use an “expedited consent form”, which was a lot like a normal consent form but didn’t need to have things like the name of the study on it. Faced with someone even more obsessive and bureaucratic than they were, the IRB backed down and gave us preliminary permission to start our study.

The next morning, screening questionnaire in hand, I showed up at the hospital and hoped for the best. Like an idiot.

November 2014

Things progressed slowly. It turns out a lot of psychiatric inpatients are either depressed, agitated, violent, or out of touch with reality, and none of these are really conducive to wanting to participate in studies. A few of them already delusionally thought we were doing experiments on them, and got confused when we suddenly asked them to consent. Several of them made it clear that they hated us and wanted to thwart us in any way possible. After a week, I only had three data points, instead of the ten I’d been banking on.

“Data points” makes it sound abstract. It wasn’t. I had hoped to put the results in the patients’ easily accessible online chart, the same place everyone else put the results of the exact same bipolar screening test when they did it for real. They would put it in a section marked TEST RESULTS, which was there to have a secure place where you could put test results, and where everybody’s secure test results were kept.

The IRB would have none of this. Study data are Confidential and need to be kept Secure. Never mind that all the patients’ other secure test results were on the online chart. Never mind that the online chart contains all sorts of stuff about the patients’ diagnoses, medications, hopes and fears, and even (remember, this is a psych hospital) secret fetishes and sexual perversions. Study data needed to be encrypted, then kept in a Study Binder in a locked drawer in a locked room that nobody except the study investigators had access to.

The first problem was that nobody wanted to give us a locked room that nobody except us had access to. There was a sort of All Purpose Psychiatry Paperwork room, but the janitors went in to clean it out every so often, and apparently this made it unacceptable. Hospitals aren’t exactly drowning in spare rooms that not even janitors can get into. Finally Dr. W grudgingly agreed to keep it in his office. This frequently meant I couldn’t access any of the study material because Dr. W was having important meetings that couldn’t be interrupted by a resident barging into his office to rummage in his locked cabinets.

But whatever. The bigger problem was the encryption. There was a very specific way we had to do it. We would have a Results Log, that said things like “Patient 1 got a score of 11.5 on the test”. And then we’d have a Secret Patient Log, which would say things like “Patient 1 = Bob Johnson from Oakburg.” That way nobody could steal our results and figure out that Bob was sometimes happy, then sad.

(meanwhile, all of Bob’s actual diagnoses, sexual fetishes, etc were in the easily-accessible secure online chart that we were banned from using)

And then – I swear this is true – we had to keep the Results Log and the Secret Patient Log right next to each other in the study binder in the locked drawer in the locked room.

I wasn’t sure I was understanding this part right, so I asked Dr. W whether it made sense, to him, that we put a lot of effort into writing our results in code, and then put the key to the code in the same place as the enciphered text. He cheerfully agreed this made no sense, but said we had to do it or else our study would fail an audit and get shut down.

January 2015

I’d planned to get a hundred data points in three months. Thanks to constant bureaucratic hurdles, plus patients being less cooperative than I expected, I had about twenty-five. Now I was finishing my rotation on Dr. W’s team and going to a clinic far away. What now?

A bunch of newbies were going to be working with Dr. W for the next three months. I hunted them down and threatened and begged them until one of them agreed to keep giving patients the bipolar screening test in exchange for being named as a co-author. Disaster averted, I thought. Like an idiot.

Somehow news of this arrangement reached the lady in the corner office, who asked whether the new investigator had completed her Pre-Study Training. I protested that she wasn’t designing the study, she wasn’t conducting any analyses, all she was doing was asking her patients the same questions that she would be asking them anyway as part of her job for the next three months. The only difference was that she was recording them and giving them to me.

The lady in the corner office wasn’t impressed. You know who else hadn’t thought his lackeys needed to take courses in research ethics?

So the poor newbie took a course on how Nazis were bad. Now she could help with the study, right?

Wrong. We needed to submit a New Investigator Form to the IRB and wait for their approval.

Two and a half months later, the IRB returned their response: Newbie was good to go. She collected data for the remaining two weeks of her rotation with Dr. W before being sent off to another clinic just like I was.

July 2015

Dr. W and I planned ahead. We had figured out which newbies would be coming in to work for Dr. W three months ahead of time, and gotten them through the don’t-be-a-Nazi course and the IRB approval process just in time for them to start their rotation. Success!

Unfortunately, we received another communication from the IRB. Apparently we were allowed to use the expedited consent form to get consent for our study, but not to get consent to access protected health information. That one required a whole different consent form, list-of-risks and all. We were right back where we’d started from.

I made my case to the Board. My case was: we’re not looking at any protected health information, f@#k you.

The Board answered that we were accessing the patient’s final diagnosis. It said right in the protocol, we were giving them the screening test, then comparing it to the patient’s final diagnosis. “Psychiatric diagnosis” sure sounds like protected health information.

I said no, you don’t understand, we’re the psychiatrists. Dr. W is the one making the final diagnosis. When I’m on Dr. W’s team, I’m in the room when he does the diagnostic interview, half the time I’m the one who types the final diagnosis into the chart. These are our patients.

The Board said this didn’t matter. We, as the patient’s doctors, would make the diagnosis and write it down on the chart. But we (as study investigators) needed a full signed consent form before we were allowed to access the diagnosis we had just made.

I said wait, you’re telling us we have to do this whole bureaucratic rigamarole with all of these uncooperative patients before we’re allowed to see something we wrote ourselves?

The Board said yes, exactly.

I don’t remember this part very well, except that I think I half-heartedly trained whichever poor newbie we were using that month in how to take a Protected Health Information Consent on special Protected Health Information Consent Forms, and she nodded her head and said she understood. I think I had kind of clocked out at this point. I was going off to work all the way over in a different town for a year, and I was just sort of desperately hoping that Dr. W and various newbies would take care of things on their own and then in a year when I came back to the hospital I would have a beautiful pile of well-sorted data to analyze. Surely trained doctors would be able to ask simple questions from a screening exam on their own without supervision, I thought. Like an idiot.

July 2016

I returned to my base hospital after a year doing outpatient work in another town. I felt energized, well-rested, and optimistic that the bipolar screening study I had founded so long ago had been prospering in my absence.

Obviously nothing remotely resembling this had happened. Dr. W had vaguely hoped that I was taking care of it. I had vaguely hoped that Dr. W was taking care of it. The various newbies whom we had strategically enlisted had either forgotten about it, half-heartedly screened one or two patients before getting bored, or else mixed up the growing pile of consent forms and releases and logs so thoroughly that we would have to throw out all their work. It had been a year and a half since the study had started, and we had 40 good data points.

The good news was that I was back in town and I could go back to screening patients myself again. Also, we had some particularly enthusiastic newbies who seemed really interested in helping out and getting things right. Over the next three months, our sample size shot up, first to 50, then to 60, finally to 70. Our goal of 100 was almost in sight. The worst was finally behind me, I hoped. Like an idiot.

November 2016

I got an email saying our study was going to be audited.

It was nothing personal. Some higher-ups in the nationwide hospital system had decided to audit every study in our hospital. We were to gather all our records, submit them to the auditor, and hope for the best.

Dr. W, who was obsessive-compulsive at the best of times, became unbearable. We got into late-night fights over the number of dividers in the study binder. We hunted down every piece of paper that had ever been associated with anyone involved in the study in any way, and almost came to blows over how to organize it. I started working really late. My girlfriend began to doubt I actually existed.

The worst part was all the stuff the newbies had done. Some of them would have the consent sheets numbered in the upper left-hand-corner instead of the upper-right-hand corner. Others would have written the patient name down on the Results Log instead of the Secret Code Log right next to it. One even wrote something in green pen on a formal study document. It was hopeless. Finally we just decided to throw away all their data and pretend it had never existed.

With that decision made, our work actually started to look pretty good. As bad as it was working for an obsessive-compulsive boss in an insane bureaucracy, at least it had the advantage that – when nitpicking push came to ridiculous shove – you were going to be super-ready to be audited. I hoped. Like an idiot.

December 2016

The auditor found twenty-seven infractions.

She was very apologetic about it. She said that was actually a pretty good number of infractions for a study this size, that we were actually doing pretty well compared to a lot of the studies she’d seen. She said she absolutely wasn’t going to shut us down, she wasn’t even going to censure us. She just wanted us to make twenty-seven changes to our study and get IRB approval for each of them.

I kept the audit report as a souvenir. I have it in front of me now. Here’s an example infraction:

The data and safety monitoring plan consists of ‘the Principal Investigator will randomly check data integrity’. This is a prospective study with a vulnerable group (mental illness, likely to have diminished capacity, likely to be low income) and, as such, would warrant a more rigorous monitoring plan than what is stated above. In addition to the above, a more adequate plan for this study would also include review of the protocol at regular intervals, on-going checking of any participant complaints or difficulties with the study, monitoring that the approved data variables are the only ones being collected, regular study team meetings to discuss progress and any deviations or unexpected problems. Team meetings help to assure participant protections, adherence to the protocol. Having an adequate monitoring plan is a federal requirement for the approval of a study. See Regulation 45 CFR 46.111 Criteria For IRB Approval Of Research. IRB Policy: PI Qualifications And Responsibility In Conducting Research. Please revise the protocol via a protocol revision request form. Recommend that periodic meetings with the research team occur and be documented.

Among my favorite other infractions:

1. The protocol said we would stop giving the screening exam to patients if they became violent, but failed to rigorously define “violent”.

2. We still weren’t educating our patients enough about “Alternatives To Participating In This Study”. The auditor agreed that the only alternative was “not participating in this study”, but said that we had to tell every patient that, then document that we’d done so.

3. The consent forms were still getting signed in pencil. We are never going to live this one down. If I live to be a hundred, representatives from the IRB are going to break into my deathbed room and shout “YOU LET PEOPLE SIGN CONSENT FORMS IN PENCIL, HOW CAN YOU JUSTIFY THAT?!”

4. The woman in the corner office who kept insisting everybody take the Pre-Study Training…hadn’t taken the Pre-Study Training, and was therefore unqualified to be our liaison with the IRB. I swear I am not making this up.

Faced with submitting twenty-seven new pieces of paperwork to correct our twenty-seven infractions, Dr. W and I gave up. We shredded the patient data and the Secret Code Log. We told all the newbies they could give up and go home. We submitted the Project Closure Form to the woman in the corner office (who as far as I know still hasn’t completed her Pre-Study Training). We told the IRB that they had won, fair and square; we surrendered unconditionally.

They didn’t seem the least bit surprised.

August 2017

I’ve been sitting on this story for a year. I thought it was unwise to publish it while I worked for the hospital in question. I still think it’s a great hospital, that it delivers top-notch care, that it has amazing doctors, that it has a really good residency program, and even that the Research Department did everything it could to help me given the legal and regulatory constraints. I don’t want this to reflect badly on them in any way. I just thought it was wise to wait a year.

During that year, Dr. W and I worked together on two less ambitious studies, carefully designed not to require any contact with the IRB. One was a case report, the other used publicly available data.

They won 1st and 2nd prize at a regional research competition. I got some nice certificates for my wall and a little prize money. I went on to present one of them at the national meeting of the American Psychiatric Association, a friend helped me write it up formally, and it was recently accepted for publication by a medium-tier journal.

I say this not to boast, but to protest that I’m not as much of a loser as my story probably makes me sound. I’m capable of doing research, I think I have something to contribute to Science. I still think the bipolar screening test is inappropriate for inpatient diagnosis, and I still think that patients are being harmed by people’s reliance on it. I still think somebody should look into it and publish the results.

I’m just saying it’s not going to be me. I am done with research. People keep asking me “You seem really into science, why don’t you become a researcher?” Well…

I feel like a study that realistically could have been done by one person in a couple of hours got dragged out into hundreds of hours of paperwork hell for an entire team of miserable doctors. I think its scientific integrity was screwed up by stupid requirements like the one about breaking blinding, and the patients involved were put through unnecessary trouble by being forced to sign endless consent forms screaming to them about nonexistent risks.

I feel like I was dragged almost to the point of needing to be in a psychiatric hospital myself, while my colleagues who just used the bipolar screening test – without making the mistake of trying to check if it works – continue to do so without anybody questioning them or giving them the slightest bit of aggravation.

I feel like some scientists do amazingly crappy studies that couldn’t possibly prove anything, but get away with it because they have a well-funded team of clerks and secretaries who handle the paperwork for them. And that I, who was trying to do everything right, got ground down with so many pointless security-theater-style regulations that I’m never going to be able to do the research I would need to show they’re wrong.

In the past year or so, I’ve been gratified to learn some other people are thinking along the same lines. Somebody linked me to The Censor’s Hand, a book by a law/medicine professor at the University of Michigan. A summary from a review:

Schneider opens by trying to tally the benefits of IRB review. “Surprisingly,” he writes, a careful review of the literature suggests that “research is not especially dangerous. Some biomedical research can be risky, but much of it requires no physical contact with patients and most contact cannot cause serious injury. Ill patients are, if anything, safer in than out of research.” As for social-science research, “its risks are trivial compared with daily risks like going online or on a date.”

Since the upsides of IRB review are likely to be modest, Schneider argues, it’s critical to ask hard questions about the system’s costs. And those costs are serious. To a lawyer’s eyes, IRBs are strangely unaccountable. They don’t have to offer reasons for their decisions, their decisions can’t be appealed, and they’re barely supervised at the federal level. That lack of accountability, combined with the gauzy ethical principles that govern IRB deliberations, is a recipe for capriciousness. Indeed, in Schneider’s estimation, IRBs wield coercive government power—the power to censor university research—without providing due process of law.

And they’re not shy about wielding that power. Over time, IRB review has grown more and more intrusive. Not only do IRBs waste thousands of researcher hours on paperwork and elaborate consent forms that most study participants will never understand. Of greater concern, they also superintend research methods to minimize perceived risks. Yet IRB members often aren’t experts in the fields they oversee. Indeed, some know little or nothing about research methods at all.

IRBs thus delay, distort, and stifle research, especially research on vulnerable subgroups that may benefit most from it. It’s hard to be precise about those costs, but they’re high: after canvassing the research, Schneider concludes that “IRB regulation annually costs thousands of lives that could have been saved, unmeasurable suffering that could have been softened, and uncountable social ills that could have been ameliorated.”

This view seems to be growing more popular lately, and has gotten support from high-profile academics like Richard Nisbett and Steven Pinker.

And there’s been some recent reform, maybe. The federal Office for Human Research Protections made a vague statement that perhaps studies that obviously aren’t going to hurt anybody might not need the full IRB treatment. There’s still a lot of debate about how this will be enforced and whether it’s going to lead to any real-life changes. But I’m glad people are starting to think more about these things.

(I’m also glad people are starting to agree that getting rid of a little oversight for the lowest-risk studies is a good compromise, and that we don’t have to start with anything more radical.)

I sometimes worry that people misunderstand the case against bureaucracy. People imagine it’s Big Business complaining about the regulations preventing them from steamrolling over everyone else. That hasn’t been my experience. Big Business – heck, Big Anything – loves bureaucracy. They can hire a team of clerks and secretaries and middle managers to fill out all the necessary forms, and the rest of the company can be on their merry way. It’s everyone else who suffers. The amateurs, the entrepreneurs, the hobbyists, the people doing something as a labor of love. Wal-Mart is going to keep selling groceries no matter how much paperwork and inspections it takes; the poor immigrant family with the backyard vegetable garden might not.

Bureaucracy in science does the same thing: limit the field to big institutional actors with vested interests. No amount of hassle is going to prevent the Pfizer-Merck-Novartis Corporation from doing whatever study will raise their bottom line. But enough hassle will prevent a random psychiatrist at a small community hospital from pursuing his pet theory about bipolar diagnosis. The more hurdles we put up, the more the scientific conversation skews in favor of Pfizer-Merck-Novartis. And the less likely we are to hear little stuff, dissenting voices, and things that don’t make anybody any money.

I’m not just talking about IRBs here. I could write a book about this. There are so many privacy and confidentiality restrictions around the most harmless of datasets that research teams won’t share data with one another (let alone with unaffiliated citizen scientists) lest they break some arcane regulation or other. Closed access journals require people to pay thousands of dollars in subscription fees before they’re allowed to read the scientific literature; open-access journals just shift the burden by requiring scientists to pay thousands of dollars to publish their research. Big research institutions have whole departments to deal with these kinds of problems; unaffiliated people who just want to look into things on their own are out of luck.

And this is happening at the same time we’re becoming increasingly aware of the shortcomings of big-name research. Half of psychology studies fail replication; my own field of psychiatry is even worse. And citizen-scientists and science bloggers are playing a big part in debunking bad research: here I’m thinking especially of statistics bloggers like Andrew Gelman and Daniel Lakens, but there are all sorts of people in this category. And both Gelman and Lakens are PhDs with institutional affiliations – “citizen science” doesn’t mean random cavemen who don’t understand the field – but they’re both operating outside their day job, trying to contribute a few hours per project instead of a few years. I know many more people like them – smart, highly-qualified, but maybe not going to hire a team of paper-pushers and spend thousands of dollars in fees in order to say what they have to say. Even now these people are doing great work – but I can’t help but feel like more is possible.

IRB overreach is a small part of the problem. But it’s the part which sunk my bipolar study, a study I really cared about. I’m excited that there’s finally more of a national conversation about this kind of thing, and hopeful that further changes will make scientific efforts easier and more rewarding for the next generation of doctors.

The Lost Pleasure of Reading Aloud

‘I have nothing to doe but work and read my Eyes out,’ complained Anne Vernon in 1734, writing from her country residence in Oxfordshire to a friend in London. She and her circle of correspondents (who included Mary Delany, the artist and bluestocking) swapped rhyming jokes, ‘a Dictionary of hard words’, and notes on what they were currently reading. Their letters are suggestive of the boredom suffered by women of a certain class, constrained by social respectability and suffering the restlessness of busy but unfulfilled minds.

But that’s not their interest for Abigail Williams in this fascinating study of habits of reading in the Georgian period. Her quest is rather to discover how they read, in which room of the house, who with, out loud or alone and silently, as entertainment or education. A professor of English literature at Oxford University, she has turned her attention away from the content of books to focus on the ways in which that content is received and appreciated.

How books are read is as important as what’s in them, argues Williams persuasively, and her book charts her exhaustive forays into a multiplicity of sources, reading between the lines of diaries, letters and library records to glean an understanding of ‘what books have meant to readers in the past’. It has long been thought, for instance, that the print revolution of the 18th century resulted in a shift from oral to silent reading, from shared reading to indulging in a book of one’s own, as books became more available to a wider range of people while leisure time also increased. But, says Williams, such a clear-cut transition is difficult to trace.

On the contrary, reading aloud remained as popular as it had ever been because it was sociable and gave participants a glancing acquaintance with books that might otherwise take weeks to read (such as Samuel Richardson’s five-volume novel Clarissa) or be beyond the budget of a housemaid or stonemason. Sharing of books and communal reading staved off the boredom of long, dark winter nights while at the same time providing opportunities for self-improvement. (The Margate circulating library, we discover, had 600 sermons in its collection.)

‘Tales of Wonder’ by James Gillray

Reading out loud was also encouraged as a defence against the ‘seductive, enervating dangers’ of sentimental novels, and the ‘indelicacy’ of certain plays. Like the nine o’clock watershed or the internet filter, suggests Williams, reading in company was an attempt to ensure that young people were not corrupted by too much acquaintance with writers like Shakespeare. Thomas Bowdler’s Family Shakespeare, subtitled ‘in Which Nothing is Added to the Original Text but those Words and Expressions are Omitted Which Cannot with Propriety be Read Aloud in a Family’, was extremely popular in spite of, or rather because of, the removal of ‘objectionable expressions’ and ‘redundant’ passages.

We might think our multimedia habits of channel-surfing while reading Twitter and replying to emails are evidence of our advanced 21st-century brain capacity and powers of attention amid chaotic busyness. But here we find Mary Delany recommending James Boswell’s Tour of the Hebrides as an excellent book to listen to while doing knotting work. Hairdressing and powdering in this age of wigs was often a good time to catch up on the latest novel — the bindings of books once owned by circulating libraries are often ‘cracked by quantities of powder, and pomantum between the leaves’. Servants, too, were often part of the reading experience, Hester Thrale noting in her diary that her maid listened in while she read from the original Spectator to her daughters.

Not only were abridged collections, such as Bowdler’s edition, produced to satisfy the demand for plays that could be staged at home (illustrated costume suggestions and acting directions were often included), but also shortened versions of novels were printed in magazines, in serial form, as folded pamphlets, while favourite scenes and passages were collected into volumes of ‘Beauties’ (75 per cent of the surviving copies of Robinson Crusoe are abridgements). At the same time Spouting Clubs emerged, where tradesmen and merchants could practise reading verses and passages from books following the advice of newly published books on elocution and delivery: ‘The Mouth should not be writh’d, the Lips bit or lick’d, the Shoulders shrugg’d, nor the Belly thrust out.’

On 1 July 1780 Anna Larpent read from four different books in a single day with four different people. None of them will have heard more than a small portion of the whole book. Sampling, excerpting, reading again were common practice as readers moved between different genres. But Williams begins by quoting from the journal of Dorothy Wordsworth, who after that famous afternoon walk in April 1802 when she and her brother came upon the daffodils dancing in the breeze later went into a tavern for refreshment. There, she and William found a pile of books and spent the rest of the day drinking tots of warm rum and reading together from a volume of Congreve — a single journal entry illustrating how poetry was made but also how literature was best enjoyed, impromptu, in company, at leisure.

Python Data Science Handbook

Engineering Uber's Self-Driving Car Visualization Platform for the Web

The ATG (Advanced Technologies Group) at Uber is shaping the future of driverless transportation. Earlier this year, the Data Visualization Team—which uses visualization for exploration, inspection, debugging and exposition of data—partnered with the ATG to improve how its self-driving vehicles (cars and trucks) interpret and perceive the world around them.

Using some of the latest web-based visualization technologies, the ATG Visualization team built a platform that enables engineers and operators across ATG to quickly inspect, debug, and explore information collected from offline and online testing. This is critical for ensuring that users of this technology understand issues quickly, iterate fast, and increase productivity across a wide range of ever-evolving use cases. There was also a need for a simpler visual language and UX design to convey all of the detailed, technical information to operators and engineers.

In this article, we describe how our Data Visualization team built this platform and explore the challenges of combining complex and diverse datasets into a reusable and performant web component.

Choosing the web

There are many interesting discussions around why the Web might be the right choice for building a self-driving car visualization platform. We turned to the Web for the following reasons:

  1. Fast cycle of iteration. On the Web, it is quick and easy to develop and deploy features incrementally. If a user wants the latest version of a product, they only need to refresh the browser’s page as opposed to downloading and installing a new app.
  2. Flexibility and shareability. Since the Web is hardware-agnostic, anyone, anywhere is able to work on the platform using any operating system—in fact, the browser becomes the operating system. Moving this system to the web bridged any team-wide OS divides and opened up the possibility of scaling the team beyond ATG headquarters in Pittsburgh. On the Web, reporting and diagnosing an incident is only one URL click away.
  3. Collaboration and customization. As a fast-evolving technology, self-driving vehicles never cease to produce new datasets, metrics, or use cases. New services and endpoints are added all the time. Each team at ATG has unique visualization and data generation needs. As such, they need to be able to customize their  experiences. HTML5 and JavaScript are tested and trusted tools for creating custom UI on the fly, and are easily integrated into other infrastructure and task management systems.

Uniting diverse data sources

To understand the decisions made by an autonomous vehicle, a large amount of data is required to recreate the context around a trip. This includes maps that are preprocessed and vehicle logs that are generated at runtime.

Maps describe the connectivity and constraints of roads in a city. On top of what is available via Uber’s proprietary web-based map, maps for self-driving vehicles contain far more detail: for example, high-resolution scans of the ground surface, lane boundaries and types, turn and speed limits, and crosswalks—basically any other relevant map information.

Maps teams use the platform to inspect 3D map details at a given intersection.

Vehicle logs describe what the vehicle is doing, what it sees, and how it acts. Three critical algorithmic stages run on the sensor data: perception (measuring), prediction (forecasting), and motion planning (acting). In order to successfully operate, a vehicle needs to be able to perceive the objects and activity around it through its sensors. Based on that information, it can predict where those objects will be in the near future, which provides enough information to properly plan its next move (think: changing lanes or stopping at a stop sign).

Operators use the platform to inspect perception and prediction data of an autonomous vehicle.
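As a rough illustration of how such a log might be structured per frame, each playback frame would carry the vehicle’s pose plus the outputs of the three stages. The interfaces below are hypothetical, purely for orientation, and not ATG’s actual data model:

```typescript
// Hypothetical shape of a single vehicle-log frame; names are illustrative,
// not Uber ATG's actual schema.
interface TrackedObject {
  id: string;
  classification: 'vehicle' | 'pedestrian' | 'cyclist' | 'unknown';
  position: [number, number, number];    // metres, relative to the vehicle
  predictedPath: [number, number][];     // forecast positions (prediction stage)
}

interface LogFrame {
  timestamp: number;                     // seconds since the start of the log
  vehiclePose: {
    position: [number, number];          // e.g. UTM easting/northing
    heading: number;                     // radians
  };
  lidarPoints: Float32Array;             // flat xyz point cloud (perception input)
  trackedObjects: TrackedObject[];       // perception + prediction output
  plannedTrajectory: [number, number][]; // motion-planning output
}
```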

Working with our self-driving engineers, we experimented with and formalized a system of visual metaphors to represent complex data. The system offers realistic representations of environmental elements such as cars, ground imagery, lane markers, and signs, which enable operators and engineers to anchor their understanding of the vehicle’s surroundings.

To help engineers peek into alternative decisions or time slices from the future, the system also offers abstract representation for algorithmically generated information such as object classification, prediction, planning, and lookaheads by way of color and geometric coding.

One of the biggest challenges of bringing these data sources together into one unified view is dealing with disparate geospatial coordinates. Different services model their data in different coordinate systems: some in latitude/longitude, some in the Universal Transverse Mercator coordinate system (UTM), some relative to an absolute world position, and others relative to the position and orientation of the vehicle. Furthermore, all the positions are updated at high frequency during playback, being sampled multiple times a second. To convert these coordinates efficiently and project them accurately, we delegated the heavy lifting to our GPU.
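As a simplified illustration of the kind of transform involved (shown on the CPU for readability, whereas the production system does the equivalent work on the GPU), a point expressed relative to the vehicle can be rotated by the vehicle’s heading and translated by its world position. This is a generic 2D rigid transform, not Uber’s implementation:

```typescript
// Generic 2D rigid transform, illustrative only: convert a point expressed
// relative to the vehicle (x forward, y left) into a shared world frame,
// given the vehicle's world position and heading in radians.
function vehicleToWorld(
  point: [number, number],
  vehicle: { x: number; y: number; heading: number }
): [number, number] {
  const cos = Math.cos(vehicle.heading);
  const sin = Math.sin(vehicle.heading);
  const [px, py] = point;
  return [
    vehicle.x + px * cos - py * sin,
    vehicle.y + px * sin + py * cos,
  ];
}

// Example: a point 10 m ahead of a vehicle at (100, 200) facing 90 degrees
// lands 10 m along the world y-axis from the vehicle.
vehicleToWorld([10, 0], { x: 100, y: 200, heading: Math.PI / 2 }); // ~[100, 210]
```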

Rendering performant 3D scenes with WebGL

Uber’s Visualization Team maintains a suite of frameworks for web-based large scale data visualization, including react-map-gl and deck.gl. These frameworks leverage the GPU capacities in the browser to display millions of geometries at a high frame rate. If visualization is interpreted as mapping from the “bit” (data structure) to the “pixel” (graphics)—essentially applying the same transformation on millions of inputs—the GPU is naturally the ideal venue, as it is designed to perform the same task repeatedly in parallel.

deck.gl layers render GeoJSON, point cloud, and grid visualizations.

Ultimately, performance is the determining success factor for this collaboration. ATG engineers and vehicle operators need the vehicle logs to play in real time while they smoothly manipulate the camera and select objects in a scene. This is where our advanced deck.gl data visualization framework comes into play.

The latest release of deck.gl features numerous performance optimizations and graphic features that are driven by use cases originating from our work with ATG. Each layer in the deck.gl context renders a data source with a given look: meshes (ground surfaces and cars), paths (lanes and trajectories), extruded polygons (other objects on the road), or point clouds (3D objects without current semantic meaning). Each layer can also specify its own coordinate system while sharing the same camera view. A typical log snippet renders 60-100 layers at 30-50 frames per second.
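A minimal sketch of what such a layer composition can look like with deck.gl is shown below. The data, coordinate origin, and prop values are made up for illustration, and exact prop names vary across deck.gl versions:

```typescript
// Minimal sketch: deck.gl layers using vehicle-relative coordinates
// (metre offsets from an anchor position) while sharing one camera.
import {Deck, COORDINATE_SYSTEM, PathLayer, PointCloudLayer} from 'deck.gl';

type Trajectory = {path: [number, number][]};
type LidarPoint = {position: [number, number, number]};

// Hypothetical anchor position for the vehicle (lng, lat, altitude).
const vehicleOrigin: [number, number, number] = [-79.993, 40.44, 0];

const layers = [
  // Planned trajectory, expressed as metre offsets from the vehicle origin
  new PathLayer<Trajectory>({
    id: 'planned-trajectory',
    data: [{path: [[0, 0], [5, 1], [10, 3]]}],
    coordinateSystem: COORDINATE_SYSTEM.METER_OFFSETS,
    coordinateOrigin: vehicleOrigin,
    getPath: d => d.path,
    getWidth: 0.5,
    getColor: [0, 180, 255],
  }),
  // Raw lidar returns, also relative to the vehicle
  new PointCloudLayer<LidarPoint>({
    id: 'lidar',
    data: [{position: [2, 3, 0.5]}, {position: [4, -1, 0.2]}],
    coordinateSystem: COORDINATE_SYSTEM.METER_OFFSETS,
    coordinateOrigin: vehicleOrigin,
    getPosition: d => d.position,
    pointSize: 2,
  }),
];

// One camera (view state) drives every layer, regardless of its coordinate system.
new Deck({
  initialViewState: {longitude: vehicleOrigin[0], latitude: vehicleOrigin[1], zoom: 19, pitch: 60, bearing: 0},
  controller: true,
  layers,
});
```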

Users may switch between different camera modes during a log playback.

Autonomy engineers use the platform to compare simulations generated with two versions of software.

Next steps

Given the rapid pace of ATG’s development, having the right products for Uber’s self-driving engineers is key for our growth. We are excited to architect the future of transportation with driverless vehicles; are you onboard?

If mapping self-driving technologies interests you, consider applying for a role on Uber’s Data Visualization Team. To learn more in person, attend our upcoming meetup discussing this and other amazing visualization work in our Seattle Engineering office on 8/31.

Xiaoji Chen is a software engineer on the Uber Data Visualization team.

The life and death of a startup

12:06 PM, 27 August 2017

At DjangoCon Europe 2017, and again at DjangoCon US 2017, I gave a talk entitled "Autopsy of a slow train wreck: The life and death of a Django startup". After I gave those presentations, a number of people requested that I publish the content in blog form so they could share it with colleagues.

Transcript

I've been a frequent (almost constant) fixture at DjangoCon events over the last 10 years. And if you met me at one of those DjangoCons in the last 6 years, or saw me speak, I may have introduced myself as the CTO and co-founder of TradesCloud. TradesCloud was a software as a service company for tradespeople - plumbers, electricians, carpenters and the like.

TradesCloud was my startup. I say "was"... because in January of this year, my business partner and I closed the doors on TradesCloud, and shut down the service.

Gold-plated Lamborghinis

As an industry, we're fond of promoting the glossy side of startups. That a plucky bunch of engineers can take an idea and a personal credit card, build a business empire, and drive off into the sunset in a gold-plated Lamborghini.

And yes - those unicorns - and gold plated lamborghinis - do exist. There are even a couple of them in the Django community (the unicorns, not the lamborghinis).

But it's important to remember that those stories are unicorns. They're not the normal startup experience for most people.

The reality...

In the VC-backed startup world, the general expectation is that if a VC firm invests in 20 companies, only 1 of them will actually succeed spectacularly. 4 will have some sort of exit that at least breaks even financially; but 15 will fail outright, with a significant or complete financial loss.

Interestingly, this isn't something unique to tech. Tech does it at a much grander scale, especially when VCs are involved - but open any small business advice book - the sort that is targeted at the plumbers and electricians of the world - and it will warn you that 50% of businesses fail in their first year.

And yet, despite the fact that failure happens all the time, we don't talk about it. We don't talk about why things fail. And as a result, many of the same lessons have to be learned over and over again. Those that experience failure often feel like they're doing it alone, because of the significant social stigma associated with failure.

So - this is my small attempt to restore the balance. In this talk, I'm going to talk about TradesCloud - my failed business. TradesCloud was a slow train wreck - we survived for 6 years, almost to the day. And we had plenty of optimism that success was just around the corner... but that never quite happened.

What was Tradescloud?

But what was TradesCloud? What prompted me to dedicate 6 years of my life to it?

Well, it started as a problem that I thought I could solve. If you find yourself needing a plumber, how do you pick one? Well, 15 years ago, when I first had the idea for what became TradesCloud, the best option was opening the phone book and looking for the brightest, shiniest ad, maybe arranging a bunch of quotes, and picking one basically at random. If you were really lucky, you might be able to use Google - but that's still looking for the shiniest ad. If they turned out to be good... well, we won't need a plumber for a while, so that knowledge is useless. And if they turned out to be awful... we can't warn anyone off, either.

"There has to be a better way". And, of course, I did nothing about it. I say nothing - I did start tinkering around with a web framework... you may have heard of it... Django. I originally got involved in Django because I wanted to add aggregation functions to the ORM so I could compute average ratings. And in 2008, I mentored a student - Nicholas Lara - to add aggregation as a Summer of Code project. So... success?

An idea is born...

In late 2010, I met up with a former boss for a drink, and he told me about his brother. His brother owns a pest control company, and he had the same problem - but from a different angle. I was looking for tradespeople in my area that could be recommended; he ran a smaller company that wanted to compete, on quality of service, with the big players and their shinier ads.

And so, TradesCloud was born. We had an idea. At the time, it wasn't called TradesCloud - it was called CleverPages - because it was going to be a clever Yellow Pages. In my spare time, I started hacking together a proof of concept.

Mistake 1: Validate, then build

That was our first mistake, and the first mistake most tech-oriented people make. As much as Django sells itself as a rapid development framework, any non-trivial project still takes time and effort. And I spent a couple of months of spare time hacking together a proof of concept.

German military strategist Helmuth von Moltke once noted "No battle plan survives contact with the enemy". Or, in non-military terms, Scottish poet Robert Burns said "The best-laid schemes o’ mice an’ men gang aft a-gley". And so it is with business ideas. All the time I spent working on that prototype could have been eliminated if I'd actually spoken to a plumber first.

Just because we had an idea, and I could implement the idea in software, that didn't mean we had a good business idea. It meant we had a good idea for a hobby project. And the difference is critical. A business is an idea that generates revenue. A hobby project may be fun to work on. It may even be useful for other people. But if you can't sell something, if you can't pay the bills with it - it isn't a business. And conflating the two ideas is a major problem.

What we should have done is validate the idea first, and then build it.

But, we didn't do that - and when I finally had something to show off, my business partner opened the local newspaper, picked a bunch of local plumbers, and called them in an attempt to sell the idea.

He called 10 plumbers. 5 of them suggested he place the idea in an anatomically implausible location. 4 of them had their secretary provide the same advice. One plumber did sound interested, and said he wanted to have a chat.

Mistake 2: If you can't sell it, it's not a business

This was mistake number 2. Or, at the very least, it should have been a warning flag.

At the end of the day, business is about selling something. Selling a physical product. A subscription. Selling services. But whatever you're doing, you're selling something. And in order to sell something, you have to have customers. If all your prospective customers hang up when you call... you have a problem. You don't have a sales channel. It doesn't matter if you've got a machine that turns lead into gold - if you can't get that idea in front of the people who are going to buy your product, you might as well shut up shop right now. The fact that it was very difficult to get plumbers to answer the phone should have been a warning sign that our prospective audience wasn't going to be easy to crack.

But, we persisted, and had a chat with the one plumber who would talk to us. We did our pitch, and he said "Nope. Not interested. But if you can make that pile of paper disappear, I'll give you as much money as you want."

This was a conversation that set the direction of our company for years to come. Was this a mistake or a success? Well, that's a little hard to judge. There's an extent to which we changed direction because it was the only direction that seemed open to us - which was a bad move. But it was a very lucrative direction, so... maybe it's a wash.

What we identified in that conversation was a significant business problem - a business process that was being performed manually and took three hours a day - along with a simple and reliable way to automate it. We identified a couple of other processes we could automate, and ways to report on some key performance indicators. We identified a path forward that could use mobile tech to improve communication and process management. By the time we were done, we'd worked out how to immediately free up a full time employee, with potential for more. So as long as we charged less than the cost of that employee - about $50k a year - the business owner would be ahead.

Our costs were next to nothing. Our newfound customer told us that these business processes were due to one specific contract that they had - and there were many others on the same contract. So - it should have been easy to sell the same software to everyone else on the contract, and... profit! Right? So - let's keep it simple, offer them a 50% saving - do some fancy footwork to reverse engineer a good explanation for why that was what we were charging - and just start making $25k a year per customer, right?

Mistake 3: Humans gonna human

Well, no. That was mistake number 3.

Mistake 3 is that we didn't take the human factor into account. In theory, charging anything less than $50k per year should - rationally - have been a no-brainer, easy sale. But we were selling to humans. And humans don't ever behave rationally. There's an almost bottomless body of research about how bad humans are at evaluating economic decisions and consequences.

And so, when we walked in the door of a prospective customer and did our pitch, they were almost universally blown away. And then we told them the price, and they started describing anatomically implausible locations again. Why?

Humans aren't rational

Firstly, a sale of that size isn't easy. Asking a plumber to spend $100 a month - they know they can afford that. It's probably less than what they spend on coffee in a month. But asking them to spend $2000 a month? That's a lot harder for them to justify. That actually starts to make a dent in their bottom line. So they're going to take some convincing. They're going to want proof that it actually works, that it's actually going to deliver the benefit you promise.

Software is hard to sell to humans

Secondly - we were selling software. While we were completely honorable, and completely truthful, and we were able to deliver everything we promised, and our software made birds suddenly appear every time we were near - we weren't the first IT salesperson they'd had to deal with. And we - we collectively - are part of an industry that has, for 40 years, systematically over-promised and under-delivered what software can do for a business. So that is something that needs to be overcome.

No really - Software is hard to sell to humans

Thirdly - we were selling software. Who here is currently holding a phone worth a couple of hundred, maybe even a thousand dollars? Now, how many of you give more consideration to whether you should buy a 99c app from the app store than you do to the decision to buy the thousand dollar phone?

That's the problem selling software. And multiply it a thousand times when you start dealing with non-tech audiences. We've been conditioned to expect that physical, tangible things are expensive - but software? That should be cheap, or better still free.

As a side note - this is one of the major problems we face with funding open source projects as well - but that's a subject for a different rant, and a rant that I've had before.

Humans are people

Lastly - we were dealing with personal relationships. If we walked into a small plumbing business to speak with the manager, there was an odds-on chance that the bookkeeper, or another significant employee in the business, was the wife of the manager. And you start talking about being able to cut an employee... well, you can guess how well that conversation goes. And even if it wasn't a family member, people don't generally want to fire people.

Mistake 4: Beware favourable patterns

Mistake number 4 happened as the result of an unfortunate coincidence. After closing our first sale, we got that customer to give us an introduction to some other possible customers. And he gave us the best possibilities first. So our first two sales were both $2k a month. Our third was a smaller business - only $500 a month - but that gave us the confidence that we had something that we could sell to medium and small businesses.

We'd closed three sales in rapid succession. We had $4500 a month in revenue, and the sales were really easy to close. We thought we had found a money printing machine.

And then we hit a wall. The next few sales calls we made just went nowhere. Never a hard no... but lots of ums and ahs about price, and "we'll have to think about it"'s...

We confused initial success with a pattern that was going to continue. After three sales in a month, we essentially didn't close a sale for another 8 months. And that's not a good sign.

Arguably, we got bitten by circumstances there - when you have lots of early success, it's easy to think that success will be ongoing. This is a time where you need to be objective. If you can't consistently close sales, if you can't reliably predict your close rate - you have a problem.

Mistake 5: Do the math

Mistake number 5, though, was completely our fault. We completely failed to do basic math.

Our value proposition - the business process that we had optimized - existed because of the processes required by one particular contract. Our pricing scheme was simple - we charged $1 per job completed. Our initial customer - they did about 2000 jobs a month, so we charged them $2000 a month. Which is great, because it also happened to hit our 50% savings target that we originally identified.

What we didn't do was add up how many jobs there actually were in the system. It turns out that if we managed to close every company on that contract, we would have only generated $12k a month in revenue. Which sounds like a lot, especially when your costs are so low... but our costs weren't low. We also had two founders who were full time, and needed to be paid. Our burn rate was closer to $22k per month.
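As an illustration, here's the kind of back-of-envelope check we should have done on day one. This is a rough Python sketch using only the figures quoted in this talk; treat the numbers as illustrative rather than as accounts.

# A rough sanity check on market size vs. burn rate, using the figures from this talk.
price_per_job = 1                # dollars charged per completed job
first_customer_jobs = 2000       # jobs per month for our first customer
total_jobs_on_contract = 12000   # approx. jobs per month across every company on the contract
monthly_burn = 22000             # dollars per month with two full-time founders to pay

first_customer_revenue = price_per_job * first_customer_jobs    # $2,000/month
best_case_revenue = price_per_job * total_jobs_on_contract      # $12,000/month if we close everyone

print(f"First customer: ${first_customer_revenue:,}/month")
print(f"Best case (100% of the contract): ${best_case_revenue:,}/month")
print(f"Burn rate: ${monthly_burn:,}/month")
print(f"Best-case shortfall: ${monthly_burn - best_case_revenue:,}/month")

A dozen lines of arithmetic would have shown that even owning the entire contract left us $10k a month short of covering our burn.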

Mistake 6: Pricing is what the customer will pay, not your burn rate

And this led to mistake number 6: We didn't have a serious pricing discussion until it was way too late. Misled by our initial success, we really determined our pricing by taking our burn rate and working backwards - not forwards from what the market would bear. My co-founder and I would have regular discussions about pricing - but all of those discussions happened against a background of "how are we going to make payroll this month?". Which is the wrong time to be having that discussion - because two important options - drastically reduce the price, and shut down the company - are effectively off the table.

So, we had a product that was too expensive to sell, and a market that wasn't big enough. Now, the good news is that the paperwork reduction niche we'd found wasn't unique to that one contract - there were many other similar contracts with similar paperwork requirements. But we'd found the thousand pound gorilla in the market. Other contracts were smaller, and the paperwork and process requirements were subtly different.

Mistake 7: Establish your sales channel

And that wouldn't have been a problem - if we hadn't made mistake number 7 - we never established our sales channel.

We got our first sale almost by accident. We bumped into a customer who gave us an opportunity. Subsequent sales came by word of mouth. Word of mouth is an incredible sales channel if you can get it. But as a result, we never cracked the most important problem - how do we sell to someone who hasn't heard of us? How do we get in the door? How do we establish trust? And as a result, our sales were essentially constrained by the personal networks of our existing customers. Perth is a small, geographically isolated city. When we'd exhausted personal networks... we hadn't learned the most important thing - how to sell our product to someone who didn't have a personal introduction.

Joel Spolsky once noted that there's no software product priced between $1000 and $100000. This is because a product that costs less than $1000 can be bought on a credit card. But if software costs more than can be hidden on an expense statement, you need to have salespeople, and that means you have to pay them, and their commissions, and pay for the steak dinners and drinks used to close the sale. We had a product that was squarely in this dead zone. Too expensive to be a casual purchase, but not expensive enough to support the sales process it needed.

TradesCloud had a serious problem. Once we closed a sale, we had almost zero churn rate. The only customers we ever lost were because they closed down, or they dropped the contract where we offered an advantage.

What we didn't have - and what we never really established - was a good way to prove to new customers that we were, indeed, that good. There wasn't a good way to "trial" TradesCloud. We were managing processes that were at the core of a trades business. Those processes have to work. And they can't be duplicated or doubled up. So - there was no way to "stick your toe in the water" - you had to jump in, or stay out. And since we had a huge price tag, most people were conservative, and said no. If they got a recommendation from someone they knew, it was a little easier - but if that didn't exist, we had a problem.

In order to close a sale, people have to believe - really believe - what you're telling them. It has to be obvious, and undeniable that you will give them benefit - or the cost of trying has to be vastly less than the cost of the software itself. In our case, even if we dropped our price to zero, we didn't have a zero cost, because the cost of institutional process change involved in adopting a new piece of software at the core of business operations is huge.

Our best sales person was completely accidental. He wasn't our employee - he was an employee who changed employers every 6 months. He was in upper management, had a reputation for getting things done and turning companies around, so he kept getting poached. And he'd seen the benefits of TradesCloud with one contract, so it was easy to get in the door everywhere else he went. And because he was known around the industry, his word was extremely valuable. When he said "This is good", people believed him. His word was trusted.

Mistake 8: Sales don't stop when you sell

But even when we did make a sale, the mistakes didn't stop. Mistake number 8 - we didn't pay enough attention to onboarding new customers. A sale for a product isn't closed when a contract is signed. It's closed when the person who uses the software has accepted it into their daily lives, because that is what prevents churn. If you're selling a small personal tool, the person who buys and the person who uses are probably the same - but in our case, the purchase decision was rarely made by the person who actually had to use the software. And you have to get those people on board. If anything, they're more important, because they're the ones who are going to make the boss's life hell if the software they buy isn't doing the job - or worse - is doing the job too well.

Over and over again, we saw internal sabotage. People would simply refuse to change processes, and would find any excuse. "Oh, the software didn't work, so I had to go back to doing it manually". And after a month or two, the boss would call us and say "what happened to all the benefits you promised?", and we'd say "well, you only get the benefits if you actually use the software".

What we learned - the hard way - is that you sell to the business owner - but you also have to sell to the users. If you're dealing with software that is part of a key business process, change management is key. You have to show them how your tool does what they currently do by hand. And you have to show them that their jobs aren't at risk. That first employee whose 3 hour/day task was replaced? She wasn't fired - she was redeployed inside the business. She went from doing a mindless office task for most of the day to expanding into other parts of the business. About 2 years after we first deployed TradesCloud, she was running accounts and payroll.

But, despite all these mistakes, we were able to stumble along, and were completely self-funded for almost 2 years. Now, that meant burning a lot of personal funds, and my co-founder doing a bunch of consulting on the side. But that's just part of the startup experience, right?

Mistake 9: Establish failure criteria

Well, maybe it is. But in retrospect, mistake number 9 was an entirely personal one - I shouldn't have lost as much money on the experience as I did. I knew what I considered success criteria, but I never considered what my failure criteria would be.

After 2 years, I had reached a point where my personal financial runway was running out. TradesCloud either needed to start paying a full wage, or I wasn't going to be able to continue. And this pivoted the business. Fundraising is a full time job. Everything else goes on hold - sales, support, development - everything. We tried to get VC investment, but the VC scene in Australia is pretty bad, and even worse in Perth. Eventually, we managed to secure a $250k cash investment from a colleague of my business partner; and we got some matching funding through an Australian government program. And that gave us another 2 years of runway.

The way we were able to secure that runway was by changing our tactics. Instead of going after individual plumbers, we started going after the head contractors - the multimillion dollar facilities management companies. And we were able to sell them a really great story. These companies are all competitive, and they're looking for any advantage they can get. But they're also technologically laggards, because they're big established companies. They have inertia when it comes to adopting new technology. So we were able to walk in, and promise a mobile-enabled workforce, real-time tracking, enforced health and safety practices - all sorts of things that made them really excited, because they could use those features as differentiators against their competition.

Mistake 10: You are who you are - Don't deny it

But - and here's mistake number 10 - we forgot who we were, and who we were selling to. We were selling to multimillion dollar companies. The reason these companies are technologically laggards? They're conservative. They don't take risks. There's no incentive for individual employees to take risks. And so, they make safe decisions.

And when they adopt new technology, they don't just pick something - they put it out to tender, and get multiple bids, and then they invite bidders into the head office to interrogate them, and eventually, after 6 months, they pick someone - the safe option. We got into a tendering process with almost every major facilities management company in Australia. The tendering process almost always started because we pitched them the idea of providing TradesCloud to all their subcontractors - but what they heard was "provide software to all their subcontractors". And so, at the end of the tendering process, we were told, every time - we prefer your technological solution... but we're going with your competitor, because you're too risky. A 2 person company was too much of a risk for a multimillion dollar company to trust.

So after 2 years of trying this tactic, and being turned down by every facility management company in Australia, the money was running out again, but I'd built up a bit of a cash buffer again. But in between failing to sell to multinational companies, we'd found a bit of success selling to smaller facility management companies and large construction companies. And the good news was that these companies were big enough that when they bought software, they wanted it customized - so as well as the $2k/month, they would pay $40k up front so that everything matched their requirements.

On the back of that change, we got a loan from our investor. Between making more personal sacrifices, and that cash injection, we were able to stumble along for another 18 months.

Mistake 11: Take the hint

This was mistake number 11. We didn't take the hint. Each of those points where we took investment was, potentially, a natural point to shut down the business. And, in retrospect, we should have. The writing was on the wall. The simple truth is that if you can't close sales, you don't have a business. And yes - you can stumble along hoping that you're going to find the missing sales ingredient - but that takes resources. It takes money, and it takes emotional capital as well.

One of the reasons the failure of TradesCloud was so personally galling is that it didn't fail for any reason that I would consider "my fault". From a purely technical perspective, we were significantly more reliable than the multi-national companies we were integrating with. We delivered new features in timeframes that our customers considered inconceivable. When I went in and did demonstrations to the multimillion dollar head contractors, they expressed doubt that we could actually do what we said we were doing... right up until I showed them the code doing it, live.

But none of that mattered. TradesCloud failed, ultimately, because we couldn't sell what we had - or, at least, we couldn't sell it in quantities that allowed us to cover costs. And when you're there giving your all, doing things that are being called magic by prospective customers... and you're still failing... that's hard to internalize. And when you layer on top of that the fact that I'm a husband and a father, and the sole income for the family - that introduces all sorts of guilt and fear into the mix. And then you take all that stress, and add in the long hours and weekend work in the desperate hope that this will be the thing that saves the company...

... and you start to understand why, 2 years ago, I had a major depressive episode.

Mistake 12: Quitting is always an option

Mistake number 12 - I lost sight of the fact that quitting was always an option - and that quitting didn't mean failure. If, after 2 years, I had taken an honest audit and said "you know what - this isn't working. I'm out", I would have had 4 years of my life back. That's four years I could have sunk into a different project. But I didn't pay attention to any of the signs. I conflated success of the company with my own personal success. I lost sight of the fact that this was a job. And it was meant to provide some income and some intellectual engagement. And if it wasn't doing that - walking away was always an option. But I didn't ever really consider it seriously.

Mistake 13: Partnerships require actual partners

Why not? Well, that was mistake number 13, and it was another personal failure - I didn't stand up to my co-founder as much as I should have. And as a result, we wasted a lot of time, and effort, and in some cases, money. Now - I have to be clear - if only because there's a chance he might see this video - I'm not blaming Mark here. Mark is a great guy, and he's extremely talented, and he's got absolutely no shame at fronting up to companies thousands of times bigger than his, and telling them how they should be doing things. He opened a lot of doors that I know I wouldn't have ever even considered knocking on, let alone opening. He was a real asset to the business. The failure was a personal one, and it was mine.

We didn't go into TradesCloud as complete equals. Sure - we were 50/50 partners on paper - but when I met Mark, he was my first boss out of university. I worked for him for 4 years as a very junior subordinate. And a lot of that power dynamic remained. I let him do a lot of things because "well, he must know what he's doing". I caved on decisions because I could see his side, and he was more experienced. And, Mark is a great sales guy. He can make you believe in things. And he made me believe in TradesCloud - but that's a double edged sword. It got me through all sorts of lows - but it also meant I believed even when I probably shouldn't have. I should have put my foot down and said "no more" a lot more often than I did - both for the benefit of the business, and for my own mental health.

The end

And so, when the money ran out for the third time, neither Mark nor myself had the energy to continue. We had a couple of last minute hail-Mary options that we thought might have saved us... but one by one, they all fell through. In the end, we were able to pay back the loan to our investor; but his equity investment was essentially lost. And in January, we closed the doors for the last time.

And that's the TradesCloud story. I will warn you, though, that the plural of anecdote is not data. This is my story. Many stories are like it, but this one is mine. I don't profess to having any particular business insight - I just know that TradesCloud didn't work. And these are the 13 reasons I can identify why.

In the aftermath, I've had a lot of people - many of them in this room - reach out and give me a virtual hug, or a spoon. And many asked me if I was sad to see TradesCloud go. But frankly, the emotion I had was relief. On January 31 2017, I slept like I hadn't slept for 6 years - because I knew I wasn't going to be woken up by a server alarm. And I knew I could sleep in, because I wasn't going to get a support call at 6AM.

The fact that I wasn't even slightly disappointed by the loss of TradesCloud from my life - that's the biggest sign for me that I waited far too long to step away.

The good news, though, is that the process of running TradesCloud hasn't burned me out completely. It was an amazing learning experience. And I landed on my feet - at DjangoCon US last year, I put my name up on the jobs board saying I was looking, and it hadn't been there half a day before my good friend Andrew Pinkham approached me and said... "Uh... are you that Russell Keith-Magee looking for work?"

The other silver lining is that the TradesCloud experience drew my attention to problems in the world of mobile development, which has influenced the path that my new toy, BeeWare, has taken. And I've been busy trying to work out how to turn BeeWare into something.

But this time, I'm a little older. A little grayer. Hopefully a little wiser. And I should have a better idea what to look out for. If you want to talk about that... well... get in touch.


U.S. probes Uber for possible bribery law violations

SAN FRANCISCO (Reuters) - Uber Technologies Inc [UBER.UL] said on Tuesday it was cooperating with a preliminary investigation led by the U.S. Department of Justice into possible violations of bribery laws.

The preliminary investigation is the latest in a series of legal wrangles at Uber as the ride-services company waits for its new chief executive to take the reins.

Uber has chosen Dara Khosrowshahi, the CEO of Expedia Inc (EXPE.O), as its next leader, sources have told Reuters, but the company has not yet made it official.

A spokesman for the company confirmed the existence of a "preliminary investigation" following a report by the Wall Street Journal on Tuesday that the Justice Department had started probing whether managers at Uber violated U.S. laws against bribery of foreign officials, specifically the Foreign Corrupt Practices Act.

It is unclear whether authorities are focused on one country or multiple countries where the company operates.

Reuters in June reported that Uber had hired a law firm to investigate how it obtained the medical records of an Indian woman who was raped by an Uber driver in 2014. The review was to focus in part on accusations from some current and former employees that bribes were involved, two people familiar with the matter told Reuters.

The Uber board on Sunday voted to select Khosrowshahi as the company's next leader to replace co-founder Travis Kalanick, who was ousted in June under shareholder pressure, sources told Reuters.

Khosrowshahi, 48, on Tuesday made his first public comments since the board's decision to make him CEO in two interviews in which he confirmed he plans to accept Uber's top job, despite the company's many problems. He made the comments at a previously scheduled event at Expedia's headquarters in Bellevue, Washington.

"Are there difficulties? Are there complexities? Are there challenges? Absolutely, but that's also what makes it fun," Khosrowshahi told Bloomberg.

Khosrowshahi has not responded to inquiries from Reuters.

Reporting by Heather Somerville in San Francisco; Additional reporting by Sangameswaran S; Editing by Bill Rigby

Terry Pratchett's unfinished novels destroyed by steamroller

The unfinished books of Sir Terry Pratchett have been destroyed by a steamroller, following the late fantasy novelist’s wishes.

Pratchett’s hard drive was crushed by a vintage John Fowler & Co steamroller named Lord Jericho at the Great Dorset Steam Fair, ahead of the opening of a new exhibition about the author’s life and work.

Pratchett, famous for his colourful and satirical Discworld series, died in March 2015 after a long battle with Alzheimer’s disease.

After his death, fellow fantasy author Neil Gaiman, Pratchett’s close friend and collaborator, told the Times that Pratchett had wanted “whatever he was working on at the time of his death to be taken out along with his computers, to be put in the middle of a road and for a steamroller to steamroll over them all”.

On Friday, Rob Wilkins, who manages the Pratchett estate, tweeted from an official Twitter account that he was “about to fulfil my obligation to Terry” along with a picture of an intact computer hard drive – following up with a tweet that showed the hard drive in pieces.

The symbolism of the moment, which captured something of Pratchett’s unique sense of humour, was not lost on fans, who responded on Twitter with a wry melancholy, though some people expressed surprise that the author – who had previously discussed churning through computer hardware at a rapid rate – would have stored his unfinished work on an apparently older model of hard drive.

The hard drive will go on display as part of a major exhibition about the author’s life and work, Terry Pratchett: HisWorld, which opens at the Salisbury museum in September.

The author of over 70 novels, Pratchett was diagnosed with Alzheimer’s disease in 2007.

He became an advocate for assisted dying, giving a moving lecture on the subject, Shaking Hands With Death, in 2010, and presenting a documentary for the BBC called Terry Pratchett: Choosing to Die.

He continued to write and publish, increasingly with the assistance of others, until his death in 2015. Two novels were published posthumously: The Long Utopia (a collaboration with Stephen Baxter) and The Shepherd’s Crown, the final Discworld novel.

The Salisbury museum exhibition will run from 16 September until 13 January 2018.


Roamer: A Plain-Text File Manager

The Plain Text File Manager

[asciicast demo]

Roamer turns your favorite text editor into a lightweight file manager. Copy, Cut & Paste files en masse without leaving your terminal window.

Install

Requirements

  • Python version: 2.7+, 3.2+
  • OS: Linux, MacOS, Windows WSL (Windows Subsystem for Linux)

Command

For a high security install see here.

Usage

Start Roamer

This will open the current working directory in your default $EDITOR. (See options section to override editor)

Example Output

" pwd: /Users/abaldwin/Desktop/stuffmy_directory/ | b0556598b8f8my_file_1.txt | ce9b0a287985my_file_2.txt | fc3da7f790a6my_file_3.txt | fc3da7f790a6

Explanation

  • Each line represents a single entry (file or directory)
  • On the left side of the pipe character is the entry's name
  • On the right side is the entry's hash. You can think of the hash as a link to that entry's contents.
  • A line starting with double quote (") is a comment and will be ignored.

--> Make changes as desired. When finished save and quit to commit the changes. e.g. vim :wq

Common Operations

Delete a file

  • Delete the entire line

Copy a file

  • Copy the entire line
  • Paste it onto a new line

Rename a file

  • Type over the existing file's name
  • Do not modify the hash on the right side

Copy over a file

  • Copy the hash from the first file
  • Replace the second file's hash

Make a new empty file

  • Add a new line
  • Type the new file's name

Move files between directories

  • Open up another terminal tab and run a second roamer session
  • Copy / Paste lines between both sessions of roamer

Options

Editor

Roamer uses your default $EDITOR environment variable.

To override a specific editor for roamer add this to your shell's config. (~/.bashrc ~/.zshrc etc)

export ROAMER_EDITOR=emacs

If no editor is set then vi will be used.

Data Directory

Roamer needs a directory for storing data between sessions. By default this will be saved in .roamer-data in your home directory.

To override:

export ROAMER_DATA_PATH=~/meh/

Editor Plugins

This roamer library is editor agnostic and focused on processing plain text. To enhance your experience with roamer consider installing roamer editor plug-ins:

Lua code: security overview and practical approaches to static analysis [pdf]

Coreboot and Skylake, Part 2: A Beautiful Game

Hi everyone,

While most of you are probably excited about the possibilities of the recently announced “Librem 5” phone, today I am sharing a technical progress report about our existing laptops, particularly findings about getting coreboot to be “production-ready” on the Skylake-based Librem 13 and 15, where you will see one of the primary reasons we experienced a delay in shipping last month (and how we solved the issue).

TL;DR: Shortly before we began shipping from inventory, the coreboot port was considered done, but we found some weird SATA issues at the last minute, and those needed to be fixed before shipping those orders.

  • The bug was sometimes preventing booting any operating system, which is why it became a blocker for shipments.
  • I haven’t found the “perfect” fix yet; I simply worked around the problem. The workaround corrects the behavior without any major consequences for users, other than warnings showing up during boot with the Linux kernel, and it allowed us to resume shipments.
  • Once I come up with the proper/perfect fix, an update will be made available for users to update their coreboot install post-facto. So, for now, do not worry if you see ATA errors during boot (or in dmesg) in your new Librem laptops shipped this summer: it is normal, harmless, and hopefully will be fixed soon.

I previously considered the coreboot port “done” for the new Skylake-based laptops, and as I went to the coreboot conference, I thought I’d be coming back home and finally be free to take care of the other stuff in my ever-increasing TODO list. But when I came back, I received an email from Zlatan (who was inside our distribution center that week), saying that some machines couldn’t boot, throwing errors such as:

Read Error

…in SeaBIOS, or

error: failure reading sector 0x802 from 'hd0'

or

error: no such partition. entering rescue mode

…in GRUB before dropping into the GRUB rescue shell.

That was odd, as I had never encountered those issues except one time very early in the development of the coreboot port, where we were seeing some ATA error messages in dmesg but that was fixed, and neither Matt nor I ever saw such errors again since. So of course, I didn’t believe Zlatan at first, thinking that maybe the OS was not installed properly… but the issue was definitely occurring on multiple machines that were being prepared to ship out. Zlatan then booted into the PureOS Live USB and re-installed the original AMI BIOS; then he had no more issues booting into his SSD, but when he’d flash coreboot back, it would fail to boot.

Intrigued, I tested on my machine again with the “final release” coreboot image I had sent them and I couldn’t boot into my OS either. Wait—What!? It was working fine just before I went to the coreboot conference.

  • Did something change recently? No, I remember specifically sending the image that I had been testing for weeks, and I hadn’t rebased coreboot because I very specifically wanted to avoid any potential new bug being introduced “at the last minute” from the latest coreboot git base.
  • Just to be sure, I went back to an even older image I had saved (which was known to work as well), and the issue occurred there as well—so not a compiling-related problem either.
  • I asked Matt to test on his machine, and when he booted the machine, it was failing for him with the same error. He hadn’t even flashed a new coreboot image! It was still the same image he had on the laptop for the past few weeks, which was working perfectly for him… until now, as it now refused to boot.

Madness? THIS—IS—SATA!

After extensive testing, we finally came to the conclusion that whether or not the machine would manage to boot was entirely dependent on the following conditions:

  • The time of day
  • The current phase of the moon
  • The alignment of the planets in some distant galaxy
  • The mood of my neighbor’s cat

The most astonishing (and frustrating) thing is that during the three weeks when Matt and I had been working on the coreboot port previously, we never encountered any “can’t boot” scenario—and we were rebooting those machines probably 10 times per hour or more… but now, we were suddenly both getting those errors, pretty consistently.

After a day or two of debugging, it suddenly started working without any errors again for a couple of hours, then it started bugging again. On my end, the problem seemed to typically happen with SATA SSDs on the M.2 port (I didn’t get any issues when using a 2.5″ HDD, and Matt was in the same situation). However, even with a 2.5″ HDD, Zlatan was having the same issues we were seeing with the M.2 connector.

So the good news was that we were at least able to encounter the error pretty frequently now, the bad news was that Purism couldn’t ship its newest laptops until this issue was fixed—and we had promised the laptops would be shipping out in droves by that time! Y’know, just to add a bit of stress to the mix.

The Eolian presents: DTLE

When I was doing the v1 port, I had a more or less similar issue with the M.2 SATA port, but it was much more stable: it would always fail with “Read Error”, instead of failing with a different error on every boot and “sometimes failing, sometimes working”. Some of you may remember my explanation of how I fixed the issue on the v1 in February: back then, I had to set the DTLE setting on the IOBP register of the SATA port. What this means is anyone’s guess, but I found this article explaining that “DTLE” means “Discrete Time Linear Equalization”, and that having the wrong DTLE values can cause the drives to “run slower than intended, and may even be subject to intermittent link failures”. Intermittent link failures! Well! Doesn’t that sound familiar?

Unfortunately, I don’t know how to set the DTLE setting on the Skylake platform, since coreboot doesn’t have support for it. The IOBP registers that were on the Broadwell platform do not exist in Skylake (they have been replaced by a P2SB—Primary to SideBand—controller), and the DTLE setting does not exist in the P2SB registers either, according to someone with access to the NDA’ed datasheet.

When the computer was booting, there were some ATA errors appearing in dmesg, and they looked something like this:

ata3: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x10 frozen
ata3.00: failed command: READ FPDMA QUEUED
ata3.00: cmd 60/04:00:d4:82:85/00:00:1f:00:00/40 tag 0 ncq 2048 in
res 40/00:18:d3:82:85/00:00:1f:00:00/40 Emask 0x4 (timeout)
ata3.00: status: { DRDY }

Everywhere I found this error referenced, such as in forums, the final conclusion was typically “the SATA connector is defective”, or “it’s a power related issue” where the errors disappeared after upgrading the power supply, etc. It sort of makes sense with regards to the DTLE setting causing a similar issue.

It also looks strikingly similar to Ubuntu bug #550559 where there is no insight on the cause, other than “disabling NCQ in the kernel fixes it”… but the original (AMI) BIOS does not disable NCQ support in the controller, and it doesn’t fix the DTLE setting itself.

Chasing the wind

So, not knowing what to do exactly and not finding any information in datasheets, I decided to try and figure it out using some good old reverse engineering.

First, I needed to see what the original BIOS did… but when I opened it in UEFIExtract, it turns out there’s a bunch of “modules” in it. What I mean by “a bunch” is about 1581 modules in the AMI UEFI BIOS, from what I could count. Yep. And “somewhere” in one of those, the answer must lie. I didn’t know what to look for; some modules are named, some aren’t, so I obviously started with the file called “SataController”—I thought I’d find the answer in it quickly enough simply by opening it up with IDA, but nope: that module file pretty much doesn’t do anything. I also tried “PcieSataController” and “PcieSataDynamicSetup” but those weren’t of much help either.

I then looked at the code in coreboot to see how exactly it initializes the SATA controller, and found this bit of code:

 /* Step 1 */
 sir_write(dev, 0x64, 0x883c9003);

I don’t really know what this does but to me it looks suspiciously like a “magic number”, where for some reason that value would need to be set in that variable for the SATA controller to be initialized. So I looked for that variable in all of the UEFI modules and found one module that has that same magic value, called “PchInitDxe”. Progress! But the code was complex and I quickly realized it would take me a long time to reverse engineer it all, and time was something I didn’t have—remember, shipments were blocked by this, and customers were asking us daily about their order status!

The RAM in storm

One realization that I had was that the error is always about this “READ FPDMA QUEUED” command… which means it’s somehow related to DMA, and therefore related to RAM—so, could there be RAM corruption occurring? Obviously, I tested the RAM with memtest and no issues turned up, and since we had finally received the hardware, I could push for receiving the schematics from the motherboard designer (I was previously told it would be a distraction to pursue schematics when there were so many logistical issues to fix first).

  • As I finally received the schematics and started studying them, I found that there were some discrepancies between the RComp resistor values in the schematics and what I had set in coreboot, so I fixed that… but it made no difference.
  • I thought that maybe the issue then was with the DQ/DQS settings of the RAM initialization (which are meant for synchronization), but I didn’t have the DQ/DQS settings for this motherboard and I couldn’t figure them out from the schematics, so what I did was to simply hexdump the entire set of UEFI modules and grep for “79 00 51”, which is the 16-bit value of “121” followed by the first byte of the 16-bit value of “81” (two of the RComp resistor values); a minimal sketch of that kind of byte search is shown below. That allowed me to find 2 modules which contained the values of the RComp resistors for this board, and from there, I was able to find the DQ and DQS settings that were stored in the same module, just a few bytes above the RComp values, as expected. I tested with these new values, and… it made no difference. No joy.
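For anyone who wants to reproduce that kind of search on their own firmware dump, here is a minimal, hypothetical Python sketch. The directory name is an assumption for illustration; the byte pattern is simply the little-endian encoding described above.

import os

# Little-endian 16-bit value of 121 (0x0079) followed by the first byte of 81 (0x51),
# i.e. the "79 00 51" pattern described above.
PATTERN = bytes([0x79, 0x00, 0x51])

# Hypothetical directory holding the modules extracted with UEFIExtract.
MODULES_DIR = "extracted_modules"

for root, _dirs, files in os.walk(MODULES_DIR):
    for name in files:
        path = os.path.join(root, name)
        with open(path, "rb") as f:
            data = f.read()
        offset = data.find(PATTERN)
        if offset != -1:
            print(f"{path}: pattern found at offset {offset:#x}")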

What else could I do? “If only there was a way to run the original BIOS in an emulator and catch every I/O it does to initialize the SATA controller!”

Well, there is something like that, it’s called serialICE and it’s part (sort of?) of the coreboot umbrella project. I was very happy to find that, but after a while I realized I can’t make use of it (at least not easily): it requires us to replace the BIOS with this serialICE which is a very very minimal BIOS that basically only initializes the UART lines and loads up qemu, then you can “connect” to it using the serial port, send it the BIOS you want to run, and while serialICE runs the BIOS it will output all the I/O access over the serial port back to you… That’s great, and exactly what I need, unfortunately:

  • the Librems do not have a serial port that I can use for that;
  • looking at the schematics, the only UART pad that is available is for TX (for receiving data), not RX (for sending data to the machine);
  • I can’t find the TX pad on the motherboard, so I can’t even use that.

Thankfully, I was told that there is a way to use xHCI usb debugging capabilities even on Skylake, and Nico Huber wrote libxhcidbg which is a library implementing the xHCI usb debug features. So, all I would need to make serialICE work would be to:

  • port coreboot to use libxhcidebug to have the USB debugging feature, test it and make sure it all works, or…
  • port my previous flashconsole work to serialICE then find a way to somehow send/bundle the AMI BIOS inside the serialICE or put it somewhere in the flash so serialICE can grab it directly without me needing to feed it to it through serial.

Another issue is that for the USB debug to work, USB needs to be initialized, and there is no way for me to know if the AMI BIOS initializes the SATA controller before or after the USB controller, so it might not even be helpful to do all that yak shaving.

The other solution (to use flashconsole) might not work either, because we have 16MB of flash and I expect that a log of all I/O accesses would take a lot more space than that.

And even if one or both of the solutions actually worked, sifting through thousands of I/O accesses to find just the right one that I need, might be like looking for a needle in a haystack.

Considering the amount of work involved, the uncertainty of whether or not it would even work, and the fact that I really didn’t have time for such animal cruelty (remember: shipments on hold until this is fixed!), I needed to find a quicker solution.

At that point, I was starting to lose hope for a quick solution and I couldn’t find any more tables to flip:

“This issue is so weird! I can’t figure out the cause, nothing makes sense, and there’s no easy way to track down what needs to be done in order to get it fixed.”

And then I noticed something. While it will sometimes fail to boot, sometimes will boot without issues, sometimes will trigger ATA errors in dmesg, sometimes will stay silent… one thing was consistent: once Linux boots, we don’t experience any issues—there was no kernel panic “because the disc can’t be accessed”, no “input/output error” when reading files… there is no real visible issue other than the few ATA errors we see in dmesg at the beginning when booting Linux, and those errors don’t re-appear later.

After doing quite a few tests, I noticed that whenever the ATA errors happen a few times, the Linux kernel ends up dropping the ATA link speed to 3Gbps instead of the default 6Gbps, and that once it does, there aren’t any errors happening afterwards. I eventually came to the conclusion that those ATA errors are the same issue causing the boot errors from SeaBIOS/GRUB, and that they only happened when the controller was set up to use 6Gbps speeds.

What if I was wrong about the DTLE setting, and potential RAM issues? What if all of this is because of a misconfiguration of the controller itself? What if all AMI does is to disable the 6Gbps speed setting on the controller so it can’t be used?!

So, of course, I checked, and nope, it’s not disabled, and when booting Linux from the AMI BIOS, the link was set up to 6Gbps and had no issues… so it must be something else, related to that. I dumped every configuration of the SATA controller—not only the PCI address space, but also the AHCI ABAR memory mapped registers, and any other registers I could find that were related to the SATA/AHCI controller—and I made sure that they matched exactly between the AMI BIOS and the coreboot registers, and… still nothing. It made even less sense! If all the SATA PCI address space and AHCI registers were exactly the same, then why wouldn’t it work?

I gave up!

…ok, I actually didn’t. I temporarily gave up trying to fix the problem’s root cause, but only because I had an idea for a workaround that could yield a quick win instead: if Linux is able to drop the link speed to 3Gbps and stop having any issues, then why can’t I do the same in coreboot? Then both SeaBIOS and GRUB would stop having issues trying to read from the drive, ensuring the drive will allow booting properly.

I decided I would basically do the same thing as Linux, but do it purposely in coreboot, instead of it being done “in Linux” after errors start appearing.

While not the “ideal fix”, such a workaround would at least let the Skylake-based Librems boot reliably for all users, allowing us to release the shipments so customers can start receiving their machines as soon as possible, after which I would be able to take the time to devise the “ideal” fix, and provide it as a firmware update.

Sleeping under the wagon: an overnight workaround

I put my plan in motion:

  • I looked at the datasheet and how to configure the controller’s speed, and found that I could indeed disable the 6Gbps speed, but for some reason, that didn’t work.
  • Then I tried to make it switch to 3Gbps, and that still didn’t work.
  • I went into the Linux kernel’s SATA driver to see what it does exactly, and realized that I didn’t do the switch to 3Gbps correctly. So I fixed my code in coreboot, and the machines started booting again.
    • I also learned what exactly happens in the Linux kernel: when there’s an error reading the drive, it will retry a couple of times; if the error keeps happening over and over again, then it will drop the speed to 3Gbps, otherwise, it keeps it as-is. That explains why we sometimes see only one ATA error, sometimes 3, and some other times 20 or more; it all depends on whether the retries worked or not.
    • Once I changed the speed of the controller to 3Gbps, I stopped having trouble booting into the system because both SeaBIOS and GRUB were working on 3Gbps and were not having any issues reading the data. However, once Linux boots, it resets the controller, which cancels out the changes that I did, and Linux starts using the drive at 6Gbps. That’s not really a problem because I know that Linux will retry any reads, and will drop to 3Gbps on its own once errors start happening, but it has the side effect that users will be seeing these ATA error messages on their boot screen or in dmesg.

As you can see, small issues like that are a real puzzle, and that’s the kind of thing that can make you waste a month of work just to “get it working” (let alone “find the perfect fix”). This is why I typically don’t give time estimates on this sort of work. We’re committed, though, to getting you the best experience with your machines, so we’re still actively working on everything.

Here’s a summary of the current situation:

  • You will potentially see errors on your boot screen, but it’s not a problem, since Linux will fix it on its own (a quick way to check what speed your links ended up at is sketched below)
  • It’s not a hardware issue, since it doesn’t happen with the AMI BIOS; we just need to figure out what to configure to make it work.
  • There is nothing to be worried about, and I expect to fix it in a future coreboot firmware update, which we’ll release to everyone once it’s available (we’re working on integration with fwupd, so maybe we’ll release it through that, I don’t know yet).
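If you are curious what your own machine negotiated, here is a minimal sketch (in Python, assuming a Linux system that exposes libata links under /sys/class/ata_link, which recent kernels do) that prints the current SATA speed of each link:

import glob
import os

# Each libata link exposes its negotiated speed (e.g. "6.0 Gbps" or "3.0 Gbps")
# in the sata_spd attribute; "<unknown>" usually means nothing is attached.
for link in sorted(glob.glob("/sys/class/ata_link/link*")):
    try:
        with open(os.path.join(link, "sata_spd")) as f:
            speed = f.read().strip()
    except OSError:
        continue
    print(f"{os.path.basename(link)}: {speed}")

If a link that should run at 6.0 Gbps reports 3.0 Gbps after a noisy boot, that is the kernel having downgraded it exactly as described above.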

It’s taken me much longer than anticipated to write this blog post (2 months exactly), as other things kept getting in the way—avalanches of emails, other bugs to fix, patches to test/verify, scripts to write, and a lot of things to catch up on from the one month of intense debugging during which I had neglected all my other responsibilities.

While I was writing this status report, I didn’t make much progress on the issue—I’ve had 3 or 4 moments of enlightenment where I thought I had suddenly figured it all out, only to end up in a dead end once again. Well, once I do figure it out, I will let you all know! Thanks for reading and thanks for your patience.


End notes: if you didn’t catch some references, or paragraph titles in this post, then you need to read The KingKiller Chronicles by Patrick Rothfuss (The Name of the Wind and The Wise Man’s Fear). These are some of the best fantasy books I’ve ever read, but be aware that the final book of the trilogy may not be released for another 10 years because the author loves to do millions of revisions of his manuscripts until they are “perfect”.

Gigster raises $20M from investors including Marc Benioff and Michael Jordan

If you're a software programmer who wants to command a Silicon Valley salary, you typically need to live in or near Silicon Valley.

Gigster wants to change that.

The 4-year-old Silicon Valley startup pairs companies in need of software developers with freelance programmers from around the world who are looking for work.

For companies large and small, Gigster offers a way to hire a freelance software-development team all at once. For tech workers, the company's service helps them find well-paid work on a variety of projects across a range of esoteric fields.

Gigster already counts eBay and IBM among its customers, but it's aiming to lure in other big companies and connect them with its pool of freelancers. It now has a bunch of new funding to help it reach that goal.

On Tuesday, Gigster is announcing it has received $20 million in a new round of financing. The round was led by Redpoint Ventures, with participation from its existing investors Y Combinator, Andreessen Horowitz, and Ashton Kutcher's Sound Capital.

The company also attracted two new investors: Salesforce CEO Marc Benioff and the basketball legend Michael Jordan. For Jordan, the investment is one of the first he has made in tech.

[Photo: Salesforce CEO Marc Benioff is one of Gigster's new investors. Source: Kimberly White/Getty Images for Fortune]

The company plans to use the money to fund sales, marketing, and other efforts aimed at persuading big companies to use Gigster. As part of that effort, it's planning to add about 100 employees to its 60-person staff over the next year.

Gigster CEO Roger Dickey told Business Insider the company graded itself on the percentage of the thousand or so tech workers on its marketplace who are making a full-time living from work they get through the site. Right now, the proportion is somewhere between 20% and 30%. The company is aiming to get that above 30% by the end of the year and 50% by the end of next year.

Gigster's goal is to offer developers what Dickey calls "the employee experience." The company wants the experience of being a freelancer who gets work through Gigster to be "as good or better" than being an employee "somewhere like Google" - but with the added benefits of being able to set your schedule and work from wherever you are, anywhere in the world, he said.

When Gigster was founded, it was intended to be a place where anybody with an idea could find and hire a professional software-development team that could see a project through to its completion.

The site categorizes freelancers based on the jobs they do, such as programming, product management, and design. Among the freelancers listed on Gigster are people who worked at Google, NASA, and SpaceX. When a customer comes to the site seeking freelancers for a particular project, Gigster will suggest an ad hoc team comprising people with various skills, taking into account the scope of the project or the customer's needs.

"We put a developer team in everyone's pocket," Dickey said. "What they do with it is up to them."

Over the past year or so, though, Gigster has turned its attention specifically toward the business market. More than 40 businesses use the site to outsource at least some of their software work.

[Photo: Michael Jordan made one of his first tech investments when he joined Gigster's latest funding round. Source: Elsa/Getty]

Gigster has found a sweet spot in providing experts in fields like machine learning and blockchain, where talent is hard to come by - and so too are full-time jobs, Dickey said. Many companies don't have enough blockchain-related work to justify hiring a full-time employee, so being able to tap into the talent available on Gigster is handy, he said.

For developers, Gigster handles much of the hard work of being a freelancer - most importantly, the part about getting the client to pay up.

"We just want to write code and get paid," Dickey said.

If the company achieves its goal of luring in bigger companies, Gigster could pay its freelancers even more money - and potentially create something of a virtuous cycle.

"We can attract the best freelance developers in the world," Dickey said.

What We Get Wrong About Technology

Highlights

Blade Runner (1982) is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being.  So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?

He calls her up from a payphone.

There is something revealing about the contrast between the two technologies — the biotech miracle that is Rachael, and the graffiti-scrawled videophone that Deckard uses to talk to her. It’s not simply that Blade Runner fumbled its futurism by failing to anticipate the smartphone. That’s a forgivable slip, and Blade Runner is hardly the only film to make it. It’s that, when asked to think about how new inventions might shape the future, our imaginations tend to leap to technologies that are sophisticated beyond comprehension. We readily imagine cracking the secrets of artificial life, and downloading and uploading a human mind. Yet when asked to picture how everyday life might look in a society sophisticated enough to build such biological androids, our imaginations falter. Blade Runner audiences found it perfectly plausible that LA would look much the same, beyond the acquisition of some hovercars and a touch of noir.

Now is a perplexing time to be thinking about how technology shapes us. Some economists, disappointed by slow growth in productivity, fear the glory days are behind us. “The economic revolution of 1870 to 1970 was unique in human history,” writes Robert Gordon in The Rise and Fall of American Growth (UK) (US). “The pace of innovation since 1970 has not been as broad or as deep.” Others believe that exponential growth in computing power is about to unlock something special. Economists Erik Brynjolfsson and Andrew McAfee write of “the second machine age” (UK) (US), while the World Economic Forum’s Klaus Schwab favours the term “fourth industrial revolution”, following the upheavals of steam, electricity and computers. This coming revolution will be built on advances in artificial intelligence, robotics, virtual reality, nanotech, biotech, neurotech and a variety of other fields currently exciting venture capitalists.

Forecasting the future of technology has always been an entertaining but fruitless game. Nothing looks more dated than yesterday’s edition of Tomorrow’s World. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere. Instead, when we try to imagine the future, the past offers two lessons. First, the most influential new technologies are often humble and cheap. Mere affordability often counts for more than the beguiling complexity of an organic robot such as Rachael. Second, new inventions do not appear in isolation, as Rachael and her fellow androids did. Instead, as we struggle to use them to their best advantage, they profoundly reshape the societies around us.

To understand how humble, cheap inventions have shaped today’s world, picture a Bible — specifically, a Gutenberg Bible from the 1450s. The dense black Latin script, packed into twin blocks, makes every page a thing of beauty to rival the calligraphy of the monks. Except, of course, these pages were printed using the revolutionary movable type printing press. Gutenberg developed durable metal type that could be fixed firmly to print hundreds of copies of a page, then reused to print something entirely different.  The Gutenberg press is almost universally considered to be one of humanity’s defining inventions. It gave us the Reformation, the spread of science, and mass culture from the novel to the newspaper. But it would have been a Rachael — an isolated technological miracle, admirable for its ingenuity but leaving barely a ripple on the wider world — had it not been for a cheap and humble invention that is far more easily and often overlooked: paper.

The printing press didn’t require paper for technical reasons, but for economic ones. Gutenberg also printed a few copies of his Bible on parchment, the animal-skin product that had long served the needs of European scribes. But parchment was expensive — 250 sheep were required for a single book. When hardly anyone could read or write, that had not much mattered. Paper had been invented 1,500 years earlier in China and long used in the Arabic world, where literacy was common. Yet it had taken centuries to spread to Christian Europe, because illiterate Europe no more needed a cheap writing surface than it needed a cheap metal to make crowns and sceptres.

Paper caught on only when a commercial class started to need an everyday writing surface for contracts and accounts. “If 11th-century Europe had little use for paper,” writes Mark Kurlansky in his book Paper (UK) (US), “13th-century Europe was hungry for it.” When paper was embraced in Europe, it became arguably the continent’s earliest heavy industry. Fast-flowing streams (first in Fabriano, Italy, and then across the continent) powered massive drop-hammers that pounded cotton rags, which were being broken down by the ammonia from urine. The paper mills of Europe reeked, as dirty garments were pulped in a bath of human piss.

Paper opened the way for printing. The kind of print run that might justify the expense of a printing press could not be produced on parchment; it would require literally hundreds of thousands of animal skins. It was only when it became possible to mass-produce paper that it made sense to search for a way to mass-produce writing too. Not that writing is the only use for paper. In his book Stuff Matters (UK) (US), Mark Miodownik points out that we use paper for everything from filtering tea and coffee to decorating our walls. Paper gives us milk cartons, cereal packets and corrugated cardboard boxes. It can be sandpaper, wrapping paper or greaseproof paper. In quilted, perforated form, paper is soft, absorbent and cheap enough to wipe, well, anything you want. Toilet paper seems a long way from the printing revolution. And it is easily overlooked — as we occasionally discover in moments of inconvenience. But many world-changing inventions hide in plain sight in much the same way — too cheap to remark on, even as they quietly reorder everything. We might call this the “toilet-paper principle”.

It’s not hard to find examples of the toilet-paper principle, once you start to look. The American west was reshaped by the invention of barbed wire, which was marketed by the great salesman John Warne Gates with the slogan: “Lighter than air, stronger than whiskey, cheaper than dust.” Barbed wire enabled settlers to fence in vast areas of prairie cheaply. Joseph Glidden patented it in 1874; just six years later, his factory produced enough wire annually to circle the world 10 times over. Barbed wire’s only advantage over wooden fencing was its cost, but that was quite sufficient to cage the wild west, where the simple invention prevented free-roaming bison and cowboys’ herds of cattle from trampling crops. Once settlers could assert control over their land, they had the incentive to invest in and improve it. Without barbed wire, the American economy — and the trajectory of 20th-century history — might have looked very different.

There’s a similar story to be told about the global energy system. The Rachael of the energy world — the this-changes-everything invention, the stuff of dreams — is nuclear fusion. If we perfect this mind-bendingly complex technology, we might safely harvest almost limitless energy by fusing variants of hydrogen. It could happen: in France, the ITER fusion reactor is scheduled to be fully operational in 2035 at a cost of at least $20bn. If it works, it will achieve temperatures of 200 million degrees Celsius — yet will still only be an experimental plant, producing less power than a coal-fired plant, and only in 20-minute bursts. Meanwhile, cheap-and-cheerful solar power is quietly leading a very different energy revolution. Break-even costs of solar electricity have fallen by two-thirds in the past seven years, to levels barely more than those of natural gas plants. But this plunge has been driven less by any great technological breakthrough than by the humble methods familiar to anyone who shops at Ikea: simple modular products that have been manufactured at scale and that snap together quickly on site.

The problem with solar power is that the sun doesn’t always shine. And the solution that’s emerging is another cheap-and-cheerful, familiar technology: the battery. Lithium-ion batteries to store solar energy are becoming increasingly commonplace, and mass-market electric cars would represent a large battery on every driveway. Several giant factories are under construction, most notably a Tesla factory that promises to manufacture 35GWh worth of batteries each year by 2020; that is more than the entire global production of batteries in 2013. Battery prices have fallen as quickly as those of solar panels. Such Ikea-fication is a classic instance of toilet-paper technology: the same old stuff, only cheaper.

Perhaps the most famous instance of the toilet-paper principle is a corrugated steel box, 8ft wide, 8.5ft high and 40ft long. Since the shipping container system was introduced, world merchandise trade (the average of imports and exports) has expanded from about 10 per cent of world GDP in the late 1950s to more than 20 per cent today. We now take for granted that when we visit the shops, we’ll be surrounded by products from all over the globe, from Spanish tomatoes to Australian wine to Korean mobile phones.

“The standard container has all the romance of a tin can,” says historian Marc Levinson in his book The Box (UK) (US). Yet this simple no-frills system for moving things around has been a force for globalisation more powerful than the World Trade Organisation. Before the shipping container was introduced, a typical transatlantic cargo ship might contain 200,000 separate items, comprising many hundreds of different shipments, from food to letters to heavy machinery. Hauling and loading this cornucopia from the dockside, then packing it into the tightest corners of the hull, required skill, strength and bravery from the longshoremen, who would work on a single ship for days at a time. The container shipping system changed all that.

Loading and unloading a container ship is a gigantic ballet of steel cranes, choreographed by the computers that keep the vessel balanced and track each container through a global logistical system. But the fundamental technology that underpins it all could hardly be simpler. The shipping container is a 1950s invention using 1850s know-how. Since it was cheap, it worked. The container was a simple enough idea, and the man who masterminded its rise, Malcom McLean, could scarcely be described as an inventor. He was an entrepreneur who dreamed big, took bold risks, pinched pennies and deftly negotiated with regulators, port authorities and the unions.

McLean’s real achievement was in changing the system that surrounded his box: the way that ships, trucks and ports were designed. It takes a visionary to see how toilet-paper inventions can totally reshape systems; it’s easier for our limited imaginations to slot Rachael-like inventions into existing systems.  If nuclear fusion works, it neatly replaces coal, gas and nuclear fission in our familiar conception of the grid: providers make electricity, and sell it to us. Solar power and batteries are much more challenging. They’re quietly turning electricity companies into something closer to Uber or Airbnb — a platform connecting millions of small-scale providers and consumers of electricity, constantly balancing demand and supply.

Some technologies are truly revolutionary. They transcend the simple pragmatism of paper or barbed wire to produce effects that would have seemed miraculous to earlier generations. But they take time to reshape the economic systems around us — much more time than you might expect. No discovery fits that description more aptly than electricity, barely comprehended at the beginning of the 19th century but harnessed and commodified by its end. Usable light bulbs had appeared in the late 1870s, courtesy of Thomas Edison and Joseph Swan. In 1881, Edison built electricity-generating stations in New York and London and he began selling electricity as a commodity within a year. The first electric motors were used to drive manufacturing machinery a year after that. Yet the history of electricity in manufacturing poses a puzzle. Poised to take off in the late 1800s, electricity flopped as a source of mechanical power with almost no impact at all on 19th-century manufacturing. By 1900, electric motors were providing less than 5 per cent of mechanical drive power in American factories. Despite the best efforts of Edison, Nikola Tesla and George Westinghouse, manufacturing was still in the age of steam.

Productivity finally surged in US manufacturing only in the 1920s. The reason for the 30-year delay? The new electric motors only worked well when everything else changed too. Steam-powered factories had delivered power through awe-inspiring driveshafts, secondary shafts, belts, belt towers, and thousands of drip-oilers. The early efforts to introduce electricity merely replaced the single huge engine with a similarly large electric motor. Results were disappointing.

As the economic historian Paul David has argued, electricity triumphed only when factories themselves were reconfigured. The driveshafts were replaced by wires, the huge steam engine by dozens of small motors. Factories spread out, and there was natural light. Stripped of the driveshafts, the ceilings could be used to support pulleys and cranes. Workers had responsibility for their own machines; they needed better training and better pay. The electric motor was a wonderful invention, once we changed all the everyday details that surrounded it.

David suggested in 1990 that what was true of electric motors might also prove true of computers: that we had yet to see the full economic benefits because we had yet to work out how to reshape our economy to take advantage of them. Later research by economists Erik Brynjolfsson and Lorin Hitt backed up the idea: they found that companies that had merely invested in computers in the 1990s had seen few benefits, but those that had also reorganised — decentralising, outsourcing and customising their products — had seen productivity soar.

Overall, the productivity statistics have yet to display anything like a 1920s breakthrough. In that respect we are still waiting for David’s suggestion to bear fruit. But in other ways, he was proved right almost immediately. People were beginning to figure out new ways to use computers and, in August 1991, Tim Berners-Lee posted his code for the world wide web on the internet so that others could download it and start to tinker. It was another cheap and unassuming technology, and it unlocked the potential of the older and grander internet itself.

If the fourth industrial revolution delivers on its promise, what lies ahead? Super-intelligent AI, perhaps? Killer robots? Telepathy: Elon Musk’s company, Neuralink, is on the case. Nanobots that live in our blood, zapping tumours? Perhaps, finally, Rachael? The toilet-paper principle suggests that we should be paying as much attention to the cheapest technologies as to the most sophisticated. One candidate: cheap sensors and cheap internet connections. There are multiple sensors in every smartphone, but increasingly they’re everywhere, from jet engines to the soil of Californian almond farms — spotting patterns, fixing problems and eking out efficiency gains. They are also a potential privacy and security nightmare, as we’re dimly starting to realise — from hackable pacemakers to botnets comprised of printers to, inevitably, internet-enabled sex toys that leak the most intimate data imaginable. Both the potential and the pitfalls are spectacular.

Whatever the technologies of the future turn out to be, they are likely to demand that, like the factories of the early 20th century, we change to accommodate them. Genuinely revolutionary inventions live up to their name: they change almost everything, and such transformations are by their nature hard to predict. One clarifying idea has been proposed by economists Daron Acemoglu and David Autor. They argue that when we study the impact of technology on the workplace, we should view work in bite-sized chunks — tasks rather than jobs.

For example, running a supermarket involves many tasks — stacking the shelves, collecting money from customers, making change, and preventing shoplifters. Automation has had a big impact on supermarkets, but not because the machines have simply replaced human jobs. Instead, they have replaced tasks done by humans, generally the tasks that could be most easily codified. The barcode turned stocktaking from a human task into one performed by computers. (It is another toilet-paper invention, cheap and ubiquitous, and one that made little difference until retail formats and supply chains were reshaped to take advantage.)

A task-based analysis of labour and automation suggests that jobs themselves aren’t going away any time soon — and that distinctively human skills will be at a premium. When humans and computers work together, says Autor, the computers handle the “routine, codifiable tasks” while amplifying the capabilities of the humans, such as “problem-solving skills, adaptability and creativity”. But there are also signs that new technologies have polarised the labour market, with more demand for both the high-end skills and the low-end ones, and a hollowing out in the middle. If human skills are now so valuable, that low-end growth seems like a puzzle — but the truth is that many distinctively human skills are not at the high end. While Jane Austen, Albert Einstein and Pablo Picasso exhibited human skills, so does the hotel maid who scrubs the toilet and changes the bed. We’re human by virtue not just of our brains, but our sharp eyes and clever fingers.

So one invention I’m keen to observe is the “Jennifer unit”, made by a company called Lucas Systems. Jennifer and the many other programmes like her are examples of a “voice-directed application” — just software and a simple, inexpensive earpiece. Such systems have become part of life for warehouse workers: a voice in their ear or instructions on a screen tell them where to go and what to do, down to the fine details. If 13 items must be collected from a shelf, Jennifer will tell the human worker to pick five, then five, then three. “Pick 13” would lead to mistakes. That makes sense. Computers are good at counting and scheduling. Humans are good at picking things off shelves. Why not unbundle the task and give the conscious thinking to the computer, and the mindless grabbing to the human? Like paper, Jennifer is inexpensive and easy to overlook. And like the electric dynamo, the technologies in Jennifer are having an impact because they enable managers to reshape the workplace. Science fiction has taught us to fear superhuman robots such as Rachael; perhaps we should be more afraid of Jennifer.

 
Written for and first published in the FT Magazine on 8 July 2017.

My new book is “Fifty Things That Made The Modern Economy” – now out! Grab yourself a copy in the US (slightly different title) or in the UK or through your local bookshop.

24/192 Music Downloads Are Very Silly Indeed (2012)

Also see Xiph.Org's new video, Digital Show & Tell, for detailed demonstrations of digital sampling in action on real equipment!

Articles last month revealed that musician Neil Young and Apple's Steve Jobs discussed offering digital music downloads of 'uncompromised studio quality'. Much of the press and user commentary was particularly enthusiastic about the prospect of uncompressed 24 bit 192kHz downloads. 24/192 featured prominently in my own conversations with Mr. Young's group several months ago.

Unfortunately, there is no point to distributing music in 24-bit/192kHz format. Its playback fidelity is slightly inferior to 16/44.1 or 16/48, and it takes up 6 times the space.

There are a few real problems with the audio quality and 'experience' of digitally distributed music today. 24/192 solves none of them. While everyone fixates on 24/192 as a magic bullet, we're not going to see any actual improvement.

First, the bad news

In the past few weeks, I've had conversations with intelligent, scientifically minded individuals who believe in 24/192 downloads and want to know how anyone could possibly disagree. They asked good questions that deserve detailed answers.

I was also interested in what motivated high-rate digital audio advocacy. Responses indicate that few people understand basic signal theory or the sampling theorem, which is hardly surprising. Misunderstandings of the mathematics, technology, and physiology arose in most of the conversations, often asserted by professionals who otherwise possessed significant audio expertise. Some even argued that the sampling theorem doesn't really explain how digital audio actually works [1].

Misinformation and superstition only serve charlatans. So, let's cover some of the basics of why 24/192 distribution makes no sense before suggesting some improvements that actually do.

Gentlemen, meet your ears

The ear hears via hair cells that sit on the resonant basilar membrane in the cochlea. Each hair cell is effectively tuned to a narrow frequency band determined by its position on the membrane. Sensitivity peaks in the middle of the band and falls off to either side in a lopsided cone shape overlapping the bands of other nearby hair cells. A sound is inaudible if there are no hair cells tuned to hear it.

Above left: anatomical cutaway drawing of a human cochlea with the basilar membrane colored in beige. The membrane is tuned to resonate at different frequencies along its length, with higher frequencies near the base and lower frequencies at the apex. Approximate locations of several frequencies are marked.

Above right: schematic diagram representing hair cell response along the basilar membrane as a bank of overlapping filters.

This is similar to an analog radio that picks up the frequency of a strong station near where the tuner is actually set. The farther off the station's frequency is, the weaker and more distorted it gets until it disappears completely, no matter how strong. There is an upper (and lower) audible frequency limit, past which the sensitivity of the last hair cells drops to zero, and hearing ends.

Sampling rate and the audible spectrum

I'm sure you've heard this many, many times: The human hearing range spans 20Hz to 20kHz. It's important to know how researchers arrive at those specific numbers.

First, we measure the 'absolute threshold of hearing' across the entire audio range for a group of listeners. This gives us a curve representing the very quietest sound the human ear can perceive for any given frequency as measured in ideal circumstances on healthy ears. Anechoic surroundings, precision calibrated playback equipment, and rigorous statistical analysis are the easy part. Ears and auditory concentration both fatigue quickly, so testing must be done when a listener is fresh. That means lots of breaks and pauses. Testing takes anywhere from many hours to many days depending on the methodology.

Then we collect data for the opposite extreme, the 'threshold of pain'. This is the point where the audio amplitude is so high that the ear's physical and neural hardware is not only completely overwhelmed by the input, but experiences physical pain. Collecting this data is trickier. You don't want to permanently damage anyone's hearing in the process.

Above: Approximate equal loudness curves derived from Fletcher and Munson (1933) plus modern sources for frequencies > 16kHz. The absolute threshold of hearing and threshold of pain curves are marked in red. Subsequent researchers refined these readings, culminating in the Phon scale and the ISO 226 standard equal loudness curves. Modern data indicates that the ear is significantly less sensitive to low frequencies than Fletcher and Munson's results.

The upper limit of the human audio range is defined to be where the absolute threshold of hearing curve crosses the threshold of pain. To even faintly perceive the audio at that point (or beyond), it must simultaneously be unbearably loud.

At low frequencies, the cochlea works like a bass reflex cabinet. The helicotrema is an opening at the apex of the basilar membrane that acts as a port tuned to somewhere between 40Hz and 65Hz depending on the individual. Response rolls off steeply below this frequency.

Thus, 20Hz - 20kHz is a generous range. It thoroughly covers the audible spectrum, an assertion backed by nearly a century of experimental data.

Genetic gifts and golden ears

Based on my correspondences, many people believe in individuals with extraordinary gifts of hearing. Do such 'golden ears' really exist?

It depends on what you call a golden ear.

Young, healthy ears hear better than old or damaged ears. Some people are exceptionally well trained to hear nuances in sound and music most people don't even know exist. There was a time in the 1990s when I could identify every major mp3 encoder by sound (back when they were all pretty bad), and could demonstrate this reliably in double-blind testing [2].

When healthy ears combine with highly trained discrimination abilities, I would call that person a golden ear. Even so, below-average hearing can also be trained to notice details that escape untrained listeners. Golden ears are more about training than hearing beyond the physical ability of average mortals.

Auditory researchers would love to find, test, and document individuals with truly exceptional hearing, such as a greatly extended hearing range. Normal people are nice and all, but everyone wants to find a genetic freak for a really juicy paper. We haven't found any such people in the past 100 years of testing, so they probably don't exist. Sorry. We'll keep looking.

Spectrophiles

Perhaps you're skeptical about everything I've just written; it certainly goes against most marketing material. Instead, let's consider a hypothetical Wide Spectrum Video craze that doesn't carry preexisting audiophile baggage.

Above: The approximate log scale response of the human eye's rods and cones, superimposed on the visible spectrum. These sensory organs respond to light in overlapping spectral bands, just as the ear's hair cells are tuned to respond to overlapping bands of sound frequencies.

The human eye sees a limited range of frequencies of light, aka, the visible spectrum. This is directly analogous to the audible spectrum of sound waves. Like the ear, the eye has sensory cells (rods and cones) that detect light in different but overlapping frequency bands.

The visible spectrum extends from about 400THz (deep red) to 850THz (deep violet) [3]. Perception falls off steeply at the edges. Beyond these approximate limits, the light power needed for the slightest perception can fry your retinas. Thus, this is a generous span even for young, healthy, genetically gifted individuals, analogous to the generous limits of the audible spectrum.

In our hypothetical Wide Spectrum Video craze, consider a fervent group of Spectrophiles who believe these limits aren't generous enough. They propose that video represent not only the visible spectrum, but also infrared and ultraviolet. Continuing the comparison, there's an even more hardcore [and proud of it!] faction that insists this expanded range is yet insufficient, and that video feels so much more natural when it also includes microwaves and some of the X-ray spectrum. To a Golden Eye, they insist, the difference is night and day!

Of course this is ludicrous.

No one can see X-rays (or infrared, or ultraviolet, or microwaves). It doesn't matter how much a person believes he can. Retinas simply don't have the sensory hardware.

Here's an experiment anyone can do: Go get your Apple IR remote. The LED emits at 980nm, or about 306THz, in the near-IR spectrum. This is not far outside of the visible range. Take the remote into the basement, or the darkest room in your house, in the middle of the night, with the lights off. Let your eyes adjust to the blackness.

Above: Apple IR remote photographed using a digital camera. Though the emitter is quite bright and the frequency emitted is not far past the red portion of the visible spectrum, it's completely invisible to the eye.

Can you see the Apple Remote's LED flash when you press a button [4]? No? Not even the tiniest amount? Try a few other IR remotes; many use an IR wavelength a bit closer to the visible band, around 310-350THz. You won't be able to see them either. The rest emit right at the edge of visibility from 350-380 THz and may be just barely visible in complete blackness with dark-adjusted eyes [5]. All would be blindingly, painfully bright if they were well inside the visible spectrum.

These near-IR LEDs emit from the visible boundary to at most 20% beyond the visible frequency limit. 192kHz audio extends to 400% of the audible limit. Lest I be accused of comparing apples and oranges, auditory and visual perception drop off similarly toward the edges.

192kHz considered harmful

192kHz digital music files offer no benefits. They're not quite neutral either; practical fidelity is slightly worse. The ultrasonics are a liability during playback.

Neither audio transducers nor power amplifiers are free of distortion, and distortion tends to increase rapidly at the lowest and highest frequencies. If the same transducer reproduces ultrasonics along with audible content, any nonlinearity will shift some of the ultrasonic content down into the audible range as an uncontrolled spray of intermodulation distortion products covering the entire audible spectrum. Nonlinearity in a power amplifier will produce the same effect. The effect is very slight, but listening tests have confirmed that both effects can be audible.

Above: Illustration of distortion products resulting from intermodulation of a 30kHz and a 33kHz tone in a theoretical amplifier with a nonvarying total harmonic distortion (THD) of about .09%. Distortion products appear throughout the spectrum, including at frequencies lower than either tone.

Inaudible ultrasonics contribute to intermodulation distortion in the audible range (light blue area). Systems not designed to reproduce ultrasonics typically have much higher levels of distortion above 20kHz, further contributing to intermodulation. Widening a design's frequency range to account for ultrasonics requires compromises that decrease noise and distortion performance within the audible spectrum. Either way, unnecessary reproduction of ultrasonic content diminishes performance.
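To make the mechanism concrete, here is a minimal numpy sketch, not a measurement of any real amplifier: a 30kHz + 33kHz pair is passed through a hypothetical memoryless nonlinearity with a small quadratic term (the coefficient is arbitrary, chosen only to make the effect easy to see), and the spectrum is checked at the 3kHz difference frequency.

    import numpy as np

    fs = 192000                              # high enough that no products alias
    t = np.arange(fs) / fs                   # one second
    x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

    # Hypothetical weakly nonlinear amplifier: ideal gain plus a small quadratic term.
    y = x + 0.01 * x ** 2

    # Inspect the spectrum at the 3kHz difference frequency (33kHz - 30kHz).
    spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    bin_3k = np.argmin(np.abs(freqs - 3000))
    level_db = 20 * np.log10(spectrum[bin_3k] / spectrum.max())
    print(f"intermodulation product at 3kHz: {level_db:.1f} dB below the main tones")

The 3kHz product is squarely in the audible band even though neither input tone is; in a system that never sees the ultrasonic tones, it simply doesn't exist.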

There are a few ways to avoid the extra distortion:

  1. A dedicated ultrasonic-only speaker, amplifier, and crossover stage to separate and independently reproduce the ultrasonics you can't hear, just so they don't mess up the sounds you can.

  2. Amplifiers and transducers designed for wider frequency reproduction, so ultrasonics don't cause audible intermodulation. Given equal expense and complexity, this additional frequency range must come at the cost of some performance reduction in the audible portion of the spectrum.

  3. Speakers and amplifiers carefully designed not to reproduce ultrasonics anyway.

  4. Not encoding such a wide frequency range to begin with. You can't and won't have ultrasonic intermodulation distortion in the audible band if there's no ultrasonic content.

They all amount to the same thing, but only 4) makes any sense.

If you're curious about the performance of your own system, the following samples contain a 30kHz and a 33kHz tone in a 24/96 WAV file, a longer version in a FLAC, some tri-tone warbles, and a normal song clip shifted up by 24kHz so that it's entirely in the ultrasonic range from 24kHz to 46kHz:

Assuming your system is actually capable of full 96kHz playback [6], the above files should be completely silent with no audible noises, tones, whistles, clicks, or other sounds. If you hear anything, your system has a nonlinearity causing audible intermodulation of the ultrasonics. Be careful when increasing volume; running into digital or analog clipping, even soft clipping, will suddenly cause loud intermodulation tones.

In summary, it's not certain that intermodulation from ultrasonics will be audible on a given system. The added distortion could be insignificant or it could be noticeable. Either way, ultrasonic content is never a benefit, and on plenty of systems it will audibly hurt fidelity. On the systems it doesn't hurt, the cost and complexity of handling ultrasonics could have been saved, or spent on improved audible range performance instead.

Sampling fallacies and misconceptions

Sampling theory is often unintuitive without a signal processing background. It's not surprising most people, even brilliant PhDs in other fields, routinely misunderstand it. It's also not surprising many people don't even realize they have it wrong.

Above: Sampled signals are often depicted as a rough stairstep (red) that seems a poor approximation of the original signal. However, the representation is mathematically exact and the signal recovers the exact smooth shape of the original (blue) when converted back to analog.

The most common misconception is that sampling is fundamentally rough and lossy. A sampled signal is often depicted as a jagged, hard-cornered stair-step facsimile of the original perfectly smooth waveform. If this is how you envision sampling working, you may believe that the faster the sampling rate (and more bits per sample), the finer the stair-step and the closer the approximation will be. The digital signal would sound closer and closer to the original analog signal as sampling rate approaches infinity.

Similarly, many non-DSP people would look at the following:

And say, "Ugh!&quot It might appear that a sampled signal represents higher frequency analog waveforms badly. Or, that as audio frequency increases, the sampled quality falls and frequency response falls off, or becomes sensitive to input phase.

Looks are deceiving. These beliefs are incorrect!

added 2013-04-04:
As a followup to all the mail I got about digital waveforms and stairsteps, I demonstrate actual digital behavior on real equipment in our video Digital Show & Tell so you need not simply take me at my word here!

All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling; an infinite sampling rate is not required. Sampling doesn't affect frequency response or phase. The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.
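That claim is easy to check numerically. The sketch below (a toy example assuming numpy; the test signal and the off-grid instant are made up) samples a band-limited signal at 48kHz and evaluates the Whittaker-Shannon (sinc) reconstruction between two samples; the only error comes from truncating the sinc sum to a finite block, not from sampling itself.

    import numpy as np

    fs = 48000
    n = np.arange(8192)

    def analog(t):
        # A band-limited "analog" test signal: tones well below the 24kHz Nyquist limit.
        return 0.6 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 17000 * t + 0.5)

    x = analog(n / fs)                         # the sampled version

    # Sinc reconstruction at an instant between samples, near the middle of the
    # block so the truncated sinc tails stay small.
    t0 = 4096.37 / fs
    reconstructed = np.sum(x * np.sinc(fs * t0 - n))
    print(abs(reconstructed - analog(t0)))     # tiny; limited by the finite block, not by sampling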

So the math is ideal, but what of real world complications? The most notorious is the band-limiting requirement. Signals with content over the Nyquist frequency must be lowpassed before sampling to avoid aliasing distortion; this analog lowpass is the infamous antialiasing filter. Antialiasing can't be ideal in practice, but modern techniques bring it very close. ...and with that we come to oversampling.

Oversampling

Sampling rates over 48kHz are irrelevant to high fidelity audio data, but they are internally essential to several modern digital audio techniques. Oversampling is the most relevant example [7].

Oversampling is simple and clever. You may recall from my A Digital Media Primer for Geeks that high sampling rates provide a great deal more space between the highest frequency audio we care about (20kHz) and the Nyquist frequency (half the sampling rate). This allows for simpler, smoother, more reliable analog anti-aliasing filters, and thus higher fidelity. This extra space between 20kHz and the Nyquist frequency is essentially just spectral padding for the analog filter.

Above: Whiteboard diagram from A Digital Media Primer for Geeks illustrating the transition band width available for a 48kHz ADC/DAC (left) and a 96kHz ADC/DAC (right).

That's only half the story. Because digital filters have few of the practical limitations of an analog filter, we can complete the anti-aliasing process with greater efficiency and precision digitally. The very high rate raw digital signal passes through a digital anti-aliasing filter, which has no trouble fitting a transition band into a tight space. After this further digital anti-aliasing, the extra padding samples are simply thrown away. Oversampled playback approximately works in reverse.

This means we can use low rate 44.1kHz or 48kHz audio with all the fidelity benefits of 192kHz or higher sampling (smooth frequency response, low aliasing) and none of the drawbacks (ultrasonics that cause intermodulation distortion, wasted space). Nearly all of today's analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) oversample at very high rates. Few people realize this is happening because it's completely automatic and hidden.
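As a rough illustration of the idea, the sketch below (scipy, with made-up signal content; it models no particular converter) captures at an 8x oversampled rate where the analog filter can be relaxed, then lets a sharp digital lowpass do the real anti-aliasing before the padding samples are thrown away.

    import numpy as np
    from scipy.signal import resample_poly

    fs_over = 8 * 48000                      # hypothetical oversampled capture rate
    t = np.arange(fs_over) / fs_over

    # "Analog" input: an audible 10kHz tone plus ultrasonic junk at 70kHz that a
    # relaxed analog anti-aliasing filter might let through.
    x = np.sin(2 * np.pi * 10000 * t) + 0.1 * np.sin(2 * np.pi * 70000 * t)

    # Digital anti-aliasing plus decimation to 48kHz in one step: resample_poly
    # applies a sharp FIR lowpass before discarding the extra samples.
    y = resample_poly(x, up=1, down=8)

    spectrum = 20 * np.log10(np.abs(np.fft.rfft(y)) / len(y) + 1e-12)
    freqs = np.fft.rfftfreq(len(y), 1 / 48000)
    print("10kHz tone:", round(spectrum[np.argmin(np.abs(freqs - 10000))], 1), "dB")
    print("residue near 22kHz (where 70kHz would otherwise alias):",
          round(spectrum[np.argmin(np.abs(freqs - 22000))], 1), "dB")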

ADCs and DACs didn't always transparently oversample. Thirty years ago, some recording consoles recorded at high sampling rates using only analog filters, and production and mastering simply used that high rate signal. The digital anti-aliasing and decimation steps (resampling to a lower rate for CDs or DAT) happened in the final stages of mastering. This may well be one of the early reasons 96kHz and 192kHz became associated with professional music production [8].

16 bit vs 24 bit

OK, so 192kHz music files make no sense. Covered, done. What about 16 bit vs. 24 bit audio?

It's true that 16 bit linear PCM audio does not quite cover the entire theoretical dynamic range of the human ear in ideal conditions. Also, there are (and always will be) reasons to use more than 16 bits in recording and production.

None of that is relevant to playback; here 24 bit audio is as useless as 192kHz sampling. The good news is that at least 24 bit depth doesn't harm fidelity. It just doesn't help, and also wastes space.

Revisiting your ears

We've discussed the frequency range of the ear, but what about the dynamic range from the softest possible sound to the loudest possible sound?

One way to define absolute dynamic range would be to look again at the absolute threshold of hearing and threshold of pain curves. The distance between the highest point on the threshold of pain curve and the lowest point on the absolute threshold of hearing curve is about 140 decibels for a young, healthy listener. That wouldn't last long though; +130dB is loud enough to damage hearing permanently in seconds to minutes. For reference purposes, a jackhammer at one meter is only about 100-110dB.

The absolute threshold of hearing increases with age and hearing loss. Interestingly, the threshold of pain decreases with age rather than increasing. The hair cells of the cochlea themselves possess only a fraction of the ear's 140dB range; musculature in the ear continuously adjusts the amount of sound reaching the cochlea by shifting the ossicles, much as the iris regulates the amount of light entering the eye [9]. This mechanism stiffens with age, limiting the ear's dynamic range and reducing the effectiveness of its protection mechanisms [10].

Environmental noise

Few people realize how quiet the absolute threshold of hearing really is.

The very quietest perceptible sound is about -8dBSPL [11]. Using an A-weighted scale, the hum from a 100 watt incandescent light bulb one meter away is about 10dBSPL, so about 18dB louder. The bulb will be much louder on a dimmer.

20dBSPL (or 28dB louder than the quietest audible sound) is often quoted for an empty broadcasting/recording studio or sound isolation room. This is the baseline for an exceptionally quiet environment, and one reason you've probably never noticed hearing a light bulb.

The dynamic range of 16 bits

16 bit linear PCM has a dynamic range of 96dB according to the most common definition, which calculates dynamic range as (6*bits)dB. Many believe that 16 bit audio cannot represent arbitrary sounds quieter than -96dB. This is incorrect.

I have linked to two 16 bit audio files here; one contains a 1kHz tone at 0 dB (where 0dB is the loudest possible tone) and the other a 1kHz tone at -105dB.

Above: Spectral analysis of a -105dB tone encoded as 16 bit / 48kHz PCM. 16 bit PCM is clearly deeper than 96dB, else a -105dB tone could not be represented, nor would it be audible.

How is it possible to encode this signal, encode it with no distortion, and encode it well above the noise floor, when its peak amplitude is one third of a bit?

Part of this puzzle is solved by proper dither, which renders quantization noise independent of the input signal. By implication, this means that dithered quantization introduces no distortion, just uncorrelated noise. That in turn implies that we can encode signals of arbitrary depth, even those with peak amplitudes much smaller than one bit [12]. However, dither doesn't change the fact that once a signal sinks below the noise floor, it should effectively disappear. How is the -105dB tone still clearly audible above a -96dB noise floor?

The answer: Our -96dB noise floor figure is effectively wrong; we're using an inappropriate definition of dynamic range. (6*bits)dB gives us the RMS noise of the entire broadband signal, but each hair cell in the ear is sensitive to only a narrow fraction of the total bandwidth. As each hair cell hears only a fraction of the total noise floor energy, the noise floor at that hair cell will be much lower than the broadband figure of -96dB.
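The sketch below (numpy; the tone level and duration are arbitrary) reproduces the idea behind that figure: a -105dB tone quantized to 16 bits with TPDF dither shows a broadband error of roughly -96dB RMS, yet in a narrowband spectral view the tone stands well clear of the per-bin noise.

    import numpy as np

    fs = 48000
    t = np.arange(8 * fs) / fs                                 # 8 seconds for fine frequency resolution
    tone = 10 ** (-105 / 20) * np.sin(2 * np.pi * 1000 * t)    # a -105 dBFS 1kHz tone

    # 16-bit quantization with TPDF dither (difference of two uniform values, 1 LSB peak each way).
    lsb = 1 / 32768
    dither = (np.random.rand(len(t)) - np.random.rand(len(t))) * lsb
    quantized = np.round((tone + dither) / lsb) * lsb

    err_rms_db = 20 * np.log10(np.std(quantized - tone))
    print(f"broadband quantization+dither error: {err_rms_db:.1f} dBFS")   # about -96 dB

    # Narrowband view: the tone pokes far above the noise in any single FFT bin.
    spec = np.abs(np.fft.rfft(quantized * np.hanning(len(t))))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    bin_1k = np.argmin(np.abs(freqs - 1000))
    print(f"1kHz tone sits {20 * np.log10(spec[bin_1k] / np.median(spec)):.1f} dB above the median bin")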

Thus, 16 bit audio can go considerably deeper than 96dB. With use of shaped dither, which moves quantization noise energy into frequencies where it's harder to hear, the effective dynamic range of 16 bit audio reaches 120dB in practice [13], more than fifteen times deeper than the 96dB claim.

120dB is greater than the difference between a mosquito somewhere in the same room and a jackhammer a foot away.... or the difference between a deserted 'soundproof' room and a sound loud enough to cause hearing damage in seconds.

16 bits is enough to store all we can hear, and will be enough forever.

Signal-to-noise ratio

It's worth mentioning briefly that the ear's S/N ratio is smaller than its absolute dynamic range. Within a given critical band, typical S/N is estimated to only be about 30dB. Relative S/N does not reach the full dynamic range even when considering widely spaced bands. This assures that linear 16 bit PCM offers higher resolution than is actually required.

It is also worth mentioning that increasing the bit depth of the audio representation from 16 to 24 bits does not increase the perceptible resolution or 'fineness' of the audio. It only increases the dynamic range, the range between the softest possible and the loudest possible sound, by lowering the noise floor. However, a 16-bit noise floor is already below what we can hear.

When does 24 bit matter?

Professionals use 24 bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.

16 bits is enough to span the real hearing range with room to spare. It does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes; rather than being careful to center 16 bit recording-- risking clipping if you guess too high and adding noise if you guess too low-- 24 bits allows an operator to set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.

An engineer also requires more than 16 bits during mixing and mastering. Modern work flows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16 bit sample may be undetectable during playback, but multiplying that noise by a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
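A toy numpy experiment (the numbers are arbitrary, and TPDF-dithered requantization stands in for a real effects chain) makes the accumulation argument concrete: apply a thousand small gain tweaks, re-quantizing after each one, and compare the noise left behind by 16-bit versus 24-bit intermediates.

    import numpy as np

    def requantize(x, bits):
        # Re-quantize with TPDF dither; stands in for one step of a processing chain.
        lsb = 2.0 / (1 << bits)
        dither = (np.random.rand(len(x)) - np.random.rand(len(x))) * lsb
        return np.round((x + dither) / lsb) * lsb

    fs = 48000
    t = np.arange(fs) / fs
    signal = 0.5 * np.sin(2 * np.pi * 440 * t)

    for bits in (16, 24):
        x = signal.copy()
        for _ in range(1000):                 # a thousand tiny gain tweaks, re-quantized each time
            x = requantize(x * 1.0001, bits)
        x /= 1.0001 ** 1000                   # undo the net gain so only the accumulated noise remains
        print(f"{bits}-bit intermediates: accumulated noise {20 * np.log10(np.std(x - signal)):.1f} dBFS")

The 16-bit chain's noise floor climbs by tens of dB over the run, while the 24-bit chain's stays far below anything audible, which is the whole argument for wide intermediates and 16-bit delivery.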

Listening tests

Understanding is where theory and reality meet. A matter is settled only when the two agree.

Empirical evidence from listening tests backs up the assertion that 44.1kHz/16 bit provides highest-possible fidelity playback. There are numerous controlled tests confirming this, but I'll plug a recent paper, Audibility of a CD-Standard A/D/A Loop Inserted into High-Resolution Audio Playback, done by local folks here at the Boston Audio Society.

Unfortunately, downloading the full paper requires an AES membership. However, it's been discussed widely in articles and on forums, with the authors joining in. Here are a few links:

This paper presented listeners with a choice between high-rate DVD-A/SACD content, chosen by high-definition audio advocates to show off high-def's superiority, and that same content resampled on the spot down to 16-bit / 44.1kHz Compact Disc rate. The listeners were challenged to identify any difference whatsoever between the two using an ABX methodology. BAS conducted the test using high-end professional equipment in noise-isolated studio listening environments with both amateur and trained professional listeners.

In 554 trials, listeners chose correctly 49.8% of the time. In other words, they were guessing. Not one listener throughout the entire test was able to identify which was 16/44.1 and which was high rate [15], and the 16-bit signal wasn't even dithered!
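For a sense of just how null that result is, here's a quick scipy check; the 276 correct answers are a back-of-envelope reading of the 49.8% figure, not a count reported by the paper.

    from scipy.stats import binomtest

    # Roughly 49.8% of 554 trials is about 276 correct answers.
    result = binomtest(k=276, n=554, p=0.5)
    print(result.pvalue)   # nowhere near significance: consistent with pure guessing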

Another recent study [16] investigated the possibility that ultrasonics were audible, as earlier studies had suggested. The test was constructed to maximize the possibility of detection by placing the intermodulation products where they'd be most audible. It found that the ultrasonic tones were not audible... but the intermodulation distortion products introduced by the loudspeakers could be.

This paper inspired a great deal of further research, much of it with mixed results. Some of the ambiguity is explained by finding that ultrasonics can induce more intermodulation distortion than expected in power amplifiers as well. For example, David Griesinger reproduced this experiment [17] and found that his loudspeaker setup did not introduce audible intermodulation distortion from ultrasonics, but his stereo amplifier did.

Caveat Lector

It's important not to cherry-pick individual papers or 'expert commentary' out of context or from self-interested sources. Not all papers agree completely with these results (and a few disagree in large part), so it's easy to find minority opinions that appear to vindicate every imaginable conclusion. Regardless, the papers and links above are representative of the vast weight and breadth of the experimental record. No peer-reviewed paper that has stood the test of time disagrees substantially with these results. Controversy exists only within the consumer and enthusiast audiophile communities.

If anything, the number of ambiguous, inconclusive, and outright invalid experimental results available through Google highlights how tricky it is to construct an accurate, objective test. The differences researchers look for are minute; they require rigorous statistical analysis to spot subconscious choices that escape test subjects' awareness. That we're likely trying to 'prove' something that doesn't exist makes it even more difficult. Proving a null hypothesis is akin to proving the halting problem; you can't. You can only collect evidence that lends overwhelming weight.

Despite this, papers that confirm the null hypothesis are especially strong evidence; confirming inaudibility is far more experimentally difficult than disputing it. Undiscovered mistakes in test methodologies and equipment nearly always produce false positive results (by accidentally introducing audible differences) rather than false negatives.

If professional researchers have such a hard time properly testing for minute, isolated audible differences, you can imagine how hard it is for amateurs.

How to [inadvertently] screw up a listening comparison

The number one comment I heard from believers in super high rate audio was [paraphrasing]: "I've listened to high rate audio myself and the improvement is obvious. Are you seriously telling me not to trust my own ears?"

Of course you can trust your ears. It's brains that are gullible. I don't mean that flippantly; as human beings, we're all wired that way.

Confirmation bias, the placebo effect, and double-blind

In any test where a listener can tell two choices apart via any means apart from listening, the results will usually be what the listener expected in advance; this is called confirmation bias and it's similar to the placebo effect. It means people 'hear' differences because of subconscious cues and preferences that have nothing to do with the audio, like preferring a more expensive (or more attractive) amplifier over a cheaper option.

The human brain is designed to notice patterns and differences, even where none exist. This tendency can't just be turned off when a person is asked to make objective decisions; it's completely subconscious. Nor can a bias be defeated by mere skepticism. Controlled experimentation shows that awareness of confirmation bias can increase rather than decrease the effect! A test that doesn't carefully eliminate confirmation bias is worthless [18].

In single-blind testing, a listener knows nothing in advance about the test choices, and receives no feedback during the course of the test. Single-blind testing is better than casual comparison, but it does not eliminate the experimenter's bias. The test administrator can easily inadvertently influence the test or transfer his own subconscious bias to the listener through inadvertent cues (eg, "Are you sure that's what you're hearing?", body language indicating a 'wrong' choice, hesitating inadvertently, etc). An experimenter's bias has also been experimentally proven to influence a test subject's results.

Double-blind listening tests are the gold standard; in these tests neither the test administrator nor the testee has any knowledge of the test contents or ongoing results. Computer-run ABX tests are the most famous example, and there are freely available tools for performing ABX tests on your own computer [19]. ABX is considered a minimum bar for a listening test to be meaningful; reputable audio forums such as Hydrogen Audio often do not even allow discussion of listening results unless they meet this minimum objectivity requirement [20].

Above: Squishyball, a simple command-line ABX tool, running in an xterm.

I personally don't do any quality comparison tests during development, no matter how casual, without an ABX tool. Science is science, no slacking.

Loudness tricks

The human ear can consciously discriminate amplitude differences of about 1dB, and experiments show subconscious awareness of amplitude differences under .2dB. Humans almost universally consider louder audio to sound better, and .2dB is enough to establish this preference. Any comparison that fails to carefully amplitude-match the choices will see the louder choice preferred, even if the amplitude difference is too small to consciously notice. Stereo salesmen have known this trick for a long time.

The professional testing standard is to match sources to within .1dB or better. This often requires use of an oscilloscope or signal analyzer. Guessing by turning the knobs until two sources sound about the same is not good enough.
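Those tolerances are smaller than they might sound. A couple of lines of Python show the amplitude ratios involved (the dB values are the ones quoted above):

    # Convert a level difference in dB to an amplitude ratio.
    for db in (0.1, 0.2, 1.0):
        ratio = 10 ** (db / 20)
        print(f"{db:>4} dB  ->  {ratio:.4f}x  ({(ratio - 1) * 100:.2f}% louder)")

A 0.2dB mismatch is only about a 2% difference in amplitude, yet it's enough to establish a preference for the louder source.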

Clipping

Clipping is another easy mistake, sometimes obvious only in retrospect. Even a few clipped samples or their aftereffects are easy to hear compared to an unclipped signal.

The danger of clipping is especially pernicious in tests that create, resample, or otherwise manipulate digital signals on the fly. Suppose we want to compare the fidelity of 48kHz sampling to a 192kHz source sample. A typical way is to downsample from 192kHz to 48kHz, upsample it back to 192kHz, and then compare it to the original 192kHz sample in an ABX test [21]. This arrangement allows us to eliminate any possibility of equipment variation or sample switching influencing the results; we can use the same DAC to play both samples and switch between without any hardware mode changes.

Unfortunately, most samples are mastered to use the full digital range. Naive resampling can and often will clip occasionally. It is necessary to either monitor for clipping (and discard clipped audio) or avoid clipping via some other means such as attenuation.
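One cautious way to handle it is sketched below with scipy; the 3dB headroom figure is an arbitrary choice, and roundtrip_48k is just an illustrative helper, not part of any real test rig.

    import numpy as np
    from scipy.signal import resample_poly

    def roundtrip_48k(x_192k, headroom_db=3.0):
        """Downsample 192kHz audio to 48kHz and back up, with attenuation so that
        overshoot from the resampling filters cannot clip. A sketch, not a mastering tool."""
        x = x_192k * 10 ** (-headroom_db / 20)        # pre-attenuate for headroom
        down = resample_poly(x, up=1, down=4)
        up = resample_poly(down, up=4, down=1)
        if max(np.max(np.abs(down)), np.max(np.abs(up))) >= 1.0:
            raise ValueError("still clipping after attenuation; discard or attenuate further")
        return up

    # The untouched 192kHz reference must get the same attenuation,
    # or the comparison turns into the loudness trap described above.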

Different media, different master

I've run across a few articles and blog posts that declare the virtues of 24 bit or 96/192kHz by comparing a CD to an audio DVD (or SACD) of the 'same' recording. This comparison is invalid; the masters are usually different.

Inadvertent cues

Inadvertent audible cues are almost inescapable in older analog and hybrid digital/analog testing setups. Purely digital testing setups can completely eliminate the problem in some forms of testing, but also multiply the potential of complex software bugs. Such limitations and bugs have a long history of causing false-positive results in testing [22].

The Digital Challenge - More on ABX Testing tells a fascinating story of a specific listening test conducted in 1984 to rebut audiophile authorities of the time who asserted that CDs were inherently inferior to vinyl. The article is not concerned so much with the results of the test (which I suspect you'll be able to guess), but with the processes and real-world messiness involved in conducting such a test. For example, an error on the part of the testers inadvertently revealed that an invited audiophile expert had not been making choices based on audio fidelity, but rather by listening to the slightly different clicks produced by the ABX switch's analog relays!

Anecdotes do not replace data, but this story is instructive of the ease with which undiscovered flaws can bias listening tests. Some of the audiophile beliefs discussed within are also highly entertaining; one hopes that some modern examples are considered just as silly 20 years from now.

Finally, the good news

What actually works to improve the quality of the digital audio to which we're listening?

Better headphones

The easiest fix isn't digital. The most dramatic possible fidelity improvement for the cost comes from a good pair of headphones. Over-ear, in ear, open or closed, it doesn't much matter. They don't even need to be expensive, though expensive headphones can be worth the money.

Keep in mind that some headphones are expensive because they're well made, durable and sound great. Others are expensive because they're $20 headphones under a several hundred dollar layer of styling, brand name, and marketing. I won't make specific recommendations here, but I will say you're not likely to find good headphones in a big box store, even if it specializes in electronics or music. As in all other aspects of consumer hi-fi, do your research (and caveat emptor).

Lossless formats

It's true enough that a properly encoded Ogg file (or MP3, or AAC file) will be indistinguishable from the original at a moderate bitrate.

But what of badly encoded files?

Twenty years ago, all mp3 encoders were really bad by today's standards. Plenty of these old, bad encoders are still in use, presumably because the licenses are cheaper and most people can't tell or don't care about the difference anyway. Why would any company spend money to fix what it's completely unaware is broken?

Moving to a newer format like Vorbis or AAC doesn't necessarily help. For example, many companies and individuals used (and still use) FFmpeg's very-low-quality built-in Vorbis encoder because it was the default in FFmpeg and they were unaware how bad it was. AAC has an even longer history of widely-deployed, low-quality encoders; all mainstream lossy formats do.

Lossless formats like FLAC avoid any possibility of damaging audio fidelity [23] with a poor quality lossy encoder, or even by a good lossy encoder used incorrectly.

A second reason to distribute lossless formats is to avoid generational loss. Each reencode or transcode loses more data; even if the first encoding is transparent, it's very possible the second will have audible artifacts. This matters to anyone who might want to remix or sample from downloads. It especially matters to us codec researchers; we need clean audio to work with.

Better masters

The BAS test I linked earlier mentions as an aside that the SACD version of a recording can sound substantially better than the CD release. It's not because of increased sample rate or depth but because the SACD used a higher-quality master. When bounced to a CD-R, the SACD version still sounds as good as the original SACD and better than the CD release because the original audio used to make the SACD was better. Good production and mastering obviously contribute to the final quality of the music [24].

The recent coverage of 'Mastered for iTunes' and similar initiatives from other industry labels is somewhat encouraging. What remains to be seen is whether or not Apple and the others actually 'get it' or if this is merely a hook for selling consumers yet another, more expensive copy of music they already own.

Surround

Another possible 'sales hook', one I'd enthusiastically buy into myself, is surround recordings. Unfortunately, there's some technical peril here.

Old-style discrete surround with many channels (5.1, 7.1, etc) is a technical relic dating back to the theaters of the 1960s. It is inefficient, using more channels than competing systems. The surround image is limited, and tends to collapse toward the nearer speakers when a listener sits or shifts out of position.

We can represent and encode excellent and robust localization with systems like Ambisonics. The problems are the cost of reproduction equipment and the fact that a recording encoded for a natural soundfield both sounds bad when mixed down to stereo and can't be convincingly created artificially. It's hard to fake ambisonics or holographic audio, sort of like how 3D video always seems to degenerate into a gaudy gimmick that reliably makes 5% of the population motion sick.

Binaural audio is similarly difficult. You can't simulate it because it works slightly differently in every person. It's a learned skill tuned to the self-assembling system of the pinnae, ear canals, and neural processing, and it never assembles exactly the same way in any two individuals. People also subconsciously shift their heads to enhance localization, and can't localize well unless they do. That's something that can't be captured in a binaural recording, though it can to an extent in fixed surround.

These are hardly impossible technical hurdles. Discrete surround has a proven following in the marketplace, and I'm personally especially excited by the possibilities offered by Ambisonics.

Outro

"I never did care for music much.
It's the high fidelity!"
     —Flanders & Swann, A Song of Reproduction

The point is enjoying the music, right? Modern playback fidelity is incomprehensibly better than the already excellent analog systems available a generation ago. Is the logical extreme any more than just another first world problem? Perhaps, but bad mixes and encodings do bother me; they distract me from the music, and I'm probably not alone.

Why push back against 24/192? Because it's a solution to a problem that doesn't exist, a business model based on willful ignorance and scamming people. The more that pseudoscience goes unchecked in the world at large, the harder it is for truth to overcome truthiness... even if this is a small and relatively insignificant example.

"For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring."
     —Carl Sagan

Further reading

Readers have alerted me to a pair of excellent papers of which I wasn't aware before beginning my own article. They tackle many of the same points I do in greater detail.

  • Coding High Quality Digital Audio by Bob Stuart of Meridian Audio is beautifully concise despite its greater length. Our conclusions differ somewhat (he takes as given the need for a slightly wider frequency range and bit depth without much justification), but the presentation is clear and easy to follow. [Edit: I may not agree with many of Mr. Stuart's other articles, but I like this one a lot.]

  • Sampling Theory For Digital Audio [Updated link 2012-10-04] by Dan Lavry of Lavry Engineering is another article that several readers pointed out. It expands my two pages or so about sampling, oversampling, and filtering into a more detailed 27 page treatment. Worry not, there are plenty of graphs, examples and references.

Stephane Pigeon of audiocheck.net wrote to plug the browser-based listening tests featured on his web site. The set of tests is relatively small as yet, but several were directly relevant in the context of this article. They worked well and I found the quality to be quite good.


—Monty (monty@xiph.org) March 1, 2012
last revised March 25, 2012 to add improvements suggested by readers.
[Edits and corrections made after this date are marked inline, except for spelling errors
spotted on Dec 30, 2012 and March 15, 2014, and an extra 'is' removed on April 1, 2013]

Monty's articles and demo work are sponsored by Red Hat Emerging Technologies.
(C) Copyright 2012 Red Hat Inc. and Xiph.Org
Special thanks to Gregory Maxwell for technical contributions to this article

Don’t Fall for Babylonian Trigonometry Hype


You may have seen headlines about an ancient Mesopotamian tablet. “Mathematical secrets of ancient tablet unlocked after nearly a century of study,” said the Guardian. “This mysterious ancient tablet could teach us a thing or two about math,” said Popular Science, adding, “Some researchers say the Babylonians invented trigonometry—and did it better.” National Geographic was a bit more circumspect: “A new study claims the tablet could be one of the oldest contributions to the study of trigonometry, but some remain skeptical.” Daniel Mansfield and Norman Wildberger certainly did a good job selling their new paper in the generally more staid journal Historia Mathematica. I’d like to help separate fact from speculation and outright nonsense when it comes to this new paper.

What is Plimpton 322?

Plimpton 322, the tablet in question, is certainly an alluring artifact. It’s a broken piece of clay roughly the size of a postcard. It was filled with four columns of cuneiform numbers around 1800 BCE, probably in the ancient city of Larsa (now in Iraq) and was removed in the 1920s. George Plimpton bought it in 1922 and bequeathed it to Columbia University, which has owned it since 1936. Since then, many scholars have studied Plimpton 322, so any picture you might have of Mansfield and Wildberger on their hands and knees in a hot, dusty archaeological site, or even rummaging through musty, neglected archives and unearthing this treasure is inaccurate. We’ve known about the artifact and what was on it for decades. The researchers claim to have a new interpretation of how the artifact was used, but I am skeptical. 

Scholars have known since the 1940s that Plimpton 322 contains numbers involved in Pythagorean triples, that is, integer solutions to the equation a² + b² = c². For example, 3-4-5 is a Pythagorean triple because 3² + 4² = 9 + 16 = 25 = 5². August 15 of this year was celebrated by some as "Pythagorean Triple Day" because 8-15-17 is another, slightly sexier, such triple.

The far right column consists of the numbers 1 through 15, so it's just an enumeration. The two middle columns of Plimpton 322 contain one side and the hypotenuse of a Pythagorean triangle, or a and c in the equation a² + b² = c². (Note that a and b are interchangeable.) But these are a little brawnier than the Pythagorean triples you learn in school. The first entries are 119 and 169, corresponding to the Pythagorean triple 119² + 120² = 169². The far left column is a ratio of squares of the sides of the triangles. Exactly which sides depends slightly on what is contained in the missing shard from the left side of the artifact, but it doesn't make a huge difference. It's either the square of the hypotenuse divided by the square of the remaining leg or the square of one leg divided by the square of the other leg. In modern mathematical jargon, these are squares of either the tangent or the secant of an angle in the triangle.
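To make the first row concrete, here is a minimal sketch (my own illustration, not part of the tablet scholarship) that verifies the 119-120-169 triple and computes both candidate readings of the damaged left column as exact fractions:

from fractions import Fraction

# First row of Plimpton 322: one leg and the hypotenuse.
a, c = 119, 169
b = int((c**2 - a**2) ** 0.5)       # the remaining leg, 120
assert a**2 + b**2 == c**2          # 14161 + 14400 = 28561

# The two candidate readings of the broken left column,
# depending on what the missing shard contained:
sec_squared = Fraction(c**2, b**2)  # (hypotenuse / leg)^2   -> 28561/14400
tan_squared = Fraction(a**2, b**2)  # (one leg / other leg)^2 -> 14161/14400
print(sec_squared, tan_squared)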

We can interpret one of the columns as containing trigonometric functions, so in some sense, it is a trig table. But despite what the headlines would have you believe, people have known that for decades. The mystery is what purpose the tablet served in its time. Why was it created? Why were those particular triangles included in the table? How were the columns computed? In a 1980 paper titled “Sherlock Holmes in Babylon,” R. Creighton Buck implied that through mathematics and cunning observation, one could sleuth out the meaning of the tablet and offered an explanation he thought fit the data. But Eleanor Robson, in “Neither Sherlock Holmes nor Babylon,” writes, "Ancient mathematical texts and artefacts, if we are to understand them fully, must be viewed in the light of their mathematico-historical context, and not treated as artificial, self-contained creations in the style of detective stories." It's arrogant and will probably lead to incorrect conclusions to look at ancient artifacts primarily through the lens of our modern understanding of mathematics.

What did it do?

There are a few theories about how Plimpton 322 was created and used by the person or people who made it. Mansfield and Wildberger are not the first to believe it's some sort of trig table. On the other hand, some believe it links the Pythagorean theorem (known by these ancient Mesopotamians and many other civilizations long before Pythagoras) with the method of completing the square to solve a quadratic equation, a common problem in mathematical texts from that time and place. Some believe the triples were generated using different numbers not included in the table in a "number theoretic" way. Some believe the numbers came from so-called reciprocal pairs that were used for multiplication. Some think the tablet was a pedagogical tool, perhaps a source of exercises for students. Some believe it was used in something more like original mathematical research. Academic but readable information about these interpretations can be found in articles by Buck in 1980, Robson in 2001 and 2002, and John P. Britton, Christine Proust, and Steve Shnider from 2011.

If it is a trigonometry table, is it better than modern trigonometry tables?

Mansfield and Wildberger’s contribution to scholarship on Plimpton 322 seems to be speculation that the artifact could be used to do trigonometry in a more exact way than we do now. In a publicity video by UNSW that must have accompanied the press releases sent to many math and science journalists (but not to me—what gives, UNSW?), Mansfield makes the claims that this table is “superior in some ways to modern trigonometry” and the “only completely accurate trigonometry table.”

It’s hard to know where to start with this part of their claims. For one, the tablet contains some well-known errors, so claims that it is the most accurate or exact trig table ever are just not true. But even a corrected version of Plimpton 322 would not be a revolutionary replacement for modern trig tables. 

If you, like me, did not grow up using trig tables, they are fantastic tools when you don’t have a computer that does calculations with 10 digits of accuracy in a split second. A trig table would include columns with the sine, cosine, tangent, and possibly other trigonometric functions of angles. Someone or a group of someones would do these painstaking computations by hand, and then you could just look up the value when, say, cos(24°) came up in a computation. Today, computers generally use formulas for trig functions rather than calling up a list of all the values, and humans don’t need to know many values at all. These formulas are based on calculus and can be as precise as necessary. Need the correct answer to 50 digits? Your computer can do it, probably pretty quickly.
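As a toy illustration (my own, using Python's standard library rather than any particular published table), generating a small modern-style trig table and looking up the entry for 24° takes only a few lines:

import math

# A tiny modern-style trig table at whole-degree steps.
table = {deg: (math.sin(math.radians(deg)), math.cos(math.radians(deg)))
         for deg in range(0, 91)}

sin24, cos24 = table[24]
print(f"cos(24 deg) is roughly {cos24:.10f}")  # the computer just evaluates a formula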

If you memorized “soh cah toa” or a mnemonic about “some old hippie,” you might remember that the basic trig functions are ratios of side lengths of triangles. The sine of an angle is the opposite side divided by the hypotenuse, the cosine is the adjacent divided by the hypotenuse, and the tangent is the opposite divided by the adjacent. The values of trig functions of most angles are not rational numbers. They can’t be written as the ratio of two whole numbers, so the entries you’ll find in trig tables are truncated after some number of decimal points. Mansfield and Wildberger seem to have homed in on the observation that when the side lengths of a right triangle are all integers, these ratios are all rational. Plimpton 322 is an “exact” trigonometric table because it only has trig functions based on triangles that have integer side lengths. (And in fact, the creator of the table set it up so the denominators of all the fractions are easy to represent in base 60.) 

Modern trig tables are based on angles that increase at a steady rate. They might give the sines of 1°, 2°, 3°, and so on, or 0.1°, 0.2°, 0.3°, and so on, or even finer gradations of angles. Because, like other ancient Mesopotamians, the people who produced Plimpton 322 thought of triangles in terms of side lengths rather than angles, the angles in the table do not change steadily. That's the difference between the tablet, read as a trig table, and modern trig tables. Neither way is inherently superior. If we wanted to make modern trig tables with angles that had only rational trig functions, we could, but it wouldn't make computations dramatically more accurate. Either way, we could get the accuracy we needed for any particular application.

A little digging shows that Wildberger has a pet idea called “rational trigonometry.” He seems to be somewhat skeptical of things involving infinity, including irrational numbers, which have infinite, nonrepeating decimal representations. From a cursory reading of a chapter he’s written on rational trigonometry, I don’t see anything blatantly wrong with the theory, but it seems like a solution to a problem that doesn’t exist. The fact that most angles have irrational sines, cosines, and tangents doesn’t bother the vast majority of mathematicians, physicists, engineers, and others who use trig. It’s hard not to see their work on Plimpton 322 as motivated by a desire to legitimize an approach that has almost no traction in the mathematical community.

Is base 60 better than base 10?

Perhaps the utility of different types of trig tables is a matter of opinion, but the UNSW video also has some outright falsehoods about accuracy in base 60 versus the base 10 system we now use. Around the 1:10 mark, Mansfield says, “We count in base 10, which only has two exact fractions: 1/2, which is 0.5, and 1/5.” My first objection is that any fraction is exact. The number 1/3 is precisely 1/3. Mansfield makes it clear that what he means by 1/3 not being an exact fraction is that it has an infinite (0.333…) rather than a terminating decimal. But what about 1/4? That’s 0.25, which terminates, and yet Mansfield doesn’t consider it an exact fraction. And what about 1/10 or 2/5? Those can be written 0.1 and 0.4, which seem pretty exact. 

Indefensibly, when he lauds the many "exact fractions" available in base 60, he doesn't apply the same standards. In base 60, 1/8 would be written 7/60 + 30/3600, which is the same idea as writing 0.25, or 2/10 + 5/100, for 1/4 in base 10. Why is 1/8 exact in base 60 but 1/4 not exact in base 10? It's hard to believe this is an honest mistake coming from a mathematician and instead makes me even more suspicious that his work is motivated by an agenda.
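To see the two bases side by side, here is a small sketch (again my own illustration, not from the video) that expands a fraction into positional digits in a given base; 1/4 terminates in base 10 just as 1/8 terminates in base 60, while 1/3 terminates in base 60 but not in base 10:

from fractions import Fraction

def digits(frac, base, places=4):
    """Positional digits of a fraction 0 <= frac < 1 in the given base."""
    out = []
    for _ in range(places):
        frac *= base
        digit = int(frac)
        out.append(digit)
        frac -= digit
        if frac == 0:
            break
    return out

print(digits(Fraction(1, 4), 10))  # [2, 5]       -> 0.25
print(digits(Fraction(1, 8), 60))  # [7, 30]      -> 7/60 + 30/3600
print(digits(Fraction(1, 3), 10))  # [3, 3, 3, 3] -> never terminates
print(digits(Fraction(1, 3), 60))  # [20]         -> 20/60, terminates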

Plimpton 322 is a remarkable artifact, and we have much to learn from it. When I taught math history, I loved opening the semester by having my students read a few papers about it to show how much scholarship has gone into understanding such a small document and how accomplished scholars can disagree about what it means. It demonstrates differences in the way different cultures have done mathematics and outstanding computational facility. It has raised questions about how ancient Mesopotamians approached calculation and geometry. But using it to sell a questionable pet theory won’t get us any closer to the answers.


ARCore: Augmented reality at Android scale


Alongside ARCore, we’ve been investing in apps and services which will further support developers in creating great AR experiences. We built Blocks and Tilt Brush to make it easy for anyone to quickly create great 3D content for use in AR apps. As we mentioned at I/O, we’re also working on Visual Positioning Service (VPS), a service which will enable world scale AR experiences well beyond a tabletop. And we think the Web will be a critical component of the future of AR, so we’re also releasing prototype browsers for web developers so they can start experimenting with AR, too. These custom browsers allow developers to create AR-enhanced websites and run them on both Android/ARCore and iOS/ARKit.

ARCore is our next step in bringing AR to everyone, and we’ll have more to share later this year. Let us know what you think through GitHub, and check out our new AR Experiments showcase where you can find some fun examples of what’s possible. Show us what you build on social media with #ARCore; we’ll be resharing some of our favorites.

The Software Engineering Rule of 3


Here’s a dumb extremely accurate rule I’m postulating for software engineering projects: you need at least 3 examples before you solve the right problem.

This is what I’ve noticed:

  1. Don’t factor out shared code between two classes. Wait until you have at least three.
  2. The first two attempts to solve a problem will fail because you misunderstood the problem. The third time it will work.
  3. Any attempt at being smart earlier will end up overfitting to coincidental patterns.

(Note that #1 and #2 are actually pretty different implications. But let’s get back to that later.)

What’s he talking about? Example plz

Let’s say you’re implementing a class that scrapes data from banks. This is an extremely dumbed down version, but should illustrate the point:

import requests

class ChaseScraper:
    def __init__(self, username, password):
        self._username = username
        self._password = password

    def scrape(self):
        session = requests.Session()
        session.get('https://chase.com/rest/login.aspx',
                    data={'username': self._username,
                          'password': self._password})
        session.get('https://chase.com/rest/download_current_statement.aspx')

Now, you want to add a second class CitibankScraper that implements the same interface but changes a few implementation details. In fact, let's say the only changes are that Citibank has different URLs and that its form elements have slightly different names. So we add a new scraper:

class CitibankScraper:
    def __init__(self, username, password):
        self._username = username
        self._password = password

    def scrape(self):
        session = requests.Session()
        session.get('https://citibank.com/cgi-bin/login.pl',
                    data={'user': self._username,
                          'pass': self._password})
        session.get('https://citibank.com/cgi-bin/download-stmt.pl')

At this point, after many years of being taught that we need to keep it "DRY" (don't repeat yourself), we go ermahgerd, cerd derplication!!! and factor out everything into a base class. In this case it means inverting the control and letting the base class take over the control flow:

class BaseScraper:
    def __init__(self, username, password):
        self._username = username
        self._password = password

    def scrape(self):
        session = requests.Session()
        session.get(self._LOGIN_URL,
                    data={self._USERNAME_FORM_KEY: self._username,
                          self._PASSWORD_FORM_KEY: self._password})
        session.get(self._STATEMENT_URL)


class ChaseScraper(BaseScraper):
    _LOGIN_URL = 'https://chase.com/rest/login.aspx'
    _STATEMENT_URL = 'https://chase.com/rest/download_current_statement.aspx'
    _USERNAME_FORM_KEY = 'username'
    _PASSWORD_FORM_KEY = 'password'


class CitibankScraper(BaseScraper):
    _LOGIN_URL = 'https://citibank.com/cgi-bin/login.pl'
    _STATEMENT_URL = 'https://citibank.com/cgi-bin/download-stmt.pl'
    _USERNAME_FORM_KEY = 'user'
    _PASSWORD_FORM_KEY = 'pass'

This would let us remove a lot of lines of code. It’s one of the most compact ways we can implement these two bank statement providers here. So what’s wrong with this code? (Apart from the general antipattern of implementation inheritance).

The problem is we’re overfitting massively to a pattern here! What do I mean with overfitting? We’re seeing patterns that really don’t generalize well.

To see this, let’s say we add a third provider that is slightly different. Maybe it’s one or more of the following:

  • It requires 2-factor authentication
  • Credentials are sent using JSON
  • Login is a POST rather than a GET
  • It requires visiting multiple pages in a row
  • The statement url is generated dynamically based on the current date

… or whatever, there are another 1000 ways this could break down. I hope you see the problem here. We thought we had a pattern after the first two scrapers! It turns out there really wasn't that much that generalized to the third provider (and more generally, to the nth). In other words, we overfit.
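As a sketch of how the abstraction breaks (a hypothetical third bank, not one from the original post), a provider that POSTs JSON credentials and builds its statement URL from the current date can't reuse any of BaseScraper's hooks; it has to override scrape() wholesale, at which point the base class buys us nothing:

import datetime
import requests

class HypotheticalBankScraper(BaseScraper):
    # None of the class-level URL/form-key hooks apply here, so the
    # template method in BaseScraper gets thrown away entirely.
    def scrape(self):
        session = requests.Session()
        session.post('https://example-bank.com/api/login',  # POST, not GET
                     json={'user': self._username, 'pass': self._password})
        today = datetime.date.today()
        statement_url = ('https://example-bank.com/api/statements/'
                         f'{today:%Y-%m}')                   # URL built dynamically
        session.get(statement_url)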

What does Erik mean by overfitting?

So overfitting is a term for when we see patterns in data and those patterns don't generalize. When coding, we're often so hyper-vigilant about optimizing for code deduplication that we detect incidental patterns that may not be representative of the full breadth of patterns we would see if we knew all the different applications. So after implementing two bank scrapers we see a pattern that we think applies more generally, but really it doesn't.

Note that code duplication isn’t always such a bad thing. Engineers often focus way too much on reducing duplicated code. But care has to be taken to distinguish between code duplication that’s incidental versus code duplication that’s systemic.

Thus, let me introduce the first rule of 3. Don’t worry so much about code duplication if you only have two classes or two functions or whatever. When you see a pattern in three different places, it’s worth thinking about how to factor it out.

Rule of 3 as applied to architecture

The same reasoning applies to system design but with a very different conclusion. When you build a new system from scratch, and you have no idea about how it's eventually going to be used, don't get too attached to assumptions. The constraints you think you really need for the 1st and the 2nd implementations seem absolutely crucial, but you're going to realize that you got it all wrong and the 3rd implementation is really the one where most of the things are right. OK, these are obviously all extreme blanket statements. Don't use my advice for brain surgery or nuclear fission.

As an example, Luigi was the third attempt at solving the problem. The first two attempts solved the wrong problem or optimized for the wrong thing. For instance, the first iteration relied on specifying the dependency graph in XML. But this turned out to be super annoying because you really want the ability to build the dependency graph programmatically. Conversely, a bunch of things in the first two attempts that seemed really useful, like decoupling outputs from tasks, ended up adding far more complexity only to support some obscure edge cases.

What would have seemed like obscure niche cases in the first iteration became very central in the final iteration, and vice versa.

I was reminded of this when we built an email ingestion system at Better. The first attempt failed because we built it in a poor way (basically shoehorning it into a CRUD request). The second one had a solid microservice design but failed for usability reasons (we built a product that no one really asked for). We’re halfway through the third attempt and I’m having a good feeling about it.

These stories illustrate the second rule of 3— you’re not going to get the system design right until the third time you build it.

More importantly, if you are building the first implementation of some hairy unknown problem, don’t assume you’re going to nail it. Take shortcuts. Hack around nasty problems. You’re probably not going to keep this system anyway — at some point it’s going to break. And then the second version breaks most of the time. The third though — that’s when you perfect it.

Erik Bernhardsson

... is the CTO at Better, which is a startup changing how mortgages are done. I write a lot of code, some of which ends up being open sourced, such as Luigi and Annoy. I also co-organize NYC Machine Learning meetup. You can follow me on Twitter or see some more facts about me.

Show HN: Find the CSS selector for a set of elements, from examples


JavaScript utils to generalize a set of CSS selectors to a single selector that matches them all. Useful for mapping the structure of web apps.

running the test

If you haven't already, install beefy and browserify (just for convenience, to serve the test page):

npm install -g browserify beefy

Then go to the directory where you checked out this code and run it:

beefy test.js 8080

You can then visit http://<your ip | localhost>:8080 in your browser (the page was tested in Chrome).

Open console / devtools on that page to get a view of what's happening.

Then, to add some elements to the set of examples, just click on them (don't worry, links won't navigate). When you want to see the generalized selector, click 'generalize'. When you want to clear the example set, click 'clear'.

If you notice something weird, open an issue! 👍

Dev Notes

  • Wednesday, August 16, 2017
  • All the current open issues are improvements. Some are improvements to :any; others are general improvements.
  • None of these can currently be done because:
  1. I need to think more about :any, and
  2. The other improvements either also need more thought, or depend on some human interface element being made (for example, the composition issue).

The next 10B years


It is not surprising that we find the future fascinating; after all, we are all going there. But the future is never what it used to be and it is said that predictions are always difficult, especially those dealing with the future. Nevertheless, it is possible to study the future, which is something different from predicting it. It is an exercise called "scenario building". Here, let me try a telescopic sweep of scenario building that starts from the remote past and takes us to the remote future over a total range of 20 billion years. While the past is what it was, our future bifurcates into two scenarios: one "good" and the other "bad", all depending on what we'll be doing in the coming years.

The past 10 billion years

- 10 billion years ago. The universe is young, less than four billion years old. But it already looks the way it will be for many billions of years: galaxies, stars, planets, black holes and much more. 

- 1 billion years ago. From the debris of ancient supernovas, the solar system has formed around a second generation star, the Sun, about 4.5 billion years ago. The planets that form the system are not very different from those we see today. Earth has blue oceans, white clouds and dark brown continents. But there are no plants or animals on the continents, nor fish in the water. Life is all unicellular in the oceans, but its activity has already changed a lot of things: the presence of oxygen in the atmosphere is a result of the ongoing photosynthesis activity. 

- 100 million years ago. Plenty of things have been happening on planet Earth. Starting about 550 million years ago, perhaps as a result of the ice age known as "snowball Earth," multicellular life forms have appeared. First, only in the oceans; then, about 400 million years ago, life has colonized the surfaces of the continents, creating lush forests and large animals that have populated the Earth for hundreds of millions of years. That wasn't uneventful, though. Life nearly went extinct when, 245 million years ago, a giant volcanic eruption in the region we call Siberia today generated the largest known extinction of Earth's history. But the biosphere managed to survive and regrow into the Cretaceous period, the age of dinosaurs.

- 10 million years ago. The age of dinosaurs is over. They have been wiped out by a new mass extinction, caused probably by a giant asteroid which hit the Earth 65 million years ago. Again, the biosphere has survived and now it prospers again, populated with mammals and birds; including primates. We are in the Miocene period and the Earth has been cooling down over a period of several million years, possibly as the result of the Indian subcontinent having hit Asia and created the Himalayas. That has favored CO2 removal from the atmosphere by weathering and has lowered temperatures. Icecaps have formed both at the North and the South poles for the first time in several hundred million years.

- 1 million years ago. The Earth has considerably cooled down during the period that we call the "Pleistocene" and it is now undergoing a series of ice ages and interglacials. Ice ages last for tens of thousands of years, whereas the interglacials are relatively short hot spells, a few thousand years long. These climatic oscillations are perhaps the element that stimulates the evolution of some primate species which have developed bipedal locomotion. One million years ago, Homo erectus and Homo habilis can use fire and make simple stone tools.

- 100,000 years ago. The glacial/interglacial cycles continue. The hot spell called the "Eemian" period, about 114,000 years ago, has been short-lived and has given way to one of the harshest known glaciations of the recent Earth's history. But humans survive. In Europe, the Neanderthals rule, while the species that we call "Homo sapiens" already exists in Africa.

- 10,000 years ago. The ice age ends abruptly to give rise to a new interglacial, the period that we call the "Holocene." The Neanderthals have disappeared, pushed over the edge of survival by their "Sapiens" competitors. Climate stabilizes enough for humans to start to practice agriculture in the fertile valleys along the tropical region of Africa and Eurasia, from Egypt to China.

- 1000 years ago. The agricultural age has given rise to the age of empires, fighting for domination of large geographical areas. The human population has been rapidly growing, with the start of a series of cycles of growth and collapse that derive from the overexploitation of the fertile soil. 1000 years ago, the Western World is coming back from one of these periodic collapses and is expanding again during the period we call "Middle Ages".

- 100 years ago. The age of coal has started and has been ongoing for at least two centuries. With it, the industrial revolution has come. Coal and crude oil are the fuels that create a tremendous expansion of humankind in numbers and power. 100 years ago, there are already more than a billion humans on the planet and the population is rapidly heading for the two billion mark. Pollution is still a minor problem that goes largely unrecognized. The concentration of carbon dioxide in the atmosphere has been rising toward 300 ppm, up from the pre-industrial level of about 270 ppm. This fact is noted by some human scientists, but the long term consequences are not understood.

- 10 years ago. The fossil fuels which have created the industrial age are starting to show signs of depletion and the same is true also for most mineral commodities. The attempt to replace fossil fuels with uranium has not been successful because of the difficulties involved in controlling the technology. Energy production is still increasing, but it shows signs of slowing down. The human population has reached 6 billion and keeps growing, but at reduced rates of growth. The Earth's agricultural system is in full overshoot and the population can only be fed by means of an agricultural-industrial complex based on fossil fuels. The concentration of CO2 in the atmosphere has been growing fast and is now about 370 ppm. Global temperatures have been rising, too. The problem of global warming has been recognized and considerable efforts are being made to reduce the emission of CO2 and of other greenhouse gases.  

Today. The world's industrial system seems to be close to stopping its growth and the financial system has been going through a series of brutal collapses. The production of crude oil has been stable during the past few years, but the overall energy production is still increasing because of the rapid growth of coal production. The political situation is chaotic, with continuously erupting minor wars. The human population has reached seven billion. The climate system seems to be on the verge of collapse, with a rapid increase in natural catastrophes all over the world and the near disappearance of the ice cap at the North Pole. The concentration of CO2 in the atmosphere is almost 400 ppm and keeps increasing.

The future in two scenarios

1. The "bad" scenario

10 years from now. In 2020, the production of "conventional" crude oil has started a historical trend of decline, but an enormous effort has been made to replace it with liquids produced from non-conventional sources. Tar sands, shale oil, and other "heavy" oil sources, as well as biofuels, are being produced in amounts sufficient to stave off the decline. Natural gas production is in decline, but large investments in "shale gas" have so far avoided collapse. Uranium, too, has become scarce and several countries which don't have national resources have been forced to close down some of their nuclear plants. These trends are partially compensated by the still increasing production of coal, which is also used to produce liquid fuels and other chemicals that once had been obtained from oil. The growth of renewable energy has stalled: there are no more resources to invest in research and development of new technologies and new plants, while a propaganda campaign financed by the oil industry has convinced the public that renewable sources produce no useful energy and are even harmful for the environment. Another propaganda campaign financed by the same lobbies has stopped all attempts at reducing the emissions of greenhouse gases. As a result, agriculture has been devastated by climate change and by the high costs of fertilizers and mechanization. The human population starts an epochal reversal of its growing trend, decimated also by the increasing fraction of fertile land dedicated to biofuels.

100 years from now. In 2100, the human economic system has collapsed and the size of the economy is now a small fraction of what it had been at the beginning of the 21st century. Resource depletion has destroyed most of the industrial system, while climate change and the associated desertification - coupled with the destruction of the fertile soil - have reduced agriculture to a pale shadow of the industrial enterprise it had become. The collapse of agriculture has caused a corresponding population collapse; now under one billion people. Most tropical areas have been abandoned because global warming has made them too hot to be habitable by human beings. The rise in sea level caused by global warming has forced the abandonment of a large number of coastal cities, with incalculable economic damage. The economy of the planet has been further weakened by giant storms and climate disasters which have hit nearly every inhabited place. Crude oil is not extracted any more in significant amounts and where there still exist gas resources, it is impossible to transport them over long distances because of the decay of the pipeline network and the flooding of the ports. Only coal is still being extracted and coal-fired plants maintain electric power for a reduced industrial activity in several regions of the North of the planet. Labrador, Alaska, Scandinavia and Northern Siberia see the presence of remnants of the industrial society. Using coal liquefaction, it is still possible to obtain liquid fuels, mostly used for military purposes. The Earth still sees tanks and planes that exchange fire with each other.

1000 years from now. The industrial society is a thing of the past. Human-caused global warming has generated the release of methane hydrates, which have created even more warming. The stopping of the oceanic thermohaline currents has transformed most of the planet into a hot desert. Almost all large mammals are extinct. Humans survive only in the extreme fringes of land in the North of the planet and in the South, mainly in Patagonia. For the first time in history, small tribes of humans live on the rapidly de-frosting fringes of the Antarctic continent, living mainly off fishing. In some areas, it is still possible to extract coal and use it for a simple metallurgy that uses the remains of the metals that the 20th century civilization has left. Human beings are reduced to a few million people who keep battling each other using old muskets and occasional cannons.

10,000 years from now. Human beings are extinct, together with most vertebrates and trees. Planet Earth is still reeling from the wave of global warming that had started many thousands of years before. The atmosphere still contains large amounts of greenhouse gases generated by human activity and by the release of methane hydrates. The continents are mostly deserts, and the same is true for the oceans, reduced to marine deserts by the lack of oxygenating currents. Greenland is nearly ice-free and that's true also for Antarctica, which has lost most of its ice. Only bushes and small land vertebrates survive in the remote northern and southern fringes of the continents.

100,000 years from now. The planet is showing signs of recovery. Temperatures have stabilized and silicate erosion has removed a large fraction of the carbon dioxide that had accumulated in the atmosphere. Land animals and trees show some signs of recovery.

1 million years from now. The planet has partly recovered. The planetary tectonic cycles have re-absorbed most of the CO2 which had created the great burst of warming of long before. Temperature has gone down rapidly and polar ice caps have returned. The return of ice has restarted the thermohaline currents: oceanic waters are oxygenated again. Life, or at least those species that had survived the warming disaster, is thriving again and re-colonizing the tropical deserts, which are fast disappearing.

10 million years from now. Earth is again the lush blue-green planet it used to be, full of life, animals and forests. From the survivors of the great warming, a new explosion of life has been generated. There are again large herbivores and carnivores, as well as large trees, even though none of them looks like the creatures which had populated the Earth before the catastrophe. In Africa, some creatures start using chipped stones for hunting. In time, they develop the ability to create fire and to build stone structures. They develop agriculture, sea-going ships and ways of recording their thoughts using symbols. But they never develop an industrial civilization for lack of fossil fuels, all burned by humans millions of years before them.

100 million years from now. Planet Earth is again under stress. The gradual increase in solar irradiation is pushing the climate towards a new hot era. The same effect is generated by the gradual formation of a new supercontinent created by continental drift. Most of the land becomes a desert, and all intelligent creatures disappear. A general decline of vertebrates begins, as they are unable to survive on a progressively hotter planet.

1 billion years from now. The Earth has been sterilized by the increasing solar heat. Just traces of single celled life still survive underground.

10 billion years from now. The sun has expanded and it has become so large that it has absorbed and destroyed the Earth. Then, it has collapsed into a white dwarf. The galaxy and the whole universe move slowly to extinction with the running down of the energy generated by the primeval big bang.

_______________________________________________

2. The "good" scenario

Ten years from now. In 2020, fossil fuel depletion has generated a global decline of production. That, in turn, has led to international treaties directed to ease the replacement of fossil fuels with renewable energy. Treaties are also enacted with the purpose of minimizing the use of coal. The production and the use of biofuels is forbidden everywhere and treaties force producers to direct all the agricultural production towards food for humans. The existing nuclear plants make full use of the uranium in the warheads that had been accumulated during the cold war. Research on nuclear fusion continues, with the hope that it will provide useful energy in 50 years. Even with these actions, global warming continues and agriculture is badly damaged by droughts and erosion. Population growth stops and widespread famines occur. Governments enact fertility reduction measures in order to contain population. Nevertheless, the economy does not show signs of collapse, stimulated by the demand for renewable plants.

A hundred years from now. The measures taken at the beginning of the 21st century have borne fruit. Now, almost 1% of the surface of the planet is covered by solar panels of the latest generations, which produce energy with an efficiency of the order of 50%. In the north, wind energy is used, as well as energy from ocean currents, tides, and waves. The production of renewable electrical energy keeps growing and it has surpassed anything that was done in the past using primitive technologies based on fossil fuels. No such fuels are extracted any longer and doing so is considered a crime punishable with re-education. The industrial economy is undergoing rapid changes, moving to abandon the exploitation of dwindling resources of rare metals and using the abundant energy available to exploit the abundant elements of the Earth's crust. The human society is now completely based on electric energy, also for transportation. Electric vehicles move along roads and rails, electric ships move across the oceans and electric airships navigate the air. The last nuclear fission plants were closed around 2050 for lack of uranium fuel; they were not needed any more, anyway. Research on nuclear fusion continues with the hope that it will provide usable energy in 50 years. Despite the good performance of the economy, the ecosystem is still under heavy stress because of the large amounts of greenhouse gases emitted into the atmosphere during the past centuries. Agriculture is still reeling from the damage done by erosion and climate change. The human population is in rapid, but controlled, decline under the demographic measures enacted by governments. There are now fewer than 4 billion humans and famines are a thing of the past. With the returning prosperity, humans are restarting the exploration of space that they were forced to abandon at the start of the 21st century.

1000 years from now. In the year 3000 A.D. the ecosystems of the planet have completely recovered from the damage done by human activities during the second millennium. A sophisticated planetary control system manages solar irradiation by means of space mirrors and the concentration of greenhouse gases by means of CO2 absorbing/desorbing plants. The planet is managed as a giant garden, optimizing its biological productivity. The Sahara desert is now a forest and the thermohaline currents pump oxygen into the northern regions, full of life of all kinds. The solar and wind plants used during the previous millennium have been mostly dismantled, although some are still kept as a memory of the old times. Most of the energy used by humankind is now generated by space stations which capture solar energy and beam it to the ground in forms easily usable by humans. Research in controlled fusion energy continues with the hope that it will produce usable energy in 500 years. Humans now number fewer than one billion; they have optimized both their numbers and their energy use, and they need enormously less than they had needed in the more turbulent ages of one thousand years before. The development of artificial intelligence is in full swing and practically all tasks that once had been in the hands of humans are now in the "hands" of sophisticated robotic systems. These robots have colonized the solar system and humans now live in underground cities on the Moon. The new planetary intelligence starts considering the idea of terraforming Mars and Venus. The first antimatter-powered interstellar spaceships have started their travel to faraway stars.

10,000 years from now. There are now fewer than a billion human beings on Earth, who live in splendid cities immersed in the lush forest that the planet has become. Some of them work as a hobby on controlled nuclear fusion, which they hope will produce usable energy in a few thousand years. The New Intelligence has now started terraforming Mars. It involves methods similar to those used for controlling the Earth's climate: giant mirrors and CO2-producing plants that control the Martian atmosphere, increasing its pressure and temperature. The terraforming of Venus has also started with similar methods: giant screens that lower the planetary temperatures and immense flying plants that transform CO2 into oxygen and solid carbon. That will take a lot of time, but the New Intelligence is patient. It is also creating new races of solid state beings living on the asteroids and orbiting around the Sun. The exploration of the galaxy is in progress, with spaceships from the solar system now reaching a "sphere" of about a thousand light years from the sun.

100,000 years from now. About 500 million humans live on Earth - mostly engaged in art, contemplation, and living full human lives. Nobody knows any longer what "controlled nuclear fusion" could mean. Mars is now colonized by Earth's plants, which are helping to create an atmosphere suitable for life; it is now a green planet, covered with oceans and lush forests. Several million human beings live there, protected from cosmic radiation by the planetary magnetic field artificially generated by giant magnetic coils at the planet's poles. The temperature of Venus has been considerably lowered, although still not enough for life to take hold of its surface. The exploration of the galaxy is in full swing. Other galactic intelligences are encountered and contacted.

A million years from now. Venus, Earth and Mars are now lush and green, all three full of life. Mercury has been dismantled to provide material for transforming the solar system into a single intelligence system that links a series of creatures. There are statites orbiting around the sun, solid state lifeforms living on asteroids and remote moons, and ultra-resistant creatures engineered to live in the thick atmospheres of Jupiter and the other giant planets. Humans, living on the green planets, have become part of this giant solar network. The other extreme of the Galaxy has now been reached by probes coming from the solar system.

10 million years from now. The New Intelligence is expanding over the Galaxy. The green planets are now the site of evolution experiments: the Neanderthals now live on Mars, whereas dinosaurs have been recreated on a Venus where the planetary control system has recreated conditions similar to those of the Jurassic on Earth.

100 million years from now. Controlling temperatures on the three green planets of the Solar System has become a complex task because of the increasing solar radiation. Mirrors are not enough any more and it has been necessary to move the planets farther away from the sun, which is now the preferred method of climate control. The statites that form the main part of the solar intelligence now surround the sun almost completely in a series of concentric spheres.

A billion years from now. The solar radiation has increased so much that it has been necessary to move the green planets very far away. One year now lasts as long as 50 of the "natural" Earth years of long before. But these are not problems for the Solar Intelligence, now just a small part of the Galactic Intelligence. The three green planets are three jewels of the Solar System.

Ten billion years from now. The sun has collapsed into a weak white dwarf and all the planets that orbit around it are now frozen to death. The Galaxy has lost most of its suns and the universe is entering its last stage of expansion, which will lead it to become a frozen darkness. The Galactic Intelligence looks at a nearly dark galaxy. It is now the moment. The Intelligence says, "Let there be light." And there is light.

(this text was inspired by Isaac Asimov's story "The Last Question")

Rayton Solar: Legitimate Investment or Scam?

Bill Nye Pitching Rayton Solar

Rayton Solar, a company endorsed by Bill Nye the Science Guy, is asking regular Americans to invest millions of dollars. The Facebook and Instagram feeds of people who are interested in solar and clean energy are full of its ads.

We took a close look at this investment with two questions in mind:

  1. Is this a good investment?
  2. Will this help the solar industry and make renewable energy cheaper and easier to get?

So first, is this a good investment?

Our Answer: No — definitely not.

Investors are being asked by Rayton, and by Bill Nye, to make an investment on terms that are not fair and that almost no experienced investor would make in ANY company. We think this is unethical. Rayton has purposely offered "unfriendly terms" to investors that make it likely that the CEO would profit even when the company fails and investors lose all their money.

If you have already invested on StartEngine or another site, we recommend you attempt to withdraw your investment. We’d argue it’s also unethical for StartEngine to allow terms like these to be offered anywhere on its site.

What’s the problem? The strange investment terms Rayton purposefully offers allow the founders to take money from investors and spend it on themselves rather than invest it in Rayton. This is a big red flag. All money raised should go to build technology required to make Rayton a valuable company. It’s hard to overstate how offensive this is as an investor. In basic terms, this means the owners could take some of the money from you, the investor reading this, and buy a bunch of beach houses in Florida or more than 30 Teslas while Rayton fails.

The ability to do this is stated on the first page of their official investment filing:

“After each closing, funds tendered by investors will be available to the company and, after the company has sold $7,000,000 worth of Common Stock, selling securityholders will be permitted to sell up to $3,000,000 worth of Common Stock.”

So if Rayton sells $10 Million in stock, $3 Million of investor money can go straight to the CEO, Andrew Yakub, and to two holding companies, rather than going to build technology so Rayton can make money and repay investors. Those owners will still walk away with $3 Million dollars in cash from investors even if the company fails and investors lose everything.

While founders are sometimes allowed to sell equity in late investment rounds, it's unheard of for founders to do that at Rayton's stage. This is also a clue that Andrew Yakub and his team aren't confident in Rayton's technology.

Worse, it appears that Rayton is using investor money to buy more Facebook ads, to get more investors, so it can raise more money. If this is true, it’s like a ponzi scheme with money from early investors being spent to get money from new investors rather than on building Rayton technology.

The Second Major Red Flag: The price Rayton is asking for each share values the company higher than almost any other startup ever. It's a bad deal for investors.

A typical company at this stage with an unproven technology might be worth $2.5 Million in an early “Series A” investment round. If expert investors thought the technology was very promising and the company could earn large profits, professional investors might value it at $25 Million or perhaps a bit more.

Rayton is selling shares at a value “post money” between $60 Million and $267 Million (see page 14 of their investment circular). That means they claim that right now it could be worth over $200 Million.

Almost all companies at this early stage are worth less than $30 Million. Here is a graph of typical valuations at the same “series A” stage of ALL other companies that successfully raise funds. The graph doesn’t even go past $100 Million. How could Rayton, in good conscience, ask its investors to ever invest at twice that? Investors could pay 4 times as much as the early investors in Uber paid.

*As an aside, our country and world as a whole should be investing in the right solar, battery storage, electric vehicle, and wind companies, and asking our lawmakers to provide the same subsidies to those industries that are provided to other industries.

2. Will this help the solar industry? Our Answer: Almost certainly not.

The Rayton video misleadingly implies that using less silicon will change the solar industry. It’s important to consider the capital cost of the machines, higher operating costs, and the fact that moving to thinner silicon would require expensive changes elsewhere in the process of making a solar panel. It’s not clear that in the end Rayton would be cheaper. More importantly, using less silicon wouldn’t solve the key cost issues of solar. Solar panels are less than half the cost of a solar system and silicon, which Rayton says it saves, is only a small part of solar panel cost.

As an expert stated below in the comments, "there are proven technologies that are already more advanced …that will render this idea obsolete…. (such as)… a Gallium Arsenide (GaAs) thin film cell … with similar savings in material costs to what Rayton is proposing and which is already into its fourth generation of manufacturing processes. Additionally, the real issue with solar is that they are non-commodity products sold at commodity prices." See the full comment below.

Another statement, aggregated from several experts including user YMK1234:

If cutting anything bigger than a tiny sample via the proposed method actually worked, the result would be extremely fragile. A carrier material and probably a new process of panel creation would be required. The technical description doesn't address technical accuracy issues… but if they could really do what they claim, every microchip company could also use that technology, which would ensure they wouldn't need crowdfunding.

More concerning still is the fact that the challenges to wide scale solar adoption are now more around policy, grid infrastructure, and the cost of everything else from land to power inverters (“balance of system” in industry jargon). Silicon cost is not a major issue.

Finally, the fact that no reputable solar investors are listed as supporting Rayton should be a red flag. Rayton will not change the solar industry.

Unethical enough to take action:

Our research showed the extent to which investors are being deliberately misled by Rayton while Rayton raises millions of dollars from casual (and thus probably not wealthy) investors. This is so unethical that action needs to be taken to stop it.

  1. Bill Nye should speak up so more well-meaning people who want to support solar do not lose money. If Bill Nye doesn't do this, his behavior is as unethical as Rayton's.
  2. Other professionals who are endorsing Rayton explicitly or implicitly are part of the effort to deliberately mislead crowdfunders (possibly while profiting from the crowdfunding). These people should explain why Rayton's unfriendly terms are fair, close down Rayton crowdfunding and return investor money, or improve the terms for crowdfunders so Rayton Directors can't profit while crowdfunders lose everything. Anyone reading this should contact them to tell them to do so, and their colleagues at UCLA should bring this up with them.

Rayton Director and UCLA Professor James Rosenzweig: rosenzweig@physics.ucla.edu. Phone: 310–206–4541. Website http://www.pa.ucla.edu/directory/james-rosenzweig.

Rayton Director and UCLA Professor Mark Goorsky: Tel. (310) 206–0267, FAX (310) 206–7353, Email: goorsky@seas.ucla.edu.

The CEO Andrew Yakub — Contact info not publicly available, but email @raytonsolar.com probably guessable.

3. StartEngine bears ethical responsibility for allowing investor-unfriendly terms to be sold. The JOBS act made equity crowdfunding legal, but it was not implemented well if Rayton's unethical terms are legal. StartEngine should prohibit "cash-out" offerings and add onerous language around early-stage exec pay to make schemes like Rayton's more difficult to pull off. It's a competitive market so all crowdfunding sites should do this together to preserve their collective reputations. StartEngine should also end the Rayton campaign and others with similarly bad terms.

*The JOBS act was probably a good thing and was addressed in a really interesting episode of the Startup Podcast.
