Snap Inc.’s growth again fell short of estimates, feeding fears that aggressive competition from Facebook Inc. is blunting the younger social-media company’s potential just months after its initial public offering.
The Los Angeles-based company said daily active users reached 173 million in the second quarter, compared with 166 million in the prior period. Analysts polled by Bloomberg had expected 175 million on average. Revenue also disappointed, and the shares tumbled as much as 16 percent in late trading.
Since its public debut in March, the maker of the Snapchat mobile application for sending disappearing photos and videos has said its app would become more popular as the company innovates and adds features. In the second quarter, Snap added a maps function for users to see where friends are, as well as a search section. Yet rival Facebook has been successfully copying some of Snap’s key features on its larger social-media properties, drawing users that may otherwise have downloaded Snapchat -- but now see less use for the standalone app.
Facebook is also exerting pressure in the mobile advertising market. Snap said quarterly revenue was $181.7 million, missing the $185.8 million average estimate of analysts surveyed by Bloomberg. While Snap has been updating its offerings to give advertisers more sophisticated options, the company has been struggling to prove it can secure its position in a market dominated by Facebook and Alphabet’s Google Inc.
This was a “make or break quarter” for Snap, James Cakmak, an analyst at Monness, Crespi Hardt & Co., said in a note to investors. “Snap has tremendous potential if it can capitalize on the opportunity in front of it as an alternative platform for advertisers,” but the company is “under pressure from multiple fronts.”
The company has been urging investors to think of it as different from Facebook. For example, its app works more like a messaging product with a curated media section -- it doesn’t have a feed with ads slotted in like Facebook’s namesake network and its Instagram app do. Snap Chief Executive Officer Evan Spiegel has also said the company isn’t focused on getting as big as possible, like Facebook. Instead, it wants to add users in the most lucrative markets and get them more deeply addicted. Snap reported average revenue per user of $1.05 in the second quarter, with most of its growth coming from North America, bolstering the company’s argument.
The company’s shares slid as low as $11.57 in extended trading following the report. The stock has slumped 19 percent since Snap’s March 1 IPO at $17 a share, closing at $13.77 in New York on Thursday.
The stock also traded lower last week as some inside investors got the ability to sell their shares for the first time, after a lockup period following the IPO. The first lockup expired July 31. On a conference call Thursday, Spiegel said he and co-founder Bobby Murphy won’t sell any of their shares this year, even once they’re free to do so.
As Snap becomes cheaper, the company could be attractive to acquirers, said Shebly Seyrafi, an analyst at FBN Securities. Spiegel, who has the majority of the voting power along with Murphy, is unlikely to want to sell this early, Seyrafi wrote in a note to investors.
“However, if Facebook continues to shamelessly copy Snap’s features and there is no clear road ahead for Snap to become net income or free cash flow positive, Mr. Spiegel’s position may change,” Seyrafi said.
Blockchain is a transformational technology with the potential to extend digital transformation beyond a company’s four walls and into the processes it shares with suppliers, customers and partners. A growing number of enterprises are investing in blockchain as a secure and transparent way to digitally track the ownership of assets across trust boundaries and to collaborate on shared business processes, opening up new opportunities for cross-organizational collaboration and imaginative new business models.
Microsoft is committed to bringing blockchain to the enterprise—and is working with customers, partners, and the blockchain community to continue advancing its enterprise readiness. Our mission is to help companies thrive in this new era of secure multi-party computation by delivering open, scalable platforms and services that any company—from ledger startups to retailers to health providers to global banks—can use to improve shared business processes.
As enterprises look to apply blockchain technology to meet their business needs, they’ve come to realize that many existing blockchain protocols fail to meet key enterprise requirements such as performance, confidentiality, governance, and required processing power. This is because existing systems were designed to function—and to achieve consensus—in public scenarios amongst anonymous, untrusted actors with maximum transparency. Because of this, transactions are posted “in the clear” for all to see, every node in the network executes every transaction, and computationally intensive consensus algorithms must be employed. These safeguards, while necessary to ensure the integrity of public blockchain networks, require tradeoffs in terms of key enterprise requirements such as scalability and confidentiality.
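The cost of those public-network safeguards is easy to see in a toy sketch. The snippet below is a generic illustration of proof-of-work mining (not Coco code, and not any production protocol): each additional hex digit of difficulty multiplies the expected hashing work by 16, and on a public network every node bears that cost for every block.

```python
import hashlib
import itertools

def mine(payload: bytes, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(payload + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

# In a permissioned consortium, where every node is a known, vetted party,
# this race is unnecessary: an authorized node can simply append and sign.
nonce, digest = mine(b"tx: alice pays bob 10", 4)
print(f"tried ~{nonce} nonces to reach difficulty 4")
```

Raising `difficulty` from 4 to 5 multiplies the expected number of attempts by roughly 16, which is the scalability tradeoff a declared-membership design can avoid.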
Efforts to adapt existing public blockchain protocols or to create new protocols to meet these needs have generally traded one required enterprise attribute for another—such as improved confidentiality at the cost of greater complexity or lower performance.
Facilitating enterprise blockchain adoption
Today I am proud to introduce the Coco Framework, an open-source system that enables high-scale, confidential blockchain networks that meet all key enterprise requirements—providing a means to accelerate production enterprise adoption of blockchain technology.
Coco achieves this by designing specifically for confidential consortiums, where nodes and actors are explicitly declared and controlled. Based on these requirements, Coco presents an alternative approach to ledger construction, giving enterprises the scalability, distributed governance and enhanced confidentiality they need without sacrificing the inherent security and immutability they expect.
Leveraging the power of existing blockchain protocols, trusted execution environments (TEEs) such as Intel SGX and Windows Virtual Secure Mode (VSM), distributed systems and cryptography, Coco enables enterprise-ready blockchain networks that deliver:
Throughput and latency approaching database speeds.
Richer, more flexible, business-specific confidentiality models.
Network policy management through distributed governance.
Support for non-deterministic transactions.
By providing these capabilities, Coco offers a trusted foundation with which existing blockchain protocols can be integrated to deliver complete, enterprise-ready ledger solutions, opening up broad, high-scale scenarios across industries and furthering blockchain’s ability to digitally transform business.
We have already begun exploring Coco’s potential across a variety of industries, including retail, supply chain and financial services.
"Being able to run our existing supply chain Dapp code much faster within Coco framework is a great performance improvement that will reduce friction when we talk about enterprise Blockchain readiness with our retail customers. Adding data confidentiality support without sacrificing this improvement is what will enable us to lead the digital transformation we are envisioning with Smart Supply Chains."
- Tom Racette, Vice President, Global Retail Business Development, Mojix
Whether a customer is designing an end-to-end trade finance solution, using blockchain to ensure security at the edge or leveraging Enterprise Smart Contracts to drive back office efficiencies, Coco enables them to meet their enterprise requirements. Microsoft is the only cloud provider that delivers consistency across on-premises and the public cloud at hyperscale while providing access to the rich Azure ecosystem for the wide range of applications that will be built on top of blockchain as a shared data layer.
An open approach
By design, Coco is open and compatible with any blockchain protocol. Microsoft has already begun integrating Ethereum into Coco and we’re thrilled to announce that J.P. Morgan Chase, Intel and R3 have committed to integrating their enterprise ledgers: Quorum, Hyperledger Sawtooth and Corda, respectively. This is just the beginning, and we look forward to exploring integration opportunities with other ledgers in the near future.
"Microsoft's Coco Framework represents a breakthrough in achieving highly scalable, confidential, permissioned Ethereum or other blockchain networks that will be an important construct in the emerging world of variously interconnected blockchain systems."
- Joseph Lubin, Founder of ConsenSys
I believe Coco can only benefit from the diverse and talented open source communities that are driving blockchain innovation today. While Coco started as a collaboration between Azure and Microsoft Research, it has benefitted from the input of dozens of customers and partners already. Opening up Coco is a way to scale development far beyond the reach and imagination of our initial working group, and our intent is to contribute the source code to the community in early 2018.
Coco will be compatible, by design, with any ledger protocol and can operate in the cloud and on premises, on any operating system and hypervisor that supports a compatible TEE. We are building in this flexibility in part to allow the community to integrate Coco with additional protocols, try it on other hardware and adapt it for enterprise scenarios we haven't yet thought of.
Industry enthusiasm for blockchain is growing, and while it will still take time for blockchain to achieve enterprise assurance, we remain laser focused on accelerating its development and enterprise adoption in partnership with the community.
To learn more about Coco, you can read our technical whitepaper and watch my demo on the MSCloud YouTube page. Be sure to star and follow the project on GitHub to keep up with the working group and receive notifications on the latest developments!
We run neural networks entirely in your browser to recognize your plays and keep score.
Unfortunately, your browser doesn't support accessing your webcam. Try loading this page in a modern version of Firefox or Google Chrome.
Play Rock Paper Scissors against your computer!
The demo is built on our GPU-accelerated TensorFire library for fully in-browser deep learning. It's fast enough to perform real-time client-side classification of live webcam video, and we're showing it off here with a cute little game.
The rock paper scissors classifier is based on the SqueezeNet architecture. TensorFire + SqueezeNet is by no means limited to this: it can identify NSFW content, distinguish 1000 different ImageNet objects, recognize gestures, detect pets, or even distinguish hot dogs from not hot dogs.
You can learn more about TensorFire and what makes it fast on the Project Page.
You can also sign up to get notified when we publish new demos or release the API documentation.
Special thanks to Simanta Gautam, Jay Palekar, Hassan Kane, Jackie Xu, Surya Bhupatiraju, Jocelyn Reyes, Laser Nite, Connor Duffy, Lily Jordan, Alexa, Caitlin, and Billy Moses for contributing short clips of shaking their fists at computers (as we all sometimes want to).
I’ve noticed a few problems with the traditional keyboard layout:
The most commonly used keys are not the easiest to reach. The Dvorak layout addresses this for the letter keys, but other keys (such as delete on a Mac) are still a big stretch.
Learning the locations of characters commonly used in programming (such as brackets, equals and plus) is difficult because there is no logic to their positions. These characters are also often hard to reach (on the new Mac laptops the escape key is even harder to reach, since it is no longer a physical key).
Modifier keys are hard to reach on Mac keyboards, yet they are used constantly.
These problems are amplified for people with small hands. Even reaching the “enter” key on a Mac keyboard is difficult for my 10-year-old son. And getting an external keyboard is a major inconvenience if you work on a laptop.
The solution I’ve come up with is to add two additional modifier keys (snap and pop) to make room for keys that are closer to the home row. I’ve tried to keep things the same as much as possible (for example, the special characters !@#$% stay with the number keys, as on the standard keyboard).
Most of the characters in the Pop layout are grey because they are really only included so it’s easy to bind them to shortcut commands.
Here is the Dvorak version of the layout:
And here is the Qwerty version (the left option and pop modifiers are moved to the number row).
The way I have implemented this on the Mac is with Karabiner Elements. Install the latest build from here:
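For readers who want to experiment, a Karabiner Elements complex-modification rule looks roughly like the sketch below. The bindings here are hypothetical, chosen purely for illustration (the actual snap and pop assignments are in the layout images above): caps lock is held as a "snap" modifier, and snap+J produces an open bracket.

```json
{
  "description": "Hypothetical snap modifier: hold caps_lock, tap J to type [",
  "manipulators": [
    {
      "type": "basic",
      "from": { "key_code": "caps_lock", "modifiers": { "optional": ["any"] } },
      "to": [ { "set_variable": { "name": "snap", "value": 1 } } ],
      "to_after_key_up": [ { "set_variable": { "name": "snap", "value": 0 } } ]
    },
    {
      "type": "basic",
      "conditions": [ { "type": "variable_if", "name": "snap", "value": 1 } ],
      "from": { "key_code": "j" },
      "to": [ { "key_code": "open_bracket" } ]
    }
  ]
}
```

Rules like this go in the "rules" array of a JSON file under ~/.config/karabiner/assets/complex_modifications/, after which they can be enabled from the Karabiner Elements preferences pane.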
I think this is the best possible keyboard layout that can be implemented on standard Mac hardware, so if you have any feedback or suggestions, please let me know.
Erica Heilman: Welcome to Rumblestrip Vermont. I’m Erica Heilman. Today, an interview with library activist and computer savant Jessamyn West.
EH: Let me just start by saying that Jessamyn West is kind of internet famous. She was one of the original moderators for the community blog Metafilter, which is like the civilized version of Reddit. She was recently contacted by the White House for her thoughts on their choice for the next Librarian of Congress. And she speaks internationally about the digital divide. Talking with Jessamyn is a little like being on a really fast ride at the Tunbridge Fair. In this interview, we sat in her kitchen in Randolph, Vermont, and talked about her passion for public libraries and the role of the modern librarian. We also talked about how different people manage their personal relationships with their personal computers. Welcome.
EH: So you have spent a lot of time in this state traveling all over Vermont to go visiting small libraries. What are you trying to learn from these places?
Jessamyn West: To me the most amazing thing about libraries and the reason I like to go there when I’m traveling is because no matter where I am, the public libraries belong to me. I’m the public. It’s for me. How magical is that?
Like I think for a lot of people maybe they get that from other places, maybe they get that from their workplace, maybe they get that from their church, maybe they get that from their community center. But I don’t have those places. I have the libraries. They’re all mine. And everyone’s. And I think you can’t really understate how rare it is to have a thing that’s for everyone. You, if you’re the library, serve, you know, the super… what I call the “beep beep beep” generation (and I’m making gestures with my thumbs here). You know, the people who are sort of ahead of the technology curve, behind the technology curve, somewhere in the middle. You help them solve their information needs.
So that used to just be like “Oh hey, Dr. Bob, here’s a good book.” But now it’s like you need to figure out how to apply for healthcare on the internet. You want to figure how to play the ukulele. You want to learn about where your relatives came from, you know, 150 years ago, and we can find the documentation from the boat they came in on in Philadelphia. It used to be a building full of five thousand books. And now thanks to the internet it’s an endless building with an endless number of books. But the person who works there, or the people who work there, are still the people who help you make sense of that.
I think for a lot of people technology and what you can find on the internet and what’s available there is a big not totally clear question mark. People do the best they can with Google. But you know, it’s worth knowing about Google — like 89% of Google’s income is advertising. They’re an advertising company who happens to run a search engine that’s the most used in the entire world. But they’ve got a particular view. Their particular view has to do with their particular business. Whereas in a library, their particular business is you, you and what you want to know about.
I think there’s historically been like, “Yeah, but you’re all about classic books, and kids’ puppet shows, and blah-de-blah-de-blah” but I think you see in Vermont some libraries where they do more traditional — what I consider to be traditional librarianship. They’re smaller, they deal with smaller communities, either that’s what their community wants or that’s what the librarian wants, hard to say. And I think you see other communities, possibly just as small, who are doing different stuff. Who are bringing in authors, who are having makerspaces, who are running tons of programs. Burlington, Fletcher Free Library, they’ll lend you a rake. They’ll lend you a lawnmower. Like — that solves a problem for me, right? And they do it with my money, which is both the good news and the bad news about it.
I think as we see in the Tea Partying of America, people are like “What? We give money to the library?” and say “I don’t like to read” or whatever the thing is, “How does that affect me?” And I think people forget how much it’s important that we have a public that, well, knows how to read, for one example. But also a public that feels like a public, where you don’t… and this is where my sort of technology leanings and my librarian leanings are a little bit separate… I think it’s easy to sit home on your internet and only talk to the people you know, and feel like you’re part of a community. But we also are answerable in some ways to our communities that we’re actually geographically, physically a part of, and the library is that contact point.
When I moved to Randolph from Bethel, which is the next town over, I went from a town that had a library that was open fourteen hours a week, basically one room, one computer, one incredibly nice lady, Cathy, who works there. And really not much of a library culture and community because the place was so small. And not open very often. But I remember when I moved to Bethel, when I used to live in Topsham, which had no library, I went to get my library card and Cathy was like, “Welcome to Bethel, we’re glad you’re here.” I just — to this day, I just, remember that, and whenever I see Cathy I just give her a huge grin. And then I moved to Randolph and —
EH [interjecting/ overlapping]: Why why why why
JW: Just cause no one else in Bethel said, “Welcome to Bethel”. Like, you know, she was a representative of the state such as it is. I mean, she was the lady from town. And I was the new kid. And just being like, “Hey, good to see you.”
EH: Going into a library always feels like going into a church, to me. Part of it is structural, it’s quiet. It’s kind of a presumed hush.
JW: Sometimes quiet….
EH: But there’s something else, which is — and I’m trying to figure out what it is, I think it has to do with everybody’s there for something, and whatever they’re there for is private. That you’re not really allowed to ask, you know, “so what are you doing here?”
JW: You’re not supposed to.
EH: Is that true?
JW: Well, no, as the librarian. As the librarian, what you’re supposed to do is help people with their stuff without getting all up in their business, if you can avoid that, and also without telling other people what other people’s business is. Now that’s not always how it actually works in small town libraries -
EH: But why, why is — that to me in itself is interesting. And that’s part of what creates the atmosphere of a library. There’s something furtive and kind of — it’s one of the very few places anywhere where you don’t have to, you’re not being asked to spend money. And you’re alone in public, and there’s something about those ingredients —
JW: Well, you don’t have to check in. I mean that’s the big thing too. To me — again, one of the other divides between the library as place and online communities as a place is, you can just walk into a library and we don’t even know you’ve been there, necessarily. I mean, maybe somebody saw you. And in bigger libraries you have cameras and stuff to keep you from destroying the bathroom. But you’re allowed to kind of do whatever you want there.
We see more and more, especially in academic libraries, that libraries are used as social places. I mean, for students, they love getting together in the library, if you let them… working on projects and talking to each other and drawing stuff on whiteboards and figuring stuff out. And it’s been challenging I think, especially for sort of traditional academic libraries, to find ways to accommodate that while at the same time being a quiet place for people to study and get work done in their own private personal kind of alone space.
In the summertime I’m down in Massachusetts and the University of Massachusetts Dartmouth, which is down on the South Coast. It has this crazy bonkers giant library and it’s this brutalist building on the outside and on the inside it’s all like red and orange and purple and you can go all the way up to the fifth floor and have the fifth floor almost to yourself if you’ve got work to do. I can’t be that alone in my own home. And I think also one of the points that neither of us has touched on is that’s exactly what it’s there for. You’re using it exactly the way you’re supposed to. Whereas when I’m home I’m like well I’m supposed to kind of do the dishes and I’ve got to do this thing and plants I’ve got to give the plant some water and I’ve got to do some laundry. My house is for a lot of things. Including my work. But the library is for my edification. You know, mine.
EH: Sometimes people are coming in with deeply personal and pressing questions. So in a way, there’s kind of a priest aspect to being a librarian. What are some memorable exchanges you’ve had with people who’ve come in when you’ve been working in the building?
JW: Well, I think priest is part of it, I think social worker is part of it for people who are having challenging problems. A lot of time what ends up coming up is people who are asking for a thing and they feel weird about the thing because they feel like the thing isn’t normal. And I put “normal” kind of in quotes. You know, like, “Well, my kid likes to read, but my kid doesn’t really like to read what the other kid likes to read, my kid likes this particular kind of story.” Or “I want to read about people who are exactly like me”, or “…who are not like me”, or “I’m very different from every member of my community and I’m hoping you have something at the library that’s right for me.”
One of the things we learn in library school is a thing they call “the reference interview” which sounds kind of goofy. But basically it’s talking to a person and through asking questions figuring out what it is that they really want. I used to work in a natural science library for a long time when I was in school, and people would show up and go, “I need something about caves.” And… you can’t do anything with that. But you can try and figure out — are they a student? Are they someone who wants something because they have to write a paper? Are they someone who needs to find this from a journal? Does it need to be current? Does it need to be…? And so, you ask questions that are hopefully kind of friendly and open questions, not like “Well, what do you [“ehh” wrong buzzer sound]”. To figure out what that person wants. Hopefully in a sort of open and nonjudgmental way.
I had one guy who came up to me in the library — I don’t even know if you can use this, but like, asked me what fisting was, because he’d read it in a book, and he wasn’t really sure. But I think he thought it was something very different. Like fisting as a sex thing. And I was like “Oh, well —” and I gave him like a two-sentence description. He was horrified. Not at what the thing was but at “Oh, I didn’t know I was asking you that!” and then I was like, “Here’s a Susie Bright book that will probably help you understand the rest of it.” And he ran, basically — like just got out of there. But realistically speaking, why shouldn’t we be the people that you ask that question?
I don’t care, I thought it was a reasonable question to ask. And occasionally you will get people who you feel like are asking you questions in order to get a rise out of you and not because they want to know anything, but because you’re a woman and you’re trapped behind a desk, and you have to help them. You get a lot of mentally ill people who will ask you the same questions over and over and over again. Eh, you know, no big deal. They’re part of the community also. But other people sometimes respond to that in a weird way, you know — that they look at who’s in the library and they’re like, “rrrrrr, the library just serves the homeless!” or whatever, and you’re like “Well the homeless are using the library. Guess what, they’re your neighbors.”
People don’t always feel good about that. People have moral panics about bedbugs in their library or perverts in the library or — you’ll catch a cold from the library, like, whatever the thing is. And there’s a shred of truth to that, but realistically people are afraid of their own public, I think, in a lot of ways. And so being kind of matter-of-fact about the fact that, “Well, these really are who your neighbors are. Like, you can choose just to ignore that that’s how the world works, but you know, these are all your neighbors, and you see them all at the public library. You’re welcome.” I think has social utility.
And so — meet your neighbors. Welcome, welcome to your actual neighborhood.
EH: For the last ten years, Jessamyn has worked with the Randolph Technical Career Center here in Central Vermont. She’s the self-described “computer lady.” Now, first of all, computer problems have a way of making most of us unpleasant. And second, if you ask me, looking at someone’s personal computer is a little like looking in their sock drawer. You learn more than you probably want to know. But every week for two and a half hours Jessamyn sits in a classroom and helps anyone who comes in with their computer questions and their problems. Here’s Jessamyn.
EH: Who shows up? Who are the people who come to this?
JW: It’s mostly — most of the people are between about 55 and 85. They’re not always retired but a lot of times they are. In the past we’ve had relationships with voc. rehab — someone who had maybe lost a job and needed to be retrained in order to do a new job, and we’d help them do stuff like fill out a resume, apply for unemployment, whatever. But a lot of it was just people with very low computer literacy. Maybe they’d never touched a mouse before. Maybe they used to have a computer in a former partnership but then the partnership dissolved and that person took the computer and they hadn’t used a computer in five years, or ten years.
EH: I’m 45, I think — 44 or 45 now — and I didn’t grow up — I mean I’m old enough that there was no training in computers in my primary school life. So anyone older than me and even some a little younger than me never studied computers in school. Can you talk a little bit about the range of kind of emotional states that people come in with their computer questions?
JW: Yeah. Well, I mean I think the first thing is, if someone is, let’s say 50 — and I’m approaching 50 — and they don’t know how to use a computer at all, there’s probably a reason. And that reason is probably not just, “Oh I never got around to it I’ve been living my fabulous life!” Sometimes it is! But a lot of times they have fear, they have concerns, they have anxiety. Maybe they used a computer for a very specific thing but they don’t know how to generalize that experience.
A lot of times I’ll find, for example, someone who was a logger. And that was what they did for a job, they cut down trees and they brought trees in and they prepared wood and they worked in a sawmill. And then they got injured. And now if they want to work with lumber at Home Depot, they have to fill out a job application online. And oh my gosh, the Home Depot job application online is the worst. It may be worse than vermonthealthconnect.gov, I’m not sure. But it is terrible.
So part of the problem is, that guy doesn’t know it’s a bad website. I know it’s a bad website because I’ve seen a hundred thousand websites and I know that’s a bad one. He doesn’t know that, and so what he feels like is that he’s a bad person. Because it’s hard, and it’s complicated, and you’re not sure if you’re doing the right thing. Or you try a thing and you get some popup that says, “No” but you don’t know what “Yes” is.
A lot of times, I think the most important thing I do for most people — I have a little arsenal of useful phrases that I try out with people. One of the ones that’s the most important is, “You’re not a bad person, this is a bad website.”
And I explain to them that not everybody who makes a website is good at making a website. And we talk about their jobs. You know how there’s some people who just can’t cut down trees? Or I talk about what it would be like if I tried to cut down a tree. Like I cannot even imagine how terrible that would be. But like this guy has a huge skill set. It just doesn’t translate at all into the internet world. And maybe he doesn’t — maybe he’s frustrated that he has to do it in the first place. You see a lot of people like that.
Maybe they went through a divorce and now they’ve got to get their own healthcare so they’re kind of mad at the crappy website. And that is a crappy website. But they’re also just mad that they’re divorced and that they have to deal with all of this stuff. And so a lot of their emotional feelings about why they have to do it get channeled into the thing they have to do.
One of the hardest things about technology is you really do feel, a lot of people feel thrown into the deep end. Like, I don’t care, fucking learn it. And they’re like, well how am I supposed to do that? Because it used to be that stuff came with a manual. And I hear people say that all the time. And part of my job is, yeah, I know, that’s crappy. And part of my job is, yup, but — you know, moving on. Because again, going back to why has this 50 year old never used the computer, part of that may be because they have emotional issues with learning new things, with technology, with — who even knows, right? But it’s really worth figuring those things out before you dive in to try to help someone.
In the past we’ve always wanted to have a list. “Here’s the list of how to do the thing, here’s the list of how to teach someone to read.” At some level, if you follow the steps, and you have a person who’s on board who doesn’t have any major disabilities that are getting in their way, you can teach a person how to read. With technology, I believe that same thing is true, but instead of a set of steps, you have an ever-increasing flowchart of — well, if they have a shaky hand, do this. If they don’t have a shaky hand, try this. If they’re afraid of the computer, do this. And each of those things expands, so you wind up having to make 50 or 100 choices before the person’s even signed up for an email address.
All of those things are things that even if you were teaching somebody to read, with a grownup, with an adult, you’d be managing all of that preamble also I imagine. But not only are you managing that with technology, it’s that it doesn’t come easily out of the physical world, or our conception of the physical world. And that’s almost impossible to conceive of, if you weren’t born into it. If you don’t already accept that this thing is — does not have any linear shape to it. Well that’s exactly it. We all know what a book is like. And they have different reading levels maybe and they look a little bit different, but in general, all text in all books is more or less the same with a couple outliers.
Technology, the difference between looking at Facebook on your phone versus looking at Facebook on a desktop computer…. They’re different tools, even though they both say Facebook across the top. If you come to me and you’re like “I don’t like ads on Facebook!” I can’t even start helping you with that question until I know how you’re looking at Facebook, and you may not even know how you’re looking at Facebook because it would never occur to you that they’re different. How would you know? Whereas for me, I totally sort of know that and can tease it apart.
Part of the issue is people know there’s a lot of parts, but they don’t know which ones are important. So they’ll do a thing and they’ll get an error message and they’ll stop. And I’ll be like, “Oh just click OK, blow that off.” And they’re like “Well how do you know that?” And I’m like, “Ahhh, well it’s just a thing I know.” And that’s awkward and crappy.
I’ll help people kick the ball down the field, but I resent a little bit that we’re in a world where we kind of have to teach people to ignore some errors and pay attention to others.
And these are people a lot of whom, again, if you haven’t used technology and you’re 50 or 60 you get a lot of messaging about technology from your television. From the newspaper. From media that is in danger from technology, and so they tell you a very particular story about technology. Which is: it’s dangerous, and you’re at risk, and you’ve got to be really careful, and you might be able to buy your way out. You know, like maybe you can buy a thing that will make you safe. But it’s inherently unsafe.
Realistically, there are things that are unsafe about the world of technology. But there are things that are unsafe about the world of your bathroom. We feel like we can evaluate those risks. For a lot of people who have used technology but who just want to learn another technology, they’re already in, at least a little bit, so you don’t have to kind of make it seem worth it to them.
I definitely get people who are novice users who come in and they’re like, I don’t see what all the fuss is about. And I’m like, I don’t know, maybe you don’t need to use technology then. And, you know, I tell them a little story about how it’s like not learning how to drive. You don’t have to. It’s not the law. But your life is going to be inconvenient and you may need to get other people to do parts of your job of being a human for you. There’s nothing wrong with that. But people need to realize that’s the choice they’re making. Like, driving a car is scary too. You could kill someone a lot more easily than I could kill someone with Facebook.
EH: How do you actually communicate, while looking at the computer? What are you actually doing to help somebody learn?
JW: Well it’s more like coaching. Like the big rules are: get on their level, which means if they’re sitting, you’re sitting. Make them drive, unless there’s some reason they can’t. My basic deal is, unless somebody has a one-time only, I need to do this thing right now and it’s, you know, I’ve got very little time and I just need someone to do it… My whole deal is, I’m not your administrative assistant. You either have to do this yourself or you pay someone to be your administrative assistant. That’s not me. This is free help. And so they run the mouse, they run the keyboard. Part of it is getting people used to doing it on their own or with sort of minimal feedback.
If people ask questions, I’ll answer them. If people want to, like… drop down menus, you know? “Tell us how much schooling you’ve had” and there’s a tiny triangle that you have to click, and behind that tiny triangle is a list. “Didn’t graduate high school, high school, college…” whatever. And you have to pick from a list, which involves clicking a triangle, then clicking a thing from a list, but then if you use the rollerball on your mouse then maybe it will change it and it will go away and suddenly you’re in graduate school applying for a job at Home Depot. And people resent the hell out of it.
I think people who are used to kind of division of labor, especially people who are used to sort of boy-girl division of labor, you know, they see — I mean, it’s funny, because it’s not — it’s gendered and yet it’s not gendered. I mean, in certain gender splits I’ve seen where like a married couple comes in, the guy does all the driving. You know, and she kind of watches and whatever. And in other gender splits, the guy tells the lady how to do the driving, he doesn’t sully his hands with touching the stuff. So I think it’s unclear, just like balancing the checkbook. I mean, is balancing the checkbook the power move, or is it the administrative assistant move?
EH: There is a kind of — Jesus, it’s like Sisyphus. Watching — you have to have a great deal of patience.
JW: You get the feeling with some people — like I’ve had some people who started out not knowing how to do stuff and here eight years later they’re doing all sorts of stuff and they just needed a little help. But I think what’s hard for me is in some people I recognize that they have an attitude that is going to get in their way, and it’s really not my job to be like “you’ve got a bad attitude.” But one of the things I do with people is say “You know, it sounds like you’ve got an emotional issue with this situation, I actually can’t help with your emotional issue.” Which is a little bit chilling.
And at the same time if I were a guy doing this job, would anyone start telling me about their breakup and how their boyfriend took the computer and the computer he met his new girlfriend with and that’s why I’m having a hard time learning email…? I’m like “I don’t need to know this, that’s your business.” And it doesn’t matter. People act like that’s a germane reason why they didn’t do the thing. And I’m like “I’m sorry, that sounds difficult. But I can’t help you with your emotional problem. I’m here to solve your other problems, you know.” If it’s someone I’m close to I might be like “Therapy and meds have really gone a long way to helping me accept the things I didn’t want to accept about the world.” But it really is that level.
They’ve got some kind of eddy of pain that is about something entirely different but it happens to involve the computer, and so they’re getting stuck in it and it’s making their progress difficult. But part of managing anxiety, besides, you know, therapy and medication and whatever, is you’ve gotta kind of get over yourself. Like, you can’t show up and be like, “I want to use a computer but I just want to use it hating it the whole time…” and get very far. But anyone can learn to use a computer. Anybody. I don’t care. Any person, if they want to. But they have to want to.
EH: I want to ask you, I think it’s such a common thing, about dealing with people who have, who are click impulsive when you’re trying to teach something —
JW: Always.
EH: What is that about?
JW: I just think it’s some kind of attention deficit disorder-linked behavior. That basically somebody will click on a thing, nothing happens in half a second, and then they click something again. Which in the computer language is two separate clicks… sometimes. And so I feel like sometimes if you can untangle that and sort of explain to people, like, you’ve got to kind of work on this. Like, click once, give it a two count, if nothing happened, click again. Or ask somebody. And look for cues. Do you see the little spinny thing? That’s your website saying it’s going to look for another page. If you see the spinny thing, it’s working. So then, count to five. Like we have a whole bunch of sort of counts that are built into it. But one of the hardest things for me is dealing with people where I’m like, click on that thing. And they click and click and they click and I’m like “What did that say?” And they’re like “I don’t know what it said.” And I’m like “How do you live?”
Certain ways of thinking about things make it easier for you to adapt to a world that technology has a role in. You know, being able to understand a metaphor, for example.
Like, it’s a file, you put it in a folder. What? What? Like you have to understand that the computer is — that the operating system of the computer — is abstracting this in a way that’s supposed to make it easy for you. But what you have to do is be able to kind of understand the metaphor of like, when I click and drag this thing, there’s actually not a physical thing that’s happening, but the computer’s showing you a picture to help you understand and help you get organized and help you assemble things in a way that’ll make sense to you.
So, I have a real life filing cabinet. And I put real life files in it. And that’s one of the ways I stay organized. If you’re a person at home who can’t make a real life filing cabinet and real life files work for you, for various reasons, good and bad, you’re going to have a really hard time with the metaphor of files and folders -
EH: Or, or you do, that’s the way you’ve functioned for decades, and you know that you can’t fit ten thousand files into a folder, and so conceptually it does not make sense to you. Do these metaphors work, or are they more confusing?
JW: They work for who they work for, and they confuse the other people. Part of the job is learning it. You know, math didn’t make a lot of sense to me either but it’s how numbers work. And so if you started out organized, the computer can help you be even more organized and it’s amazing. If you started out not organized, the computer just gives you another space to be disorganized in. And occasionally I’ll have people come to drop in time, they’ll open their laptop, I’ll look at it, they have five hundred icons on their desktop, and I know exactly where we need to start. Right?
Part of the issue is telling somebody, “Well, you’ve got to do this differently if you want to understand what’s going on.” You know, it goes one of two ways. They get organized, or they say they can’t, and they don’t, and it’s always going to be a struggle. Whereas you know if you were in a partnership, either a business partnership or a life partnership, maybe one of you did the files and the other one cooked the meals, you can share that work among the people who are good at it. One of the things about the computer that makes it so challenging is it’s all about you. So few people share — I mean it’s personal, personal computer. Right? So few people share it, that it means that if you’re part of a partnership and one of you’s organized and one of you isn’t, one of you’s going to have a computer that’s easy to use, and the other’s — their computer’s gonna be a mess.
A lot of times I see people in couples who come in and they are having a hard time working between computers on their network at home and it’s exactly because one of them’s a mess and one of them’s not a mess. But what do you do? Tell them to go to counseling? I tell them “Well, this is why, I’m not sure this is how you want to solve that.” So I spend a lot of time kind of telling people what the issues are. I remember when my knees were hurting and I was climbing up stairs and being tired and exhausted all the time and the doctor said “You should probably lose some weight.” And I’m like, “I don’t care what you think!” But realistically, you know, he was the professional who knew, maybe that’s what I needed. But it is hard when you’re the professional who’s got the message that somebody doesn’t want to get about how they need to change in order to get their life to be more functional.
EH: You have a really rich online life. And I often think — you know, last night I was online and I’m clicking, clicking, clicking, I’m like, what am I — ? I haven’t had a television in twenty years. But I can spend hours online and I can’t tell you anything that I did for three hours. And it’s, I leave sometimes feeling slack, kind of —
JW: Sure.
EH: And what is that? It almost feels soul-crushing. Do you experience this, or…?
JW: No, I mean, I feel like I do the same thing. But for me it fills — I mean, sometimes — but it fills the role almost that television would fill. You know, I just need to kind of zone out for a while and let information wash over me and if I do it online it’s a little more self-directed. But there definitely are times where I’m like, yep, I’m just going to turn it off, not do my work, not answer my email, not interact with people on Twitter, not do writing, not read web pages I’m supposed to read, and I’m just going to follow links down the rabbit hole and learn about something on Wikipedia, for example.
My favorite thing that I like to talk about as far as what motivates us, like a lot of people ask “Well what motivates you, you know, or what motivates humans?” And people are like “Oh, sex and aggression, you want to just like fuck or fight anything.” Right? Or fear and aggression. But Temple Grandin, who’s the animal researcher who wrote Thinking in Pictures — she’s on the autism spectrum and writes about what she knows about animals — talks about like one of the things that’s really motivating to animals is what she calls the “seeking” instinct. That it’s not the finding the piece of food or the hot other hamster or whatever the thing is, it’s the anticipation of being on the hunt for the other hamster, or the piece of food, that’s actually more captivating than what you would actually get at the end.
This is partly why we like soap operas, this is partly why we like internet discussions, this is partly why we like Reddit, this is why we like Facebook. Because it’s not so much — although this is part of it — that we get to put our own self into it. Hey I like that, way to go, your kid’s cute, I like that dog, whatever. But it’s also that we get to see what people say back to us. And so that what happens next, click, click, what’s after this link, what’s after this link? And sometimes I try to think to myself while I’m doing that: “What would I want to find here?” Like you get the — like my computer makes a beep when I get an email. What do I think the best email I could possibly get would be? Like there’s no best email, like the whole concept that email could deliver you something that is that worth it -
EH: But that to me — that implies a problem, that it is -
JW: That it’s promising what it doesn’t deliver, or -?
EH: That you have some desire that’s not going to be met and that perhaps it could be met if you were looking in different places.
JW: Yeah, well when I see people — in the online communities that I’m a part of, and I see people that are kind of like “heavy in” or online a lot,… One of the things when I’m talking about people who are, you know, 50 years old and they haven’t used a computer, there are other people who they’re 50 years old and they’re online constantly. And if you’re online constantly, like the person who’s offline constantly, there’s probably a reason. Good reason, bad reason, but reason. And so figuring out what that is — you see people are spending a lot of time online who are also complaining that they have depression, they have anxiety, they have concerns, a lot of people who spend a lot of time online are managing various issues. Again, good and bad.
One of the steps a lot of times is: turn off the computer, interact with real life people. Slow it down. Deal with the pace of things, you know, somebody knocking on your door, walking down the street, the slower pace of interactions, but with people who maybe know you more.
When I was dealing with a whole bunch of anxiety — I had some health issues at the beginning of this year, went to therapy because I was concerned — I talked to my therapist about my jobs and how I get up in the morning with a cup of coffee and sit in front of my computer. She was like — it’s ridiculous that I did not come to this on my own — like “Why don’t you spend a half hour, 45 minutes with no computer in the morning?” And I was like, but, reasons! And she was like, you know, just do something else. Just stay away from the screen, do your dishes, watch birds, go to the post office, do something else. And I was like [ahhhh], never work, I hate it. But I did it. And it’s become a part of how I live my life now.
It’s changed my day dramatically just not feeling like that — I call it kind of the hamster wheel of the internet, because it’s always going whether you’re on or off. Right? I don’t feel like it’s a thing I have to respond to with that kind of urgency anymore, and it’s encouraged me to not only take care of my own stuff — you know, I do more of my dishes, I keep more of my stuff clean, I’ve got 35 minutes, 40 minutes in the morning where I have to do something, read a book.
It’s also just put everything in perspective, that it doesn’t require my immediate attention, that I don’t have to get on this immediately. That even if other people — and you see this all the time in offices, “an emergency on your part is not an emergency on my part necessarily” but it’s really helped me conceptualize that, in sort of a real way. I would feel other people’s urgency in the online world. Like, I need your response immediately. Or, you know, puppies dying every minute that you don’t help the puppy farm, or whatever. Or your friend is sad. The urgency of that in a lot of cases comes from my mind not other people’s.
If other people feel something’s important, that’s great, they’re allowed to, there’s nothing wrong with that. But their urgency doesn’t have to become my urgency. I was such a pleaser as a kid, and wanted everybody to be happy, and wanted — not even happy, but just not mad. So I spent a lot of time being very responsive, like touchy responsive, to how the people were around me, and it took a really long time for me to realize that that didn’t serve me as a grownup at all.
EH: That’s a very dangerous cocktail for somebody who’s also on the internet a lot.
JW: Yeah in online discourse there’s a lot of people who have things they want… gun control is a disaster. Bernie’s running for president. You know, the guy in my town needs a ride to the doctor. You’re bombarded with messages, all of which may have urgent attached to them, and if you don’t have good boundaries of your own, you have a very difficult time sorting out what you want and what you need and what’s relevant to you, and so it’s easy to be reactive. And a lot of being online, I think, is figuring out how to allot those things in ways that feel appropriate and true to your own values.
[Music]
That was Jessamyn West. There is lots to be read by and about Jessamyn. You’ll find some of those links on my website, rumblestripvermont.com. Your comments are always welcome there as well.
And I want to thank everyone who’s donated to this show. I am so grateful. These donations help keep me going. If you haven’t donated and you’d like to, any amount is welcome and appreciated. You can find a donate button on my website, rumblestripvermont.com. It’s green and it’s in the upper righthand corner.
If you have ideas for shows that you’d like to hear, just send me an email at rumblestripvermont@gmail.com. This is Erica Heilman. Thank you very much for listening.
Hywind Scotland may be the world’s first floating wind park, but it will certainly not be the last. In addition to Hywind, several other floating offshore wind concepts are under development, and the demand for renewable energy is growing.
Europe has excellent opportunities for floating offshore wind power, but the US and Japanese markets also have great potential. For example, California has set a target of 50% renewable energy, and floating wind power could be the key to fulfilling this, while in Japan, the shift away from nuclear power will drive the need for new and reliable energy supplies.
Furthermore, the development of floating offshore wind power could secure thousands of jobs that today deliver services and goods to the oil and gas sector, marine industries and fisheries. To succeed with Hywind, we need to collaborate closely with suppliers and customers to reduce costs and ensure further deployment.
Statoil believes that Hywind Scotland will prove that Hywind is the most mature solution for producing floating wind energy offshore. The cost reductions achieved from the 2009 pilot up to today show the tremendous potential of the concept.
Floating offshore wind power combines the technology we know best from our work offshore in oil and gas with traditional wind power. Who knows? Some day we might be able to take advantage of the waves around the wind turbine as well. Hywind Scotland is not a finishing line—it's the starting point of a new adventure.
In the past 7.5 years of building Uber, I've learned so many different lessons, one of which is the fact that people who embrace uncertainty and change have the best grip on reality. In the middle of September, I'll be embracing another big change on my journey with Uber and will transition out of a full-time operating role to focus on my role as a Board Director.
In every position I've held at Uber, as GM, then CEO, then SVP of Global Operations, I've focused on people and team. Uber's launch, our rapid growth, and now global impact, are all a testament to the quality of the folks that I have had the pleasure of working and growing with. That team is now the driving force behind the durability and importance of the business we run in more than 600 cities.
In some ways my focus going forward will not actually change very much -- it remains all about people, and it's clear to me that the stability of our board of directors, the selection of our new CEO, and the empowerment of our management team are what's needed most. So I will do everything in my power to deliver on those goals for the benefit of our organization and the millions of people -- riders, drivers, eaters and couriers -- and their communities that Uber serves every day.
I could not possibly stress enough how insanely proud I am of this organization. The dedication towards our mission of providing transportation that can be trusted, to everyone, is noble. We, as a team, have achieved something that has truly changed the world for the better, and will continue to do so long into the future.
I also have deep gratitude for the lessons learned from Travis, from my colleagues on Uber's ELT, and my Global Ops leadership team over the years -- notably Rachel, Austin, Jo, Mac, Pierre, Droege, Penn, Jambu, Ro, Mike, Amit, Meghan, Barnes, and so many others who have given so much of their hearts and lives to building this company. Thank you. Without you, I wouldn't be the person I am today and for that, I will forever be in your debt.
When you go through an experience like we have building Uber you forget that it's not just the people across the desk that are making a huge investment, it's also the partners and spouses, the families and the friends at home also making sacrifices. I would never have been able to make this journey without my wife Molly there to listen and advise. The ride hasn't always been easy but nevertheless, she's been there with me to laugh, to cry, to plan, and to celebrate. She deserves more credit than anyone in supporting me through it all. She's been the most constant and enduring partner, right at my side, and building her own company and our family along the way. I *really* look forward to being able to return the love and spend more time with her and with our boys.
So, why now? Well, there is no great time for a move like this one. But it's really important to me that this transition doesn't take away from the importance of the onboarding process of our new CEO, whoever they might be. My hope is that making my transition known and planned for well before our board's decision on a new CEO will help make it clear to our team and to our new leader that I will be there to support however I can.
There is another lesson I've learned that we should have applied much earlier. We should have taken more time to reflect on our mistakes and make changes together. There always seemed to be another goal, another target, another business or city to launch. Confucius said that reflection is the noblest method to learn wisdom, and fortunately, our newfound reflection and introspection has become an asset to us and we have evolved and grown considerably. Our culture, our processes, our leaders, and our teams have become wiser, stronger, and more mature because of it. Regardless of which role I hold in the future, I'll be dedicated to supporting Uber's leadership, partnering with Uber's new CEO to understand the complexities of this business and this organization, and to continuing to deliver on the critically important mission and future we have ahead of us. Again, thank you all, and let's Uber on!
Climate change is one of the greatest challenges of our time, and the way we generate and use electricity now is a major contributor to that issue. To solve it, we need to find a way to eliminate the carbon emissions associated with our electricity as quickly and as cheaply as possible.
Many analysts have come up with a number of possible solutions: renewable energy plus increased energy storage capacity, nuclear power, carbon capture and sequestration from fossil fuels, or a mixture of these. But we realized that the different answers came from different assumptions that people were making about what combination of those technologies and policies would lead to a positive change.
To help our team understand these dynamics, we created a tool that allows us to quickly see how different assumptions—wind, solar, coal, nuclear, for example—affect the future cost to generate electricity and the amount of carbon dioxide emitted.
We created a simplified model of the electrical grid, where demand is always fulfilled at least cost. By “least cost,” we mean the cost of constructing and maintaining power plants, and generating electricity (with fuel, if required). For a given set of assumptions, the model determines the amount of generation capacity to build and when to turn on which type of generator. Our model is similar to others proposed in other research, but we’ve simplified the model to make it run fast.
We then ran the model hundreds of thousands of times with different assumptions, using our computing infrastructure. We gathered all of the model runs and present them on a simple web page. Anyone—from students to energy policy wonks—can try different assumptions and see how those assumptions affect the cost and the CO2 emitted. The web UI is available for you to try: you can explore how utilities decide to dispatch their generation capacity, then test different assumptions. Finally, you can compare different assumptions and share them with others.
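The “least cost” dispatch idea described above can be sketched in a few lines. This is a minimal illustration, not the team's actual model: the generator names, capacities, and marginal costs are invented, and real models also handle capacity construction, ramping, and many more hours and technologies.

```python
# Minimal merit-order dispatch sketch: meet one hour's demand at least
# marginal cost by running the cheapest generators first.
# All fleet numbers below are invented for illustration.
def dispatch(demand_mw, generators):
    """Dispatch generators cheapest-first until demand is met.

    generators: list of (name, capacity_mw, marginal_cost_per_mwh) tuples.
    Returns (dispatch plan dict, total hourly generation cost in $).
    """
    plan, cost, remaining = {}, 0.0, demand_mw
    for name, cap, mc in sorted(generators, key=lambda g: g[2]):
        mw = min(cap, remaining)
        if mw > 0:
            plan[name] = mw
            cost += mw * mc
            remaining -= mw
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan, cost

fleet = [
    ("solar", 300, 0.0),     # zero marginal cost, dispatched first
    ("wind", 200, 0.0),
    ("nuclear", 400, 10.0),
    ("gas", 500, 40.0),      # most expensive, dispatched last
]
plan, cost = dispatch(1000, fleet)
# plan -> {'solar': 300, 'wind': 200, 'nuclear': 400, 'gas': 100}; cost -> 8000.0
```

Changing an assumption (say, cheaper storage standing in for some of the gas capacity) immediately changes both which units run and the hourly cost, which is the dynamic the web tool lets you explore interactively.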
This illustration compares the four planets detected around the nearby star tau Ceti (top) and the inner planets of our solar system (bottom). (Illustration courtesy of Fabo Feng)
A new study by an international team of astronomers reveals that four Earth-sized planets orbit the nearest sun-like star, tau Ceti, which is about 12 light years away and visible to the naked eye. These planets have masses as low as 1.7 Earth masses, making them among the smallest planets ever detected around nearby sun-like stars. Two of them are super-Earths located in the habitable zone of the star, meaning they could support liquid surface water.
The planets were detected by observing the wobbles in the movement of tau Ceti. This required techniques sensitive enough to detect variations in the movement of the star as small as 30 centimeters per second.
"We are now finally crossing a threshold where, through very sophisticated modeling of large combined data sets from multiple independent observers, we can disentangle the noise due to stellar surface activity from the very tiny signals generated by the gravitational tugs from Earth-sized orbiting planets," said coauthor Steven Vogt, professor of astronomy and astrophysics at UC Santa Cruz.
According to lead author Fabo Feng of the University of Hertfordshire, UK, the researchers are getting tantalizingly close to the 10-centimeter-per-second limit required for detecting Earth analogs. "Our detection of such weak wobbles is a milestone in the search for Earth analogs and the understanding of the Earth's habitability through comparison with these analogs," Feng said. "We have introduced new methods to remove the noise in the data in order to reveal the weak planetary signals."
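The velocity scales quoted above can be checked with the standard radial-velocity semi-amplitude formula for a circular orbit. This is a back-of-the-envelope sketch, not the paper's analysis; the stellar mass of ~0.78 solar masses and the edge-on orbit (sin i = 1) are assumptions made here for illustration.

```python
# Back-of-the-envelope radial-velocity wobble for a hypothetical Earth analog.
# Standard circular-orbit formula:
#   K = 28.4329 m/s * (m_p sin i / M_Jup) * (M_* / M_sun)^(-2/3) * (P / 1 yr)^(-1/3)
M_EARTH_IN_MJUP = 1 / 317.8   # Earth's mass in Jupiter masses
m_star = 0.78                 # tau Ceti's mass in solar masses (assumed value)
period_yr = 1.0               # Earth-like one-year orbit, sin i = 1 assumed

k = 28.4329 * M_EARTH_IN_MJUP * m_star ** (-2 / 3) * period_yr ** (-1 / 3)
print(f"semi-amplitude: {k * 100:.0f} cm/s")
```

The result is roughly 0.1 m/s, which is why an Earth twin around a sun-like star sits near the 10-centimeter-per-second limit Feng describes, while the 30 cm/s sensitivity already achieved is enough to find somewhat heavier super-Earths.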
The outer two planets around tau Ceti are likely to be candidate habitable worlds, although a massive debris disc around the star probably reduces their habitability due to intensive bombardment by asteroids and comets.
The same team also investigated tau Ceti in 2013, when coauthor Mikko Tuomi of the University of Hertfordshire led an effort in developing data analysis techniques and using the star as a benchmark case. “We came up with an ingenious way of telling the difference between signals caused by planets and those caused by star’s activity. We realized that we could see how star’s activity differed at different wavelengths and use that information to separate this activity from signals of planets,” Tuomi said.
The researchers painstakingly improved the sensitivity of their techniques and were able to rule out two of the signals the team had identified in 2013 as planets. "But no matter how we look at the star, there seem to be at least four rocky planets orbiting it," Tuomi said. "We are slowly learning to tell the difference between wobbles caused by planets and those caused by stellar active surface. This enabled us to essentially verify the existence of the two outer, potentially habitable planets in the system."
Sun-like stars are thought to be the best targets in the search for habitable Earth-like planets due to their similarity to the sun. Unlike smaller, more common stars such as the red dwarfs Proxima Centauri and TRAPPIST-1, they are not so faint that habitable-zone planets would be tidally locked, showing the same side to the star at all times. Tau Ceti is very similar to the sun in its size and brightness, and both stars host multi-planet systems.
The data were obtained by using the HARPS spectrograph (European Southern Observatory, Chile) and Keck-HIRES (W. M. Keck Observatory, Mauna Kea, Hawaii).
A paper on the new findings was accepted for publication in the Astronomical Journal and is available online. In addition to Vogt, Feng, and Tuomi, coauthors include Hugh Jones of the University of Hertfordshire, UK; John Barnes of the Open University, UK; Guillem Anglada-Escude of Queen Mary University of London; and Paul Butler of the Carnegie Institution of Washington.
A unique trait of open source is that it's never truly EOL (End of Life). The disc images mostly remain online, and their licenses don't expire, so going back and installing an old version of Linux in a virtual machine and getting a precise picture of what progress Linux has made over the years is relatively simple.
We begin our journey with Slackware 1.01, posted to the comp.os.linux.announce newsgroup well over 20 years ago.
Slackware 1.01 (1993)
The best part about trying Slackware 1.01 is that there's a pre-made image in Qemu's 2014 series of free images, so you don't have to perform the install manually (don't get used to this luxury).
Many things in 1993's version of Linux work just as you'd expect. All the basic commands, such as ls and cd, work, and all the basic tools (gawk, cut, diff, perl, and of course Volkerding's favorite, elvis) are present and accounted for, but some of the little things surprised me. BASH courteously asks for confirmation when you try to tab-complete hundreds of files, and tools to inspect compressed files (such as zless, zmore, and zcat) already existed. In more ways than I'd expected, the system feels surprisingly modern.
What's missing is any notion of package management. All installs and uninstalls are entirely manual, with no tracking.
Overall, Slackware 1.01 feels a lot like a fairly modern UNIX—or more appropriately, it feels the way a modern UNIX might feel to a Linux user. Most everything is familiar, but there are differences here and there, though not nearly as many as you might expect from an operating system released in 1993!
Debian 0.91 (1994)
To try Debian 0.91, I used the floppy disk images available on the Ibiblio digital archive, originally posted in 1994, and booted them in Qemu.
The bootdisk for Debian 0.91 boots to a simple shell, with clear instructions on the steps you're meant to take next.
The install process is surprisingly smooth. It works off of a menu system with seven steps—from partitioning a hard drive and writing the ext2 filesystem to it, all the way through to copying the basedsk images. This provides a minimal Debian install with many of the familiar conventions any modern Linux user would expect from their OS.
Debian is now famous for its package management system, but there are mere hints of it in this early release. The dpkg command exists, but it's an interactive menu-based system—a sort of clunky aptitude, with several layers of menu selections and, unsurprisingly, only a fraction of the packages available today.
Even so, you can sense the convenience factor in the design concept. You download three floppy images, end up with a bootable system, and then use a simple text menu to install more goodies. I can sincerely see why Debian made a splash.
Jurix/S.u.S.E. (1996)
Jurix installation
A pre-cursor to SUSE, Jurix shipped with binary .tgz packages organized into directories resembling the structure of Slackware's install packages. The installer itself is also similar to Slackware's installer.
Although I wasn't specifically looking for the earliest instance, Jurix was the first Linux distribution I tried that really "felt" like it intended the user to run a GUI environment. XFree86 was installed by default, so if you didn't intend to use it, you had to opt out.
An example /usr/lib/X11/XF86Config (this later became Xorg.conf) file was provided, and that got me 90% of the way to a GUI, but fine-tuning vsync, hsync, and ramdac colormap overrides took me an entire weekend until I finally gave up.
Installing new packages on Jurix was simple: find a .tgz on your sources drive and run a routine tar command: $ su -c 'tar xzvf foo.tgz -C /' The package gets decompressed and unarchived onto the root partition, ready to use. I did this with several packages I hadn't installed to begin with and found it easy, fast, and reliable.
SUSE 5.1 (1998)
FVWM running on SuSE 5.1
I installed SUSE 5.1 from an InfoMagic CD-ROM purchased from a software store in Maryland in 1998.
The install process was convoluted compared to those that came before. YaST volleyed configuration files and settings between a floppy disk and the boot CD-ROM, requiring several reboots and a few restarts as I tried to understand the sequence expected from me. Once I'd failed the process twice, I got used to the way YaST worked, and the third time was smooth and very much a hint at the Linux user experience to come in later years.
A GUI environment was my main goal for SUSE 5.1. The configuration process was familiar, with a few nice graphical tools (including a good XF86Setup frontend) to help test and debug mouse and monitor problems. It took less than an hour to get a GUI up and running, and most of the delay was caused by my own research on what resolutions and color depths Qemu's virtualized video card could handle.
Included desktops were fvwm, fvwm2, and ctwm. I used fvwm, and it worked as expected. I even discovered tkDesk, a dock and file manager combo pack that is surprisingly similar to Ubuntu's Unity launcher bar.
The experience was, overall, very pleasant, and in terms of getting a successful desktop up and running, SUSE 5.1 was a rousing success.
Red Hat 6.0 (1999)
Red Hat 6 running GIMP 1.x
The next install disc I happened to have lying around was Red Hat 6.0. That's not RHEL 6.0—just Red Hat 6.0. This was a desktop distribution sold in stores, before RHEL or Fedora existed. The disc I used was purchased in June 1999.
The installation was fully guided and remarkably fast. You never have to leave the safety of the install process, whether choosing what packages to install (grouped together in Workstation, Server, and Custom groups), partitioning a drive, or kicking off the install.
Red Hat 6 included an xf86config application to step you through X configuration, although it strangely allowed some mouse emulation options that X later claimed were invalid. It beat editing the XF86Config file by hand, but getting X right was still clearly not a simple task.
The desktop bundled with Red Hat 6 was, as it still is, GNOME, but the window manager was an early Enlightenment, which also provided the main sound daemon. Xdm and gdm were both provided as login managers so that normal users could log in without having the permission to start or kill X itself, which is particularly important on multi-user systems.
Certain staple applications were missing: gedit didn't exist yet, there was no grand unified office application, and there was no package manager to speak of. GnoRPM, a GUI interface for RPM installation, review, and removal, was the closest thing to a yum or PackageKit experience, and gnotepad+ was the GUI text editor (Emacs notwithstanding, obviously).
Overall, though, the desktop is intuitive. Unlike later implementations of GNOME, this early version featured a panel at the bottom of the screen, with an application menu, launcher icons, and virtual desktop controls in a central location. I can't imagine a user of another operating system at the time finding this environment foreign.
Red Hat 6 was a strong entry for Linux, which was obviously moving seriously toward being a proper desktop OS.
Mandrake 8.0 (2001)
Mandrake: A turning point in Linux
Mandrake 8.0 was released in 2001, so it would have been compared to, for instance, Apple OS 9.2 and Windows ME.
I fell back on fairly old emulated tech to be safe.
I'd thought the Red Hat installation process had been nice, but Mandrake's was amazing. It was friendly, it gave the user a chance to test configurations before continuing, it was easy and fast, and it worked almost like magic. I didn't even have to import my XF86Config file, because Mandrake's installer got it right.
Mandrake 8.0 installer
Using the Mandrake desktop is a lot like using any given desktop of the time, actually. I was a little surprised at how similar the experience was. I feel certain that if I'd somehow stumbled into Mandrake Linux at this time, it actually wouldn't have been beyond my ability, even as a young and not very technical user. The interfaces are intuitive, the documentation helpful, and the package management quite natural, at a time when it wasn't yet the mental default for people to just go to a website and download an installer for whatever software they wanted.
Fedora 1 (2003)
Blue Fedora, Red Hat
In 2003, the new Fedora Core distribution was released. Fedora Core was based on Red Hat, and was meant to carry on the banner of desktop Linux once Red Hat Enterprise Linux (RHEL) became the flagship product of the company.
Nothing particularly special is required to boot the old Fedora Core 1 disc:
$ qemu-system-i386 -M pc \
    -m 2048 -boot order=ac,menu=on \
    -drive file=fedora1.qcow2 -usb \
    -net nic,model=rtl8139 -net user \
    -vga cirrus -cdrom fedora-1-i386-cd1.iso
Installing Fedora Core is simple and familiar; it uses the same Anaconda installer that Fedora and Red Hat releases would use for the next nine years. It's a graphical interface that's easy to use and easy to understand.
Anaconda GUI
The Fedora Core experience is largely indistinguishable from Red Hat 6 or 7. The GNOME desktop is polished, there are all the signature configuration helper applications, and the presentation is clean and professional.
A Start Here icon on the desktop guides the user toward three locations: an Applications folder, the Preferences panel, and System Settings. A red hat icon marks the applications menu, and the lower GNOME panel holds all the latest Linux application launchers, including the OpenOffice office suite and the Mozilla browser.
The future
By the early 2000s, it's clear that Linux has well and truly hit its stride. The desktop is more polished than ever, the applications available want for nothing, and installation is easier and more efficient than on other operating systems. In fact, from the early 2000s onward, the relationship between the user and the system is firmly established and remains basically unchanged even today, despite a steady stream of updates, improvements, and a staggering amount of innovation.
Whether you're new to Linux, or whether you're such an old hand that most of these screenshots have been more biographical than historical, it's good to be able to look back at how one of the largest open source projects in the world has developed. More importantly, it's exciting to think of where Linux is headed and how we can all be a part of that, starting now, and for years to come.
In this post, I hope to:

— Shed light on some of the putative benefits of small functions
— Explain why I personally think some of the benefits don’t really pan out as well as advertised
— Explain why small functions can actually prove counterproductive
— Explain the times when I do think smaller functions truly shine
General programming advice invariably seems to extol the elegance and efficacy of small functions. The book Clean Code — often considered something of a programming bible — has a chapter dedicated to functions alone, and the chapter begins with an example of a truly dreadful function that also happens to be long. The book goes on to lay blame on the length of the function as its most grievous offense, stating that:
Not only is it (the function) long, but it’s got duplicated code, lots of odd strings, and many strange and inobvious data types and APIs. Do you understand the function after three minutes of study? Probably not. There’s too much going on in there at too many different levels of abstraction. There are strange strings and odd function calls mixed in with doubly nested if statements controlled by flags.
The chapter briefly ponders what qualities would make the code “easy to read and understand” and “allow a casual reader to intuit the kind of program they live inside”, before declaring that making the function smaller will necessarily achieve this purpose.
The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that.
The idea that functions should be small is something that is almost considered too sacrosanct to call into question. It often gets trotted out during code reviews, in Twitter discussions, conference talks, books and podcasts on programming, articles on best practices for refactoring code, and so forth. This idea made its merry way into my timeline again a few days ago in the form of this tweet:
Fowler, in his tweet, links to his article on function length, where he goes on to state that:
If you have to spend effort into looking at a fragment of code to figure out what it’s doing, then you should extract it into a function and name the function after that “what”.
Once I accepted this principle, I developed a habit of writing very small functions — typically only a few lines long [2]. Any function more than half-a-dozen lines of code starts to smell to me, and it’s not unusual for me to have functions that are a single line of code [3].
The virtues of small functions are so widely evangelized that this idea made it to my timeline again today in the form of this tweet:
Some people seem so enamored with small functions that the idea of abstracting any piece of logic that is even nominally complex, or that needs further elucidation via a comment, into a separate function is enthusiastically promoted.
I’ve worked on codebases inherited from folks who’d internalized this idea to such an unholy extent that the end result was pretty hellish and entirely antithetical to all the good intentions the road to it was paved with. In this post, I hope to explain why some of the oft-touted benefits don’t always pan out the way one hopes and the times when some of the ideas can actually prove to be counterproductive.
Putative benefits of smaller functions
A number of reasons usually get wheeled out to argue the merits of smaller functions.
Do one thing
The idea is simple — a function should only ever do one thing and do it well. On the face of it, this sounds like an extremely sound idea, in tune, even, with the Unix philosophy.
The bit where this gets murky is when this “one thing” needs to be defined. The “one thing” can be anything from a simple return statement to a conditional expression to a piece of mathematical computation to a network call, though many a time this “one thing” is interpreted to mean a single level of abstraction of some (often business) logic.
For instance, in a web application, a CRUD operation like “create user” can be “one thing”. Typically, at the very least, creating a user entails creating a record in the database (and handling any concomitant errors). Additionally, creating a user might also require sending them a welcome email. Furthermore, one might also want to trigger a custom event to a message broker like Kafka to feed this event into various other systems.
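To make this concrete, here is a minimal Python sketch of such a create-user operation — all names and collaborators are hypothetical, stubbed in memory purely for illustration — whose single “one thing” already comprises three distinct concerns:

```python
# Hypothetical sketch: "create user" as one cohesive unit of work.
# The database, mailer, and event bus are in-memory stand-ins for real services.

class UserService:
    def __init__(self, db, mailer, events):
        self.db = db          # e.g. a database client
        self.mailer = mailer  # e.g. an email-service client
        self.events = events  # e.g. a Kafka producer

    def create_user(self, email):
        # "One thing" -- creating a user -- spans three concerns:
        # 1. persist the record (and surface any error)
        user_id = self.db.insert("users", {"email": email})
        # 2. send the welcome email
        self.mailer.send(to=email, template="welcome")
        # 3. announce the event to downstream systems
        self.events.publish("user.created", {"id": user_id})
        return user_id

# Minimal stand-ins so the sketch runs on its own.
class FakeDB:
    def __init__(self):
        self.rows = []
    def insert(self, table, row):
        self.rows.append((table, row))
        return len(self.rows)

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, template):
        self.sent.append((to, template))

class FakeEvents:
    def __init__(self):
        self.published = []
    def publish(self, topic, payload):
        self.published.append((topic, payload))
```

Read as a single unit, create_user is easy to follow and to test as one abstraction; split each numbered step into its own tiny method and the same logic now spans four reading locations.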
Thus, often a “single level of abstraction” isn’t just a single level, and what I’ve seen happen is that programmers who’ve completely bought in to the idea that a function should do “one thing” tend to find it hard not to apply the same principle recursively to every function or method they write.
Thus, instead of a reasonably airtight abstraction that can be comprehended (and tested) as a single unit, we now have carved out smaller boundaries to delineate each and every component of “the one thing” until it’s fully modular and entirely DRY.
The fallacy of DRY
DRY, in my opinion, is a good guiding principle, but an awful lot of the time, pragmatism and reason are sacrificed at the altar of a dogmatic subscription to DRY, especially by programmers of the Rails persuasion. Raymond Hettinger, a core Python developer, has a fantastic talk called Beyond PEP8: Best practices for beautiful, intelligible code. This talk is a must-watch, not just for Python programmers but for anyone interested in programming or who programs for a living, because it very incisively lays bare the fallacies of dogmatic adherence to PEP8, the Python style guide many linters implement. That the talk is about PEP8 isn’t as important as the rich insights one can take away from it, many of which are language agnostic.
Even if you don’t watch the entire talk, you should watch this one minute of the talk, which draws a frighteningly accurate analogy to the siren call of DRY. Programmers who’re beholden to DRY and insist on DRYing up as much of code as possible risk not seeing the forest for the trees.
My main problem with DRY is that it forces us into abstractions — nested and premature ones at that. Inasmuch as it’s impossible to abstract perfectly, the best we can do is abstract well enough. How we define “well enough” is hard to generalize and is contingent on a large number of factors, some of them being:
— the nature of the assumptions underpinning our abstraction, and how likely (and for how long) they are to hold water
— the extent to which the layers of abstraction underlying our abstraction in question are prone to remain consistent (and correct) in their implementation and design
— the flexibility and extensibility of both the underlying abstractions and any abstraction currently built on top of our abstraction in question
— the requirements and expectations of any future abstractions that might be built on top of our abstraction in question
As such, a best-effort guarantee is all we might be able to offer. It’s absolutely inevitable that our abstraction is going to be subject to constant reassessment and in all likelihood, partial (or even full) invalidation as well. The one overarching feature that will stand us in good stead for the inevitable modification that’d be needed is to design our abstraction to be flexible. DRYing our current code to the furthest extent possible is depriving our future selves of this flexibility. It’s akin to us aiming for the perfect fit, when what we really ought to be designing for is to give ourselves enough wiggle room so as to be able to accommodate alterations in the future.
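A hypothetical Python illustration of this trade-off (all function names invented): two nearly identical formatters DRYed into one shared helper, which then has to grow a flag for every divergence, versus two plain functions that remain free to evolve independently.

```python
# The DRYed-up version: one shared helper that must serve every caller.
# Each new requirement tends to arrive as yet another flag.
def format_amount(value, currency="USD", cents=True, accounting=False):
    text = f"{value:,.2f}" if cents else f"{value:,.0f}"
    if accounting and value < 0:
        # accounting style renders negatives in parentheses: (1,000.00)
        text = f"({text.lstrip('-')})"
    return f"{currency} {text}"

# The "duplicated" version: two small functions, trivially similar today,
# but each free to change without threading flags through shared code.
def format_invoice_amount(value):
    return f"USD {value:,.2f}"

def format_ledger_amount(value):
    if value < 0:
        return f"USD ({-value:,.2f})"
    return f"USD {value:,.2f}"
```

Today the helper looks tidier; after a few more divergent requirements, its flag list becomes the very wiggle room the duplicated versions had for free.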
The best abstraction is an abstraction that optimizes to be good enough, not perfect. That’s a feature, not a bug. Abstractions are also subject to continuously changing contracts they expose to other abstractions. Understanding this very salient nature of abstraction is the key to designing good ones.
Alex Martelli, who coined the phrase “duck typing”, has a famous talk titled The Tower Of Abstraction, and the slides are well worth a read. In particular, I found his approach to the challenge of abstracting well interesting.
Sandi Metz has a famous talk called All The Little Things, where she posits that “duplication is far cheaper than the wrong abstraction”, and thus to “prefer duplication over the wrong abstraction”.
Abstractions, in my opinion, can’t ever be entirely “right” or “wrong” — since the line demarcating “right” from “wrong” is an inherently blurred and ever-changing one. Our carefully handcrafted artisanal “perfect” abstraction is only one business requirement or bug report away from being consigned to the status of “wrong”.
I think it helps to view abstractions as a spectrum. One end of this spectrum optimizes for precision, where every last aspect of our code needs to be exactly precise. This certainly has its fair share of benefits but doesn’t serve well for designing good abstractions, since it strives for a perfect alignment. The other end of this spectrum optimizes for imprecision and the lack of boundaries. While this does allow for maximal flexibility, I find this extreme to be prone to other drawbacks.
As with most other things, the ideal lies somewhere in between. There is no one-size-fits-all happy medium. The ideal also varies depending on a vast number of factors — both programmatic and interpersonal — and the hallmark of good engineering is to be able to identify where in the spectrum this ideal lies for any given context, as well as to constantly reevaluate and recalibrate this ideal.
The name of the game
Speaking of abstraction, once it’s decided what to abstract and how, it’s important to give it a name.
And naming things is hard.
It’s considered something of a truism in programming that giving things longer, more descriptive names is a positively good thing, so much so that some even advocate replacing comments in code with a function bearing the name of the comment. The idea here is that the more descriptive a name, the better the encapsulation.
This might fly in the Java world, where verbosity is the norm, but I’ve never particularly found code with such lengthy names easy to read. What could’ve been, say, 4–5 lines of code is now abstracted away in a function with an extremely long name. When I’m reading code, seeing such a verbose word pop up suddenly gives me pause as I process all the different words in the name, try to place it somewhere appropriate in the mental model I’ve been building thus far, and then decide whether or not to peruse it in greater detail by jumping to the function and reading its implementation.
While this isn’t a problem when it occurs sparingly, when “one single thing” is broken down into several other “one single things” (as is wont to happen when the quest for small functions ends up begetting even more small functions), each of them tends to be given an extremely verbose name in the spirit of making code self-documenting and eschewing comments. The cognitive overhead of processing the verbose names, mapping them into the mental model I’ve been building as I read, deciding which functions to dig deeper into and which to skim, and piecing together the puzzle to uncover the “big picture” becomes rather difficult.
Personally, I find keywords, constructs, and idioms offered by the programming language much easier to interpret than logic custom to the program. For instance, when I’m reading an if-else block, I rarely spend any mental cycles processing the syntax, but rather spend my time understanding the logical flow of the program. Interrupting my flow of reasoning with aVeryLongFunctionNameThatIsPassedTwoParameters is a jarring disruption. This is especially true when the function being called is actually a one-liner that can be inlined. Context switches are expensive, whether they are CPU context switches or a programmer having to mentally switch context while reading code.
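As a contrived Python sketch of the kind of one-liner that can simply be inlined (every name here is invented for illustration):

```python
from collections import namedtuple

User = namedtuple("User", ["subscribed", "banned"])

# A one-line function whose verbose name the reader must parse and
# map into their mental model before deciding whether to jump to it...
def user_has_active_subscription_and_is_not_banned(user):
    return user.subscribed and not user.banned

def greet_with_helper(user):
    if user_has_active_subscription_and_is_not_banned(user):
        return "welcome back"
    return "access denied"

# ...versus simply inlining the two-term condition where it is used.
def greet_inline(user):
    if user.subscribed and not user.banned:
        return "welcome back"
    return "access denied"
```

Both versions behave identically; the inlined condition uses only language constructs the reader already parses for free.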
The other problem with a surfeit of small functions, especially ones with very descriptive yet unintuitive names, is that the codebase is now harder to search. A function named createUser is easy and intuitive to grep for; something like renderPageWithSetupsAndTeardowns (a name held up as a shining example in the book Clean Code), by contrast, is neither especially memorable nor searchable. Many editors also do a fuzzy search of the codebase, so having too many functions with similar prefixes is more likely to pull up an extraneous number of results while searching, which is hardly ideal.
Loss of Locality
Small functions work best when we don’t have to jump across file or package boundaries to find the definition of a function. The book Clean Code proposes something called The Stepdown Rule to this end.
We want the code to read like a top-down narrative. We want every function to be followed by those at the next level of abstraction so that we can read the program, descending one level of abstraction at a time as we read down the list of functions. I call this The Stepdown Rule.
This sounds great in theory, but rarely have I seen it play out well in practice. What I have seen almost invariably is a loss of locality.
Let’s assume we start out with three functions, A, B, and C, each of which is called (and ergo read) one after the other. Our initial abstractions were underpinned by certain assumptions, requirements, and caveats, all of which we assiduously accounted for at the time of design.
Soon enough, a new requirement pops up, or an edge case we hadn’t foreseen, or a new constraint we need to cater to. We need to modify function A, since the abstraction isn’t valid anymore (or was never valid to begin with and now needs rectifying). In keeping with what we’ve read in Clean Code, we decide the best way to deal with this is, well, to create more functions that will hide away the messy new requirements that’ve cropped up.
A couple of weeks after we’ve made this change, our requirements change yet again, and we decide to create even more functions to encapsulate all the additional changes required.
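A before-and-after Python sketch (an invented example) of how two rounds of requirement changes, each “hidden away” in new functions, turn three top-to-bottom steps into a scavenger hunt:

```python
# Originally: one function, read top to bottom in a single place.
def ship_order_v1(order):
    total = sum(item["price"] for item in order["items"])
    if total > 100:
        total *= 0.9            # bulk discount
    return {"total": round(total, 2), "status": "shipped"}

# After two rounds of "extract a function per new requirement":
# the same logic now spans four definitions the reader must chase.
def _subtotal(order):
    return sum(item["price"] for item in order["items"])

def _apply_discounts(total, order):
    if total > 100:
        total *= 0.9            # bulk discount (requirement 1)
    if order.get("coupon"):
        total -= 5              # coupon support (requirement 2)
    return total

def _finalize(total):
    return {"total": round(total, 2), "status": "shipped"}

def ship_order_v2(order):
    return _finalize(_apply_discounts(_subtotal(order), order))
```

Each extraction looked tidy in isolation; cumulatively, understanding what shipping an order does now requires hopping between four locations instead of reading one.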
Rinse and repeat, and we’ve arrived exactly at the problem Sandi Metz describes in her post on The Wrong Abstraction. The post goes on to state that:
Existing code exerts a powerful influence. Its very presence argues that it is both correct and necessary. We know that code represents effort expended, and we are very motivated to preserve the value of this effort. And, unfortunately, the sad truth is that the more complicated and incomprehensible the code, i.e. the deeper the investment in creating it, the more we feel pressure to retain it (the “sunk cost fallacy”).
While this might be true when the same team that originally wrote the code continues maintaining it, I’ve seen the opposite play out when new programmers (or managers) take ownership of the codebase. What started out with good intentions has turned into spaghetti code that sure as hell ain’t clean anymore, and the urge to “refactor” or sometimes even rewrite the code becomes all the more tempting.
Now one might argue that to a certain extent this is inevitable. And they’d be right. What we rarely talk about is how important it is to write code that will die a graceful death. I’ve written in the past about how important it is to make code operationally easy to decommission, but this is even more true when it comes to the codebase itself.
All too often, programmers think of code as “dead” only if it’s deleted, no longer in use, or part of a decommissioned service. If we start thinking about the code we write as something that dies every single time we add a new git commit, I think we might be more incentivized to write code that’s amenable to easy modification. When thinking about how to abstract, it greatly helps to be cognizant of the fact that the code we’re building might be only a few hours away from dying (being modified). In general, it’s rare for programmers working on a team to not have even the slightest inkling of the near-term product roadmap, so optimizing for ease of modification tends to work better than trying to build top-down narratives of the sort proposed in Clean Code.
Class pollution
Smaller functions also lead to either larger classes or simply more classes in languages that support object-oriented programming. In the case of a language like Go, I’ve seen this tendency lead to larger interfaces (combined with the double whammy of interface pollution) or a large number of tiny packages.
This exacerbates the cognitive overhead involved in mapping the business logic to the carved-out abstractions. The more classes, interfaces, or packages there are, the harder it is to “take it all in” in one fell swoop, and the harder it becomes to justify the maintenance cost of these various components.
Fewer Arguments
Proponents of smaller functions also almost invariably tend to champion that fewer arguments be passed to the function.
The problem with fewer function arguments is that one runs the risk of not making dependencies explicit.
I’ve definitely seen Ruby classes with 5–10 tiny methods, all of which typically do something very trivial and take maybe a parameter or two. I’ve also seen a lot of them mutate shared global state or rely on singletons not explicitly passed to them, which is an anti-pattern if ever there was one.
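A small Python sketch of that anti-pattern (names hypothetical): a tiny function that reaches out to shared module-level state, versus one that takes its dependency as an explicit argument:

```python
# Anti-pattern: a tiny function with an implicit dependency on
# module-level state; its behavior depends on who mutated the
# global before it ran, so every test must set up and tear it down.
CURRENT_DISCOUNT = 0.0

def price_implicit(amount):
    return amount * (1 - CURRENT_DISCOUNT)

# Explicit version: the dependency is visible in the signature,
# so tests need no shared-state setup or teardown at all.
def price_explicit(amount, discount):
    return amount * (1 - discount)
```

The explicit version is a pure function of its arguments; the implicit one silently couples every caller (and every test) through CURRENT_DISCOUNT.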
Furthermore, when the dependencies aren’t explicit, testing becomes a lot harder and more complicated into the bargain, what with the overhead of setting up and tearing down state before every individual test targeting the itsy-bitsy little functions.
Hard to Read
This has already been stated but bears reiterating: an explosion of small functions, especially one-line functions, makes the codebase inordinately harder to read, especially for those for whom the codebase should’ve primarily been optimized in the first place — newcomers.
There are several types of newcomers to a codebase: programmers new to the language, new to the framework, new to the problem domain, or some combination of these. A good rule of thumb, in my experience, has been to keep in mind someone who might tick several of these categories of “new”. Doing so helps me reevaluate my assumptions and rethink the overhead I might be inadvertently imposing on someone who’ll be reading the code for the first time. What I’ve realized is that this approach actually leads to far better and simpler code than might’ve been possible otherwise.
Simple code isn’t necessarily the easiest code to write, and rarely is it ever the DRYest code. It takes an enormous amount of careful thought, attention to detail and care to arrive at the simplest solution that is both correct and easy to reason about. What is most striking about simplicity is that it lends itself to being easily understood by both old and new programmers, for all possible definitions of “old” and “new”.
When I’m “new” to a codebase, if I’m fortunate enough to already know the language and/or the framework being used, the biggest challenge for me is understanding the business logic or implementation details. When I’m not so fortunate and am faced with the daunting task of manoeuvring my way through a codebase written in a language foreign to me, the biggest challenge is walking a tightrope: taking in just enough of the language/framework to understand what the code is doing, without going down a rabbit hole, while still isolating the “one single thing” of interest that I’d need to understand to move the project to the next stage.
During neither of these times have I ever looked at an unfamiliar codebase and thought:
Aww, look at these functions. So small. So DRY. So beauuuuuutiful.
What I’m really hoping for during the times I venture into uncharted territory is to make the least number of mental hops and context switches while trying to find the answer to a question.
Investing time and thought into making things easy for the future maintainer or consumer of the code one is currently authoring is something that will have a huge pay-off, especially for open source projects. This is something I wish I’d done better earlier in my career and is something I’m mindful of these days.
When do smaller functions actually make sense?
All things considered, I do believe small functions absolutely have their utility, especially when it comes to testing.
Network I/O
This isn’t a post on how best to write functional, integration, and unit tests for a vast number of services. However, when it comes to unit tests, the way network I/O is tested is by, well, not actually testing it.
I’m not a terribly big fan of mocks. Mocks have several shortcomings. For one thing, mocks are an artificial simulation of some result. Mocks are only as good as our imagination and our ability to predict the various failure modes our application might encounter. Mocks are also very likely to drift out of sync with the real service they stand in for, unless one painstakingly tests the mock itself against the real service. Mocks work best when there is just a single instance of every particular mock and every test uses the same mock.
That said, mocks are still pretty much the only way to unit-test network I/O. We live in an era of microservices, when outsourcing most (if not all) concerns not core to one’s main product to a vendor makes the most sense. A lot of an application’s core functionality now involves a network call, and the best way to unit-test these calls is by mocking them out.
On the whole, I find that limiting the surface area of mocks to the least amount of code works best. An API call to an email service to send our newly created user a welcome email requires making an HTTP connection, and isolating this request in the smallest possible function allows us to mock the least amount of code in tests. Typically, this should be a function with no more than 1–2 lines that makes the HTTP connection and returns any error along with the response. The same applies when publishing an event to Kafka or creating a new user in the database.
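A Python sketch of that seam (service URL and names are hypothetical): the HTTP call is isolated behind one tiny function with an injected transport, so a unit test only ever has to replace that single seam.

```python
# The only function that performs network I/O: the single seam a
# unit test has to replace. http_post is injected -- in production
# it would wrap a real HTTP client; in tests it is a two-line fake.
def post_welcome_email(http_post, address):
    return http_post("https://email.example.invalid/send",
                     json={"to": address, "template": "welcome"})

# Everything above the seam is plain logic, testable without mocks.
def welcome_new_user(http_post, address):
    response = post_welcome_email(http_post, address)
    if response["status"] != 200:
        raise RuntimeError(f"email service returned {response['status']}")
    return response["id"]
```

Because the mock’s surface area is one call with one fixed shape, keeping the fake in sync with the real service stays tractable.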
Property based testing
For something that can provide such an enormous amount of benefit with so little code, property-based testing is woefully underused. Pioneered by the Haskell library QuickCheck and gaining adoption in other languages like Scala and Python, property-based testing allows one to generate a large number of inputs matching some specification for a given test and assert that the test passes for each and every one of these cases.
Many property-based testing frameworks target functions, so it makes sense to isolate anything that can be subjected to property-based testing into a single function. I find this especially useful when testing the encoding and decoding of data, JSON parsing, and so forth.
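In a real project one would reach for a library such as Hypothesis; here is a dependency-free Python sketch of the idea, hand-rolling the input generation with the standard library and applying it to an encode/decode round trip (the codec itself is a toy invented for illustration):

```python
import random

def encode(values):
    # encode a list of ints as a comma-separated string
    return ",".join(str(v) for v in values)

def decode(text):
    # inverse of encode; the empty string decodes to an empty list
    return [int(chunk) for chunk in text.split(",")] if text else []

def check_roundtrip_property(trials=200, seed=42):
    # Property: decode(encode(xs)) == xs for every generated input,
    # including the empty list and negative values.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-10**6, 10**6)
              for _ in range(rng.randint(0, 20))]
        assert decode(encode(xs)) == xs, f"round-trip failed for {xs}"
    return trials
```

Because the property targets a single isolated function pair, hundreds of generated cases exercise it with no per-case test code.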
Conclusion
This post’s intention was not to argue that DRY or small functions are inherently bad (even if the title disingenuously suggested so), only that they aren’t inherently good either.
The number of small functions in a codebase or the average function length in itself isn’t a metric to brag about. There’s a 2016 PyCon talk called onelineizer about an eponymous Python program that can convert any Python program (including itself) into a single line of code. While this makes for a fun and fascinating conference talk, one would be rather silly to write production code in the same manner.
In the words of one of the best programmers of our time:
This applies universally, not just to Go. Conventional programming wisdom is often heavily influenced by books written during the era when Object Oriented Programming and “Design Patterns” reigned supreme. A lot of the ideas and best practices promulgated then have largely gone unchallenged, even as programming paradigms have evolved vastly in recent years and require reconsideration.
As the complexity of the programs we author has vastly increased and the constraints we work against have become more protean, it becomes all the more necessary for programmers to adapt their thinking accordingly. Wheeling out old tropes is not only lazy but also lulls programmers into a false sense of reassurance.
An expert in California labor law tells us that Damore has a better chance of winning his case than most people think. That's because Damore's case is NOT about free speech, discrimination, or his rights as an "at-will" worker. Damore filed under a section of law that deals with protecting statements made by workers' rights activists who have questions about wages and conditions. Google may have difficulty establishing that he broke the company's internal code of conduct, because he used bulletin boards that the company provided to allow employees to discuss these issues, and because his manifesto repeatedly states he is in favor of diversity and intended to "increase women's representation in tech."
That's because
he filed his complaint against Alphabet (Google's corporate
parent) under a specific provision of the National Labor
Relations Act that protects workers' rights activists. Under that
provision, Damore's complaint will not be about whether
he was discriminated against as a white person, a male, or a
conservative, or whether
the company had a right to let him go as an "at-will" worker.
Rather, the provision governs what workers are allowed to talk
about in the workplace in regards to pay, conditions, promotions,
and other workplace practices. The law was originally crafted to
protect the right of union organisers to discuss pay rates with
their colleagues, and more latterly to protect anyone asking
questions at work about who gets paid what, and why.
On that basis, he has a fighting chance, according to Valerie L.
Sharpe, a labor lawyer based in the San Francisco area. She told
Business Insider that Damore's chances of success are "a little
bit above decent." HR lawyers at other tech companies in the Bay
Area are following the case closely for that reason, she says.
Sec. 8. [§ 158.] (a) [Unfair labor practices by employer] It
shall be an unfair labor practice for an employer--
(1) to interfere with, restrain, or coerce employees in the
exercise of the rights guaranteed in section 7 [section 157 of
this title];
Sec. 7. [§ 157.] Employees shall have the right to
self-organization, to form, join, or assist labor organizations,
to bargain collectively through representatives of their own
choosing, and to engage in other concerted activities for the
purpose of collective bargaining or other mutual aid or
protection, and shall also have the right to refrain from any or
all of such activities except to the extent that such right may
be affected by an agreement requiring membership in a labor
organization as a condition of employment as authorized in
section 8(a)(3) [section 158(a)(3) of this title].
"The crux of his claim is whether Google penalized him for
raising concerns about working conditions (i.e., unfair treatment
of white men?)," Sharpe says. "Whether the manifesto really
constitutes a 'concern about working conditions' and whether he
was acting for the good of others will be the dispositive issues.
I am not aware of any cases that are exactly on point, but there
are certainly cases that litigate this issue and cases where
employees are returned to work."
"Damore's possible claims really have nothing to do with whether
white males are discriminated against in wages and promotion,"
Sharpe says. "It is about whether he was fired because he
complained that Google's diversity efforts were unfair to men. He
doesn't have to prove the allegation, just that he made the claim
for the purpose of advancing working conditions of himself and
others."
Google's other problem in defending this case will be to
establish that Damore broke company policy by publishing the
manifesto. Google's
employee code of conduct prohibits employees from
discriminating against each other on the basis of sex or race.
"Googlers are expected to do their utmost to create a workplace
culture that is free of harassment, intimidation, bias, and
unlawful discrimination," it says. The company might, presumably,
argue that Damore's manifesto was an example of using a company
message board to post a discriminatory statement that violated
the policy. The manifesto argued that "biological causes ... may
explain why we don't see equal representation of women in tech
and leadership." No doubt many female Google employees complained
internally that it was being circulated on Google's employee
message boards.
"I have conducted probably 100 workplace investigations to assess
whether employee complaints about harassment and discrimination
violate employer codes of conduct and I am not sure I would find
this to be a violation of a Code that merely prohibited
'unlawful' harassment and discrimination," Sharpe says.
Lastly, Google may have trouble establishing that using its
message board was a violation of policy, given that the message
boards are provided by the company precisely to allow workers to
discuss workplace issues. "Because start-ups like Google have
lots of message boards and employer sponsored forums for
employees to discuss work issues (i.e. Employee Resource Groups),
Google will have challenges establishing a policy violation,"
Sharpe says.
Of course, the irony here is that if Damore wins it will be
regarded as a big victory for conservatives who work in tech,
even though the win would strengthen the kind of workers' rights
that are traditionally the focus of the left.
You aren't going to share anything novel or nonpublic
If an idea has any merit, someone is already working on it. Only very rarely is anything provided under an NDA actually nonpublic. Your competitors have probably considered your idea and are either working on it or have discarded it.
If I can copy your idea from what you're going to share it's probably not a very good idea
If your idea is proprietary, get a patent or a copyright, not an NDA
Legal agreements are serious documents that take time to review and to consider throughout the duration of their effective dates
Signing a legal agreement is not something to do without thinking or reading; it's a binding contract that should be taken seriously. "This is just boilerplate" or "This is just a routine formality" or "Everyone signs these" are not good reasons to sign an NDA.
NDAs are (probably) impossible to enforce
TODO
The time you're spending creating and enforcing NDAs is a sign of misaligned priorities
If you aren't 100% focused on your customer at the startup stage there is something wrong. Legal agreements that don't have a strategic purpose take time from more important priorities.
You aren't providing anything in return
Signing an NDA provides a value to a company that should be compensated. If the NDA isn't worth paying for the developer to sign, it's probably not worth signing.
You can share non-public information and withhold the juicy details
There may be something truly secret that you don't want to share. By sharing other details first without an NDA, the prospective employee can determine whether the job is actually a good fit before signing a legal agreement.
Your NDA is one-sided
A one-sided NDA is a nonstarter. An employer should agree to use the same care in protecting confidential information shared with them as they expect from the employee. Furthermore, providing a one-sided NDA is an indicator of either inexperience or over-lawyering.
Lawyers are too expensive
There are a lot of better ways to spend money than having lawyers review nondisclosure agreements.
Developers don't need to sign NDAs to get a job
Developing for startups is in high demand. Good developers can find plenty of work at less litigious companies.
TODO
Feel free to make a PR if you'd like to add to this repo. Please read the contribution guidelines before making a PR.
When biologists synthesize DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans nor animals but computers.
In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it's one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.
“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”
A Sci-Fi Hack
For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.
If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. "There are a lot of interesting—or threatening may be a better word—applications of this coming in the future," says Peter Ney, a researcher on the project.
Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an "exploit"—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a "buffer overflow," designed to fill the space in a computer's memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.
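The buffer overflow idea the researchers started from can be illustrated with a toy simulation: a fixed-size buffer sits next to other data in "memory", and an unchecked copy spills into the neighboring field. This is purely illustrative Python, not their exploit; real overflows clobber things like return addresses in native code.

```python
# Toy model of a buffer overflow: an 8-byte buffer followed by a 4-byte
# field standing in for adjacent memory (e.g. a return address).
BUF_SIZE = 8

def make_memory():
    """8 zero bytes of buffer, then a 4-byte sentinel field."""
    return bytearray(BUF_SIZE) + bytearray(b"\xAA\xBB\xCC\xDD")

def unchecked_copy(memory, data):
    """Copies without a length check -- the classic bug."""
    memory[0:len(data)] = data  # no bounds check against BUF_SIZE

mem = make_memory()
payload = b"A" * BUF_SIZE + b"\xDE\xAD\xBE\xEF"  # 4 extra bytes spill over
unchecked_copy(mem, payload)
assert mem[BUF_SIZE:] == b"\xDE\xAD\xBE\xEF"  # adjacent field was clobbered
```

The attacker-controlled spillover is what lets carefully crafted input plant its own commands in memory the program never meant it to reach.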
But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA's basic units of code—the chemical bases A, T, G, and C—and each emit a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer's parallel processing.
When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.
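The physical constraints described above are easy to express as a pre-flight check on a candidate sequence. The thresholds and helper names below are illustrative, not the ones the researchers used, but they capture the two rules: keep G+C content balanced against A+T, and avoid long repeated substrings that make the strand fold in on itself.

```python
# Sanity-check a candidate DNA encoding against the physical constraints
# described in the article. Thresholds here are illustrative assumptions.
def gc_content(seq):
    """Fraction of bases that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_long_repeat(seq, k=8):
    """True if any k-base substring occurs more than once (folding risk)."""
    seen = set()
    for i in range(len(seq) - k + 1):
        chunk = seq[i:i + k]
        if chunk in seen:
            return True
        seen.add(chunk)
    return False

def looks_synthesizable(seq, lo=0.4, hi=0.6):
    """Balanced G-C ratio and no long repeats."""
    return lo <= gc_content(seq) <= hi and not has_long_repeat(seq)

assert looks_synthesizable("ATGCGCATTAGCGTACGATC")
assert not looks_synthesizable("AAAAAAAAAAAAAAAAAAAA")  # all A-T, repeated
```

Each time a check like this failed, the group had to rewrite the exploit bytes into a different but equivalent base sequence.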
The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that's used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.
A Far-Off Threat
Even then, the attack was fully translated only about 37 percent of the time, since the sequencer's parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)
Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program's open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. "A lot of this software wasn't written with security in mind," Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit's code.
Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. "This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability," writes Jason Callahan, the company's chief information security officer. "We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing."
But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA's ability to maintain its structure far longer than magnetic encoding in flash memory or on a hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says.
"I read this paper with a smile on my face, because I think it’s clever," Shipman says. "Is it something we should start screening for now? I doubt it." But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick.
"Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly," Shipman says, "we'll be glad we started thinking about these things."
Tech startups live by the rule that speed is paramount. Houseparty, creator of a hot video app, has an extra reason for urgency.
Facebook Inc., a dominant force in Silicon Valley, is stalking the company, part of the social network’s aggressive mimicking of smaller rivals. Facebook is being aided by an internal “early bird” warning system that identifies potential threats, according to people familiar with the technology.
A front-end development framework is a combination of HTML, CSS, and Javascript components that makes the HTML5 development process quick and easy. By using a development framework, HTML5 developers can build modern, responsive websites that attract users.
There are many differences between the individual frameworks, but all of them use a grid system for layout, and all of them package standardized HTML5, CSS, and Javascript concepts.
The main goal of these frameworks is to make common and complex tasks easier.
In this post, we are going to talk about the top 5 popular front-end framework trends of 2017.
Here are the top 5 popular front-end frameworks of 2017:
Bootstrap is a development framework built with HTML5, CSS, and Javascript around a mobile-first approach. It provides many components such as tabs, carousels, responsive navigation, accordions, and more, and developers can design almost any complex layout with the help of its 12-column grid.
In addition, there is a large Bootstrap developer community all over the internet where developers can find easy solutions, as well as many great Bootstrap resource websites like Bootsnipp, Next Bootstrap, and Start Bootstrap.
Here are some of the pros and cons of Bootstrap CSS development framework:
Pros:
– Bootstrap is a great standardized development platform which has all basic styles and components required to build various front-end designs like layout grids, panels, buttons, tables, etc.
– Bootstrap is supported by all major browsers and easily fixes CSS compatibility issues.
– Provides consistent UI which looks awesome out of the box.
– Bootstrap CSS is designed with responsive structures and styles for mobile devices.
– Provides lots of free and professional templates, themes, and plugins.
Cons:
– Bootstrap styles are complex and lead to a lot of HTML output which is not perfectly semantic.
– Javascript in Bootstrap is tied to jQuery.
– Bootstrap requires a lot of file rewriting if you have made many customizations or want to diverge from the standard Bootstrap structure.
– Websites built with Bootstrap tend to look the same unless you customize them heavily.
Bulma is a CSS development framework that has been getting a lot of buzz recently among the developer community. It is still in beta, with the team pushing toward a 1.0 release. Bulma is a lightweight, simple-to-use CSS stylesheet, and its grid is built on Flexbox, which allows fancier layout footwork than a standard grid.
Bulma runs on and supports all major web browsers except Internet Explorer versions below 10.
That said, if you are already using Bootstrap, you don’t necessarily need to migrate.
Here are the pros and cons of Bulma CSS development framework:
Pros:
– The Bulma CSS file is very lightweight and simpler to customize than other CSS development frameworks.
– Bulma CSS provides integrated Flexbox which allows users to build fancy designs.
Cons:
– Bulma is still in the development phase, with the final version not yet released.
– Bulma renders very slowly in Internet Explorer.
Materialize CSS is another good development framework, based on the Material Design language of the Android operating system. It is a good fit for those who like Material Design, and the development team behind it is working hard to make it an awesome one. In addition, Materialize provides features that other frameworks do not, such as Slide Nav, Chips, Waves, and AutoComplete.
Here are the mentioned below pros and cons of Materialize CSS framework:
Pros:
– Materialize is very opinionated about how UI and UX design elements should behave and interact visually.
– Materialize is great for creating Material Design looks, which it offers out of the box.
– Good documentation
– Responsive
Cons:
– Materialize CSS does not support some old web browsers.
– CSS files in Materialize CSS are very large in size and heavy.
– Certain nesting of elements is not handled properly by Materialize CSS.
Pure CSS works really well for new projects that are still in the development phase. Its biggest advantages are its simplicity and the size of its CSS file, which is incredibly small compared with other CSS frameworks.
Also, normalize.css is integrated within the framework, so including Pure in your project build gives you all of its CSS resets for free.
Here are mentioned below pros and cons of Pure CSS development framework:
Pros:
– Lightweight; each module can simply be added as an extra stylesheet.
– The Pure CSS files are available via CDN.
– The documentation explains how the grid and other features work.
– Saves a lot of development time.
Cons:
– Only a limited number of designs and templates are available.
– Pure CSS development framework does not have a large development community for technical support.
– Pure is so minimal that you have to build most of the styling yourself.
– FontAwesome is not integrated by default.
Kube is a great CSS development framework for modern-day CSS developers, and it supports all modern web browsers. Its default styles and templates are good-looking and appealing. The framework is very light on actual styles and focuses on providing the important tools that help in the website development process.
Kube is also based on Flexbox, which provides lots of features and added layout functionality.
Kube is, however, somewhat heavier than some of the other frameworks here.
Here are the pros and cons of Kube CSS mentioned below:
Pros:
– Lightweight CSS files
– Focusses on website development by providing important tools
Cons:
– There is currently no sizeable Kube development community.
– Mass adoption has not yet arrived for this framework.
Conclusion:
In this post, we have discussed the top 5 front-end development frameworks for 2017. There are dozens more frameworks available on the internet, but many of them are not updated regularly and some are still only in beta.
We made this list so that you can select one from the top 5. Many HTML5 developers are still confused about which framework they should start working with, so we recommend starting with either Bootstrap or Materialize to begin developing complex websites or web applications.
It’s a CSS property probably everyone who writes CSS has used at some point. It’s pretty ubiquitous.
And it’s super complicated.
“But it’s just a number”, you say. “How can that be complicated?”
I too felt that way one time. And then I worked on implementing it for stylo.
Stylo is the project to integrate Servo’s styling system into Firefox. The styling system handles
parsing CSS, determining which rules apply to which elements, running this through the cascade,
and eventually computing and assigning styles to individual elements in the tree. This happens not
only on page load, but also whenever various kinds of events (including DOM manipulation) occur,
and is a nontrivial portion of pageload and interaction times.
Servo is in Rust, and makes use of Rust’s safe parallelism in many places, one of them being
styling. Stylo has the potential to bring these speedups into Firefox, along with the added safety
of the code being in a safer systems language.
Anyway, as far as the styling system is concerned, I believe that font-size is the most complex
property it has to handle. Some properties may be more complicated when it comes to layout or
rendering, but font-size is probably the most complex one in the department of styling.
I’m hoping this post can give an idea of how complex the Web can get, and also serve as documentation
for some of these complexities. I’ll also try to give an idea of how the styling system works throughout this post.
Alright. Let’s see what is so complex about font-size.
The basics
The syntax of the property is pretty straightforward. You can specify it as:
A length (12px, 15pt, 13em, 4in, 8rem)
A percentage (50%)
A compound of the above, via a calc (calc(12px + 4em + 20%))
An absolute keyword (medium, small, large, x-large, etc)
A relative keyword (larger, smaller)
The first three are common amongst quite a few length-related properties. Nothing abnormal in the syntax.
The next two are interesting. Essentially, the absolute keywords map to various pixel values, and match
the result of <font size=foo> (e.g. size=3 is the same as font-size: medium). The actual value they map to
is not straightforward, and I’ll get to that later in this post.
The relative keywords basically scale the size up or down. The mechanism of the scaling was also complex, however
this has changed. I’ll get to that too.
em and rem units
First up: em units. One of the things you can specify in any length-based CSS property is a value with an em or rem
unit.
5em means “5 times the font-size of the element this is applied to”. 5rem means “5 times the font-size of the root element”
The implications of this are that font-size needs to be computed before all the other properties (well, not quite, but we’ll get to that!)
so that it is available during that time.
You can also use em units within font-size itself. In this case, it computed relative to the font-size of the parent element, since
you can’t use the font-size of the element to compute itself.
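The rules above can be sketched as a pair of small functions. This is a simplification of what a real styling system does (function names are mine, not stylo’s): em resolves against the element’s own font-size, rem against the root’s, and font-size itself uses the parent’s size for em since its own value isn’t known yet.

```python
# Sketch of em/rem resolution, per the rules described above.
def resolve_length(value, unit, own_font_size, root_font_size):
    """Resolve a length in px for a non-font-size property."""
    if unit == "px":
        return value
    if unit == "em":
        return value * own_font_size   # relative to this element's font-size
    if unit == "rem":
        return value * root_font_size  # relative to the root's font-size
    raise ValueError(unit)

def compute_font_size(value, unit, parent_font_size, root_font_size):
    """font-size uses the *parent's* size for em: an element can't use its
    own font-size to compute itself."""
    if unit == "em":
        return value * parent_font_size
    return resolve_length(value, unit, parent_font_size, root_font_size)

# font-size: 2em inside a 16px parent (16px root):
size = compute_font_size(2, "em", parent_font_size=16, root_font_size=16)
assert size == 32
# height: 5em on that element then resolves against the 32px it computed to:
assert resolve_length(5, "em", own_font_size=size, root_font_size=16) == 160
```

This ordering is why font-size has to be computed before the other properties that might mention em.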
Minimum font size
Browsers let you set a “minimum” font size in their preferences, and text will not be scaled below it. It’s useful for those with
trouble seeing small text.
However, this doesn’t affect properties which depend on font-size via em units. So if you’re using a minimum font size,
<div style="font-size: 1px; height: 1em; background-color: red"> will have a very tiny height (which you’ll notice from the color),
but the text will be clamped to the minimum size.
What this effectively means is that you need to keep track of two separate computed font size values. There’s one value that
is used to actually determine the font size used for the text, and one value that is used whenever the style system needs to
know the font-size (e.g. to compute an em unit.)
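That bookkeeping is simple to sketch (the names and the 9px preference value below are illustrative): one clamped value drives the drawn text, one unclamped value feeds every em computation.

```python
# The "two computed font sizes" described above: one clamped value used to
# draw text, one unclamped value used by the style system for em units.
MIN_FONT_SIZE = 9  # a hypothetical user preference, in px

def compute_font_sizes(specified_px, min_px=MIN_FONT_SIZE):
    """Returns (size_used_for_text, size_used_for_style_computations)."""
    used = max(specified_px, min_px)  # text is never drawn below the minimum
    computed = specified_px           # em units still see the tiny value
    return used, computed

used, computed = compute_font_sizes(1)
assert used == 9      # the visible text is clamped up to the minimum
assert computed == 1  # height: 1em still resolves to 1px
```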
This gets slightly more complicated when ruby is involved. In ideographic scripts (usually, Han
and Han-based scripts like Kanji or Hanja) it’s sometimes useful to have the pronunciation
of each character above it in a phonetic script, for the aid of readers without proficiency in that
script, and this is known as “ruby” (“furigana” in Japanese). Because these scripts are ideographic,
it’s not uncommon for learners to know the pronunciation of a word but have no idea how to write it.
An example would be 日本, which is 日本 (“nihon”,
i.e. “Japan”) in Kanji with ruby にほん in the phonetic Hiragana script above it.
As you can probably see, the phonetic ruby text is in a smaller font size (usually 50% of the font
size of the main text1). The minimum font-size support respects this, and ensures that if the ruby
is supposed to be 50% of the size of the text, the minimum font size for the ruby is 50% of the
original minimum font size. This avoids clamped text from looking like 日本 (where both get set to
the same size), which is pretty ugly.
Text zoom
Firefox additionally lets you zoom text only when zooming. If you have trouble reading small things, it’s great to
be able to just blow up the text on the page without having the whole page get zoomed (which means you need to scroll
around a lot).
In this case, em units of other properties do get zoomed as well. After all, they’re supposed to be relative to the text’s font
size (and may have some relation to the text), so if that size has changed so should they.
(Of course, that argument could also apply to the min font size stuff. I don’t have an answer for why it doesn’t.)
This is actually pretty straightforward to implement. When computing absolute font sizes (including
keywords), zoom them if text zoom is on. For everything else continue as normal.
Text zoom is also disabled within <svg:text> elements, which leads to some trickiness here.
Interlude: How the style system works
Before I go ahead it’s probably worth giving a quick overview of how everything works.
The responsibility of a style system is to take in CSS code and a DOM tree, and assign computed styles to each element.
There’s a distinction between “specified” and “computed” here. “specified” styles are in the format
you specify in CSS, whereas computed styles are those that get attached to the elements, sent to
layout, and inherited from. A given specified style may compute to different values when applied to
different elements.
So while you can specify width: 5em, it will compute to something like width: 80px. Computed values are usually a
cleaned up form of the specified value.
The style system will first parse the CSS, producing a bunch of rules usually containing declarations (a declaration is like width: 20%; i.e. a property name and a specified value)
It then goes through the tree in top-down order (this is parallelized in Stylo), figuring out which declarations apply to each element
and in which order – some declarations have precedence over others. Then it will compute each relevant declaration against the element’s style (and parent style, among other bits of info),
and store this value in the element’s “computed style”.
There are a bunch of optimizations that Gecko and Servo do here to avoid duplicated work2. There’s a
bloom filter for quickly checking if deep descendent selectors apply to a subtree. There’s a “rule
tree” that helps cache effort from determining applicable declarations. Computed styles are
reference counted and shared very often (since the default state is to inherit from the parent or
from the default style).
But ultimately, this is the gist of what happens.
Keyword values
Alright, this is where it gets complicated.
Remember when I said font-size: medium was a thing that mapped to a value?
So what does it map to?
Well, it turns out, it depends on the font family. For the following HTML:
<span style="font: medium monospace">text</span>
<span style="font: medium sans-serif">text</span>
where the first one computes to a font-size of 13px, and the second one computes to a font-size of
16px. You can check this in the computed style pane of your devtools, or by using getComputedStyle().
I think the reason behind this is that monospace fonts tend to be wider, so the default font size (medium)
is scaled so that monospace text ends up with a similar width, and all the other keyword font sizes get shifted as well.
Firefox and Servo have a matrix that helps derive the values for all the absolute
font-size keywords based on the “base size” (i.e. the computed value of font-size: medium). Actually,
Firefox has three tables to support some legacy use cases like quirks mode (Servo has
yet to add support for these tables). We query other parts of the browser for what the “base size”
is based on the language and font family.
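The idea can be sketched like so. The ratios below are made up for illustration; the real tables are integer-valued, vary per keyword, and come in several flavors for quirks mode and friends:

```python
# Illustrative ratios from keyword to base size; not the real tables.
KEYWORD_RATIOS = {
    "xx-small": 0.6, "x-small": 0.75, "small": 0.89,
    "medium": 1.0, "large": 1.2, "x-large": 1.5, "xx-large": 2.0,
}

def keyword_size(keyword, base_size):
    """Derive an absolute keyword size from the (family, language)
    dependent base size, i.e. the computed value of font-size: medium."""
    return KEYWORD_RATIOS[keyword] * base_size
```

With a sans-serif base size of 16px, medium computes to 16px; with a monospace base size of 13px, the very same keyword computes to 13px.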
Wait, but what does the language have to do with this anyway? How does the language impact font-size?
It turns out that the base size depends on the font family and the language, and you can configure this.
Both Firefox and Chrome (using an extension) actually let you tweak which fonts get used on a per-language basis, as well as the default (base) font-size.
This is not as obscure as one might think. Default system fonts are often really ugly for non-Latin-using scripts. I have a separate font installed that produces better-looking Devanagari ligatures.
Similarly, some scripts are just more intricate than Latin. My default font size for Devanagari is
set to 18 instead of 16. I’ve started learning Mandarin and I’ve set that font size to 18 as well. Hanzi glyphs
can get pretty complicated and I still struggle to learn (and later recognize) them. A larger font size is great for this.
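Conceptually, the configurable lookup is just a table keyed on (language, family). In the sketch below, the Devanagari and Chinese values mirror the personal settings described above, and the rest are common defaults; the key names are made up:

```python
# Hypothetical per-(language group, generic family) base sizes.
BASE_SIZES = {
    ("western", "sans-serif"): 16.0,
    ("western", "monospace"): 13.0,
    ("devanagari", "sans-serif"): 18.0,  # personal setting from the text
    ("chinese", "sans-serif"): 18.0,     # likewise
}

def base_size(language_group, family):
    """Look up the base size (computed font-size: medium) for a
    language/family pair, defaulting to 16px."""
    return BASE_SIZES.get((language_group, family), 16.0)
```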
Anyway, this doesn’t complicate things too much. This does mean that the font family needs to be
computed before font-size, which already needs to be computed before most other properties. The
language, which can be set using a lang HTML attribute, is internally treated as a CSS property by
Firefox since it inherits, and it must be computed earlier as well.
Not too bad. So far.
Now here’s the kicker. This dependence on the language and family inherits.
Quick, what’s the font-size of the inner div?
<div style="font-size: medium; font-family: sans-serif;"> <!-- base size 16 -->
 font size is 16px
 <div style="font-family: monospace"> <!-- base size 13 -->
  font size is ??
 </div>
</div>
For a normal inherited CSS property, if the parent has a computed value of 16px,
and the child has no additional values specified, the child will inherit a value of 16px. Where the parent got that computed value from doesn’t matter.
Here, font-size “inherits” a value of 13px. You can see this in action in a codepen:
Basically, if the computed value originated from a keyword, whenever the font family or language
change, font-size is recomputed from the original keyword with the new font family and language.
The reason this exists is that otherwise the differing base sizes wouldn’t work at all! The default font size
is medium, so effectively the root element gets font-size: medium and all elements inherit from it. If you switch
to monospace or a different language somewhere in the document, you need the font-size recomputed.
But it doesn’t stop here. This even inherits through relative units (Not in IE).
<div style="font-size: medium; font-family: sans-serif;"> <!-- base size 16 -->
 font size is 16px
 <div style="font-size: 0.9em"> <!-- could also be font-size: 50% -->
  font size is 14.4px (16 * 0.9)
  <div style="font-family: monospace"> <!-- base size 13 -->
   font size is 11.7px! (13 * 0.9)
  </div>
 </div>
</div>
So we’re actually inheriting a font-size of 0.9*medium when we inherit from the second div, not 14.4px.
Another way of looking at it is that whenever the font family or language changes, you should recompute the font-size as if the language and family had always been that way up the tree.
Firefox code uses both of these strategies. The original Gecko style system handles this by actually
going back to the top of the tree and recalculating the font size as if the language/family were
different. I suspect this is inefficient, but the rule tree seems to be involved in making this slightly
more efficient.
Servo, on the other hand, stores some extra data on the side when computing styles, data which gets copied over to the child element. It basically
stores the equivalent of saying “Yes, this font was computed from a keyword. The keyword was medium, and after that we applied a factor of 0.9 to it.”
In both cases, this adds a wrinkle to every other font-size complexity, since the keyword information needs to be carefully preserved through each of them.
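Here is a rough sketch of the Servo-style bookkeeping, using the 16px/13px base sizes from earlier (the type and field names are mine, not Servo's, and only the medium keyword is handled):

```python
BASE = {"sans-serif": 16.0, "monospace": 13.0}  # base size of `medium`

class FontSize:
    """A computed font-size plus its keyword provenance, if any."""
    def __init__(self, px, keyword=None, factor=1.0):
        self.px, self.keyword, self.factor = px, keyword, factor

def inherit(parent, family):
    if parent.keyword is not None:
        # Recompute from the keyword against the child's base size.
        return FontSize(BASE[family] * parent.factor,
                        parent.keyword, parent.factor)
    return FontSize(parent.px)  # plain pixel value: inherit as-is

def apply_em(parent, ratio, family):
    """Specify a relative size (em/%) on the child."""
    child = inherit(parent, family)
    child.px *= ratio
    child.factor *= ratio  # accumulate the ratio for later recomputes
    return child
```

Walking the earlier example through this: the middle div computes 16 × 0.9 = 14.4px and remembers (medium, 0.9); when the inner div switches to monospace, inheritance recomputes 13 × 0.9 = 11.7px instead of blindly copying 14.4px.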
So I mentioned that font-size: larger and smaller scale the size, but didn’t mention by what fraction.
According to the spec, if the font-size currently matches the value of an absolute keyword size (medium/large/etc),
you should pick the value of the next/previous keyword sizes respectively.
If it is between two, find the same point between the next/previous two sizes.
This, of course, must play well with the weird inheritance of keyword font sizes mentioned before. In Gecko’s model this isn’t too hard,
since Gecko recalculates things anyway. In Servo’s model we’d have to store a sequence of applications of larger/smaller and relative
units, instead of storing just a relative unit.
Additionally, when computing this during text-zoom, you have to unzoom before looking it up in the table, and then rezoom.
Overall, this was a bunch of complexity for not much gain; it turns out only Gecko actually followed the spec here! All the other browser engines
used simple ratios.
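The two approaches can be sketched as follows; the ladder values and the 1.2 ratio are illustrative, not any engine's exact numbers:

```python
# Illustrative keyword sizes, xx-small .. xx-large, for a 16px base.
LADDER = [9.6, 12.0, 14.2, 16.0, 19.2, 24.0, 32.0]

def larger_spec(size):
    """Spec behaviour: step along the keyword ladder, keeping the same
    relative position when the size falls between two keyword sizes."""
    for i in range(len(LADDER) - 1):
        lo, hi = LADDER[i], LADDER[i + 1]
        if lo <= size < hi:
            t = (size - lo) / (hi - lo)
            nxt = LADDER[i + 2] if i + 2 < len(LADDER) else hi * 1.5
            return hi + t * (nxt - hi)
    return size * 1.2  # off the ladder: fall back to a plain ratio

def larger_ratio(size):
    """What the other engines ended up doing: a simple ratio."""
    return size * 1.2
```

For a size sitting exactly on a keyword (say 16px, i.e. medium), both approaches happen to agree here; the difference shows up for in-between sizes, where the spec interpolates between rungs instead of scaling uniformly.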
Firefox and Safari support MathML, a markup language for math. It doesn’t get used much on the Web these days, but it exists.
MathML has its own complexities when it comes to font-size. Specifically, scriptminsize, scriptlevel, and scriptsizemultiplier.
For example, in MathML, the text in the numerator or denominator of a fraction or the text of a superscript is 0.71 times the size of the text outside of it. This is because
the default scriptsizemultiplier for MathML elements is 0.71, and these specific elements all get a default scriptlevel of +1.
Basically, scriptlevel=+1 means “multiply the font size by scriptsizemultiplier”, and
scriptlevel=-1 is for dividing. This can be specified via a scriptlevel HTML attribute on an mstyle element. You can
similarly tweak the (inherited) multiplier via the scriptsizemultiplier HTML attribute, and the minimum size via scriptminsize.
So, for example:
<math>
 <msup>
  <mi>text</mi>
  <mn>small superscript</mn>
 </msup>
</math>
<br>
<math>
 text
 <mstyle scriptlevel=+1>
  small
  <mstyle scriptlevel=+1>
   smaller
   <mstyle scriptlevel=-1>
    small again
   </mstyle>
  </mstyle>
 </mstyle>
</math>
renders with the superscript and each nested mstyle progressively smaller (you will need Firefox to see the rendered version; Safari supports MathML too, but its support isn’t as good).
So this isn’t as bad. It’s as if scriptlevel is a weird em unit. No biggie, we know how to deal with those already.
Except you also have scriptminsize. This lets you set the minimum font size for changes caused by scriptlevel.
This means that scriptminsize will make sure scriptlevel never causes changes that make the font smaller than the min size,
but it will ignore cases where you deliberately specify an em unit or a pixel value.
There’s already a subtle bit of complexity introduced here: scriptlevel now becomes another thing
that tweaks how font-size inherits. Fortunately, Firefox/Servo internally handle scriptlevel (as well as scriptminsize and scriptsizemultiplier) as a CSS property, which means that we
can use the same framework we used for font-family and language here – compute the script
properties before font-size, and if scriptlevel is set, force-recalculate the font size even if
font-size itself was not set.
Interlude: early and late computed properties
In Servo the way we handle dependencies in properties is to have a set of “early” properties and a
set of “late” properties (which are allowed to depend on early properties). We iterate the
declarations twice, once looking for early properties, and once for late. However, now we have a
pretty intricate set of dependencies, where font-size must be calculated after language, font-family,
and the script properties, but before everything else that involves lengths. Additionally, font-family
has to be calculated after all the other early properties due to another font complexity I’m not covering here.
Unlike with the other “minimum font size”, when the font size has been clamped by scriptminsize,
an em unit used in any property will resolve against the clamped value, not the “as if nothing had
been clamped” value. So at first glance handling this seems straightforward: only
consider the script min size when scaling because of scriptlevel.
As always, it’s not that simple 😀:
<math>
 <mstyle scriptminsize="10px" scriptsizemultiplier="0.75" style="font-size: 20px">
  20px
  <mstyle scriptlevel="+1">
   15px
   <mstyle scriptlevel="+1">
    11.25px
    <mstyle scriptlevel="+1">
     would be 8.4375, but is clamped at 10px
     <mstyle scriptlevel="+1">
      would be 6.328125, but is clamped at 10px
      <mstyle scriptlevel="-1">
       This is not 10px/0.75=13.3, rather it is still clamped at 10px
       <mstyle scriptlevel="-1">
        This is not 10px/0.75=13.3, rather it is still clamped at 10px
        <mstyle scriptlevel="-1">
         This is 11.25px again
         <mstyle scriptlevel="-1">
          This is 15px again
         </mstyle>
        </mstyle>
       </mstyle>
      </mstyle>
     </mstyle>
    </mstyle>
   </mstyle>
  </mstyle>
 </mstyle>
</math>
Basically, if you increase the level a bunch of times after hitting the min size, decreasing it by one should not immediately
compute min size / multiplier. That would make things asymmetric; something with a net script level of +5 should
have the same size as something with a net script level of +6 -1, provided the multiplier hasn’t changed.
So what happens is that the script level is calculated against the font size as if scriptminsize had never applied,
and we only use that size if it is greater than the min size.
It’s not just a matter of keeping track of the script level at which clamping happened – the multiplier could change
in the process, and you need to keep track of that too. So this ends up creating yet another font-size value to inherit.
To recap, we are now at four different notions of font size being inherited:
The main font size used by styling
The “actual” font size, i.e. the main font size but clamped by the min size
(In Servo only) The “keyword” size, i.e. the size stored as a keyword and ratio, if it was derived from a keyword
The “script unconstrained” size; the font size as if scriptminsize never existed.
There’s another subtlety: if you were already below the scriptminsize (say, due to an explicitly specified small font-size), reducing the script level (to increase the font size) should not get clamped, since then you’d jump up to something too large.
This basically means you only apply scriptminsize when you are applying the script level to a value greater than the script min size.
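Putting these clamping rules together, here is a sketch (again, not Servo's actual code) of applying a scriptlevel change while tracking both the used size and the script-unconstrained size:

```python
def apply_script_level(size, unconstrained, delta,
                       multiplier=0.75, minsize=10.0):
    """Return (new used size, new unconstrained size) after a
    scriptlevel change of `delta`."""
    scale = multiplier ** delta
    new_unconstrained = unconstrained * scale
    if size < minsize:
        # Already below the min size (e.g. an explicitly small
        # font-size): don't clamp, or decreasing the level would
        # jump to something too large.
        return size * scale, new_unconstrained
    # Otherwise use the unconstrained size, clamped to the minimum.
    return max(new_unconstrained, minsize), new_unconstrained
```

Running the earlier MathML example through this gives 20 → 15 → 11.25 → 10 → 10 as the level increases, and a subsequent scriptlevel=-1 stays at 10px (the unconstrained size climbs back to 8.4375) rather than jumping to 13.3px.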
So there you have it. font-size is actually pretty complicated. A lot of the web platform has hidden complexities like this, and it’s always fun to encounter more of them.
(Perhaps less fun when I have to implement them 😂)
Thanks to mystor, mgattozzi, bstrie, and projektir for reviewing drafts of this post.
Most of us simply know it as the technology behind the highly popular cryptocurrency Bitcoin, but the reality is that blockchain’s usefulness isn’t just limited to digital currencies.
A few of its proponents include business icons like Bill Gates and Richard Branson, and the blockchain even has the big banks tripping over one another to be the first to apply its many uses to their own business.
But what really is the blockchain, and why does it have everyone from Wall Street to Silicon Valley so excited?
What Is The Blockchain?
Historically, people and businesses have relied on middlemen like banks and governments to ensure fair transactions of money or items of value. These intermediaries build trust in the transactional process through authentication and record-keeping.
Intermediaries became especially necessary with the dawn of the digital age, when digital transactions became the norm. Modern digital assets like stocks, money, and intellectual property are basically files, which are easy to reproduce.
This led to what is now known as the double-spending problem, wherein a specific unit of value is spent more than once. That problem long prevented the peer-to-peer transfer of digital assets, until the advent of cryptocurrencies and the blockchain.
But before we jump into the technical complexities surrounding blockchain, it’s important to get a little context.
Blockchain Vs. Bitcoin
The first mention of Bitcoin cropped up in 2008 in a white paper authored by an individual or group using the pseudonym Satoshi Nakamoto. The white paper laid out the details of a newly created peer-to-peer electronic cash system called Bitcoin, which allowed payments to be sent directly from one person to another without the need for an intermediary.
While Bitcoin as a cryptocurrency payment system was exciting, it was really the mechanics behind how the technology worked that revolutionized money as we now know it. After the release of the white paper, it quickly became apparent that the blockchain was the real winner.
While “Bitcoin” is sometimes used interchangeably with “blockchain,” blockchain technology has many additional applications beyond Bitcoin and digital currencies. Bitcoin is simply the first byproduct of blockchain technology; today it is only one of over 700 applications built on it.
Here’s a great quote that puts Bitcoin and blockchain into a better perspective.
“Blockchain is to Bitcoin, what the internet is to email. A big electronic system, on top of which you can build applications. Currency is just one.” – Sally Davies, FT Technology Reporter
How Does The Blockchain Really Work?
Now it’s time to get into the nuts and bolts of how blockchain networks work.
In short, the blockchain is composed of a decentralized database, or “digital ledger”, that keeps records of all digital transactions. While traditional databases used by banks, governments, and accountants have a central administrator, a distributed ledger contains a network of replicated databases synced to the internet and available to any person within the network.
Blockchain networks can be private, where only members with access can connect, similar to an intranet; they can also be publicly accessible to anyone in the world, like the internet.
In a blockchain, when a transaction occurs, the transaction is grouped alongside other transactions within a cryptographically protected block. Blocks are composed of transactions that have occurred within the past 10 minutes, which are then sent out to the entire network. This is where miners step in.
Crypto miners within any network possess high levels of computing power, which allows them to compete with one another to solve complex coded problems. The first crypto miner to solve the coded problem and validate the block’s authenticity receives a reward in the form of a piece of cryptocurrency. So, miners within the Bitcoin Blockchain network would receive partial amounts of Bitcoin as their reward.
After these transaction blocks have been validated, they are then timestamped and added to a chain in chronological order. With time newer blocks are couple with the older blocks, which creates a chain of blocks displaying every single transaction made in that blockchain’s history.
Blockchains continue to be updated so that each ledger in the network is the exact same, which allows each member of a blockchain to prove who owns what at any point in time.
The blockchain’s inherently decentralized, transparent, and cryptographic nature is what allows people to trust this digital peer to peer system, all while making the need for intermediaries practically obsolete.
Blockchains also come with significant security advantages. Coordinated cyber-attacks from hacker collectives routinely hit intermediaries like banks, but attacks of that kind are practically impossible to pull off against a blockchain. The reason is that a hacker who wanted to alter a specific block within a chain would need to change not only that block, but all the blocks linked to it, dating back through the entire history of that blockchain.
The hackers would also need to alter every copy of the ledger in the network simultaneously, and since there can be millions of copies in a given network, the odds of even an entire organization successfully hacking a blockchain are slim.
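That tamper-evidence comes from each block committing to the hash of its predecessor. A toy hash chain makes this concrete (a real blockchain adds proof-of-work, timestamps, signatures, and much more):

```python
import hashlib

def block_hash(prev_hash, transactions):
    """Hash a block's transactions together with the previous block's
    hash, chaining the blocks together."""
    data = prev_hash + "|" + "|".join(transactions)
    return hashlib.sha256(data.encode()).hexdigest()

def build_chain(blocks):
    chain, prev = [], "genesis"
    for txs in blocks:
        prev = block_hash(prev, txs)
        chain.append(prev)
    return chain

honest = build_chain([["a pays b 5"], ["b pays c 2"], ["c pays d 1"]])
tampered = build_chain([["a pays b 500"], ["b pays c 2"], ["c pays d 1"]])
# Changing one historical transaction changes that block's hash, and
# therefore every hash after it, so the other copies of the ledger in
# the network immediately disagree with the tampered one.
```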
So, there you have it, pretty much everything you need to know about how the cryptocurrency blockchain really works.
Will Blockchain Technology Affect the Global Economy?
Blockchain technology isn’t just changing the way we use the internet; it’s also revolutionizing the global economy. Its applications go beyond currencies and the transfer of money: everything from electronic voting and smart contracts to health records and proof of ownership for digital content is now possible. In short, the blockchain is poised to significantly disrupt hundreds of industries that rely on intermediaries.
A day after an internal email by a Google employee was leaked to the press, a combination of ideological intolerance and scientific illiteracy led Google to fire James Damore for “perpetuating gender stereotypes.” On the day he was fired, Quillette.com published several brief essays by academics on the science of sex differences, mostly vindicating his characterization of the relevant data. That night, hackers shut down the website, presumably to prevent readers from learning the truth: that there are average differences between men and women, that these differences are partly rooted in biology, and that these differences have predictable social consequences.
How did we get here? Why would such a carefully worded dissenting opinion earn someone so much scorn from the public, misunderstanding by the media, and a pink slip by the company he works for? How could an employee who expresses skepticism about a company’s policy, but doesn’t violate the company’s policy in any obvious way, be fired? And why would activists think it’s okay to use force to shut out dissenting voices on a website like Quillette?
Indoctrination Begins in College and Seeps into Corporate Culture
The problem begins in universities, where radical ideas are promoted and lauded as “progressive” and students are taught what to think instead of how to think.
Universities are populated by professors who are promoted based on increasingly specialized scholarship that is often inscrutable to outsiders. Few faculty are hired or recognized for their ability to bring insights from different fields together and help students see the big picture.
More importantly, while some universities nominally promote “critical thinking,” this phrase has come to mean the study of bizarre subjects like “critical theory” that use bombastic and abstruse language to criticize Western civilization. Thinkers like Plato and Aristotle, Newton and Darwin, are cast aside in favor of Foucault and Derrida, Lacan and Zizek. What most of us mean by “critical thinking” is that students should be taught how to challenge authority in a disciplined way by recognizing common biases. This includes, for example, understanding how statistics can be misused to fool us into accepting conclusions too readily, and becoming aware of how our political commitments can impede our ability to accept scientific evidence that suggests small but significant biological differences between sexes and races.
The point of a liberal education was supposed to be to bring familiarity with the ideals of the Enlightenment, the principles that guide scientific research, and the foundational texts that made the modern world. But this ideal is now considered quaint, and is increasingly rejected as an oppressive force rather than the foundation of a free and prosperous society.
Moreover, students are taught that political speech with which they disagree is “violence” that should be shut down at all costs. They avoid uncomfortable topics by retreating to “safe spaces” on campus and shout down speakers who do not toe the far left line. Too many administrators and faculty promote such behavior. Those who dare to disagree—like Allison Stanger and Bret Weinstein—are run off campus.
It is no surprise, then, that corporations are increasingly populated with young adults who do not know how to handle political views or scientific claims they have been taught are out of bounds of public discussion. When Google’s diversity officer replied to James Damore’s email, it was an incoherent affirmation of the company’s diversity policy, coupled with an accusation of sexism. It didn’t even attempt to cite reasons why the science Damore mentioned was wrong, or why his political views about diversity policy were misguided. It just asserted they were, and then used that assertion the next day as a pretext to fire him. This is what we get when university professors abuse their power and attempt to turn students into pawns in their political game, rather than autonomous agents with the capacity (but not yet ability) to think for themselves.
The Training of Journalists and Importance of Language
Journalists, and, by extension, journalism schools, are also to blame. A common route to writing for newspapers and blogs these days is to get an undergraduate degree in English or journalism, and then cover stories in politics that touch on economic controversies, developments in science, environmental issues like global warming, and international affairs like the Israeli/Palestinian conflict.
Journalists can’t possibly know even a small fraction of what they need to in order to cover these topics, so they depend on experts. Good journalists exhibit epistemic humility, and commit themselves to deferring to a range of different experts in a particular field—in this case, for example, evolutionary psychologists and neuroscientists who study the science of sex differences.
But this approach is increasingly unprofitable for journalists to the extent that there is an arms race to get a story out as quickly as possible in order to maximize clicks. Even when journalists are under less time pressure, the increasing consumer demand for articles that reflect their political preconceptions makes it difficult for responsible journalists to cash in on their talents by publishing a balanced account of a story.
So it is no surprise that the immediate reactions to “Google Gate” tended to use emotionally-charged words and assumptions that are inconsistent with the best available science on sex differences.
Some of the earliest headlines exhibited equal parts scientific ignorance and progressive bias in the use of language. When Gizmodo first published the email, the author omitted references contained in the original email and referred to it as an “anti-diversity screed” rather than an objection or an argument. Britain’s most popular newspaper, the Guardian, ran a popular story entitled “Google’s Sexist Memo Has Provided the Alt Right with a New Martyr.”
To describe a classical liberal who supports moderate efforts at diversity as a “martyr for the alt right” is to engage in guilt by association. And describing the author as a “sexist” simply because he believes in small average differences between the sexes contributes to a tendency to over-extend the term “sexist” so much that it drains the word of any moral bite. If believing in average differences between the sexes makes us sexists, then every rational person is a sexist. The best evidence is that there are small average differences in capacities, and large average differences in interests.
Group Differences: The Ultimate Taboo
When Steven Pinker published The Blank Slate: The Modern Denial of Human Nature in 2002, a common reaction to it was “come on, nobody actually believes the view Pinker is criticizing.” This was the reaction of, among others, Simon Blackburn, the eminent philosopher from UNC-Chapel Hill and Cambridge University.
It now appears that Pinker was not only describing reality, but writing a prophecy. If anything, the dogmatic commitment to the view that individual and group differences are purely the result of socialization or bias has increased over the past decade or so, despite the fact that evidence to the contrary is accumulating fast.
Universities typically begin their freshman orientation programs with bias training, and an assigned book or set of readings that emphasize the oppressive nature of Western civilization. Rather than celebrating Western achievements like religious toleration, constitutional democracy, and the scientific revolution, many faculty members and training programs focus on the wrongs committed by Europeans against other groups. Never mind that morally objectionable practices like slavery and colonialism were commonplace around the world and throughout history by nearly every society with powerful weapons and technology. Focus on recent sins in the West, and then use them to explain all achievement gaps.
Two concepts are especially prominent in these readings and bias training seminars: stereotype threat and epigenetics. Stereotype threat occurs when members of a group perform worse on a task after being presented with information that their group tends to do that task poorly. Epigenetics is the idea that a stressful environment can produce chemical changes that alter the expression of genes, which, proponents say, might account for why some groups perform worse than others. Yet the evidence that stereotype threat can explain achievement gaps is weak, and the use of epigenetics to explain group differences is even more tenuous.
At the bottom of these trends is a fundamental change in universities’ understanding of their own mission. The search for truth, wherever it may lead, has been replaced with a definite, inflexible worldview. Universities have abandoned their commitment to reason, evaluation of evidence, and freedom of conscience.