
Against Facebook


On the commodification of attention, the abuses of unpaid microwork, and strategies of resistance.


Alarm goes off, time to wake up. Snooze alarm, and then repeat: check email, check Facebook, check Twitter, check Snapchat, rinse; repeat again; no new posts? Check again. No second spared to compose a thought; dreams fade away. Day continues just the same way — rise, rinse, repeat, repeat. Can’t spare time to sit and relax, can’t let your mind wander for five minutes, no unstructured thought or daydream; just repeat. Every moment captures value — must increase engagement! must repeat! — no time for grander narratives, no time to solve bigger problems. Atomize attention into smaller slices of microwork, then catch up, read, listen, swipe right, repeat, like, pull-to-refresh, scroll to infinity, and then again. No matter if you’re pulling espresso shots, serving tables, or playing with computers all day: they all end the same way: Netflix & chill, the guardian of sleep, TV, our collective lullaby. Wake up with screens, go to bed with screens. We no longer shit alone. The death of idleness.

This essay explores the ways Facebook transforms our attention into a product, and how that transformation changes us. It then proposes a social media strike as a concrete strategy to reclaim our attention, and finally lists many reasons we should all quit Facebook.

The Commodification of Microwork

Social media companies like Facebook, Google, Twitter and Snapchat have created a marketplace for our attention, where we become the product sold to advertisers, who subtly change our behavior so that we buy products and services for their benefit. This kind of persuasion is most effective when the platforms command our attention at a heightened level of distraction. We create the content that commands the attention of our peers; that attention is sold as advertising space; the surrounding content is measured for engagement; and the results are analyzed to optimize the next round of ad placement.

This process is the commodification of microwork— the seemingly small tasks that we are compelled to do by force of habit, tasks that aren’t in our own interests but in the interests of the platforms and advertisers using our attention and time to pad their bottom lines. Also known as “heteromation” (Ekbia, Nardi, 2014), microwork includes reading timelines, posting photos & updates, liking, retweeting, and generally dedicating our time, attention, and emotions to these platforms—each tiny action its own form of labor, given over freely to the advertisers.

“The new media of surveillance capitalism solicit social behaviors, monitor those behaviors, map social interactions, and resell what they learn to advertisers and others.” (Turner, 2017).

Platforms like Facebook are attempting to create systems for the generation of distraction. Crary describes the conditions such systems produce:

“conditions that individuate, immobilize, and separate subjects, even within a world in which mobility and circulation are ubiquitous. In this way attention becomes key to the operation of noncoercive forms of power” (Crary, 2001, p. 74)

Over time, these systems continually push our attention and distraction to new limits and thresholds. YouTube and Netflix discover that auto-playing the next related video dramatically increases views; Twitter invents the “pull-to-refresh” gesture, which leverages variable rewards to trigger addictive behavior and increase engagement with the timeline; Facebook invents the “red dot” notification that keeps people checking their phones for the next new thing. These mechanisms are designed to create a “supernormal stimulus”: a stimulus that produces a stronger-than-natural response. We can even internalize the supernormal stimulus — an example is the “phantom vibrate” we sometimes feel in our pocket when no vibration has occurred. In the case of social media, the supernormal stimulus exploits our response to novelty in order to elicit behavior that works in the interests of the social media provider.

This eventually leads to a crisis of attentiveness, in which the system must sustain an ever-increasing level of distraction. To continually distract us, the visual landscape must constantly change, requiring us to reorient our attention until the shift from one thing to another becomes the natural state (e.g. the Instagram “Explore” tab). Over time, we begin to value our attention, while the platforms struggle to get more and more of what they previously got for free. This is not sustainable: newer products must continually revolutionize the means of distraction or we will realize how distracted we really are. With the loss of distraction, we can more easily achieve self-actualization.

This process of commodification has turned us all into tastemakers, reviewers, likers, retweeters and brand ambassadors. The platform takes our authentic friendships, commodifies and reifies them, and then sells them back to us as an “image of friendship”, but one bankrupt of any genuine social value. Over time, these platforms transform us all into unpaid advertising agencies. We promote goods, services, lifestyles and desires to our friends, weaponizing images to generate feelings of jealousy and FOMO amongst our peers in the idle moments when they feel most bored. These idle moments are when we are most vulnerable, and thus most psychologically primed to accept the supernormal stimulus. The platform capitalizes on this vulnerability, and over time redefines what we once considered “real” friendship into a relationship with the platform itself, mediated by its features and “images of friendship”: Liked and Retweeted posts, Snapchat streaks, Follow requests, posted text and images, even our emotions reduced to a series of emoji (Smith, 2016).

In this way, the platform is able to monetize our friendships, tastes, opinions, and even our emotions. Our internal thoughts and experiences become commodifiable assets, measured as engagements to be analyzed, A/B tested, optimized and charted, then touted by executives in PowerPoint presentations at board meetings. Our experiences are packaged and sold, and we are not paid a dime.

Instead of blindly activating our social media habit for that quick dopamine fix, consider who actually benefits. Are you performing a microwork task for Facebook or Snapchat? If so, then why aren’t you being paid?

What They Say vs What They Do

The Society of the Facebook

Newspapers, cable news, and social media platforms are trying to turn us all into media addicts. These businesses rely on persuading us to consume more and more (keyword: “increase engagement”) in order to cultivate an “interest in current affairs” (e.g. the New York Times) or to “build meaningful and connected relationships with our community” (e.g. Facebook).

The reality of the media, however, is driven by a simple business rule: Sell what sells best, whether it’s clickbait, memes, curated lifestyles, celebrity gossip, salacious headlines, or freak events. The media tends to focus on rare incidents that have no actual influence on our daily lives, and the goal of these platforms is to keep us addicted. Facebook’s 2017 Annual Report makes this clear:

The increase in the ads delivered was driven by an increase in users and their engagement and an increase in the number and frequency of ads displayed on News Feed, partially offset by increasing user engagement with video content and other product changes. (Facebook, 2017).

We think we’re immune to the persuasiveness of advertising, but remember: advertising is a $600 billion industry, and it is the financial basis of the tech sector. These companies built their massive profits on the back of advertising campaigns.

Facebook’s (and Google’s, etc.) real agenda is to display and sell ads — not to help people “build meaningful and connected relationships”. At no point was this more clear than during Mark Zuckerberg’s testimony before Congress. When Senator Hatch asked him “How do you sustain a business model in which people don’t pay for your service?”, his response was as straightforward as it could’ve been: “Senator, we run ads”. (Oddshot Compilations, 2018).

“Building Relationships” is the rationale used to serve the advertisements, but the advertisements themselves are the real content of these platforms. The business arrangement of the platform is a simple one: the ads are the content, the advertisers are the clients, the users doing microwork tasks are the workers, and we are the product.

As a result, we end up building relationships to the platform—not to each other.

Through the investment of our attention, we allow the commodities and ideologies of the platform to bombard our unconscious and subtly shape our behavior. The process of “exchanging time for image,” Beller writes, “provides the counterflow to the moving image and as advertising revenues would indicate, is itself productive of value.” (Beller, 2006, p. 79)

“Facebook’s nearly one billion users have become the largest unpaid workforce in history” (Laney, 2012).

The effect the media intends to create is media-driven neuroticism: neomania, a love of change and of new things for its own sake. Neomania, together with the media, has a negative effect on our moods, which isn’t surprising considering most of what makes the daily news is negative. This leaves us feeling powerless, as if the world is falling apart. So we turn to social media to witness the (seemingly) beautiful lives of celebrities, and the romantic and exciting lives of our friends and family. But this drives the compulsion to compare our own lives with those of the people we see on social media, creating feelings of inadequacy, loneliness, and jealousy. These feelings are misguided, however, since the lives we see in the media and on social platforms are a tightly edited and curated spectacle of people presenting an image of how they want to appear, not of how they actually are.

Practical Defense Against Unpaid Microwork

"News Feed", by Beeple.

Today it’s popular to hear about a media detox or a social media diet. The concept is often framed as a metaphor for eating and weight loss. We can extend this metaphor to the microwork tasks we find ourselves doing every day. Are the habits and media we consume daily like healthy foods that build a healthy mind and body? Or will they ultimately poison us? To live a healthy life, we should moderate our media consumption the same way we moderate what we eat.

The toxicity of the media can be understood as a signal-to-noise problem: the more media we consume, the more noise we get (rather than the valuable part, called the signal). If we consume the news on a yearly basis (in the form of books), we can assume about half of what we consume is signal, and the rest noise (randomness). If we increase our frequency to a daily intake, the composition shifts to about 95% noise and only 5% signal. If we increase our intake to an hourly frequency (as those who follow the stock market or heavy social media users do), the composition reaches 99.5% noise and 0.5% signal: roughly 200 times more noise than signal, toxic noise that misdirects, distracts, and degrades our mood.
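
The arithmetic behind these ratios is easy to check. A minimal sketch in Python, using the essay’s illustrative signal fractions (the percentages are the essay’s assumptions, not measured data):

  # Noise-to-signal ratios implied by the essay's figures.
  # The signal fractions below are the essay's assumptions, not measured data.
  signal_fraction = {
      "yearly (books)": 0.50,   # ~half signal
      "daily (news)": 0.05,     # ~5% signal
      "hourly (feeds)": 0.005,  # ~0.5% signal
  }

  for frequency, signal in signal_fraction.items():
      noise = 1.0 - signal
      # hourly: 0.995 / 0.005 = 199, i.e. roughly 200x more noise than signal
      print(f"{frequency:>15}  noise/signal = {noise / signal:.0f}x")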

Distraction could be described as a phenomenon in which you connect, inadvertently or absentmindedly, to more things than you intended. The noise drowns out the signal. (Solnit, 2018).

A practical approach to a healthy relationship with the media we consume is to create distance between ourselves and the media that surrounds us daily. Distance creates an opportunity to gain new perspectives on the media (and on life in general). This distance reduces the influence of these platforms and creates a space for our ideas and imagination to flourish.

Idleness and imagination are essential to our wellbeing: they are the wellspring of mental clarity and the backstop of memory. Take a few minutes each day to cultivate idleness and some mental space. Step away from the hourly, daily, and weekly media cycle, and read books instead. Books benefit from the perspective of time, which yields more settled and established facts and greater depth of analysis. If our goal is a wider perspective and a balanced understanding of the major forces at work in the world, our best bet is reading good books.

This essay sets us the task of avoiding contact with the mundane, cheap, and ephemeral media that surrounds us daily; of refusing to have our attention distracted, redirected, exploited, and abused against our own interests. We should ignore the things that bring no happiness or value to our lives. Instead, we should focus our attention on our imagination, quality thought, friends and family, and idleness: on things that matter. We should explore effective ways to reclaim clarity of thought and autonomy of attention.

The task is to change the very way we attend to the world around us. Henry David Thoreau described a similar mental recalibration in Life Without Principle:

If we have thus desecrated ourselves, — as who has not? — the remedy will be by wariness and devotion to reconsecrate ourselves, and make once more a fane of the mind. We should treat our minds, that is, ourselves, as innocent and ingenuous children, whose guardians we are, and be careful what objects and what subjects we thrust on their attention. Read not the Times. Read the Eternities. (Thoreau, 1863).

This essay proposes survivorship as the criterion for determining which media to ignore as mundane, cheap, and ephemeral. Survivorship sets the focus on knowledge that has withstood the test of time; Nassim Nicholas Taleb makes the following recommendation in Antifragile:

“[read] as little as feasible from the last twenty years, except history books that are not about the last fifty years. […what most people do] is read timely material that becomes instantly obsolete.” (Taleb, 2012)

Survivorship is a simple heuristic with which to judge media, be it books, news, social media, television, academic journal articles or even video games. A clickbait article can go viral and get millions of views the day it’s published, but by the survivorship criterion its newness means it’s likely to be obsolete the next day. Rather than trying to judge for yourself whether media is worth your attention, try letting survivorship do the work for you.

Survivorship has a few knock-on effects that are worth mentioning. Media tends to be discounted as it ages: video games go on sale a few months, and especially a few years, after they’ve been released; books and movies are cheaper well after their release; newspapers are free the day after publication; and so on. Beyond saving our attention from instantly obsolete media, simply waiting long enough also saves us money. With time also comes more in-depth analysis from the surrounding community: reviews and deep interpretations of the rich meaning behind film and video games need that additional time to blossom. This cultural analysis isn’t available to those following the cult-of-the-new.

Now that we understand how the territory of our attention has been colonized, we can begin to resist. Our challenge is to put the following strategies into practice until they form new habits that encourage idleness and mental spaciousness. We have the tools to reclaim our time and imaginations; we must take action!

The inadvertent laboring towards the propagation of the spectacle is another example of the ways that the spectacle takes advantage of its subordination of the masses. It is a process to which they have no control other than to renounce image culture completely which in a hyperreal world is almost inconceivable. (Berthelot, 2013)

We propose a collective media consumption strike: rather than allowing our attention to be driven by these platforms, we must direct our attention to the things that matter to us.

The rules can be simple:

  • No TV/Netflix/Youtube
  • No radio
  • No podcasts
  • No social media, delete your Facebook account
  • No messaging/chat
  • No idle web browsing
  • No news aggregation sites
  • No gaming
  • No using a computer if it is not directly related to creating or resisting (not consuming)

Don’t be a fundamentalist about the rules; rules are meant to be broken. If it’s art, the rules are looser; if it’s corporate media, stricter. Cinema, literature, sci-fi, comic books, indie games: as long as these works are not connected with our professional, interpersonal, or political responsibilities, use them in moderation. In general, consume media with more moderation than usual. The goal is to break your habits.

Many people work with computers for their job, and most aren’t privileged enough to be able to completely disconnect from work. Work-related email, messaging, and work-related browsing are out of scope for the strike by necessity—but do try to prevent any non-work-related web browsing or media consumption. Don’t use work as an excuse to view social media or browse the web. Academic journal articles and the like will be hard to avoid, but ask yourself whether they would pass the survivorship test after a few years.

Enable Do Not Disturb on your phone to establish designated mental space for yourself, your friends, and your family. Most smartphones have rules that can automatically put the device into Do Not Disturb mode at set times: in Android and iOS, go to Settings and search for the Do Not Disturb preferences. Do Not Disturb is also available in macOS.

Remove anything that provides variable information rewards: features in apps and games designed to give a random, pleasant surprise, and so to create addiction. Slot machines are designed to do this; social media platforms like Facebook and Twitter, and video games like Candy Crush, use mechanics like “loot crates” to achieve the same result. Email can operate similarly. Turn off the red notification dot on all your apps.

Remove Facebook, Snapchat, Twitter, Slack, Instagram, YouTube and any other social media apps from your devices. This makes these media harder to reach, since they’re no longer at your fingertips. Don’t cheat and use the web interface. Disable notifications for any apps you keep, and turn off all lock-screen notifications; this also improves privacy. Remove the other chat apps from your smartphone: Snapchat, Facebook Messenger, WhatsApp, Mastodon, Viber. If our friends need to reach us, they can text or call. We should be the masters of our own time; our network of friends doesn’t control our schedule, and our inbox isn’t the world’s way to add items to our TODO list.

Remove any Bookmarks or Recently Viewed sites from your browser, to keep one-click-away sites from distracting you. Software like SelfControl will block habitual sites for you; Reddit, BuzzFeed, Upworthy, and Medium are great sites to block.
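
For readers comfortable with a little scripting, the same blocking can be done with the operating system’s hosts file. Here is a minimal sketch in Python, assuming a Unix-style hosts path and an illustrative site list (run with administrator rights; tools like SelfControl do this more safely):

  # A minimal sketch of hosts-file blocking, the same idea behind SelfControl.
  # The HOSTS path and BLOCKED list are illustrative assumptions.
  HOSTS = "/etc/hosts"  # on Windows: C:\Windows\System32\drivers\etc\hosts
  BLOCKED = ["reddit.com", "www.reddit.com", "buzzfeed.com", "medium.com"]

  def block_sites():
      with open(HOSTS, "a") as f:
          f.write("\n# --- distraction blocklist ---\n")
          for site in BLOCKED:
              # Point each domain at the local machine so requests fail fast.
              f.write(f"127.0.0.1\t{site}\n")

  if __name__ == "__main__":
      block_sites()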

Try to limit checking email to once a day: pick a set time, and stick to it. Do the same for messaging. Set an auto-responder if that helps reduce anxiety.

The idea behind this practice is to stop the urge to immediately unlock our phone whenever we have idle time. Our idle time should be owned by us.

Conclusion

The net effect of these platforms is to alienate us from our very lives. Instead of focusing on what is valuable and good, we tend to focus on what we lack, or on what we’re missing out on. If we divest ourselves of this tendency and ignore the rare incidents and ephemeral content that have no actual relevance to our daily lives, the net effect is beneficial to mind and spirit. The resulting autonomy of attention allows us to connect to the people and environments around us, and to reclaim our very imaginations.

0xADADA is a software developer working for AI and web startups in Boston, with degrees in Computer Science and Cognitive Psychology from Northeastern University. 0xADADA’s Facebook account was created in 2004 and deleted in 2015.

Appendix: Reasons to Quit Facebook

The ideological rationale that keeps platforms like Facebook profiting from the commodification of our attention and the collection of our private data lacks common courtesy at best, and is psychopathic at worst. Here we present our Airing of Grievances: a listicle of the top reasons to quit Facebook.

Facebook outsources data exploitation for political manipulation to companies like Cambridge Analytica. (Cadwalladr, Graham-Harrison, 2018).

Facebook has democratized surveillance: we have normalized reporting the intimate details of our friends’ and family’s lives (e.g. “X got engaged to Y”, “Tagged my friend A in this photo, at location X”, “B works at company C”).

Without privacy, people resort to self-censorship, thereby removing any possibility of political action or critique from their lives and becoming normalized to political impotence. (Assange, Appelbaum, Müller-Maguhn, Zimmermann, 2012).

Founding president Sean Parker and former vice-president of user growth Chamath Palihapitiya have both objected to Facebook’s use of dopamine-driven feedback loops to increase addictive behavior. (Hern, 2018).

Former Chief Information Security Officer Alex Stamos resigned from Facebook over an internal disagreement about how much the company should publicly share about how nation-states used the platform in the run-up to the 2016 elections. (Perlroth, Frenkel, Shane, 2018).

WhatsApp co-founder Brian Acton (whose company was acquired by Facebook) has said people should #DeleteFacebook. (Newton, 2018).

WhatsApp co-founder Jan Koum is planning to leave the company after clashing with its parent, Facebook, over the popular messaging service’s strategy and Facebook’s attempts to use its personal data and weaken its encryption. (Dwoskin, 2018).

Facebook correlates data from loyalty program providers (e.g. Walgreens cards from Nielsen-Catalina Solutions) who have access to brick-and-mortar purchase history with individual Facebook accounts. Companies like Johnson & Johnson can buy this data from the broker and use Facebook tools to target individual users for ads promoting products they’ve previously purchased. (Stern, 2018).

Facebook has been fined in the EU for breaking privacy laws through its use of cookies and social plugins. (Lomas, 2018).

Facebook’s corporate policy is based on growth and engagement at any cost. (Malik, 2018).

Facebook’s Protect VPN product (Onavo) collects users’ mobile data traffic and sends it back to Facebook. (Perez, 2018).

Facebook hired a full-time pollster to monitor Mark Zuckerberg’s approval ratings and develop strategies to change public perception. (Newton, 2018).

Facebook’s network “is large enough and deep enough to create a global census that can ‘see’ nearly everyone on the planet, even if they don’t have a Facebook account,” says John Robb, a former counter-terrorism operative in US Special Operations Command. This, he goes on to say, will “enable real-time tracking on nearly everyone on the planet using smartphone GPS data and ancillary information”. (Ahmed, 2017).

“Facebook’s business is to simulate you and to own and control your simulation, thereby owning and controlling you.” (Balkan, 2017).

Facebook has experimented with removing popular news outlets from the Feed in poor countries including Sri Lanka, Guatemala, Bolivia, Cambodia, Serbia and Slovakia. (Hern, 2017).

Former Facebook platform team Operations Manager Sandy Parakilas claims the company prioritized data collection of its users over protecting them from abuse. (Parakilas, 2017).

Facebook Likes can be leveraged to reliably predict intelligence, personality traits and politics. (Hess, 2017).

Facebook uses messages and contact details of our friends and other Facebook users to build a shadow profile of us that makes it easier for Facebook to more completely map all our social connections. (Hill, 2017).

Facebook still knows what you typed before [and after] you hit delete. (Boykis, 2017).

Facebook will tag our face in photos uploaded by any user. (Boykis, 2017).

Facebook grew to account for 45% of all referral traffic between 2014 and 2017. This gives it enormous influence over what people see on the web, further centralizing the web upon Facebook. (Staltz, 2017).

Facebook is filing patents to detect emotion and use it to influence users’ behavior. (CB Insights, 2017).

Facebook uses gamification to incentivize us to keep checking how many Likes our posts have gotten: a narcissism reinforcement machine. (Dillet, 2017).

Facebook uses nostalgia, birthdays, and sentimentality to manipulate us to increase engagement. (Frost, 2017).

Facebook’s Like buttons (seen on almost every site on the web) aren’t just there to make it easy to post a page to Facebook; they also provide a hook for Facebook to track your visit to that site, and thus collect your entire web browsing history. (Satyal, 2017).

Facebook revealed the identities of its own content censors to suspected terrorists. (Solon, 2017).

When Facebook has trouble acquiring users in specific markets, it simply buys the companies that dominate those markets (e.g. Instagram, WhatsApp and Oculus). (Srnicek, 2017).

Facebook owns a patent to use our device’s camera to gauge our emotional state from our facial expression, the better to target us with content and advertising. (Dowling, 2017).

Facebook prevents search indexing, so content posted on Facebook is only discoverable within Facebook. (Gruber, 2017).

Facebook fragments the public debate into filter-bubbles, and users segmented into one bubble will never see the news and information in another bubble, separating society along ideological lines. (Lanchester, 2017).

Facebook can identify when teens feel “insecure”, “worthless” and “need a confidence boost” in order to keep them hooked. (Lewis, Machkovech, 2017).

The data we give Facebook is used to calculate our ethnic affinity, sexual orientation, political affiliation, social class, travel schedule and much more. (Miller, 2017).

Facebook buys personal data from various data brokers, and correlates it to Facebook profile data to build aggregated profiles that span multiple sources. (Angwin, 2016).

Facebook’s News team suppressed conservative news items. (Nunez, 2016).

Facebook prevented News Team curators from listing Facebook on their resume in order to make the organization seem like it was unbiased and AI-driven. (Nunez, 2016).

Facebook can use your name and photo to endorse products and services to your social network without your knowledge. (Tucker, 2016).

Facebook is the television of the web, letting us passively scroll through content that we’d probably like, based on our habits and things we’ve already Liked, putting us in comfort bubbles that are more isolating than physical walls. (Derakhshan, 2016).

Facebook creates an illusion of choice, but by shaping the menus we pick from, it hijacks the way we perceive our choices and replaces them with new ones that aren’t in our best interests but serve the interests of Facebook. (Harris, 2016).

Facebook Likes, status updates, and the pages we visit are “more reliable” in predicting mental illness. (Reynolds, 2016).

Facebook uses intermittent variable rewards, the technique slot machines use to maximize addictiveness, linking an action we take (pull-to-refresh) to a random reward (e.g. new posts!). (Harris, 2016).

Facebook convinces us that we’re missing out on something important, when in reality we’re not. (Harris, 2016).

Facebook abuses our need for social approval, validation, and the need to belong in order to increase engagement. (Harris, 2016).

Facebook abuses our need to reciprocate the social gestures of others. (E.g. the need to Friend someone back who has Friended you). (Harris, 2016).

Facebook abuses our attention with immediate interruptions because studies have shown it increases engagement. (Harris, 2016).

Facebook abuses our intentions by hijacking our tasks with the needs of the platform. For example, when you want to look up a Facebook event happening tonight, the app won’t let you reach the event without first redirecting you through the News Feed. (Harris, 2016).

Facebook makes it easy for us to hand-over self-incriminating data that can be used against us by law-enforcement. (Clark, 2016).

Facebook blocked the account of activist Shaun King after he posted a racist email that was sent to him. (Stallman, 2016).

Facebook enforces a real-name policy, allowing corporations and nation-states to connect users’ accounts with their real identities. This is dangerous for marginalized people, making them vulnerable to blackmail should their real identities be exposed. The real-name policy forces people to have a single identity, when in reality people have flexible identities that change with social context. Using one’s real name inhibits experimenting with alternative identities, limiting personal growth to normative concepts of identity. (Stallman, 2016).

Facebook has censored posts for Israel, Russia, China, Turkey, the UK, and routinely suppresses content for political reasons using algorithmic promotion and depromotion. (Stallman, 2016).

Facebook has devalued hyperlinking to external sites by more strongly promoting text and images hosted directly on Facebook. Content within Facebook is invisible from the rest of the web. (Derakhshan, 2015).

On the web, hyperlinks are freely swapped to enable the cross-pollination of information and a diversity of decentralized ideas. On Facebook, each post exists unto itself, often accessible only within Facebook amongst one’s “Friends of Friends”: “instead of seeing [hyperlinks] as a way to make that text richer, you’re encouraged to post one single hyperlink and expose it to a quasi-democratic process of liking and plussing and hearting: Adding several links to a piece of text is not allowed. Hyperlinks are objectivized, isolated, stripped of their powers”. (Derakhshan, 2015).

Facebook hurts the power of the website: “the Stream means you don’t need to open so many websites any more. You don’t need numerous tabs. You don’t even need a web browser. You open Twitter or Facebook on your smartphone and dive deep in. The mountain has come to you. Algorithms have picked everything for you. According to what you or your friends have read or seen before, they predict what you might like to see.” (Derakhshan, 2015).

Facebook (and even more so, Instagram) is the cul-de-sac of the internet. It’s where content can no longer be enriched with annotations external to itself; where conversations wither and content goes to stare inward at itself.

Facebook analyzes the contents of messages sent between users on the platform to better target advertisements. (Virani, 2015).

Facebook uses our friends to gather additional information about us. Tagging friends in photos and answering questions about a friend’s marital status are ways we’re tricked into snitching on our friends. (Virani, 2015).

Facebook encourages us to present normative images of our lives, which results in alienation from the way we actually feel. (Krause, 2015).

Increased use of Facebook is linked to depression (Wald, 2015).

Facebook’s internet.org project was touted as providing internet-connected devices and networks in India, but created a Facebook-only view of the internet. (Burrington, 2015).

Facebook abuses our innate tendencies to track our progress and assess our self-worth by comparing ourselves to other people. (Musser, 2015).

Facebook photo data is used by the Nashville company Facedeals to identify shoppers in stores using the stores’ own security cameras and facial recognition software. These profiles are then used to increase purchase behavior with personalized promotions and deals. (Dormehl, 2014).

“The problem with the web and its associated technologies is that it has made it so easy to share information about ourselves that doing so begins to [feel] like an obligation, a sentence, a sisyphean curse; another day, another photo of lunch to post.” (Beato, 2012).

Facebook manipulates our emotions with experiments on the News Feed. (Booth, 2014).

Facebook owns a patent for determining our location by identifying objects in our photos and videos based on neural networks of nearby images. (Facebook, 2014).

Facebook Payments tracks what you buy, along with financial information like bank account and credit card numbers. Facebook has already started sharing data with Mastercard so it can drive online ad sales and determine creditworthiness from platform data. (Head, 2014).

Facebook provided data to the NSA as part of the PRISM program. (Greenwald, 2013).

Facebook enables a surveillance apparatus in which our every action can be monitored, and since everyone technically violates some obscure law some of the time, punishment becomes selective and political. (Marlinspike, 2013).

Facebook owns a patent that tracks and determines the types of physical activities of a user based on movement of their device, including walking, running, cycling, driving, skiing, etc. (Facebook, 2013).

Facebook owns a patent for determining our location using non-GPS data, including nearby NFC, RFID, wifi and bluetooth signals, and events in your calendar like restaurant reservations or concerts. (Facebook, March 2013).

Facebook sells profile data to credit card companies and insurance providers so they can use platform data as indicators for credit and insurance risk. (Hawley, 2012).

Facebook makes it very difficult to quit, using social reciprocity and UX design dark patterns. (Brown, 2010).

Facebook’s CEO Mark Zuckerberg hacked into the private email accounts of Harvard Crimson editors. (Carlson, 2010).

You should delete your Facebook account, but please share this essay before you do 😉!

Acknowledgements

I thank the following friends for their feedback: Alex Grabau, Thom Dunn, Stephen & qtychr.

References

Angwin, Julia. Mattu, Surya. Parris Jr, Terry. (December 27, 2016). Facebook Doesn’t Tell Users Everything It Really Knows About Them. (Retrieved April 21, 2018).

Ahmed, Nafeez. (December 29, 2017). Facebook will become more powerful than the NSA in less than 10 years — unless we stop it. (Retrieved April 10, 2018).

Allen, Tom. (2016). How my Location Independent Lifestyle Works. (Retrieved on April 19, 2018)

Allsop, John. (February 17, 2017). Not My Digital Detox. (Retrieved April 19, 2018)

Ashkenas, Jeremy. (April 4, 2018). “You know, I really hate to keep beating a downed zuckerberg, but to the extent that expensive patents indicate corporate intent and direction —Come along for a ride, and let’s browse a few of Facebook’s recent U.S.P.T.O. patent applications…”. Twitter. https://mobile.twitter.com/jashkenas/status/981672970098589696 (Retrieved on April 19, 2018)

Assange, Julian., Appelbaum, Jacob., Müller-Maguhn, Andy., Zimmermann, Jérémie. (2012). Cypherpunks: Freedom and the Future of the Internet. OR Books. Print.

Balkan, Aral. (February 18, 2017). Encouraging individual sovereignty and a healthy commons. (Retrieved on January 13, 2018).

Beato, G. (March 2012). Disposable Hip. (Retrieved April 27, 2018).

Berthelot, Martin R. (2013). Spectacle and Resistance in the Modern and Postmodern Eras. (Retrieved on April 18, 2018)

Beller, Jonathan. (2006). The cinematic mode of production: attention economy and the society of the spectacle. Hanover, N.H. Dartmouth College Press, University Press of New England. Print.

Booth, Robert. (January 29, 2014). Facebook Reveals News Feed Experiment to Control Emotions. (Retrieved April 19, 2018)

Boykis, Vicki. (February 1, 2017). What should you think about when using Facebook?. (Retrieved on January 13, 2018).

Brown, Andrew. (May 14, 2010). Facebook is not your friend. (Retrieved on June 5, 2016).

Burrington, Ingrid. (December 4, 2015). A Journey Into the Heart of Facebook. (Retrieved on January 3, 2016).

Cadwalladr, Carole, Graham-Harrison, Emma. (March 17, 2018). How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool. (Retrieved on April 10, 2018).

Cadwalladr, Carole, Graham-Harrison, Emma. (March 17, 2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. (Retrieved on April 10, 2018).

Carlson, Nicholas. (March 5, 2010). In 2004, Mark Zuckerberg Broke into a Facebook User’s Private Email Account. (Retrieved April 21, 2018).

CB Insights. (June 1, 2017). Facebook’s Emotion Tech: Patents Show New Ways For Detecting And Responding To Users’ Feelings. (Retrieved on June 5, 2017).

Clark, Bryan. (February 11, 2016). ‘I have nothing to hide’ is killing the privacy argument. (Retrieved on January 13, 2017).

Crary, Jonathan. (2001). Suspensions of perception: attention, spectacle, and modern culture. London, England. MIT Press.

Debord, Guy. (1983). The Society of the Spectacle. Trans. Fredy Perlman. Detroit: Black & Red, Print.

Derakhshan, Hossein. (December 29, 2015). Iran’s blogfather: Facebook, Instagram and Twitter are killing the web. (Retrieved on January 13, 2018).

Derakhshan, Hossein. (July 14, 2015). The Web We Have to Save. (Retrieved on January 13, 2018).

Derakhshan, Hossein. (May 11, 2016). Mark Zuckerberg is a hypocrite - Facebook has destroyed the open web. (Retrieved on January 13, 2018).

Dillet, Romain. (October 20, 2017). How I cured my tech fatigue by ditching feeds. (Retrieved on April 10, 2018)

Dormehl, Luke. (2014). The Formula. New York: Perigee. Print.

Dowling, Tim. (June 6, 2017). On Facebook, even Harvard students can’t be too paranoid. (Retrieved on June 5, 2016).

Dwoskin, Elizabeth. (April 30, 2018). WhatsApp founder plans to leave after broad clashes with parent Facebook (Retrieved on April 30, 2018).

Ekbia, Hamid., Nardi, Bonnie. (June 2, 2014). Heteromation and its (dis)contents: The invisible division of labor between humans and machines. (Retrieved on April 23, 2018).

Gruber, John. (June 6, 2017). Fuck Facebook. (Retrieved on June 6, 2017).

Facebook, Inc. (2017). Facebook 2017 Annual Report, Form 10-K. US Securities and Exchange Commission. (Retrieved on April 19, 2018)

Facebook, Inc. (June 28, 2013). User Activity Tracking System. US Patent Office. (Retrieved April 24, 2018).

Facebook, Inc. (December 30, 2014). Systems and methods for image object recognition based on location information and object categories. US Patent Office. (Retrieved April 24, 2018).

Facebook, Inc. (March 15, 2013). Multi-Factor Location Verification. US Patent Office. (Retrieved April 24, 2018).

Faliszek, Chet. (April 2, 2018). “I recently posted about Oculus/Facebook and their data collection. Let me go more in depth and this isn’t just about today this is about the future of XR. At the heart of the matter are these points where their privacy policy and actions differ from other XR companies. 1/many”. Twitter. https://twitter.com/chetfaliszek/status/980861065989783552. (Retrieved on April 19, 2018)

Ferri, Jessica. (2018). How Swipe Left, Swipe Right Became a Cultural Phenomenon. (Retrieved on April 19, 2018)

Frost, Brad. (September 11, 2017). Facebook, You Needy sonofabitch. (Retrieved April 21, 2018).

Greenwald, Glenn, MacAskill, Ewen. (June 7, 2013). NSA Prism Program Taps in to User Data of Apple, Google and others. (Retrieved April 18, 2018)

Harris, Tristan. (May 18, 2016). How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist. (Retrieved on April 9, 2018).

Hawley, Charles. (June 8, 2012). Critique of German Credit Agency Plan to Mine Facebook for Data. (Retrieved April 21, 2018).

Head, Beverley. (October 6, 2014). MasterCard to Access Facebook User Data. (Retrieved April 21, 2018).

Hern, Alex. (October 25, 2017). ‘Downright Orwellian’: journalists decry Facebook experiment’s impact on democracy. (Retrieved on April 10, 2018).

Hern, Alex. (January 23, 2018). ‘Never get high on your own supply’ – why social media bosses don’t use social media. (Retrieved on April 10, 2018).

Hess, Amanda. (May 9, 2017). How Privacy Became a Commodity for the Rich and Powerful. (Retrieved on November 3, 2017).

Hill, Kashmir. (July 11, 2017). How Facebook Figures Out Everyone You’ve Ever Met. (Retrieved on April 10, 2018).

Kaplan, Frederic. (August 1, 2014). Linguistic Capitalism and Algorithmic Mediation. University of California Press Journals. (Retrieved April 18, 2018)

Kalish, Alyse. (February 7, 2018). 15 Things you should be doing after work instead of watching TV. (Retrieved on April 19, 2018)

Krause, Kati. (December 11, 2015). Facebook’s Mental Health Problem. (Retrieved on January 3, 2016).

Lanchester, John. (August 17, 2017). You Are the Product. (Retrieved on April 9, 2018).

Laney, Doug. (May 3, 2012). To Facebook You’re Worth $80.95. (Retrieved April 24, 2018).

Lewis, Paul. (October 6, 2017). ‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia. (Retrieved on April 9, 2018).

Lomas, Natasha. (February 19, 2018). Facebook’s tracking of non-users ruled illegal again. (Retrieved on April 10, 2018).

Machkovech, Sam. (May 1, 2017). Report: Facebook helped advertisers target teens who feel “worthless”. (Retrieved on November 3, 2017).

Malik, Om. (February 20, 2018). The #1 Reason Facebook Won’t Ever Change. (Retrieved April 19, 2018)

Marlinspike, Moxie. (June 13, 2013). Why ‘I Have Nothing to Hide’ Is the Wrong Way to Think About Surveillance. (Retrieved on January 3, 2017).

McMillen, Stuart. (March 2012). Amusing Ourselves to Death. (Retrieved on April 30, 2018).

Miller, Joe. (May 26, 2016). How Facebook’s tentacles reach further than you think. (Retrieved on August 1, 2017).

Musser, Cody. (December 28, 2015). I’m quitting Facebook in 2016—and you should too. (Retrieved on January 13, 2018).

Nelson, Joe. (April 20, 2015). Going “Write-Only”. (Retrieved April 20, 2018)

Newton, Casey. (February 6, 2018). Facebook hired a full-time pollster to monitor Zuckerberg’s approval ratings. (Retrieved on April 10, 2018).

Newton, Casey. (March 20, 2018). WhatsApp co-founder tells everyone to delete Facebook. (Retrieved on April 30, 2018).

Nunez, Michael. (May 9, 2016). Former Facebook Workers: We Routinely Suppressed Conservative News. (Retrieved on November 3, 2017).

Nunez, Michael. (May 3, 2016). Want to Know What Facebook Really Thinks of Journalists? Here’s What Happened When It Hired Some. (Retrieved on November 3, 2017).

Oddshot Compilations. (April 11, 2018). Mark Zuckerberg: “We run ads”: U.S. Senate Hearing (Retrieved April 23, 2018).

Parakilas, Sandy. (November 19, 2017). We Can’t Trust Facebook to Regulate Itself. (Retrieved on April 18, 2018).

Perez, Sarah. (February 12, 2018). Facebook is pushing its data-tracking Onavo VPN within its main mobile app. (Retrieved on April 10, 2018).

Parrish, Shane. (January 24, 2018). Most of what you’re going to read today is pointless. (Retrieved on April 10, 2018).

Perlroth, Nicole, Frenkel, Sheera & Shane, Scott. (March 19, 2018). Facebook Exit Hints at Dissent on Handling of Russian Trolls. (Retrieved on April 10, 2018).

Reynolds, Emily. (November 1, 2016). What could Facebook target next? Our mental health data. (Retrieved on November 1, 2016).

Russell, Bertrand. (2013). The Conquest of Happiness. Liveright, Print.

Satyal, Parimal. (November 2, 2017). Against an Increasingly User-Hostile Web. (Retrieved on April 10, 2018).

Solnit, Rebecca. (May 1, 2018). Driven to Distraction. (Retrieved April 23, 2018).

Solon, Olivia. (June 16, 2017). Revealed: Facebook exposed identities of moderators to suspected terrorists. (Retrieved on April 9, 2018).

Smith, Jack IV. (February 24, 2016). Facebook Is Using Those New “Like” Emojis to Keep Tabs on Your Emotions. (Retrieved on April 20, 2018)

Srnicek, Nick. (August 30, 2017). We need to nationalise Google, Facebook and Amazon. Here’s why. (Retrieved on April 9, 2018).

Stallman, Richard. (2016). Reasons not to use Facebook. (Retrieved on June 5, 2016).

Staltz, André. (October 30, 2017). The Web began dying in 2014, here’s how. (Retrieved on April 10, 2018).

Stern, Joanna. (March 7, 2018). Facebook Really Is Spying on You, Just Not Through Your Phone’s Mic. (Retrieved on April 10, 2018).

Taleb, Nassim Nicholas. (2012). Antifragile: Things That Gain from Disorder. Random House. Print.

Thaler, Richard H., and Cass R. Sunstein. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven: Yale University Press. Print.

Turner, Fred. (October 5, 2017). The arts at Facebook: An aesthetic infrastructure for surveillance capitalism. (Retrieved on April 21, 2018).

Thoreau, Henry David. (1863). Life Without Principle. (Retrieved on April 19, 2018)

Tucker, Ian. (February 12, 2016). Douglas Rushkoff: ‘I’m thinking it may be good to be off social media altogether’. (Retrieved on January 13, 2018).

Virani, Salim. (January 29, 2015). Get your loved ones off Facebook. (Retrieved on April 9, 2018).

Wald, Chelsea. (December 3, 2015). Is Facebook Luring You Into Being Depressed?. (Retrieved on January 3, 2016).

Wolford, Ben. (October 23, 2017). The product Facebook sells is you. (Retrieved on January 3, 2016).

Zuboff, Shoshana. (May 3, 2016) The Secrets of Surveillance Capitalism. (Retrieved on January 3, 2017).


Ask HN: Freelancer? Seeking freelancer? (May 2018)


SEEKING WORK | NYC or REMOTE | FULL-STACK, REACT NATIVE

My preferred stack is Typescript, React Native, Node Koa, and PostgreSQL, but I'm productive in a number of other technologies.

I've spent the last few years building React Native apps, both as a freelancer and as a senior developer at a venture-backed startup.

Before that, I had a range of experiences including working for McKinsey Digital and founding a startup (500 Startups Batch 13).

Here's a work sample from one of my side projects: http://emersonjournal.com/

Email matt@mattcasey.nyc


SEEKING WORK - REMOTE Highly experienced VP of Engineering & Lead Web Developer.

Skills:

* NodeJS/Meteor/SailsJS

* Serverless (going heavy on that one)

* Cloud technologies (AWS/Azure/GCP)

* Crypto/Blockchain - mostly the theoretical parts (understanding of different Proof of Stake algorithms, Solidity contracts) - Highly motivated to work with this.

* Wordpress/CodeIgniter/Yii/Drupal (Components, Hacks, Themes) - less motivated, unless truly cutting edge (or WP VIP projects)

* CI & Unit testing - Jenkins, Mocha & Karma for JS, Toast for PHP, as well as Selenium

* Django (general Python too) - to a lesser extent

Seeking: Challenging projects. Most recently worked extensively with Serverless & AWS APIs, building cloud-related prototypes, before that worked as an AngularJS specialist.

Example work: Upon request

Location: EU

Contact: dev (at) azdv.co


SEEKING WORK - Chicago or Remote

Full-stack / macOS / iOS developer with 10 years professional experience making a diverse range of apps and websites.

Portfolio: http://sdegutis.com/

Email: sbdegutis@gmail.com

Phone: 815.388.7881 - free consultation

Rate: Very competitive


SEEKING WORK — Remote okay

I’m Vijay, Full-stack Designer and Developer from South India.

Designed the award-winning branding for io.js, which eventually merged back into Node.js from which it was forked — https://behance.net/gallery/23269525/IOJS-logo-concept

Previously at Infinera, I did data visualization using d3.js to build an Angular-based frontend for their Java NMS, converting their graph network of nodes and links into a force-directed semantic graph (zooming in or out lets you drill down the network, similar to Google Maps). I can't share this work due to NDA, but it was very similar to the new version of Cisco DNA - https://www.cisco.com/c/en/us/products/cloud-systems-managem...

My work history involves ample bits of Nginx + node.js on the server side, puppeteer for scraping websites, passport for handling APIs and OAuth, d3.js for visualizations, logo design + branding, landing pages and full-on websites and web apps (React-Node-Mongo stack).

Selected works are up on https://dffrnt.com

If you find my work interesting, please email me at vijay@dffrnt.com

P.S: You can also find me on Twitter as @vjk2005 tweeting an eclectic mix of Game Dev, pop culture, Japan, crypto and AI


SEEKING WORK | FULL STACK DEVELOPER | REMOTE + EDINBURGH, UK.

Freelance full stack software developer with over 10 years experience including a PhD in software verification offering:

- Web app development: JavaScript (Node.js, TypeScript, Vue, Angular, jQuery, D3), Python (Flask, Django), Java, PHP (WordPress)

- Mobile app development: Android, iOS, PhoneGap/Cordova

- Cloud hosting: Creating scalable apps that run on Heroku and AWS

- SEO: On-page audits and optimisations.

- Code quality: Reducing defects in existing projects by integrating test suites, staging + development environments, Continuous Integration, planning boards and code reviews

Portfolio and more information available at https://www.seanw.org.

Recent example project: https://www.checkbot.io/

Contact sw@seanw.org for more details.


SEEKING FREELANCE | San Francisco | In person preferred, remote accepted

Starsky Robotics is a self driving trucking company looking for a contractor experienced in real time video streaming. Our goal is to achieve low-latency peer to peer video streaming over a network connection with variable bandwidth, variable latency, and variable packet loss.

Starsky self driving trucks run autonomously on the highway and are teleoperated for the first and last mile. We currently have a stable, working version of teleoperation but are looking to improve the video quality and reliability.

We use C++ and Python internally, but we are language-agnostic for this project. This project's deliverable could be an integration with an enterprise solution or a custom built streaming pipeline using open source frameworks.

Email kartik@starskyrobotics.com


SEEKING FREELANCER | NYC | Android

Handy is changing the way the world buys services by connecting customers with vetted, independent, local service professionals in a fast, convenient and reliable way - at the tap of a button.

We are a collaborative team of about 100 people across marketing, ops, customer support, product, data, finance and engineering, and our headquarters is located in the Flatiron District, NYC.

I am looking for a freelancer who can work onsite with our existing Android team in NYC.

I’m currently an Engineering Manager with almost 10 years of hands on software experience. Feel free to reach out to me directly at eabrahamsen[at]handy.com if you have any questions.

Here is some recent news about Handy.

https://techcrunch.com/2018/03/19/walmart-to-sell-handys-in-...

https://www.inc.com/nina-ojeda/amazon-has-stiff-competition-...


SEEKING WORK - Remote / Singapore / Bali / Costa Rica / anywhere nice (give me a good enough reason to travel and I'll be there!)

I'm a Shopify Expert from Waterloo, Canada (https://experts.shopify.com/patrick-bollenbach).

This means I...

- Build and setup e-commerce stores on Shopify

- Do in-depth theme customization jobs

- Develop private Shopify applications for features not natively supported by the platform

I do a lot of work for startup companies in Asia/Australia, but am currently looking to do some more work for agencies in North America that are looking to get into the e-commerce game, or that have some overflow Shopify work.

Send me an email, we can chat and figure out if I can help you out.

Portfolio - https://bolle.co

Email - patrick(at)bolle(dot)co


SEEKING WORK - remote, short to medium term projects - zak.wilson@gmail.com

I make software - mostly full-stack web development and HTTP APIs, but I'm adaptable. I have some interest in artificial intelligence and machine learning. I have a little experience making Android apps, and my open-source Android app Ceilingbounce has happy users.

I can do stuff that's harder than basic CRUD apps. Stuff I know well: Clojure, Ruby (with or without Rails), Python, Django, Javascript, Lua, PostgreSQL, MySQL, SASS, responsive CSS.

Other stuff I've used for something non-trivial at least once: Common Lisp, Scheme, Java, SASS, C, PHP, Haskell, Bash, Perl, MongoDB, Mirah, Android development with Clojure. Yes, I can probably pick up that language or tool you're using that nobody has ever heard of.

Github: https://github.com/zakwilson

Some public facing things I've worked on:

https://priceonomics.com

https://survis.com


SEEKING WORK - Portland, OR or Remote

I'm a polyglot, full-stack developer with 17 years experience. My specialties are Rails, Postgres and Chef/AWS. I've done several Postgres C extensions for performance and scalability, and recently I wrote my first for-pay Rust code: a small network service. I'm also very comfortable in Angular, Vue, Java, and Python. I am reliable, easy to work with, quick to turn things around, and a good communicator. I can work solo or on a team, either as lead or a team member. I value client satisfaction as highly as technical excellence.

You can see some of my recent work here:

https://illuminatedcomputing.com/portfolio

https://github.com/pjungwir/aggs_for_arrays

https://github.com/pjungwir/db_leftovers

If you'd like to work together, I'd be happy to discuss your project!: pj@illuminatedcomputing.com


SEEKING WORK -- Joliet, IL -- remote/freelance

I have 16 years of experience building trading related systems for both domestic and international financial markets. I also have created custom back-end frameworks used by various government agencies. While most of my experience has been directed towards building low-latency, reliable, and accurate back-end systems, I have also been getting into Android development as well.

I'm comfortable working with existing systems or helping to design and develop new ones.

If you think I can be of value to your business, I would love to have a conversation and discuss further.

Languages: Java, Kotlin, C, Bash, HTML, CSS, SQL

Website: http://www.bluetowerdigital.com

Email: tszum[at]bluetowerdigital.com


SEEKING WORK -- Denver, Co -- remote/freelance

Looking for work in Stock, Options, CryptoCurrency trading.

Technologies: Trading API, Stocks, Options, Crypto Currencies, Trading, Python, PHP, MySql, MongoDB, Finance

Résumé/CV: http://www.strategic-options.com/resume?=algo_f

Email: chad.humphrey@strategic-options.com

Algorithms / Strategies

-Volatility Algorithm, deployed across a $150 million portfolio

-Options Implied Volatility Arbitrage strategies

-Stock & Equity Algorithms, currently tracking over 500 stocks

-Smaller Bitcoin / Cryptocurrency algorithms

Software & API:

-TD Ameritrade, Interactive Brokers, Etrade, Ally

-Scraping techniques


SEEKING WORK | Austin, TX or Remote | iOS

Hi, I'm Josh. I've been developing iOS apps for the last 7 years including Tinder, CrowdRise, and LivingSocial. I've also spent time at Apple, Google, Microsoft, and an acquired startup. (I'm most proud of rebuilding the Tinder card stack and watching it spend over a year in production (2015-2016).)

I can help your team with...

- Designing and developing stable iOS features, quickly

- Mentoring developers on best practices in mobile app development

- Setting up a CI pipeline, code reviews, and unit testing

- Advising on development process inefficiencies

Check out some of my previous work here: https://iamjo.sh/work/

I'm available locally in Austin, TX or remotely during normal hours for any US timezone.

My standard rate is $150 per hour and non-negotiable.

josh@iamjo.sh | https://iamjo.sh


SEEKING WORK | Toronto, NYC, SF, Remote, Willing to Travel

Bonafero provides technology consulting services to drive new business value. We partner with our clients to re-think and modernize the way they deliver solutions.

What we've done for our clients, as an interim leadership (leading teams of 100+) & delivery team:

  - Introduced and executed on new organization structure 
  - Re-prioritized product development by focusing on real customer needs, delivering actual business value as fast as possible
  - Delivered major enterprise projects ahead of schedule
  - Introduced DevOps and continuous delivery practices
  - Modernized legacy systems using micro-services architecture leading to cost savings in millions per year
What's our stack? We've worked on projects that are in:
  - iOS (Swift, Objective-C)
  - Android (Java, Kotlin)
  - Backend (Go, Node, Java, Ruby + Rails, PHP, .NET, etc)
  - Frontend (JavaScript, React, Angular, etc) 
https://www.bonafero.com

Let's talk about how we can help: jonathan@bonafero.com


SEEKING WORK -- Jacksonville, FL -- remote/freelance

I am an experienced Python developer, having used the language in all kinds of areas and situations, including web development (Flask, Django, Pylons, Google App Engine, etc), GUI development, database access (using MS SQL Server, MySQL, and Postgres), scripting, backend development, automated testing, web crawling/scraping, data extraction and parsing/ETL, etc.

I am looking for full-time or part-time work, either one is fine. If you are looking to get a small project done, or you have an existing project where some maintenance work needs to be done on a regular basis, then I would love to hear from you.

I am also available for technical writing (I kept a programming blog for many years, mostly about Python).

(For the record: Although Python is my main programming language, I am also interested in, and have worked with, many other programming languages, including C, D, Delphi, Go, C#/Mono, Ruby, OCaml, Prolog, Lisp, Scheme, etc, on Windows, Mac OS X and Linux systems. I am also available to work on projects in these languages.)

Website: http://aquila.blue

Email: zephyrfalcon at gmail.com


SEEKING WORK - remote (United States based)

Have you gotten your company past the first stage or two to where it's profitable? Have you been thinking about starting to collect data and optimize? Then let's discuss!

I will instrument your software to produce the necessary metrics and data points, store them, analyze them, view them on dashboards, and best of all: optimize and grow! Both now and down the road.

Another common scenario I can help you with: have you created a monster Excel spreadsheet fed by your database? I can replace it with dashboards that show the same information in a much more useful format so that you won't have to squint at that spreadsheet anymore!

Remote only. Not willing to relocate, but open to a small amount of travel.

info [ @ ] [ please copy and paste my username ] .com

A few keywords for people using search: business intelligence, data analytics, data warehousing, ETL, data visualization, reporting, time series, Django, InfluxDB, Graphite, Grafana, Segment.


SEEKING WORK | Front-End Angular Developer

Location: Portugal

Remote work: Yes

Portfolio: https://nunoarruda.com/#portfolio

GitHub: https://github.com/nunoarruda

Resume: https://nunoarruda.com/resume.pdf

Email: nuno@nunoarruda.com

Hi, I'm Nuno, a results-oriented front-end Angular developer with a strong technical skill set, attention to detail, and 16 years of experience. I have a passion for translating beautiful designs into functional user interfaces and building great web applications.

I actively seek out new technologies and stay up-to-date on industry trends and advancements. Continued education has allowed me to stay ahead of the curve and deliver exceptional work to each employer I’ve worked for - both full-time and contract.

I've successfully delivered projects like a CSS UI library used by 17,000 employees, a mobile app that now has 15,000+ users, and an award-winning payroll system. I've done frontend work for Adobe, 21st Century Fox, Bayer, among other companies.

I've been working remotely for the last 5 years for clients worldwide and I can be flexible in order to have overlapping working hours with a distributed team.


SEEKING FREELANCER

Worldwide, REMOTE, near-total flexibility on hours. $70-100/hr. Expert Interviewer at Karat (https://karat.io)

Work from anywhere in the world that has a solid internet connection. Work as much or as little as you want. Work any day, any time of day, any number of hours -- you can do 0 one week, 50 the next week, and back to 0 the next week. The only requirement is that we want you to roughly average at least 10 hours a week, or else the training/time investment doesn't make as much sense from your end or ours. When each interview is done, you're done.

I know the above might sound a little strange, so a bit about the company for context: Karat is a Seattle-based startup that does software engineering interviews on behalf of other companies -- primarily first-round phone screens. Quickly-growing companies can spend a significant fraction of their engineers' time interviewing; we help take the load off. We've done a lot to make the interview experience better for all stakeholders that I could write whole essays about, but suffice it to say that candidates love working with us, clients love working with us, and we're well-funded and growing quickly as a result.

Because of this quickly-growing demand, we're looking to hire more Expert Interviewers. The ideal candidate is a software engineer with strong written and verbal English skills and at least a few years of professional experience. Interviewing experience would be great, but we spend 25 hours (paid) training you before you even start, so if you're strong technically and love working with people we can usually make it work :) Interviews are conducted over video chat, using a collaborative code editor.

Some of our interviewers are freelancers who use our scheduling model to backfill hours; others are full timers at top tech companies looking to make some extra cash; others have quit their jobs to work with us full time; some are digital nomads; one of our interviewers is road tripping around North America for a year and a half, doing anywhere from zero to 40 interviews each week depending on where he is and what the weather's like.

The application form is here: https://jobs.lever.co/karat/d44ab283-c7c0-4bbd-b8c3-4dc0ced6...

I know it's a pretty unique job, so if you have any questions reply here or email me at josh@karat.io and I'm happy to talk through any of it.


SEEKING FREELANCER, Scotland, UK, Remote Okay

I'm a serial entrepreneur and Digital Consultant looking for some help in my current venture.

I'm looking for someone versed in Computer Vision to help me extend my Object Detection and Tracking code in the automotive space (don't worry, it's not self-driving).

Fairly common stack at the minute: Python, NumPy, OpenCV doing much of the tracking work, Caffe doing much of the object detection. The key challenge? I need to get real-time results on a Raspberry Pi!

Email is nile d0t frater at gmail d0t com


SEEKING FREELANCER - Remote ok (we are in Maine & Tampa, FL)

We're looking for someone to work with Squaremill (http://squaremill.com) as a Ruby on Rails freelancer. The role is a six-month contract at approx. 20 hours a week and requires the following skills:

- Ruby on Rails

- JS (React, plain JS)

- Sysadmin (configuring linux boxes, working knowledge of AWS etc.)

Please send your rates, resume, github if you have it and any examples of work you'd like us to see.


SEEKING WORK - Remote/Westchester, NY Area

I can prototype new ideas, research technologies/trends, extend/maintain an existing system, or quickly build out a one-off microsite. I can work solo or in teams with equal ease. I'm a full-stack programmer primarily using C# (.NET Framework or .NET Core) for backend work on Linux VMs and Azure App Service. Front-end work is mostly jQuery/Bootstrap with some Vue experimenting of late.

Portfolio - https://wetzdev.com/

Email - my user name on gmail


SEEKING WORK | San Francisco area (on-site, remote possible) | Ruby and backend

Hi, I’m Joe. I work with startups and growing companies to help software development teams solve their backend challenges by bringing technical and organizational expertise.

Teams turn to me when they need help

  – providing guidance and mentorship to developers
  – delivering features faster and with fewer issues
  – getting out from under technical debt
  – completing critical projects
  – scaling the product, process, and team
I bring experience in building, changing, and operating large scale Ruby/Rails applications and related systems.

See what people say about working with me at https://firebreaklabs.com

joe@firebreaklabs.com | https://firebreaklabs.com


SEEKING WORK Remote (I'm based in Baltimore. No availability until August but I'm posting this anyway because it helps to plan in advance.)

I help B2B software companies exceed their growth goals, whether that's helping AT&T bring new IoT solutions to market, turning Domino Data Lab into a market leader for data science platforms, or accelerating growth and revenue for Clubhouse, Crew, Etleap, FoundationDB, Gravitational, Inkling, Netlify, Scalyr, Singular, and other B2B software companies.

More info at https://www.gkogan.co or send me an email (greg[at]gkogan.co).


SEEKING WORK - Remote, San Francisco, Washington D.C

I'm a full stack developer and designer.

I'll build you a minimal lovable product for a fixed $9K, in 4 weeks.

For iOS apps, I use Swift. For web apps, I use Ruby/Rails, JavaScript.

To see some of my recent work:

https://breue.com/

https://dribbble.com/zachvanness

My email: zach@breue.com


SEEKING WORK - Remote/Freelance, Montreal (Canada)

iOS/macOS Developer (Objective-C, Swift) and C# Unity Game Developer.

More info at http://chriscomeau.com/ or email (chris.comeau[at]skyriser.com).

On the Timing of Time Zone Changes (2016)


What do Turkey, Chile, Russia, Venezuela, Azerbaijan, North Korea and Haiti all have in common? Time Zone Chaos!

No, that's not the punchline to a joke. It's actually quite a serious problem. The biggest issue with time zones is not that they exist, nor that they have daylight saving time, but that they often change in a haphazard manner. Allow me to explain.

First, understand that from a global perspective, one might think that the time zones of the world should be managed by some relatively neutral international body, such as the ITU division of the United Nations, or perhaps the IAU. However, each of the world's time zones is actually controlled from a local perspective. Each individual nation has a sovereign right to decide the local time for the lands within its jurisdiction. This includes both the offset from Universal Time and the rules that govern daylight saving time, should a nation choose to use it.

This in itself is not a problem, and I absolutely agree that countries should be able to do whatever they want with the clocks within their borders. However, time and time again we run into the same problem: the rules are changed without enough notice. All of the countries mentioned earlier have done this recently, along with many others.

It's crucial that when governments make changes to their time zones or daylight saving time rules, they provide ample lead time for technology to catch up. One has to consider the real work that people have to do to validate the change, create a data update, test the changes, and publish and distribute the update. Then you have to consider that individuals don't always update their systems instantly. It's very common for a time zone update to be available for weeks or months before it is actually installed by the end user.

Turkey - A Case Study:

Let's look at Turkey as an example. In 2015, the government decided that it would be a good idea to delay the end of daylight saving time by two weeks to allow for more daylight hours at the polls during their election season. They moved the end of DST date from October 25th to November 8th.

  • The first word about this was from an unofficial news article on September 8th, about 6 weeks before the clocks were due to change. However, this article wasn't noticed by the TZ community until around September 19th. It's difficult to go off of news stories alone, as they are often wrong or fuzzy on the details. A few words from a politician to a reporter are simply not good enough.
  • On September 29th, a government news agency also reported the change. It still wasn't fully official, as it had not come with any kind of decree or legislation. But it was enough to convince some in the TZ community that it was real, and thus a change to the IANA TZ database was initiated, and then released a few days later on October 1st.
  • The official announcement from the government finally came on October 4th, when it was published in the official gazette. This is about three weeks official notice of the proposed change.
  • Many technology vendors, including big players like Apple, Google, and Oracle, took the data from IANA and published it through their own channels. As an example, Apple released it to iPhone and iPad devices with iOS 9.1 update, on October 21st, leaving only 3 days for users to install the update to prevent their clocks from changing on the wrong day.
  • For Microsoft Windows, which follows a slightly different process and requires a higher degree of confirmation, an announcement was made on October 9th and an update was issued on October 20th.
  • In some cases, the date was missed entirely, such as with pytz, the popular time zone library for the Python language, which published its version 2015.7 on October 26th (see the sketch below).
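
To make that failure mode concrete, here is a minimal sketch (our own illustration, assuming Python 3.9+ with the zoneinfo module and a current tz database) of how the same UTC instant maps to different Istanbul wall-clock times depending on whether a device picked up the 2015 rule change:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # 2015-11-01 12:00 UTC fell between the old end-of-DST date (Oct 25)
    # and the delayed one (Nov 8) for Turkey.
    instant = datetime(2015, 11, 1, 12, 0, tzinfo=timezone.utc)

    # With updated tzdata, Turkey was still on DST (UTC+3) at this instant:
    print(instant.astimezone(ZoneInfo("Europe/Istanbul")))
    # -> 2015-11-01 15:00:00+03:00

    # A device with stale rules would have already fallen back to UTC+2 and
    # shown 14:00, a one-hour disagreement, which is exactly what Turks saw.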

So what was the result? Well, to quote the BBC:

Confused Turks are asking "what's the time?" after automatic clocks defied a government decision to defer a seasonal hour's change in the time.

Or as the IBT reported:

Millions of Turks woke up to a confusing morning on Sunday ... as smartphones, tablets, and computers had automatically updated in keeping with other countries in the Eastern European Time zone, even though Turkey delay setting clocks back an hour for the next two weeks.

You can imagine that this probably had exactly the opposite effect on voting than what was envisioned. You would think they'd have known better, though, since almost the exact same thing had happened the previous year! As reported by the Independent Balkan News Agency in 2014:

An unbelievable confusion to 52.9 million Turkish voters was caused by the decision of the turkish government to postpone for a day the time shift applied all around the world, where the indicators are turned one hour forward. The reason for postponing the application of summer time according to the Erdogan government, was to facilitate the smooth conducting of the elections, but nobody predicted the "new technology" factor. All smart phones of the Turkish citizens changed the time automatically, resulting in thousands of voters going to the polls earlier having to wait for an hour to vote.

Similar problems were also caused to computers that had not downloaded a new version of the software. Problems also occurred in the luggage delivery system at Istanbul’s airport as the system automatically changed the time, ignoring the government’s plans and as a result the luggage were delivered to the passengers with great delay. There were also problems with many flights as passengers were confusing their departure time.

What about the rest of the world?

Not only did Turkey not learn from their own mistakes, but other countries around the world also have failed to learn from the experience and continue to have this problem. Remember the list I rattled off earlier? Let's take a closer look:

  • Chile had been on "permanent DST" in 2015, but on March 13th, 2016, the government announced they would return to Standard time starting May 15th, 2016 (two months notice).

  • Russia has 11 distinct time zone offsets, ranging from UTC+02 through UTC+12, with a complex history of changes in the boundaries between them.

    For 2016, six regions changed their time zones on March 27, 2016. Each of these regions had their own law placing the change into effect. One was signed on December 30th (12 weeks notice), which is reasonable. The others however were signed on either February 15th (6 weeks notice) or March 9th (2 weeks notice).

    Two other regions had pending legislation during this period. One bill didn't pass until April 5th, with its effective date stretched out to April 24th (3 weeks notice). The other is still awaiting its final signature by the President, which is expected to occur in the next few days, and has an effective date of May 29th (4 weeks notice). (Update: It was passed on April 26th.)

  • Venezuela had been on UTC-4:30 since 2007, but the government recently decided that it would return to UTC-4 on May 1st, 2016. The change was first announced on April 15th, then became official on April 18th when it was published in the country's Gazette (2 weeks notice).

  • Azerbaijan canceled DST permanently in 2016. It was scheduled to go into effect on March 27th, but the cancellation wasn't announced until March 17th (10 days notice).

  • North Korea moved from UTC-9 to UTC-8:30 on August 15th, 2015. The change was announced on August 7th. (8 days notice)

  • Haiti canceled DST for at least the 2016 calendar year. It was scheduled to go into effect on March 13th, but on March 12th (just 1 day notice!) the government issued a press release canceling it.
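
Each notice period above is simple date arithmetic between the announcement and the effective date; a minimal sketch, using Venezuela's dates from the list as the example:

    from datetime import date

    def days_of_notice(announced: date, effective: date) -> int:
        """How much lead time technology had to react to a change."""
        return (effective - announced).days

    # Venezuela's 2016 return to UTC-4: official April 18, effective May 1.
    print(days_of_notice(date(2016, 4, 18), date(2016, 5, 1)))  # -> 13 days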

Other Timing Issues

While all of the above changes come with a certain degree of surprise, there are some other parts of the world that simply don't set any advance schedule at all for their daylight saving time rules.

Fiji is one such place. It has had DST every year since 2009. However, each year the government issues an announcement stating what date it will begin and end. It's slightly different each year, and it's unclear exactly when the government will reach its decisions, or what to do in the absence of an announcement. It would be much simpler if they would just decide on a regular schedule, and only make announcements if there are deviations from that schedule.

Another such place is Morocco, where the schedule for the first start of DST and the last end of DST is adequately defined, but every year since 2012 there has been a "DST suspension period", such that DST ends before the start of Ramadan and is restored sometime after. Not only does this mean that the clocks need to be changed four times in a single calendar year, but it also means that nobody is fully certain of when the middle two transitions will occur until the government makes an announcement. Part of the reason for this is that the dates for Ramadan are based on the observed sighting of the new moon. However, my personal opinion is that they should still fix the DST transitions to some schedule, even if it starts before Ramadan and ends sometime after. The unpredictability of the dates makes it just too difficult to know what time it is in Morocco unless you are actually there. (By the way, Egypt used to do this as well, but only in 2010 and 2014.)

Recommendations to the World's Governments

First, I must emphasize that these are my personal recommendations. I am not speaking on behalf of my government, my employer, nor the TZ community. These recommendations are based on years of experience working with time zone data in computing, and the observation of real events.

If you're going to make changes to your time zone(s), whether to the standard time offset from UTC, to the enactment or abolishment of daylight saving time, or to the dates and times that daylight saving time occurs, then please do all of the following:

  1. Give ample notice, preferably at least 6 months in advance of the change. One year or more would be even better.
  2. Provide that notice via an official government decree or passage of a law. Publish the law, and make it available online on an official government web site.
  3. Be sure to include the precise details of the change, including the date and the time of day that the change is to go into effect. For example, state "the clocks will advance forward by 30 minutes on April 1st, 2017 at 01:00 local time". Do not just say "The time will change in April". Also, if the change only affects a particular region of your country, please specify the exact areas that are affected.
  4. Notify your citizens and the world via press releases and the news media, but do not rely solely on this to communicate the change. The official decree or law should trump any statement made to the press.
  5. Send notification to the TZ community. To do this, simply send an email to tz@iana.org, which is the address for the tz discussion list. The email should contain a URL to the announcement published on an official government web site.
  6. If the change is to be aborted, please give ample notice of that as well.

Following these guidelines will ensure that your change is observed by technology, including computers, cell phones, and other devices.

Recommendations to Software Developers

  1. Don't try to invent your own time zones, or hard-code a list of time zones into your application.
  2. Let the features of your platform or library perform time zone conversions. Don't attempt to codify the rules on your own.
  3. Don't rely solely on fixed offsets from UTC, nor make any assumptions about daylight saving time for a particular time zone (a minimal sketch follows this list).
  4. Stay on top of time zone updates. Be sure you know how to keep current, using the mechanisms of your platform or library.
  5. Subscribe to the TZ Announcements mailing list, so you know when a new time zone update is available.
  6. If you have knowledge of an upcoming time zone change in a particular area that deviates from the currently known information, or if you have other questions about time zones in computing, join the TZ Discussion mailing list.
  7. Use timeanddate.com to validate any assumptions you have about the time zones for a particular region. The accuracy of this particular site is well established, and its owners participate in the TZ community.
  8. For Windows, .NET, and other Microsoft products, watch the news feed on this site so you know when platform updates are available. (Though you should prefer IANA time zones whenever possible, even if it means using a library to do so.)
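
As an illustration of points 2 and 3, here is a minimal sketch (assuming Python 3.9+ and its zoneinfo module) that lets the platform's time zone database do the conversion rather than hard-coding an offset:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def to_local(utc_dt: datetime, tz_name: str) -> datetime:
        """Convert an aware UTC datetime using the platform's tz database,
        which carries each zone's DST rules and historical changes."""
        return utc_dt.astimezone(ZoneInfo(tz_name))

    # Correct even across rule changes, provided tzdata stays current (point 4).
    print(to_local(datetime.now(timezone.utc), "America/Sao_Paulo"))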

Amazon and the Unwisdom of the Populist Crowd


There are some who view a host of negative social ills allegedly related to the large size of firms like Amazon as an occasion to call for the company's breakup. And, unfortunately, these critics find an unlikely ally in President Trump, whose tweet storms claim that tech platforms are too big and extract unfair rents at the expense of small businesses. But these critics are wrong: Amazon is not a dangerous monopoly, and it certainly should not be broken up.

Of course, no one really spells out what it means for these companies to be “too big.” Even Barry Lynn, a champion of the neo-Brandeisian antitrust movement, has shied away from specifics. The best that emerges when probing his writings is that he favors something like a return to Joe Bain’s “Structure-Conduct-Performance” paradigm (but even here, the details are fuzzy).

The reality of Amazon’s impact on the market is quite different than that asserted by its critics. Amazon has had decades to fulfill a nefarious scheme to suddenly raise prices and reap the benefits of anticompetive behavior. Yet it keeps putting downward pressure on prices in a way that seems to be commoditizing goods instead of building anticompetitive moats.

Twitter rants aside, more serious attempts to attack Amazon on antitrust grounds argue that it is engaging in pricing that is “predatory.” But “predatory pricing” requires a specific demonstration of factors — which, to date, have not been demonstrated — in order to justify legal action. Absent a showing of these factors, it has long been understood that seemingly “predatory” conduct is unlikely to harm consumers and often actually benefits consumers.

One important requirement that has gone unsatisfied is that a firm engaging in predatory pricing must have market power. Contrary to common characterizations of Amazon as a retail monopolist, its market power is less than it seems. By no means does it control retail in general. Rather, less than half of all online commerce (44%) takes place on its platform (and that number represents only 4% of total US retail commerce). Of that 44 percent, a significant portion is attributable to the merchants who use Amazon as a platform for their own online retail sales. Rather than abusing a monopoly market position to predatorily harm its retail competitors, at worst Amazon has created a retail business model that puts pressure on other firms to offer more convenience and lower prices to their customers. This is what we want and expect of competitive markets.

The claims leveled at Amazon are the intellectual kin of the ones made against Walmart during its ascendancy that it was destroying main street throughout the nation. In 1993, it was feared that Walmart’s quest to vertically integrate its offerings through Sam’s Club warehouse operations meant that “[r]etailers could simply bypass their distributors in favor of Sam’s — and Sam’s could take revenues from local merchants on two levels: as a supplier at the wholesale level, and as a competitor at retail.” This is a strikingly similar accusation to those leveled against Amazon’s use of its Seller Marketplace to aggregate smaller retailers on its platform.

But, just as in 1993 with Walmart, and now with Amazon, the basic fact remains that consumer preferences shift. Firms need to alter their behavior to satisfy their customers, not pretend they can change consumer preferences to suit their own needs. Preferring small, local retailers to Amazon or Walmart is a decision for individual consumers interacting in their communities, not for federal officials figuring out how best to pattern the economy.

All of this is not to say that Amazon is not large, or important, or that, as a consequence of its success it does not exert influence over the markets it operates in. But having influence through success is not the same as anticompetitively asserting market power.

Other criticisms of Amazon focus on its conduct in specific vertical markets in which it does have more significant market share. For instance, a UK Liberal Democratic leader recently claimed that “[j]ust as Standard Oil once cornered 85% of the refined oil market, today… Amazon accounts for 75% of ebook sales … .”

The problem with this concern is that Amazon's conduct in the ebook market has had, on net, pro-competitive, not anti-competitive, effects. Amazon's behavior in the ebook market has actually increased demand for books overall (and expanded output), increased the amount that consumers read, and decreased the price of these books. Amazon is now even opening physical bookstores. Lina Khan made much hay in her widely cited article last year of the claim that this was all part of a grand strategy to predatorily push competitors out of the market:

The fact that Amazon has been willing to forego profits for growth undercuts a central premise of contemporary predatory pricing doctrine, which assumes that predation is irrational precisely because firms prioritize profits over growth. In this way, Amazon’s strategy has enabled it to use predatory pricing tactics without triggering the scrutiny of predatory pricing laws.

But it’s hard to allege predation in a market when over the past twenty years Amazon has consistently expanded output and lowered overall prices in the book market. Courts and lawmakers have sought to craft laws that encourage firms to provide consumers with more choices at lower prices — a feat that Amazon repeatedly accomplishes. To describe this conduct as anticompetitive is asking for a legal requirement that is at odds with the goal of benefiting consumers. It is to claim that Amazon has a contradictory duty to both benefit consumers and its shareholders, while also making sure that all of its less successful competitors also stay in business.

But far from creating a monopoly, the empirical reality appears to be that Amazon is driving categories of goods, like books, closer to the textbook model of commodities in a perfectly competitive market. Hardly an antitrust violation.

“Big is bad” may roll off the tongue, but, as a guiding ethic, it makes for terrible public policy. Amazon’s size and success are a direct result of its ability to enter relevant markets and to innovate. To break up Amazon, or any other large firm, is to punish it for serving the needs of its consumers.

None of this is to say that large firms are incapable of causing harm or acting anticompetitively. But we should expect calls for dramatic regulatory intervention — especially from those in a position to influence regulatory or market reactions to such calls — to be supported by substantial factual evidence and legal and economic theory.

This tendency to go after large players is nothing new. As noted above, Walmart triggered many similar concerns thirty years ago. Thinking about Walmart then, pundits feared that direct competition with Walmart was fruitless:

In the spring of 1992 Ken Stone came to Maine to address merchant groups from towns in the path of the Wal-Mart advance. His advice was simple and direct: don’t compete directly with Wal-Mart; specialize and carry harder-to-get and better-quality products; emphasize customer service; extend your hours; advertise more — not just your products but your business — and perhaps most pertinent of all to this group of Yankee individualists, work together.

And today, some think it would be similarly pointless to compete with Amazon:

Concentration means it is much harder for someone to start a new business that might, for example, try to take advantage of the cheap housing in Minneapolis. Why bother when you know that if you challenge Amazon, they will simply dump your product below cost and drive you out of business?

The interesting thing to note, of course, is that Walmart is now desperately trying to compete with Amazon. But despite being very successful in its own right, and having strong revenues, Walmart doesn’t seem able to keep up.

Some small businesses will close as new business models emerge and consumer preferences shift. This is to be expected in a market driven by creative destruction. Once upon a time Walmart changed retail and improved the lives of many Americans. If our lawmakers can resist the urge to intervene without real evidence of harm, Amazon just might do the same.

Backblaze's Hard Drive Stats for Q1 2018


Backblaze Drive Stats Q1 2018

As of March 31, 2018 we had 100,110 spinning hard drives. Of that number, there were 1,922 boot drives and 98,188 data drives. This review looks at the quarterly and lifetime statistics for the data drive models in operation in our data centers. We'll also take a look at why we are collecting and reporting 10 new SMART attributes and take a sneak peek at some 8 TB Toshiba drives. Along the way, we'll share observations and insights on the data presented, and we look forward to you doing the same in the comments.

Background

Since April 2013, Backblaze has recorded and saved daily hard drive statistics from the drives in our data centers. Each entry consists of the date, manufacturer, model, serial number, status (operational or failed), and all of the SMART attributes reported by that drive. Currently there are about 97 million entries totaling 26 GB of data. You can download this data from our website if you want to do your own research, but for starters here’s what we found.

Hard Drive Reliability Statistics for Q1 2018

At the end of Q1 2018 Backblaze was monitoring 98,188 hard drives used to store data. For our evaluation below we remove from consideration those drives which were used for testing purposes and those drive models for which we did not have at least 45 drives. This leaves us with 98,046 hard drives. The table below covers just Q1 2018.

Q1 2018 Hard Drive Failure Rates

Notes and Observations

If a drive model has a failure rate of 0%, it only means there were no drive failures of that model during Q1 2018.

The overall Annualized Failure Rate (AFR) for Q1 is just 1.2%, well below the Q4 2017 AFR of 1.65%. Remember that quarterly failure rates can be volatile, especially for models that have a small number of drives and/or a small number of Drive Days.
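
For readers new to the metric: AFR annualizes failures against the total number of days drives were in service. A minimal sketch of the calculation, with made-up numbers rather than Backblaze's actual figures:

    def annualized_failure_rate(failures: int, drive_days: int) -> float:
        """AFR as a percentage: failures per drive-year of operation."""
        return failures / drive_days * 365 * 100

    # Hypothetical: 30 failures over 900,000 drive days in one quarter.
    print(f"{annualized_failure_rate(30, 900_000):.2f}%")  # -> 1.22%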

There were 142 drives (98,188 minus 98,046) that were not included in the list above because we did not have at least 45 of a given drive model. We use 45 drives of the same model as the minimum number when we report quarterly, yearly, and lifetime drive statistics.

Welcome Toshiba 8TB drives, almost…

We mentioned Toshiba 8 TB drives in the first paragraph, but they don’t show up in the Q1 Stats chart. What gives? We only had 20 of the Toshiba 8 TB drives in operation in Q1, so they were excluded from the chart. Why do we have only 20 drives? When we test out a new drive model we start with the “tome test” and it takes 20 drives to fill one tome. A tome is the same drive model in the same logical position in each of the 20 Storage Pods that make up a Backblaze Vault. There are 60 tomes in each vault.

In this test, we created a Backblaze Vault of 8 TB drives, with 59 of the tomes being Seagate 8 TB drives and 1 tome being the Toshiba drives. Then we monitored the performance of the vault and its member tomes to see if, in this case, the Toshiba drives performed as expected.

Q1 2018 Hard Drive Failure Rate — Toshiba 8TB

So far the Toshiba drive is performing fine, but they have been in place for only 20 days. Next up is the “pod test” where we fill a Storage Pod with Toshiba drives and integrate it into a Backblaze Vault comprised of like-sized drives. We hope to have a better look at the Toshiba 8 TB drives in our Q2 report — stay tuned.

Lifetime Hard Drive Reliability Statistics

While the quarterly chart presented earlier gets a lot of interest, the real test of any drive model is over time. Below is the lifetime failure rate chart for all the hard drive models which have 45 or more drives in operation as of March 31st, 2018. For each model, we compute their reliability starting from when they were first installed.

Lifetime Hard Drive Failure Rates

Notes and Observations

The failure rates of all of the larger drives (8-, 10- and 12 TB) are very good, 1.2% AFR (Annualized Failure Rate) or less. Many of these drives were deployed in the last year, so there is some volatility in the data, but you can use the Confidence Interval to get a sense of the failure percentage range.
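
One way to compute such an interval (an assumption on our part, since the post doesn't spell out the exact method) is to treat failures as Poisson events over the accumulated drive days and use the exact Garwood interval, again with made-up numbers:

    from scipy.stats import chi2

    def afr_confidence_interval(failures, drive_days, alpha=0.05):
        """Exact Poisson (Garwood) interval for the annualized failure rate,
        treating failures as Poisson events over drive_days of exposure."""
        lo = chi2.ppf(alpha / 2, 2 * failures) / 2 if failures else 0.0
        hi = chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / 2
        return tuple(x / drive_days * 365 * 100 for x in (lo, hi))

    # Hypothetical: 30 failures over 900,000 drive days.
    print(afr_confidence_interval(30, 900_000))  # ~(0.82, 1.74) percent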

The overall failure rate of 1.84% is the lowest we have ever achieved, besting the previous low of 2.00% from the end of 2017.

Our regular readers and drive stats wonks may have noticed a sizable jump in the number of HGST 8 TB drives (model: HUH728080ALE600), from 45 last quarter to 1,045 this quarter. As the 10 TB and 12 TB drives become more available, the price per terabyte of the 8 TB drives has gone down. This presented an opportunity to purchase the HGST drives at a price in line with our budget.

We purchased and placed into service the 45 original HGST 8 TB drives in Q2 of 2015. They were our first Helium-filled drives and our only ones until the 10 TB and 12 TB Seagate drives arrived in Q3 2017. We’ll take a first look into whether or not Helium makes a difference in drive failure rates in an upcoming blog post.

New SMART Attributes

If you have previously worked with the hard drive stats data or plan to, you'll notice that we added 10 more columns of data starting in 2018. There are 5 new SMART attributes we are tracking, each with a raw and a normalized value:

  • 177 – Wear Range Delta
  • 179 – Used Reserved Block Count Total
  • 181 – Program Fail Count Total or Non-4K Aligned Access Count
  • 182 – Erase Fail Count
  • 235 – Good Block Count AND System(Free) Block Count

The 5 values are all related to SSD drives.

Yes, SSD drives, but before you jump to any conclusions, we used 10 Samsung 850 EVO SSDs as boot drives for a period of time in Q1. This was an experiment to see if we could reduce boot up time for the Storage Pods. In our case, the improved boot up speed wasn’t worth the SSD cost, but it did add 10 new columns to the hard drive stats data.

Speaking of hard drive stats data, the complete data set used to create the information in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purposes; all we ask is three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone. It is free.
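
As a starting point for your own analysis, here is a small sketch (assuming pandas and the published schema, in which each daily snapshot has one row per drive and a 0/1 failure column; the filename is hypothetical):

    import pandas as pd

    # One daily snapshot: date, serial_number, model, capacity_bytes,
    # failure (0/1), then the raw and normalized SMART columns.
    df = pd.read_csv("2018-03-31.csv")

    per_model = df.groupby("model")["failure"].agg(drives="count", failures="sum")
    print(per_model.sort_values("failures", ascending=False).head())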

If you just want the summarized data used to create the tables and charts in this blog post, you can download the ZIP file containing the MS Excel spreadsheet.

Good luck and let us know if you find anything interesting.

[Ed: 5/1/2018 – Updated Lifetime chart to fix error in confidence interval for HGST 4TB drive, model: HDS5C4040ALE630]

Andy has 20+ years experience in technology marketing. He has shared his expertise in computer security and data backup at the Federal Trade Commission, Rootstech, RSA and over 100 other events. His current passion is to get everyone to back up their data before it's too late.



In just 7 months, the US public domain will get its first infusion since 1998


In 1998, the US Congress retroactively extended the copyright term on US works, placing public domain works back into copyright and forestalling the entry into the public domain of a great mass of works that were soon to become free; now, 20 years later, with no copyright term extension in sight, the US public domain is about to receive the first of many annual infusions to come: a great mass of works that will be free for all to use.

Included in the 2019 tranche: William Carlos Williams's The Great American Novel, Charlie Chaplin's The Pilgrim, and Cecil B. DeMille's The Ten Commandments.

As Glenn Fleishman explains in The Atlantic, the copyright on these old works is a complex muddle that makes it difficult or even impossible to figure out the copyright status of a given work; only by placing them in the public domain can we be sure that they're freely usable, and thus able to be kept alive in the public imagination.

The reason that New Year’s Day 2019 has special significance arises from the 1976 changes in copyright law’s retroactive extensions. First, the 1976 law extended the 56-year period (28 plus an equal renewal) to 75 years. That meant work through 1922 was protected until 1998. Then, in 1998, the Sonny Bono Act also fixed a period of 95 years for anything placed under copyright from 1923 to 1977, after which the measure isn’t fixed, but based on when an author perishes. Hence the long gap from 1998 until now, and why the drought’s about to end.

Of course, it’s never easy. If you published something between 1923 and 1963 and wanted to renew copyright, the law required registration with the U.S. Copyright Office at any point in the first 28 years of copyright, followed at the 28-year mark with the renewal request. Without both a registration and a renewal, anything between 1923 and 1963 is already in the public domain. Many books, songs, and other printed media were never renewed by the author or publisher due to lack of sales or interest, an author’s death, or a publisher’s shutting down or bankruptcy. One estimate from 2011 suggests about 90 percent of works published in the 1920s haven’t been renewed. That number shifts to 60 percent or so for works from the 1940s. But there are murky issues about ownership and other factors for as many as 30 percent of books from 1923 to 1963. It’s impossible to determine copyright status easily for them.

It’s easier to prove a renewal was issued than not, making it difficult for those who want to make use of material without any risk of challenge. Jennifer Jenkins, the director of Duke’s Center for the Study of the Public Domain, says, “Even if works from 1923 technically entered the public domain earlier because of nonrenewal, next year will be different, because then we’ll know for sure that these works are in the public domain without tedious research.”

A Mass of Copyrighted Works Will Soon Enter the Public Domain [Glenn Fleishman/The Atlantic]

(via /.)

Customer takes Bell to court and wins


A Toronto man is elated after a deputy judge ruled that a verbal contract he made with a Bell customer service agent trumps the contract the telecom later emailed him, noting prices could increase.

In a judgment issued last month in a Toronto small claims court, Deputy Judge William C. De Lucia said that Bell's attempt to impose new terms after a verbal contract guaranteeing a monthly price for 24 months had been struck was "high-handed, arbitrary and unacceptable."

It all started in November 2016, when David Ramsay called a Bell customer service representative to inquire about TV and internet services.

A small claims court deputy judge ruled it would be unfair and prejudicial if Bell changed contract terms. (Mark Bochsler/CBC)

The sales agent told Ramsay he could get Bell's Fibe TV and internet services "for $112.90 a month for 24 months" and then said he'd get an "email confirmation of everything that was just discussed."

But when the email arrived, it said prices were actually "subject to change" and that Bell was planning to increase its price for internet service by $5, two months later.

"I was stunned and appalled to find these buried terms in an email," says Ramsay. "I had a contract, and this ain't that contract."

Ramsay called Bell to say the emailed contract was different from the verbal contract he'd made on the phone.

In a move that was pivotal to his legal case, he requested a transcript of the call in which the customer service rep promised him a fixed price for two years.

  • To request a transcript of a phone call with Bell, email: privacy@Bell.ca

"They kept saying, 'Everyone has to pay those price increases,'" says Ramsay. "'Everyone has to pay.'"

Undeterred, Ramsay filed a complaint with the Commission for Complaints for Telecom-television Services (the CCTS), a mediator between customers and telecom providers.

In a lengthy email exchange, a spokesperson for the CCTS insisted that Bell had the right to increase prices; since the telecom had notified Ramsay of this fact, as well as of an upcoming price increase, the CCTS ruled that the telecom provider had met its obligations and that no further investigation was warranted.

The CCTS closed Ramsay's file.

"I couldn't believe it," says Ramsay. "They just refused to consider my argument that I had a verbal contract. I even sent them a link to that section of the law, which they ignored."

'There's a principle at stake here'

Ramsay had consulted a couple of lawyer friends, who told him they thought he was on the right track, that a verbal agreement was binding.

"Even though the dollar amount was small," says Ramsay, "I got on my white horse and thought, 'There's a principle at stake here. Let's take them to small claims court and see what happens.'"

Industry-wide problem

Ramsay also figured he wasn't the only one who had the same concerns about Bell's pricing, in part because of stories he'd seen by Go Public and other media reports.

Go Public has received over 100 similar complaints from customers who say Bell sales agents promised them a guaranteed monthly price, only to receive an emailed contract saying prices could go up.

A recent Go Public/Marketplace hidden camera investigation captured sales reps for Bell falsely telling customers their monthly price would not increase for two years. (Marketplace/CBC)

In a joint Go Public/Marketplace investigation earlier this year, sales agents for Bell were repeatedly caught on hidden camera falsely promising customers that prices for TV, internet and home phone deals would not change for 24 months.

Customers from other telecoms — such as Rogers and Telus — have written to say they, too, were promised a price by a sales rep only to receive an email mentioning that prices could change.

According to a recent report from the CCTS, between August 2017 and January 2018 the number 1 complaint it received — from almost 2,000 customers — was that telecom providers gave misleading information or did not disclose all contract terms.

Bell sought confidentiality agreement

Before they got to court, Bell offered Ramsay money to drop the case — $300, roughly the amount Ramsay estimated the telecom would be over-billing him for two years. He declined.

"I wanted a judge to rule on the merits of this case," he says. "And if I happened to win, I thought it'd be a useful case for others to know about."

Three weeks before the court date, Bell contacted Ramsay again. He was offered $1,000 to settle, but was required to sign a confidentiality agreement. Again, Ramsay declined.

"I thought the merits of the case were good," he says. "Not to get too self-righteous, but I thought it was a battle worth having. So I said, 'Onward, ho!'"

Off to small claims court

Representing himself, Ramsay appeared in a Toronto small claims court on March 19, armed with what he calls his "smoking gun" — the transcript of his conversation with the Bell sales rep.

He highlighted two specific comments by that agent: one in which she told him, "Your total cost for the 24 months will be $112.90 per month," and another in which she said, "You're going to get an email confirmation of everything that was just discussed."

Bell stuck to its argument that it had emailed contract details shortly after Ramsay's call, so the contents of that email were what should be binding. It also said the customer service agent Ramsay spoke to did not know about a planned price increase, which is why that wasn't mentioned, and claimed that, because Ramsay had continued with Bell's service, he was essentially agreeing to the telecom's contract terms.

De Lucia was not swayed by those arguments, saying in his reasons for judgment, "I find that Bell can not unilaterally insert or impose new terms. Any imposition of new terms ... is unenforceable."

De Lucia said Bell has the right to impose price changes, but not during a contract when a monthly price has been agreed upon.

"To alter or change the terms, as Bell has requested," said De Lucia, "would be grossly unfair, grossly prejudicial to the plaintiff and unconscionable."

The deputy judge ordered Bell to pay Ramsay $1,110 to cover the cost of damages, his time, inconvenience and miscellaneous costs.

Bell won't comment on judgment 

Go Public asked Bell for an on-camera interview, but the request was declined.

It also refused to comment on the deputy judge's findings, and would not address complaints by other customers who say they were not [verbally] told prices were subject to change when they purchased services.

In an email, Bell's senior manager of media relations admitted the call centre rep did not tell Ramsay that prices were subject to change and said Bell had "informed the customer service team involved and they are using it as a coaching opportunity."

The Bell spokesperson also said the company had offered to cancel Ramsay's contract without penalty.

Ramsay told Go Public that he didn't cancel, because he wanted the services at the price he had [verbally] negotiated.

CCTS changes tune

In an apparent turnaround, when Go Public contacted the CCTS to discuss Ramsay's victory, commissioner Howard Maker said the organization believes an oral contract is binding.

"If a customer calls a service provider on the phone and they make a deal for a package of services for a fixed price, that's a deal," said Maker.

It's also the opposite of what the CCTS employee handling Ramsay's case determined.

Howard Maker of the CCTS says his organization will review how David Ramsay's case was handled. (Andrew Lee/CBC)

"We are human," said Maker. "So did we make an error? Maybe ... we'll do our analysis ... and we'll take appropriate steps."

Ramsay wants the CCTS to re-open similar cases where staff erroneously told customers they had to pay price increases when a telecom sales rep didn't inform them of those changes before locking them into a contract.

"I'm sure they have hundreds of cases just like mine," says Ramsay. "So I think it's incumbent on the CCTS to take notice of this and review a bunch of those cases."

Grounds for class action

Meanwhile, an expert on contract law says he foresees a lot of consumer interest in this "David vs. Goliath" case.

"It should really make consumers feel very confident," says Anthony Daimsis, a contract law professor at the University of Ottawa.

"Should they choose to all get together, instead of having to deal with these claims one at a time, they could probably make a very good case for one big class action."

Contract law Prof. Anthony Daimsis says the small claims court ruling could have big implications for Bell and other telecoms, suggesting customers could initiate a class-action lawsuit. (Andrew Lee/CBC)

Even though the case was heard in small claims court, Daimsis says the judgment was "persuasive" and likely how a higher court would rule.

Daimsis considers the judgment a warning to all telecom providers.

"What it should signal to other outfits that are operating this way, is that this is not the way Canadian courts will accept how larger parties act with consumers."

Submit your story ideas

Go Public is an investigative news segment on CBC-TV, radio and the web.

We tell your stories and hold the powers that be accountable.

We want to hear from people across the country with stories you want to make public.

Submit your story ideas at Go Public.

Follow @CBCGoPublic on Twitter.

A puzzle that tiles infinitely across both sides, based on the Klein Bottle


Have you ever done a puzzle that has no beginning or end? Where you don’t know up from down? Get lost in the infinite galaxy puzzle.

The infinity puzzles are a new type of jigsaw puzzle inspired by topological spaces that continuously tile. Because of that, they have no fixed shape, no starting point, and no edges. They can be assembled in thousands of different ways.

Our puzzles are all about bringing back the artistry and playfulness of traditional hand-cut puzzles while exploring the possibilities of new technology. A part of that artisan tradition is "trick" puzzles, puzzles that can be assembled in multiple ways, often to create a witty pun or interesting transformation. Our infinity puzzles build on that tradition with a new mathematical twist that would be almost impossible with hand cutting: a puzzle that tiles in every direction. Our generative puzzles are made with math, science and lasers. The intricate branching shapes of our puzzle pieces emerge from a simulation of crystal growth and are laser-cut from plywood. By combining mathematical simulation with precision CNC cutting, we create new kinds of jigsaw puzzles that could never be made before.

a small yet challenging puzzle based on a torus

The Infinity Puzzle ($50, 6 x 6 inches, 51 pieces) is a challenging wood puzzle that tiles in the plane. This means that any piece on the bottom can be moved to the top and a piece on the right can be moved to the left. Multiple copies of the puzzle can be combined in different colors to create abstract patterns and shapes. Topologically, we can say this is equivalent to a torus, which can be described by its fundamental polygon showing how edges of a square map to each other to make a closed shape. We create the tiling piece shapes by modifying our simulation to wrap like a torus. It was first created for the 2016 Puzzle Parley.

[Diagram: the fundamental polygon of the torus]

This puzzle is extra challenging as it has no image or defined shape to guide assembly. Multiple infinity puzzles can be combined to create a larger continuous puzzle. The image above shows some of the creative combinations possible with two infinity puzzles of different colors ($75 for two).

a puzzle that tiles infinitely across both sides, based on the Klein Bottle

The Infinite Galaxy Puzzle ($130, 8 x 8 inches, 133 pieces) takes this idea one step further. Instead of mapping to a torus, this puzzle maps to a Klein bottle, an impossible 3D shape where the inside and outside are mathematically indistinguishable. This means that the puzzle tiles with a flip. Pieces from the right side attach to the left side but only after they have been flipped over. Just like the Klein bottle’s surface has no inside or outside, the puzzle has no up or down side. You can start the puzzle anywhere on any side. This puzzle is adorned with a photograph of the galactic center from the Hubble observatory (source). The image is continuous from one side of the puzzle to the other, so it’s not possible to see the entire image at once. Explore the galaxy while assembling the puzzle in multiple ways. The puzzle also features 3 special space themed whimsy pieces shaped like an astronaut, a space shuttle and a satellite.
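
The "tiles with a flip" rule can be stated precisely. Here is a small sketch (our own illustration, not the designers' code) contrasting torus wrapping with Klein-bottle wrapping for a point that leaves the unit square:

    def wrap_torus(x, y, w=1.0, h=1.0):
        """Infinity Puzzle tiling: plain wrap-around in both directions."""
        return x % w, y % h

    def wrap_klein(x, y, w=1.0, h=1.0):
        """Infinite Galaxy Puzzle tiling: each vertical wrap also mirrors
        horizontally, the 'flip' that lets pieces attach face-down."""
        mirrored = int(y // h) % 2 == 1   # odd number of vertical wraps
        x = (w - x % w) % w if mirrored else x % w
        return x, y % h

    print(wrap_torus(1.2, 1.3))  # -> (0.2, 0.3), up to float rounding
    print(wrap_klein(1.2, 1.3))  # -> (0.8, 0.3): same cell, mirrored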

[Diagram: the fundamental polygon of the Klein bottle]

Ready to take on the challenge? Shop our entire line of generative jigsaw puzzles here.


Americans Are a Lonely Lot, and Young People Bear the Heaviest Burden


By Rhitu Chatterjee

NPR.org, May 1, 2018 · Loneliness isn't just a fleeting feeling, leaving us sad for a few hours to a few days. Research in recent years suggests that for many people, loneliness is more like a chronic ache, affecting their daily lives and sense of well-being.

Now a nationwide survey by the health insurer Cigna underscores that. It finds that loneliness is widespread in America, with nearly 50 percent of respondents reporting that they feel alone or left out always or sometimes.

Using one of the best-known tools for measuring loneliness — the UCLA Loneliness Scale— Cigna surveyed 20,000 adults online across the country. The University of California, Los Angeles tool uses a series of statements and a formula to calculate a loneliness score based on responses. Scores on the UCLA scale range from 20 to 80. People scoring 43 and above were considered lonely in the Cigna survey, with a higher score suggesting a greater level of loneliness and social isolation.
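
For concreteness, here is a sketch of how such a score is assembled (a simplification on our part: the actual UCLA instrument has 20 items rated 1 to 4 and reverse-scores several of them, which this ignores):

    def ucla_loneliness_score(responses):
        """Sum of 20 items, each rated 1 ('never') to 4 ('often'),
        which yields the 20-80 range described above."""
        assert len(responses) == 20 and all(1 <= r <= 4 for r in responses)
        return sum(responses)

    score = ucla_loneliness_score([2] * 10 + [3] * 10)
    print(score, "lonely" if score >= 43 else "not lonely")  # -> 50 lonely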

More than half of survey respondents — 54 percent — said they always or sometimes feel that no one knows them well. Fifty-six percent reported they sometimes or always felt like the people around them "are not necessarily with them." And 2 in 5 felt like "they lack companionship," that their "relationships aren't meaningful" and that they "are isolated from others."

The survey found that the average loneliness score in America is 44, which suggests that "most Americans are considered lonely," according to the report released Tuesday by the health insurer.

"Half of Americans view themselves as lonely," said David Cordani, president and CEO of Cigna Corp. "I can't help but be surprised [by that]." (Cigna is an NPR sponsor and a major provider of health insurance for NPR employees.)

But the results are consistent with other previous research, says Julianne Holt-Lunstad, a psychologist at Brigham Young University, who studies loneliness and its health effects. She wasn't involved in the Cigna survey. While it's difficult to compare the loneliness scores in different studies, she says, other nationally representative estimates have found between 20 percent and 43 percent of Americans report feeling lonely or socially isolated.

Loneliness has health consequences. "There's a blurred line between mental and physical health," says Cordani. "Oftentimes, medical symptoms present themselves and they're correlated with mental, lifestyle, behavioral issues like loneliness."

Several studies in recent years, including ones by Holt-Lunstad, have documented the public health effect of loneliness. It has been linked with a higher risk of coronary heart disease and stroke. It has been shown to influence our genes and our immune systems, and even recovery from breast cancer.

And there is growing evidence that loneliness can kill. "We have robust evidence that it increases risk for premature mortality," says Holt-Lunstad. Studies have found that it is a predictor of premature death, not just for the elderly, but even more so for younger people.

The latest survey also found something surprising about loneliness in the younger generation. "Our survey found that actually the younger generation was lonelier than the older generations," says Dr. Douglas Nemecek, the chief medical officer for behavioral health at Cigna.

Members of Generation Z, born between the mid-1990s and the early 2000s, had an overall loneliness score of 48.3. Millennials, just a little bit older, scored 45.3. By comparison, baby boomers scored 42.4. The Greatest Generation, people ages 72 and above, had a score of 38.6 on the loneliness scale.

"Too often people think that this [problem] is specific to older adults," says Holt-Lunstad. "This report helps with the recognition that this can affect those at younger ages."

In fact, some research published in 2017 by psychologist Jean Twenge at San Diego State University suggests that more screen time and social media may have caused a rise in depression and suicide among American adolescents. The study also found that people who spend less time looking at screens and more time having face-to-face social interactions are less likely to be depressed or suicidal.

However, the Cigna survey didn't find a correlation between social media use and feelings of loneliness. On the surface, that would seem to contradict the new findings on screen time, but Holt-Lunstad says previous research shows that how people use social media determines its influence on their sense of isolation.

"If you're passively using it, if you're just scrolling feeds, that's associated with more negative effects," she says. "But if you're using it to reach out and connect to people to facilitate other kinds of [in-person] interactions, it's associated with more positive effects."

That last finding is also corroborated by the Cigna survey across all age groups. Respondents who said they have more in-person social interactions on a daily basis reported being less lonely.

The survey also found that working too little or too much is associated with the experience of loneliness, suggesting that our workplaces are an important source of our social relationships and that work-life balance is important for avoiding loneliness.

Cigna wants to work with employers to "help address loneliness in the workplace," says Nemecek.

Social connection or the lack of it is now considered a social determinant of health. In a 2014 report, the Institute of Medicine (now the Health and Medicine Division of the National Academies of Sciences, Engineering, and Medicine) suggested that health providers should collect information about patients' "social connections and social isolation" along with information on education, employment, lifestyle (diet, exercise, smoking, etc.) and psychological health.

"But this hasn't happened," says Holt-Lunstad. "I would hope that with a large insurer like Cigna [releasing a report on loneliness], that it would start to be more on the radar of major health organizations but also actual health care providers."

© NPR

C Is Not a Low-Level Language

Programming Languages

David Chisnall

In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades.

Processor vendors are not alone in this. Those of us working on C/C++ compilers have also participated.

What Is a Low-Level Language?

Computer science pioneer Alan Perlis defined low-level languages this way:

"A programming language is low level when its programs require attention to the irrelevant."5

While, yes, this definition applies to C, it does not capture what people desire in a low-level language. Various attributes cause people to regard a language as low-level. Think of programming languages as belonging on a continuum, with assembly at one end and the interface to the Starship Enterprise's computer at the other. Low-level languages are "close to the metal," whereas high-level languages are closer to how humans think.

For a language to be "close to the metal," it must provide an abstract machine that maps easily to the abstractions exposed by the target platform. It's easy to argue that C was a low-level language for the PDP-11. They both described a model in which programs executed sequentially, in which memory was a flat space, and even the pre- and post-increment operators cleanly lined up with the PDP-11 addressing modes.
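
To see what "close to the metal" meant in practice, consider a copy expression (my example, not the article's):

*dst++ = *src++;

Each side of the assignment lines up with a PDP-11 autoincrement addressing mode, so the whole statement could compile down to roughly a single MOV instruction.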

Fast PDP-11 Emulators

The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware.

C code provides a mostly serial abstract machine (until C11, an entirely serial machine if nonstandard vendor extensions were excluded). Creating a new thread is a library operation known to be expensive, so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism). They inspect adjacent operations and issue independent ones in parallel. This adds a significant amount of complexity (and power consumption) to allow programmers to write mostly sequential code. In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs.

The quest for high ILP was the direct cause of Spectre and Meltdown. A modern Intel processor has up to 180 instructions in flight at a time (in stark contrast to a sequential C abstract machine, which expects each operation to complete before the next one begins). A typical heuristic for C code is that there is a branch, on average, every seven instructions. If you wish to keep such a pipeline full from a single thread, then you must guess the targets of the next 25 branches. This, again, adds complexity; it also means that an incorrect guess results in work being done and then discarded, which is not ideal for power consumption. This discarded work has visible side effects, which the Spectre and Meltdown attacks could exploit.

On a modern high-end core, the register rename engine is one of the largest consumers of die area and power. To make matters worse, it cannot be turned off or power gated while any instructions are running, which makes it inconvenient in a dark silicon era when transistors are cheap but powered transistors are an expensive resource. This unit is conspicuously absent on GPUs, where parallelism again comes from multiple threads rather than trying to extract instruction-level parallelism from intrinsically scalar code. If instructions do not have dependencies that need to be reordered, then register renaming is not necessary.

Consider another core part of the C abstract machine's memory model: flat memory. This hasn't been true for more than two decades. A modern processor often has three levels of cache in between registers and main memory, which attempt to hide latency.

The cache is, as its name implies, hidden from the programmer and so is not visible to C. Efficient use of the cache is one of the most important ways of making code run quickly on a modern processor, yet this is completely hidden by the abstract machine, and programmers must rely on knowing implementation details of the cache (for example, two values that are 64-byte-aligned may end up in the same cache line) to write efficient code.
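
As a sketch of the kind of implementation knowledge the article means (my example; the 64-byte line size is an assumption about the target, not something C guarantees):

/* Hand-padding a shared counter out to an assumed cache-line size to
   avoid false sharing between threads. C's abstract machine has no
   notion of a cache, so this must be done by convention. */
struct counter {
    long value;
    char pad[64 - sizeof(long)];
};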

Optimizing C

One of the common attributes ascribed to low-level languages is that they're fast. In particular, they should be easy to translate into fast code without requiring a particularly complex compiler. The argument that a sufficiently smart compiler can make a language fast is one that C proponents often dismiss when talking about other languages.

Unfortunately, simple translation providing fast code is not true for C. In spite of the heroic efforts that processor architects invest in trying to design chips that can run C code fast, the levels of performance expected by C programmers are achieved only as a result of incredibly complex compiler transforms. The Clang compiler, including the relevant parts of LLVM, is around 2 million lines of code. Even just counting the analysis and transform passes required to make C run quickly adds up to almost 200,000 lines (excluding comments and blank lines).

For example, in C, processing a large amount of data means writing a loop that processes each element sequentially. To run this optimally on a modern CPU, the compiler must first determine that the loop iterations are independent. The C restrict keyword can help here. It guarantees that writes through one pointer do not interfere with reads via another (or if they do, that the programmer is happy for the program to give unexpected results). This information is far more limited than in a language such as Fortran, which is a big part of the reason that C has failed to displace Fortran in high-performance computing.
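
As a minimal sketch of what restrict buys the compiler (the function and names here are illustrative, not from the article):

/* With restrict, the compiler may assume dst and src never alias, so
   the loop iterations are provably independent and can be vectorized
   without a runtime overlap check. */
void scale(float *restrict dst, const float *restrict src, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = 2.0f * src[i];
}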

Once the compiler has determined that loop iterations are independent, then the next step is to attempt to vectorize the result, because modern processors get four to eight times the throughput in vector code that they achieve in scalar code. A low-level language for such processors would have native vector types of arbitrary lengths. LLVM IR (intermediate representation) has precisely this, because it is always easier to split a large vector operation into smaller ones than to construct larger vector operations.

Optimizers at this point must fight the C memory layout guarantees. C guarantees that structures with the same prefix can be used interchangeably, and it exposes the offset of structure fields into the language. This means that a compiler is not free to reorder fields or insert padding to improve vectorization (for example, transforming a structure of arrays into an array of structures or vice versa). That's not necessarily a problem for a low-level language, where fine-grained control over data structure layout is a feature, but it does make it harder to make C fast.
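
A small sketch of the guarantee in question (my example, using C11's _Static_assert):

#include <stddef.h>

struct point2 { int x, y; };
struct point3 { int x, y, z; };  /* shares the {x, y} prefix */

/* Field offsets are visible to the program, so the compiler cannot
   reorder fields or change padding behind the programmer's back. */
_Static_assert(offsetof(struct point2, y) == offsetof(struct point3, y),
               "common prefix must have identical layout");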

C also requires padding at the end of a structure because it guarantees no padding in arrays. Padding is a particularly complex part of the C specification and interacts poorly with other parts of the language. For example, you must be able to compare two structs using a type-oblivious comparison (e.g., memcmp), so a copy of a struct must retain its padding. In some experimentation, a noticeable amount of total runtime on some workloads was found to be spent in copying padding (which is often awkwardly sized and aligned).
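
A sketch of why copies must preserve padding (my example; the exact padding depends on the ABI):

#include <string.h>

struct s { char c; /* typically 3 padding bytes here */ int i; };

/* memcmp is type-oblivious: it compares the padding bytes along with
   the fields, so a struct copy that skipped padding would break it. */
int equal(const struct s *a, const struct s *b) {
    return memcmp(a, b, sizeof *a) == 0;
}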

Consider two of the core optimizations that a C compiler performs: SROA (scalar replacement of aggregates) and loop unswitching. SROA attempts to replace structs (and arrays with fixed lengths) with individual variables. This then allows the compiler to treat accesses as independent and elide operations entirely if it can prove that the results are never visible. This has the side effect of deleting padding in some cases but not others.
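
A tiny sketch of what SROA enables (my example):

struct pair { int a, b; };

int sum(void) {
    struct pair p = {1, 2};  /* SROA can split p into two scalars... */
    return p.a + p.b;        /* ...and then fold the result to 3 */
}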

The second optimization, loop unswitching, transforms a loop containing a conditional into a conditional with a loop in both paths. This changes flow control, contradicting the idea that a programmer knows what code will execute when low-level language code runs. It can also cause significant problems with C's notions of unspecified values and undefined behavior.

In C, a read from an uninitialized variable is an unspecified value and is allowed to be any value each time it is read. This is important, because it allows behavior such as lazy recycling of pages: for example, on FreeBSD the malloc implementation informs the operating system that pages are currently unused, and the operating system uses the first write to a page as the hint that this is no longer true. A read from newly malloced memory may initially return the old value; then the operating system may reuse the underlying physical page and, on the next write to a different location in the page, replace it with a newly zeroed page. The second read from the same location will then give a zero value.
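
A sketch of the consequence (my example; whether the values actually differ depends on the allocator and the operating system):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *p = malloc(4096);
    if (p == NULL) return 1;
    int a = p[0];  /* unspecified value: perhaps a stale, recycled one */
    int b = p[0];  /* allowed to differ, e.g. after the OS swaps in a
                      freshly zeroed page */
    printf("%d %d\n", a, b);
    free(p);
    return 0;
}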

If an unspecified value is used for flow control (for example, as the condition in an if statement), then the result is undefined behavior: anything is allowed to happen. Consider the loop-unswitching optimization, this time in the case where the loop ends up being executed zero times. In the original version, the entire body of the loop is dead code. In the unswitched version, there is now a branch on the variable, which may be uninitialized. Some dead code has now been transformed into undefined behavior. This is just one of many optimizations that a close investigation of the C semantics shows to be unsound.
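
A sketch of the transformation (my example; the names are illustrative):

/* Original: if n == 0, the body never runs and b is never read. */
void fill(int *a, int n, int b) {
    for (int i = 0; i < n; i++) {
        if (b) a[i] = 1;
        else   a[i] = 2;
    }
}

/* Unswitched form the compiler may emit: b is branched on even when
   n == 0, so a call with an uninitialized b has gone from dead code
   to undefined behavior. */
void fill_unswitched(int *a, int n, int b) {
    if (b) { for (int i = 0; i < n; i++) a[i] = 1; }
    else   { for (int i = 0; i < n; i++) a[i] = 2; }
}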

In summary, it is possible to make C code run quickly but only by spending thousands of person-years building a sufficiently smart compiler—and even then, only if you violate some of the language rules. Compiler writers let C programmers pretend that they are writing code that is "close to the metal" but must then generate machine code that has very different behavior if they want C programmers to keep believing that they are using a fast language.

Understanding C

One of the key attributes of a low-level language is that programmers can easily understand how the language's abstract machine maps to the underlying physical machine. This was certainly true on the PDP-11, where each C expression mapped trivially to one or two instructions. Similarly, the compiler performed a straightforward lowering of local variables to stack slots and mapped primitive types to things that the PDP-11 could operate on natively.

Since then, implementations of C have had to become increasingly complex to maintain the illusion that C maps easily to the underlying hardware and gives fast code. A 2015 survey of C programmers, compiler writers, and standards committee members raised several issues about the comprehensibility of C [3]. For example, C permits an implementation to insert padding into structures (but not into arrays) to ensure that all fields have a useful alignment for the target. If you zero a structure and then set some of the fields, will the padding bits all be zero? According to the results of the survey, 36 percent were sure that they would be, and 29 percent didn't know. Depending on the compiler (and optimization level), it may or may not be.
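
The survey question, roughly, in code (my sketch):

#include <string.h>

struct s { char c; /* padding */ int i; };

void example(void) {
    struct s v;
    memset(&v, 0, sizeof v);  /* every byte, padding included, is zero */
    v.c = 1;
    v.i = 2;
    /* Are the padding bytes still zero? Member assignment is allowed
       to overwrite padding, so the answer varies by compiler. */
}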

This is a fairly trivial example, yet a significant proportion of programmers either believe the wrong thing or are not sure. When you introduce pointers, the semantics of C become a lot more confusing. The BCPL model was fairly simple: values are words. Each word is either some data or the address of some data. Memory is a flat array of storage cells indexed by address.

The C model, in contrast, was intended to allow implementation on a variety of targets, including segmented architectures (where a pointer might be a segment ID and an offset) and even garbage-collected virtual machines. The C specification is careful to restrict valid operations on pointers to avoid problems for such systems. The response to Defect Report 260 [1] included the notion of pointer provenance in the definition of pointer:

"Implementations are permitted to track the origins of a bit pattern and treat those representing an indeterminate value as distinct from those representing a determined value. They may also treat pointers based on different origins as distinct even though they are bitwise identical."

Unfortunately, the word provenance does not appear in the C11 specification at all, so it is up to compiler writers to decide what it means. GCC (GNU Compiler Collection) and Clang, for example, differ on whether a pointer that is converted to an integer and back retains its provenance through the casts. Compilers are free to determine that two pointers to different malloc results or stack allocations always compare as not-equal, even when a bitwise comparison of the pointers may show them to describe the same address.
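
A sketch of that pointer-comparison point (my example, in the spirit of the examples in [3]; actual behavior varies by compiler and optimization level):

#include <stdio.h>

int main(void) {
    int x = 1, y = 2;
    int *p = &x + 1;  /* one past the end of x: a valid pointer value */
    int *q = &y;
    /* p and q may be bitwise identical if y happens to sit directly
       after x, yet a compiler may still fold this comparison to 0
       because the two pointers have different provenance. */
    printf("%d\n", p == q);
    return 0;
}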

These misunderstandings are not purely academic in nature. For example, security vulnerabilities have been observed from signed integer overflow (undefined behavior in C) and from code that dereferenced a pointer before a null check, indicating to the compiler that the pointer could not be null because dereferencing a null pointer is undefined behavior in C and therefore can be assumed not to happen (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-1897).
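
The dereference-before-null-check pattern looks like this (my sketch of the pattern, not the actual CVE code):

struct conn { int fd; };

int get_fd(struct conn *c) {
    int fd = c->fd;  /* dereference first: tells the compiler c is
                        non-null, since a null c would already be UB */
    if (c == NULL)   /* ...so this check may be silently deleted */
        return -1;
    return fd;
}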

In light of such issues, it is difficult to argue that a programmer can be expected to understand exactly how a C program will map to an underlying architecture.

Imagining a Non-C Processor

The proposed fixes for Spectre and Meltdown impose significant performance penalties, largely offsetting the advances in microarchitecture in the past decade. Perhaps it's time to stop trying to make C code fast and instead think about what programming models would look like on a processor designed to be fast.

We have a number of examples of designs that have not focused on traditional C code to provide some inspiration. For example, highly multithreaded chips, such as Sun/Oracle's UltraSPARC Tx series, don't require as much cache to keep their execution units full. Research processors [2] have extended this concept to very large numbers of hardware-scheduled threads. The key idea behind these designs is that with enough high-level parallelism, you can suspend the threads that are waiting for data from memory and fill your execution units with instructions from others. The problem with such designs is that C programs tend to have few busy threads.

ARM's SVE (Scalar Vector Extensions)—and similar work from Berkeley4—provides another glimpse at a better interface between program and hardware. Conventional vector units expose fixed-sized vector operations and expect the compiler to try to map the algorithm to the available unit size. In contrast, the SVE interface expects the programmer to describe the degree of parallelism available and relies on the hardware to map it down to the available number of execution units. Using this from C is complex, because the autovectorizer must infer the available parallelism from loop structures. Generating code for it from a functional-style map operation is trivial: the length of the mapped array is the degree of available parallelism.

Caches are large, but their size isn't the only reason for their complexity. The cache coherency protocol is one of the hardest parts of a modern CPU to make both fast and correct. Most of the complexity involved comes from supporting a language in which data is expected to be both shared and mutable as a matter of course. Consider in contrast an Erlang-style abstract machine, where every object is either thread-local or immutable (Erlang has a simplification of even this, where there is only one mutable object per thread). A cache coherency protocol for such a system would have two cases: mutable or shared. A software thread being migrated to a different processor would need its cache explicitly invalidated, but that's a relatively uncommon operation.

Immutable objects can simplify caches even more, as well as making several operations even cheaper. Sun Labs' Project Maxwell noted that the objects in the cache and the objects that would be allocated in a young generation are almost the same set. If objects are dead before they need to be evicted from the cache, then never writing them back to main memory can save a lot of power. Project Maxwell proposed a young-generation garbage collector (and allocator) that would run in the cache and allow memory to be recycled quickly. With immutable objects on the heap and a mutable stack, a garbage collector becomes a very simple state machine that is trivial to implement in hardware and allows for more efficient use of a relatively small cache.

A processor designed purely for speed, not for a compromise between speed and C support, would likely support large numbers of threads, have wide vector units, and have a much simpler memory model. Running C code on such a system would be problematic, so, given the large amount of legacy C code in the world, it would not likely be a commercial success.

There is a common myth in software development that parallel programming is hard. This would come as a surprise to Alan Kay, who was able to teach an actor-model language to young children, with which they wrote working programs with more than 200 threads. It comes as a surprise to Erlang programmers, who commonly write programs with thousands of parallel components. It's more accurate to say that parallel programming in a language with a C-like abstract machine is difficult, and given the prevalence of parallel hardware, from multicore CPUs to many-core GPUs, that's just another way of saying that C doesn't map to modern hardware very well.

References

1. C Defect Report 260. 2004; http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_260.htm.

2. Chadwick, G. A. 2013. Communication centric, multi-core, fine-grained processor architecture. Technical Report 832. University of Cambridge, Computer Laboratory; http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-832.pdf.

3. Memarian, K., Matthiesen, J., Lingard, J., Nienhuis, K., Chisnall, D., Watson, R. N. M., Sewell, P. 2016. Into the depths of C: elaborating the de facto standards. Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation: 1-15; http://dl.acm.org/authorize?N04455.

4. Ou, A., Nguyen, Q., Lee, Y., Asanović, K. 2014. A case for MVPs: mixed-precision vector processors. Second International Workshop on Parallelism in Mobile Platforms at the 41st International Symposium on Computer Architecture.

5. Perlis, A. 1982. Epigrams on programming. ACM SIGPLAN Notices 17(9).

Related articles

The Challenge of Cross-language Interoperability
David Chisnall
Interfacing between languages is increasingly important.
https://queue.acm.org/detail.cfm?id=2543971

Finding More than One Worm in the Apple
Mike Bland
If you see something, say something.
https://queue.acm.org/detail.cfm?id=2620662

Coding for the Code
Friedrich Steimann, Thomas Kühne
Can models provide the DNA for software development?
https://queue.acm.org/detail.cfm?id=1113336

David Chisnall is a researcher at the University of Cambridge, where he works on programming language design and implementation. He spent several years consulting in between finishing his Ph.D. and arriving at Cambridge, during which time he also wrote books on Xen and the Objective-C and Go programming languages, as well as numerous articles. He also contributes to the LLVM, Clang, FreeBSD, GNUstep, and Étoilé open-source projects, and he dances the Argentine tango.

Copyright © 2018 held by owner/author. Publication rights licensed to ACM.

Originally published in Queue vol. 16, no. 2

Ask HN: Who is hiring? (May 2018)

Please state the job location and include the keywords REMOTE, INTERNS and/or VISA when the corresponding sort of candidate is welcome. When remote work is not an option, include ONSITE.

Please only post if you personally are part of the hiring company—no recruiting firms or job boards. Only one post per month, please. If it isn't a household name, explain what your company does.

Commenters: please don't reply to job posts to complain about something. It's off topic here.

Readers: please only email submitters if you personally are interested in the job—no recruiters or sales calls.

You can also use kristopolous' console script to search the thread: https://news.ycombinator.com/item?id=10313519.



Operator Framework: Building Apps on Kubernetes


To help make it easier to build Kubernetes applications, Red Hat and the Kubernetes open source community today share the Operator Framework, an open source toolkit designed to manage Kubernetes native applications, called Operators, in a more effective, automated, and scalable way.

Operators are Kubernetes applications

You may be familiar with Operators from the concept’s introduction in 2016. An Operator is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. To be able to make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage your applications that run on Kubernetes. You can think of Operators as the runtime that manages this type of application on Kubernetes.

Conceptually, an Operator takes human operational knowledge and encodes it into software that is more easily packaged and shared with consumers. Think of an Operator as an extension of the software vendor’s engineering team that watches over your Kubernetes environment and uses its current state to make decisions in milliseconds. Operators follow a maturity model that ranges from basic functionality to having specific logic for an application. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time.

The pieces that are now being launched as the Operator Framework are the culmination of the years of work and experience of our team in building Operators. We’ve seen that Operators’ capabilities differ in sophistication depending on how much intelligence has been added into the implementation logic of the Operator itself. We’ve also learned that the creation of an Operator typically starts by automating an application’s installation and self-service provisioning capabilities, and then evolves to take on more complex automation.

We believe that the new Operator Framework represents the next big step for Kubernetes by using a baseline of leading practices to help lower the application development barrier on Kubernetes. The project delivers a software development kit (SDK) and the ability to manage app installs and updates by using the lifecycle management mechanism, while enabling administrators to exercise Operator capabilities on any Kubernetes cluster.

The Operator Framework: Introducing the SDK, Lifecycle Management, and Metering

The Operator Framework is an open source project that provides developer and runtime Kubernetes tools, enabling you to accelerate the development of an Operator. The Operator Framework includes:

  • Operator SDK: Enables developers to build Operators based on their expertise without requiring knowledge of Kubernetes API complexities.
  • Operator Lifecycle Management: Oversees installation, updates, and management of the lifecycle of all of the Operators (and their associated services) running across a Kubernetes cluster.
  • Operator Metering (joining in the coming months): Enables usage reporting for Operators that provide specialized services.

Operator SDK

The Operator SDK provides the tools to build, test and package Operators. Initially, the SDK facilitates the marriage of an application’s business logic (for example, how to scale, upgrade, or backup) with the Kubernetes API to execute those operations. Over time, the SDK can allow engineers to make applications smarter and have the user experience of cloud services. Leading practices and code patterns that are shared across Operators are included in the SDK to help prevent reinventing the wheel.

[Diagram: build and test iteration loop with the Operator SDK]

Operator Lifecycle Manager

Once built, Operators need to be deployed on a Kubernetes cluster. The Operator Lifecycle Manager is the backplane that facilitates management of operators on a Kubernetes cluster. With it, administrators can control what Operators are available in what namespaces and who can interact with running Operators. They can also manage the overall lifecycle of Operators and their resources, such as triggering updates to both an Operator and its resources or granting a team access to an Operator for their slice of the cluster.

[Diagram: the lifecycle of multiple applications managed on a Kubernetes cluster]

Simple, stateless applications can leverage the Lifecycle Management features of the Operator Framework without writing any code by using a generic Operator (for example, the Helm Operator). However, complex and stateful applications are where an Operator can shine. The cloud-like capabilities that are encoded into the Operator code can provide an advanced user experience, automating such features as updates, backups and scaling.

Operator Metering

In a future version, the Operator Framework will also include the ability to meter application usage – a Kubernetes first, which provides extensions for central IT teams to budget and for software vendors providing commercial software. Operator Metering is designed to tie into the cluster’s CPU and memory reporting, as well as calculate IaaS cost and customized metrics like licensing.

We are actively working on Metering and it will be open-sourced and join the Framework in the coming months.

Operator Framework benefits

If you are a community member, builder, consumer of applications, or a user of Kubernetes overall, the Operator Framework offers a number of benefits.

For builders and the community

Today, there is often a high barrier to entry when it comes to building Kubernetes applications. There are a substantial number of pre-existing dependencies and assumptions, many of which may require experience and technical knowledge. At the same time, application consumers often do not want their services to be siloed across IT footprints with disparate management capabilities (for example, departments with differing tools for auditing, notification, metering, and so on).

The Operator Framework aims to address these points by helping to bring the expertise and knowledge of the Kubernetes community together in a single project that, when used as a standard application package, can make it easier to build applications for Kubernetes. By sharing this Framework with the community, we hope to enable an ecosystem of builders to more easily create their applications on Kubernetes via a common method and also provide a standardized set of tools to build consistent applications.

We believe a proper extension mechanism to Kubernetes shouldn’t be built without the community. To this end, Red Hat has proposed a “platform-dev” Special Interest Group that aligns well with the existing Kubebuilder project from Google and we look forward to working with other industry leaders should this group come to fruition.

“We are working together with Red Hat and the broader Kubernetes community to help enable this ecosystem with an easier way to create and operate their applications on Kubernetes,” said Phillip Wittrock, Software Engineer at Google, Kubernetes community member, and member of the Kubernetes steering committee. “By working together on platform development tools, we strive to make Kubernetes the foundation of choice for container-native apps - no matter where they reside.”

For application consumers and Kubernetes users

For consumers of applications across the hybrid cloud, keeping those applications up to date as new versions become available is of supreme importance, both for security reasons and for managing the applications’ lifecycles and other needs. The Operator Framework helps address these user requirements, aiding in the creation of cloud-native applications that are easier to consume, to keep updated, and to secure.

Get started

Learn more about the Operator Framework at https://github.com/operator-framework. A special thanks to the Kubernetes community for working alongside us. Take a test drive with the code-to-cluster reference example.

If you are at KubeCon 2018 in Europe, join our morning keynote on Thursday, May 3 to learn more about the framework. Can’t attend live? We’ll host an OpenShift Commons briefing on May 23 at 9 AM PT for a deeper dive on all things Operators.

Macbook Pro frying USB peripherals


So the weirdest thing has happened to my Mac.

It only affects the left furthermost USB-C port. When the charger is connected to the Mac (through any of the remaining 3 ports), this one port will supply 20 volts on standby instead of the regular 5 volts, effectively frying any regular USB peripheral that's connected. When the charger is not connected, the port operates normally and supplies 5V.

The test setup is as follows: an Apple USB-C to USB adapter is plugged into the left furthermost port, to which a regular USB cable is attached that exposes convenient measurement points for the multimeter.

Here's a measurement of the voltage with the charger disconnected:

[photo: multimeter reading, ~5 V]

and with the charger connected (to any of the remaining 3 USB-C ports):

[photo: multimeter reading, ~20 V]

How to send local files to Chromecast with Python


I have some files lying around in my computer with movies I want to watch. The process typically involves copying the file over to a pendrive, plugging that into the TV, and using the TV's media player. This works very well, but I am lazy. It also happens that I have a Chromecast on my dumb TV. It would be very convenient to figure out a way of sending files directly from my computer to the Chromecast. It turns out it's not difficult at all.

1. Install PyChromecast

PyChromecast is a wonderful Python library (3.4+) that allows you to send videos to your Chromecast at home. It handles everything, from device detection to buffering to playback controls (play, pause, and more).

I have an old-ish Ubuntu 14.04, but this did the trick:

# I like to keep stuff tidy
virtualenv -p /usr/bin/python3 ~/virtualenv/pychromecast
source ~/virtualenv/pychromecast/bin/activate
pip3 install pychromecast

2. Start a local HTTP server

Just go to the folder where you have the videos and start Python's SimpleHTTPServer module.

python -m SimpleHTTPServer

Leave that running in the background (if you're on Python 3, the equivalent is python3 -m http.server). We're almost there.

3. Send the video to your Chromecast

This might be different for you: I only have one Chromecast at home, so my device will always be the first in the list of detected devices. This is my test script:

import pychromecast

if __name__ == "__main__":
    cast = pychromecast.get_chromecasts()[0]
    mc = cast.media_controller
    mc.play_media("http://192.168.0.103:8000/video_test.mp4",
                  content_type="video/mp4")
    mc.block_until_active()
    mc.play()

At first I was trying to send the file directly, but that didn't work. The example provided in the documentation took the file from a web server, so I thought that perhaps the Chromecast can't receive a file but can receive a URL pointing to the contents of the video. That 192.168.0.103 up there is the internal IP of my computer.

And we're good to go. Please note that this script will return as soon as the movie is playing, so you won't have access to that mc object anymore. If would be better to run it inside a Python terminal, so you can always go back and pause or stop the stream if needed.

A new standard of beauty led to today’s weight-loss regimens

A flapper posing with a flapper pillow. Yann/Public Domain

In the early 20th century, Americans endlessly discussed and debated flappers. The Flapper, a magazine devoted to this new image of womanhood, used this description in 1922: “Bobbed hair, powder and rouge on the face; use of lipstick; ‘plucked’ eyebrows, low cut sleeveless bodice, [and] absence of corset.” All these elements were in their own way revolutionary—in earlier eras, heavy cosmetics were taboo, and clothing covered rather than revealed. But one aspect was left out: The flapper look was lean and androgynous, and maintaining that ideal often required a special “flapper diet.”

Over the centuries and across cultures, the ideal female body type has fluctuated. In many Western cultures, the pre-flapper generation considered a certain plumpness a sign of health, and fashion called for full skirts. But social reformers and women’s rights advocates had long been wary of abundant cloth, which could easily catch fire, and tight corsets, which could compress and deform women’s torsos. Lighter, shorter dresses became ever more fashionable after World War I, as did comfortable clothing and relaxed social mores. Restrictions on dating, dancing, and sex loosened. The cosmetic changes reflected changing opinions on femininity, and the person who most epitomized the new era was the corsetless, cosmetic-wearing, free-spirited flapper.

Yet other restrictions surfaced. Designers such as Coco Chanel popularized a slim silhouette. The bathroom scale (patented in 1916) became a household staple. Books, magazines, and the media began depicting fat as the result of insufficient willpower. While people have always dieted to fit their era’s beauty standards, the new female silhouette was a departure from previous buxom ideals. “Though the flapper image minimized breasts and hips, it radiated sensuality,” writes historian Margaret A. Lowe. The slender silhouette seemed modern. Female curves seemed old-fashioned.

Women's clothes became shorter, all the better for dancing. Library of Congress/LC-USZ62-63993

Suddenly, raw vegetables were in vogue. In Lowe’s study of the diet of Smith College students in the 1920s, she quoted a campus warden who noticed that consumption of potatoes had diminished, while students were eating more celery, tomatoes, and lettuce. Outside of Smith, people followed the Hollywood 18-Day Diet—a prototype of modern fads. Inspired by the burgeoning film industry, they ate only oranges, grapefruit, toast, and eggs.

But strict diets were no easier to follow back then than they are now. Yvonne Blue was a Chicago teenager who came of age in the 1920s. Her parents described her as “the personification of wild modern youth”—in other words, a flapper. In her diary, she recorded days of fasting and longing descriptions of the buttery grilled cheese and lemonade she denied herself. According to historian Joshua Zeitz, “the expectation that they starve themselves in pursuit of flapperdom [was] a very real dilemma for many young women in the 1920s.” It didn’t help that the decade introduced new processed treats like Reese’s Peanut Butter Cups, Good Humor ice cream, and Velveeta cheese.

The actresses that young women imitated were thin—or else. Slender stars such as Colleen Moore ate no potatoes, sweets, or butter. Though film was a newer medium, magazines extensively covered actresses’ diets and struggles with weight. Clara Bow was scrutinized every time she put on weight, and Barbara La Marr, who epitomized flapperdom’s wild side, died at age 29 from a combination of drug addiction and extreme dieting.

Stars like Lillian Gish embodied the sylph-like ideal. LOC/LC-USZ62-10139

Many stars and their fans depended on diets drawn up by strong personalities. The Medical Millennium Diet, pioneered by William B. Hayes, called for patients to chew slowly, eat one dish per meal, and endure regular enemas. But far more influential was doctor Lulu Hunt Peters. Her 1918 book Diet & Health: With Key to the Calories was the first weight-loss best-seller, and the first book to advocate calorie-counting to achieve a "modern" look.

With a chatty style and goofy illustrations, Peters told readers to ignore the unhelpful advice of friends and family about the dangers of reducing. Food as fuel was the mantra. “Any food eaten beyond what your system requires for its energy, growth, and repair, is fattening, or is an irritant, or both,” she wrote. A sample lunch consisted of cottage cheese and a French roll (unbuttered). To resist the lure of eating, Peters urged her audience to regard all food as potential calories. The responsibility of watching one’s weight, she wrote, was a worthwhile but lifelong struggle. Diet & Health became the bestselling nonfiction book of 1922. Peters, who was a newspaper columnist as well as a doctor, became “the best known and loved physician in America.”

Much flapper diet advice sounds familiar. Healthy food and exercise are touted as the best ways to slim down, then as now. But this was still relatively novel during the 1920s. “For a nation unaccustomed to a new ideal of slenderness, this was a tough ideal to achieve,” Zeitz writes. So women turned to laxative-laced weight-loss gums, slimming girdles, and cigarettes. Smoking distinguished flappers from their mothers and grandmothers, and cigarettes’ appetite-suppressing qualities were considered an asset.

A Lucky Strike advertisement from circa 1939. B Christopher / Alamy

That resulted in one of the biggest ad campaigns of the late 1920s. In 1928, the cigarette company Lucky Strike plastered colorful ads in magazines. In one, a pursed-lip flapper looks at the viewer. “To keep a slender figure no one can deny,” the ad trumpets, “Reach for a Lucky instead of a sweet.” The ads featured illustrations of women in long elegant dresses, and major film stars and Amelia Earhart endorsed the slogan. Marketing cigarettes as slimming agents for young women remained standard for years.

Soon enough, though, the flapper era was over. In 1931, the New York Times ran a story marveling at her disappearance, hastened by the collapse of the economy. She “is only a memory, as antique and romantic … as the Gibson girl,” the author wrote. She mused that the Depression-caused struggle of wheat farmers could be solved if former flappers went back to the bread-eating habits of their Victorian predecessors. But that never happened, and the slender flapper figure remains glamorous today.



Stateful Apps on Kubernetes: A quick primer


Over the past year, Kubernetes––also known as K8s––has become a dominant topic of conversation in the infrastructure world. Given its pedigree of literally working at Google-scale, it makes sense that people want to bring that kind of power to their DevOps stories; container orchestration turns many tedious and complex tasks into something as simple as a declarative config file.

The rise of orchestration is predicated on a few things, though. First, organizations have moved toward breaking up monolithic applications into microservices. However, the resulting environments have hundreds (or thousands) of these services that need to be managed. Second, infrastructure has become cheap and disposable––if a machine fails, it’s dramatically cheaper to replace it than triage the problems.

So, to solve the first issue, orchestration relies on the boon of the second; it manages services by simply letting new machines, running the exact same containers, take the place of failed ones, which keeps a service running without any manual interference.

However, the software most amenable to orchestration is software that can easily spin up new interchangeable instances without requiring coordination across zones.

Why Orchestrating Databases is Difficult

The above description of an orchestration-native service should sound like the opposite of a database, though.

  • Database replicas are not interchangeable; they each have a unique state. This means you cannot trivially bring them up and down at a moment’s notice.
  • Deploying a database replica requires coordination with other nodes running the same application to ensure things like schema changes and version upgrades are visible everywhere.

In short: managing state in Kubernetes is difficult because the system’s dynamism is too chaotic for most databases to handle––especially SQL databases that offer strong consistency.

Running a Database with a Kubernetes App

So, what’s a team to do? Well, you have a lot of options.

Run Your Database Outside Kubernetes

Instead of running your entire stack inside K8s, one approach is to continue to run the database outside Kubernetes. The main challenge with this, though, is that you must continue running an entire stack of infrastructure management tools for a single service. This means that even though Kubernetes has a high-quality, automated version of each of the following, you’ll wind up duplicating effort:

  • Process monitoring (monit, etc.)
  • Configuration management (Chef, Puppet, Ansible, etc.)
  • In-datacenter load balancing (HAProxy)
  • Service discovery (Consul, Zookeeper, etc.)
  • Monitoring and logging

That’s 5 technologies you’re on the hook for maintaining, each of which is duplicative of a service already integrated into Kubernetes.

Cloud Services

Rather than deal with the database at all, you can farm out the work to a database-as-a-service (DBaaS) provider. However, this still means that you’re running a single service outside of Kubernetes. While this is less of a burden, it is still an additional layer of complexity that could be instead rolled into your teams’ existing infrastructure.

For teams that are hosting Kubernetes themselves, it’s also strange to choose a DBaaS provider. These teams have put themselves in a situation where they could easily avoid vendor lock-in and maintain complete control of their stack.

DBaaS offerings also have their own shortcomings, though. The databases that underpin them are either built on dated technology that doesn’t scale horizontally, or require forgoing consistency entirely by relying on a NoSQL database.

Run Your Database in K8s––StatefulSets & DaemonSets

Kubernetes does have two integrated solutions that make it possible to run your database in Kubernetes:

StatefulSets

By far the most common way to run a database, StatefulSets is a feature fully supported as of the Kubernetes 1.9 release. Using it, each of your pods is guaranteed the same network identity and disk across restarts, even if it’s rescheduled to a different physical machine.

DaemonSets

DaemonSets let you specify that a group of nodes should always run a specific pod. In this way, you can set aside a set of machines and then run your database on them––and only your database, if you choose. This still leverages many of Kubernetes’ benefits like declarative infrastructure, but it forgoes the flexibility of a feature like StatefulSets that can dynamically schedule pods.

StatefulSets: In-Depth

StatefulSets were designed specifically to solve the problem of running stateful, replicated services inside Kubernetes. As we discussed at the beginning of this post, databases have more requirements than stateless services, and StatefulSets go a long way to providing that.

The primary feature that enables StatefulSets to run a replicated database within Kubernetes is providing each pod a unique ID that persists, even as the pod is rescheduled to other machines. The persistence of this ID then lets you attach a particular volume to the pod, retaining its state even as Kubernetes shifts it around your datacenter.

However, because you'll be detaching and attaching the same disk to multiple machines, you need to use a remote persistent disk, something like EBS in AWS parlance. These disks are located––as you might guess––remotely from any of the machines and are typically large block devices used for persistent storage. One of the benefits of using these disks is that the provider handles some degree of replication for you, making them more immune to typical disk failures, though this mainly benefits databases without built-in replication.

Performance Implications

Because Kubernetes itself runs on the machines that are running your databases, it will consume some resources and will slightly impact performance. In our testing, we found an approximately 5% dip in throughput on a simple key-value workload.

Because StatefulSets still allow your database pods to be rescheduled onto other nodes, it's possible that the stateful service will still have to contend with others for the machine's physical resources. However, you can take steps to alleviate this issue by managing the resources that the database container requests.

DaemonSets: In-Depth

DaemonSets let you specify that all nodes that match a specific criteria run a particular pod. This means you can designate a specific set of nodes to run your database, and Kubernetes ensures that the service stays available on these nodes without being subject to rescheduling––and optionally without running anything else on those nodes, which is perfect for stateful services.

DaemonSets can also use a machine’s local disk more reliably because you don’t have to be concerned with your database pods getting rescheduled and losing their disks. However, local disks are unlikely to have any kind of replication or redundancy and are therefore more susceptible to failure, although this is less of a concern for services like CockroachDB which already replicate data across machines.

Performance Implications

While some K8s processes still run on these machines, DaemonSets can limit the amount of contention between your database and other applications by simply cordoning off entire Kubernetes nodes.

StatefulSets vs. DaemonSets

Kubernetes StatefulSets behave like all other Kubernetes pods, which means they can be rescheduled as needed. Because other types of pods can also be rescheduled onto the same machines, you’ll also need to set appropriate limits to ensure your database pods always have adequate resources allocated to them.

StatefulSets’ reliance on remote network devices also means there is a potential performance implication, though in our testing, this hasn’t been the case.

DaemonSets on the other hand, are dramatically different. They represent a more natural abstraction for cordoning your database off onto dedicated nodes and let you easily use local disks––for StatefulSets, local disk support is still in beta.

The biggest tradeoff for DaemonSets is that you're limiting Kubernetes' ability to help your cluster recover from failures. For example, if you were running CockroachDB and a node failed, Kubernetes couldn't create new pods to replace the pods on that node, because a CockroachDB pod is already running on every matching node. This matches the behavior of running CockroachDB directly on a set of physical machines that are only manually replaced by human operators.

Up Next

In our next blog post, we'll continue talking about stateful applications on Kubernetes, with details about how you can (and should) orchestrate CockroachDB in Kubernetes by leveraging StatefulSets. If you want to be notified when it's released, subscribe to our blog using the box on the left.

In the meantime, you should check out our Kubernetes tutorial.

Illustration by Zoë van Dijk

A Mass of Copyrighted Works Will Soon Enter the Public Domain


The Great American Novel enters the public domain on January 1, 2019—quite literally. Not the concept, but the book by William Carlos Williams. It will be joined by hundreds of thousands of other books, musical scores, and films first published in the United States during 1923. It’s the first time since 1998 for a mass shift to the public domain of material protected under copyright. It’s also the beginning of a new annual tradition: For several decades from 2019 onward, each New Year’s Day will unleash a full year’s worth of works published 95 years earlier.

This coming January, Charlie Chaplin's film The Pilgrim and Cecil B. DeMille's The Ten Commandments will slip the shackles of ownership, allowing any individual or company to release them freely, mash them up with other work, or sell them with no restriction. This will be true also for some compositions by Bela Bartok, Aldous Huxley's Antic Hay, Winston Churchill's The World Crisis, Carl Sandburg's Rootabaga Pigeons, e.e. cummings's Tulips and Chimneys, Noël Coward's London Calling! musical, Edith Wharton's A Son at the Front, many stories by P.G. Wodehouse, and hosts upon hosts of forgotten works, according to research by the Duke University School of Law's Center for the Study of the Public Domain.

Throughout the 20th century, changes in copyright law led to longer periods of protection for works that had been created decades earlier, which altered a pattern of relatively brief copyright protection that dates back to the founding of the nation. This came from two separate impetuses. First, the United States had long stood alone in defining copyright as a fixed period of time instead of using an author’s life plus a certain number of years following it, which most of the world had agreed to in 1886. Second, the ever-increasing value of intellectual property could be exploited with a longer term.

But extending American copyright law and bringing it into international harmony meant applying “patches” retroactively to work already created and published. And that led, in turn, to lengthy delays in copyright expiring on works that now date back almost a century.

Only so much that's created has room to persist in memory, culture, and scholarship. Some works may have been forgotten because they were simply terrible or perishable. But it's also the case that a lack of access to these works in digital form limits whether they get considered at all. In recent years, Google, libraries, the Internet Archive, and other institutions have posted millions of works in the public domain from 1922 and earlier. With lightning-fast ease, their entire contents are now as contemporary as news articles, and may show up intermingled in search results. More recent work, however, remains locked up. The distant past is now more accessible than work from 10 or 50 years ago.

The details of copyright law get complicated fast, but they date back to the original grant in the Constitution that gives Congress the right to bestow exclusive rights to a creator for “limited times.” In the first copyright act in 1790, that was 14 years, with the option to apply for an automatically granted 14-year renewal. By 1909, both terms had grown to 28 years. In 1976, the law was radically changed to harmonize with the Berne Convention, an international agreement originally signed in 1886. This switched expiration to an author’s life plus 50 years. In 1998, an act named for Sonny Bono, recently deceased and a defender of Hollywood’s expansive rights, bumped that to 70 years.

The Sonny Bono Act was widely seen as a way to keep Disney's Steamboat Willie from slipping into the public domain, which would have allowed that first appearance of Mickey Mouse, in 1928, to be freely copied and distributed. By tweaking the law, Mickey got another 20-year reprieve. When that expires, Steamboat Willie can be given away, sold, remixed, turned pornographic, or anything else. (Mickey himself doesn't lose protection as such, but his graphical appearance, his dialog, and any specific behavior in Steamboat Willie—his character traits—become likewise freely available. This was decided in a case involving Sherlock Holmes in 2014.)

The reason that New Year’s Day 2019 has special significance arises from the 1976 changes in copyright law’s retroactive extensions. First, the 1976 law extended the 56-year period (28 plus an equal renewal) to 75 years. That meant work through 1922 was protected until 1998. Then, in 1998, the Sonny Bono Act also fixed a period of 95 years for anything placed under copyright from 1923 to 1977, after which the measure isn’t fixed, but based on when an author perishes. Hence the long gap from 1998 until now, and why the drought’s about to end.
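
To make the arithmetic concrete (terms run through December 31 of their final year, so works free up the following January 1):

1922 + 75 = 1997, so works from 1922 entered the public domain on January 1, 1998.
1923 + 95 = 2018, so works from 1923 enter the public domain on January 1, 2019.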

Of course, it’s never easy. If you published something between 1923 and 1963 and wanted to renew copyright, the law required registration with the U.S. Copyright Office at any point in the first 28 years of copyright, followed at the 28-year mark with the renewal request. Without both a registration and a renewal, anything between 1923 and 1963 is already in the public domain. Many books, songs, and other printed media were never renewed by the author or publisher due to lack of sales or interest, an author’s death, or a publisher’s shutting down or bankruptcy. One estimate from 2011 suggests about 90 percent of works published in the 1920s haven’t been renewed. That number shifts to 60 percent or so for works from the 1940s. But there are murky issues about ownership and other factors for as many as 30 percent of books from 1923 to 1963. It’s impossible to determine copyright status easily for them.

It’s easier to prove a renewal was issued than not, making it difficult for those who want to make use of material without any risk of challenge. Jennifer Jenkins, the director of Duke’s Center for the Study of the Public Domain, says, “Even if works from 1923 technically entered the public domain earlier because of nonrenewal, next year will be different, because then we’ll know for sure that these works are in the public domain without tedious research.”

Jenkins’s group was unable, for instance, to find definitive proof that The Great American Novel wasn’t renewed, but that doesn’t mean there’s not an undigitized record in a file in Washington, D.C. While courts can be petitioned to find works affirmatively in the public domain, as ultimately happened following a knotted dispute over “Happy Birthday to You,” most of the time the issue only comes up when an alleged rights holder takes legal action to assert that copyright still holds. As a result, it’s more likely a publisher would wait to reissue The Great American Novel in 2019 than worry about Williams’s current copyright holders objecting in 2018.

There’s one more bit of wiggle, too: Libraries were granted special dispensation in the 1998 copyright revision over work in the last 20 years of its copyright term, so long as the work isn’t being commercially exploited, such as a publisher or author keeping a book in print or a musician actively selling or licensing digital sheet music. But hundreds of thousands of published works from 1923 to 1941 can be posted legally by libraries today, the window moving forward a year every year. (The Internet Archive assembles these works from partners at its ironically named Sonny Bono Memorial Collection site.)

It’s possible this could all change again as corporate copyright holders start to get itchy about expirations. However, the United States is now in harmony with most of the rest of the world, and no legislative action is underway this year to make any waves that would affect the 2019 rollover.

A Google spokesperson confirmed that Google Books stands ready. Its software is already set up so that on January 1 of each year, the material from 95 years earlier that’s currently digitized but only available for searching suddenly switches to full text. We’ll soon find out more about what 1923 was really like. And in 2024, we might all ring in the new year whistling Steamboat Willie’s song.

Towards λ-calculus

Ruby’s Rack Push: Decoupling the real-time web application from the web

Something exciting is coming.

Everyone is talking about WebSockets and their older cousin EventSource / Server Sent Events (SSE). Faye and ActionCable are all the rage and real-time updates are becoming easier than ever.

But it’s all a mess. It’s hard to set up, it’s hard to maintain. The performance is meh. In short, the existing design is expensive – it’s expensive in developer hours and it’s expensive in hardware costs.

However, a new PR in the Rack repository promises to change all that in the near future.

This PR is a huge step towards simplifying our code base, improving real-time performance and lowering the overall cost of real-time web applications.

In a sentence, it’s an important step towards decoupling the web application from the web.

Remember, Rack is the interface Ruby frameworks (such as Rails and Sinatra) and web applications use to communicate with the Ruby application servers. It’s everywhere. So this is a big deal.

The Problem in a Nutshell

The problem with the current standard approach, in a nutshell, is that each real-time application process has to run two servers in order to support real-time functionality.

The two servers might be listening on the same port, they might be hidden away in some gem, but at the end of the day, two different IO event handling units have to run side by side.

“Why?” you might ask. Well, since you asked, I’ll tell you (if you didn’t ask, skip to the solution).

The story of the temporary hijack

This is the story of a quick temporary solution coming up on its fifth year as the only “standard” Rack solution available.

At some point in our history, the Rack specification needed a way to support long polling and other HTTP techniques. Specifically, Rails 4.0 needed something for its “live stream” feature.

For this purpose, the Rack team came up with the hijack API approach.

This approach allowed for a quick fix to a pressing need. It was meant to be temporary, something quick until Rack 2.0 was released (five years later, the Rack protocol is still at version 1.3).

The hijack API offers applications complete control of the socket. Just hijack the socket away from the server and voilà, instant long polling / SSE support… sort of.
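
For reference, a full hijack under the current spec looks roughly like this (a minimal sketch of the pattern described here; the response headers and thread handling are illustrative, not prescribed by Rack):

# config.ru - a minimal full-hijack sketch (illustrative, not production code)
APP = Proc.new do |env|
  if env['rack.hijack']
    io = env['rack.hijack'].call # take the raw socket away from the server
    # From here on the application owns the socket: it must write the raw
    # HTTP response itself and handle buffering, timeouts and closing.
    Thread.new do
      io.write "HTTP/1.1 200 OK\r\nContent-Type: text/event-stream\r\n\r\n"
      io.write "data: hello\r\n\r\n"
      io.close
    end
    [200, {}, []] # the return value is ignored after a full hijack
  else
    [200, { 'Content-Type' => 'text/plain' }, ['Hello World!']]
  end
end
run APP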

That’s where things started to get messy.

To handle the (now “free”) socket, a lot of network logic had to be copied from the server layer to the application layer (buffering write calls, handling incoming data, protocol management, timeout handling, etc.).

This is an obvious violation of the “S” in S.O.L.I.D (single responsibility), as it adds IO handling responsibilities to the application / framework.

It also violates the DRY principle, since the IO handling logic is now duplicated (once within the server and once within the application / framework).

Additionally, this approach has issues with HTTP/2 connections, since the network protocol and the application are now entangled.

The obvious hijack price

The hijack approach has many costs, some hidden, some more obvious.

The most easily observed prices are memory, performance, and developer hours.

Due to code duplication and extra work, the memory consumption of hijack-based solutions is higher and their performance is slower (more system calls, more context switches, etc.).

Using require 'faye' will add WebSockets to your application, but it takes almost 9MB just to load the gem (before any actual work is performed).

On the other hand, using the agoo or iodine HTTP servers will add both WebSockets and SSE to your application without any extra memory consumption.

To be more specific, using iodine will consume about 2MB of memory, marginally less than Puma, while providing both HTTP and real-time capabilities.
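
These numbers depend on platform and gem versions, but they are easy to sanity-check. A crude, Linux-only sketch of my own (reading /proc/self/status; treat the output as a ballpark figure):

# rough RSS comparison - Linux-specific, numbers will vary
def rss_kb
  File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i
end

before = rss_kb
require 'faye' # or any gem you want to measure
after = rss_kb
puts "require 'faye' grew RSS by roughly #{(after - before) / 1024}MB"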

The hidden hijack price

A more subtle price is higher hardware costs and a lower clients-per-machine ratio when using hijack.

Why?

Besides the degraded performance, the hijack approach allows some HTTP servers to lean on the select system call (Puma used select last time I looked).

This system call can’t watch file descriptors numbered 1024 or higher (the default FD_SETSIZE limit), effectively capping each process at about 1024 open connections.

When a connection is hijacked, its socket doesn’t close as quickly as the web server expects, eventually leading to breakage and possible crashes once the 1024 open file limit is exceeded.

The Solution – Callbacks and Events

The newly proposed Rack Push PR offers a wonderfully effective way to implement WebSockets and SSE while allowing an application to remain totally server agnostic.

This new proposal leaves the responsibility for the network / IO handling with the server, simplifying the application’s code base and decoupling it from the network protocol.

By using a callback object, the application is notified of events, leaving it free to focus on the data rather than the network layer.

The callback object doesn’t even need to know anything about the server running the application or the underlying protocol.

The callback object is automatically linked to the correct API using Ruby’s extend approach, allowing the application to remain totally server agnostic.
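
The mechanics here are plain Ruby. A hypothetical server could graft its own connection methods onto the callback object at upgrade time, along these lines (the module and method names below are my own illustration, not part of the PR):

# Illustrative only - how a server might wire its push API into a callback
class Callbacks
  def on_open
    write "welcome!" # `write` will be provided by the server via `extend`
  end
end

module SomeServerConnection
  def write(data)
    puts "pretend this was pushed: #{data}" # a real server would hit the socket
  end
end

cb = Callbacks.new
cb.extend(SomeServerConnection) # the server grafts its API onto the object
cb.on_open                      # => pretend this was pushed: welcome!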

How it works

Every Rack server uses a Hash type object to communicate with a Rack application.

This is how Rails is built, this is how Sinatra is built and this is how every Rack application / framework is built. It’s in the current Rack specification.

A simple Hello world using Rack would look like this (placed in a file called config.ru):

# normal HTTP response
RESPONSE = [200, { 'Content-Type' => 'text/html',
          'Content-Length' => '12' }, [ 'Hello World!' ] ]
# note the `env` variable
APP = Proc.new {|env| RESPONSE }
# The Rack DSL used to run the application
run APP
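
Nothing about this first example needs a special server; any Rack-compliant server can run it. For instance, with the plain rackup command (which serves on port 9292 by default):

# install rack and run the app from the directory containing config.ru
gem install rack
rackup config.ru
# then visit http://localhost:9292/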

This new proposal introduces the env['rack.upgrade?'] variable.

Normally, this variable is set to nil (or missing from the env Hash).

However, for WebSocket connections the env['rack.upgrade?'] variable is set to :websocket, and for EventSource (SSE) connections the variable is set to :sse.

To set a callback object, the env['rack.upgrade'] is introduced (notice the missing question mark).

Now the design might look like this:

# Place in config.ru
RESPONSE = [200, { 'Content-Type' => 'text/html',
          'Content-Length' => '12' }, [ 'Hello World!' ] ]
# a Callback class
class MyCallbacks
  def on_open
    puts "* Push connection opened."
  end
  def on_message data
    puts "* Incoming data: #{data}"
  end
  def on_close
    puts "* Push connection closed."
  end
end
# note the `env` variable
APP = Proc.new do |env|
  if(env['rack.upgrade?'])
    env['rack.upgrade'] = MyCallbacks.new
    [200, {}, []]
  else
    RESPONSE
  end
end
# The Rack DSL used to run the application
run APP

Run this application with the Agoo or Iodine servers and let the magic sparkle.

For example, using Iodine:

# install iodine
gem install iodine
# start in single threaded mode
iodine -t 1

Now open the browser, visit localhost:3000 and open the browser console to test some JavaScript.

First try an EventSource (SSE) connection (run in browser console):

// An SSE example 
var source = new EventSource("/");
source.onmessage = function(msg) {
  console.log(msg.id);
  console.log(msg.data);
};

Sweet! Nothing happened just yet (we aren’t sending any notifications), but we have an open SSE connection!

What about WebSockets (run in browser console):

// A WebSocket example 
ws = new WebSocket("ws://localhost:3000/");
ws.onmessage = function(e) { console.log(e.data); };
ws.onclose = function(e) { console.log("closed"); };
ws.onopen = function(e) { e.target.send("Hi!"); };

Wow! Look at the Ruby console: we have working WebSockets. It’s that easy.

And this same example will run perfectly using the Agoo server as well (both Agoo and Iodine already support the Rack Push proposal).

Try it:

# install the agoo server 
gem install agoo
# start it up
rackup -s agoo -p 3000

Notice: no extra gems, no extra code, no huge memory consumption, just the Ruby server and raw Rack (I didn’t even use a framework yet).

The amazing push

So far, it's so simple, it's hard to notice how powerful this is.

Consider implementing a stock ticker, or in this case, a timer:

# Place in config.ru
RESPONSE = [200, { 'Content-Type' => 'text/html',
          'Content-Length' => '12' }, [ 'Hello World!' ] ]

# A live connection storage
module LiveList
  @list = []
  @lock = Mutex.new
  def self.<<(connection)
    @lock.synchronize { @list << connection }
  end
  def self.>>(connection)
    @lock.synchronize { @list.delete connection }
  end
  def self.any?
    # are there any connections left in the "live list"?
    @lock.synchronize { @list.any? }
  end
  # this will send a message to all the connections that share the same process.
  # (in cluster mode we get partial broadcasting only and this doesn't scale)
  def self.broadcast(data)
    @lock.synchronize do
      @list.each do |c|
        begin
          c.write data
        rescue IOError => _e
          # An IOError can occur if the connection was closed during the loop.
        end
      end
    end
  end
end

# Broadcast the time every second... but...
# Threads will BREAK in cluster mode.
@thread = Thread.new do
  loop do
    sleep(1)
    # broadcast only while someone is actually connected
    LiveList.broadcast "The time is: #{Time.now}" if LiveList.any?
  end
end

# a Callback class
class MyCallbacks
  def on_open
    # add connection to the "live list"
    LiveList << self
  end
  def on_message(data)
    # Just an example broadcast
    LiveList.broadcast "Special Announcement: #{data}"
  end
  def on_close
    # remove connection from the "live list"
    LiveList >> self
  end
end

# The Rack application
APP = Proc.new do |env|
  if(env['rack.upgrade?'])
    env['rack.upgrade'] = MyCallbacks.new
    [200, {}, []]
  else
    RESPONSE
  end
end
# The Rack DSL used to run the application
run APP

Run the iodine server in single process mode (iodine -w 1) and the little timer starts ticking.

Honestly, I don’t love the code I just wrote for the previous example. It’s a little long, it’s slightly iffy and we can’t use iodine’s cluster mode.

For my next example, I’ll author a chat room in 32 lines (including comments).

I will use Iodine’s pub/sub extension API to avoid the LiveList module and the timer thread (I don’t need a timer, so I’ll skip the Iodine.run_every method).
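
(For completeness, the timer I’m skipping could probably replace the Ruby Thread from the previous example with something like the sketch below. I’m assuming Iodine.run_every takes milliseconds and that Iodine.publish pushes to a named channel; check iodine’s docs for the exact signatures.)

# a sketch: publish the time to the :chat channel once a second
Iodine.run_every(1000) do
  Iodine.publish :chat, "The time is: #{Time.now}"
end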

Also, I’ll limit the interaction to WebSocket clients. Why? To show that I can.

This will better demonstrate the power offered by the new env['rack.upgrade'] approach and it will also work in cluster mode.

Sadly, this means that the example won’t run on Agoo for now.

# Place in config.ru
RESPONSE = [200, { 'Content-Type' => 'text/html',
          'Content-Length' => '12' }, [ 'Hello World!' ] ]
# a Callback class
class MyCallbacks
  def initialize env
     @name = env["PATH_INFO"][1..-1]
     @name = "unknown" if(@name.length == 0)
  end
  def on_open
    subscribe :chat
    publish :chat, "#{@name} joined the chat."
  end
  def on_message data
    publish :chat, "#{@name}: #{data}"
  end
  def on_close
    publish :chat, "#{@name} left the chat."
  end
end
# The actual Rack application
APP = Proc.new do |env|
  if(env['rack.upgrade?'] == :websocket)
    env['rack.upgrade'] = MyCallbacks.new(env)
    [200, {}, []]
  else
    RESPONSE
  end
end
# The Rack DSL used to run the application
run APP

Start the application from the command line (in terminal):

iodine

Now try (in the browser console):

ws = new WebSocket("ws://localhost:3000/Mitchel");
ws.onmessage = function(e) { console.log(e.data); };
ws.onclose = function(e) { console.log("Closed"); };
ws.onopen = function(e) { e.target.send("Yo!"); };

Why didn’t anyone think of this sooner?

Actually, this isn’t a completely new idea.

Even as the hijack API itself was being discussed, an alternative approach was suggested.

Another proposal was attempted a few years ago.

But it seems things are finally going to change, as two high-performance servers, agoo and iodine, already support this new approach.

Things look promising.

KRust: A Formal Executable Semantics of Rust

(Submitted on 28 Apr 2018)

Abstract: Rust is a new and promising high-level systems programming language. It provides both memory safety and thread safety through novel mechanisms such as ownership, moves, and borrows. The ownership system ensures that at any point there is only one owner of any given resource. The ownership of a resource can be moved or borrowed according to lifetimes. The ownership system establishes a clear lifetime for each value, so garbage collection is not needed. These features give Rust high performance and the fine low-level control of C and C++ without garbage collection, distinguishing it from other prevalent languages. For the formal analysis of Rust programs, and to help programmers learn its new mechanisms and features, a formal semantics of Rust is desirable and useful as a foundation for developing related tools. In this paper, we present a formal executable operational semantics of a realistic subset of Rust, called KRust. The semantics is defined in K, a rewriting-based executable semantic framework for programming languages. The executable semantics automatically yields a formal interpreter and verification tools for Rust programs. KRust has been thoroughly validated by testing with hundreds of tests, including the official Rust test suite.
From: Feng Wang
[v1] Sat, 28 Apr 2018 14:12:11 GMT (28kb)