The European Parliament's (EP’s) Committee on Civil Liberties, Justice, and Home Affairs released a draft proposal for a new Regulation on Privacy and Electronic Communications. The draft recommends a regulation that will enforce end-to-end encryption on all communications to protect European Union citizens’ fundamental privacy rights. The committee also recommended a ban on backdoors.
Enforcement Of EU’s Charter Of Fundamental Rights
Article 7 of the EU’s Charter of Fundamental Rights says that EU citizens have a right to personal privacy, as well as privacy in their family life and at home. According to the EP committee, the privacy of communications between individuals is also an important dimension of this right.
The EP committee added that:
Confidentiality of electronic communications ensures that information exchanged between parties and the external elements of such communication, including when the information has been sent, from where, to whom, is not to be revealed to anyone other than to the parties involved in a communication.
The principle of confidentiality should apply to current and future means of communication, including calls, internet access, instant messaging applications, e-mail, internet phone calls and messaging provided through social media.
Protecting Citizens Against Hacking Of Personal Information
The EP committee believes that encryption needs to be used to protect EU citizens’ sensitive information, such as personal experiences and emotions, medical conditions, sexual preferences, and political views. The disclosure of this information could lead to personal and social harm, or economic loss.
The committee also argued that it’s not just the content of information that needs to be protected, but also the metadata associated with it:
The metadata includes the numbers called, the websites visited, geographical location, the time, date and duration when an individual made a call etc., allowing precise conclusions to be drawn regarding the private lives of the persons involved in the electronic communication, such as their social relationships, their habits and activities of everyday life, their interests, tastes etc.
The protection of confidentiality of communications is also an essential condition for the respect of other related fundamental rights and freedoms, such as the protection of freedom of thought, conscience and religion, and freedom of expression and information.
The EP committee also noted that electronic communications are generally personal data, which means they should also be protected under the recently passed General Data Protection Regulation. Therefore, the new regulation on private communications should not lower the protections written in the GDPR, but it should instead offer complementary safeguards for the confidentiality of communications.
Providers Affected By The Regulation
The updated Regulation on Privacy and Electronic Communications will apply to providers of electronic communication services, providers of publicly available directories, and software providers that permit electronic communications and the retrieval of information on the internet.
We’ve lately seen some EU member states push for increased surveillance and even backdoors in encrypted communications, so there seems to be some conflict between what the European Parliament’s institutional bodies want and what some member states do.
However, if this proposal for the new Regulation on Privacy and Electronic Communications passes, it should significantly increase the privacy of EU citizens’ communications, and it won’t be easy to roll back the changes to add backdoors in the future.
The committee makes it clear that backdoors introduced by member states should be forbidden:
The providers of electronic communications services shall ensure that there is sufficient protection in place against unauthorised access or alterations to the electronic communications data, and that the confidentiality and safety of the transmission are also guaranteed by the nature of the means of transmission used or by state-of-the-art end-to-end encryption of the electronic communications data.
Furthermore, when encryption of electronic communications data is used, decryption, reverse engineering or monitoring of such communications shall be prohibited. Member States shall not impose any obligations on electronic communications service providers that would result in the weakening of the security and encryption of their networks and services.
With these departures, the American era of the baronial chief executive, sitting atop an industrial dominion with all the attendant privileges, is drawing to a close.
It is one consequence of a transformed economic landscape in which many of the mega-corporations that defined 20th-century commercial life are confronting a host of new business and technological challenges. These changes — in corporate leadership, on boards and across Wall Street — are recasting the very idea of industry in America.
“The C.E.O. with a big office, a tenure of 10 or 20 years, in a suit and tie, is becoming a thing of the past,” said Vijay Govindarajan, who served as G.E.’s chief innovation consultant in 2008 and 2009 and now teaches at Dartmouth’s Tuck School of Business.
Mr. Immelt’s exit from G.E. is particularly telling, given the company’s reputation as a training ground for the future chief executives of other companies. He tried to change G.E., yet couldn’t react quickly enough to the forces affecting companies like his.
These include the rising power of activist investors, who buy up stakes in companies and then demand changes. Activists are now hunting much bigger game, demanding double-digit annual earnings growth in a stagnant economy. Or else.
It is a reality only too familiar to John Mackey, the co-founder and chief executive of Whole Foods Market. On Friday, after pressure from activists — a group he had referred to in an interview days before as “greedy bastards” — Whole Foods was acquired by Amazon for $13.4 billion.
That deal also shows how the digital age has upended the competitive landscape, pitting companies in vastly different industries against one another.
“Who ever thought Ford would be competing with Google?” said Michael Useem, a professor of management at the Wharton School of the University of Pennsylvania who has studied corporate leadership for decades. “But they are, and Mark Fields wasn’t moving fast enough.”
Boards, too, have changed, evolving from country-club-like collections of the same familiar faces into a much more diverse and demanding constituency.
To be sure, the money is better than ever. And pockets of unbridled ambition and occasional excess remain, especially in Silicon Valley, where Apple’s new $5 billion spaceshiplike headquarters opened in April.
Photo: The office of Jeffrey R. Immelt, the former chief executive of General Electric, and his predecessor, Jack Welch. Credit: Christopher Capozziello for The New York Times
But for most of the Fortune 500, the unquestioned power and perks, the imperviousness to criticism from the likes of shareholders, and the outsize public profile that once automatically came with the corner office have gone the way of the typewriter and the Dictaphone.
“These people were bigger than life, and I saw it up close,” said Kevin Sharer, a former chief executive of Amgen who worked as a top aide to Mr. Immelt’s legendary predecessor at G.E., Jack Welch. “They were a combination of chief executive, statesman and rock star. They were unassailable.”
A Naval Academy graduate, then an officer, before joining G.E. in 1984, Mr. Sharer said the only place that evoked a feeling of power comparable to the long hallways and corner offices of Fairfield in its prime was aboard the fast attack nuclear submarines where he once served as chief engineer.
“We had the confidence, the swagger, and we felt like we had unlimited industrial potential,” he said. “Could we buy RCA or NBC? Of course we could. I’m not complaining, but this is absolutely not the case today.”
That confidence extended well beyond the boardroom or the executive suite, providing a high profile not only in local communities, but in national affairs as well. Immediately after President Trump declared that the United States was pulling out of the Paris accord on global warming this month, Mr. Immelt offered a blunt dissent on Twitter.
“Disappointed with today’s decision on the Paris Agreement,” he wrote. “Climate change is real. Industry must now lead and not depend on government.”
As Amgen’s chief executive in the spring of 2009, Mr. Sharer visited the White House repeatedly to meet with Obama administration officials as they designed what would become the Affordable Care Act. He also played a key role in getting fellow pharmaceutical industry chiefs to support the legislation.
Now retired and teaching at Harvard Business School, Mr. Sharer said he would never do that today, as wading into bitterly partisan public debates offers little upside for corporate leaders, and risks damage to their company’s reputation.
As a result, while companies in many ways have more economic and political power than ever, “chief executives now shy away from weighing in on the policy level or broader societal issues,” Mr. Sharer said. “They’re more focused on running their companies.”
There are exceptions. Besides Mr. Immelt’s outspokenness on the climate issue, last year Kenneth C. Frazier of Merck called out “bad actors” in the pharmaceutical industry for exorbitant price increases. Timothy D. Cook of Apple challenged Mr. Trump’s proposed immigration restrictions in January.
Still, Mr. Immelt’s exit leaves a void at the intersection of business and public policy, along with the retirement this year of Douglas R. Oberhelman, the Caterpillar chief who led both the Business Roundtable and the National Association of Manufacturers.
“If you start fooling around in Washington with the Business Roundtable or writing op-eds, activist investors will ask what you’re doing,” Mr. Useem said.
G.E.’s next chief executive, John Flannery, is highly regarded inside and outside the company, said Bill George, a professor at Harvard Business School who served as chief executive of Medtronic. Mr. George is much more optimistic than Mr. Sharer on whether chief executives will continue to speak out on broader issues, but he doesn’t expect Mr. Flannery to emulate his predecessor’s high profile.
“I don’t see him stepping into that role,” Mr. George said. “He’s going to keep his head down and focus on the numbers.”
Photo: Sacred Heart University purchased the former G.E. headquarters in Fairfield, Conn., after the company moved from the 66-acre suburban location in 2016. Credit: Christopher Capozziello for The New York Times
The Fall of ‘Carpet Land’
At Exxon Mobil, it’s referred to as the God Pod. On the 11th floor of Procter & Gamble’s headquarters in Cincinnati, there was Mahogany Row. And while the official name of the executive wing at G.E.’s Fairfield headquarters was E3, inside the company it was known as Carpet Land.
And no wonder. From the Persian rugs that lined the hallways to the plush wool floor covering in Mr. Immelt’s office and private conference room, the carpets created the hushed atmosphere of a monastery or library.
“It was so quiet, you could feel the energy drain out of you,” said Ann Klee, the G.E. executive who oversaw the move to Boston and the development of its new headquarters there.
What these executive aeries all shared was an Olympus-like sense of remoteness, authority and defined hierarchy.
At G.E., even in Carpet Land, office size grew in lock step with rank, and the biggest corner space was reserved for the chief executive. Not only did Mr. Immelt have his own bathroom, but his two administrative assistants had a private bathroom and a pantry.
The abundant perks — in G.E.’s case, two helicopter pads, a shoeshine station and an executive dining room linked to the kitchen below by dumbwaiters — fed the sense of exalted status. At the same time, faster economic growth and rising earnings camouflaged the cost of these indulgences.
With G.E.’s profits and shares soaring in the 1980s, Mr. Welch oversaw the construction of a private 28-room hotel known as the Guest House to serve visiting executives and others, with no expense spared on the parquet floors, wood-burning fireplaces and a Steinway piano, which was left behind when the company moved out.
“Nothing was off-the-shelf,” said Bill O’Brien, a 19-year G.E. veteran who helped supervise the Fairfield facility. “With Jack Welch, everything was custom and the operation was five star, spot on.”
Almost 16 years after Mr. Welch retired in 2001 and Mr. Immelt took over, G.E. shares have never regained their 2000 peak. So while Mr. Immelt successfully steered the company through a near-death experience during the 2008 financial crisis, refocused it on its industrial roots and shed its ancillary businesses, it became a natural target for activist investors.
One of those was Nelson Peltz, a onetime corporate raider who relied on Michael R. Milken’s junk bonds for financing back when Mr. Welch was building his Guest House. Mr. Peltz has come a long way since then, having scored big wins forcing laggards like Heinz and Wendy’s to improve their performance, and he acquired a $2 billion stake in G.E. in 2015.
By early this year, Mr. Peltz’s Trian Fund was pressing G.E. for deeper cost cuts, and to link executive pay more closely to lower expenses and higher profits. In March, Bloomberg warned of “a do-or-die showdown” between Mr. Peltz and Mr. Immelt, while Fox Business reported that Mr. Immelt was in the “hot seat” and could be forced to retire ahead of schedule.
G.E.’s chief communications officer, Deirdre Latour, denied that activist pressure was a factor in Mr. Immelt’s decision to retire. “Sixteen years is a long time,” she said.
But at 61, Mr. Immelt is four years younger than Mr. Welch when he stepped down, and joins a long list of otherwise respected executives whose stately succession plans were seemingly interrupted by impatient investors.
“C.E.O. tenure is down from the 1970s, 1980s and early 1990s, and the G.E. situation is a reflection of that,” said Jason D. Schloetzer, a professor at Georgetown University’s McDonough School of Business. “Without this outside pressure, it’s likely Jeff Immelt would have served out his term and retired at 64 or 65.”
Photo: Sacred Heart plans to convert G.E.’s old offices into classrooms, a business incubator space and a computer engineering center, among other things. Credit: Christopher Capozziello for The New York Times
Now other chief executives are feeling the same pressure, including Mary T. Barra of General Motors. “She’s being pulled in different directions,” Mr. Schloetzer said, noting that G.M.’s shares have barely budged in the last two years, even as the broader stock market has soared.
If Ms. Barra can’t turn things around in 12 to 18 months, Mr. Schloetzer said, she could share a fate similar to that of Mr. Fields at Ford.
Indeed, Mr. Fields’s defenestration was more shocking in many ways than Mr. Immelt’s slow-motion fade, said Jeffrey Sonnenfeld of the Yale School of Management.
“He’s the poster child for what’s happened to the C.E.O. job,” Mr. Sonnenfeld said. “Until last month, people thought dynastic family capital like the Ford family was the solution to rampant short-termism. I’m sorry Ford forgot that recipe.”
‘Now the C.E.O. Has a Boss’
How did activist investors acquire such outsize power over publicly traded companies?
For starters, even as the activists’ assets have grown, the number of public companies has shrunk drastically, said Matthew Slaughter, dean of Dartmouth’s Tuck School. From 1997 to 2015, the number of companies listed on the New York Stock Exchange, the Nasdaq and the old American Stock Exchange dropped by half, falling to 3,766 from 7,507.
“With fewer public companies out there, any one of them is more likely to become a target of interest for one hedge fund or another,” Mr. Slaughter said.
At the same time, activists are getting more, well, active. More than 300 American companies were targeted by these investors in 2015, up from just over 100 in 2010, according to Mr. Useem of Wharton. They’re also becoming more successful at winning board seats, as fewer companies stagger their board elections over three-year cycles.
What’s more, chief executives have less internal latitude. In 2001, more than half of new C.E.O.s also assumed the position of chairman when they took over. By 2016, only 10 percent occupied both roles, according to an analysis by Strategy&, the strategy and consulting arm of PwC.
“You don’t have the C.E.O. running the board too,” said Gary L. Neilson, a principal with Strategy&. “Now the C.E.O. has a boss.”
Boards themselves have changed, Mr. Sonnenfeld added. A few decades ago, a Pittsburgh-based giant like U.S. Steel would draw board members from a local pool of business and community leaders.
“There was a downside in terms of cronyism, mutual back-scratching and a hesitance to criticize,” he said. “The positive was that they were anchored in their communities and they invested in them. Now, for C.E.O.s, all the constituencies have changed.”
So while boards are still willing to dole out huge golden parachutes to C.E.O.s, even if they fail, they’ve become much more generous with money than they are with additional time.
The glaring exception to the trends outlined by Mr. Sonnenfeld is in the technology sector. But in many ways, these firms are the exception that proves the rule, because they play by different rules.
Google’s Class B shares have 10 times the voting power of normal shares, enabling the founders, Larry Page and Sergey Brin, to retain control of the holding company Alphabet without owning a majority stake.
Photo: An aerial view of Apple’s new $5 billion spaceshiplike headquarters in Cupertino, Calif. Credit: Justin Sullivan/Getty Images
Facebook also has a multi-class stock structure that effectively guarantees that the founder Mark Zuckerberg will retain control even as he sells shares. And on this point, the company offers no apologies.
“This is not a traditional governance model, but Facebook was not built to be a traditional company,” the company said last year. “The board believes that a founder-led approach has been and continues to be in the best interests of Facebook, its stockholders and the community.”
The staggering profitability of the tech giants provides their leaders with more than a little of the swagger the industrial executives once possessed.
In 1990, the revenues of Detroit’s Big Three automakers totaled $250 billion while they employed 1.2 million people, according to a study by the McKinsey Global Institute.
Silicon Valley’s top three companies in 2014 had almost the same revenues before adjusting for inflation — $247 billion — but with 137,000 employees, they required a work force just one-tenth the size.
That kind of efficiency adds up to huge profits, soaring stock prices and few complaints from investors. Or as Brooks C. Holtom, a professor of management at Georgetown, put it, “If your stock is doing well, your job is safe.”
The Shrinking Executive Suite
If the old quarters for G.E.’s top brass were akin to a pedestal, the new ones are more like a fishbowl. At G.E.’s interim headquarters in Boston, and in the permanent one set to open next year, the offices for top leaders have glass walls that enable them to see out and, in turn, let employees see in.
They are also much smaller. In Fairfield, a handful of senior G.E. leaders and their assistants occupied 44,000 square feet in the executive wing. Now the same group shares a total of just 7,800 square feet, less space than in the big mansions many C.E.O.s inhabit in places like Greenwich, Conn., or the Bay Area.
“It has a much more collaborative feel, and the glass replicates the transparency of working together,” said Ms. Klee, the G.E. executive. “The Fairfield campus was beautiful, but it lacked the spark you feel here. It’s a different time, and we like the power and energy and creativity that comes from mixing people together.”
Perhaps, but like the fallen statue of Ozymandias in Shelley’s poem, the old monuments to corporate power did possess a certain grandeur.
G.E.’s 66-acre Fairfield campus was purchased for $31.5 million last fall by Sacred Heart University, according to Michael J. Kinney, the school’s senior vice president for finance and administration.
“There’s not as big a need for corporate headquarters like this any more,” said Mr. Kinney, who as a Kraft executive once worked in a similarly spectacular setting in the Taj Mahal-like General Foods building in Rye Brook, N.Y. “Those were the days.”
Mr. Kinney has big plans to convert the old offices into classrooms, a business incubator space and a computer engineering center, among other things. Students will be trained in hospitality at the former Guest House.
As a result, the university has kept the campus in pristine condition since the last G.E. executives left. “The only things missing are the Persian rugs and the artwork,” said Mr. O’Brien, who now serves as Sacred Heart’s director of facilities.
“It’s such a different vibe with the kids all here,” said Mr. O’Brien, who added that he’s still getting used to people strolling around the manicured grounds rather than quietly shuttling between hushed offices.
Walking around the pub in the Guest House Mr. Welch built, Mr. O’Brien confessed to a little nostalgia for G.E.’s glory days in Fairfield. “Can’t you just see Jack running the world from here in ’87 or ’88?”
Uber is losing ground in the US, its biggest market, to a rival once written off as a bit player, as the ride-hailing company reels from a series of crises including the temporary absence of its chief executive.
An onslaught by San Francisco-based Lyft is taking its toll, with Uber’s US market share dropping from 84 per cent at the beginning of this year to 77 per cent at the end of May, according to data from Second Measure, a research firm that uses anonymised credit card data.
Uber’s global sales are still growing, with first-quarter revenues surging to $3.4bn, triple the levels of the previous year. However, its growth rate in the US is slowing and investors have become concerned after a period of crisis has left its top ranks in disarray.
While Uber has long dominated its home market in the US, it has faced tough competition around the world from companies like Ola in India and Grab in Southeast Asia.
Because ride-hailing relies on network effects — having more riders and drivers leads to a more efficient system — there is a significant advantage to being the largest player in any given market.
Last week Uber, which is valued at $62.5bn, tried to reassure investors by sharing new projections for growth, according to several people who were contacted.
“Investors are worried that Uber may be self-destructing to some extent,” said Santosh Rao, head of research at Manhattan Venture Partners. “At such a high valuation it was priced for perfection,” he added.
Scandals at Uber — involving allegations of sexual harassment, mishandling the medical records of a rape victim, and a lawsuit over theft of trade secrets — have dented the company’s image and led to a string of senior departures.
Travis Kalanick, chief executive, went on leave last week without naming a replacement, with a statement after a special board meeting saying the company would be run in his absence by 14 executives, many of whom are new in their jobs.
Uber’s annual growth in the US slowed to 40 per cent at the end of May, from 55 per cent in the previous year, according to the data from Second Measure.
Uber’s decline in market share was fuelled by the #DeleteUber campaign at the end of January, which encouraged users to stop using the company due to Mr Kalanick’s role on President Donald Trump’s business advisory council. The campaign hit hardest in New York, Boston and San Francisco, some of Uber’s top 10 US markets.
Lyft, which completed a $600m fundraising in April, has expanded into 150 new cities this year and has seen its user numbers boosted by the fallout from Uber’s woes.
Lyft remains by far the underdog, with last year’s revenues of $708m just one-ninth of Uber’s. But it has managed to hang on to its market share gains since the #DeleteUber campaign.
A spokesman for Lyft said that the data from Second Measure underestimated its growth in gross bookings, which the company said was 135 per cent in April.
Calculations by Matei Zatreanu, founder of data consultancy System2, indicate that Lyft’s gross ride revenue was $1.1bn during the first four months of this year (extrapolating from the Second Measure data and from Lyft’s 2016 ride value). Uber’s gross ride revenue would have been $4.5bn over the same period in the US, Mr Zatreanu estimates.
Lyft has been particularly successful in its hometown of San Francisco, where it has captured about 40 per cent of the market, according to Second Measure. Uber is also based in the city.
The data from Second Measure include Uber Eats as well as Uber car fares, which means Uber’s share of the ride-hailing market may be slightly overstated. Uber declined to comment.
Consumer surveys suggest the internal turmoil at Uber has had an impact — a quarter of consumers have a negative perception of the company, while 4 per cent have stopped using the app, according to a survey by consultancy cg42.
Stephen Beck, managing partner at cg42, points out that one of the challenges for ride-hailing companies is that switching services is relatively easy for users. “There is no lock-in. There is not a meaningful distinction between services,” he said.
ggplot2 is the most elegant and aesthetically pleasing graphics framework available in R. It has a nicely planned structure to it. This tutorial focuses on exposing the underlying structure you can use to make any ggplot. But the way you make plots in ggplot2 is very different from base graphics, making the learning curve steep. So leave what you know about base graphics behind and follow along. You are just five steps away from cracking the ggplot puzzle.
The distinctive feature of the ggplot2 framework is the way you make plots through adding ‘layers’. The process of making any ggplot is as follows.
1. The Setup
First, you need to tell ggplot what dataset to use. This is done using the ggplot(df) function, where df is a dataframe that contains all features needed to make the plot. This is the most basic step. Unlike base graphics, ggplot doesn’t take vectors as arguments.
Optionally, you can add whatever aesthetics you want to apply to your ggplot (inside the aes() function), such as the X and Y axes, by specifying the respective variables from the dataset. The variables that control the color, size, shape and stroke can also be specified here. The aesthetics specified here will be inherited by all the geom layers you add subsequently.
If you intend to add more layers later on, maybe a bar chart on top of a line graph, you can specify the respective aesthetics when you add those layers.
Below, I show a few examples of how to set up a ggplot using the diamonds dataset that comes with ggplot2 itself. However, no plot will be printed until you add the geom layers.
Examples:
library(ggplot2)
ggplot(diamonds)  # if only the dataset is known
ggplot(diamonds, aes(x=carat))  # if only the X-axis is known; the Y-axis can be specified in the respective geoms
ggplot(diamonds, aes(x=carat, y=price))  # if both X and Y axes are fixed for all layers
ggplot(diamonds, aes(x=carat, color=cut))  # each category of 'cut' will have a distinct color, once a geom is added
The aes argument stands for aesthetics. ggplot2 considers the X and Y axis of the plot to be aesthetics as well, along with color, size, shape, fill etc. If you want the color, size etc. to be fixed (i.e. not vary based on a variable from the dataframe), you need to specify it outside the aes(), like this.
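A minimal sketch of the distinction (the color value here is illustrative):

```r
library(ggplot2)

# Fixed aesthetic: specified outside aes(), so every point is the same color
ggplot(diamonds, aes(x=carat, y=price)) + geom_point(color="steelblue")

# Mapped aesthetic: specified inside aes(), so color varies with the 'cut' variable
ggplot(diamonds, aes(x=carat, y=price)) + geom_point(aes(color=cut))
```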
2. The Layers
The layers in ggplot2 are also called ‘geoms’. Once the base setup is done, you can append the geoms one on top of the other. The documentation provides a comprehensive list of all available geoms.
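The two-geom example discussed next can be sketched as follows, with the shared aesthetics declared once in ggplot():

```r
library(ggplot2)

ggplot(diamonds, aes(x=carat, y=price, color=cut)) +
  geom_point() +   # layer 1: scatterplot points
  geom_smooth()    # layer 2: one smoothing line per level of 'cut'
```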
We have added two layers (geoms) to this plot: geom_point() and geom_smooth(). Since the X axis, Y axis and the color were defined in the ggplot() setup itself, these two layers inherited those aesthetics. Alternatively, you can specify those aesthetics inside the geom layer as well, as shown below.
ggplot(diamonds) +
  geom_point(aes(x=carat, y=price, color=cut)) +
  geom_smooth(aes(x=carat, y=price, color=cut))  # same as above, but specifying the aesthetics inside the geoms
Notice the X and Y axes and how the color of the points varies based on the value of the cut variable. The legend was added automatically. I would like to propose a change, though. Instead of having a separate smoothing line for each level of cut, I want to integrate them all into one line. How to do that? Removing the color aesthetic from the geom_smooth() layer accomplishes that.
library(ggplot2)
ggplot(diamonds) +
  geom_point(aes(x=carat, y=price, color=cut)) +
  geom_smooth(aes(x=carat, y=price))  # remove color from geom_smooth

ggplot(diamonds, aes(x=carat, y=price)) +
  geom_point(aes(color=cut)) +
  geom_smooth()  # same, but simpler
Here is a quick challenge for you. Can you make the shape of the points vary with the color feature?
Though the setup took us quite a bit of code, adding further complexity such as extra layers and a distinct color for each cut was easy. Imagine how much code you would have had to write to make this in base graphics. Thanks, ggplot2!
# Answer to the challenge
ggplot(diamonds, aes(x=carat, y=price, color=cut, shape=color)) +
  geom_point()
3. The Labels
Now that you have drawn the main parts of the graph, you might want to add the plot’s main title and perhaps change the X and Y axis titles. This can be accomplished using the labs layer, which is meant for specifying the labels. However, manipulating the size and color of the labels is the job of the ‘Theme’.
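A sketch of such a labels layer, assuming the scatterplot from the previous section is stored in gg (the title text is illustrative):

```r
library(ggplot2)

gg <- ggplot(diamonds, aes(x=carat, y=price, color=cut)) +
  geom_point() +
  geom_smooth() +
  labs(title="Diamonds", x="Carat", y="Price")  # main title plus capitalized axis labels
print(gg)
```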
The plot’s main title is added and the X and Y axis labels capitalized.
Note: If you are creating a ggplot inside a function, you need to explicitly save it and then print it using print(gg), as we just did above.
4. The Theme
Almost everything is set, except that we want to increase the size of the labels and change the legend title. Adjusting the size of the labels can be done using the theme() function by setting plot.title, axis.text.x and axis.text.y. They need to be specified inside element_text(). If you want to remove any of them, set it to element_blank() and it will vanish entirely.
Adjusting the legend title is a bit tricky. If your legend is for a color attribute that varies based on a factor, you need to set the name using scale_color_discrete(), where the color part corresponds to the color attribute and discrete because the legend is based on a factor variable.
gg1 <- gg +
  theme(plot.title=element_text(size=30, face="bold"),
        axis.text.x=element_text(size=15),
        axis.text.y=element_text(size=15),
        axis.title.x=element_text(size=25),
        axis.title.y=element_text(size=25)) +
  scale_color_discrete(name="Cut of diamonds")  # add title and axis text, change legend title
print(gg1)  # print the plot
If the legend shows a shape attribute based on a factor variable, you need to change it using scale_shape_discrete(name="legend title"). Had it been a continuous variable, use scale_shape_continuous(name="legend title") instead.
So now, can you guess the function to use if your legend is based on a fill attribute mapped to a continuous variable?
The answer is scale_fill_continuous(name="legend title").
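For instance, a minimal sketch using the faithfuld dataset that ships with ggplot2 (the legend title is illustrative):

```r
library(ggplot2)

# 'density' is a continuous variable mapped to fill, so
# scale_fill_continuous() controls its legend title
ggplot(faithfuld, aes(x=waiting, y=eruptions, fill=density)) +
  geom_tile() +
  scale_fill_continuous(name="Density")
```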
5. The Facets
In the previous chart, you had the scatterplot for all different values of cut plotted in the same chart. What if you want one chart for one cut?
gg1 +facet_wrap( ~cut, ncol=3) # columns defined by 'cut'
facet_wrap(formula) takes in a formula as the argument. The item on the RHS corresponds to the column. The item on the LHS defines the rows.
In facet_wrap, the scales of the X and Y axes are fixed by default to accommodate all points. This makes comparison of attributes meaningful because they are on the same scale. However, it is possible to let the scales roam free, making the charts look more evenly distributed, by setting the argument scales="free".
For comparison purposes, you can put all the plots in a grid as well using facet_grid(formula).
gg1 +facet_grid(color ~cut) # In a grid
Note that the headers for the individual plots are gone, leaving more space for the plotting area.
6. Commonly Used Features
6.1 Make a time series plot (using ggfortify)
The ggfortify package makes it very easy to plot time series directly from a time series object, without having to convert it to a dataframe. The example below plots the AirPassengers timeseries in one step. Cool! See more of ggfortify’s autoplot options for plotting time series here.
library(ggfortify)
autoplot(AirPassengers) + labs(title="AirPassengers")  # where AirPassengers is a 'ts' object
6.2 Plot multiple timeseries on same ggplot
Plotting multiple timeseries requires that you have your data in dataframe format, in which one of the columns is the date field that will be used for the X-axis.
Approach 1: After converting, you just need to keep adding multiple layers of time series one on top of the other.
Approach 2: Melt the dataframe using reshape2::melt by setting the id to the date field. Then just add one geom_line and set the color aesthetic to variable (which was created during the melt).
# Approach 1:
data(economics, package="ggplot2")  # init data
economics <- data.frame(economics)  # convert to dataframe
ggplot(economics) +
  geom_line(aes(x=date, y=pce, color="pce")) +
  geom_line(aes(x=date, y=unemploy, color="unemploy")) +
  scale_color_discrete(name="Legend") +
  labs(title="Economics")  # plot multiple time series using 'geom_line's
# Approach 2:
library(reshape2)
df <- melt(economics[, c("date", "pce", "unemploy")], id="date")
ggplot(df) + geom_line(aes(x=date, y=value, color=variable)) + labs(title="Economics")  # plot multiple time series by melting
The disadvantage with ggplot2 is that it is not possible to get multiple Y-axes on the same plot. Plotting multiple time series on the same scale can make a few of the series appear small. An alternative would be to facet_wrap it and set scales="free".
6.3 Bar Charts
By default, ggplot makes a ‘counts’ barchart: it counts the frequency of items specified by the x aesthetic and plots it. In this format, you don’t need to specify the Y aesthetic. However, if you would like to make a bar chart of an absolute number, given by the Y aesthetic, you need to set stat="identity" inside geom_bar.
plot1 <- ggplot(mtcars, aes(x=cyl)) + geom_bar() + labs(title="Frequency bar chart")  # Y axis derived from counts of X item
print(plot1)
df <-data.frame(var=c("a", "b", "c"), nums=c(1:3))
plot2 <- ggplot(df, aes(x=var, y=nums)) + geom_bar(stat="identity")  # Y axis is explicit. 'stat=identity'
print(plot2)
6.4 Custom layout
The gridExtra package provides the facility to arrange multiple ggplots in a single grid.
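A small sketch of this, assuming two plots built from the mtcars dataset (the plots themselves are illustrative):

```r
library(ggplot2)
library(gridExtra)

p1 <- ggplot(mtcars, aes(x=cyl)) + geom_bar()        # a bar chart
p2 <- ggplot(mtcars, aes(x=mpg, y=wt)) + geom_point()  # a scatterplot
grid.arrange(p1, p2, ncol=2)  # place both plots side by side in one grid
```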
There are three ways to change the X and Y axis limits:
1. Using coord_cartesian(xlim=c(x1, x2))
2. Using xlim(c(x1, x2))
3. Using scale_x_continuous(limits=c(x1, x2))
Warning: Items 2 and 3 will delete the datapoints that lie outside the limits from the data itself. So, if you add any smoothing line and such, the outcome will be distorted. Item 1 (coord_cartesian) does not delete any datapoint, but instead zooms in to a specific region of the chart.
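A quick sketch of the difference, assuming a scatterplot of mtcars with a regression line (the limits chosen are arbitrary):

```r
library(ggplot2)

base <- ggplot(mtcars, aes(x=mpg, y=wt)) +
  geom_point() +
  geom_smooth(method="lm")  # a smoothing line fitted to the data

base + coord_cartesian(xlim=c(15, 25))       # zooms in; the fit still uses all points
base + xlim(c(15, 25))                       # drops points outside [15, 25]; the fit changes
base + scale_x_continuous(limits=c(15, 25))  # same dropping behavior as xlim()
```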
Adding coord_equal() to ggplot sets the limits of X and Y axis to be equal. Below is a meaningless example. So to save face for not giving a good example, I am not showing you the output.
By setting theme(legend.position="none"), you can remove the legend. By setting it to ‘top’, i.e. theme(legend.position="top"), you can move the legend around the plot. By setting legend.position to a co-ordinate inside the plot, you can place the legend inside the plot itself. The legend.justification denotes the anchor point of the legend, i.e. the point that will be placed at the co-ordinates given by legend.position.
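A small sketch of these three options, assuming a bar chart with a fill legend (the plot itself is illustrative):

```r
library(ggplot2)

gg <- ggplot(mtcars, aes(x=cyl, fill=factor(am))) + geom_bar()

gg + theme(legend.position="none")  # remove the legend entirely
gg + theme(legend.position="top")   # move the legend above the plot
gg + theme(legend.position=c(0.95, 0.95),   # co-ordinates inside the plot panel (0-1 scale)
           legend.justification=c(1, 1))    # anchor the legend by its top-right corner
```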
library(grid)
my_grob <- grobTree(textGrob("This text is at x=0.1 and y=0.9, relative!\n Anchor point is at 0,0",
                             x=0.1, y=0.9, hjust=0,
                             gp=gpar(col="firebrick", fontsize=25, fontface="bold")))
ggplot(mtcars, aes(x=cyl)) + geom_bar() + annotation_custom(my_grob) + labs(title="Annotation Example")
6.13 Saving ggplot
plot1 <- ggplot(mtcars, aes(x=cyl)) + geom_bar()
ggsave("myggplot.png")  # saves the last plot
ggsave("myggplot.png", plot=plot1)  # save a stored ggplot
For a more comprehensive list, the Top 50 ggplot2 Visualizations page provides some advanced ggplot2 charts and helps you choose the right type for your specific objectives.
Science fiction television is filled with fan-favorite characters, but behind every lead hero is their assistant. They’re often present in the background, usually only ever get a first name, and the role might propel their portrayer into minor fame on the comic con celebrity circuit, long after the show has ended. Stephen Furst’s Vir Cotto was one such character in the science fiction show Babylon 5, but over the course of the story, he became so much more than that: an example of where even background characters have incredible importance to the bigger picture.
Furst passed away earlier this week from complications related to diabetes, according to his Facebook page. He had a long acting resume, appearing in films such as Animal House, and shows like St. Elsewhere, but he will be fondly remembered as Vir Cotto in J. Michael Straczynski’s Babylon 5, where he played the diminutive and bumbling assistant to the station’s Centauri ambassador, Londo Mollari.
Babylon 5 ran from 1994 through 1997, and told the story of an interstellar space station, a neutral ground for the galaxy’s various alien species to come together peacefully after a great war. The show depicts the rise of another interstellar war, with a nuanced portrayal of politics and diplomacy deep in space. Furst later went on to direct a handful of episodes of Babylon 5 and its sequel show, Crusade, as well as several films.
Vir might have started out as a meek and bumbling character, but over the course of the show, that changed. In many shows, these characters remain static: funny and deferential to their superiors. While he largely remains at Londo’s side throughout the show, he becomes a moral figure at the center of a complicated story. Vir became a rare example of a background character who grows in importance over the course of the story, whose seemingly naïve moralistic qualities become the most important guide for the characters around him. He essentially becomes a stand-in for Londo’s conscience, calling out his mentor’s disastrous decisions, standing up to powerful figures, and often providing the right and just choice during times of moral ambiguity. In doing so, he becomes an indispensable and heroic character in the show, one who actively influences the outcome of the story.
Furst’s portrayal of Vir was reportedly true to life: in an interview, he recounted how his personality helped get him the role of the character. The character is one that shines because of Furst’s performance, and it holds up as an excellent and memorable example, even twenty years after the show went off the air.
Anyone living in the cramped confines of a city apartment knows the pain of not quite having enough space. There’s nowhere to put that Ping-Pong table you’ve always wanted. Your bike is hanging on the wall, and you’ve already stepped on your kid’s Legos twice this week. Storage is expensive. Every new possession, hobby, and project costs not just money, but precious square footage.
The Sharing Depot, Toronto’s first library of things, helps space-starved urbanites cut costs and clutter without giving up access to the stuff they love. A sort of Zipcar for the little things, the Sharing Depot, which opened earlier this year, lets members borrow items like camping gear, sports equipment, toys, and garden tools. Members pay between $50 and $100 Canadian annually; the higher the level of membership the longer you may keep an item.
When I reach Sharing Depot cofounder Ryan Dyment on the phone, the storefront is busy and loud. Patrons can browse an extensive inventory online or search the Depot’s crowded shelves in person. Skill workshops and swap meets keep sharers engaged, and a volunteer program provides free membership in exchange for working a few shifts per month. “You meet a lot of interesting people,” Dyment says earnestly of the Depot’s growing community.
Before they started up the project, Dyment and cofounder Lawrence Alvarez, a community activist, polled local Torontonians to find out what items people needed occasionally but didn’t have room for at home or found too expensive to buy outright. “The most popular were camping gear, toys, party supplies, those kinds of things,” Dyment says. “So we said ‘OK, let’s do a crowdfund, and see if people want to put their money where their mouth is.’”
On Solaris and illumos, you can inspect shared objects (binaries and libraries) with elfdump. In the most common case, you're simply looking for what shared libraries you're linked against, in which case it's elfdump -d (or, for those of us who were doing this years before elfdump came into existence, dump -Lv). For example:
% elfdump -d /bin/true
Dynamic Section:  .dynamic
     index  tag       value
       [0]  NEEDED    0x1d6       libc.so.1
       [1]  INIT      0x8050d20
and it goes on a bit. But basically you're looking at the NEEDED lines to see which shared libraries you need. (The other field that's generally of interest for a shared library is the SONAME field.)
However, you can go beyond this, and use elfedit to manipulate what's present here. You can essentially replicate the above with:
elfedit -r -e dyn:dump /bin/true
Here the -r flag says read-only (we're just looking), and -e says execute the command that follows, which is dyn:dump - or just show the dynamic section.
If you look around, you'll see that the classic example is to set the runpath (which you might see as RPATH or RUNPATH in the dump output). This was used to fix up binaries that had been built incorrectly, or where you've moved the libraries somewhere other than where the binary normally looks for them. Which might look like:
elfedit -e 'dyn:runpath /my/local/lib' prog
This is the first example in the man page, and the standard example wherever you look. (Note the quotes - that's a single command input to elfedit.)
However, another common case I come across is where libtool has completely mangled the link so the full pathname of the library (at build time, no less) has been embedded in the binary (either in absolute or relative form). In other words, rather than the NEEDED section being
libfoo.so.1
it ends up being
/home/ptribble/build/bar/.libs/libfoo.so.1
With this sort of error, no amount of tinkering with RPATH is going to help the binary find the library. Fortunately, elfedit can help us here too.
First you need to work out which element you want to modify. Back to elfedit again to dump out the structure
% elfedit -r -e dyn:dump /bin/baz
     index  tag        value
       [0]  POSFLAG_1  0x1        [ LAZY ]
       [1]  NEEDED     0x8e2      /home/.../libfoo.so.1
It might be further down, of course. But the entry we want to edit is index number 1. We can narrow down the output just to this element by using the -dynndx flag to the dyn:dump command, for example
elfedit -r -e 'dyn:dump -dynndx 1' /bin/baz
or, equivalently, using dyn:value
elfedit -r -e 'dyn:value -dynndx 1' /bin/baz
And we can actually set the value as well. This requires the -s flag so that the value is treated as a string, and you end up with something like:
elfedit -e 'dyn:value -dynndx -s 1 libfoo.so.1' /bin/baz
and then if you use elfdump or elfedit or ldd to look at the binary, it should pick up the library correctly.
This is really very simple (the hardest part is having to work out what the index of the right entry is). I didn't find anything when searching that actually describes how simple it is, so I thought it worth documenting for the next time I need it.
I've had a few conversations recently where I say things like, "the Japanese Ruby community uses Ruby for different things than in America"... and I get blank stares. Specifically, I mention that America is very centered on Rails and web apps with Ruby. No surprise, right?
"But then," people ask, "if they're not using Ruby for Rails, what do they do with it?"
And why does anybody care? For the same reason I have these conversations. Because the American style of Rails usage lends itself to throwing huge amounts of memory and concurrency at your problems, and the Japanese style of Ruby usage does not. This normally comes up when they ask, "but why can't Ruby just use JIT?" JIT is complex and memory-intensive. It's great for running a web server. It sucks for... Well, let's look at what the Japanese folks do, shall we?
(The wonderful Twitter exchange in response to this post also examines what's up with MRI and JIT. If you're here for the JIT, it's worth a read.)
The Photogenic Zachary Scott and a billboard for "Ruby City Matsue" in Shimane Prefecture, Japan
A Difference in Community
The American Ruby community mostly happened because of Rails. Yes, yes, Ruby had a long and storied history before Rails happened (and yes, it did.) But America finally noticed Ruby because of Rails.
As a result, Ruby's fortunes in America have looked a lot like how Rails is currently doing. Rails rose and Ruby rose. Rails has mostly peaked and is decreasing, and so is Ruby. It's not that Ruby is only used for Rails -- it isn't. But the two have risen and fallen together in the United States, and in most of the English-speaking world.
Japan has looked a little different. Not only was Ruby popular long before Rails came along, Rails wasn't the sort of wildfire in Japan that it had been in America. And now that the tides of Rails are receding and you're seeing fewer American regional Ruby conferences...
Japan has them all over the place, and their number is only increasing. Ruby-no-Kai is Japan's version of Ruby Central, and is hosting six or more regional RubyKaigi (Ruby conferences) this year -- just in Japan! Some of the conferences are new; some have met for years, up to 11 years (!) for the oldest. And of course, there's the worldwide RubyKaigi. There is also an enterprise conference, Ruby World. And multiple award conferences: RubyBiz and the Fukuoka Ruby Awards, plus a Ruby Prize at Ruby World. Ruby is still very much growing in Japan. As a fun little aside: Ruby-no-Kai tracks their conferences with a bug tracker, so you can see them there.
Another difference: government sponsorship. Japan is very proud that Ruby was invented in Japan and is still based there. FCOCA, part of Fukuoka Prefecture, sponsored multiple American Ruby tours and a bunch of embedded Ruby work, and a variety of Ruby-based contests and awards. Shimane sponsors Ruby work as well, and has Matsue ("The Ruby City".) There are areas that used to be miniature Silicon Valleys of their own, and their local government is trying to get over that hump with... Ruby. Often, but not always, embedded Ruby and Ruby IoT devices.
That's one reason you see a lot of Japanese government sponsorship for mruby. American audiences often ask, "why would you want an embedded Ruby?" But for the Japanese, it's a lot of how they were already using Ruby. Ruby has great memory usage and embeds pretty well. mruby embeds really well. But embedded Ruby and mruby aren't a big part of the English-speaking Ruby world.
One other major difference in the Japanese Ruby community is how centralized it is. Many of the core contributors like KoichiSasada, ShyouheiUrabe, YuiNaruse, ZacharyScott and AkiraMatsuda live within 10-15 minutes of each other and talk often. Matz, of course, talks to them all regularly, including at regular committer meetings. Their regional conferences are run primarily by one organization, and their sponsorship comes primarily from a few specific sources.
One more point that affects how the core Ruby committers view Ruby technically: Matz is employed full-time by Heroku, and Koichi (author of the current Ruby VM, Director of the Ruby Association) was until recently. Heroku is an American company, owned by SalesForce. But it's also a hosting company, and so its views on memory usage (its biggest expense) versus CPU (often idle, easy to 'move' between VMs) is rather different than an American company hosting Rails on raw EC2 instances. They also really want Ruby to behave well on the smallest Heroku instances, for all sorts of good reasons.
A Japanese Enterprise Ruby Conference
For some other differences, let's look at the program for Ruby World 2016, which happened in Matsue, Shimane, Japan.
The first Ruby World talk was about using Ruby for an in-car electrical control unit testing machine. The second talk was about using embedded mruby to develop applications for embedded hardware. So yes, there's that embedded thing...
The third talk is about Enechange, an electricity price comparison service. That one would have a web site, but it's still not what you'd think of as a typical U.S. Ruby-based startup.
Next come sponsor talks from Hitachi and from Eiwa System Management. Based on their company page, which mentions "in-vehicle system development of automobiles," I'd guess there is some embedded Ruby-in-cars going on there too.
The following two talks are about Scientific Computing, followed by machine learning infrastructures. Both are useful, and both happen in the English-speaking Ruby world as well, but I see them more from the Japanese Ruby community. On the "Japanese data management" front, Treasure Data is also worth a mention. They're also a significant force in the Japanese Ruby community, and they also employ prominent Ruby folks.
The next Ruby World talk, on learning mruby with Lego MindStorms does sound like something you'd see at an English-language Ruby convention, but it's also embedded. And after a "scaling the company with Ruby" talk from R-learning, an "IT services and support" company, is one called "Tried to start programming class for children in a small town," which again sounds like something you'd see at a Ruby convention in California or New York.
A lot of the other talks are also about the business or practice of development rather than applications of Ruby -- for instance about Agile, DevOps and how to get a job as a developer. And after a sponsor talk from an IoT sensor company focused on sake brewing, there's a sponsor talk from a Rails consultancy. So it's certainly not as if America and Japan use Ruby totally differently.
Same and Different, Different and Same
You'll see some Ruby on Rails in the Japanese community, it's true. But you'll also find that they often use it a bit differently -- like CookPad, which proudly runs the world's largest Rails monolith, basically by using Rails as a CMS. It's conceptually more like WordPress than it is like Twitter.
The Ruby Association, from Google's Street view
And of course, the English-speaking Ruby world isn't all Rails. You'll find some machine learning and IoT in American Ruby conferences. Presumably Ruby is even running in a car somewhere in America as well. There are definitely liaisons between the Ruby and Rails worlds, like AaronPatterson, AkiraMatsuda and RichardSchneeman. But the overall focus is different.
So: the next time you think, "why isn't Ruby perfectly optimized for Rails and Rails alone?" it's worth remembering the Japanese folks. That's where Ruby comes from. It's where most of the Ruby development happens. And it's a different world, doing different things. There's some Rails, yes. But Rails is a long way from being their whole world.
Many thanks to ZacharyScott, who knows far more about the Japanese Ruby community than I do. He read drafts of this article, suggested many new angles, and helped me see where I'd made some significant mistakes. A lot of the "Difference in Community" section is information he graciously pointed out and I hadn't known.
And many thanks to Matz for Ruby, for mruby, and for corrections to this article about mruby and Heroku!
Once you have installed hyper-pokemon, it's time to set your favorite theme!
Go to your ~/.hyper.js and add the pokemon and pokemonSyntax options below the colors object, and define your theme of choice!
Here is a quick example, where we choose the gengar theme, with a unibody color for the window header & dark terminal tabs! 👻
config: {
  // ...
  colors: {
    // ...
  },
  pokemon: 'gengar', // Define your favorite pokemon theme!
  pokemonSyntax: 'dark', // Define the color of the terminal tabs.
  unibody: 'true', // Define the color of the Hyper window header.
}
To get the exact same look as in this image, install oh-my-zsh and choose pure as your zsh prompt 🐱
# clone the repository
$ git clone https://github.com/klauscfhq/hyper-pokemon.git
# navigate to the project directory
$ cd hyper-pokemon
Using npm
# get the package & set it as a dependency
$ npm install hyper-pokemon --save
# or set it as a devDependency
$ npm install hyper-pokemon --save-dev
# or even save it globally
$ npm install hyper-pokemon -g
Searching for Emacs Lisp alternative? Try hacking Emacs in Go!
Overview
Description
goism is an Emacs package that makes it possible to use the Go programming language instead of Emacs Lisp inside Emacs. It provides Go intrinsics and an emacs package that make it possible to control Emacs from your programs. Generated functions, methods and variables can be accessed from Emacs Lisp code. Enjoy the increased type safety and curly braces!
How it works
A valid Go package is converted into Emacs Lisp bytecode. The Emacs goism package implements the Go runtime, so the translated code behaves as close to the spec as possible. Various optimizations are performed during this translation, so it is not going to be any slower than "native" Emacs Lisp.
In my previous post I discussed my concerns about the additional complexity that adding generics or immutability would bring to a future Go 2.0. As it was an opinion piece, I tried to keep it around 500 words. This post is an exploration of the most important (and possibly overlooked) point of that post.
Indeed, the addition of [generics and/or immutability] would have a knock-on effect that would profoundly alter the way error handling, collections, and concurrency are implemented.
Specifically, what I believe would be the possible knock-on effect of adding generics or immutability to the language.
A powerful motivation for adding generic types to Go is to enable programmers to adopt a monadic error handling pattern. My concerns with this approach have little to do with the notion of the maybe monad itself. Instead I want to explore the question of how this additional form of error handling might be integrated into the stdlib, and thus the general population of Go programmers.
Right now, to understand how io.Reader works you need to know how slices work, how interfaces work, and how nil works. If the if err != nil { return err } idiom were replaced by an option type or maybe monad, then everyone who wanted to do basic things like read input or write output would have to understand how option types or maybe monads work, in addition to what templated types are and how they are implemented in Go.
Obviously it’s not impossible to learn, but it is more complex than what we have today. Newcomers to the language would have to integrate more concepts before they could understand how basic things like reading from a file work.
The next question is: would this monadic form become the single way errors are handled? It seems confusing, and gives unclear guidance to newcomers to Go 2.0, to continue to support both the error interface model and a new monadic maybe type. Also, if some form of templated maybe type were added, would it be a built-in, like error, or would it have to be imported in almost every package? Note: we’ve been here before with os.Error.
What began as the simple request to create the ability to write a templated maybe or option type has ballooned into a set of questions that would affect every single Go package ever written.
Another reason to add templated types to Go is to facilitate custom collection types without the need for interface{} boxing and type assertions.
On the surface this sounds like a grand idea, especially as these types are leaking into the standard library anyway. But that leaves the question of what to do with the built in slice and map types. Should slices and maps co-exist with user defined collections, or should they be removed in favour of defining everything as a generic type?
To keep both sounds redundant and confusing, as all Go developers would have to be fluent in both and develop a sophisticated design sensibility about when and where to choose one over the other. But to remove slices and maps in favour of collection types provided by a library raises other questions.
Slicing
For example, if there is no slice type, only types like a vector or linked list, what happens to slicing? Does it go away? If so, how would that impact common operations like handling the result of a call to io.Reader.Read? If slicing doesn’t go away, would that require the addition of operator overloading so that user defined collection types can implement a slice operator?
Then there are questions on how to marry the built in map type with a user defined map or set. Should user defined maps support the index and assignment operators? If so, how could a user defined map offer both the one and two return value forms of lookup without requiring polymorphic dispatch based on the number of return arguments? How would those operators work in the presence of set operations which have no value, only a key?
Which types could use the delete function? Would delete need to be modified to work with types that implement some kind of Deleteable interface? The same questions apply to append, len, cap, and copy.
What about addressability? Values in the built in map type are not addressable, but should that be permitted or disallowed for user defined map types? How would that interact with operator overloading designed to make user defined maps look more like the built in map?
What sounded like a good idea on paper—make it possible for programmers to define their own efficient collection data types—has highlighted how deeply integrated the built in map and slice are and spawned not only a requirement for templated types, but operator overloading, polymorphic dispatch, and some kind of return value addressability semantics.
How could you implement a vector?
So, maybe you make the argument that now we have templated types we can do away with the built in slice and map, and replace them with a Java-esque list of collection types.
Go’s Pascal-like array type has a fixed size known at compile time. How could you implement a growable vector without resorting to unsafe hacks? I’ll leave that as an exercise to the reader. But I put it to you that if you cannot implement a simple templated vector type with the memory safety we enjoy today with slices, then that is a very strong design smell.
Iteration
I’ll admit that the inability to use the for ... range statement over my own types was something that frustrated me for a long time when I came to Go, as I was accustomed to the flexibility of the iterator types in the Java collections library.
But iterating over in-memory data structures is boring—what you really want to be able to do is compose iterators over database results and network requests. In short, data from outside your process—and when data is outside your process, retrieving it might fail. In that case you have a choice, does your Iterable interface return a value, a value and an error, or perhaps you go down the option type route. Each would require a new form of range loop semantic sugar in an area which already contains its share of footguns.
You can see that adding the ability to write template collection types sounds great on paper, but in practice it would perpetuate a situation where the built in collection types live on in addition to their user defined counterparts. Each would have their strengths and weaknesses, and a Go developer would have to become proficient in both. This is something that Go developers just don’t have to think about today as slices and maps are practically ubiquitous.
Russ wrote at the start of the year that a story for reference immutability was an important area of exploration for the future of Go. Having surveyed hundreds of Go packages and found few which are written with an understanding of the problem of data races—let alone actually tried running their tests under the race detector—it is tempting to agree with Russ that the ‘after the fact’ model of checking for races at run time has some problems.
On balance, after thinking about the problems of integrating templated types into Go, I think if I had to choose between generics and immutability, I’d choose the latter.
But the ability to mark a function parameter as const is insufficient, because while it restricts the receiver from mutating the value, it does not prohibit the caller from doing so, which is the majority of the data races I see in Go programs today. Perhaps what Go needs is not immutability, but ownership semantics.
While the Rust ownership model is undoubtedly correct—if your program compiles, it has no data races—nobody can argue that the ownership model is simple or easy for newcomers. Nor would adding an extra dimension of immutability to every variable declaration in Go be simple, as it would force every user of the language to write their programs from the most pessimistic standpoint of assuming every variable will be shared and mutated concurrently.
These are some of the knock on effects that I see of adding generics or immutability to Go. To be clear, I’m not saying that it should not be done, in fact in my previous post I argued the opposite.
What I want to make clear is adding generics or immutability has nothing to do with the syntax of those feature, little to do with their underlying implementation, and everything to do with the impact on the overall complexity budget of the language and its libraries, that these features would unlock.
David Symonds argued years ago that there would be no benefit in adding generics to Go if they were not used heavily in the stdlib. The question, and concern, I have is: would the result be more complex than what we have today with our quaint built in slice, map, and error types?
I think it is worth keeping in mind the guiding principles of the language—simplicity and readability. The design of Go does not follow the accretive model of C++ or Java. The goal is not to reinvent those languages, minus the semicolons.
If there is one complaint about work I hear most often from clients and friends, it’s that they have a hard time focusing in their office. No matter if you’re pairing with someone or need some quiet time: if you can’t escape the background noise, you’ll have a hard time getting any work done. Let me share with you how I try to have people happily leave their headphones in their backpacks.
Fix your office or don’t force people to come
A strong position, given, but let’s be honest: If a company can’t provide room for focus time, it needs to create a decent remote/home office culture and policies.
See for example how vaamo, my previous employer, describes their Remote Culture in one of their current job offerings:
Your workplace is no-bullshit, like many other things at vaamo: Work where you want! You need an office? No problem, we have a beautiful office space in Frankfurt, which is easily reachable with public transit. You want to work from home or a coworking space? No problem! GitHub, Slack, Screenhero, Hangout etc will make sure nothing gets lost and we stay up to date. Except for that you miss being challenged at the soccer table … just sayin’.
If that’s out of the question though - and I understand there are plenty of reasons why remote might not fit your company right now - it might be time to consider a new office or some serious investment in noise reduction in the present one.
Get some sound barriers for workstations, acoustic foam for phone booths, and ceiling baffles for the whole office. Create extra spaces for people to have a conversation. This cost is nothing compared to the loss in productivity that interruptions create throughout everyone’s workday.
Design offices for functionality
In recent weeks, I’ve seen two particular office designs that stuck with me. First, Markus Tacker shared his efforts at creating his vision of an office - a tour worth taking. He also mentions visual noise - distractions that happen in your field of view - as something to account for, and I fully agree.
The second encounter was visiting friends from Zweitag in Münster a few weeks ago. For two weeks, I worked in their office and was amazed by how they manage to provide enclosed spaces to focus on work while also creating spaces and opportunities for smalltalk and collaboration in the hallways. Their office is fairly large, divided into rooms for four people each. The rooms are not directly connected, but the same small part of the wall in each room is replaced with framed glass, giving you the opportunity to glance into the other offices to see if people are still working there (“light’s still on”). Sprinkled throughout their hallways are places to sit and work, a coffee bar, a terrace for smokers 😅 and plenty of other occasions to have a chat throughout the day.
I tried to sketch the office in SketchUp but failed miserably. If you’re curious, you either have to visit them in beautiful Münster or wait for me to get my hands on a decent mouse 😅.
Enable people to raise awareness about loudness
At the core of the issue with noise lies, as so often, a people problem: one can, without bad intentions, be completely oblivious to how loudly they’re talking or how disruptive it is to shout across a room.
Making it visible that people are actively being disturbed in their work is a good starting point in order to sensitise people.
This way, they might move into a meeting room before the noise level gets out of hand.
This is where Noisy comes into play, a small Slackbot I wrote that enables people to anonymously raise the issue for everyone else to see.
Mind you, Noisy does not help stop an interruption in the moment - the people talking are most likely not looking at Slack right then - but it makes them notice afterwards how their behavior was affecting others. It also lowers the barrier for people to come forward and speak out about the noise, while not addressing or blaming any individual.
Setting up something like Noisy is easy as pie: it’s an AWS Lambda function, invoked through a Slack slash command, that uses Slack’s Incoming Webhooks API to send a message to #general without mentioning the person who triggered the command.
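A minimal sketch of such a Lambda function, assuming Python as the runtime; the webhook URL environment variable, the message text, and the function names are illustrative assumptions, not taken from the original bot:

```python
import json
import os
from urllib import request

# Illustrative: in practice the URL comes from the Slack
# "Incoming Webhooks" configuration, stored as a Lambda env var.
WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")


def build_payload(channel="#general"):
    """Build the anonymous message Noisy posts to Slack.

    The slash-command payload (which contains the user's name)
    is deliberately ignored, so whoever triggered /noisy stays
    anonymous.
    """
    return {
        "channel": channel,
        "username": "Noisy",
        "text": "Someone finds it too loud right now. "
                "Maybe move the conversation to a meeting room?",
    }


def handler(event, context):
    """AWS Lambda entry point, invoked via the Slack slash command."""
    data = json.dumps(build_payload()).encode("utf-8")
    req = request.Request(
        WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # fire-and-forget post to Slack
    # An empty 200 response keeps the slash command itself silent
    # in the channel where it was typed.
    return {"statusCode": 200, "body": ""}
```

The key design point is that the outgoing payload is built without any field from the incoming event, which is what makes the report anonymous.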
I assembled this chart to show how this can already have an impact five days into the experiment: on some days, people triggered Noisy (dark grey) more than once a day, provoking some 👍 (light grey) that further drove the point home.
But Noisy is just the start. I wondered how I could create a mindful atmosphere in the office so I could remove Noisy from the Slack Team at some point.
Automating the loudness indicator
Going a step further, I’d love for people to learn to look for a meeting room before it gets too loud. Cleware, a company in northern Germany, sells a USB traffic light. I set it up with a Raspberry Pi and a microphone to continuously measure the noise level in the office.
The color coding is straightforward:
If the traffic light is green, all is well.
If it turns yellow, the mic picked up a rise in noise over the last few seconds.
If it turns red, someone issued the by now familiar /noisy command in Slack.
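The three states above can be sketched as a small decision function, assuming Python on the Raspberry Pi; the RMS threshold and the function names are illustrative assumptions (a real setup would calibrate the threshold and drive the Cleware light via its own driver):

```python
import math

# Illustrative threshold on raw sample amplitude; a real office
# setup would need calibration against the actual microphone.
YELLOW_RMS_THRESHOLD = 2000.0


def rms(samples):
    """Root mean square of a window of raw microphone samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))


def light_color(samples, noisy_flag):
    """Map the current state to a traffic-light color.

    red:    someone issued /noisy in Slack (noisy_flag is set)
    yellow: the last window of audio exceeded the threshold
    green:  all is well
    """
    if noisy_flag:
        return "red"
    if rms(samples) > YELLOW_RMS_THRESHOLD:
        return "yellow"
    return "green"
```

A polling loop would read a few seconds of audio, call `light_color`, and set the USB light accordingly; the /noisy flag takes priority so the human signal always wins over the automatic one.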
There is no black or white in handling this. While you might be able to create Quiet Zones, just as you can create Collaboration Zones, people working together in a room will make noise and hopefully it’s productive noise most of the time.
The first step here is being aware that there are different types of people in your company; some who like the background noise, some who very much dislike it. Giving them both the option to work the way they enjoy the most is something to start with. Working on their awareness of each other is the next step.
Professor Jim Herod and I have written Multivariable Calculus, a book which we and a few others have used here at Georgia Tech for two years. We have also proposed that this be the first calculus course in the curriculum here, but that is another story....
I have also written a modest book, Complex Analysis, which I have used in our introductory undergraduate complex analysis course here.
Complex Variables, by Robert Ash and W. P. Novinger. This is a substantial revision of the first edition of Professor Ash's complex variables text, originally published in 1971.
An introductory algebraic topology book, Algebraic Topology I, by Professor Allen Hatcher of Cornell University, is available, and Professor Hatcher promises that the second volume, Algebraic Topology II, will be ready soon.
Thanks to Malaspina Great Books, Mechanism of the Heavens (1831), by Mary Somerville, is available online. This second edition was prepared by Russell McNeil.
Lecture Notes on Optimization, by Pravin Varaiya. This is a re-issue of a book out of print since 1975. It is an introduction to mathematical programming, optimal control, and dynamic programming.
Introduction to Continuum Mechanics, by Ray M. Bowen, also originally published by Plenum Press, now available from Dover and made freely available here.
Dedicated to Ian Murdock
------------------------
Ian Murdock, the founder of the Debian project, passed away on 28th December
2015 at his home in San Francisco. He was 42.
It is difficult to exaggerate Ian's contribution to Free Software. He led the
Debian Project from its inception in 1993 to 1996, wrote the Debian manifesto
in January 1994 and nurtured the fledgling project throughout his studies at
Purdue University.
Ian went on to be founding director of Linux International, CTO of the Free
Standards Group and later the Linux Foundation, and leader of Project Indiana
at Sun Microsystems, which he described as "taking the lesson that Linux has
brought to the operating system and providing that for Solaris".
Debian's success is testament to Ian's vision. He inspired countless people
around the world to contribute their own free time and skills. More than 350
distributions are known to be derived from Debian.
We therefore dedicate Debian 9 "stretch" to Ian.
-- The Debian Developers
Abstract: Serverless computing has emerged as a compelling new paradigm for the
deployment of applications and services. It represents an evolution of cloud
programming models, abstractions, and platforms, and is a testament to the
maturity and wide adoption of cloud technologies. In this chapter, we survey
existing serverless platforms from industry, academia, and open source
projects, identify key characteristics and use cases, and describe technical
challenges and open problems.