
Biology Student Discovers Plastic-Eating Bacteria


One of the world’s environmental crises could be solved by a bacterium that eats plastic and breaks it down into harmless by-products. The bacterium was found by Morgan Vague, a biology student at Reed College in Oregon.

This bacterium can degrade polyethylene terephthalate (PET), a common plastic used in clothing, bottles and food packaging. PET can take centuries to degrade, and until it does, it damages the environment.

Morgan Vague believed she could speed up the process and help address a large part of the plastic pollution on our planet:

“When I started learning about the statistics about all the plastic waste we have, essentially that told me we have a really serious problem here and we need some way to address it.”

Then, she learned about bacterial metabolism and “all the crazy things bacteria can do,” so she started to see if microbes could degrade the plastic we get “straight-from-the-store.”

Testing 300 Strains of Bacteria – 3 of Them ‘Ate’ PET

The first step was to hunt for microbes around refineries near her home in Houston, searching for bacteria that had adapted to degrade plastic both in soil and in water. She took samples back to college in Portland, Oregon, and began testing almost 300 strains of bacteria.

In her search, she was looking for an enzyme that could digest fat and break plastic down into food for the bacteria.

Vague found 20 strains that produced lipase, three of them with high levels of the enzyme. She took those three and began feeding them PET:

“It looks like it breaks it down into harmless by-products that don’t do any environmental damage, so right now what it’s doing is breaking down the hydrocarbons within the plastic, and then the bacteria can use that as food and fuel. So essentially it’s using that to live. It’s essentially turning plastic into food.”

However, there is a long way to go before the bacteria can be put to work. Jay Mellies, a microbiologist and the supervisor of Ms. Vague’s thesis, says the next steps are to make the bacteria eat plastic faster and to get them to eat a wider range of plastics, concluding that:

“This is not going to be the total solution, but I think it’s going to be part of the solution.”

Rex Austin was born and raised in Thunder Bay, Ontario, on the shores of Lake Superior. Apart from running his own podcast (Ice Fishing And Other “Cool” Things), he spends his time canoeing and backpacking in Northern Ontario. As a journalist, Rex has published stories for Global News (Thunder Bay) as well as BuzzFeed and Joystiq. As a contributor to Great Lakes Ledger, Rex mostly covers science and health stories.


After the Fall: Ten Years After the Crash


After the Fall

John Lanchester

Some of the more pessimistic commentators at the time of the credit crunch, myself included, said that the aftermath of the crash would dominate our economic and political lives for at least ten years. What I wasn’t expecting – what I don’t think anyone was expecting – was that ten years would go by quite so fast. At the start of 2008, Gordon Brown was prime minister of the United Kingdom, George W. Bush was president of the United States, and only politics wonks had ever heard of the junior senator from Illinois; Nicolas Sarkozy was president of France, Hu Jintao was general secretary of the Chinese Communist Party, Ken Livingstone was mayor of London, MySpace was the biggest social network, and the central bank interest rate in the UK was 5.5 per cent.

It is sometimes said that the odds you could get on Leicester winning the Premiership in 2016 made it the single most mispriced bet in the history of bookmaking: 5000 to 1. To put that in perspective, the odds on the Loch Ness monster being found are a bizarrely low 500 to 1. (Another 5000 to 1 bet offered by William Hill is that Barack Obama will play cricket for England. I’d advise against that punt.) Nonetheless, 5000 to 1 pales in comparison with the odds you would have got in 2008 on a future world in which Donald Trump was president, Theresa May was prime minister, Britain had voted to leave the European Union, and Jeremy Corbyn was leader of the Labour Party – which to many close observers of Labour politics is actually the least likely thing on that list. The common factor explaining all these phenomena is, I would argue, the credit crunch and, especially, the Great Recession that followed.

Perhaps the best place to begin is with the question, what happened? Answering it requires a certain amount of imaginative work, because although ten years ago seems close, some fundamentals in the way we perceive the world have shifted. The most important component of the intellectual landscape of 2008 was a widespread feeling among elites that things were working fine. Not for everyone and not everywhere, but in aggregate: more people were doing better than were doing worse. Both the rich world and the poor world were measurably, statistically, getting richer. Most indices of quality of life, perhaps the most important being longevity, were improving. We were living through the Great Moderation, in which policymakers had finally worked out a way of growing economies at a rate that didn’t lead to overheating, and didn’t therefore result in the cycles of boom and bust which had been the defining feature of capitalism since the Industrial Revolution. Critics of capitalism had long argued that it had an inherent tendency towards such cycles – this was a central aspect of Marx’s critique – but policymakers now claimed to have fixed it. In the words of Gordon Brown: ‘We set about establishing a new economic framework to secure long-term economic stability and put an end to the damaging cycle of boom and bust.’ That claim was made when Labour first got into office in 1997, and Brown was still repeating it in his last budget as chancellor ten years later, when he said: ‘We will never return to the old boom and bust.’

I cite this not to pick on Gordon Brown, but because this view was widespread among Western policymakers. The intellectual framework for this overconfidence was derived from contemporary trends in macroeconomics. Not to put too fine a point on it, macroeconomists thought they knew everything. Or maybe not everything, just the most important thing. In a presidential address to the American Economic Association in 2003, Robert Lucas, Nobel prizewinner and one of the most prominent macroeconomists in the world, put it plainly:

Macroeconomics was born as a distinct field in the 1940s, as a part of the intellectual response to the Great Depression. The term then referred to the body of knowledge and expertise that we hoped would prevent the recurrence of that economic disaster. My thesis in this lecture is that macroeconomics in this original sense has succeeded: its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades.

Solved. For many decades. That was the climate of intellectual overconfidence in which the crisis began. It’s been said that the four most expensive words in the world are: ‘This time it’s different.’ We can ignore the lessons of history and indeed of common sense because there’s a new paradigm, a new set of tools and techniques, a new Great Moderation. But one of the things that happens in economic good times – a very clear lesson from history which is repeatedly ignored – is that money gets too cheap. Too much credit enters the system and there is too much money looking for investment opportunities. In the modern world that money is hotter – more rapidly mobile and more globalised – than ever before. Ten and a bit years ago, a lot of that money was invested in a sexy new opportunity created by clever financial engineering, which magically created high-yielding but completely safe investments from pools of risky mortgages. Poor people with patchy credit histories who had never owned property were given expensive mortgages to allow them to buy their first homes, and those mortgages were then bundled into securities which were sold to eager investors around the world, with the guarantee that ingenious financial engineering had achieved the magic trick of high yields and complete safety. That, in an investment context, is like claiming to have invented an antigravity device or a perpetual motion machine, since it is an iron law of investment that risks are correlated with returns. The only way you can earn more is by risking more. But ‘this time it’s different.’

The thing about debt and credit is that most of the time, in conventional economic thinking, they don’t present a problem. Every credit is a debt, every debt is a credit, assets and liabilities always match, and the system always balances to zero, so it doesn’t really matter how big those numbers are, how much credit or debt there is in the system, the net is always the same. But knowing that is a bit like climbing up a very, very long ladder and knowing that it’s a good idea not to look down. Sooner or later you inevitably do, and realise how exposed you are, and start feeling different. That’s what happened in the run-up to the credit crunch: people suddenly started to wonder whether these assets, these pools of mortgages (which by this point had been sold and resold all around the financial system so that nobody was clear who actually owned them, like a toxic version of pass the parcel in which nobody knows who is holding the parcel or what’s in it), were worth what they were supposed to be worth. They noticed just how high up the ladder they had climbed. So they started descending the ladder. They started withdrawing credit. What happened next was the first bank run in the UK since the 19th century, the collapse of Northern Rock in September 2007 and its subsequent nationalisation. Northern Rock had an unusual business model in that instead of relying on customer deposits to meet its operational needs it borrowed money short-term on the financial markets. When credit became harder to come by, that source of funding suddenly wasn’t there any more. Then, just as suddenly, Northern Rock wasn’t there any more either.

That was the first symptom of the global crisis, which reached the next level with the very similar collapse of Bear Stearns in March 2008, followed by the crash that really did take the entire global financial system to the brink, the implosion of Lehman Brothers on 15 September. Because Lehmans was a clearing house and repository for many thousands of financial instruments from around the system, suddenly nobody knew who owed what to whom, who was exposed to what risk, and therefore which institutions were likely to go next. And that is when the global supply of credit dried up. I spoke to bankers at the time who said that what happened was supposed to be impossible, it was like the tide going out everywhere on Earth simultaneously. People had lived through crises before – the sudden crash of October 1987, the emerging markets crises and the Russian crisis of the 1990s, the dotcom bubble – but what happened in those cases was that capital fled from one place to another. No one had ever lived through, and no one thought possible, a situation where all the credit simultaneously disappeared from everywhere and the entire system teetered on the brink. The first weekend of October 2008 was a point when people at the top of the global financial system genuinely thought, in the words of George W. Bush, ‘This sucker could go down.’ RBS, at one point the biggest bank in the world according to the size of its balance sheet, was within hours of collapsing. And by collapsing I mean cashpoint machines would have stopped working, and insolvencies would have spread from RBS to other banks – and no one alive knows what that would have looked like or how it would have ended.

The immediate economic consequence was the bailout of the banks. I’m not sure if it’s philosophically possible for an action to be both necessary and a disaster, but that in essence is what the bailouts were. They were necessary, I thought at the time and still think, because this really was a moment of existential crisis for the financial system, and we don’t know what the consequences would have been for our societies if everything had imploded. But they turned into a disaster we are still living through. The first and probably most consequential result of the bailouts was that governments across the developed world decided for political reasons that the only way to restore order to their finances was to resort to austerity measures. The financial crisis led to a contraction of credit, which in turn led to economic shrinkage, which in turn led to declining tax receipts for governments, which were suddenly looking at sharply increasing annual deficits and dramatically increasing levels of overall government debt. So now we had austerity, which meant that life got harder for a lot of people, but – this is where the negative consequences of the bailout start to be really apparent – life did not get harder for banks and for the financial system. In the popular imagination, the people who caused the crisis got away with it scot-free, and, as what scientists call a first-order approximation, that’s about right.

In addition, there were no successful prosecutions of anyone at the higher levels of the financial system. Contrast that with the savings and loan scandal of the 1980s, basically a gigantic bust of the US equivalent of mortgage companies, in which 1100 executives were prosecuted. What had changed since then was the increasing hegemony of finance in the political system, which brought the ability quite simply to rewrite the rules of what is and isn’t legal. One example I saw when I was researching Whoops!, my book on the crisis, was in Baltimore. There people going to buy houses for the first time would turn up at the mortgage company’s office and be told: ‘Look, I’m really sorry, I know we said we’d be able to get you a loan at 6 per cent, but something went wrong at the bank, so the number on here is 12 per cent. But listen, I know you want to come out of here owning a house today – that’s right isn’t it, you do want to leave this room owning your own house for the first time? – so what I suggest is, since there’s a lot of paperwork to get through, you sign it, and we sort out this issue with the loan later, it won’t be a problem.’ That is a flat lie: the loan was fixed and unchangeable and the contract legally binding, but under Maryland law, the principle is caveat emptor, so the mortgage broker can lie as much as they want, since the onus is on the other party to protect their own interests. The result, just in Baltimore, was tens of thousands of people losing their homes. The charity I talked to had no idea where many of those people were: some of them were sleeping in their cars, some of them had gone back to wherever they came from outside the city, others had just vanished. And all that predatory lending was entirely legal.

That impunity, the sense that these things had consequences for us but not for the people who caused the crisis, has been central to the story of the last ten years. It has also been central to the public anger generated by the crash and the Great Recession. In the summer of 2009, when I was writing Whoops!, I remember thinking that a huge storm of rage was coming towards governments once the public realised what a giant hole had been dug for them by the financial system in collusion with their leaders. Then the book came out, and I was giving talks about it all over the place from its publication in January 2010 through the spring and summer, and there was this mysterious lack of rage. People seemed numb and incredulous but not yet angry.

In July 2010 I was in Galway for the arts festival, giving a talk in a room where, I later learned, a former taoiseach was famous for accepting envelopes full of cash during Galway’s racing week. By that point in the publication process you normally have your talk down to a fine art, or as fine as it’s going to get, and my spiel consisted of basically comic points about how reckless and foolish the financial system had been. Normally when I gave the talk people would laugh at the various punchlines, but now there was complete silence in the room – the jokes weren’t landing at all. And yet I could tell people were actually listening. It felt strange. Then the questions began, and all of them were about blame, and I realised everyone in the room was furious. All the questions were about whose fault the crash was, who should be punished, how it was possible that this could have happened and how outrageous it was that the people responsible had got away with it and the rest of society was paying the consequences. I remember thinking that the difference between Ireland and the UK is just that they’re a few months ahead. This is what’s coming.

*

By now we’re eight years into that public anger. Remember that remark made by Robert Lucas, the macroeconomist, that the central problem of depression prevention had been solved? How’s that been working out? How it’s been working out here in the UK is the longest period of declining real incomes in recorded economic history. ‘Recorded economic history’ means as far back as current techniques can reach, which is back to the end of the Napoleonic Wars. Worse than the decades that followed the Napoleonic Wars, worse than the crises that followed them, worse than the financial crises that inspired Marx, worse than the Depression, worse than both world wars. That is a truly stupendous statistic and if you knew nothing about the economy, sociology or politics of a country, and were told that single fact about it – that real incomes had been falling for the longest period ever – you would expect serious convulsions in its national life.

Just as grim, life expectancy has stagnated too, which is all the more shocking because it is entirely unexpected. According to the Continuous Mortality Investigation, life expectancy for a 45-year-old man has declined from an anticipated 43 years of extra life to 42, for a 45-year-old woman from 45.1 more years to 44. There’s a decline for pensioners too. We had gained ten years of extra life since 1960, and we’ve just given one year back. These data are new and are not fully understood yet, but it seems pretty clear that the decline is linked to austerity, perhaps not so much to the squeeze on NHS spending – though the longest spending squeeze, adjusted for inflation and demographics, since the foundation of the NHS has obviously had some effect – but to the impacts of austerity on social services, which in the case of such services as Meals on Wheels and house visits function as an early warning system for illness among the elderly. As a result, mortality rates are up, an increase that began in 2011 after decades in which they had fallen under both parties, and it’s this that is causing the decline in life expectancy.

Life expectancy in the United States is also falling, with the first consecutive-year drop since 1962-63; infant mortality, the generally accepted benchmark for a society’s development, is rising too. The principal driver of the decline in life expectancy seems to be the opioid epidemic, which took 64,000 lives in 2016, many more than guns (39,000), cars (40,000) or breast cancer (41,000). At the same time, the income of the typical worker, the real median hourly income, is about the same as it was in 1971. Anyone time-travelling back to the early 1970s would have great difficulty explaining why the richest and most powerful country in the history of the world had four and a half decades without pandemic, countrywide disaster or world war, accompanied by unprecedented growth in corporate profits, and yet ordinary people’s pay remained the same. I think people would react with amazement and want to know why. Things have been getting consistently better for the ordinary worker, they would say, so why is that process about to stop?

It would be easier to accept all this, philosophically anyway, if since the crash we had made some progress towards reform in the operation of the banking system and international finance. But there has been very little. Yes, there have been some changes at the margin, to things such as the way bonuses are paid. Bonuses were a tremendous flashpoint in the aftermath of the crash, because it was so clear that a) bankers were insanely overpaid; and b) they had incentives for taking risks that paid them huge bonuses when the bets succeeded, but in the event they went wrong, all the losses were paid for by us. Privatised gains, socialised losses. The bonus system has been addressed legally, with new legislation enforcing delays before bonuses can be paid out, and allowing them to be clawed back if things go wrong. But overall remuneration in finance has not gone down. It’s an example of a change that isn’t really a change. The bonus pool in UK finance last year was £15 billion, the largest since 2007.

It’s not that there haven’t been changes. It’s just that it isn’t clear how much of a change the changes are. Bonuses are one example. Another concerns the ring-fencing being introduced in the UK to separate investment banking from retail banking – to separate banks’ casino-like activities on international markets from their piggy-bank-like activities in the real economy. In the aftermath of the crisis there were demands for these two functions to be completely separated, as historically they have been in many countries at many times. The banks fought back hard and as usual got what they wanted. Instead of separation we have a complicated, unwieldy and highly technical process of internal ring-fencing inside our huge banks. When I say huge, I’m referring to the fact that our four biggest banks have balance sheets that, combined, are two and a half times bigger than the UK economy. Mark Carney has pointed out that our financial sector is currently ten times the size of our GDP, and that over the next couple of decades it is likely to grow to 15 or even twenty times its size.

The ring-fence has been under preparation for several years and comes into effect from 2019. It increases the complexity of the system, and a very clear lesson from history is that complexity allows opportunities to game the rules and exploit loopholes. One way of describing modern finance is that it’s a mechanism for enabling very clever, very well paid, very highly incentivised people to spend all day every day thinking of ways to get around rules. Complexity works to their advantage. As for the question of whether ring-fencing makes the financial system safer, the answer again is that we don’t really know. As the financial historian David Marsh observed, the only way you can properly test a firewall is by having a fire.

I think the ring-fence is an opportunity missed. That goes for a lot of the small complicated rules designed to make banks and the financial system safer. Bankers complain about them a lot, which is probably a good sign from the point of view of the public, but it’s not clear they will actually make the system safer compared with the much simpler and cruder mechanism of increasing the amount of equity banks have to hold. At the moment banks operate almost entirely with leverage, meaning borrowed money. When they lose money, they mainly lose other people’s money. An increase in the statutory requirement for bank equity, and the resulting reduction in the amount of leverage they are allowed to employ, would make banks safe through a simple act of brute force: they’d have to lose a lot more of their own money before they started to lose any of anyone else’s. The new rules have made the banks hold more equity, but the systems for calculating how much are notoriously complex and in any case the improvement is a matter of degrees, not an order of magnitude. The plan for increased equity has been most rigorously advocated by the Stanford economist Anat Admati, and the banks absolutely hate the idea. It would make them less profitable, which in turn means bankers would be paid much less, and the system would with absolute certainty be much safer for the public. But that is not the direction we’ve taken, especially in the US, where Trump and the Republican Congress are tearing up all the post-crash legislation.

In some cases, it’s not so much a case of non-change change as of good old-fashioned no change. Take the notorious problem of banks that are too big to fail. That issue is unambiguously more serious than it was before the last crash. The failing banks were eaten by surviving banks, with the result that the surviving banks are now bigger, and the too big to fail problem is worse. Banks have been forced by statute to bring in ‘living wills’, as they’re known, to arrange for their own bankruptcy in the event of their becoming insolvent in the way they did ten years ago. I don’t believe those guarantees. These banks have balance sheets that are in some cases as big as the host country’s GDP – HSBC in the UK, or the seriously struggling Deutsche Bank in Germany – and the system simply could not sustain a bankruptcy of that size. Germany is more likely to introduce compulsory public nudity than it is to let Deutsche Bank fail.

In other areas, we’re in the territory that Donald Rumsfeld called known unknowns. The main example is shadow banking. Shadow banking is all the stuff banks do – such as lending money, taking deposits, transferring money, executing payments, extending credit – except done by institutions that don’t have a formal banking licence. Think of credit card companies, insurance companies, companies that let you send money overseas, PayPal. There are also huge institutions inside finance that lend money back and forwards to keep banks solvent, in a process known as the repo market. All these activities taken together make up the shadow banking system. The thing about this system is that it’s much less regulated than formal banks, and nobody is certain how big it is. The latest report from the Financial Stability Board, an international body responsible for doing what it says on the tin, estimates the size of the shadow system at $160 trillion. That’s twice the GDP of Earth. It’s bigger than the entire commercial banking sector. Shadow banking was one of the main routes for spreading and magnifying the crash ten years ago, and it is at least as big and as opaque as it was then.

This brings me to the main and I think least understood point about contemporary financial markets. The mental image of a market is misleading: the metaphor implies a single place where people meet to trade and where the transactions are open and transparent and under the aegis of a central authority. That authority can be formal and governmental or it can just be the relevant collective norms. There are inevitably some asymmetries of information – usually sellers know more than buyers – but basically what you see is what you get, and there is some form of supervision at work. Financial markets today are not like that. They aren’t gathered together in one place. In many instances, a market is just a series of cables running into a data centre, with another series of cables, belonging to a hedge fund specialising in high-frequency trading, running into the same computers, and ‘front-running’ trades by profiting from other people’s activities in the market, taking advantage of time differences measured in millionths of a second.[*] That howling, shrieking, cacophonous pit in which traders look up at screens and shout prices at each other is a stage set (literally so: the New York Stock Exchange keeps one going simply for the visuals). The real action is all in data centres and couldn’t be less like a market in any generally understood sense. In many areas, the overwhelming majority of transactions are over the counter (OTC), meaning that they are directly executed between interested parties, and not only is there no grown-up supervision, in the sense of an agency overseeing the transaction, but it is actually the case that nobody else knows what has been transacted. The OTC market in financial derivatives, for instance, is another known unknown: we can guess at its size but nobody really knows. The Bank for International Settlements, the Basel-based central bank of central banks, gives a twice yearly estimate of the OTC market. The most recent number is $532 trillion.

*

So that’s where we are with markets. Non-change change, in the form of bonus regulation and ring-fencing; no change or change for the worse in the case of complexity and shadow banking and too big to fail; and no overall reduction in the level of risk present in the system.

We are back with the issue of impunity. For the people inside the system that caused a decade of misery, no change. For everyone else, a decade of misery, magnified by austerity policies. Note that austerity policies were not recommended by mainstream macroeconomists, who predicted that they would lead to flat or shrinking GDP, as indeed they did. Instead politicians took the crisis as a political inflection point – a phrase used to me in private by a Tory in 2009, before the public realised what was about to hit them – and seized the opportunity to contract government spending and shrink the state.

The burden of austerity falls much more on the poor than on the better-off, and in any case it is a heavily loaded term, taking a personal virtue and casting it as an abstract principle used to direct state spending. For the top 1 per cent of taxpayers, who pay 27 per cent of all income tax, austerity means you end up better off, because you pay less tax. You save so much on your tax bill you can switch from prosecco to champagne, or if you’re already drinking champagne, you switch to fancier champagne. For those living in precarious circumstances, tiny changes in state spending can have direct and significant personal consequences. In the UK, these have been exacerbated by policies such as benefit sanctions, in which vulnerable people have their benefits withheld as a form of punishment – a self-defeating policy whose cruelty is hard to overstate.

We thus arrive at the topic that more than any other sums up the decade since the crash: inequality. For students of the subject there is something a little crude about referring to inequality as if it were only one thing. Inequality of income is not the same thing as inequality of wealth, which is not the same as inequality of opportunity, which is not the same as inequality of outcome, which is not the same as inequality of health or inequality of access to power. In a way, though, the popular use of inequality, although it may not be accurate in philosophical or political science terms, is the most relevant when we think about the last ten years, because when people complain about inequality they are complaining about all the above: all the different subtypes of inequality compacted together.

The sense that there are different rules for insiders, the one per cent, is global. Everywhere you go people are preoccupied by this widening crevasse between the people at the top of the system and everyone else. It’s possible of course that this is a trick of perspective or a phenomenon of raised consciousness more than it is a new reality: that this is what our societies have always been like, that elites have always lived in a fundamentally different reality, it’s just that now, after the last ten difficult years, we are seeing it more clearly. I suspect that’s the analysis Marx would have given.

The one per cent issue is the same everywhere, more or less, but the global phenomenon of inequality has different local flavours. In China these concerns divide the city and the country, the new prosperous middle class and the brutally difficult lives of migrant workers. In much of Europe there are significant divides between older insiders protected by generous social provision and guaranteed secure employment, and a younger precariat which faces a far more uncertain future. In the US there is enormous anger at oblivious, entitled, seemingly invulnerable financial and technological elites getting ever richer as ordinary living standards stay flat in absolute terms, and in relative terms, dramatically decline. And everywhere, more than ever before in human history, people are surrounded by images of a life they are told they should want, yet know they can’t afford.

A third driver of increased inequality, alongside austerity and impunity for financial elites, has been monetary policy in the form of Quantitative Easing. QE, as it’s known, is the government buying back its own debt with newly minted electronic money. It’s as if you could log into your online bank account and type in a new balance and then use that to pay off your credit card bill. Governments have used this technique to buy back their own bonds. The idea was that the previous bondholders would suddenly have all this cash on their balance sheets, and would feel obliged to put it to work, so they would spend it and then someone else would have the cash and they would spend it. As Merryn Somerset Webb recently wrote in the Financial Times, the cash is like a hot potato that is passed back and forwards between rich individuals and institutions, generating economic activity in the process.

The problem concerns what people do with that hot potato cash. What they tend to do is buy assets. They buy houses and equities and sometimes they buy shiny toys like yachts and paintings. What happens when people buy things? Prices go up. So the prices of houses and equities have been sustained, kept aloft, by quantitative easing, which is great news for people who own things like houses and equities, but less good news for people who don’t, because from their point of view, these things will become ever more unaffordable. A recent analysis by the Bank of England showed that the effect on house prices of QE had been to keep them 22 per cent higher than they would otherwise have been. The effect on equities was 25 per cent. (The analysis used data up to 2014, so both those numbers will have gone up.) We’re back to that question of whether something could be necessary and a disaster at the same time, because QE may well have played an important role in keeping the economy out of a more severe depression, but it has also been a direct driver of inequality, in particular of the housing crisis, which is one of the defining features of contemporary Britain, especially for the young.

Napoleon said something interesting: that to understand a person, you must understand what the world looked like when he was twenty. I think there’s a lot in that. When I was twenty, it was 1982, right in the middle of the Cold War and the Thatcher/Reagan years. Interest rates were well into double digits, inflation was over 8 per cent, there were three million unemployed, and we thought the world might end in nuclear holocaust at any moment. At the same time, the underlying premise of capitalism was that it was morally superior to the alternatives. Mrs Thatcher was a philosophical conservative for whom the ideas of Hayek and Friedman were paramount: capitalism was practically superior to the alternatives, but that was intimately tied to the fact that it was morally better. It’s a claim that ultimately goes back to Adam Smith in the third book of The Wealth of Nations. In one sense it is the climactic claim of his whole argument: ‘Commerce and manufactures gradually introduced order and good government, and with them, the liberty and security of individuals, among the inhabitants of the country, who had before lived almost in a continual state of war with their neighbours and of servile dependency on their superiors. This, though that has been the least observed, is by far the most important of all their effects.’ So according to the godfather of economics, ‘by far the most important of all the effects’ of commerce is its benign impact on wider society.

I know that the plural of anecdote is not data, but I feel that there has been a shift here. In recent decades, elites seem to have moved from defending capitalism on moral grounds to defending it on the grounds of realism. They say: this is just the way the world works. This is the reality of modern markets. We have to have a competitive economy. We are competing with China, we are competing with India, we have hungry rivals and we have to be realistic about how hard we have to work, how well we can pay ourselves, how lavish we can afford our welfare states to be, and face facts about what’s going to happen to the jobs that are currently done by a local workforce but could be outsourced to a cheaper international one. These are not moral justifications. The ethical defence of capitalism is an important thing to have inadvertently conceded. The moral basis of a society, its sense of its own ethical identity, can’t just be: ‘This is the way the world is, deal with it.’

I notice, talking to younger people, people who hit that Napoleonic moment of turning twenty since the crisis, that the idea of capitalism being thought of as morally superior elicits something between an eye roll and a hollow laugh. Their view of capitalism has been formed by austerity, increasing inequality, the impunity and imperviousness of finance and big technology companies, and the widespread spectacle of increasing corporate profits and a rocketing stock market combined with declining real pay and a huge growth in the new phenomenon of in-work poverty. That last is very important. For decades, the basic promise was that if you didn’t work the state would support you, but you would be poor. If you worked, you wouldn’t be. That’s no longer true: most people on benefits are in work too, it’s just that the work doesn’t pay enough to live on. That’s a fundamental breach of what used to be the social contract. So is the fact that the living standards of young people are likely not to be as high as they are for their parents. That idea stings just as much for parents as it does for their children.

This sense of a system gone wrong has led to political crises all across the developed world. From a personal point of view, looking back over the last ten years, some of this I saw coming and some of it I didn’t. I predicted the anger and the decade of economic hard times, and in general I thought life was going to become tougher. I thought it might well lead to a further crisis. But I was wrong about the nature of the crisis. I thought it was likely to be financial rather than political, in the first instance: a second financial crisis which fed into politics. Instead what has happened is Brexit, Trump, and variously startling electoral results from Italy, Hungary, Poland, the Czech Republic and elsewhere.

Part of what happened can be summed up by a British Telecom ad from the 1980s. Maureen Lipman rings up her grandson to congratulate him on his exam results. She has baked a cake and is decorating it but freezes when he tells her, I failed. She asks him what he failed and he says everything: maths, English, physics, geography, German, woodwork, art. But then he lets slip that he passed pottery and sociology and Lipman says: ‘He gets an ology and he says he’s failed? You get an ology, you’re a scientist.’

I suspect I got the wrong ology. Sociology would have been a better social science than economics for understanding the last ten years. Three dominos fell. The initial event was economic. The meaning of it was experienced in ways best interpreted by sociology. The consequences were acted out through politics. From a sociological point of view, the crisis exacerbated faultlines running through contemporary societies, faultlines of city and country, old and young, cosmopolitan and nationalist, insider and outsider. As a direct result we have seen a sharp rise in populism across the developed world and a marked collapse in support for established parties, in particular those of the centre-left.

Electorates turned with special venom against parties offering what was in effect a milder version of the economic consensus: free-market capitalism with a softer edge. It’s as if the voters are saying to those parties: what actually are you for? It’s not a bad question and it’s one that everyone from the Labour Party to the SPD in Germany to the socialists in France to the Democrats in the US is struggling to answer. It’s worth noticing another phenomenon too: electorates are turning to very young leaders – a 43-year-old in Canada, a 37-year-old in New Zealand, a 39-year-old in France, a 31-year-old in Austria. They have ideological differences, but they have in common that they were all in metaphorical nappies when the crisis and the Great Recession hit, so they definitely can’t be blamed. Both France and the US elected presidents who had never run for office before.

In conclusion, it’s all doom and gloom. But wait! From another perspective, the story of the last ten years has been one of huge success. At the time of the crash, 19 per cent of the world’s population were living in what the UN defines as absolute poverty, meaning on less than $1.90 a day. Today, that number is below 9 per cent. In other words, the number of people living in absolute poverty has more than halved, while rich-world living standards have flatlined or declined. A defender of capitalism could point to that statistic and say it provides a full answer to the question of whether capitalism can still make moral claims. The last decade has seen hundreds of millions of people raised out of absolute poverty, continuing the global improvement for the very poor which, both as a proportion and as an absolute number, is an unprecedented economic achievement.

The economist who has done more in this field than anyone else, Branko Milanović at Harvard, has a wonderful graph that illustrates the point about the relative outcomes for life in the developing and developed world. The graph is the centrepiece of his brilliant book Global Inequality: A New Approach for the Age of Globalisation.[†] It’s called the ‘elephant curve’ because it looks like an elephant, going up from left to right like the elephant’s back, then sloping down as it gets towards its face, then going sharply upwards again when it reaches the end of its trunk. Most of the people between points A and B are the working classes and middle classes of the developed world. In other words, the global poor have been getting consistently better off over the last decades whereas the previous global middle class, most of whom are in the developed world, have seen relative decline. The elite at the top have of course been doing better than ever.

What if the governments of the developed world turned to their electorates and explicitly said this was the deal? The pitch might go something like this: we’re living in a competitive global system, there are billions of desperately poor people in the world, and in order for their standards of living to improve, ours will have to decline in relative terms. Perhaps we should accept that on moral grounds: we’ve been rich enough for long enough to be able to share some of the proceeds of prosperity with our brothers and sisters. I think I know what the answer would be. The answer would be OK, fine, but get rid of the trunk. Because if we are experiencing a relative decline why shouldn’t the rich – why shouldn’t the one per cent – be slightly worse off in the same way that we are slightly worse off?

The frustrating thing is that the policy implications of this idea are pretty clear. In the developed world, we need policies that reduce the inequality at the top. It is sometimes said these are very difficult policies to devise. I’m not sure that’s true. What we’re really talking about is a degree of redistribution similar to that experienced in the decades after the Second World War, combined with policies that prevent the international rich person’s sport of hiding assets from taxation. This was one of the focuses of Thomas Piketty’s Capital, and with good reason. I mentioned earlier that assets and liabilities always balance – that’s the way they are designed, as accounting equalities. But when we come to global wealth, this isn’t true. Studies of the global balance sheet consistently show more liabilities than assets. The only way that would make sense is if the world were in debt to some external agency, such as Venusians or the Emperor Palpatine. Since it isn’t, a simple question arises: where’s all the fucking money? Piketty’s student Gabriel Zucman wrote a powerful book, The Hidden Wealth of Nations (2015), which supplies the answer: it’s hidden by rich people in tax havens. According to calculations that Zucman himself says are conservative, the missing money amounts to $8.7 trillion, a significant fraction of all planetary wealth. It is as if, when it comes to the question of paying their taxes, the rich have seceded from the rest of humanity.

A crackdown on international evasion is difficult because it requires international co-ordination, but common sense tells us this would be by no means impossible. Effective legal instruments to prevent offshore tax evasion are incredibly simple and could be enacted overnight, as the United States has just shown with its crackdown on oligarchs linked to Putin’s regime. All you have to do is make it illegal for banks to enact transactions with territories that don’t comply with rules on tax transparency. That closes them down instantly. Then you have a transparent register of assets, a crackdown on trust structures (which incidentally can’t be set up in France, and the French economy functions fine without them), and job done. Politically hard but in practical terms fairly straightforward. Also politically hard, and practically less so, are the actions needed to address the sections of society that lose out from automation and globalisation. Milanović’s preferred focus is on equalising ‘endowments’, an economic term which in the context implies an emphasis on equalising assets and education. If changes benefit an economy as a whole, they need to benefit everyone in the economy – which by implication directs government towards policies focused on education, lifelong training, and redistribution through the tax and benefits system. The alternative is to carry on as we have been doing and just let divides widen until societies fall apart.

[*] Donald MacKenzie wrote about high-frequency trading in the LRB of 11 September 2014.

[†] Harvard, 320 pp., £13.95, April 2016, 978 0 674 98403 5.

Pointers Are More Abstract Than You Might Expect in C


A pointer references a location in memory, and dereferencing a pointer means looking up the value at the memory location the pointer references. The value of a pointer is a memory address. The C standard does not define the representation of a memory address. This is crucial, since not every architecture uses the same memory addressing paradigm. Most modern architectures use a linear address space or something similar. Still, even this is not precise enough, since you might want to talk about physical or virtual addresses. Some architectures even make use of non-numeric addresses. For example, the Symbolics Lisp Machine uses tuples of the form (object, offset) as addresses.
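To make that point concrete, here is a minimal sketch (my own, not from the original article) that prints one numeric view of a pointer. Note that uintptr_t is an optional type in C11 (§ 7.20.1.4), which is itself a reminder that the standard does not promise a numeric address representation:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int a;
    /* The conversion yields *a* numeric view of the pointer's value;
       the actual representation is implementation-defined. */
    uintptr_t addr = (uintptr_t)(void *)&a;
    printf("%ju\n", (uintmax_t)addr);
    return 0;
}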

The representation of a pointer is not defined by the C standard. However, operations involving pointers are defined—at least more or less. In the following we will have a look at these operations and how they are defined. Let’s start with an introductory example:

#include <stdio.h>

int main(void)
{
    int a, b;
    int *p = &a;
    int *q = &b + 1;

    printf("%p %p %d\n", (void *)p, (void *)q, p == q);
    return 0;
}

If compiled with GCC and run on an x86-64 Linux system, the program prints:

0x7fff4a35b19c 0x7fff4a35b19c 0

Note that the pointers p and q point to the same memory address. Still, the expression p == q evaluates to false, which is very surprising at first. Wouldn’t one expect that two pointers pointing to the same memory address compare equal?

The C standard defines the behavior for comparing two pointers for equality as follows:

C11 § 6.5.9 paragraph 6

Two pointers compare equal if and only if both are null pointers, both are pointers to the same object (including a pointer to an object and a subobject at its beginning) or function, both are pointers to one past the last element of the same array object, or one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space.

The first question which probably comes up is: what is an object? Since we are considering the language C, it certainly has nothing to do with objects as known from object-oriented programming languages like C++. The C standard defines an object rather informally as:

C11 § 3.15

object
region of data storage in the execution environment, the contents of which can represent values

NOTE When referenced, an object may be interpreted as having a particular type; see 6.3.2.1.

Let’s be nit-picky. A 16-bit integer variable in memory is a region of data storage that can represent 16-bit integer values. Therefore it is an object. Should two pointers compare equal if the first pointer points to the first byte of the integer and the second pointer to the second byte? Of course this is not what the language committee intended. But at this point we should note that the language is not formally defined, and we have to start guessing what the committee’s intention was.
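As a sketch of the nit-pick (example and names mine, not from the standard), here is how two distinct pointers into a single 16-bit object arise; nobody would want these to compare equal:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t v = 0;                          /* one object: 16 bits of data storage */
    unsigned char *lo = (unsigned char *)&v; /* points to the first byte of v */
    unsigned char *hi = lo + 1;              /* points to the second byte of v */

    printf("%d\n", lo == hi);                /* prints 0: different addresses */
    return 0;
}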

When the Compiler Gets in Your Way

Let’s get back to our introductory example. Pointer p is derived from object a and pointer q is derived from object b. The latter involves pointer arithmetic, which is defined for the plus and minus operators as follows:

C11 § 6.5.6 paragraph 7

For the purposes of these operators, a pointer to an object that is not an element of an array behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type.

Since every pointer to a non-array object is virtually lifted to a pointer to the first element of an array of length one, the C standard only needs to define pointer arithmetic for pointers into arrays, which it finally does in paragraph 8. The interesting part for our case is:

C11 § 6.5.6 paragraph 8

[…] if the expression P points to the last element of an array object, the expression (P)+1 points one past the last element of the array object […]

That means the expression &b + 1 evaluates to an address without any problem, and p and q should both be valid pointers. Recap what the C standard says about comparing two pointers: Two pointers compare equal if and only if […] one is a pointer to one past the end of one array object and the other is a pointer to the start of a different array object that happens to immediately follow the first array object in the address space (C11 § 6.5.9 paragraph 6). This is exactly the case in our example: q points one past the end of object b, and object a, to which p points, immediately follows b in the address space. Is this a bug in GCC? The finding was reported in 2014 as bug #61502, and so far the GCC people argue that this is not a bug and therefore won’t fix it.
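Before moving on, a minimal illustration of the one-past-the-end rule from paragraph 8 (sketch and names mine): such a pointer may be formed and compared, but not dereferenced.

int b;
int *q = &b + 1; /* fine: one past the end of the implied array of length one */
int x = *q;      /* undefined behavior: a one-past-the-end pointer must not be dereferenced */
/* &b + 2 would already be undefined behavior the moment it is evaluated */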

The Linux people ran into a similar problem in 2016. Consider the following code:

extern int _start[];
extern int _end[];

void foo(void)
{
    for (int *i = _start; i != _end; ++i) { /* ... */ }
}

The symbols _start and _end are used to span a memory region. Since the symbols are external, the compiler does not know where the arrays are actually allocated in memory. It must therefore be conservative and assume that they may be allocated next to each other in the address space. Unfortunately, GCC compiled the loop condition into the constant true, turning the loop into an endless loop, as described in this LKML post, where a similar code snippet is used. It looks like GCC has changed its behavior with respect to this problem; at least I couldn’t reproduce it with GCC version 7.3.1 on x86_64 Linux.
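One defensive rewrite that is sometimes suggested for such cases (a sketch under my own assumptions, not necessarily what the kernel did) is to compare the addresses as integers rather than as pointers:

#include <stdint.h>

extern int _start[];
extern int _end[];

void foo(void)
{
    /* Comparing integer values of the addresses instead of the pointers
       themselves; how portable this is can itself be debated, since the
       pointer-to-integer mapping is implementation-defined. */
    for (int *i = _start; (uintptr_t)i != (uintptr_t)_end; ++i) {
        /* ... */
    }
}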

Defect Report #260 to the Rescue?

Defect report #260 may apply to our case. The topic of the report is more about indeterminate values; however, there is one interesting response from the committee:

Implementations […] may also treat pointers based on different origins as distinct even though they are bitwise identical.

If we take this literally, then it is sound that p == q evaluates to false, since p and q are derived from distinct objects that are in no relation to each other. It looks like we are getting closer and closer to the truth, or are we? So far we have only considered the equality operators, but what about the relational operators?

Relational Operators to the Final Rescue?

An interesting point is made in the definition of the semantics of the relational operators <, <=, >, and >= when they are used to compare pointers:

C11 § 6.5.8 paragraph 5

When two pointers are compared, the result depends on the relative locations in the address space of the objects pointed to. If two pointers to object types both point to the same object, or both point one past the last element of the same array object, they compare equal. If the objects pointed to are members of the same aggregate object, pointers to structure members declared later compare greater than pointers to members declared earlier in the structure, and pointers to array elements with larger subscript values compare greater than pointers to elements of the same array with lower subscript values. All pointers to members of the same union object compare equal. If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P. In all other cases, the behavior is undefined.

According to this definition, comparing pointers is only defined behavior if the pointers are derived from the same object. Let’s demonstrate the idea with two examples.

int *p = malloc(64 * sizeof(int));
int *q = malloc(64 * sizeof(int));

if (p < q) // undefined behavior
    foo();

In this example the pointers p and q point into two different objects which are not related to each other. Hence comparing them is undefined behavior. Whereas in the following example

int *p = malloc(64 * sizeof(int));
int *q = p + 42;

if (p < q)
    foo();

the pointers p and q point into the same object and are therefore related. Hence it is sound to compare them—assuming that malloc does not return a null pointer.
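For completeness, § 6.5.8 also defines an ordering within aggregate objects. A small sketch (mine, not from the original article) of the cases the standard spells out:

#include <stdio.h>

struct S { int a; int b; };

int main(void)
{
    struct S s;
    int x[8];

    printf("%d\n", &s.b > &s.a);   /* 1: members declared later compare greater */
    printf("%d\n", &x[5] > &x[2]); /* 1: larger subscripts compare greater */
    return 0;
}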

Wrap-up

With respect to pointer comparison, the C11 standard is not stringent. The most problematic part we stumbled across is § 6.5.9 paragraph 6, which explicitly allows comparing two pointers from two different array objects. This contradicts the statement made in defect report #260. However, the overall topic of DR#260 is indeterminate values. Therefore, I do not feel comfortable building my chain of arguments on this single sentence and interpreting it in a somewhat different context. If we look up the definition of the relational operators w.r.t. pointer comparison, we observe slightly different wording than for the equality operators: there, the standard only defines them if both operands are derived from the very same object.

If we step back from the standard and ask ourselves whether it makes sense to compare two pointers derived from two completely unrelated objects, the answer is probably always no. The introductory example is rather an academic problem. Since the variables a and b have automatic storage duration, you typically do not want to make assumptions about their memory arrangement. It might work out in this or that case, but such code is definitely non-portable, and you can never say what the meaning of the program is before you compile and run/disassemble the code—which is against any meaningful programming paradigm.

Still, the overall feeling regarding the wording of the C11 standard is unsatisfactory, and since several people have already stumbled across this problem, one question is left: why not make the wording more precise?

Addendum
Pointers One Past the Last Element of an Array

If we look up pointer arithmetic and comparison in the C11 standard, we find exceptions for pointers which point one past the last element of an array all over the place. Assume it were not allowed to compare two pointers derived from the same array object where at least one pointer points one past the end of the array; then code like this

const int num = 64;
int x[num];
for (int *i = x; i < &x[num]; ++i) {
    /* ... */
}

would not work. Via the loop we iterate over the array x consisting of 64 elements, i.e., the loop body should be evaluated exactly 64 times. However, the loop condition gets evaluated 65 times—once more than we have array elements. In the first 64 evaluations the pointer i always points into the array x, whereas the expression &x[num] always points one element past the array. In the 65th evaluation the pointer i also points one element past the array x, rendering the condition of the loop false. This is a convenient way to iterate over an array, which makes the exception for arrays sensible. Note that the standard only defines the behavior of comparing such pointers—dereferencing them is another topic.

Can we change the example such that no pointer points one past the last element of array x? Well, the solution is not straightforward. We have to change the loop condition and also make sure that at the end of the loop we do not increment i anymore.

const int num = 64;
int x[num];
for (int *i = x; i <= &x[num - 1]; ++i) {
    /* ... */
    if (i == &x[num - 1])
        break;
}

This code is cluttered with technical details which we do not want to deal with and which distract us from the actual job we want to accomplish. Besides that, it also contains one additional branch inside the loop body. Hence, I think it is reasonable to have exceptions for pointers one past the last element of an array.

Beenz, a digital currency before Bitcoin


Buckled into a leather chair on his chartered plane, Charles Cohen broke into a cold sweat.

None of Cohen’s 265 employees knew it, but his company, Beenz — then the world’s largest digital currency — stood on the brink of collapse. Now, Cohen was flying to the company’s 15 global offices (most of which he’d never even set foot in, the business had boomed so fast) to assess the damage.

After raising millions in venture capital, battling rivals with mega-celebrity sponsors, and dodging Russian hackers, Beenz would be bankrupt by the end of that year.

Beenz bit the dust 17 years ago, but its story is a textbook example of the danger of investing in a powerful technology before that technology is fully understood.

Welcome to the Beenz Clubhouse

Just 6 months before his airborne farewell tour, a cocky Charles Cohen strode into Beenz’s London office.

Crowded with the swinging legs of an Elvis clock, the half-deflated limbs of a pink party doll, and a Yoda statue belching garbled Star Wars phrases, the office resembled a flophouse for gamers who’d just left their moms’ basements and bought whatever the f*ck they wanted.

Charles Cohen, creator of Beenz digital currency

Charles Cohen created Beenz, a “free, completely frictionless, real time internet currency” (via Infoworld).

But somehow, Cohen and his team had raised $86m from Softbank, Apax Partners, Vivendi, Oracle, and other big investors. That August, while his buddies cheered for Gladiator, the summer’s blockbuster, Cohen opened his 13th office and pumped his billionth Beenz into circulation.

The 29-year-old had turned his crazy idea for a digital currency into a rocket ship — and, fueled by optimistic investors, he was headed for the sun.

The brains behind the Beenz

“I hit on this idea of paying people for their attention and using micro-currency to reward people for doing things which were valuable on the internet,” Cohen told us.

Customers ‘earned’ Beenz by spending money (or time) on a website. Once Beenz accumulated, they became currency, free to be bought, sold or traded like any other coin.

Beenz partners (mostly e-commerce websites) paid $0.01 for each Beenz they issued, and earned back $0.005 for each Beenz they redeemed.

At the time, online payment wasn’t mainstream, but everyone and their mother was excited to see it get there — and Beenz seemed like the company to do it. Larry Ellison, CEO of Oracle, agreed: “Beenz.com is clearly an innovator by developing a true global Internet currency,” he told The Register in March of 1999.

Endorsements like Ellison’s encouraged 150 e-commerce partners to sign with Beenz in its first 5 months (a bookseller named Bezos was among the few partners who turned Beenz down, claiming to have his own online payment solution).

Beenz homepage

Users visited the Beenz site to log in — and understand ‘the mystery’ of Beenz (via S. Baldwin/Pinterest).

“It’s difficult to predict today just what the pitfalls could be for a scheme that seems to have winners on all sides,” The Independent reported of Beenz at the time. “But Cohen said the company is prepared for the unknown.”

They had just one problem: Whoopi Goldberg

Like any juicy, partially-understood technology, the first wave of digital currency attracted plenty of salesmen eager to sell a new dream.

Beenz’s biggest competitor happened to be Whoopi Goldberg — or, rather, the rival currency, Flooz, that recruited her as its spokeswoman.

Flooz launched nearly a year before Beenz. An ad campaign featuring Goldberg (who was paid in a mixture of Flooz currency and Flooz stock) boosted sales from $3m to $25m between 1999 and 2000 — enough to challenge Beenz for digital currency’s heavyweight title.

So, Beenz did what any well-funded currency company would do: it invested heavily in growth, pouring venture capital into everything from new markets to moonshot partnerships.

Drinking the currency Kool-Aid

Neil Forrester, Cohen’s co-founder and friend from Oxford, had come all the way from a starring role in MTV’s The Real World: London to pursue the Beenz dream. Forrester was tasked with bringing Beenz to new markets, directing a small army that translated the platform into 15 languages.

Beenz paid up-and-coming tech companies to design products for the burgeoning business, claiming Beenz would trade on cell-phones, game consoles, interactive televisions (grandaddies of smart TVs), and Mondex smartcards (early digital wallets).

Neil Forrester, MTV's The Real World: London

Before Beenz, co-founder Neil Forrester (far right) starred in MTV’s Real World. (via EW)

The company and its investors radiated boundless optimism. In 2000, Cohen told Time that he expected to see Beenz listed against major currencies within 5 years. After all, the whole world seemed to be betting on them.

At one point, the UK’s Financial Services Authority searched the Beenz office to investigate the unlicensed ‘Bank of Beenz.’ The team merely laughed. “Beenz,” they explained, “[were] a radical alternative to money” that would “create a new generation of e-millionaires.”

Just 4 months later, the trouble began…

“We decided we needed to be very careful where we spent our money,” Cohen explained to The Register after firing 25 employees in December of 2000. He told reporters the company had plenty of cash left — but things at Beenz were bleaker than Cohen admitted to the press.

While Beenz had developed a ‘radical alternative to money,’ companies like Amazon and Visa built simple online payment options using credit cards — and when Beenz’s partners discovered them, they up and left.

“Talk about the air going out of the balloon,” Cohen told us. “Quite literally, our customers disappeared. It wasn’t that they started buying less; they just disappeared.”

Meanwhile, at Flooz, the sh*t really hit the fan: The company unknowingly sold $300k in currency to members of an organized crime syndicate in Russia and the Philippines, forcing a liquidity crunch that left Flooz with the most creditors (325k) in bankruptcy history.

Beenz digital currency logo

The Beenz logo was a fixture everywhere from conferences to the tops of taxis (via Designscape).

By April, the Beenz staff had shrunk from 265 to just 30, and 13 of the company’s 15 offices had shuttered. In May, the company announced plans to find a buyer.

Then, on August 16th, Beenz users received this message:

“The operation of the Beenz economy will be terminated at 12.01am Eastern Standard Time on August 26 2001. No Beenz earning or spending transaction will be honoured after that date […] Thank you for participating in the Beenz economy.”

The very same day, Flooz announced its bankruptcy.

The crypto craze looks awfully familiar

When entrepreneurs race to create products from cutting-edge tech, some have value — but many don’t.

And, like the Beenz bubble born out of online payments, today’s crypto craze is a response to a powerful new technology: Blockchain.

Blockchain is poised to revolutionize industries, but it’s also unleashed a speculative tornado of titanic proportions. In fact, cryptocurrency is even more overinvested than digital currency was in Cohen’s day (Block.one raised $4B in an ICO without telling its investors what its product is).

“Cryptocurrency is like a gold rush in the sense that there’s a massive area of land that’s just opened up, but nobody really knows where the gold is,” Cohen says. “Everybody just takes a patch and digs — but it’s gonna be some time before you see the winners and losers.”

Finding the diamond in the Beenz

To identify which emerging tech companies will go the distance, you first need to distinguish novelty from innovation.

Novelty, according to Cohen, is when you “make something just because some idiot’s gonna buy it off you. Innovation is an application of technology that has some value to people.”

And, more often than not, simple solutions outlast fancy fixes.

Beenz advertisement

This French Beenz ad was part of a campaign that spanned more than 18 countries (via Internet Achat).

Amazon and Visa beat Beenz by using new technology to fix an antiquated system, not by inventing an entirely new one. This time around, Cohen expects the same process to unfold — resulting in a few regulated exchanges and companies built around practical solutions.

Already, Coinbase and Circle, two large digital currency platforms, are racing towards regulation to capture bigger revenue. Telegram, the 2nd highest-funded ICO after Block.one, is channeling its $1.7B into improving its secure messaging app.

But, when we asked Cohen which cryptocurrencies he had invested in, he chuckled like an uncle who’s seen it all before…

“I absolutely do not have any money in cryptocurrency,” he said. “Funny enough, I now work in the gambling industry. But I don’t gamble with my own money.”

There will be winners and losers in the crypto gold rush. But this time, Cohen won’t be the one picking them.


FBI would rather prosecutors drop cases than disclose stingray details (2015)


Not only is the FBI actively attempting to stop the public from knowing about stingrays, it has also forced local law enforcement agencies to stay quiet even in court and during public hearings, too.

An FBI agreement, published for the first time in unredacted form on Tuesday, clearly demonstrates the full extent of the agency’s attempt to quash public disclosure of information about stingrays. The most egregious example of this is language showing that the FBI would rather have a criminal case be dropped to protect secrecy surrounding the stingray.

Relatively little is known about how, exactly, stingrays, known more generically as cell-site simulators, are used by law enforcement agencies nationwide, although new documents have recently been released showing how they have been purchased and used in some limited instances. Worse still, cops have lied to courts about their use. Not only can stingrays be used to determine location by spoofing a cell tower, they can also be used to intercept calls and text messages. Typically, police deploy them without first obtaining a search warrant.

Ars previously published a redacted version of this document in February 2015, which had been acquired by the Minneapolis Star Tribune in December 2014. The fact that these two near-identical documents exist from the same year (2012) provides even more evidence that this language is boilerplate and likely exists in other agreements with other law enforcement agencies nationwide.

The new document, which was released Tuesday by the New York Civil Liberties Union (NYCLU) in response to its March 2015 victory in a lawsuit filed against the Erie County Sheriff’s Office (ECSO) in Northwestern New York, includes this paragraph:

In order to ensure that such wireless collection equipment/technology continues to be available for use by the law enforcement community, the equipment/technology and any information related to its functions, operation and use shall be protected from potential compromise by precluding disclosure of this information to the public in any manner including but not limited to: press releases, in court documents, during judicial hearings, or during other public forums or proceedings.

In the version of the document previously obtained in Minnesota, the rest of the sentence after the phrase "limited to" was entirely redacted.

Mariko Hirose, a NYCLU staff attorney, told Ars that she has never seen an agreement like this before.

"This seems very broad in scope and undermines public safety and the workings of the criminal justice system," she said.

Your tax dollars at work

The FBI letter also explicitly confirms a practice that some local prosecutors have engaged in previously, which is to drop criminal charges rather than disclose exactly how a stingray is being used. Last year, prosecutors in Baltimore did just that during a robbery trial—there, Baltimore Police Detective John L. Haley cited a non-disclosure agreement, and he declined to describe in detail how he obtained the location of the suspect.

The newly revealed sections state:

7) The Erie County Sheriff's Office shall not, in any civil or criminal proceeding, use or provide any information concerning the Harris Corporation wireless collection equipment/technology, its associated software, operating manuals, and any related documentation (including its technical/engineering description(s) and capabilities) beyond the evidentiary results obtained through the use of the equipment/technology including, but not limited to, during pre-trial matters, in search warrants, and related affidavits, in discovery, in response to court ordered disclosure, in other affidavits, in grand jury hearings, in the State's case-in-chief, rebuttal, or on appeal, or in testimony in any phase of civil or criminal trial, without the prior written approval of the FBI.

8) In addition, the Erie County Sheriff's Office will, at the request of the FBI, seek dismissal of the case in lieu of using or providing, or allowing others to use or provide, any information concerning the Harris Corporation wireless collection equipment/technology, its associated software, operating manuals, and any related documentation (beyond the evidentiary results obtained through the use of the equipment/technology), if using or providing such information would potentially or actually compromise the equipment/technology. This point supposes that the agency has some control or influence over the prosecutorial process. Where such is not the case, or is limited so as to be inconsequential, it is the FBI's expectation that the law enforcement agency identify the applicable prosecuting agency, or agencies, for inclusion in this agreement.

"Why is it spending over $350,000 on this when prosecutions might have to be dismissed?" Hirose added, referring to the approximate total amount that the ECSO spent on the hardware and related software and training.

Everyone's gone quiet

In response to a media inquiry by Ars, Christopher Allen, an FBI spokesman, wrote: "As you know I am not able to comment beyond what I have previously provided."

Last year, Allen sent Ars an affidavit outlining the agency's position on why so little information has been publicly disclosed.

"The FBI routinely asserts the law enforcement sensitive privilege over cell site simulator equipment because discussion of the capabilities and use of the equipment in court would allow criminal defendants, criminal enterprises, or foreign powers, should they gain access to the items, to determine the FBI’s techniques, procedures, limitations, and capabilities in this area," Bradley Morrison, chief of the tracking technology unit at the FBI, stated in the affidavit.

"This knowledge could easily lead to the development and employment of countermeasures to FBI tools and investigative techniques by subjects of investigations and completely disarm law enforcement’s ability to obtain technology-based surveillance data in criminal investigations."

The Erie County Sheriff’s Office did not immediately respond to Ars’ request for comment. Meanwhile, local legislators have not yet had adequate time to review the new documents.

"A year ago, when the issue was first brought to light, the equipment was discussed in the Legislature’s Committee process with the Sheriff’s Office," Jessica O’Neil, a spokesman for the Majority Caucus of the Erie County Legislature, told Ars by e-mail.

"The members of the Legislature have stated that the privacy of residents should be protected and will review the documents fully before deciding if any action can or should be taken."

The Erie County public defender also did not immediately respond.

UPDATE Wednesday 12:08pm CT: Kevin Stadelmaier, the chief attorney with the Criminal Defense Unit at Legal Aid Buffalo, the local public defender, told Ars he had never heard of stingrays prior to this case and would be investigating further.

"The Erie County Sheriff's Office is basically subverting the Fourth Amendment," he said.

"The point of the matter is not only are they pulling info off people they're looking for, but the same technology could be used against people that are not subject to criminal investigations."

Authorities only sought court permission once, out of 47 times

Finally, the trove of documents released Tuesday includes a June 2014 memo from Chief Scott Patronik to all members of the "Cellular Phone Tracking Team." It states: "Cellular tracking equipment is to be used for official law enforcement purposes only."

In the documentation, officers are also ordered to "describe the legal authority for tracking the cellular phone (exigent circumstances, arrest warrant, court order, etc)," in their police records each time that a stingray is used.

But the list of the 47 instances provided to the NYCLU detailing such usages only specifically mentions one occasion when a pen register was sought—on October 3, 2014 as part of a robbery investigation in Buffalo. Ars has contacted the Erie County Court in an attempt to obtain the pen register application and related court documents.

In the pre-cellphone era, a "pen register and trap and trace order" allowed law enforcement to obtain someone's calling metadata in near-real time from the telephone company. Now, that same data can also be gathered directly by the cops themselves through the use of a stingray. In some cases, police have gone to judges asking for such a device or have falsely claimed the existence of a confidential informant while in fact deploying this particularly sweeping and invasive surveillance tool.

Most judges are likely to sign off on a pen register application not fully understanding that police are actually asking for permission to use a stingray. Under federal law and New York state law, pen registers are granted under a very low standard: authorities must simply show that the information obtained from the pen register is "relevant to an ongoing criminal investigation."

That is a far lower standard than being forced to show probable cause for a search warrant or wiretap order. A wiretap requires law enforcement to not only specifically describe the alleged crimes but also to demonstrate that all other means of investigation have been exhausted or would fail if they were attempted.

Hirose reveled in her organization’s judicial win: "Just because the FBI and the ECSO agree that these documents are confidential doesn't mean that they're confidential under the law."

Design case history: the Commodore 64 (1985) [pdf]

Extractive contributors: How open is too open?


When I look at open source projects, I divide the people involved into three categories: the investors, the contributors, and the users. The contributors do the work on the project, while the investors (if any) support the contributors in some way. The users are those who simply use the project without contributing to it.

For example, in sourmash, the investors are (primarily) the Moore Foundation, because they support most of the people working on the project via the Moore grant that I have. There are the contributors - myself, Luiz Irber, and many others in and out of my lab - who have submitted code, documentation, tutorials, or bug reports. And then there are the users, who have used the project and not contributed to it. (Projects can have many investors, many contributors, and many users, of course.)

I consider anybody who used sourmash and then contacted us - with a bug report, a question, or a suggestion - as a contributor. They may have made a small contribution, but it is a contribution nonetheless. I should add that those who cite us or build on us are contributing back in a reasonably significant way, by providing a formal indication that they found our code useful. This is a good signal of utility that is quite helpful when discussing new investments.

Users are interesting, because they contribute nothing to the project but also cost us nothing. If someone downloads sourmash, installs it, runs it, and gets a result, but for whatever reason never publishes their use and cites us, then they are a zero-cost user. If they file a bug report, that’s potentially a small burden on the project (someone has to pay attention to it), but - especially if they file a good bug report that makes it easy to track down the bug - I think they are contributing back to the project, by helping us meet our long-term goals of less-buggy / more correct code.

Some (rare) contributors are more burden than help. They are the contributors who discover an interesting project, try it out, find that it doesn’t quite fit their needs, and then ask the developers to adjust it for them without putting any effort into it. Or, they ask many questions via private e-mail, consuming the time and energy of developers in private without contributing to the public discussion of the software’s scope and functionality. Or, they argue passionately about planned features without putting any other time into the project themselves. I call these extractive contributors.

These extractive contributors are far more of a burden than you might think. They consume the effort of the project with no gain to the project. Sometimes feature requests, questions, and high-energy discussions lead the project in new, worthwhile directions, but quite often they’re simply a waste of time and energy for everyone involved. (We don’t have any such contributors in sourmash, incidentally, but I’ve seen them in many other projects - the more well known and useful your project is, the more likely you are to have people who demand things of the project.) Quote from a friend: “They don’t contribute much code, but boy do they have strong opinions!"

You could certainly imagine an extractive contributor who implements some big new feature and then dumps it on the project with a request to merge (these are often called “code bombs”). If the feature was discussed beforehand and aligns with the direction of the project, that’s great! But sometimes people submit a merge request that simply won’t get merged - perhaps it’s misaligned with the project’s roadmap, or it adds a significant maintenance burden. Or, perhaps the project developers don’t know and trust the submitter enough to merge their code without a lot of review. Again, this is not a problem we’ve had in sourmash, but I know this happens with some frequency in the bigger Python projects.

You could even imagine a significant regular code contributor being extractive if they are not contributing to the maintenance of the code. If someone is working for a company, for example, and that company is asking them to implement features X, Y, and Z in a project, but not giving them time to contribute to the overall project maintenance and infrastructure as part of the core team, then they may be extracting more from the project than they are putting in. Again, on the big projects, I’m told this is a serious problem. To quote a friend, “sometimes pull requests are more effort than they are worth."

I don’t know what the number or cost of extractive contributors is on big projects, but at least by legend they are a significant part of the software sustainability problem. Part of the problem is on the side of the core maintainers of any project, of course, who don’t protect their own time - in the open source world, developers are taught to value all users, and will often bend over backwards to meet users’ needs. But a larger part of the problem is on the side of the extractive contributors, who are effectively sapping valuable effort from the project’s contributors.

I don’t think it’s necessarily easy to identify extractive contributors, nor do I think it’s straightforward to draw well-considered boundaries around an open project in which you indicate exactly which contributions are welcome, and how. And some extractive contributors can turn into net positive contributors with a little bit of mentoring and effort; we could think of such an effort as incurring contributor debt that could be recouped if more "effort" is brought into the project than is lost, over the long term.

Looking at things through this lens, some features of the Python core dev group come into sharp focus. Python has a ‘python-ideas’ list where potentially crackpot ideas can be floated and killed without much effort if they are misaligned with the project. If an idea passes some threshold of critical review there, it can potentially move into a formal suggestion for python implementation via a Python Enhancement Proposal, which must follow certain formatting and content guidelines before it can even be considered. These two mechanisms seem to me to be progressive gating mechanisms that serve to block extractive users from sapping effort from the project: before a major change request will be taken seriously, first the low threshold of a successful python-ideas discussion has to be met, and then the significant burden of writing a PEP needs to be undertaken.

A few (many?) years ago, I seem to recall Martin von Löwis offering to review one externally contributed patch for every ten other patches reviewed by the submitter. (I can’t find the link, sorry!) This imposes work requirements on would-be contributors that obligate them to contribute substantively to the project maintenance, before their pet feature gets implemented.

Projects can also decrease the cost of extractive contributors by lowering the cost of engagement. For example, the “pull request hack” makes it possible for anyone who has made a small "minimally viable" contribution to a project to become a committer on the project. While it probably wouldn't work for big complex projects, on smaller projects you could imagine it working well, especially for bug fixes and documentation-centric issues.

Another mechanism of blocking extractive contributors is to gate contributions on tests: in sourmash and khmer, as in many other open source projects, we don’t even consider reviewing pull requests until they pass the continuous integration tests. We do help people who are having trouble with them, in general, but I almost never ask Luiz to review my own PRs until they pass tests. When applied to potential contributors, this imposes a minimum level of engagement and effort on the part of that contributor before they consume the time and energy of the central project.

I suspect there are actually a bunch of techniques that are used in this way, even if they serve purposes beyond gating contributors (we also care if our tests pass!). I’d be really interested in hearing from people if they have encountered strategies that seem to be aimed at blocking or lowering the cost of extractive contributors.

How does this connect with the title, "How open is too open?" Well, this question of sustainability and "extractive" contributors seems to apply to all putatively "open" projects, but techniques aimed at blocking extractive contributors seem to trade openness for sustainability. And I’m curious whether that’s something we need to pay attention to when building open communities, how we should measure and evaluate the tradeoffs, and what clever social hacks people have for doing this.

—titus

OBike: bike sharing operator shuts down, someone does forensics analysis


If you have an oBike account, and worry that you will not get back your deposit, you are right. But you need not be kept in the dark about who took your money.

His name is Shi Yi 石一. 

Shi Yi is the founder of oBike, a company portrayed as "homegrown" in Singapore. The truth is Shi Yi was born in Shanghai, China and he continues to reside there.

Shi Yi wants everyone to believe he is a tech genius. According to the 29-year-old, he left China to live in Germany at 11. As a lonely child, he learned different programming languages, including JavaScript, and how to build websites through online tutorials. At 14, he ran an online forum for people to discuss computing and got his first "pay cheque" of US$250 from Google. At 16, he had built more than 30 websites that earned him monthly advertising revenues of between US$4,000 and US$5,000.

At 18, he enrolled as a computer science student at Goethe University in Frankfurt, but dropped out after just three semesters because his online business was making him "a lot of money". Many of us would die to know how he made "a lot of money" from website building, media buys and online promotions, but unfortunately he didn't explain.

After quitting school because he had no time for his business, Shi Yi bought himself a business-class ticket to travel around the world (huh???). In August of 2009, he started, in a small office in Shanghai, a company called Avazu Advertising that offered multiple mobile advertising platforms for performance-oriented advertisers, focusing on a global market. Within one year, the company was profitable, and three years later its profits exceeded US$10 million.

No mean feat for a 20-year-old, living in a country that has been without Google since 2010 and where the 3G network was rolled out only in the same year.

oBike Singapore

Back to oBike.

oBike's operating entity in Singapore is oBike Asia Pte. Ltd. It was set up on 28 November 2016 and has paid-up capital of S$500,000. In spite of the scale of its operations, oBike Asia Pte. Ltd. currently has only one representative, a Singapore-citizen company director with a Chinese name, Zhu Yimin.

But as you can see, Zhu Yimin was appointed only on 27 April 2018. So Zhu Yimin is probably only a proxy and a scapegoat.

 oBike Asia Pte. Ltd. is wholly-owned by oBike Hong Kong Limited.

oBike Hongkong Limited is a Hong Kong-registered company. It was established on 12 February 2014. Before 10 November 2016, this company was known as Avazu Hong Kong Limited.

oBike Hongkong Limited's registered address is that of its corporate secretary. 

oBike Hongkong Limited has issued capital of only HKD10,000. It is most likely a shell company.

The sole shareholder of oBike Hongkong Limited is oBike Inc., an offshore company registered in the British Virgin Islands (BVI).

In other words, oBike Inc., through oBike Hongkong Limited, owns oBike Asia Pte. Ltd. 

Before you think of getting back your money from the people behind oBike Inc., you need to know that companies are usually registered in offshore jurisdictions for one purpose — to hide the identities of their directors and shareholders, because the local laws are deliberately drawn up to allow this.

But rather foolishly, Shi Yi left himself registered as oBike Hongkong Limited's sole company director:

This is Shi Yi's hand signature:

Besides oBike Asia Pte. Ltd. in Singapore, oBike Hongkong Limited has another wholly-owned subsidiary in Shanghai called Shanghai Aozhi Network Technology Co., Ltd. (translated name) (上海奥致网络科技有限公司). This company, which is in effect oBike's Shanghai entity, was established on 29 August 2014.

Shanghai Aozhi Network Technology Co., Ltd. is managed by two persons, a supervisor named Zong Jun (宗俊) and a director-cum-general manager who is (you guessed it!) Shi Yi.

Incidentally, Shanghai Aozhi Network Technology Co., Ltd. has registered capital of US$3 million. The treasure pot, unsurprisingly, is close to home.

Shi Yi's Business Network

Shi Yi happens to be the legal representative of Shanghai Aozhi Network Technology Co., Ltd., and 10 other companies in China:

  1. 艾维邑动(北京)信息技术有限公司 (Aiwei Dynamic (Beijing) Information Technology Co., Ltd., also reported to be known as DotC United). Formed on 2014-03-06 and has RMB100,000 registered capital. Shi Yi owns 99% of this company and is its executive director, Jin Peifang 金佩芳 holds the remaining 1% share, and the supervisor is Zong Jun.
  2. 浙江艾维邑动信息技术有限公司 (Zhejiang Aiwei Dynamic Information Technology Co., Ltd.) Formed on 2014-06-04 and has RMB10million registered capital. Shi Yi is its executive director and 99% shareholder. A Jin Peifang 金佩芳 is its supervisor and holder of the remaining 1% share.
  3. 安徽橙致信息技术有限公司 (Anhui Chengzhi Information Technology Co., Ltd. ) Formed on 2014-09-25 and has RMB1million registered capital. Zhejiang Aiwei Dynamic Information Technology Co., Ltd. is its 100% shareholder, Shi Yi is its executive director and Jin Peifang  its supervisor.
  4. 上海奔威信息技术有限公司 (Shanghai Benwei Information Technology Co., Ltd.) Formed on 2014-09-25 and has RMB 500,000 registered capital. Zhejiang Aiwei Dynamic Information Technology Co., Ltd. is its 100% shareholder, Shi Yi is its executive director and Jin Peifang  its supervisor.
  5. 西安麦橙信息科技有限公司 (Xi'an Maicheng Information Technology Co., Ltd.) (deregistered)  Formed on 2015-04-03 and has RMB 500,000 registered capital. 上海麦橙网络科技有限公司 (Shanghai Maicheng Network Technology Co., Ltd. is its 100% shareholder, Shi Yi is its executive director and Li Wenjing 李文进  its supervisor.
  6. 北京橙动信息技术有限公司 (Beijing Chengdong Information Technology Co., Ltd., also known as Teebik). Formed on 2016-03-09 and has RMB100,000 registered capital. 上海橙致信息技术有限公司 (Shanghai Chengdong Information Technology Co., Ltd., ) is its 100% shareholder, Shi Yi is its executive director and Zong Jun its supervisor.
  7. 上海麦橙网络科技有限公司北京分公司 (Shanghai Maicheng Network Technology Co., Ltd. Beijing Branch) Formed on 2016-03-23. Shi Yi is the person-in-charge. 
  8. 南通艾维投资合伙企业(有限合伙) (Nantong Aiwei Investment Partnership (Limited Partnership) (deregistered). Formed on 2016-06-14 and has RMB 2 billion registered capital. Shi Yi is its 99.95% shareholder and the person-in-charge, while Sha Ye 沙烨 held the remaining 0.05%.
  9. 上海奥梵信息技术有限公司 (Shanghai Ao Fan Information Technology Co., Ltd.). Formed on 2017-10-20 and has RMB100,000 registered capital. Shi Yi owns 80% of shares and 上海麦致信息技术有限公司 (Shanghai Maizhi Information Technology Co., Ltd.) owns 20%. Shi Yi is the executive director and Jin Peifang the supervisor.
  10. 上海橙墨信息技术有限公司 (Shanghai Cheng Mo Information Technology Co., Ltd.). Formed on 2017-11-13 and has RMB100,000 registered capital. Anhui Chengzhi Information Technology Co., Ltd.  is its 100% shareholder, Shi Yi is its executive director and Jin Peifang  its supervisor.

Shi Yi is also a director or commissioner of the following business entities:

  1. 上海渔歌网络技术有限公司 (Shanghai Yuge Network Technology Co., Ltd.). Formed on 2016-07-20
  2. 上海手铺网络科技有限公司 (Shanghai Shoupu Network Technology Co., Ltd.). Shi Yi holds 1% share in this company. Formed on 2016-10-19
  3. 霍尔果斯金狮影业传媒有限公司 (Khorgas Golden Lion Film Media Co., Ltd.). Formed on 2017-08-22 

What does one do with so many companies engaged in "information technology" and "network technology"? Under normal operations, companies like these serve to enable their clients to transmit and disseminate information. Knowledgeable persons have also advised that, without the cumbersome needs of dealing with physical goods and hiring a large workforce, such companies can be set up to facilitate sizable payments quickly, across national borders and in a seemingly legitimate manner.

Shi Yi is no computer nerd, as his shares in a number of investment and venture capital businesses show:

  1. 高榕资本(深圳)投资中心(有限合伙) Gaochun Capital (Shenzhen) Investment Center (Limited Partnership). Formed on 2014-06-11 and has RMB 320 million registered capital. Shi Yi owns 1.56% of shares
  2. 珠海横琴富坤创业投资中心(有限合伙) Zhuhai Hengqin Fukun Venture Capital (Limited Partnership). Formed on 2014-09-03. Shi Yi has 21.06% shares.
  3. 上海集观投资中心(有限合伙) (Shanghai Jiguan Investment Center (limited partnership)). Formed on 2015-02-17. Shi Yi has 99.95% shares and Aiwei Dynamic (Beijing) Information Technology Co., Ltd. has the remaining 0.05%.
  4. 宁波丰厚致远创业投资中心(有限合伙)(Ningbo Fengyuan Zhiyuan Venture Capital Center (Limited Partnership)). Formed on 2015-03-26. Shi Yi owns 1.58% of shares
  5. 新余艾动投资管理中心(有限合伙) (Xinyu Aidong Investment Management Center (Limited Partnership)). Formed on 2015-05-04. Shi Yi has 99% shares and Jin Peifang has the remaining 1%.
  6. 新余凹凸凹投资管理中心(有限合伙) (Xinyu Ao'tu'ao Investment Management Center (Limited Partnership)). Formed on 2015-05-18. Shi Yi has 90% shares and Jin Peifang has the remaining 10%.
  7. 新余集观投资管理中心(有限合伙) (Xinyu Jiguan Investment Management Center (Limited Partnership)). Formed on 2015-05-18. Shi Yi has 90% shares and Jin Peifang has the remaining 10%.
  8. 北京天际时空网络技术有限公司 (Beijing Skyline Matrix Network Technology Co., Ltd.). Formed on 2015-06-03 and has RMB 15.294 million registered capital. Shi Yi owns 1.70% of shares
  9. 杭州塔酱科技有限公司 (Hangzhou Tajiang Technology Co., Ltd.). Formed on 2015-06-16 and has RMB 2.98 million registered capital. Shi Yi owns 14.00% of shares
  10. 杭州领带蛙网络科技有限公司 (Hangzhou Lingdaiwa Network Technology Co., Ltd.) Formed on 2015-07-17 and has RMB 162,760 registered capital. Shi Yi owns 1.92% of shares
  11. 深圳灿和星团投资咨询合伙企业(有限合伙) (Shenzhen Canhe Star Group Investment Consulting Partnership Enterprise (Limited Partnership)). Formed on 2015-12-21 and has RMB 5 million registered capital. Shi Yi owns 9.19% of shares
  12. 上海陈石互联网信息服务有限公司 (Shanghai Chenshi Internet Information Service Co., Ltd.)  Formed on 2016-07-25 and has RMB 1 million registered capital. Shi Yi owns 3.00% of shares
  13. 宁波饮水思源投资管理有限公司 (Ningbo Yinshui Siyuan Investment Management Co., Ltd.). Formed on 2017-03-14 and has RMB 2 million registered capital. Shi Yi owns 29.85% of shares

Piercing through the veil

In June 2017, Shi Yi appeared on the  2017 Forbes 30 Under 30 China List, which is  issued annually for "selecting Chinese young entrepreneurs and innovators, presenting to the world their entrepreneurial spirit and innovations, and hopefully can influencing more young people in the future".   

The nomination marked the third time in four years Shi Yi was listed in the Forbes 30 under 30, having been first nominated as one of the Forbes China 30 Under 30 in 2014, and included in Forbes 30 Under 30 Asia in April 2017. 

Interestingly Shi Yi's Forbes profile reported in March 2014 states:

"Shi Yi is the founder and CEO of Avazu, a digital advertising company. Avazu has offices in China, Hong Kong, Germany and Brunei, with more than 500 clients."

Interesting, because we can see from the records of companies associated with him that Shi Yi first had his name on a company only on 12 February 2014, when oBike Hongkong Limited/Avazu Hong Kong Limited was set up. Based in Shanghai, set up in Hong Kong, built offices in two more countries, gained 500 clients, and nominated to a Forbes list, all in a month. Wow.

A more recent Forbes profile of Shi Yi tells a slightly different story:

Zong Jun, whose name appears many times in the business records of Shi Yi in China, is known to be the Vice President of Finance (财务副总裁) of Avazu Holding. So Shi Yi's intimate ties with Avazu Holding are not in question. Furthermore, company records in China confirm that Avazu Holding was founded in 2009 as 上海威奔广告有限公司 (Shanghai Weiben Advertising Co., Ltd.) and it has RMB 5 million in registered capital.

However, Shi Yi is not a director or shareholder of Avazu Holding. Instead the company is equally owned by Shi Yi's close business associates Jin Peifang and Shi Hengzhong 石恒忠, who are also its directors. Jin Peifang is the company's legal representative.

The URL of Avazu Holding's official website in the record is given as http://www.avazu.cn. However, clicking on this URL redirects you to http://avazuinc.com/, which belongs to an Avazu entity, also called DotC United, which "is founded in Brunei in 2009". A Chinese tech company in Brunei?

Confusing? Complicated.

The "about" page of Avazu/DotC United makes no attempt to clarify:

How does Zeus Entertainment come into the picture? Who are Jin Peifang and Shi Hengzhong?

According to this legal document produced by a Beijing law firm, Jin Peifang and Shi Hengzhong are the maternal grandmother and maternal grandfather of Shi Yi respectively. 

So it was grandma and grandpa who started the Avazu mobile advertising empire.

Apart from the companies associated with Shi Yi, Jin Peifang and Shi Hengzhong have registered business interests in the following other information technology and investment entities in China:

  1. 上海奥邑信息技术有限公司 (Shanghai Aoqi Information Technology Co., Ltd.)
  2. 上海邑为投资管理有限公司 (Shanghai Yiwei Investment Management Co., Ltd.)
  3. 上海麦致信息技术有限公司 (Shanghai Maizhi Information Technology Co., Ltd.)
  4. 上海邑为信息技术有限公司 (Shanghai Yiwei Information Technology Co., Ltd.)
  5. 上海奔汇信息技术有限公司 (Shanghai Benhui Information Technology Co., Ltd.)
  6. 新余邑为投资管理中心(有限合伙) (Xinyu Yiwei Investment Management Center (Limited Partnership))
  7. 新余威风投资管理中心(有限合伙) (Xinyu Weifeng Investment Management Center (Limited Partnership))
  8. 新余邑乐投资管理中心(有限合伙) (Xinyu Yile Investment Management Center (limited partnership))
  9. 新余威奔投资管理中心(有限合伙) Xinyu Weiben Investment Management Center (Limited Partnership)

Xinyu is a city in Jiangxi province.

The document that spelt out Shi Yi's ties with Jin Peifang and Shi Hengzhong pertained to the acquisition of several Avazu companies by Zeus Entertainment (full name Dalian Zeus Entertainment Co., Ltd. 大连天神娱乐股份有限公司) in the latter's 2015 reverse IPO, as mentioned on the Avazu/DotC United website.

Zeus Entertainment is controlled by a man called Zhu Ye 朱晔. Nicknamed the "Investment Hunter" (投资猎手), Zhu Ye had a reputation for perfect success in all his numerous acquisitions of technology, gaming, media and software companies, and the market value of his companies is estimated to be RMB 20 billion. Successful as a tech entrepreneur by his early 20s, Zhu Ye has a profile that bears an uncanny resemblance to Shi Yi's. However, on 9 May this year, Zhu Ye was served a "notice of investigation" by the China Securities Regulatory Commission (CSRC) for suspected violations of securities laws and regulations.

Not a good sign, for him.

So what has happened to our oBike money?  

The Land Transport Authority (LTA) in Singapore has issued oBike an ultimatum to work with its liquidator to remove its bicycles from public spaces by 4 July, failing which LTA will remove the oBike bicycles (presumably at the expense of Singapore taxpayers) and demand that oBike or its liquidator pay the towing and storage fees in order to claim the impounded bicycles.

We foresee the LTA being stuck with loads of unwanted bicycles. 

After starting its operations in Singapore with much fanfare only a year ago, oBike supposedly found itself in tightening financial difficulties.

The situation is bizarre even if operations had not yet achieved profitability. Only in August 2017, oBike pulled off one of the largest series B rounds on record in Southeast Asia, raising US$45 million from a venture capital firm linked to a co-founder of Russia's Mail.Ru Group, a mysterious "leading global transportation platform", and several unnamed family offices in Southeast Asia.

Moreover, bicycles can be bought off the internet in China for well below the S$49 deposit collected from every oBike user in Singapore. Their cost price must be lower still. So where did all the oBike investment and deposit money vanish to?

Regardless, Shi Yi appeared a desperate man to his associates at the end of 2017 and was looking for all ways to raise money to sustain oBike. Until an idea came to him — crowdfunding with cryptocurrency through an Initial Coin Offering (ICO).

To accomplish this, Shi Yi roped in as his partners a Chinese investor named Sun Guofeng 孙高峰 and Sun Yuchen 孙宇晨, a.k.a. Justin Sun, the founder of crypto network Tron, which has just gotten into bed with PornHub.

Using Singapore-incorporated Odyssey Protocol Foundation Limited as their vehicle, they launched the cryptocurrency OCN on the trading site Gate.IO on 25 January 2018. OCN was traded on five exchanges: Huobi Pro, Bit-Z, Kucoin, Bjex and Cobinhood. The venture was a success. Within six days, it raised 50,000 ETH, equivalent to over S$82 million.

But this was only the beginning of the end of oBike.  As this report by an information service provider on the venture capital and investment scene in China disclosed back in late March, Shi Yi never had the intention of investing the funds raised into oBike.

Odyssey Protocol Foundation Limited was incorporated on 16 January 2018. Shi Yi's name does not appear anywhere on its registration papers. The company had two directors, Guan Xiaofei and Singaporean Ang Irene. Its secretary is Wu Lijuan, who, like Guan Xiaofei, is a Chinese national holding a Singapore ID number. Ang Irene has since resigned from her position.

Instead Shi Yi and Justin Sun are shown on the Odyssey Protocol Foundation Limited website as chief advisors.

As Sun Guofeng grievously recounted, Shi Yi  simply withdrew the tens of millions in the OCN pool and walked away. Not a single cent went into oBike as promised.

Having tasted blood with the OCN heist, Shi Yi moved swiftly to initiate two more cryptocurrencies, Content Neutrality Network (CNN) and DatX, in China, using the tried and tested formula — assemble a small team of connected persons in Shanghai for execution and register a limited company in Singapore for packaging. The companies used for the ICOs of CNN and DatX were D-Run Foundation Ltd. and Cosima Foundation Limited respectively. Like Odyssey Protocol Foundation Limited, Cosima Foundation Limited has the same address as oBike Asia Pte Ltd at 1 Commonwealth Lane #09-19.

The CNN and DatX white papers indicate that the ICOs were conducted with the services of Tzedek Law, a boutique law firm in Singapore whose directors are Guo Zhanzhi Clarence and Lionel Yun Weijie.

According to Sun Guofeng, Shi Yi  took the following steps for each ICO to ensure the outcome: 

  1. Step 1: Prepare a well-packaged white paper. This included establishing credibility by recycling the names of investors of previous ventures, inviting foreigners (outside China) with relevant tech experience on their resumes to be consultants, and building an English-language website and engaging the foreign media.
  2. Step 2: Attract private investors and use their funds as capital
  3. Step 3: Engage the media to shape opinion
  4. Step 4: Harvest. Manipulate trading so that the value of the cryptocurrency is pushed low at the beginning, get the media to report on its potential returns to lure outside investors, and, as prices spike, dump to cash in.

The ease with which he raked in millions through OCN, CNN and DatX gave Shi Yi great confidence to approach Sun Guofeng again to be his partner for one more ICO, together with a sidekick of his, Chen Enyong 陈恩永, a.k.a. Joseph Chen. On 5 February 2018, they discussed the division of the expected bounty (54% for Shi Yi, 23% each for Sun Guofeng and Joseph Chen) and their responsibilities.

Targeting investors/victims in Singapore and elsewhere outside China, the corporate vehicle used this time was an existing one - OPG Asia Pte. Ltd., the developer of SpherePay, an app branded as "Singapore’s Homegrown Mobile Payment App".

In November 2017 SpherePay announced a partnership with oBike and, just days before the chat shown above, issued a media release announcing the completion of a US$10-million funding round from "unnamed investors".

The partnership is nothing but a farce. Shi Yi himself is a director of OPG Asia Pte. Ltd, along with another Singapore ID-bearing Chinese national Liu Junfeng and German national Mi Yongliang.

OPG Asia Pte. Ltd's shareholders are Liu Junfeng and Mohammad Khairul Nizam bin Mohammad Mokhtar.

Probably not many oBike users/victims realise that OPG Asia Pte. Ltd is also a party to oBike's bicycle rental service agreement with you.

The SpherePay cryptocurrency, SAY, was initiated on 7 February. Seeking to stir investment interest, an announcement was released on 13 February that SpherePay had secured strategic investment from TrueChain, a Chinese entity described as "one of the world's fastest blockchain companies".

However, the plan never took off as the bear market crashed down on the crypto world. To keep the story short, the three men fell out and things turned nasty.

Demand for Action

oBike claimed that it was withdrawing from Singapore because of new government regulations. Some observers said it failed because of a poor business plan.

What we have learned is that Shi Yi and his associates have been making use of Singapore and its clean reputation to form a base to make quick and big money using highly dubious means. 

The securities regulators in China have begun investigations into Zhu Ye, the man connected to Shi Yi's business network via a US$300 million reverse IPO in 2015. It is possible that the sources of Shi Yi's wealth have not been properly accounted for.

If matters are otherwise, the onus is on Shi Yi to come clean and refund the oBike deposits.


The Machine That Builds Itself: The Strengths of the Lisp Languages (2016)

Abstract: We address the need for expanding the presence of the Lisp family of programming languages in bioinformatics and computational biology research. Languages of this family, like Common Lisp, Scheme, or Clojure, facilitate the creation of powerful and flexible software models that are required for complex and rapidly evolving domains like biology. We will point out several important key features that distinguish languages of the Lisp family from other programming languages and we will explain how these features can aid researchers in becoming more productive and creating better code. We will also show how these features make these languages ideal tools for artificial intelligence and machine learning applications. We will specifically stress the advantages of domain-specific languages (DSL): languages which are specialized to a particular area and thus not only facilitate easier research problem formulation, but also aid in the establishment of standards and best programming practices as applied to the specific research field at hand. DSLs are particularly easy to build in Common Lisp, the most comprehensive Lisp dialect, which is commonly referred to as the "programmable programming language." We are convinced that Lisp grants programmers unprecedented power to build increasingly sophisticated artificial intelligence systems that may ultimately transform machine learning and AI research in bioinformatics and computational biology.
Comments: 9 pages
Subjects: Other Quantitative Biology (q-bio.OT); Software Engineering (cs.SE)
Cite as: arXiv:1608.02621 [q-bio.OT]
 (or arXiv:1608.02621v2 [q-bio.OT] for this version)
From: Bohdan Khomtchouk [view email]
[v1] Mon, 8 Aug 2016 20:58:32 GMT (26kb)
[v2] Mon, 19 Sep 2016 03:29:49 GMT (44kb)

The NES turns 30: How it began, worked, and saved an industry (2013)


Nintendo's Family Computer, or Famicom, turns 30 today!

We're right on the cusp of another generation of game consoles, and whether you're an Xbox One fanperson or a PlayStation 4 zealot you probably know what's coming if you've been through a few of these cycles. The systems will launch in time for the holidays, each will have one or two decent launch titles, there will be perhaps a year or two when the new console and the old console coexist on store shelves, and then the "next generation" becomes the current generation—until we do it all again a few years from now. For gamers born in or after the 1980s, this cycle has remained familiar even as old console makers have bowed out (Sega, Atari) and new ones have taken their place (Sony, Microsoft).

It wasn't always this way.

The system that began this cycle, resuscitating the American video game industry and setting up the third-party game publisher system as we know it, was the original Nintendo Entertainment System (NES), launched in Japan on July 15, 1983 as the Family Computer (or Famicom). Today, in celebration of the original Famicom's 30th birthday, we'll be taking a look back at what the console accomplished, how it worked, and how people are (through means both legal and illegal) keeping its games alive today.

The Famicom wasn't Nintendo's first home console—that honor goes to the Japan-only "Color TV Game" consoles, which were inexpensive units designed to play a few different variations of a single, built-in game. It was, however, Nintendo's first console to use interchangeable game cartridges.

The original Japanese Famicom looked like some sort of hovercar with controllers stuck to it. The top-loading system used a 60-pin connector to accept its 3-inch high, 5.3-inch wide cartridges and originally had two hardwired controllers that could be stored in cradles on the side of the device (unlike the NES' removable controllers, these were permanently wired to the Famicom).

The second controller had an integrated microphone in place of its start and select buttons. A 15-pin port meant for hardware add-ons was integrated into the front of the system—we'll talk more about the accessories that used this port in a bit. After an initial hardware recall related to a faulty circuit on the motherboard, the console became quite successful in Japan based on the strength of arcade ports like Donkey Kong Jr. and original titles like Super Mario Bros.

An early prototype of what would become the North American version of the Famicom. The Nintendo Advanced Video System communicated with its peripherals wirelessly through infrared.

The North American version of the console was beset by several false starts, to say nothing of unfavorable marketing conditions. A distribution agreement with then-giant Atari fell through at the last minute after Atari executives saw a version of Nintendo's Donkey Kong running on Coleco's Adam computer at the 1983 Consumer Electronics Show (CES). By the time Atari was ready to negotiate again, the 1983 video game crash had crippled the American market, killing what would have been the "Nintendo Enhanced Video System" before it had a chance to live.

Nintendo decided to go its own way. By the time 1985's CES rolled around, the company was ready to show a prototype of what had become the Nintendo Advanced Video System (AVS). This system was impressive in its ambition and came with accessories including controllers, a light gun, and a cassette drive that were all meant to interface with the console wirelessly, via infrared. The still-terrible market for video games made such a complex (and, likely, expensive) system a tough sell, though, and after a lukewarm reception, Nintendo went back to the drawing board to work on what would become the Nintendo Entertainment System we still know and love today.

By late 1985, Nintendo had settled on the console design that most American readers will be the most familiar with.

What Nintendo went to market with in October 1985 wasn't just a console redesigned for a new territory, but a comprehensive re-branding strategy meant to convince Westerners that the NES wasn't like those old video game consoles that had burned them a few years before. This new Famicom was billed as an "entertainment system" that required you to insert "game paks" into a "control deck," not some pedestrian video game console that took cartridges. The console's hardware followed suit—it was still marketed to kids, but the grey boxy Nintendo Entertainment System looked much more mature than the bright, toy-like Famicom. At the same time, accessories like R.O.B. the robot assured parents that this wasn't just for "video games"—still dirty words to many consumers.

Note the drastic differences between American and Japanese game cartridges. The disk card pictured here was intended for use with the Japan-only Famicom Disk System.

Each of the titles in the relatively strong 18-game launch lineup (remember, at this point the system had been humming along for more than two years in Japan) also featured box art that accurately depicted the graphics of the game inside, unlike the disappointing exaggerations of the Atari 2600 version of Pac-Man or the infamous E.T.

The E.T. box for the Atari 2600.
The Super Mario Bros. box for the NES.
E.T. running on the Atari 2600.
Super Mario Bros. running on the NES.

The final building block in the NES rebuild of the North American game industry was the way Nintendo handled third-party developers. In the Atari era, everyone from Sears to Quaker Oats tried to grab a slice of the gaming pie. The fact that basically anyone could design and sell hastily-coded Atari 2600 games with no interference from or cooperation with Atari led to a game market flooded with shovelware and to clearance bins filled with unsellable dreck. This in turn led to gun-shy retailers and consumers.

Nintendo clamped down on this hard. Third parties had to be licensed to develop games for Nintendo's system, and Nintendo's licensing terms both prohibited developers from releasing games for other consoles and confined them to releasing just five games a year. Other restrictions, mostly aimed at weeding out religious and other "inappropriate" content, were also imposed—memorably, these restrictions resulted in a Super Nintendo port of Mortal Kombat where all the kombatants ooze "sweat" instead of blood. Developers agreed to the restrictions in order to get access to a base of NES fans rabid for new software. (Many of Nintendo's restrictions weren't relaxed until the early '90s, when Nintendo was losing developers to its first credible competition, the Sega Genesis.)

Licensed games received both a printed Seal of Quality on their boxes and access to the proprietary 10NES lockout hardware, a chip on the cartridge's circuit board that checked in with a corresponding chip on the console's. While not foolproof, in the early days of the NES the 10NES hardware helped to combat the flood of low-quality software that had killed off Atari and its ilk.
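For the curious, the lock-and-key idea can be sketched in a few lines of C. This is purely illustrative (the real 10NES chips were small microcontrollers running a shared program in lockstep, not the toy generator below), but it shows the principle: both sides derive the same stream from a shared secret, and any divergence keeps the console in reset.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Both the console ("lock") and cartridge ("key") chips step the same
   generator from a shared seed; the console only boots if the streams agree. */
static uint16_t next_state(uint16_t s) {
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return (uint16_t)((s >> 1) | (bit << 15));
}

static bool handshake(uint16_t lock, uint16_t key) {
    for (int i = 0; i < 16; i++) {
        if (lock != key) return false;  /* mismatch: hold the CPU in reset */
        lock = next_state(lock);
        key = next_state(key);
    }
    return true;                        /* streams agree: boot the game */
}

int main(void) {
    printf("licensed cart:   %s\n", handshake(0xACE1, 0xACE1) ? "boots" : "blinks");
    printf("unlicensed cart: %s\n", handshake(0xACE1, 0xBEEF) ? "boots" : "blinks");
    return 0;
}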

Not all developers were happy with these terms, but fighting Nintendo was an uphill battle. The most significant challenge to the 10NES system came from Tengen, a subsidiary of Atari Games. Rather than try to circumvent 10NES, Tengen used the chip's program code, obtained from Nintendo's deposit at the Copyright Office, to reverse-engineer the chip and create its own compatible version, codenamed "Rabbit." Nintendo sued for copyright infringement and, at least in part because Tengen didn't use a clean-room design for Rabbit, the judge ruled in Nintendo's favor.

The 10NES chip would prevent the system from booting if its security check failed. It was important in the early days, but NESes with dirty or worn connectors are prone to failing its check—this led to the dreaded grey blinking screen that I've probably spent hours of my life looking at. The redesigned top-loading NES shipped without a 10NES chip, and some people who repair older NES consoles recommend snapping off the fourth pin of the chip to disable the check entirely, as shown here.

Salvaged Circuitry

And the rest is really history. The NES was the undisputed leader in the US for several years and wasn't seriously challenged until Sega's Genesis kicked off the 16-bit era. In some territories like Europe and South America, the 8-bit Sega Master System had gained a stronger foothold, but it was a relative rarity in the US. A new top-loading version of the NES and the Famicom with a redesigned controller was launched in both America and Japan in 1993 after the introduction of the Super Nintendo, but by then the stream of high-profile software had slowed to a trickle. The system was produced until 1995 in the US but lived to see its 20th birthday in Japan before being discontinued in 2003.

Listing image by Kongregate Wiki

Open Banking in the UK; a disaster in the making?


02 July 2018

tl;dr:

Adoption of open APIs by TPPs (third-party providers) and the development of compelling customer-facing services are not happening, due to the poor authorisation journeys implemented by banks and the limited data available on their API platforms. The hype generated by a range of speakers pointing to a utopia where these factors don't exist increases the likelihood of disappointment when the reality is understood.

Payment Initiation / PISP

When the CMA Order went live on 13/1/18 there were, aside from the banks, no PISP-authorised TPPs; there remain very few today, and there is no sign of a customer (PSU) facing service using the APIs available on the open market. The question is why; the answer is simple - the authorisation journeys.

Under the OAuth2-based API flow which forms the basis of the Open Banking Ltd (OBIE) design, once a consent is formed between a PSU and a TPP, the PSU is redirected to the bank (and whatever solution they provide) where they must authorise consent. The API design is standardised, but the authorisation journey that the PSU then goes through is not. Some banks (e.g. Starling) have taken a minimal approach, relying on their own app and (quick and easy) biometric authentication, followed by a display of the consent object with the option to authorise. This journey is automatically invoked, familiar to the PSU, highly secure, and (most importantly) simple to follow. It adds minimum friction but not at the expense of security and control.
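To make the redirect step concrete, here is a minimal sketch in C of the kind of authorisation request a TPP constructs before handing the PSU over to the bank. The endpoint and identifiers are hypothetical, and the query parameters shown are the standard OAuth2 ones rather than the full OBIE profile.

#include <stdio.h>

int main(void) {
    /* All values below are made up for illustration. */
    const char *authorize_endpoint = "https://bank.example/authorize";
    const char *client_id    = "tpp-client-123";    /* issued to the TPP */
    const char *redirect_uri = "https://tpp.example/callback";
    const char *request_jwt  = "eyJhbGciOi...";     /* signed consent object */

    /* The PSU's browser or app is redirected to this URL; the bank then
       authenticates the PSU and asks them to authorise the consent. */
    printf("%s?response_type=code&client_id=%s&redirect_uri=%s"
           "&scope=payments&request=%s\n",
           authorize_endpoint, client_id, redirect_uri, request_jwt);
    return 0;
}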

Conversely, the implementations of the redirect authentication by the CMA9 have shown poor customer experience (UX). TPPs have no control, but this will impact the view customers have of their applications by association. Multiple steps, requirements to recall opaque user IDs and static passwords / memorable words, needless repetition of the consent object, reliance on web journeys and the like all feature. These are worse for the customer than the biometric approach, and are far more vulnerable to phishing and other security risks. The idea that increased friction in the journey means a reduced likelihood of fraud is a fallacy.

From the TPPs' perspective they face a choice: do they adopt API-initiated payments in the knowledge that there will be significant customer drop-off relative to a digital card journey such as Apple Pay or Google Pay, or do they err on the side of caution and assume that the loss of customers would outweigh the fees of card-based payments? Every TPP spoken to so far has taken the latter approach.

One response to this has been to benchmark drop-off rates on banks' API platforms against drop-off rates on similar journeys which rely on screen scraping, or on payments initiated through their web portals. This is a fool's errand. In both cases the banks have a captive audience, with no option but to accept whatever UX is put in front of them, otherwise they won't be able to access the banks' services. The banks care little for the drop-off rates experienced by TPPs, and if anything will be happy to see them, as they will increase the likelihood of customer retention on their own channels, and the continuation of the market dominance that the CMA Order sought to address. If the status quo continues, both aims - the development of payment initiation services, and a resulting increase in competition - will have failed, partially as a result of design decisions taken by the very organisations subject to the Order.

Account Information / AISP

A previous post explained the challenges faced as a result of the lack of immutable transaction IDs. The points remain, but in the meantime, a survey of current and prospective third parties has taken place. Responses to the challenge vary from the application of proprietary (to the TPP) IDs, to acceptance of the likelihood of issues, to a refusal to use any service without this essential feature.

The issue is challenging from both technical and business standpoints. If the TPP is offering an accounting application, the danger of transactions appearing twice (pending and booked) or changing means that the user could see wildly different account balances. If this data is to be used to offer lending products, a very high level of surety is required around balances and cash flow, so the consequences of a lack of reliability and traceability kill the business case. This is also the case with prospective credit rating generation, and risk analysis. One respondent stated that the lack of reliability of the data necessitated an increase on the risk rating to be applied by an algorithm and a consequent additional margin, meaning a direct impact to the customer in increased cost of product.

Common to the feedback has been significant concern, a recognition of the need for potentially unreliable workarounds in the short term, and of the potential inefficiency of consistently having to pull and analyse complete data sets to perform any business function. Additionally, feedback was given that the absence of immutable IDs would prevent the use of an API using this design for some, if not all, of the respondent's business cases. A change request has now gone to the CMA9 for review, so this very unfortunate feature of the AISP implementations may yet change. In the meantime it is severely limiting the development of compelling use cases.

Combatting the hype

Anyone with a LinkedIn account will see several articles a day extolling the virtues of open banking, and the potential it has to revolutionise behaviour in retail banking. Here are some examples from the past week:

There is no coverage, online or in print, of the challenges elucidated above, which will dictate whether open banking takes off. If these barriers are not overcome, those building services say they simply won't adopt the open APIs mandated under the regulatory regimes. Focussing on comment from traditional spokespeople - high-level execs, columnists and consultancies - is part of the problem. Most of these have no direct experience of the ecosystem, but all recognise that speaking positively about utopias which may never exist will have a positive effect on the market's impression of them and the organisations they represent. They are just building hype around possibilities that may never materialise, and, should open banking turn into a white elephant, the disappointment will be all the greater for the hype they've built.

For example, PwC recently delivered a report on open banking - Open Banking market could be worth £7.2bn by 2022: PwC - PwC UK. Some notable points raised:

  1. The principal growth areas are projected to be account aggregation, expenditure analytics and financial product comparisons. This is difficult to understand given that none of these areas is new - only the way the information is delivered has changed. And in most cases (relative to the challenger banks) the data delivered is very limited, preventing innovation on these pre-existing services. The major market comparison engines pull richer data through a combination of private means and screen scraping, and have made clear that open banking APIs in their current form will not replace this.
  2. ‘The consumers most likely to share data tend to be young, urban-dwelling, high earners who are comfortable using technology and multibanking’. This rather states the obvious: most people in the UK are urban-dwelling, and open banking can only be accessed through some form of tech platform. PwC think account aggregation is important - and you can't use that unless you ‘multibank’.
  3. “Open Banking is a potential game changer for individual and corporate consumers. It provides an opportunity to transform the public’s interaction and everyday experience with the financial services industry. But there are still many ‘hard yards’ to travel. Few disruptive propositions have been developed so far. This is unsurprising given that since the launch of Open Banking in January it remains unclear who needs to get an account information licence or a payments handling licence and how these licences may change in the future.”

It would be interesting to know whether the API specifications were reviewed for this report. The lack of disruptive propositions has nothing to do with confusion around who needs an AISP/PISP licence - it's due to factors such as:

  1. A lack of rich data or functionality on the account information APIs,
  2. A regressive method coupled with very poor authorisation journeys on the banks’ platforms,
  3. Technical challenges such as that posed by the lack of immutable transaction IDs,
  4. The absence of any bank-provided, data rich testing environments.

These have been pointed out on this blog, but none of them has changed. PwC would do well to look at the integrations available on the Monzo and Starling API platforms, which are truly disruptive, particularly in cross-currency payments, where they undercut the retail market by several percentage points. They might consider that this functionality has been built voluntarily by these challenger banks and integrated third parties, in stark contrast to what the incumbents have done: their focus remains the regulatory-mandated bare minimum. Given these misunderstandings and this lack of knowledge, it isn't a great leap to challenge PwC's prediction of a £7.2bn market, and the value of their entire report. It's just another example of unhelpful hype, which distracts from the really important issues - issues that need immediate engagement if anything positive is going to happen over the next 6-12 months.

Conclusion

PwC are right in one respect - open banking is a potential game changer. If this is to be realised, there are significant challenges that the ecosystem needs to address immediately, or open banking may not live up to its potential (at best) or may not develop at all. The aim of the CMA Order was to increase competition in the retail banking market, but there is evidence of decisions aimed at thwarting adoption of open banking and entrenching a non-competitive market. Commentators who talk up potential future ecosystem development are complicit in this, as the column inches they generate, and the speeches given at endless conferences, serve only to distract audiences from important issues which need addressing immediately. Open banking has been live in the UK for six months but there is little evidence of improvement, and OpenWorks' recent report showed poor performance levels on the current implementations, even in the context of very limited use. Something must change very soon, or the only response to a GET request will be disappointment.

Bitcode Demystified


A few months ago Apple announced a ‘new feature,’ called ‘Bitcode.’ In this article, I will try to answer the questions like what is Bitcode, what problems it aims to solve, what issues it introduces and so on.

What is Bitcode?

To answer this question let's look at what compilers do for us. Here is a brief overview of the compilation process:

  • Lexer: takes source code as an input and translates it into a stream of tokens;
  • Parser: takes the stream of tokens as an input and translates it into an AST;
  • Semantic Analysis: takes an AST as an input, checks whether the program is correct (methods are called with the correct number of parameters, a called method actually exists on an object and is non-private, etc.), fills in ‘missing types’ (e.g.: let x = y, so x has the type of y) and passes the AST to the next phase;
  • Code Generation: takes an AST as an input and emits some high-level IR (intermediate representation);
  • Optimization: takes IR, makes optimizations and emits IR which is potentially faster and/or smaller;
  • AsmPrinter: another code generation phase; it takes IR and emits assembly for a particular CPU;
  • Assembler: takes assembly and converts it into object code (a stream of 0s and 1s);
  • Linker: usually programs refer to already compiled routines from other programs (e.g.: printf) to avoid recompilation of the same code over and over. Until this phase these links do not have correct addresses; they are just placeholders. The linker's job is to resolve those placeholders so that they point to the correct addresses of their corresponding routines.

You can find more details here: The Compiler.

In the modern world these phases are split into two parts: the compiler frontend (lexer, parser, semantic analysis, code generation) and the compiler backend (optimization, asm printer, assembler, linker). This separation makes a lot of sense for both language designers and hardware manufacturers. If you want to create a new programming language, you ‘just’ need to implement a frontend, and you get all the available optimizations and support for different CPUs for free. On the other hand, if you have created a new chip, you ‘just’ need to extend the backend, and you get support for all the available languages (frontends) on your CPU.

Below you can see a picture that illustrates compilation process using Clang and LLVM:

This picture clearly demonstrates how communication between the frontend and backend is done using IR. LLVM has its own IR format, which can be encoded using the LLVM bitstream file format - Bitcode.

Just to recall it explicitly - Bitcode is a bitstream representation of LLVM IR.

What problems does Apple's Bitcode aim to solve?

Again, we need to dive a bit deeper and look at how an OS runs programs. This description is not precise and is given just to illustrate the process. For more details I can recommend reading this article: How OS X Executes Applications.

OS X and iOS can run on different CPUs (i386, x86_64, arm, arm64, etc.). If you want to run a program on any OS X/iOS setup, then the program should contain object code for each platform. Here is how such a binary might look:

When you run a program, the OS reads the ‘Table Of Contents’ and looks for a slice corresponding to the OS CPU. For instance, if you run the operating system on x86_64, then the OS will load the object code for x86_64 into memory and run the program.

What’s happening with other slices? Nothing, they just waste your disk space.
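To see those slices for yourself, the ‘Table Of Contents’ (the fat header) can be read directly. A minimal sketch, assuming a macOS build environment with the system headers available:

#include <stdio.h>
#include <stdint.h>
#include <mach-o/fat.h>            /* struct fat_header, struct fat_arch */
#include <libkern/OSByteOrder.h>   /* OSSwapBigToHostInt32 */

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header hdr;
    if (fread(&hdr, sizeof hdr, 1, f) != 1) { fclose(f); return 1; }

    /* fat headers are stored big-endian on disk */
    if (OSSwapBigToHostInt32(hdr.magic) != FAT_MAGIC) {
        fprintf(stderr, "not a fat binary\n");
        fclose(f);
        return 1;
    }

    uint32_t n = OSSwapBigToHostInt32(hdr.nfat_arch);
    for (uint32_t i = 0; i < n; i++) {
        struct fat_arch arch;
        if (fread(&arch, sizeof arch, 1, f) != 1) break;
        printf("slice %u: cputype %d, offset %u, size %u bytes\n", i,
               (int32_t)OSSwapBigToHostInt32((uint32_t)arch.cputype),
               OSSwapBigToHostInt32(arch.offset),
               OSSwapBigToHostInt32(arch.size));
    }
    fclose(f);
    return 0;
}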

This is the problem Apple wants to solve: currently, all the apps on the AppStore contain object code for arm and arm64 CPUs. Moreover, third-party proprietary libraries or frameworks contain object code for i386, x86_64, arm and arm64, so you can use them to test the app on a device or simulator. (Can you imagine how many copies of Google Analytics for i386 you have in your pocket?)

UPD: I do not know why, but I was sure that final executable contains these slices as well (i386, x86_64, etc.), but it seems they are stripped during the build phase.

Apple did not give us many details about how Bitcode and App Thinning work, so let me speculate about how it may look:

When you submit an app (including Bitcode), Apple's ‘BlackBox’ recompiles it for each supported platform and drops any ‘useless’ object code, so the AppStore has a copy of the app for each CPU. When an end user wants to install the app, she installs only the version for her particular processor, without any unused stuff.

Bitcode might save up to 50% of disk space per program.

UPD: Of course, I do not take resources into account; this is just about the binary itself. For instance, an app I am working on currently has a size of ~40 megabytes (including assets, xibs, fonts), while the size of the binary itself is ~16 megabytes. I checked the sizes of each slice: ~7 MB for armv7 and ~9 MB for arm64; if we crop just one of them, it will decrease the size of the app by ~20%.

What problems does Bitcode introduce?

The idea of Bitcode and recompiling for each platform looks really great, and it is a huge improvement, though it has downsides as well: the biggest one is security.

To get the benefits of Bitcode, you should submit your app including Bitcode (surprisingly). If you use some proprietary third-party library, then it also should contain Bitcode, hence as a maintainer of a proprietary library, you should distribute the library with Bitcode.

To recall: Bitcode is just another form of LLVM IR.

LLVM IR

Let’s write some code to see LLVM IR in action.

// main.c
extern int printf(const char *fmt, ...);

int main() {
  printf("Hello World\n");
  return 0;
}

Run the following:

clang -S -emit-llvm main.c

And you’ll have main.ll containing IR:

@.str = private unnamed_addr constant [13 x i8] c"Hello World\0A\00", align 1

; Function Attrs: nounwind ssp uwtable
define i32 @main() #0 {
  %1 = alloca i32, align 4
  store i32 0, i32* %1
  %2 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([13 x i8]* @.str, i32 0, i32 0))
  ret i32 0
}

declare i32 @printf(i8*, ...) #1

What can we see here? It is a bit more verbose than the original C code, but it is still much more readable than assembly. Malefactors will be much happier to work with this representation than with a disassembled version of a binary (and they do not even have to pay for tools such as Hopper or IDA).

How could a malefactor get the IR?

iOS and OS X executables have their own format - Mach-O (read Parsing Mach-O files for more details). A Mach-O file contains several segments, such as read-only data, code, and the symbol table. One of those sections contains a xar archive with Bitcode:

It is really easy to retrieve it automatically; here is a simple C program I wrote that does just that: bitcode_retriever. The workflow is pretty straightforward. Let's assume that some_binary is a Mach-O file that contains object code for two CPUs (arm and x86_64), and each object code is built from two source files:

$ bitcode_retriever some_binary
arm.xar
x86_64.xar
$ xar -xvf arm.xar
1
2
$ llvm-dis 1  # outputs 1.ll
$ llvm-dis 2  # outputs 2.ll

Bitcode does not store any information about original filenames but uses numbers instead (1, 2, 3, etc.). Also, you probably do not have llvm-dis installed or built on your machine, but you can easily obtain it; see this article for more details: Getting Started with Clang/LLVM on OS X.

Another potential issue (I can't confirm it): the Bitcode mechanism works only for iOS 9, so if you submit your app to the AppStore and it includes Bitcode, then a malefactor can get the whole IR from your app using iOS 7/8 and a jailbroken device.

I know only one way to secure the IR - obfuscation. This task is not trivial in itself, and it requires even more effort if you want to introduce this phase into your Xcode-driven development flow.
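A crude source-level illustration of the idea (real obfuscators typically work on the IR itself, mangling symbols and control flow, but renaming alone already shows the effect; the function here is invented for the example):

/* Before: the names leak intent straight into the IR. */
double compute_secret_score(double revenue, double risk) {
    return revenue * 0.7 - risk * 1.3;
}

/* After: identical behavior, but the IR a malefactor recovers
   now says almost nothing about what the code is for. */
double a1(double b1, double b2) {
    return b1 * 0.7 - b2 * 1.3;
}

int main(void) {
    /* sanity check: both versions compute the same result */
    return compute_secret_score(100.0, 10.0) == a1(100.0, 10.0) ? 0 : 1;
}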

Summary

  • Bitcode is a bitstream file format for LLVM IR
  • one of its goals is to decrease the size of an app by eliminating unused object code
  • a malefactor can obtain your app or library, retrieve the IR from it, and steal your ‘secret algorithm.’

Why tech’s favorite color is making us all miserable


When I was 14, I saved up money from my first web design job to trick out a really nice gaming PC. I outfitted my computer with tons of blue LED fans, and I kept it on at night, right next to my bed. Shortly after, I realized my sleep patterns were changing. While I wasn’t staying awake any later, it now took me longer to get to sleep. Was I eating differently? Was it just a part of being a teenager? Was it the light in my room? But the orange light from my ’80s era alarm clock wasn’t keeping me up. I finally determined that it must be the particular shade of blue light from my new computer. It took me some research to realize all this, but once I did, I started turning my computer off at night. Problem solved. And when I bought my next computer, I ordered fans with orange lights.

The bright blue light of flat, rectangular touch screens, fans, and displays may be appealing from an aesthetic perspective (more on that below), but from a health standpoint, it is fraught with problems. Blue light inhibits the production of melatonin, the hormone that regulates our sleep cycles. Blue light before bedtime can wreak havoc on our ability to fall asleep. Harvard researchers and their colleagues conducted an experiment comparing the effects of 6.5 hours of exposure to blue light, versus exposure to green light of comparable brightness. They found that blue light suppressed melatonin for about twice as long as the green light and shifted circadian rhythms by twice as much (3 hours compared with 1.5 hours). And worse, it’s been linked in recent studies to an increased risk of obesity and some cancers.

[Photo: Nadezda Murmakova/Shutterstock]
A decade after my experience with the LED fans, I started seeing blue displays everywhere. From mobile phones to in-car displays, blue lights were becoming the norm. It's hard for me to think of any prominent high-tech products on the market now without pale blue screens or indicator lights. LED-based bulbs with more blue light are fast replacing incandescent bulbs. The default displays of our iPhones and Androids operate along the blue spectrum, as do our laptops; new cars, especially those like Teslas which aspire to be “futuristic,” come with blue-lit dashboard displays, and so do our “smart” appliances, televisions, video game consoles, watches–the list goes on.

Thanks to the rapid growth of connected devices and digitized appliances, blue light is now flooding into our lives in places where we’re most vulnerable. It’s why, for instance, when we stumble into the kitchen late at night for some water, we’re guided by the illumination from the touchscreen on our refrigerator–and the after-image of the screen leaves us half-blind, and once back in bed, half-awake.

The right color for dense information

It could be argued that the average person today manages as much information with their devices as an intelligence officer in a wartime situation. But from the Cold War up to now, the user experience of military and consumer technology has vastly differed: Airplane cockpits, submarines, and other military-grade systems are specifically designed for information density, with primary, secondary and tertiary information sources. A key difference in all of these interfaces is color–by and large, many military displays are deep red or orange.

[Photo: Master Sgt. Mark C. Olsen/Department of Defense]
Why use orange and red in military interfaces? They’re low-impact colors that are great for nighttime shifts. In addition, bright blue light is more likely to leave visual artifacts, especially in darkened environments. Have you ever been blinded by the display in your car–or on your phone–when you switch back and forth between looking at that screen and the road ahead? Because the screen is a brighter block of high-energy light, driving (or for that matter, walking) at night creates a longer, stronger afterimage that can adversely affect us when our eyes return to where we’re going.

A contemporary BMW 4 Series display. [Photo: BMW]
BMW is a rare exception in the orange vs. blue design divide, because the car company follows the military’s reasoning: Since the ’70s, BMW has made its cars’ dashboard cluster lights with a red-orange hue, at a wavelength of 605 nanometers. This allows drivers to see the instruments clearly, the company found, while also enabling their vision to quickly adjust to the outside darkness after quickly glancing down; red-orange light also caused less eye fatigue.

2001, Blade Runner, and culture’s blue shift

Somewhere along the line, blue took over in the public consciousness as the “color of the future,” while orange began to look like a shade from the Reagan ’80s. In our current culture, blue signals a transition from the past to the present, from the analog to the digital.

A scene from the 1968 film 2001: A Space Odyssey. [Image: Warner Bros]
Movies and television greatly helped drive this shift. In the late ’60s, the landmark depictions of the future, 2001: A Space Odyssey and Star Trek, conveyed a relatively optimistic vision for humanity, in which we are able to transcend wars and other conflicts to explore the stars–on spaceships controlled, in both cases, with user interfaces that were predominantly orange and red-orange.

While both remain widely loved and admired, they’re sometimes perceived as naive and dated. (Never mind that the iPad was inspired by Star Trek‘s PADD devices, or that we saw video-based messaging for the first time in 2001.)

An instrument panel from 1982’s Blade Runner. [Image: Warner Bros]
After premiering in 1982, by contrast, Blade Runner quickly grew in influence as a cult classic among filmmakers, artists, designers, and advertisers. Ridley Scott’s depiction of the future was believable, compelling, and most of all, dark–both figuratively and literally. The blue light from the pervasive display screens depicted in the movie fit its shadowy film noir aesthetic, and inadvertently became one of the core tenets of our default mental image of “what the future looks like.”

Remaking the future with a warmer shade

If pop culture has helped lead us into a blue-lit reality that’s hurting us so much, it can help lead us toward a new design aesthetic bathed in orange. We need a resurgence of more realistic user interfaces in movies and TV–which, by definition, will skew away from blue. Designers and technologists can help teach audiences to expect more from how user interfaces are depicted in their movies. (Inspiring them to worry, for instance, whether Ethan Hunt will have a headache from looking at too many impossible mission messages on a blue screen.) Film effects designers can even take their talents into real product design, as Mark Coleran recently did.

Popular culture is only one way to reshape users’ expectations around interfaces. Startups, blog posts, news articles, and podcasts can help increase general awareness. Popularizing the risks of blue light and re-educating the public about the functionality of orange and red light is the first step, but companies need to take the next step and bring tested, human-centered, functional interfaces into real-world design.

None of this is meant to suggest a universal ban on the color blue. Car and appliance displays, for example, could still emanate a futuristic blue during the day–as long as that light switched to an orangish hue as evening comes. This matters because many people fall asleep with phones in hand, watching Netflix or binge-browsing Reddit. At least allowing consumers the option would be a good first step.

One example is Flux, a Mac app that changes the color of your computer’s display to match the time of day. Instead of a bright blue screen at night, you’ll experience a warm orange hue that will help you wind down for a successful night of sleep. In the day, the display changes back to a bright white, matching the sky outside. Following Flux’s lead, Apple released Night Shift, bringing the features of Flux directly into the Mac operating system. iPhone users can now use Night Shift and the less-known Color Tint feature, and Android users can download Twilight for their screen-dimming needs. I hope this new trend extends to all devices, and that we see a world lit by LEDs in warm spectrums.
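The core mechanism behind all of these tools is simple: as evening approaches, scale down the display’s blue channel. A hypothetical sketch (the real apps use calibrated color-temperature curves rather than this linear ramp):

#include <stdio.h>

/* Return a blue-channel multiplier for a given hour (0-23):
   full blue during the day, progressively warmer after 18:00. */
static double blue_scale(int hour) {
    if (hour >= 7 && hour < 18) return 1.0;   /* daytime: unmodified */
    if (hour >= 18 && hour < 23)              /* evening: ramp down */
        return 1.0 - 0.12 * (hour - 17);
    return 0.3;                               /* night: warmest tint */
}

int main(void) {
    for (int h = 0; h < 24; h += 3)
        printf("%02d:00 -> blue x %.2f\n", h, blue_scale(h));
    return 0;
}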

For military designers, creating an effective, comfortable user experience has always been a matter of life and death. Consumer device designers must begin with a similar perspective. Too much is already at stake.

Amber Case is a design advocate, speaker, and research fellow at the Berkman Klein Center for Internet and Society at Harvard University and The MIT Center for Civic Media. She is the author of Calm Technology (2015) and Designing with Sound (2018). Follow her on Twitter.

Microsoft: The Early Days



In May of 1981, I joined my former boss from Xerox PARC, Charles Simonyi, to become Microsoft's 77th employee. The newly born Applications Division I entered then was hardly a Division yet, consisting of only two other employees besides Charles and myself.

At that time, the rest of the company was working under the brilliant vision of Bill Gates on Microsoft's traditional product line of programming languages -- Basic and Fortran -- and our newer line of operating systems -- Xenix, which was a variety of Unix, and an upstart new operating system called MS-DOS.

Our mission, though, was to break into the emerging market of business applications for personal computers. At that time, "personal computer" pretty much meant Apples. Just a few months later, IBM would announce its own entry into the field, an entry which would overnight come to mean "PC".

But Charles's mission was to compete against the surprisingly successful VisiCalc, the first spreadsheet program. He was to develop Microsoft's spreadsheet, a project code-named "EP" (for "Electronic Paper") and later marketed as Microsoft Multiplan. That task he entrusted to Doug Klunder, programmer extraordinaire, who would go on to lead the development of the unmatched Excel after Multiplan's lukewarm market reception in the face of Lotus 1-2-3.

I had a slightly different mission. I was to write the so-called "p-code C compiler" that was crucial to Charles's business strategy. His strategy came to be known as the Revenue Bomb.

Here's how the Revenue Bomb worked. You would list all the different business products that Microsoft would develop on the horizontal axis. On the vertical axis, you would list all the different personal computers that were coming out from the dozens of hardware manufacturers. The p-code C compiler, which I named "CS" and which was used for more than ten years to develop Microsoft application software, would allow us to create separate versions of each product very easily for each of the different machines.

What we didn't realize -- nor did most people in those days -- was that there wouldn't be dozens of different PC architectures competing for the market. There would soon be only two: IBM's and Apple's Macintosh. But CS gave Microsoft the upper hand for many years in developing Mac and IBM applications hand-in-hand.
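To see why a p-code design buys that portability, consider a toy version: the compiler emits one compact bytecode stream, and each new machine only needs a small native interpreter loop. This sketch is purely illustrative and bears no relation to the actual CS instruction set:

#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

/* The same bytecode runs unchanged on any CPU that has this
   interpreter -- only the interpreter is ported, not the programs. */
static void run(const int *code) {
    int stack[64], sp = 0;
    for (;;) {
        switch (*code++) {
        case OP_PUSH:  stack[sp++] = *code++; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* (2 + 3) * 4 */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL,
                      OP_PRINT, OP_HALT };
    run(program);
    return 0;
}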

I spent my first summer at Microsoft writing CS, then returned the next summer to work on a secret new project. It was to be a modest word processor to serve as an inexpensive entree to the business software market. By getting people used to our user interface, they would then be able to easily learn Multiplan and our future business products: Chart and File among them. By October of 1983, when Word shipped, we had more than 30 programmers and one marketing guy in the now-getting-serious Applications Division. The problem was, Multiplan was already done and its user interface was already out there. I had to be compatible with it. My mission: write the world's first word processor with a spreadsheet user interface.

It took five years to repair the damage.

Microsoft Word, of course, went on to dominate the market and today is by far the most popular PC word processor.

Read more about Richard Brodie's experience at Microsoft on-line; this essay was originally printed in the book Heart at Work, edited by Jack Canfield and Jacqueline Miller.


Mining the Sky


Meehan Crist, Atossa Araxia Abrahamian, and Denton Ebel on Astrocapitalism

An artist's rendition of an asteroid mine, courtesy of Factor Magazine.

Why is one of the smallest countries in the world trying to privatize space?

American entrepreneurs want to mine asteroids—and Luxembourg sees a major business opportunity. The technical and legal hurdles are high. But there’s a lot of money to be made in making outer space safe for extractive capitalism.

In September 2017, three people gathered at Caveat in New York City to discuss this issue. Their conversation was part of Convergence, a live show and podcast about science and society hosted by Meehan Crist, writer in residence in biological sciences at Columbia University. The discussion featured Atossa Araxia Abrahamian, a journalist and author of The Cosmopolites: The Coming of the Global Citizen, and Denton Ebel, Chair of the Division of Physical Sciences at the American Museum of Natural History, as well as the curator of the museum's meteorite collection.

Atossa Araxia Abrahamian (AAA)

Luxembourg is very bullish on asteroid mining. They think it’s gonna be a huge business five, ten, fifteen, twenty years from now. And they’re a tiny country that needs to get ahead in this world. Historically, one of the ways they’ve done that has been to identify emerging businesses and lure them to Luxembourg. So that’s why they’re courting Planetary Resources, a startup based outside of Seattle that’s received funding from Richard Branson and various Google executives.

Meehan Crist (MC)

How big is Luxembourg?

AAA

The size of Rhode Island.

MC

And how does a company like Planetary Resources describe what it wants to do?

AAA

Space is no longer a place where only big, nationally funded projects can go. It’s now open—it’s possible for commercial operators like Elon Musk’s SpaceX to send up rockets.

But bringing things back from space is still a long way off. There are significant financial and technical challenges. So right now, Planetary Resources says that their goal is to refuel existing space missions. Essentially, they want to build a gas station in space.

That saves a lot of money, because it’s expensive to send fuel and water up with rocket launches. If you could just have a fueling station that’s already up there, you could extend the length of space missions.

So that’s the first step. And down the line, the idea is that there are natural resources that we’re going to run out of on earth—or that we don’t have very much of to begin with—that the company can procure in space by mining asteroids.

MC

What is actually in these asteroids that one could mine?

Denton Ebel (DE)

My understanding is that some companies are focused on mining iron asteroids for metals and others, like Deep Space Industries, are more focused on mining carbon and hydrogen and oxygen—water—for space civilization down the road.

We know there are bodies in the asteroid belt that are mostly iron. We have them in the form of iron meteorites—bits of them. In the American Museum of Natural History we have the crown jewel of meteoritics—or I’ll call it that—which is the Ahnighito fragment of the Cape York meteorite. It sits on six pillars going down to bedrock because it’s thirty-four tons of iron nickel metal. That has about eight grams of platinum per ton. By comparison, the richest ores that we mine in South Africa for platinum group elements have less than four grams per ton.

There’s also an asteroid called Kleopatra that reflects radar telling us that it’s mostly metal. It’s 220 kilometers long. You could sit it nicely down on New Jersey. It’s about 4.2 grams per cubic centimeter in density, where water is one gram per cubic centimeter. Iron-nickel meteorites are 7.2 grams or more per cubic centimeter–so this is a porous, large object. It consists of big chunks of iron that are sort of welded together.

MC

So how do you begin to solve the engineering problems involved in asteroid mining? Asteroids are shaped differently, they have different kinds of densities, they have different kinds of gravity. Building a robot that can actually go land on an asteroid and extract something and come back is not a single problem, right?

DE

No, it's a bunch of problems.

On earth, a mining operation is a single, large, complicated thing—one failure and you’re down. In space, we can’t afford that. Instead, asteroid mining will involve a horde of ant-like robots, each of which will work semi-autonomously, linked through a hive mind. This is where people like E.O. Wilson, and other insect studiers, become important for asteroid mining because we will need to understand swarm behavior.

AAA

You can definitely imagine conflicting swarms fighting over a piece of asteroid real estate. It’s something that seems pretty ripe for science fiction, but it could happen.

DE

Robots fighting each other in space—Roy Batty talks about it in the original Blade Runner. But yeah, that’s one scenario for space warfare. Another is throwing things. If you throw something from an asteroid in just the right direction, it will hit the earth and make a destructive impact, because it's going really fast. It’s called conservation of angular momentum.

MC

And how do you power all these tiny robots?

AAA

Well, you need water to do most things—up there, down here, everywhere, if you’re a robot or a rocket or a human. And if you can get it up there, it saves you a lot of time and trouble. That’s why Planetary Resources wants to build a fueling station in space.

But the main point here from Luxembourg's perspective is that this is a massive business opportunity and it’s largely unregulated for now.

MC

Why Luxembourg?

AAA

That’s a good question. There’s nothing about Luxembourg, or where it is, or who lives there, that makes it particularly qualified to get into this game. They don’t have a space agency. They only just built a university. They only have about half a million people. They’re not a massive industrial power.

But what they do have—and this is really important—is the ability to make laws. You can only make laws if you’re a sovereign country. There are only so many sovereign countries. Luxembourg is one of them. And throughout its history, it has survived by making laws that businesses want.

It’s as though lobbyists write their legislation. In fact, sometimes they do. Luxembourg is one of the biggest tax havens in the world, even though it's one of the tiniest countries in the world. It's the second biggest center for mutual funds. Trillions of dollars are funneled offshore through Luxembourg. It’s just astonishing the amount of wealth that passes through this tiny little country, and it does that because they have really favorable tax laws.

Fun fact: right before the Great Depression, Luxembourg passed a law exempting holding companies from any corporate taxes. So what happened? Companies from all around the world flocked to Luxembourg to open up shop. Within a few decades you had tens of thousands of companies there, to the point where you walk past a building in Luxembourg today and there are a hundred firms listed as resident there. There’s no way all of these people could fit in the building. But they’re registered.

MC

By “open up shop,” you mean they have an address there.

AAA

Exactly. And simply by virtue of having an address and registration, they are able to claim they’re a Luxembourg company. There’s a lot of accounting acrobatics involved.

But what does this have to do with space? Well, if you’re a company that thinks that one day you’re going to make a ton of money by digging platinum out of asteroids and bringing it back to earth, where do you want to incorporate? You want to go to a place that’s going to have low taxes, where it’s easy to set up a company, where the politicians want to help you, and where the country is going to recognize your ownership of the stuff that’s in space. Which is not obvious or intuitive, because there isn’t any binding international law that determines who can own stuff that’s in space.

MC

The basis of space law is the Outer Space Treaty of 1967, which says that sovereign nations can go out and explore space, but they can’t claim sovereignty over it.

AAA

Right. According to the Outer Space Treaty, space is not subject to national appropriation by claim of sovereignty, use or occupation or any other means. So you can’t go up there and say, “This is part of my country.”

The question is: can a private company not claiming sovereignty say “this asteroid is mine” or “this platinum is mine”?

Property rights in space are not a given. If I take your bracelet, you can say, “That's my bracelet.” Why is it your bracelet? Because you bought it in a shop. There was a transaction. That doesn’t happen in space. There are no transactions in space. There’s no market yet. The whole aim of this enterprise is to create markets up there in a (literal) vacuum.

MC

They’re creating property law where there is no property, or law…

DE

There is some precedent for property in space, though. The moon rocks returned by the Apollo missions, and the moon rocks returned by the robotic lunar missions of Russia—those are the property of those sovereign nations.

MC

Why should this worry us? What are the potential dangers of space privatization?

AAA

Space is relatively untouched. It has not yet been exploited and pillaged—and when it is, I would hope that everybody could benefit from it. If space becomes entirely privatized, however, I worry that the profits will simply flow to the private sector—and thanks to countries like Luxembourg, those profits won’t be taxed.

It’s going to look like the extractive industries on earth. It’s going to be a repeat of what we're seeing in Congo. Now, one big difference is that there aren’t any humans to suffer human rights abuses up in space, so that’s good. You won’t have blood minerals, because the miners will be robots.

MC

What does the Law of the Sea and the history of deep sea mining tell us about what might happen with asteroid mining?

AAA

Well, the asteroid mining companies like to compare mining asteroids to fishing in international waters. They say, “This doesn’t belong to anyone and it’s not sovereign territory, but we can fish and take the fish and eat the fish and sell the fish and do all the things with the fish. Why can’t we just do that with the stuff that’s on asteroids?”

That’s a good question. Some legal scholars say, “Yeah, that’s legit.” Others say, “No, you can't do that without having established who owns what beforehand.” I’m generalizing, but that’s the tenor of the discussion.

MC

You can’t get very far down this road before you start talking about empire. Even if you love the basic science and believe in understanding more about the moon and how the universe started, this is functionally an extension of the colonialist project into space, to some degree.

DE

In 1960, the top federal tax rate on income in the United States was 91%. And if you look at constant dollar spending on space activities by the public sector, the Apollo program in the 1960s was huge compared to what we’re spending today. The shift to private enterprise taking some of the responsibility for space may reflect the fact that the resource balance between private entities and public entities has changed.

MC

One could ask where all those resources came from. Or argue that if all of these companies that are funneling their money through Luxembourg were actually paying their taxes, we might have more money for things like NASA.

What about the idea of space as a cosmic commons? That was part of the intention of the Outer Space Treaty. It states that whatever is done up there should be done for the good of all humankind, and that space belongs to everyone.

AAA

I would love for a portion of all profits made in space to be redistributed. Maybe that’s crazy. Maybe I’m a crazy Marxist.


This piece appears in Logic's fourth issue, "Scale."


EQT to acquire leading open source software provider SUSE

  • EQT VIII to acquire SUSE, a leading global provider of open source infrastructure software for enterprises
  • EQT VIII is partnering with CEO Nils Brauckmann and his team to support SUSE’s next period of growth and innovation, and to strengthen its position as leading open source player both organically and through add-on acquisitions
  • SUSE to further build its brand and unique corporate culture as a stand-alone business

The EQT VIII fund (“EQT” or “EQT VIII”) has agreed to acquire SUSE, a leading global provider of open source infrastructure software for large enterprises, from the global infrastructure software business Micro Focus International plc (“Micro Focus”) for an enterprise value of USD 2.535 billion. The transaction is subject to Micro Focus shareholder and customary regulatory approvals.

Founded in 1992, SUSE is the world’s first provider of an enterprise-grade open source Linux operating system. With sales of USD 320 million in the 12 months ended October 31, 2017 and approximately 1,400 employees worldwide, SUSE is today a market leader in enterprise-grade, open source software-defined infrastructure and application delivery solutions for on premise and cloud-based workloads. During the ownership of Micro Focus, SUSE has operated as a semi-independent business under the leadership of Nils Brauckmann, executing on a clearly defined growth charter. SUSE has also successfully expanded its product portfolio, including solutions for cloud and storage as well as container and application delivery technology.

EQT VIII will support SUSE’s next period of growth and innovation as an independent company. The strategy includes strengthening its position as a leading open source player, both organically and through add-on acquisitions, leveraging EQT’s long-term experience in the software space. Priorities will be to further build SUSE’s public cloud business and to expand its next-generation product offerings in order to strengthen SUSE as a leading provider commercializing open source for enterprise customers.

“Today is an exciting day in SUSE’s history. By partnering with EQT, we will become a fully independent business,” said Nils Brauckmann, CEO of SUSE. “The next chapter in SUSE’s development will continue, and even accelerate, the momentum generated over the last years. Together with EQT, we will benefit both from further investment opportunities and having the continuity of a leadership team focused on securing long-term profitable growth combined with a sharp focus on customer and partner success. The current leadership team has managed SUSE through a period of significant growth and now, with continued investment in technology innovation and go to market capability, will further develop SUSE’s momentum going forward.”

Johannes Reichel, Partner at EQT Partners and Investment Advisor to EQT VIII, adds: “We are excited to partner with SUSE’s management in this attractive growth investment opportunity. We were impressed by the business’ strong performance over the last years as well as by its strong culture and heritage as a pioneer in the open source space. These characteristics correspond well to EQT’s DNA of supporting and building strong and resilient companies, and driving growth. We look forward to entering the next period of growth and innovation together with SUSE.”

The transaction is subject to approval from Micro Focus shareholders and other relevant authorities.

Jefferies acted as lead financial advisor and Arma Partners acted as financial advisor to EQT VIII. Milbank, Tweed, Hadley & McCloy LLP and Latham & Watkins LLP acted as legal advisors to EQT VIII.

Contacts
Johannes Reichel, Partner at EQT Partners, Investment Advisor to EQT VIII, +49 89 255 49 904
EQT Press office, +46 8 506 55 334

About SUSE
SUSE, a pioneer in open source software, provides reliable, software-defined infrastructure and application delivery solutions that give enterprises greater control and flexibility. More than 25 years of engineering excellence, exceptional service and an unrivaled partner ecosystem power the products and support that help our customers manage complexity, reduce cost, and confidently deliver mission-critical services. The lasting relationships we build allow us to adapt and deliver the smarter innovation they need to succeed – today and tomorrow.

More info: www.suse.com

About EQT
EQT is a leading investment firm with approximately EUR 50 billion in raised capital across 27 funds. EQT funds have portfolio companies in Europe, Asia and the US with total sales of more than EUR 19 billion and approximately 110,000 employees. EQT works with portfolio companies to achieve sustainable growth, operational excellence and market leadership.

More info: www.eqtpartners.com 

Milestone: Tesla makes 5000 Model 3s in a week


According to various sources, Elon Musk wrote to employees at Tesla Inc (TSLA) that the company made 5,000 Model 3 sedans last week. Just hours after CEO Elon Musk's midnight deadline, the 5,000th vehicle finished final checks at the company's Fremont, California factory at about five in the morning Pacific Time on Sunday, July 1.

However, it remains to be seen whether Tesla can maintain similar levels of production over a more extended period.

According to a Reuters report, Musk wrote in the letter, “I think we just became a real car company.” The mail also highlighted that the Model 3 milestone came as the company also achieved a combined weekly production goal of 7,000 vehicles across the Model 3, Model S, and Model X.

‘I think we just became a real car company.’ – Tesla CEO Elon Musk

This news suggests that Musk's company beat Goldman Sachs' estimates. The bank was not impressed with Tesla's performance and had earlier reaffirmed its SELL rating on the stock, stating that it expected Model 3 deliveries to fall short of street expectations once again.

 

The Tesla Model S (Courtesy – Wikimedia Commons)

Tesla CIO Gary Clark is out

Last week, a major shake-up at Tesla saw Chief Information Officer Gary Clark leaving the company. Tesla shares were up 1.45% in premarket trading on Friday, following the announcement.

A month ago, CEO Elon Musk announced a 9% workforce reduction to turn the company's results from a loss to a profit. The job cut, which affected about 9% of the company's roughly 46,000 employees, is part of a major reshuffle, including a flattening of the management structure.

Tesla's Employee Count and Model S, Model X, Model 3 delivery date

Model 3 is now cheaper

Earlier last week, Tesla unveiled the Model 3 pricing. The high-end Performance variant, which hits 60 mph in just 3.5 seconds with a maximum speed of 155 mph, is priced at $78,000. A variant of the same, without any extra options such as white interiors, aluminum pedals, carbon fiber spoiler and 20-inch rims, costs $64,000.

The base variant of the Model 3 costs $49,000. The cost of the optional second motor and AWD has been slashed from $5,000 to $4,000. With these customizations, the standard model would cost $53,000. Tesla will reportedly apply the new prices retroactively to orders that have already been placed.

Tesla Q1 2018 Earnings Infographic

Ask HN: Looking for a simple solution for building an online course

I want to build an online course on graph algorithms for my university. I've tried to find a solution that would let students submit code and have it executed and tested (i.e., an online judge), but have had no success. There are a lot of complex LMSes and none of them seem to have this feature as basic functionality.

Are there any good out-of-the-box solutions? I'm sure I could build a course using Moodle or another popular LMS with some plugin, but I don't want to spend my time customizing things.

I'm interested both in platforms and self-hosted solutions. Thanks!



How to design UI/UX for your Smart TV app?


There is a forgotten ad made by LG Electronics back in 2014. It throws out a few interesting facts right from the start. Fact #1: 53% of people are unaware of how much they can do with a Smart TV. Fact #2: 75% of people think Smart TVs are too complicated. Obviously, the solution proposed in the ad is better UX/UI from the Korean tech giant.

Unfortunately, LG failed to deliver better usability, and its TV sales plummeted over the following two years. Frankly speaking, Samsung and Sony haven’t cracked smart TV usability either.


One remote to rule them all

The industry leader, Samsung, has invested heavily in smart remote technology. But all it has produced is a universal remote with a touch button and poor voice control.

  1. How to design UI/UX for your Smart TV app?
  2. Tips on UI Design for various Screen Sizes
  3. Tips on UX Design for Smart TV
  4. Smart TV App Control and Input
  5. Bottom Line

How to design UI/UX for your Smart TV app?

Although some TV manufacturers have published design guidelines for their Smart TV systems, those documents either cover only basic topics or date back to 2015.

It must be acknowledged that our design team hasn’t found much help in these manuals, although there are some “tactical” tips worth mentioning: fonts, grid dimensions and safe areas, color samples, and app icon specs.

Tips on UI Design for various Screen Sizes

Take a closer look at the numbers behind screen resolutions. There is a distinct pattern that holds across all popular resolutions, from smartphones’ tiny screens to 4K TVs: the majority of screen resolutions are divisible by eight. The one unfortunate exception to this rule is the iPhone 6 screen resolution, which is 375 x 667.

The majority of screen resolutions are divisible by 8

  • Given this, it’s good practice to build your layout on an 8-pixel grid and size all icons as multiples of that number (a minimal sketch follows this list).
  • Also, keep in mind that different screens have different active zones where cursor navigation is possible. That is why all manufacturers advise setting a safe zone (at least 20-24 px) around the edge of the screen, to make sure your app is 100% visible.
  • In addition, don’t hesitate to check your color palette against different TVs, because color rendition may vary from screen to screen.
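
To make the grid and safe-zone tips concrete, here is a minimal TypeScript sketch; the function names and the 24 px margin are our own illustrative choices, not part of any manufacturer’s guidelines.

// Snap any dimension to the nearest multiple of the 8-pixel base grid.
const GRID = 8;

function snapToGrid(px: number): number {
  return Math.round(px / GRID) * GRID;
}

// Compute the "safe zone" rectangle for a screen, using the 20-24 px
// margin manufacturers recommend (24 px chosen here as an assumption).
interface Rect { x: number; y: number; width: number; height: number; }

function safeZone(width: number, height: number, margin = 24): Rect {
  return { x: margin, y: margin, width: width - 2 * margin, height: height - 2 * margin };
}

console.log(snapToGrid(21));       // 24 -- icon sizes land on the grid
console.log(safeZone(1920, 1080)); // { x: 24, y: 24, width: 1872, height: 1032 }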


Tips on UX Design for Smart TV

On top of that, it’s the Wild West in everything concerning usability and UX for TVs, partly because there are too many intricacies and problems tied to the development process itself.

  • Don’t make the user journey more than three levels deep.
  • Use one-vector navigation where possible, for example a horizontal user journey where all folders and subfolders open only to the right (a minimal sketch follows this list).
  • Always indicate navigation symbols like “back”, “home”, “cancel” and “select”, unless you want users to hit the “exit” button every time they get stuck.
  • The testing environments are too slow and buggy; don’t use them to run app performance tests.
  • It’s preferable to test on the device itself; otherwise you risk missing critical latencies that occur while using the remote control.
  • If you’re building a web app, don’t rely on libraries like jQuery or fancy frameworks like AngularJS, Ember.js, Backbone, etc. Most of their features won’t be supported by Smart TVs, and those libraries can greatly slow down the app (one benchmark showed jQuery performing some six million times worse than a simple script).
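
To illustrate the one-vector navigation tip above, here is a minimal TypeScript sketch for a browser-based TV app; it assumes a flat row of elements with the class "tile", which is our own naming.

// One-vector (horizontal-only) focus navigation across a row of tiles.
const tiles = Array.from(document.querySelectorAll<HTMLElement>('.tile'));
let focused = 0;

function render(): void {
  // Highlight exactly one tile at a time via a CSS class.
  tiles.forEach((t, i) => t.classList.toggle('focused', i === focused));
}

document.addEventListener('keydown', (e: KeyboardEvent) => {
  if (e.key === 'ArrowRight') {
    focused = Math.min(focused + 1, tiles.length - 1); // move right along the single axis
  } else if (e.key === 'ArrowLeft') {
    focused = Math.max(focused - 1, 0);                // or back to the left
  } else if (e.key === 'Enter') {
    tiles[focused].click();                            // acts as "select"
  }
  render();
});

render();

Because the journey runs along a single axis, users can never get lost between rows and columns: “back” is always the opposite arrow.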


No one will take care of your Smart TV app UX except you!

Smart TV App Control and Input


Finally, let’s talk about how you want users to interact with your application. There are at least four types of Smart TV input options:

  • Gestures.
  • Voice Control.
  • Standard remote control.
  • Bluetooth keyboard/mouse.

Gestures and Bluetooth-connected devices are used by only about 1% of users. Voice control is still not mature enough and isn’t used much either; it is also not an option if you’re building a cross-platform app.

The hard part about remote controls is that you’ll end up with hundreds of devices, each with unique key codes. For example, Panasonic, LG and Samsung all have different key codes for the number “1” button.

In that case, you should develop a key map covering all supported keys and devices. This map acts as a “translator”, converting the signal from the remote control into an app-level action based on the device the app is currently running on.
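
As a sketch of that idea, here is a minimal TypeScript key map; every numeric key code below is a placeholder we invented for illustration, and real values must come from each manufacturer’s documentation.

// Translate device-specific remote key codes into app-level actions.
type AppAction = 'DIGIT_1' | 'OK' | 'BACK';

// Placeholder codes only -- look up the real ones per platform.
const KEY_MAPS: Record<string, Record<number, AppAction>> = {
  samsung:   { 101: 'DIGIT_1', 29443: 'OK',  88: 'BACK' },
  lg:        {  49: 'DIGIT_1',    13: 'OK', 461: 'BACK' },
  panasonic: {  49: 'DIGIT_1',    13: 'OK', 166: 'BACK' },
};

function translate(device: string, keyCode: number): AppAction | undefined {
  return KEY_MAPS[device]?.[keyCode];
}

// Usage: detect the platform once at startup, then route every keydown.
const device = 'lg'; // placeholder for real platform detection
document.addEventListener('keydown', (e: KeyboardEvent) => {
  const action = translate(device, e.keyCode);
  if (action) console.log(`Remote key ${e.keyCode} -> ${action}`);
});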

Alternatively, you can develop or integrate a ready-made smartphone application that works in place of the remote control, so users can control their TVs with their phones.


That said, a dedicated smartphone remote control app is only worth building if you have use cases like these:

  1. Your app relies heavily on text input or a search field.
  2. You need a seamless cross-device experience.
  3. You want to deliver a gaming-like experience.

If you don’t intend to use your app in one of these ways, don’t bother building a dedicated mobile remote control interface; your app will be just fine with the standard remote control. There are also third-party solutions you can integrate later instead of reinventing the wheel.

Bottom Line

Smart TVs, as well as 4K TVs, are gradually replacing older screens and gaining a larger user base, and building mobile or web apps that can be used through a TV is no longer an exotic undertaking. But it does require close attention to user interface and user experience when developing a Smart TV app.

The design tips above are just the tip of the iceberg. They are meant to emphasize the importance of design and of a comprehensive approach to developing interfaces and applications. Or you can simply leave your request with us and let us handle the details!

Tony Sol is the business development manager of GBKSOFT, overseeing the production of all writing for both the internal blog and external platforms. He is a technically driven person, always looking for new benefits from merging business and software.

How to Unshorten Short Urls


Several months ago I created a site called LinkUnshorten with the goal of guarding people against short links from sites like goo.gl, bit.ly, tinyurl, etc. URL shorteners are great, but they carry the risk of redirecting you to a site you may not want to visit. LinkUnshorten solves this by expanding short links to show you the final redirect URL. I also made a Chrome extension that detects short links.

LinkUnshorten is written in Laravel, a PHP framework, and uses cURL to follow any URL to its final destination. Here is a code snippet showing how the site finds the final URL.

$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, 0);            // don't include headers in the output
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);    // return the response instead of printing it
curl_setopt($ch, CURLOPT_URL, $SHORT_URL_HERE); // the short URL to expand
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);    // follow every "Location:" redirect
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);    // note: disabling SSL verification is risky
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);

// The request has to run before cURL can report the effective URL.
curl_exec($ch);

$redirectUrl = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
curl_close($ch);

The key here is CURLOPT_FOLLOWLOCATION, which tells cURL to follow any “Location:” header the server sends as part of the HTTP response. So even if the short URL chains through multiple URL shorteners, cURL will keep following redirects until it reaches the final destination URL. Note that curl_exec() must run before curl_getinfo() can report the effective URL.
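
For readers outside the PHP world, the same idea can be sketched in TypeScript on Node 18+, whose built-in fetch follows redirects by default and exposes the final destination on response.url; this is our own minimal sketch, not LinkUnshorten’s actual code.

// Expand a short URL by following every redirect to the final destination.
async function unshorten(shortUrl: string): Promise<string> {
  // fetch follows "Location:" redirects by default (redirect: 'follow'),
  // so response.url holds the effective, final URL.
  const response = await fetch(shortUrl, { redirect: 'follow' });
  return response.url;
}

unshorten('https://bit.ly/example')
  .then((finalUrl) => console.log('Final destination:', finalUrl))
  .catch((err) => console.error('Could not expand:', err));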

Please leave a comment below if you have any questions or feedback.
