
Why Is the Human Brain So Efficient? Massive Parallelism

The brain is complex; in humans it consists of about 100 billion neurons, making on the order of 100 trillion connections. It is often compared with another complex system that has enormous problem-solving power: the digital computer. Both the brain and the computer contain a large number of elementary units—neurons and transistors, respectively—that are wired into complex circuits to process information conveyed by electrical signals. At a global level, the architectures of the brain and the computer resemble each other, consisting of largely separate circuits for input, output, central processing, and memory.1

Which has more problem-solving power—the brain or the computer? Given the rapid advances in computer technology in the past decades, you might think that the computer has the edge. Indeed, computers have been built and programmed to defeat human masters in complex games, such as chess in the 1990s and recently Go, as well as encyclopedic knowledge contests, such as the TV show Jeopardy! As of this writing, however, humans triumph over computers in numerous real-world tasks—ranging from identifying a bicycle or a particular pedestrian on a crowded city street to reaching for a cup of tea and moving it smoothly to one’s lips—let alone conceptualization and creativity.

So why is the computer good at certain tasks whereas the brain is better at others? Comparing the computer and the brain has been instructive to both computer engineers and neuroscientists. This comparison started at the dawn of the modern computer era, in a small but profound book entitled The Computer and the Brain, by John von Neumann, a polymath who in the 1940s pioneered the design of a computer architecture that is still the basis of most modern computers today.2 Let’s look at some of these comparisons in numbers (Table 1).

The computer has huge advantages over the brain in the speed of basic operations.3 Personal computers nowadays can perform elementary arithmetic operations, such as addition, at a speed of 10 billion operations per second. We can estimate the speed of elementary operations in the brain by the elementary processes through which neurons transmit information and communicate with each other. For example, neurons “fire” action potentials—spikes of electrical signals initiated near the neuronal cell bodies and transmitted down their long extensions called axons, which link with their downstream partner neurons. Information is encoded in the frequency and timing of these spikes. The highest frequency of neuronal firing is about 1,000 spikes per second. As another example, neurons transmit information to their partner neurons mostly by releasing chemical neurotransmitters at specialized structures at axon terminals called synapses, and their partner neurons convert the binding of neurotransmitters back to electrical signals in a process called synaptic transmission. The fastest synaptic transmission takes about 1 millisecond. Thus both in terms of spikes and synaptic transmission, the brain can perform at most about a thousand basic operations per second, or 10 million times slower than the computer.4

The computer also has huge advantages over the brain in the precision of basic operations. The computer can represent quantities (numbers) with any desired precision according to the bits (binary digits, or 0s and 1s) assigned to each number. For instance, a 32-bit number has a precision of 1 in 2^32, or about 1 in 4.2 billion. Empirical evidence suggests that most quantities in the nervous system (for instance, the firing frequency of neurons, which is often used to represent the intensity of stimuli) have variability of a few percent due to biological noise, or a precision of 1 in 100 at best, which is millions of times worse than a computer.5
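
These two gaps are easy to sanity-check. The sketch below simply plugs in the figures quoted in the text (they are the article's round numbers, not measurements) and reproduces the ten-million-fold speed gap and the precision gap:

```python
# Back-of-the-envelope check of the figures quoted above (round numbers, not measurements).
computer_ops_per_s = 10e9      # ~10 billion elementary operations per second
brain_ops_per_s = 1e3          # ~1,000 spikes / synaptic events per second

print(f"speed gap: ~{computer_ops_per_s / brain_ops_per_s:.0e}x")      # ~1e7, i.e. 10 million

bits = 32
computer_precision = 1 / 2**bits   # 1 part in ~4.29 billion
brain_precision = 1 / 100          # a few percent of biological noise at best
print(f"32-bit resolution: 1 in {2**bits:,}")
print(f"precision gap: ~{brain_precision / computer_precision:.0e}x")  # tens of millions
```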

The calculations performed by the brain, however, are neither slow nor imprecise. For example, a professional tennis player can follow the trajectory of a tennis ball after it is served at a speed as high as 160 miles per hour, move to the optimal spot on the court, position his or her arm, and swing the racket to return the ball in the opponent’s court, all within a few hundred milliseconds. Moreover, the brain can accomplish all these tasks (with the help of the body it controls) while consuming about one-tenth the power of a personal computer. How does the brain achieve that? An important difference between the computer and the brain is the mode by which information is processed within each system. Computer tasks are performed largely in serial steps. This is evident in the way engineers program computers: by creating a sequential flow of instructions. For this sequential cascade of operations, high precision is necessary at each step, as errors accumulate and amplify in successive steps. The brain also uses serial steps for information processing. In the tennis return example, information flows from the eye to the brain and then to the spinal cord to control muscle contraction in the legs, trunk, arms, and wrist.

But the brain also employs massively parallel processing, taking advantage of the large number of neurons and large number of connections each neuron makes. For instance, the moving tennis ball activates many cells in the retina called photoreceptors, whose job is to convert light into electrical signals. These signals are then transmitted to many different kinds of neurons in the retina in parallel. By the time signals originating in the photoreceptor cells have passed through two to three synaptic connections in the retina, information regarding the location, direction, and speed of the ball has been extracted by parallel neuronal circuits and is transmitted in parallel to the brain. Likewise, the motor cortex (part of the cerebral cortex that is responsible for volitional motor control) sends commands in parallel to control muscle contraction in the legs, the trunk, the arms, and the wrist, such that the body and the arms are simultaneously well positioned to receive the incoming ball.

This massively parallel strategy is possible because each neuron collects inputs from and sends output to many other neurons—on the order of 1,000 on average for both input and output for a mammalian neuron. (By contrast, each transistor has only three nodes for input and output altogether.) Information from a single neuron can be delivered to many parallel downstream pathways. At the same time, many neurons that process the same information can pool their inputs to the same downstream neuron. This latter property is particularly useful for enhancing the precision of information processing. For example, information represented by an individual neuron may be noisy (say, with a precision of 1 in 100). By taking the average of input from 100 neurons carrying the same information, the common downstream partner neuron can represent the information with much higher precision (about 1 in 1,000 in this case).6
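
A minimal numerical sketch of that pooling argument: the 1-in-100 single-neuron noise and the pool of 100 neurons are the numbers used in the text and in footnote 6; everything else is an arbitrary toy choice.

```python
import numpy as np

# Averaging n noisy inputs that carry the same signal shrinks the noise roughly
# as 1/sqrt(n). Numbers match the text (1% noise per neuron, 100 neurons pooled).
rng = np.random.default_rng(0)
signal, noise_sd, n_neurons, n_trials = 1.0, 0.01, 100, 10_000

single = signal + rng.normal(0, noise_sd, size=n_trials)
pooled = signal + rng.normal(0, noise_sd, size=(n_trials, n_neurons)).mean(axis=1)

print(f"single neuron std:  {single.std():.4f}")   # ~0.010  (1 in 100)
print(f"100-neuron average: {pooled.std():.4f}")   # ~0.001  (1 in 1,000)
```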

The computer and the brain also have similarities and differences in the signaling mode of their elementary units. The transistor employs digital signaling, which uses discrete values (0s and 1s) to represent information. The spike in neuronal axons is also a digital signal since the neuron either fires or does not fire a spike at any given time, and when it fires, all spikes are approximately the same size and shape; this property contributes to reliable long-distance spike propagation. However, neurons also utilize analog signaling, which uses continuous values to represent information. Some neurons (like most neurons in our retina) are nonspiking, and their output is transmitted by graded electrical signals (which, unlike spikes, can vary continuously in size) that can transmit more information than can spikes. The receiving end of neurons (reception typically occurs in the dendrites) also uses analog signaling to integrate up to thousands of inputs, enabling the dendrites to perform complex computations.7
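
As a deliberately crude caricature of this analog-in, digital-out arrangement (the function, numbers, and threshold below are invented for illustration, not a model from the text): graded inputs are summed and subtracted in the "dendrite," and the output reports only a binary spike/no-spike decision.

```python
def toy_neuron(excitatory, inhibitory, threshold=1.0):
    """Toy analog-to-digital neuron: graded inputs in, all-or-none spike out."""
    dendritic_potential = sum(excitatory) - sum(inhibitory)  # analog summation/subtraction
    return dendritic_potential > threshold                   # digital spike decision

# Near-synchronous excitatory inputs sum past threshold -> spike
print(toy_neuron(excitatory=[0.5, 0.5, 0.5], inhibitory=[0.2]))   # True
# The same excitation with stronger inhibition stays below threshold -> no spike
print(toy_neuron(excitatory=[0.5, 0.5, 0.5], inhibitory=[0.6]))   # False
```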

Another salient property of the brain, which is clearly at play in the return of service example from tennis, is that the connection strengths between neurons can be modified in response to activity and experience—a process that is widely believed by neuroscientists to be the basis for learning and memory. Repetitive training enables the neuronal circuits to become better configured for the tasks being performed, resulting in greatly improved speed and precision.
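
A minimal sketch of such use-dependent strengthening. The rule below is a generic Hebbian update chosen for illustration; the article does not commit to any particular plasticity rule, and all numbers are arbitrary.

```python
import numpy as np

# Generic Hebbian sketch: synapses whose input is repeatedly active together with
# the downstream neuron's output grow stronger over "training".
rng = np.random.default_rng(2)
w = np.zeros(5)                                       # synaptic weights onto one downstream neuron
firing_prob = np.array([0.9, 0.9, 0.1, 0.1, 0.1])     # inputs 0 and 1 are co-active often

for _ in range(200):                                  # "repetitive training"
    pre = (rng.random(5) < firing_prob).astype(float)
    post = float(pre.sum() > 1.5)                     # downstream neuron fires when enough inputs coincide
    w += 0.01 * pre * post                            # strengthen synapses active together with the output

print(np.round(w, 2))                                 # weights from the frequently co-active inputs dominate
```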

Over the past decades, engineers have taken inspiration from the brain to improve computer design. The principles of parallel processing and use-dependent modification of connection strength have both been incorporated into modern computers. For example, increased parallelism, such as the use of multiple processors (cores) in a single computer, is a current trend in computer design. As another example, “deep learning” in the discipline of machine learning and artificial intelligence, which has enjoyed great success in recent years and accounts for rapid advances in object and speech recognition in computers and mobile devices, was inspired by findings about the mammalian visual system.8 As in the mammalian visual system, deep learning employs multiple layers to represent increasingly abstract features (e.g., of a visual object or of speech), and the weights of connections between different layers are adjusted through learning rather than designed by engineers. These recent advances have expanded the repertoire of tasks the computer is capable of performing. Still, the brain has greater flexibility, generalizability, and learning capability than the state-of-the-art computer. As neuroscientists uncover more secrets about the brain (increasingly aided by the use of computers), engineers can take more inspiration from the workings of the brain to further improve the architecture and performance of computers. Whichever emerges as the winner for particular tasks, these interdisciplinary cross-fertilizations will undoubtedly advance both neuroscience and computer engineering.
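
A minimal sketch of those two ideas (multiple layers of learned features, with weights set by training rather than by hand). This is a generic toy network learning XOR, not any of the systems cited in the text; the architecture and numbers are arbitrary choices.

```python
import numpy as np

# Tiny two-layer network trained by gradient descent: layered features plus
# weights adjusted through learning rather than designed by hand.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)         # input  -> hidden layer
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)         # hidden -> output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)        # hidden-layer "features"
    out = sigmoid(h @ W2 + b2)      # network prediction
    # Backpropagate the squared error and nudge every weight downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(np.round(out.ravel(), 2))     # typically approaches [0, 1, 1, 0]
```

After training, the hidden layer has in effect invented intermediate features that make the problem separable, which is the sense in which the weights are "learned rather than designed."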

Liqun Luo is a professor in the School of Humanities and Sciences, and professor, by courtesy, of neurobiology, at Stanford University.

The author wishes to thank Ethan Richman and Jing Xiong for critiques and David Linden for expert editing.

By Liqun Luo, as published in Think Tank: Forty Scientists Explore the Biological Roots of Human Experience, edited by David J. Linden, and published by Yale University Press.

Footnotes

1. This essay was adapted from a section in the introductory chapter of Luo, L. Principles of Neurobiology (Garland Science, New York, NY, 2015), with permission.

2. von Neumann, J. The Computer and the Brain (Yale University Press, New Haven, CT, 2012), 3rd ed.

3. Patterson, D.A. & Hennessy, J.L. Computer Organization and Design (Elsevier, Amsterdam, 2012), 4th ed.

4. The assumption here is that arithmetic operations must convert inputs into outputs, so the speed is limited by basic operations of neuronal communication such as action potentials and synaptic transmission. There are exceptions to these limitations. For example, nonspiking neurons with electrical synapses (connections between neurons without the use of chemical neurotransmitters) can in principle transmit information faster than the approximately one millisecond limit; so can events occurring locally in dendrites.

5. Noise can reflect the fact that many neurobiological processes, such as neurotransmitter release, are probabilistic. For example, the same neuron may not produce identical spike patterns in response to identical stimuli in repeated trials.

6. Suppose that the standard deviation (σ) of each input approximates its noise (it reflects how wide the distribution is, in the same units as the mean). For the average of n independent inputs, the expected standard deviation of the mean is σ_mean = σ/√n. In our example, σ = 0.01 and n = 100; thus σ_mean = 0.001.
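
For completeness, the standard-error calculation behind this footnote, written out under the stated assumption of independent inputs:

```latex
\operatorname{Var}\!\Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} x_i\Big)
  = \frac{1}{n^{2}}\sum_{i=1}^{n}\operatorname{Var}(x_i)
  = \frac{\sigma^{2}}{n}
  \;\;\Longrightarrow\;\;
  \sigma_{\mathrm{mean}} = \frac{\sigma}{\sqrt{n}} = \frac{0.01}{\sqrt{100}} = 0.001 .
```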

7. For example, dendrites can act as coincidence detectors to sum near synchronous excitatory input from many different upstream neurons. They can also subtract inhibitory input from excitatory input. The presence of voltage-gated ion channels in certain dendrites enables them to exhibit “nonlinear” properties, such as amplification of electrical signals beyond simple addition.

8. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

Lead Art Credits: Photo 12 / Contributor / Getty Images; Wikipedia


To #DeleteFacebook or Not to #DeleteFacebook? That Is Not the Question

Since the Cambridge Analytica news hit headlines, calls for users to ditch the platform have picked up speed. Whether or not it has a critical impact on the company’s user base or bottom line, the message from #DeleteFacebook is clear: users are fed up.

EFF is not here to tell you whether or not to delete Facebook or any other platform. We are here to hold Facebook accountable no matter who’s using it, and to push it and other tech companies to do better for users.

The problems that Facebook’s Cambridge Analytica scandal highlight—sweeping data collection, indiscriminate sharing of that data, and manipulative advertising—are also problems with much of the surveillance-based, advertising-powered popular web. And there are no shortcuts to solving those problems.

Users should have better options when they decide where to spend their time and attention online. So rather than asking if people should delete Facebook, we are asking: What privacy protections should users have a right to expect, whether they decide to leave or use a platform like Facebook?

If it makes sense for you to delete Facebook or any other account, then you should have full control over deleting your data from the platform and bringing it with you to another. If you stay on Facebook, then you should be able to expect it to respect your privacy rights.

To Leave

As a social media user, you should have the right to leave a platform that you are not satisfied with. That means you should have the right to delete your information and your entire account. And we mean really delete: not just disabling access, but permanently eliminating your information and account from the service’s servers.

Furthermore, if users decide to leave a platform, they should be able to easily, efficiently, and freely take their uploaded information away and move it to a different one in a usable format. This concept, known as "data portability" or "data liberation," is fundamental to promote competition and ensure that users maintain control over their information even if they sever their relationship with a particular service.

Of course, for this right to be effective, it must be coupled with informed consent and user control, so unscrupulous companies can’t exploit data portability to mislead you and then grab your data for unsavory purposes.

Not To Leave

Deleting Facebook is not a choice that most of its 2 billion users can feasibly make. It’s also not a choice that everyone wants to make, and that’s okay too. Everyone deserves privacy, whether they delete Facebook or stay on it (or never used it in the first place!).

For many, the platform is the only way to stay in touch with friends, family, and businesses. It’s sometimes the only way to do business, reach out to customers, and practice a profession that requires access to a large online audience. Facebook also hosts countless communities and interest groups that are simply not available in many users’ cities and areas. Without viable general alternatives, Facebook’s massive user base and associated network effects mean that the costs of leaving it may not outweigh the benefits.

In addition to the right to leave described above, any responsible social media company should ensure users’ privacy rights: the right to informed decision-making, the right to control one’s information, the right to notice, and the right of redress.

Facebook and other companies must respect user privacy by default and by design. If you want to use a platform or service that you enjoy and that adds value to your life, you shouldn't have to leave your privacy rights at the door.

  • Interns with toasters: how I taught people about load balancers

    Several years ago, I wrote a description of a problem that could happen in a large load-balanced environment. When it landed on certain web sites, some of the commenters dismissed either it ("impossible") or me ("doesn't know anything about load balancing"). Those are both wrong. I suspect what happened is that they didn't understand the problem, and so resorted to ineffective means to steer the attention away from their own inadequacies.

    I've since come up with another way to tell the story of what happened. It was refined over many years of teaching new employees about outages. I will now attempt to present a sanitized version of it here.

    One day, someone decided to chase down a weird pattern on a graph. There was a strange "quadruple-hump" figure in the pattern of failed requests to the site. First, they narrowed it down to a specific region, then a specific group of machines. Within that group, they tried to narrow it down more, and attempted to select it out by the load balancer reporting the problem. It didn't help: all of them were reporting it equally.

    Next, they grouped the graph by the web servers which reported the failed requests. This also did not expose the quadruple-hump pattern which was desired, but there was another interesting thing which was then exposed. One of the web servers was reporting perhaps 200 times as many failed requests as every other one of its friends in that location.

    The question I used to ask the class is: how can this happen? How can a big sophisticated operation have a single machine out of many become an outlier in this regard? What's going on?

    It was at this point I asked them to roll with me, and imagine a scenario I would then describe. First, let's say that we get 100 interns to test this, because interns are great and you can use them to test anything. You tell them to report to a room much like the classroom, and have them line up and await further instructions.

    Upon entering the room, they would find 100 small tables I had previously procured. Atop each table was a toaster, since I had gone out and bought every toaster I could find, by hitting Target, Walmart, CVS, and any other place that sells small appliances. Finally, I had the facilities team bring in a whole bunch of extra power feeds, because we were going to need a lot. We wanted to run all 100 toasters at the same time. That's a lot of juice!

    So the interns come in, find their stations, and are waiting to find out what happens. They're chattering about what this could be for. That's the point when I enter the room carrying a massive bag of bread. I'm talking about something comically sized, like an oversized beach ball, and it's packed with slices of bread.

    I go up to the first intern, and hand them two of the slices. They put them in their toaster, push it down, wait a minute or two, and ... hey, toast. (What did you think would happen at this point in the story? It's just a toaster. This isn't Harry Potter.)

    Meanwhile, naturally, I had gone on to the second intern, who got their bread, and put it in their toaster, and the third, and the fourth, and the fifth, and so on down the line until everyone had bread.

    At various points, a toaster would finish and would pop up. Toasters are all slightly different, so they wouldn't take the same amount of time. I'd notice that they were done and would run over to give them more toast. Oh, you're done! Have some more. Oh, you too! Here you go. Ready for more? Yep, here it is. Over here now. Gotcha. Oh and there? Okay!

    This is how the load balancers work. The whole time (oh, you're done, have some more), they are constantly looking to see who's not busy (two more for you, got it) and then send more traffic that way. I'm the human load balancer in this thought experiment, handing bread to my friends the interns.

    Trouble is, one of the interns wasn't playing by the rules.

    Instead, they'd take the bread from me, and they'd look at it. "That's some nice bread", they'd think. They'd continue, "it would look even better over there", and they'd throw it away without even putting it anywhere near the toaster. Of course, I didn't notice this. I wasn't even watching for it.

    But hey, I'd see they were ready for more, so I'd give them two more slices of bread. They'd take them, size them up again, and *foop* throw them away. The pile behind them grew.

    This would go on over and over and over. It would go on forever if nothing stopped us.

    This is what happened when one bad web server decided it was going to fail all of its requests, and would do so while incurring the absolute minimum amount of load on itself. Maybe it never even got the request anywhere near the normal processing guts. It became a failure machine, turning out those HTTP 500s just as fast as it was handed requests.

    Because it never got backlogged or busy, and was always available for more work, the load balancers just kept sending traffic that way. In so doing, one machine out of a very large group managed to scarf down a substantial fraction of the traffic bound for that entire area.
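
    For readers who prefer code to toast, here is a minimal event-driven simulation of that failure mode. It is a sketch, not any real balancer: the policy is simply "hand the next request to whichever server just became free," one server out of a hundred fails every request almost instantly, and every number is made up for illustration.

```python
import heapq
import random
from collections import Counter

# Toy version of the story: a balancer keeps re-feeding whichever server frees up
# first. One of 100 servers "fails fast" (~1 ms error) instead of ~100 ms of work.
random.seed(0)

NUM_SERVERS, BAD_SERVER = 100, 7
SIM_SECONDS = 60.0
FAIL_FAST_LATENCY = 0.001                                # instant HTTP 500

def latency(server):
    if server == BAD_SERVER:
        return FAIL_FAST_LATENCY
    return random.uniform(0.08, 0.12)                    # seconds of real work per request

# Heap of (time this server finishes its current request, server id). Seed every
# server with one request at t=0, then keep handing bread to whoever pops up first.
events = [(latency(s), s) for s in range(NUM_SERVERS)]
heapq.heapify(events)
served = Counter()

while events:
    finish, server = heapq.heappop(events)
    if finish > SIM_SECONDS:
        continue                                         # past the window; stop feeding this server
    served[server] += 1                                  # one request completed (or instantly failed)
    heapq.heappush(events, (finish + latency(server), server))

total = sum(served.values())
print(f"bad server: {served[BAD_SERVER]} requests ({served[BAD_SERVER] / total:.0%} of all traffic)")
print(f"a typical healthy server: {served[0]} requests")
```

    With these made-up numbers, the fail-fast machine ends up answering roughly half of all requests on its own, which is the one-intern-eating-most-of-the-bread effect in miniature.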

    I told my students the story this way because I wanted them to take it with them through their career, no matter where they ended up. Some day, if they see some problem with one widget out of thousands or millions "capturing" all of the traffic, maybe they'll remember me and my little story about interns and toasters.

    Of course, when I taught that class, I was there in person, physically moving around to denote the size of the line of interns, picking spots in the audience to be intern #1, #2, #3, and so on, all the while "interrupting" myself to say "oh you're done, have another". (I did this above with parentheticals. Did you catch it?) When I got to the point about the one intern not playing by the rules, that's when I'd break out some props and start flinging pens or markers over my shoulder, making sure they audibly caromed off something (like the wall). It worked great. Students loved it.

    I do miss giving those classes. But hey, there's a way forward from here. Maybe I'll start doing this stuff in the real world, bringing these lessons to life on stage somehow. Would people want to see that?

    “Drupalgeddon2” touches off arms race to mass-exploit powerful Web servers

    Attackers are mass-exploiting a recently fixed vulnerability in the Drupal content management system that allows them to take complete control of powerful website servers, researchers from multiple security companies are warning.

    At least three different attack groups are exploiting "Drupalgeddon2," the name given to an extremely critical vulnerability Drupal maintainers patched in late March, researchers with Netlab 360 said Friday. Formally indexed as CVE-2018-7600, Drupalgeddon2 makes it easy for anyone on the Internet to take complete control of vulnerable servers simply by accessing a URL and injecting publicly available exploit code. Exploits allow attackers to run code of their choice without having to have an account of any type on a vulnerable website. The remote-code vulnerability harkens back to a 2014 Drupal vulnerability that also made it easy to commandeer vulnerable servers.

    Drupalgeddon2 "is under active attack, and every Drupal site behind our network is being probed constantly from multiple IP addresses," Daniel Cid, CTO and founder of security firm Sucuri, told Ars. "Anyone that has not patched is hacked already at this point. Since the first public exploit was released, we are seeing this arms race between the criminals as they all try to hack as many sites as they can."

    China-based Netlab 360, meanwhile, said at least three competing attack groups are exploiting the vulnerability. The most active group, Netlab 360 researchers said in a blog post published Friday, is using it to install multiple malicious payloads, including cryptocurrency miners and software for performing distributed denial-of-service attacks on other domains. The group, dubbed Muhstik after a keyword that pops up in its code, relies on 11 separate command-and-control domains and IP addresses, presumably for redundancy in the event one gets taken down.

    Netlab 360 said that the IP addresses that deliver the malicious payloads are widely dispersed and mostly run Drupal, an indication of worm-like behavior that causes infected sites to attack vulnerable sites that have not yet been compromised. Worms are among the most powerful types of malware because their self-propagation gives them viral qualities.

    Adding extra punch, Muhstik is exploiting previously patched vulnerabilities in other server applications in the event administrators have yet to install the fixes. Webdav, WebLogic, Webuzo, and WordPress are some of the other applications that the group is targeting.

    Muhstik has ties to Tsunami, a strain of malware that has been active since 2011 and infected more than 10,000 Unix and Linux servers in 2014. Muhstik has adopted some of the infection techniques seen in recent Internet-of-things botnets. Propagation methods include scanning for vulnerable server apps and probing servers for weak secure-shell, or SSH, passwords.

    The mass exploitation of Drupal servers harkens back to the epidemic of unpatched Windows servers a decade ago, which gave criminal hackers a toehold in millions of PCs. The attackers would then use their widely distributed perches to launch new intrusions. Because website servers typically have much more bandwidth and computing power than PCs, the new rash of server compromises poses a potentially much greater threat to the Internet.

    Drupal maintainers have patched the critical vulnerability in both the 7.x and 8.x version families as well as the 6.x family, which maintainers stopped supporting in 2016. Administrators who have yet to install the patch should assume their systems are compromised and take immediate action to disinfect them.

    The Man Who Brought Down Lance Armstrong

    Updated: April 19, 2018

    At 5:19 p.m. on Friday, April 30, 2010, Floyd Landis hit send on what would prove the most consequential email of his life. Addressed to the then-CEO of USA Cycling, Steve Johnson, the email bore the subject line “nobody is copied on this one so it’s up to you to demonstrate your true colors….” It went on to detail, year by year, how Landis and other members of the United States Postal Service team had used illegal performance-enhancing drugs and methods to dominate the sport of cycling and claim victories at the sport’s premier event, the Tour de France. The email, later included in Landis’s 2012 affidavit for a United States Anti-Doping Agency (USADA) investigation, clearly implicated many of his former teammates—most famously, the seven-time Tour winner Lance Armstrong (who declined to comment for this article).

    It would take more than two years of investigation, but in October 2012, USADA concluded that the U.S. Postal Service team under Armstrong and its manager, Johan Bruyneel, had run “the most sophisticated, professionalized and successful doping program that sport has ever seen.” Armstrong’s longtime sponsor Nike was the first to abandon him, and the rest followed. In one day, he lost seven sponsors and an estimated $75 million. A few days later, the International Cycling Union (UCI), which oversees international competitive cycling, stripped him of his record seven Tour victories. Attempting damage control, Armstrong sat down with Oprah in 2013, in an interview that went terribly awry; he simply could not muster the appropriate level of contrition. (Among other missteps, he made a fat joke.) Since then, he has been forced to sell his Austin mansion and his Gulfstream jet to pay $15 million in legal fees, plus $21 million in settlements.

    But Landis hadn’t stopped with the email to Johnson. Fearing that the Teflon-like Armstrong would emerge from the accusations unscathed, Landis had also filed a whistle-blower lawsuit under the federal False Claims Act, alleging that Armstrong and his team had defrauded the government by taking the U.S. Postal Service sponsorship money while knowingly cheating in races. The federal government joined that lawsuit in 2013; on April 19, Armstrong settled for $5 million. And on the grounds of the whistle-blower suit, Landis will be awarded $1.1 million from that settlement. (Armstrong will also pay $1.65 million to cover Landis’s legal costs.)

    When Landis wrote the 2010 email that turned cycling on its head, he was at a low point. A year after his own 2006 Tour de France victory, Landis had become the first man in the race’s 103-year history to be stripped of his title because of a doping conviction. His days were spent in a haze: He consumed as much as a fifth of Jack Daniel’s and 15 double-strength painkillers daily. (He maintains, however, that he was stone-cold sober when he wrote the email.) His house was foreclosed on, his credit ruined. He and his wife, Amber, divorced. “If you are a human in any way and not a psychopath, it’s painful,” he says. “My whole life was completely upside down, and I was not prepared for any of it.”

    A former millionaire, Landis had spent his entire fortune, and then some, on his legal defense. But he has gotten back on his feet, starting a cannabis business in rural Colorado. (The irony—that a life destroyed by one form of dope may be redeemed by another—is not lost on him.) Landis is steadfast that his whistle-blower suit is about justice being done, rather than his own potential windfall. But the money will come in handy in getting his new business off the ground.

    Landis grew up in Farmersville, Pennsylvania, in a conservative Mennonite family. Like the Amish, some Mennonites avoid modern technology. Though his family had electricity, there was no radio or television to occupy young Landis’s time. So he rode his bike.

    He saved enough money to buy his first real mountain bike at age 15, and promptly won the first race he ever entered with it, wearing sweatpants. In 1993, during his senior year of high school, Landis won the U.S. junior national championship, and his career took off. USA Cycling sent the 17-year-old to France to represent America at the world championship. It was his first time on an airplane. “The trip was fairly traumatic,” he told me. “I should have taken that as a sign.”

    Performance-enhancing drugs have been central to competitive cycling for as long as the sport has existed. Early-20th-century riders in the Tour de France took the dangerous stimulant strychnine and held ether-soaked handkerchiefs to their mouth to dull the pain caused by propelling a bike for thousands of miles.

    Floyd Landis at the peak of his cycling career (Doug Pensinger / Getty)

    But Landis claims never to have used performance-enhancing drugs before meeting Armstrong. He trained obsessively, once riding his bike 24,000 miles in a single year. His first professional contract, for the Mercury team in 1999, was worth $6,000.

    By 2002, Armstrong had already won three Tours and was looking to fortify his U.S. Postal Service team to compete for a fourth. Landis, still a drug-free athlete by his own account, was showing promise; he had recently placed fourth at the Tour de l’Avenir, in France. U.S. Postal signed the 26-year-old for $60,000 a year. But from his first bike ride with Armstrong, Landis said, their relationship was tense: “The guy’s a jerk and everybody knows it, but he was surrounded by yes-men, and they were also terrified of him, so they laughed at his jokes even if they didn’t make sense.” The supporting cast of riders around Armstrong were treated more like replaceable cogs than essential components, easily swapped out for any number of other riders.

    “Once I got to Postal it was like, ‘Look, there are no half measures here,’ and we openly discussed doping pretty much on every bike ride,” Landis said. He claimed in his USADA affidavit that it was Armstrong who handed him his first performance-enhancing drug, a pile of 2.5 milligram testosterone patches. He then participated in the popular but illegal practice of conducting blood transfusions: Cyclists would draw blood in the off-season, bag it, and reinfuse it into their body during races for a boost of oxygen-carrying red blood cells.

    By 2004, the Armstrong universe had become so unpleasant for Landis that he began shopping around for another team. U.S. Postal wanted to keep him, but it was offering far less than he could find elsewhere. As negotiations grew contentious, Landis said, the team had Armstrong call to sweet-talk him. “That lasted about two minutes, then he spent 45 minutes telling me how much he hated me and he was going to destroy me,” Landis said.

    Landis’s resentment festered. During the 2004 Tour de France, while still riding with the U.S. Postal Service team, Landis signed for the next year with the Swiss professional cycling team Phonak. He would finish the Tour helping Armstrong race to a sixth victory in Paris, and when Armstrong retired after his seventh Tour win the following year, amid a swarm of doping allegations, Landis became a favorite to win in 2006.

    And win he did. For four days, Landis would be considered the best cyclist on Earth. Despite a collapse on Stage 16 of the Tour, which left him at a seemingly insurmountable time disadvantage, Landis pulled himself back into contention over the French mountains on Stage 17 in what remains possibly the most spectacular single-day ride in cycling history. At the time trial two days later, he recaptured the lead, and went on to win the Tour—rolling into the Champs-Élysées flanked by his Phonak teammates—by 57 seconds.

    But a few days afterward the team manager called with life-changing news: Landis had failed the drug test he’d taken after that magical Stage 17. Using a method that examined the atomic makeup of the testosterone in his urine, a French laboratory later found that Landis had used synthetic testosterone.

    At his first press conference after the results were announced, he attempted a paltry excuse, blaming the findings on his naturally high testosterone levels. In subsequent interviews he pointed to the two beers and at least four shots of whiskey he’d consumed the night before the stage. Armstrong—who presumably realized that if Landis fell and flipped, he himself could be next—phoned to encourage Landis to be more forceful in his public denials, Landis claims. “He was practiced at this and I wasn’t, so he told me I had to speak with more conviction,” Landis remembered. “It was completely self-serving. Lance hadn’t talked to me in years before that call.” Nonetheless, he doubled down. He mounted a protracted and expensive battle to assert his innocence, even starting an organization, the Floyd Fairness Fund, to raise money for his fight against the charges. He also published a book titled Positively False, in which the author Loren Mooney helped him explain his miraculous Stage 17 ride and his cycling success much as the journalist Sally Jenkins had done for Armstrong in his equally ironically titled biography, It’s Not About the Bike. Both narratives now read more like fiction.

    Armstrong speaks to Landis during the 2004 Tour de France. (Martin Bureau / AFP / Getty)

    In June of 2008, the Switzerland-based Court of Arbitration for Sport upheld the two-year doping ban imposed on Landis by USADA. Landis had exhausted his appeals. To this day, he maintains that although he used performance-enhancing drugs to cheat in races during the latter part of his career, he was not on testosterone during the 2006 Tour, and was somehow set up to take a fall or be made an example of. USADA’s CEO, Travis Tygart, publicly urged Landis to acknowledge his mistake and come clean. Friends abandoned him. Under threat of criminal prosecution, he agreed to pay back the $478,354 he had raised from donors, on false pretenses, for his defense.

    Since Landis’s days as a professional athlete, his features have softened, from borderline emaciated to prototypically American. As we enter a restaurant bar in Golden, Colorado, no one recognizes him. His jeans are loose-fitting, and his hair is an awkward length that requires almost constant attention to keep out of his eyes. He seems happy and, quite possibly, at peace with his life.

    In 2016, he launched his marijuana business, Floyd’s of Leadville, which specializes in treating athletes with cannabis-infused analgesic creams, tinctures, and softgels. After almost a decade of using opioids to quell the pain left in his own body from eight years of professional cycling—he had his hip replaced in 2006—Landis discovered that the powerful anti-inflammatory component of marijuana, cannabidiol, could accomplish similar results without the horrific side effects. Now opioid-free, Landis believes in its potential: “This stuff has done so much for me.”

    I asked Landis, before the settlement was announced, about the prospect of the whistle-blower suit making him rich again after his fall from grace, but he demurred: “I don’t care about the money. I don’t care if I get anything out of it.” Likewise, when I asked him his feelings about taking down his old antagonist, he said only, “It was never about Lance in the first place. But I had a choice to come clean or not, and if I did, it was going to be me against Lance, because he was going to fight.”

    What he was really interested in talking about is what he sees as the ongoing corruption in the upper echelons of cycling. Since he blew the doors off the sport’s omertà, cycling has ostensibly cleaned up its act. But Landis believes that the speeds at which cyclists are now riding—on the same sections of European roads he raced—haven’t slowed enough for that to be true, and mounting evidence seems to point to, if not outright doping, at least gray-area techniques.

    Take Team Sky, from Manchester, England. “Team Sky looks exactly like what we were doing—exactly,” Landis said, referring to its current dominance of the cycling world. “So they were able to do that without drugs, but we weren’t? People haven’t evolved over the last eight years.” Sky has won five of the last six Tours, but the legitimacy of its champions has come under scrutiny. A U.K. parliamentary-committee investigation recently concluded that Bradley Wiggins, the 2012 winner, had crossed an “ethical line” by abusing the Therapeutic Use Exemption (TUE) system, which allows an athlete to take banned drugs in order to treat medical conditions. The committee accused him of using corticosteroids to improve his power-to-weight ratio ahead of the race, rather than for the stated purpose of treating asthma. (Both Wiggins and Team Sky have denied crossing any lines to enhance performance.) Wiggins’s former teammate and successor, Chris Froome, who won the past three Tours, failed a drug test during his winning effort at the 2017 Vuelta a España; he had twice the allowed limit of the asthma drug salbutamol in his system. (Froome has denied any wrongdoing, and an International Cycling Union investigation is ongoing.)

    Since retiring from racing, Landis has begun a marijuana business that specializes in treating athletes suffering from sports-related pain. (Benjamin Rasmussen)

    In this drama without heroes, Landis doesn’t think that the disgrace he and Armstrong have undergone has ultimately done much good for the sport. “Taking me down and taking Armstrong down did nothing,” he said. “It was an utter failure because the UCI and WADA [the World Anti-Doping Agency] are still lying to kids and making them think that they can become top athletes clean. And they know that you can’t.” (The UCI said that the TUE system was strengthened in 2014 and is now “fully safeguarded.” WADA said that it is becoming more and more difficult for athletes to cheat without getting caught, and that it is possible for athletes to succeed without doping.)

    I asked Landis how he felt about being considered among the best cyclists in history. “I don’t care, and I don’t even want to be on the fucking list,” he said. “Leave me out of it.”


    This article appears in the May 2018 print edition with the headline “The Man Who Brought Down Lance Armstrong Isn’t Done With Him Yet.” It has been updated to reflect the settlement of the lawsuit against Armstrong.

    The limits of information (2014)

    Do you remember telephone books? These great big lumbering things, often yellow, were once an indispensable part of every household. Today we don't need them anymore, as we can store several phone books' worth of information on small devices we carry around in our pockets. Those devices will also soon be outdated. And one day in the not too distant future our control of information will be complete. We will be able to encode an infinite amount of it on tiny little chips we can implant in our brains.

    Black hole

    An image of a coiled galaxy taken by NASA's Spitzer Space Telescope. The eye-like object at the centre of the galaxy is a monstrous black hole surrounded by a ring of stars. Image: NASA/JPL-Caltech

    Except that we won't. Not because of a lack of technological know-how, but because the laws of nature don't allow it. There is only so much information you can cram into a region of space that contains a finite amount of matter. "We are talking about information in the sense of something that you can store and reproduce," explains Jacob Bekenstein, the physicist who first came up with this limit of information in the early 1980s. "[To be able to do that] you need a physical manifestation of it; it could be on paper, or it could be electronically [stored]."

    Bekenstein isn't a computer scientist or engineer, but a theoretical physicist. When he came up with the Bekenstein bound, as the information limit is now known, he was thinking about a riddle posed by black holes. These arise when a lot of mass is squeezed into a small region of space. According to Einstein's theory of gravity the gravitational pull of that mass will become so strong that nothing, not even light, can escape from its vicinity. That feature is what gave black holes their name.
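
The article does not quote the bound itself, but its standard textbook form is worth having on hand (this is the usual statement of the result, not something Bekenstein says above): for a system of total energy E that fits inside a sphere of radius R, the entropy satisfies

```latex
S \;\le\; \frac{2\pi k_B R E}{\hbar c},
```

and dividing by k_B ln 2 turns this into the corresponding limit on the number of bits the region can hold.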

    Room for randomness

    The riddle concerned the question of what happens when something falls into a black hole. Most physical systems come with room for variation. For example, at this particular instant in time all the atoms and molecules that make up my body are in a particular configuration. But that configuration is only one of many that are possible. You could swap the position of the tea molecules currently sloshing around in my stomach, or reverse the direction in which they are moving, without altering my macrostate: the physical variables I am able to observe in myself.


    This room for variation — the allowed amount of randomness underlying my macrostate — is measured by a number physicists would call my entropy. The more configurations of smallest components (the more microstates) there are corresponding to my macrostate, the higher my entropy. You can also think of entropy in terms of information. If a large number of microstates are possible, then that's because there are many different components (eg atoms) that can be arranged in many different ways. To describe a single microstate exactly would mean to specify the exact position, speed and direction of motion of each component, which requires a lot of information. The higher the entropy, the more information you need. This is why you can think of entropy as measuring the minimum number of bits of information you would need to exactly describe my microstate given my macrostate.
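    To put that in symbols (a standard textbook relation, not something spelled out in the interview): if $\Omega$ is the number of microstates compatible with a given macrostate, then the corresponding entropy is

    $S = k_B \ln \Omega,$

    where $k_B$ is Boltzmann's constant, and the same quantity expressed as information is $\log_2 \Omega = S/(k_B \ln 2)$ bits.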

    The behaviour of the entropy of a system over time is described by a law of physics called the second law of thermodynamics. It says that the entropy of an isolated physical system can only ever go up or stay the same, but it can never decrease. To shed some light on this, think of my cup of tea before I imbibed it. At the very start, the instant I put the milk in, the tea and milk molecules were neatly separated. After a while, however, the milk will have diffused, milk and tea will be thoroughly mixed up, and the liquid will have reached an equilibrium temperature. The latter situation has a higher entropy than the initial situation. That's because there are many more microstates that correspond to an equilibrium cup of tea than there are microstates that correspond to a situation in which the milk and tea molecules are only allowed to be in certain regions within the cup. So the entropy of my cup of tea has increased over time. (You can find out more about entropy here.)

    Losing entropy

    But what about those black holes? Initially black holes were thought of as very simple objects with no room for variation at all. Physicists thought their entropy was zero. But if I fell into a black hole, I would never get out again and my entropy would be lost to the world. The overall entropy of the Universe would have decreased. "The moment you have a black hole, you have some sort of trash can where to hide entropies," says Bekenstein. "So the question is, what does the second law say in that case?"

    Explosion

    A photo taken by NASA's Chandra X-ray Observatory revealing the remains of an explosion in the galaxy Centaurus A. There is a supermassive black hole in the nucleus. Image: NASA.

    It seems that the second law would be violated, and this would indeed be true if the black hole had no entropy at all. However, in 1970 Stephen Hawking found that black holes come with a property that behaves very much like entropy. Every black hole has an event horizon. That's their boundary of no return: if you cross it, you won't come back. Like the shell of an egg the event horizon has an area. Using theoretical calculations Hawking showed that, whatever happens to the black hole, this area never decreases — just like the entropy of an ordinary physical system.

    Bekenstein took the bold step of suggesting that the area of the event horizon does indeed measure a form of entropy. "A black hole is very simple, but it's hiding a complicated history," explains Bekenstein. In an ordinary system like my cup of tea, entropy is a measure of our uncertainty about what's going on at a molecular level. If its entropy is high then that's because there are many possible microstates corresponding to a macrostate. I can observe a macrostate, for example the tea's temperature and mass, but that doesn't give me a clue about what the exact microstate is because there are so many possibilities. "For the simplest black hole all I can figure out is its mass, but it has been formed in one of many possible ways," says Bekenstein. "There are many alternative histories and they all count towards the entropy."

    Bekenstein’s idea was controversial at first, but further investigations into the theory of black holes confirmed that it made sense to define a black hole entropy (call it $S_{BH}$). It turns out to be proportional to a quarter times their horizon’s surface area $A$; to be precise,

    $S_{BH} = \frac{A}{4L_p^2},$

    where $L_p = 1.62 \times 10^{-35}$ m is called the Planck length (the entropy here is measured in units of Boltzmann’s constant).

    Recovering entropy

    The notion of black hole entropy gave people a way of generalising the second law of thermodynamics to systems that include black holes: for such a system it's the sum of the ordinary entropy that lives outside the black hole and the black hole entropy that can never decrease. "If some entropy falls into the black hole the surface area will grow enough for the sum of these two entropies to grow," explains Bekenstein. The increase in the black hole entropy will compensate, and most frequently over-compensate, for the loss in the ordinary entropy outside it.

    The generalised second law inspired Bekenstein to a little thought experiment which gave rise to the Bekenstein bound on information. Suppose you take a little package of matter with entropy $S$ and you lower it into a black hole. This will increase the black hole’s entropy and, equivalently, its surface area. You lower the package into the hole very carefully so as to disturb the hole as little as possible and increase the surface area by the smallest possible amount. Physicists know how to calculate that smallest possible amount. Writing $G$ for Newton’s gravitational constant and $c$ for the speed of light, it turns out to be

    $\Delta A_{min} = \frac{8\pi G m r}{c^2},$

    where $m$ is the total mass of the package and $r$ is its radius. Thus, lowering the package into the black hole will have increased $S_{BH}$ by at least

    $\frac{\Delta A_{min}}{4L_p^2} = \frac{2\pi m r c}{\hbar},$

    where $\hbar$ is the reduced Planck constant (this uses $L_p^2 = G\hbar/c^3$). When you have dropped the package into the black hole, the outside will have lost an amount $S$ of entropy. Since the overall entropy cannot decrease, the increase in $S_{BH}$ must exactly balance or exceed $S.$ In other words,

    $S \leq \frac{2\pi m r c}{\hbar}.$

    The entropy of your package cannot be bigger than the number on the right of this inequality, which depends on the package’s mass and its size. And since any package carrying entropy could in theory be dropped into a black hole in this way, any package must comply with the bound.

    The limits of information storage

    How is all of that linked to the storage capacity of a computer chip or some other information storage device? The entropy measures the number of bits needed to describe the chip’s microstate. Some of those bits go towards describing the parts of the chip designed to store information. More storage capacity requires more entropy. And since the entropy is limited (in terms of the chip’s mass and size) by the expression above, so is its storage capacity. To increase the amount of information a device can carry beyond any bound, we would have to increase its size and/or mass beyond any bound too.
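    To get a feel for the numbers (a back-of-the-envelope estimate of my own, not one from the interview), take a hypothetical storage device of mass $m = 1$ kg and radius $r = 5$ cm. Converting the bound above from units of Boltzmann's constant into bits, by dividing by $\ln 2$, gives roughly

    $\frac{2\pi m r c}{\hbar \ln 2} \approx \frac{2\pi \times 1 \times 0.05 \times 3 \times 10^{8}}{1.05 \times 10^{-34} \times 0.69} \approx 1.3 \times 10^{42} \text{ bits},$

    which dwarfs anything current storage technology can pack into a kilogram of matter.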

    brain

    Could a brain be uploaded on a computer?

    Current devices don't come anywhere near the Bekenstein bound, so there's no need to worry that we will hit the limit any time soon. In fact, the only things physicists know of that exactly reach the bound are black holes themselves. But it is interesting that such a bound even exists. "In the future when information storage technologies will get much better you still will not be able to exceed this," says Bekenstein. "It's a very big bound, but it's a finite bound."

    There is another interesting angle on Bekenstein's bound. It puts a limit on how much information you need to completely describe a physical system, such as a human brain, down to the tiniest detail. Since, according to the bound, that information is finite, this means that, in theory at least, a human brain could be entirely recreated on a computer. Many people believe that everything about a person, including consciousness and a sense of self, arises from physical processes in the brain, so in effect it would be possible to upload a person onto a machine. We are nowhere near being able to do this, both in terms of the computing power and the other technologies that would be needed. But still, it's a fascinating thought. Matrix, here we come.


    About this article

    Jacob Bekenstein

    Jacob Bekenstein.

    Jacob Bekenstein is Polak Professor of Theoretical Physics at the Hebrew University of Jerusalem. Marianne Freiberger, Editor of Plus, interviewed him in London in July 2014.

    This article is part of our Information about information project, run in collaboration with FQXi. Click here to read other articles on information and black holes.



    Software Defined Culture


    This five-part series of posts covers a talk titled Software Defined Culture that I gave at DevOps Days Philadelphia 2016, GOTO Chicago 2017, and Velocity San Jose 2017. I think the talk got better each time I iterated on it. The talk was full of dumb jokes and silly gifs, whereas these posts will remove most of the gags and maybe clarify a couple of points. As a talk there's a limit to how deep I could go on any given topic, but having this here in blog format will give me a framework on which to hang some upcoming topics.

    First, a warning. These posts will get a bit ranty. I'm talking about culture and the decisions we make around technical choices. But we have to be careful talking about culture. It's easy (especially as technologists) to be incredibly tone deaf when talking about culture. It's easy to make assumptions that your experiences are the same as other people's experiences. So for these posts, keep in mind that they're based on my experiences working mostly in small and mid-size organizations, working with enterprise developers in customer organizations, and being part of the technical communities I've been a part of personally. Your mileage may vary.

    Second, some disclaimers. I'm going to quote a few people in these posts, but none of these people would necessarily endorse any particular position I'm making here. I'm also going to poke some fun at various technologies. These are all technologies I've personally worked with. So if I'm not making fun of your favorite technology, it's not because it's necessarily any good but just that I haven't worked with it before.

    If you'd like to skip ahead to the rest of the series:

    1. Part 1: Build for Reliability
    2. Part 2: Build for Operability
    3. Part 3: Build for Observability
    4. Part 4: Build for Responsibility

    Shipping the Org Chart

    My friend and former colleague Bridget Kromhout is fond of saying "containers won't fix your broken culture". And she's right. Ryn Daniels, one of the authors of Effective DevOps, said in their keynote at Velocity NY 2016 that "tools won't fix your broken culture." And they're right.

    This is the essence of the problem we face as technologists trying to improve our organizations. We have an embarrassment of technical tooling and best practices, but none of it really fixes our human problems. Hell, most of the time it barely fixes our technical problems, so how would we expect it to fix our human problems?

    I live in Philadelphia, and like many communities we have a Slack, and there's a devops channel. And every few months it seems we get an exchange like this:

    Slack channel

    And, sure, maybe this response could be a little more constructive. But the point is that many people are confusing "devops tools" for a model of working. Simply using the tools by themselves doesn't mean you're going to make the cultural transformation you might be looking to make. So often tooling is treated like a kind of spiritual bypass — "I'm using Docker, so I'm doing The DevOps".

    Part of the reason this doesn't work is because of something called Conway's Law, which was coined by Melvin Conway in his 1967 paper How Do Committees Invent?

    "Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure."

    It's important to keep in mind here that "system" doesn't just mean the technical system; it also means the cultural system.

    Chaotic Feedback

    Human beings are the biggest distributed system. Or as Andrew Clay Shafer likes to say, organizations are a "socio-technical system." But like all complex systems, cultural systems are subject to chaotic feedback mechanisms. Subtle disturbances in equilibrium over time can build up to have outsized effects. Could we use these mechanisms to make technical choices to improve our culture?

    Instinctively we all know that making bad technical choices can influence our culture. We know that if we don't build for observability, the operations team will be frustrated with the developers who are making them fly blind. We know that if we don't build for self-serviceability, the development teams will hate the operations team for saying "no" all the time. We know that if we don't build for flexibility, the product management team will be frustrated with the inability of the development team to react quickly to changing market conditions.

    If you were up all night every night over the weekend because PagerDuty was sending alerts over some crap deployments, then on Monday the team is going to be tired and maybe even cranky with each other. In an org where there's a separate operations team, the operators' trust will be eroded every time they get paged unnecessarily. That erosion of trust is a technical decision influencing your culture.

    Last April, Bill Higgins at IBM had a great blog post about bringing new tools to a project team to catalyze a change in the way the team organized itself. He had great results, which led him to use the word "magic" a lot to describe the impact:

    "The magic is in the new, better practices that the tools enable. A tool is a vehicle for practices. Practices directly shape habits and tacit assumptions. Habits and tacit assumptions are the foundations of culture."

    The idea that he kept coming back to was that all these tools had amazing surface usability but the magic was actually about the new methods and collaboration that they generated. GitHub is "just a pretty web UI on git", right? But what it enables is a workflow around peer review, collaborative software development, and communal ownership of code. Slack is "just a pretty web UI on IRC", right? But unlike IRC, you can get non-technical people to use it. Slack channels become a place where technical teams (and their bots!) and non-technical teams can share a common medium of communication.

    Culture influences tools but we can clearly see that tools can influence culture as well. This back-and-forth was illustrated nicely by Avi Vig in an AMA on Reddit (of all places) about his experience as an operations engineer at Etsy:

    "A lot of CD is to do with culture, much more than tools... Once you have the culture moving in the right direction, where developers are happy pushing code and owning software problems, and operations teams are OK letting go of the control and working with developers, the tools become less important."

    If we know our technical choices can influence our culture, how can we make technical choices that will reinforce the values that we want in our organizations? I've come up with four guidelines for technical decision making which I'm going to grandiosely call the "4 principles of software defined culture." The remaining posts in this series will hit on each of these:

    1. Part 1: Build for Reliability
    2. Part 2: Build for Operability
    3. Part 3: Build for Observability
    4. Part 4: Build for Responsibility

    Software Testing Anti-patterns


    Introduction

    There are several articles out there that talk about testing anti-patterns in the software development process. Most of them, however, deal with the low-level details of the programming code, and almost always they focus on a specific technology or programming language.

    In this article I wanted to take a step back and catalog some high-level testing anti-patterns that are technology agnostic. Hopefully you will recognize some of these patterns regardless of your favorite programming language.

    Terminology

    Unfortunately, testing terminology has not reached a common consensus yet. If you ask 100 developers what the difference is between an integration test, a component test and an end-to-end test, you might get 100 different answers. For the purposes of this article I will focus on the definition of the test pyramid as presented below.

    The Testing pyramid

    If you have never encountered the testing pyramid before, I would urge you to become familiar with it first before going on. Some good starting points are:

    The testing pyramid deserves a whole discussion on its own, especially on the topic of the amount of tests needed for each category. For the current article I am just referencing the pyramid in order to define the two lowest test categories. Notice that in this article User Interface Tests (the top part of the pyramid) are not mentioned (mainly for brevity reasons and because UI tests come with their own specific anti-patterns).

    Therefore the two major test categories mentioned as unit and integration tests from now on are:

    | Tests | Focus on | Require | Speed | Complexity | Setup needed |
    | --- | --- | --- | --- | --- | --- |
    | Unit tests | a class/method | the source code | very fast | low | No |
    | Integration tests | a component/service | part of the running system | slow | medium | Yes |

    Unit tests are the category of tests that have wider acceptance regarding the naming and what they mean. They are the tests that accompany the source code and have direct access to it. Usually they are executed with an xUnit framework or similar library. These tests work directly on the source code and have full view of everything. A single class/method/function is tested (or whatever is the smallest possible working unit for that particular business feature) and anything else is mocked/stubbed.
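    As a quick illustration (Python with pytest here purely as an example, since this article is technology agnostic; the function and the numbers are made up):

```python
# A unit test works directly against the source code: no I/O, no network,
# no database. The function below is a hypothetical piece of production code.

def apply_discount(price: float, percent: float) -> float:
    """Illustrative production code under test."""
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_happy_path():
    assert apply_discount(100.0, 20) == 80.0


def test_apply_discount_zero_percent_changes_nothing():
    assert apply_discount(59.99, 0) == 59.99
```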

    Integration tests (also called service tests, or even component tests) focus on a whole component. A component can be a set of classes/methods/functions, a module, a subsystem or even the application itself. They examine the component by passing input data and examining the output data it produces. Usually some kind of deployment/bootstrap/setup is required first. External systems can be mocked completely, replaced (e.g. using an in-memory database instead of a real one), or the real external dependency might be used, depending on the business case. Compared to unit tests they may require more specialized tools, either for preparing the test environment or for interacting with it and verifying the results.

    The second category suffers from a blurry definition, and most naming controversies regarding testing start here. The "scope" of integration tests is also highly controversial, especially the nature of access to the application (black box or white box testing, and whether mocking is allowed or not).

    As a basic rule of thumb if

    • a test uses a database
    • a test uses the network to call another component/application
    • a test uses an external system (e.g. a queue or a mail server)
    • a test reads/writes files or performs other I/O
    • a test does not rely on the source code but instead it uses the deployed binary of the app

    …then it is an integration test and not a unit test.
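    For contrast, here is a sketch of a test that falls on the integration side of that rule of thumb (again illustrative Python, with a throwaway SQLite file standing in for "a database"):

```python
# This test touches a real (file-based) database, so by the rule of thumb
# above it is an integration test: it needs setup/teardown and performs I/O.
import os
import sqlite3
import tempfile
import unittest


class CustomerRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        self.db_path = os.path.join(tempfile.mkdtemp(), "shop.db")
        self.conn = sqlite3.connect(self.db_path)
        self.conn.execute(
            "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def tearDown(self):
        self.conn.close()
        os.remove(self.db_path)

    def test_insert_and_read_back(self):
        self.conn.execute("INSERT INTO customers (name) VALUES (?)", ("Mary",))
        self.conn.commit()
        rows = self.conn.execute("SELECT name FROM customers").fetchall()
        self.assertEqual(rows, [("Mary",)])


if __name__ == "__main__":
    unittest.main()
```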

    With the naming out of the way, we can dive into the list. The order of anti-patterns roughly follows their appearance in the wild. Frequent problems are gathered in the top positions.

    Software Testing Anti-Pattern List

    1. Having unit tests without integration tests
    2. Having integration tests without unit tests
    3. Having the wrong kind of tests
    4. Testing the wrong functionality
    5. Testing internal implementation
    6. Paying excessive attention to test coverage
    7. Having flaky or slow tests
    8. Running tests manually
    9. Treating test code as a second class citizen
    10. Not converting production bugs to tests
    11. Treating TDD as a religion
    12. Writing tests without reading documentation first
    13. Giving testing a bad reputation out of ignorance

    Anti-Pattern 1 - Having unit tests without integration tests

    This problem is a classic one with small to medium companies. The application that is being developed in the company has only unit tests (the base of the pyramid) and nothing else. Usually lack of integration tests is caused by any of the following issues:

    1. The company has no senior developers. The team has only junior developers fresh out of college who have only seen unit tests
    2. Integration tests existed at one point but were abandoned because they caused more trouble than they were worth. Unit tests were much easier to maintain and so they prevailed.
    3. The running environment of the application is very “challenging” to setup. Features are “tested” in production.

    I cannot really say anything about the first issue. Every effective team should have at least some kind of mentor/champion that can show good practices to the other members. The second issue is covered in detail in anti-patterns 5, 7 and 8.

    This brings us to the last issue - difficulty in setting up a test environment. Now don’t get me wrong, there are indeed some applications that are really hard to test. Once I had to work with a set of REST applications that actually required special hardware on their host machine. This hardware existed only in production, making integration tests very challenging. But this is a corner case.

    For the run-of-the-mill web or back-end application that the typical company creates, setting up a test environment should be a non-issue. With the appearance of Virtual Machines and lately Containers this is more true than ever. Basically if you are trying to test an application that is hard to setup, you need to fix the setup process first before dealing with the tests themselves.

    But why are integration tests essential in the first place?

    The truth here is that there are some types of issues that only integration tests can detect. The canonical example is everything that has to do with database operations. Database transactions, database triggers and any stored procedures can only be examined with integration tests that touch them. Any connections to other modules, either developed by you or by external teams, need integration tests (a.k.a. contract tests). Any tests that need to verify performance are integration tests by definition. Here is a summary of why we need integration tests:

    | Type of issue | Detected by Unit tests | Detected by Integration tests |
    | --- | --- | --- |
    | Basic business logic | yes | yes |
    | Component integration problems | no | yes |
    | Transactions | no | yes |
    | Database triggers/procedures | no | yes |
    | Wrong Contracts with other modules/APIs | no | yes |
    | Wrong Contracts with other systems | no | yes |
    | Performance/Timeouts | no | yes |
    | Deadlocks/Livelocks | maybe | yes |
    | Cross-cutting Security Concerns | no | yes |

    Basically any cross-cutting concern of your application will require integration tests. With the recent microservice craze integration tests become even more important as you now have contracts between your own services. If those services are developed by other teams, you need an automatic way to verify that interface contracts are not broken. This can only be covered with integration tests.
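    As a minimal sketch of such a contract check (illustrative Python; the URL and the expected fields are invented, and the test obviously needs the other team's service to be reachable in the test environment):

```python
# A crude contract test: call the other team's service and verify that the
# fields our own code depends on are still present. Real projects often use a
# dedicated contract-testing tool, but the idea is the same.
import json
import urllib.request


def test_payment_service_contract():
    # Hypothetical endpoint owned by another team.
    with urllib.request.urlopen("http://payments.test.internal/api/v1/status") as resp:
        payload = json.load(resp)

    # These are the fields *our* service relies on.
    for field in ("status", "supported_currencies", "api_version"):
        assert field in payload, f"payment service no longer returns '{field}'"
```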

    To sum up, unless you are creating something extremely isolated (e.g. a command line linux utility), you really need integration tests to catch issues not caught by unit tests.

    Anti-Pattern 2 - Having integration tests without unit tests

    This is the inverse of the previous anti-pattern. It is more common in large companies and large enterprise projects. Almost always the history behind this anti-pattern involves developers who believe that unit tests have no real value and only integration tests can catch regressions. There is a large majority of experienced developers who consider unit tests a waste of time. Usually if you probe them with questions, you will discover that at some point in the past, upper management forced them to increase code coverage (see Anti-Pattern 6), pushing them to write trivial unit tests.

    It is true that in theory you could have only integration tests in a software project. But in practice this would be very expensive (both in developer time and in build time). We saw in the table of the previous section that integration tests can also find business logic errors, and so in that sense they could “replace” unit tests. But is this strategy viable in the long run?

    Integration tests are complex

    Let’s look at an example. Assume that you have a service with the following 4 methods/classes/functions.

    Cyclomatic complexity for 4 modules

    The number on each module denotes its cyclomatic complexity, or in other words the number of separate code paths through that module.

    Mary “by the book” Developer wants to write unit tests for this service (because she understands that unit tests do have value). How many tests does she need to write in order to get full coverage of all possible scenarios?

    It should be obvious that one can write 2 + 5 + 3 + 2 = 12 isolated unit tests that fully cover the business logic of these modules. Remember that this number is just for a single service, and the application Mary is working on has multiple services.

    Joe “Grumpy” developer on the other hand does not believe in the value of unit tests. He thinks that unit tests are a waste of time and he decides to write only integration tests for this module. How many integration tests should he write? He starts looking at all the possible paths a request can take in that service.

    Examining code paths in a service

    Again it should be obvious that all possible scenarios of codepaths are 2 * 5 * 3 * 2 = 60. Does that mean that Joe will actually write 60 integration tests? Of course not! He will try and cheat. He will try to select a subset of integration tests that feel “representative”. This “representative” subset of tests will give him enough coverage with the minimum amount of effort.

    This sounds easy enough in theory, but can quickly become problematic. The reality is that these 60 code paths are not created equally. Some of them are corner cases. For example if we look at module C we see that it has 3 different code paths. One of them is a very special case that can only be recreated if C gets a special input from component B, which is itself a corner case and can only be obtained by a special input from component A. This means that this particular scenario might require a very complex setup in order to select the inputs that will trigger the special condition on component C.

    Mary on the other hand, can just recreate the corner case with a simple unit test, with no added complexity at all.

    Basic unit test
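    Something like this (illustrative Python with invented component names): the "special" value that B would only produce under rare circumstances is simply handed to C directly.

```python
# Mary triggers the rare code path in component C without any end-to-end
# setup: she stubs the special output of component B and feeds it straight
# into C. All names here are hypothetical.
from unittest.mock import Mock


class ComponentC:
    """Stand-in for the third module in the diagram."""

    def process(self, value_from_b: int) -> str:
        if value_from_b < 0:  # the rare corner case
            return "fallback"
        return "normal"


def test_component_c_handles_rare_negative_value_from_b():
    component_b = Mock()
    component_b.compute.return_value = -1  # the special output, stubbed

    result = ComponentC().process(component_b.compute())

    assert result == "fallback"
```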

    Does that mean that Mary will only write unit tests for this service? After all, that would lead her to Anti-Pattern 1. To avoid this, she will write both unit and integration tests. She will keep all unit tests for the actual business logic and then she will write 1 or 2 integration tests that make sure the rest of the system works as expected (i.e. the parts that help these modules do their job).

    The integration tests needed in this system should focus on the rest of the components. The business logic itself can be handled by the unit tests. Mary’s integration tests will focus on testing serialization/deserialization and with the communication to the queue and the database of the system.

    correct Integration tests

    In the end, the number of integration tests will be much smaller than the number of unit tests (matching the shape of the test pyramid described in the first section of this article).

    Integration tests are slow

    The second big issue with integration tests, apart from their complexity, is their speed. Usually an integration test is one order of magnitude slower than a unit test. Unit tests need just the source code of the application and nothing else. They are almost always CPU bound. Integration tests on the other hand can perform I/O with external systems, making them much more difficult to run in an effective manner.

    Just to get an idea on the difference for the running time let’s assume the following numbers.

    • Each unit test takes 60ms (on average)
    • Each integration test takes 800ms (on average)
    • The application has 40 services like the one shown in the previous section
    • Mary is writing 10 unit tests and 2 integration tests for each service
    • Joe is writing 12 integration tests for each service

    Now let’s do the calculations. Notice that I assume that Joe has found the perfect subset of integration tests that give him the same code coverage as Mary (which would not be true in a real application).

    | Time to run | Having only integration tests (Joe) | Having both Unit and Integration tests (Mary) |
    | --- | --- | --- |
    | Just Unit tests | N/A | 24 seconds |
    | Just Integration tests | 6.4 minutes | 64 seconds |
    | All tests | 6.4 minutes | 1.5 minutes |

    The difference in total running time is enormous. Waiting for 1 minute after each code change is vastly different from waiting for 6 minutes. And the 800ms I assumed for each integration test is very conservative. I have seen integration test suites where a single test can take several minutes on its own.
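    If you want to double-check the arithmetic behind the table, it is nothing more than this (same assumptions as above):

```python
# Total running times under the assumptions stated above.
UNIT_TEST_MS = 60
INTEGRATION_TEST_MS = 800
SERVICES = 40

mary_unit = SERVICES * 10 * UNIT_TEST_MS / 1000                 # 24.0 seconds
mary_integration = SERVICES * 2 * INTEGRATION_TEST_MS / 1000    # 64.0 seconds
joe_integration = SERVICES * 12 * INTEGRATION_TEST_MS / 1000    # 384.0 seconds

print(f"Mary, all tests: {(mary_unit + mary_integration) / 60:.1f} minutes")  # ~1.5
print(f"Joe, all tests:  {joe_integration / 60:.1f} minutes")                 # 6.4
```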

    In summary, trying to use only integration tests to cover business logic is a huge time sink. Even if you automate the tests with CI, your feedback loop (time from commit to getting back the test result) will be very long.

    Integration tests are harder to debug than unit tests

    The last reason why having only integration tests (without any unit tests) is an anti-pattern is the amount of time spent to debug a failed test. Since an integration test is testing multiple software components (by definition), when it breaks, the failure can come from any of the tested components. Pinpointing the problem can be a hard task depending on the number of components involved.

    When an integration test fails, you need to be able to understand why it failed and how to fix it. The complexity and breadth of integration tests make them extremely difficult to debug. Again, as an example, let's say that your application only has integration tests. The application you are developing is the typical e-shop.

    A developer in your team (or even you) creates a new commit, which triggers the integration tests with the following result:

    breakage of integration tests

    As a developer you look at the test result and see that the integration test named “Customer buys item” is broken. In the context of an e-shop application this is not very helpful. There are many reasons why this test might be broken.

    There is no way to know why the test broke without diving into the logs and metrics of the test environment (assuming that they can pinpoint the problem). In several cases (and more complex applications) the only way to truly debug an integration test is to checkout the code, recreate the test environment locally, then run the integration tests and see it fail in the local development environment.

    Now imagine that you work with Mary on this application so you have both integration and unit tests. Your team makes some commits, you run all the tests and get the following:

    breakage of both kinds of tests

    Now two tests are broken:

    • “Customer buys item” is broken as before (integration test)
    • “Special discount test” is also broken (unit test)

    It is now very easy to see where to start looking for the problem. You can go directly to the source code of the Discount functionality, locate the bug, and fix it; in 99% of cases the integration test will be fixed as well.

    Having unit tests break before or with integration tests is a much more painless process when you need to locate a bug.

    Quick summary of why you need unit tests

    This is the longest section of this article, but I consider it very important. In summary while in theory you could only have integration tests, in practice

    1. Unit tests are easier to maintain
    2. Unit tests can easily replicate corner cases and not-so-frequent scenarios
    3. Unit tests run much faster than integration tests
    4. Broken unit tests are easier to fix than broken integration tests

    If you only have integration tests, you waste developer time and company money. You need both unit and integration tests at the same time. They are not mutually exclusive. There are several articles on the internet that advocate using only one type of test. All these articles are misinformed. Sad but true.

    Anti-Pattern 3 - Having the wrong kind of tests

    Now that we have seen why we need both kinds of tests (unit and integration), we need to decide on how many tests we need from each category.

    There is no hard and fast rule here, it depends on your application. The important point is that you need to spend some time to understand what type of tests add the most value to your application. The test pyramid is only a suggestion on the amount of tests that you should create. It assumes that you are writing a commercial web application, but that is not always the case. Let’s see some examples:

    Example - Linux command line utility

    Your application is a command line utility. It reads one special format of a file (let's say a CSV) and exports another format (let's say JSON) after doing some transformations. The application is self-contained, does not communicate with any other system or use the network. The transformations are complex mathematical processes that are critical for the correct functionality of the application (it should always be correct even if it is slow).

    In this contrived example you would need:

    • Lots and lots of unit tests for the mathematical equations.
    • Some integration tests for the CSV reading and JSON writing
    • No UI tests because there is no UI.

    Here is the breakdown of tests for this project:

    Test pyramid example

    Unit tests dominate in this example and the shape is not a pyramid.

    Example - Payment Management

    You are adding a new application that will be inserted into an existing big collection of enterprise systems. The application is a payment gateway that processes payment information for an external system. This new application should keep a log of all transactions to an external DB, it should communicate with external payment providers (e.g. Paypal, Stripe, WorldPay) and it should also send payment details to another system that prepares invoices.

    In this contrived example you would need

    • Almost no unit tests because there is no business logic
    • Lots and lots of integration tests for the external communications, the db storage, the invoice system
    • No UI Tests because there is no UI

    Here is the breakdown of tests for this project:

    Test pyramid example

    Integration tests dominate in this example and the shape is not a pyramid.

    Example - Website creator

    You are working on this brand new startup that will revolutionize the way people create websites, by offering a one-of-a-kind way to create web applications from within the browser.

    The application is a graphical designer with a toolbox of all the possible HTML elements that can be added on a web page along with library of premade templates. There is also the ability to get new templates from a marketplace. The website creator works in a very friendly way by allowing you to drag and drop components on the page, resize them, edit their properties and change their colors and appearance.

    In this contrived example you would need

    • Almost no unit tests because there is no business logic
    • Some integration tests for the marketplace
    • Lots and lots of UI tests that make sure the user experience is as advertised

    Here is the breakdown of tests for this project:

    Test pyramid example

    UI tests dominate here and the shape is not a pyramid.

    I used some extreme examples to illustrate the point that you need to understand what your application needs and focus only on the tests that give you value. I have personally seen “payment management” applications with no integration tests and “website creator” applications with no UI tests.

    There are several articles on the web (I am not going to link them) that talk about a specific amount of integration/unit/UI tests that you need or don't need. All these articles are based on assumptions that may not be true in your case.

    Anti-Pattern 4 - Testing the wrong functionality

    In the previous sections we have outlined the types and amount of tests you need to have for your application. The next logical step is to explain what functionality you actually need to test.

    In theory, getting 100% code coverage in an application is the ultimate goal. In practice this goal is not only difficult to achieve, but it also doesn't guarantee a bug-free application.

    There are some cases where indeed it is possible to test all functionality of your application. If you start on a green-field project and work in a small team that is well behaved and takes into account the effort required for tests, it is perfectly fine to write new tests for all new functionality you add (because the existing code already has tests).

    But not all developers are lucky like this. In most cases you inherit an existing application that has a minimal amount of tests (or even none!). If you are part of a big and established company, working with legacy code is mostly the rule rather than the exception.

    Ideally you would have enough development time to write tests for both new and existing code for a legacy application. This is a romantic idea that will probably be rejected by the average project manager, who is mostly interested in adding new features rather than testing/refactoring. You have to pick your battles and find a fine balance between adding new functionality (as requested by the business) and expanding the existing test suite.

    So what do you test? Where do you focus your efforts? Several times I have seen developers wasting valuable testing time by writing “unit tests” that add little or no value to the overall stability of the application. The canonical example of useless testing is trivial tests that verify the application data model.

    Code coverage is analyzed in detail in its own anti-pattern section. In the present section however we will talk about code “severity” and how it relates to your tests.

    If you ask any developer to show you the source code of any application, he/she will probably open an IDE or code repository browser and show you the individual folders.

    Source code physical model

    This representation is the physical model of the code. It defines the folders in the filesystem that contain the source code. While this hierarchy of folders is great for working with the code itself, unfortunately it doesn’t define the importance of each code folder. A flat list of code folders implies that all code components contained in them are of equal importance.

    This is not true, as different code components have a different impact on the overall functionality of the application. As a quick example let's say that you are writing an eshop application and two bugs appear in production:

    1. Customers cannot check-out their cart halting all sales
    2. Customers get wrong recommendations when they browse products.

    Even though both bugs should be fixed, it is obvious that the first one has higher priority. Therefore if you inherit an eshop application with zero tests, you should write new tests that directly validate the check-out functionality rather than the recommendation engine. Despite the fact that the recommendation engine and the check-out process might exist in sibling folders in the filesystem, their importance is different when it comes to testing.

    To generalize this example, if you work for some time in any medium/large application you will soon need to think about code using a different representation - the mental model.

    Source code mental model

    I am showing here 3 layers of code, but depending on the size of your application it might have more. These are:

    1. Critical code - This is the code that breaks often, gets most of new features and has a big impact on application users
    2. Core code - This is the code that breaks sometimes, gets few new features and has medium impact on the application users
    3. Other code - This is code that rarely changes, rarely gets new features and has minimal impact on application users.

    This mental model should be your guiding principle whenever you write a new software test. Ask yourself if the functionality you are writing tests for belongs to the critical or core categories. If yes, then write a software test. If no, then maybe your development time would be better spent elsewhere (e.g. on another bug).

    The concept of having code with different severity categories is also great when you need to answer the age old question of how much code coverage is enough for an application. To answer this question you need to either know the severity layers of the application or ask somebody that does. Once you have this information at hand the answer is obvious:

    Try to write tests that work towards 100% coverage of critical code. If you have already done this, then try to write tests that work towards 100% of core code. Trying however to get 100% coverage on total code is not recommended.

    The important thing to notice here is that the critical code in an application is always a small subset of the overall code. So if in an application critical code is let’s say 20% of the overall code, then getting just 20% overall code coverage is a good first step for reducing bugs in production.

    In summary, write unit and integration tests for code that

    • breaks often
    • changes often
    • is critical to the business

    If you have the time luxury to further expand the test suite, make sure that you understand the diminishing returns before wasting time on tests with little or no value.

    Anti-Pattern 5 - Testing internal implementation

    More tests are always a good thing. Right?

    Wrong! You also need to make sure that the tests are actually structured in a correct way. Having tests that are written in the wrong manner is bad in two ways.

    • They waste precious development time the first time they are written
    • They waste even more time when they need to be refactored (when a new feature is added)

    Strictly speaking, test code is like any other type of code. You will need to refactor it at some point in order to improve it in a gradual way. But if you find yourself routinely changing existing tests just to make them pass when a new feature is added, then your tests are not testing what they should be testing.

    I have seen several companies that started new projects and, thinking that they would get it right this time, started writing a big number of tests to cover the functionality of the application. After a while, a new feature got added and several existing tests needed to change in order to make them pass again. Then another new feature was added and more tests needed to be updated. Soon the amount of effort spent refactoring/fixing the existing tests was actually larger than the time needed to implement the feature itself.

    In such situations, several developers just accept defeat. They declare software tests a waste of time and abandon completely the existing test suite in order to focus fully on new features. In some extreme scenarios some changes might even be held back because of the amount of tests that break.

    The problem here is of course the bad quality of tests. Tests that need to be refactored all the time suffer from tight coupling with the main code. Unfortunately, you need some basic testing experience to understand which tests are written in this “wrong” way.

    Having to change a big number of existing tests when a new feature is introduced is just the symptom. The actual problem is that the tests were written to verify internal implementation, which is always a recipe for disaster. There are several software testing resources online that attempt to explain this concept, but very few of them show solid examples.

    I promised in the beginning of this article that I will not speak about a particular programming language and I intend to keep that promise. In this section the illustrations show the data structure of your favorite programming language. Think of them as structs/objects/classes that contain fields/values.

    Let’s say that the customer object in an e-shop application is the following:

    Tight coupling of tests

    The customer type has only two values, where 0 means "guest user" and 1 means "registered user". Developers look at the object and write 10 unit tests that verify various cases of guest users and 10 cases of registered users. And when I say "verify" I mean that tests are looking at this particular field in this particular object.

    Time passes by and business decides that a new customer type with value 2 is needed for affiliates. Developers add 10 more tests that deal with affiliates. Finally another type of user called “premium customer” is added and developers add 10 more tests.

    At this point, we have 40 tests in 4 categories that all look at this particular field. (These numbers are imaginary. This contrived example exists only for demonstration purposes. In a real project you might have 10 interconnected fields within 6 nested objects and 200 tests).

    Tight coupling of tests example
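    In code, such a test might look like this (an illustrative Python sketch with invented names; the problem is the final assertion on the internal field):

```python
# A tightly coupled test: the assertion inspects the customer's internal
# 'type' encoding (0 = guest, 1 = registered) instead of its behavior.

class Customer:
    """Minimal stand-in for the e-shop customer object in the figure."""

    def __init__(self):
        self.type = 0  # 0 = guest user

    def register(self, email):
        self.type = 1  # 1 = registered user
        self.email = email


def test_registered_customer_sets_internal_type():
    customer = Customer()
    customer.register("mary@example.com")
    assert customer.type == 1  # breaks the moment the internal encoding changes
```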

    If you are a seasoned developer you can always imagine what happens next. New requirements come that say:

    1. For registered users, their email should also be stored
    2. For affiliate users, their company should also be stored
    3. Premium users can now gather reward points.

    The customer object now changes as below:

    Tight coupling of tests broken

    You now have 4 objects connected with foreign keys and all 40 tests are instantly broken because the field they were checking no longer exists.

    Of course in this trivial example one could simply keep the existing field to not break backwards compatibility with tests. In a real application this is not always possible. Sometimes backwards compatibility might essentially mean that you need to keep both old and new code (before/after the new feature) resulting in a huge bloat. Also notice that having to keep old code around just to make unit tests pass is a huge anti-pattern on its own.

    In a real application when this happens, developers ask management for some extra time to fix the tests. Project managers then declare that unit testing is a waste of time because the tests seem to hinder new features. The whole team then abandons the test suite by quickly disabling the failing tests.

    The big problem here is not testing, but the way the tests were constructed. Instead of testing internal implementation, they should verify expected behavior. In our simple example, instead of directly testing the internal structure of the customer, the tests should check the exact business requirement of each case. Here is how these same tests should be handled instead.

    Tests that test behavior

    The tests do not really care about the internal structure of the customer object. They only care about its interactions with other objects/methods/functions. The other objects/methods/functions should be mocked when needed on a case-by-case basis. Notice that each type of test directly maps to a business need rather than a technical implementation (which is always a good practice).

    If the internal implementation of the Customer object changes, the verification code of the tests remains the same. The only thing that might change is the setup code for each test, which should be centralized in a single helper function called createSampleCustomer() or something similar (more on this in Anti-Pattern 9).
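    Here is a sketch of that behavior-focused style (illustrative Python; the method names echo the ones in the figures, everything else is invented):

```python
# Behavior-focused tests: assert on what a registered customer can do, not on
# how the customer is stored. Setup is centralized in createSampleCustomer().

class Customer:
    """Minimal stand-in; the internal representation is free to change."""

    def __init__(self):
        self._registered = False

    def register(self, email):
        self._registered = True
        self._email = email

    def get_premium_discount(self):
        # Business rule (invented for the example): only registered
        # customers get the 10% discount.
        return 0.10 if self._registered else 0.0


def createSampleCustomer():
    return Customer()


def test_registered_customer_gets_premium_discount():
    customer = createSampleCustomer()
    customer.register("mary@example.com")
    assert customer.get_premium_discount() == 0.10


def test_guest_customer_gets_no_discount():
    assert createSampleCustomer().get_premium_discount() == 0.0
```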

    Of course in theory it is possible for the verified objects themselves to change. In practice it is not realistic for changes to happen at loginAsGuest() and register() and showAffiliateSales() and getPremiumDiscount() at the same time. In a realistic scenario you would have to refactor 10 tests instead of 40.

    In summary, if you find yourself continuously fixing existing tests as you add new features, it means that your tests are tightly coupled to internal implementation.

    Anti-Pattern 6 - Paying excessive attention to test coverage

    Code coverage is a favorite metric among software stakeholders. Endless discussions have happened (and will continue to happen) among developers and project managers on the amount of code coverage a project needs.

    The reason why everybody likes to talk about code coverage is because it is a metric that is easy to understand and quantify. There are several easily accessible tools that output this metric for most programming languages and test frameworks.

    Let me tell you a little secret: Code coverage is completely useless as a metric. There is no “correct” code coverage number. This is a trap question. You can have a project with 100% code coverage that still has bugs and problems. The real metrics that you should monitor are the well-known CTM.

    The Codepipes Testing Metrics (CTM)

    Here is their definition if you have never seen them before:

    | Metric Name | Description | Ideal value | Usual value | Problematic value |
    | --- | --- | --- | --- | --- |
    | PDWT | % of Developers writing tests | 100% | 20%-70% | Anything less than 100% |
    | PBCNT | % of bugs that create new tests | 100% | 0%-5% | Anything less than 100% |
    | PTVB | % of tests that verify behavior | 100% | 10% | Anything less than 100% |
    | PTD | % of tests that are deterministic | 100% | 50%-80% | Anything less than 100% |

    PDWT (Percent of Developers who Write Tests) is probably the most important metric of all. There is no point in talking about software testing anti-patterns if you have zero tests in the first place. All developers in the team should write tests. A new feature should be declared done only when it is accompanied by one or more tests.

    PBCNT (Percent of Bugs that Create New tests). Every bug that slips into production is a great excuse for writing a new software test that verifies the respective fix. A bug that appears in production should only appear once. If your project suffers from bugs that appear multiple times in production even after their original “fix”, then your team will really benefit from this metric. More details on this topic in Antipattern 10.

    PTVB (Percent of Tests that Verify Behavior and not implementation). Tightly coupled tests are a huge time sink when the main code is refactored. This topic was already discussed in Antipattern 5.

    PTD (Percent of Tests that are Deterministic to total tests). Tests should only fail when something is wrong with the business code. Having tests that fail intermittently for no apparent reason is a huge problem that is discussed in Antipattern 7.

    If after reading about these metrics, you still insist on setting a hard number as a goal for code coverage, I will give you the number 20%. This number should be used as a rule of thumb and it is based on the Pareto principle. 20% of your code is causing 80% of your bugs, so if you really want to start writing tests you could do well by starting with that code first. This advice also ties well with Anti-pattern 4 where I suggest that you should write tests for your critical code first.

    Do not try to achieve 100% total code coverage. Achieving 100% code coverage sounds good in theory but almost always is a waste of time:

    • you have wasted a lot of effort, as getting from 80% to 100% is much more difficult than getting from 0% to 20%
    • Increasing code coverage has diminishing returns

    In any non-trivial application there are certain scenarios that need complex unit tests in order to trigger. The effort required to write these tests will usually outweigh the risk involved if these particular scenarios ever fail in production (if they ever do).

    If you have worked with any big application you should know by now that after reaching 70% or 80% code coverage, it is getting very hard to write useful tests for the code that is still untested.

    Code Coverage Effort

    On a similar note, as we already saw in the section for Anti-Pattern 4, there are some code paths that never actually fail in production, and therefore writing tests for them is not recommended. The time spent on getting them covered would be better spent on actual features.

    Code Coverage Value

    Projects that require a specific code coverage percentage as a delivery requirement usually force developers to test trivial code, or to write tests that just verify the underlying programming language. This is a huge waste of time, and as a developer you have the duty to complain to management about such unreasonable demands.

    In summary, code coverage is a metric that should not be used as a representation for quality of a software project.

    Anti-Pattern 7 - Having flaky or slow tests

    This particular anti-pattern has already been documented heavily, so I am just including it here for completeness.

    Since software tests act as an early warning against regressions, they should always work in a reliable way. A failing test should be a cause of concern and the person(s) that triggered the respective build should investigate why the test failed right away.

    This approach can only work with tests that fail in a deterministic manner. A test that sometimes fails and sometimes passes (without any code changes in between) is unreliable and undermines the whole testing suite. The negative effects are twofold:

    • Developers do not trust tests anymore and soon ignore them
    • Even when non-flaky tests actually fail, it is hard to detect them in a sea of flaky tests

    A failing test should be easily recognizable by everybody in your team as it changes the status of the whole build. On the other hand if you have flaky tests it is hard to understand if new failures are truly new or they stem from the existing flaky tests.

    Flaky tests

    Even a small number of flaky tests is enough to destroy the credibility of the rest of the test suite. If you have 5 flaky tests, for example, run the build and get 3 failures, it is not immediately evident if everything is fine (because the failures were coming from the flaky tests) or if you just introduced 3 regressions.

    A similar problem is having tests that are really, really slow. Developers need quick feedback on the result of each commit (also discussed in the next section), so slow tests will eventually be ignored or even not run at all.

    In practice flaky and slow tests are almost always integration tests and/or UI tests. As we go up in the testing pyramid, the probabilities of flaky tests are greatly increasing. Tests that deal with browser events are notoriously hard to get right all the time. Flakiness in integration tests can come from many factors but the usual suspect is the test environment and its requirements.

    The primary defense against flaky and slow tests is to isolate them in their own test suite (assuming that they are not fixable). You can easily find abundant resources on how to fix flaky tests for your favorite programming language by searching online, so there is no point in me explaining the fixes here.
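    As one concrete way to do that isolation (a sketch using a custom pytest marker; the marker name and the tests are illustrative):

```python
# Quarantine known-flaky tests behind a custom marker so the main suite
# stays deterministic. The marker must be registered in pytest.ini, e.g.:
#   markers = flaky: known flaky tests, run in a separate suite
import random

import pytest


@pytest.mark.flaky
def test_third_party_payment_gateway_roundtrip():
    # Stand-in for a browser/network test that fails intermittently.
    assert random.random() > 0.01


def test_discount_calculation():
    # Deterministic test that belongs in the trusted suite.
    assert round(100 * 0.9, 2) == 90.0
```

    The trusted suite then runs with `pytest -m "not flaky"` on every commit, while the quarantined tests run separately (for example nightly) with `pytest -m flaky`.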

    In summary, you should have a reliable test suite (even if it is a subset of the whole test suite) that is rock solid. A test that fails in this suite means that something is really really wrong with the code and any failure means that the code must not be promoted to production.

    Anti-Pattern 8 - Running tests manually

    Depending on your organization you might actually have several types of tests in place. Unit tests, Load tests, User acceptance tests are common categories of test suites that might be executed before the code goes into production.

    Ideally all your tests should run automatically without any human intervention. If that is not possible, at the very least all tests that deal with correctness of code (i.e. unit and integration tests) must run in an automatic manner. This way developers get feedback on the code in the most timely manner. It is very easy to fix a feature when the code is fresh in your mind and you haven't switched context yet to an unrelated feature.

    Test feedback loop

    In the past, the lengthiest step of the software lifecycle was the deployment of the application. With the move to cloud infrastructure, where machines can be created on demand (either as VMs or containers), the time to provision a new machine has been reduced to minutes or seconds. This paradigm shift has caught a lot of companies by surprise, as they were not ready to handle daily or even hourly deployments. Most of their existing practices were centered around lengthy release cycles. Waiting for a specific point in the release to “pass QA” with manual approval is one of those obsolete practices that no longer applies if a company wants to deploy as fast as possible.

    Deploying as fast as possible implies that you trust each deployment. Trusting an automated deployment requires a high degree of confidence in the code that gets deployed. While there are several ways of gaining this confidence, the first line of defense should be your software tests. However, having a test suite that can catch regressions quickly is only half of the equation. The other half is running the tests automatically (ideally after every commit).

    A lot of companies think that they practice continuous delivery and/or deployment. In reality they don’t. Practicing true CI/CD means that at any given point in time there is a version of the code that is ready to be deployed. This in turn means that the release candidate has already been tested. Therefore, having a packaged version of an application “ready” which has not really “passed QA” is not true CI/CD.

    Unfortunately, while most companies have correctly realized that deployments should be automated (because using humans for them is error-prone and slow), I still see companies where launching the tests is a semi-manual process. And by semi-manual I mean that even though the test suite itself might be automated, there are still human housekeeping tasks such as preparing the test environment or cleaning up the test data after the tests have finished. That is an anti-pattern because it is not true automation; all aspects of testing should be automated.
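    For instance, the housekeeping steps mentioned above can usually be folded into the test suite itself. A minimal sketch with a pytest fixture, using a throwaway sqlite database purely for illustration (a real project might provision a schema or a container instead):

        import sqlite3

        import pytest


        @pytest.fixture
        def orders_db(tmp_path):
            # Setup that used to be a manual step: provision an isolated database.
            conn = sqlite3.connect(str(tmp_path / "test.sqlite"))
            conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
            yield conn
            # Teardown that used to be a manual step: throw the test data away.
            conn.close()


        def test_orders_are_persisted(orders_db):
            orders_db.execute("INSERT INTO orders (id, total) VALUES (1, 42.0)")
            total = orders_db.execute("SELECT total FROM orders WHERE id = 1").fetchone()[0]
            assert total == 42.0

    Every test gets a clean environment, and nobody has to remember to prepare or clean anything by hand after the build.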

    Automated tests

    Having access to VMs or containers means that it is very easy to create various test environments on demand. Creating a test environment on the fly for an individual pull request should be standard practice within your organization. This means that each new feature is tested in isolation. A problematic feature (i.e. one that causes tests to fail) should not block the release of the other features that need to be deployed at the same time.

    An easy way to gauge the level of test automation within a company is to watch the QA/test people in their daily job. In the ideal case, testers just create new tests that are added to the existing test suite. Testers themselves do not run tests manually; the test suite is run by the build server.

    In summary, testing should be something that happens all the time behind the scenes, driven by the build server. Developers should learn the result of the tests for their individual feature within 5-15 minutes of committing code. Testers should create new tests and refactor existing ones, instead of running tests manually.

    Anti-Pattern 9 - Treating test code as a second class citizen

    If you are a seasoned developer, you will always spend some time structuring new code in your mind before implementing it. There are several philosophies regarding code design, and some of them are so significant that they have their own Wikipedia entries; DRY (Don’t Repeat Yourself) is one example.

    DRY is arguably the most important of these, as it forces you to have a single source of truth for code that is reused across multiple features. Depending on your programming language you may also have access to several other best practices and recommended design patterns. You might even have special guidelines that are specific to your team.

    Yet, for some unknown reason, several developers do not apply the same principles to the code that holds the software tests. I have seen projects whose feature code is well designed but whose tests suffer from huge code duplication, hardcoded variables, copy-pasted segments and several other inefficiencies that would be considered inexcusable if found in the main code.

    Treating test code as a second class citizen makes no sense, because in the long run all code needs maintenance. Tests will need to be updated and refactored in the future. Their variables and structure will need to change. If you write tests without thinking about their design you are creating additional technical debt that will be added to the one already present in the main code.

    Try to design your tests with the same attention that you give to the feature code. All common refactoring techniques should be applied to tests as well. As a starting point:

    • All test data creation code should be centralized, so that all tests create test data in the same manner
    • Complex verification segments should be extracted into a common domain-specific library
    • Mocks and stubs that are used many times should not be copy-pasted
    • Test initialization code should be shared between similar tests (a brief sketch of the first and last points follows this list)
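    A minimal sketch of the first and last points (Python with pytest; User and make_user are illustrative names, not from any particular codebase):

        import pytest


        class User:
            def __init__(self, name, is_admin=False, credits=0):
                self.name = name
                self.is_admin = is_admin
                self.credits = credits


        def make_user(**overrides):
            # Centralized test data creation: every test builds users the same way.
            defaults = {"name": "alice", "is_admin": False, "credits": 10}
            defaults.update(overrides)
            return User(**defaults)


        @pytest.fixture
        def admin_user():
            # Shared initialization instead of copy-pasted setup blocks.
            return make_user(name="admin", is_admin=True)


        def test_admins_keep_default_credits(admin_user):
            assert admin_user.is_admin
            assert admin_user.credits == 10

    If the User constructor gains a new required field tomorrow, only make_user changes, instead of dozens of copy-pasted setup blocks across the test suite.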

    If you employ tools for static analysis, source formatting or code quality, configure them to run on test code too.

    In summary, design your tests with the same care that you apply to the main feature code.

    Anti-Pattern 10 - Not converting production bugs to tests

    One of the goals of testing is to catch regressions. As we saw in anti-pattern 4, most applications have a “critical” part of the code where the majority of bugs appear. When you fix a bug, you need to make sure that it doesn’t happen again. One of the best ways to enforce this is to write a test for the fix (unit, integration, or both).

    Bugs that slip into production are perfect candidates for software tests:

    • they show a lack of testing in that area, since the bug has already reached production
    • a test written for such a bug is very valuable, as it guards all future releases of the software

    I am always amazed when I see teams (that otherwise have a sound testing strategy) that don’t write a test for a bug found in production. They correct the code and fix the bug straight away. For some strange reason, a lot of developers assume that writing tests is only valuable when adding a new feature.

    This could not be further from the truth. I would even argue that software tests that stem from actual bugs are more valuable than tests added as part of new development. After all, you never know how often a new feature will break in production (maybe it belongs to non-critical code that will never break). The respective software test is good to have, but its value is uncertain.

    On the other hand, the software test that you write for a real bug is extremely valuable. Not only does it verify that your fix is correct, it also ensures that the fix stays in place even if the same area is refactored later.
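    As a hedged illustration (the bug, the issue number and the function are all made up), such a regression test can be as small as this:

        def apply_discount(price, percent):
            # The fix that closed the (hypothetical) bug: clamp the discount
            # so that prices can never go negative.
            percent = min(max(percent, 0), 100)
            return price * (1 - percent / 100)


        def test_issue_1234_discount_cannot_produce_negative_price():
            # Reproduces the production bug report: a 150% discount used to
            # yield a negative price. Pins the fix down for future releases.
            assert apply_discount(50.0, 150) == 0.0
            assert apply_discount(200.0, 50) == 100.0

    Naming the test after the bug report also documents why the clamp exists, which protects the fix during future refactorings.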

    If you join a legacy project that has no tests, this is also the most obvious way to start getting value from software testing. Rather than guessing which code needs tests, pay attention to the existing bugs and try to cover them with tests. After a while your tests will cover the critical parts of the code, since by definition they verify the things that break most often. One of my suggested metrics embodies the recording of this effort.

    The only case where it is acceptable not to write tests is when the bugs you find in production are unrelated to code and instead stem from the environment itself. A misconfiguration of a load balancer, for example, is not something that can be solved with a unit test.

    In summary, if you are unsure about which code to test next, look at the bugs that slip into production.

    Anti-Pattern 11 - Treating TDD as a religion

    TDD stands for Test-Driven Development and, like all methodologies before it, it is a good idea on paper until consultants try to convince a company that following TDD blindly is the only way forward. At the time of writing this trend is slowly dying, but I decided to mention it here for completeness (as the enterprise world especially suffers from this anti-pattern).

    Broadly speaking when it comes to software tests:

    1. you can write tests before the respective implementation code
    2. you can write tests at the same time as the implementation code
    3. you can write tests after the implementation code
    4. you can write 0 tests (i.e. never) for the implementation code

    One of the core tenets of TDD is to always follow option 1 (writing tests before the implementation code). Writing tests before the code is a good general guideline, but it is certainly not always the best approach.

    Writing tests before the implementation code implies that you are certain about your final API, which may or may not be the case. Maybe you have a clear specification document in front of you and thus know the exact signatures of the methods that need to be implemented. But in other cases you might just want to experiment, do a quick spike, and work your way towards a solution rather than knowing the final solution in advance.

    For a more practical example, it would be naive for a startup to blindly follow TDD. If you work in a startup you might write code that changes so fast that TDD will not be of much help. You might even throw code away until you get it “right”. Writing tests after the implementation code is a perfectly valid strategy in that case.

    Writing no tests at all (option 4) is also a valid strategy. As we saw in anti-pattern 4, there is code that never needs testing. Writing software tests for trivial code just because that is the “correct” way to “do TDD” will get you nowhere.

    The obsession of TDD zealots with writing tests first no matter the case has been a huge detriment to the mental health of sane developers. This obsession is already documented in various places, so hopefully I don’t need to say anything more on the topic (search for “TDD is crap/stupid/dead”).

    At this point I would like to admit that several times I have personally implemented code like this:

    1. Implementing the main feature first
    2. Writing the test afterwards
    3. Running the test to see it succeed
    4. Commenting out critical parts of the feature code
    5. Running the test to see it fail
    6. Uncommenting feature code to its original state
    7. Running the test to see it succeed again
    8. Committing the code

    In summary, TDD is a good idea, but you don’t have to follow it all the time. If you work in a Fortune 500 company, surrounded by business analysts and given clear specs on what you need to implement, then TDD might be helpful.

    On the other hand, if you are just playing with a new framework at home over the weekend and want to understand how it works, feel free to skip TDD.

    Anti-Pattern 12 - Writing tests without reading documentation first

    A professional developer is one who knows the tools of the trade. You might need to spend extra time at the beginning of a project to learn about the technologies you are going to use. New web frameworks come out all the time, and it always pays off to know all the capabilities that can be employed to write effective and concise code.

    You should treat software tests with the same respect. Because several developers treat tests as something secondary (see also anti-pattern 9), they never sit down to actually learn what their testing framework can do. Copy-pasting test code from other projects and examples might seem to work at first glance, but this is not how a professional should behave.

    Unfortunately this pattern happens all too often. People write “helper functions” and “utilities” for tests without realizing that their testing framework already offers that functionality, either built in or with the help of external modules.

    These utilities make the tests hard to understand (especially for junior developers), as they are filled with in-house knowledge that is not transferable to other projects or companies. Several times I have replaced “smart in-house testing solutions” with standard off-the-shelf libraries that do the same thing in a standardized manner.

    You should spend some time learning what your testing framework can do. For example, try to find out how it handles the following (a brief sketch follows the list):

    • parameterized tests
    • mocks and stubs
    • test setup and teardown
    • test categorization
    • conditional running of tests
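    As a small sketch of the first two items (Python with pytest and unittest.mock; convert is an illustrative function, not a real API), built-in parameterization and mocking often replace entire files of in-house helpers:

        from unittest import mock

        import pytest


        def convert(amount, rate_provider):
            return amount * rate_provider()


        @pytest.mark.parametrize("amount,rate,expected", [
            (0, 1.5, 0.0),
            (10, 2.0, 20.0),
            (3, 0.5, 1.5),
        ])
        def test_convert(amount, rate, expected):
            # Parameterized tests: one test body, many cases, no hand-rolled loop helper.
            fake_rate = mock.Mock(return_value=rate)   # a stub, no in-house fake needed
            assert convert(amount, fake_rate) == expected
            fake_rate.assert_called_once()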

    If you are also working on the stereotypical web application, you should do some minimal research to understand the best practices regarding:

    • test data creators
    • HTTP client libraries
    • HTTP mock servers
    • mutation/fuzzy testing
    • DB cleanup/rollback
    • load testing and so on

    There is no need to re-invent the wheel, and that applies to testing code as well. Maybe there are some corner cases where your main application really is a snowflake and needs an in-house utility for its core code. But I can bet that your unit and integration tests are not that special, and writing custom testing utilities for them is a questionable practice.

    Anti-Pattern 13 - Giving testing a bad reputation out of ignorance

    Even though I mention this as the last anti-pattern, it is the one that forced me to write this article. I am always disappointed when I meet people at conferences and meetups who “proudly” proclaim that all tests are a waste of time and that their application works just fine without any testing at all. A more common occurrence is meeting people who are against a specific type of testing (usually either unit or integration), as we saw in anti-patterns 1 and 2.

    When I find people like this, it is my hobby to probe them with questions and understand the reasons behind their hatred of tests. And it always boils down to anti-patterns. They previously worked in companies where tests were slow (anti-pattern 7) or needed constant refactoring (anti-pattern 5). They have been “burned” by unreasonable requests for 100% code coverage (anti-pattern 6) or by TDD zealots (anti-pattern 11) who tried to impose on the whole team their own twisted image of what TDD means.

    If you are one of those people I truly feel for you. I know how hard it is to work in a company that has bad habits.

    Bad experiences with testing in the past should not cloud your judgment when it comes to testing your next greenfield project. Try to look objectively at your team and your project and see whether any of these anti-patterns apply to you. If they do, then you are simply testing in the wrong way and no amount of tests will make your application better. Sad but true.

    It is one thing for your team to suffer from bad testing habits, and quite another to mentor junior developers by declaring that “testing is a waste of time”. Please don’t do the latter. There are companies out there that don’t suffer from any of the anti-patterns mentioned in this article. Try to find them!



    Artificial intelligence accelerates discovery of metallic glass


    If you combine two or three metals together, you will get an alloy that usually looks and acts like a metal, with its atoms arranged in rigid geometric patterns.

    But once in a while, under just the right conditions, you get something entirely new: a futuristic alloy called metallic glass. The amorphous material’s atoms are arranged every which way, much like the atoms of the glass in a window. Its glassy nature makes it stronger and lighter than today’s best steel, and it stands up better to corrosion and wear.

    Although metallic glass shows a lot of promise as a protective coating and alternative to steel, only a few thousand of the millions of possible combinations of ingredients have been evaluated over the past 50 years, and only a handful developed to the point that they may become useful.

    Now a group led by scientists at Northwestern University, the Department of Energy’s SLAC National Accelerator Laboratory and the National Institute of Standards and Technology (NIST) has reported a shortcut for discovering and improving metallic glass — and, by extension, other elusive materials — at a fraction of the time and cost. 

    The research group took advantage of a system at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL) that combines machine learning — a form of artificial intelligence where computer algorithms glean knowledge from enormous amounts of data — with experiments that quickly make and screen hundreds of sample materials at a time. This allowed the team to discover three new blends of ingredients that form metallic glass, and to do it 200 times faster than it could be done before.

    The study was published today, April 13, in Science Advances.

    “It typically takes a decade or two to get a material from discovery to commercial use,” said Chris Wolverton, the Jerome B. Cohen Professor of Materials Science and Engineering in Northwestern’s McCormick School of Engineering, who is an early pioneer in using computation and AI to predict new materials. “This is a big step in trying to squeeze that time down. You could start out with nothing more than a list of properties you want in a material and, using AI, quickly narrow the huge field of potential materials to a few good candidates.” 

    The ultimate goal, said Wolverton, who led the paper’s machine learning work, is to get to the point where a scientist can scan hundreds of sample materials, get almost immediate feedback from machine learning models and have another set of samples ready to test the next day — or even within the hour.

    Over the past half century, scientists have investigated about 6,000 combinations of ingredients that form metallic glass. Added paper co-author Apurva Mehta, a staff scientist at SSRL: “We were able to make and screen 20,000 in a single year.”

    Just getting started

    While other groups have used machine learning to come up with predictions about where different kinds of metallic glass can be found, Mehta said, “The unique thing we have done is to rapidly verify our predictions with experimental measurements and then repeatedly cycle the results back into the next round of machine learning and experiments.”

    There’s plenty of room to make the process even speedier, he added, and eventually automate it to take people out of the loop altogether so scientists can concentrate on other aspects of their work that require human intuition and creativity. “This will have an impact not just on synchrotron users, but on the whole materials science and chemistry community,” Mehta said.

    The team said the method will be useful in all kinds of experiments, especially in searches for materials like metallic glass and catalysts whose performance is strongly influenced by the way they’re manufactured, and those where scientists don’t have theories to guide their search. With machine learning, no previous understanding is needed. The algorithms make connections and draw conclusions on their own, which can steer research in unexpected directions.

    “One of the more exciting aspects of this is that we can make predictions so quickly and turn experiments around so rapidly that we can afford to investigate materials that don’t follow our normal rules of thumb about whether a material will form a glass or not,” said paper co-author Jason Hattrick-Simpers, a materials research engineer at NIST. “AI is going to shift the landscape of how materials science is done, and this is the first step.” 

    Experimenting with data

    In the metallic glass study, the research team investigated thousands of alloys that each contain three cheap, nontoxic metals.

    They started with a trove of materials data dating back more than 50 years, including the results of 6,000 experiments that searched for metallic glass. The team combed through the data with advanced machine learning algorithms developed by Wolverton and Logan Ward, a graduate student in Wolverton’s laboratory who served as co-first author of the paper.

    Based on what the algorithms learned in this first round, the scientists crafted two sets of sample alloys using two different methods, allowing them to test how manufacturing methods affect whether an alloy morphs into a glass. An SSRL x-ray beam scanned both sets of alloys, then researchers fed the results into a database to generate new machine learning results, which were used to prepare new samples that underwent another round of scanning and machine learning.

    By the experiment’s third and final round, Mehta said, the group’s success rate for finding metallic glass had increased from one out of 300 or 400 samples tested to one out of two or three samples tested. The metallic glass samples they identified represented three different combinations of ingredients, two of which had never been used to make metallic glass before.

    The study was funded by the US Department of Energy (award number FWP-100250), the Center for Hierarchical Materials Design and the National Institute of Standards and Technology (award number 70NANB14H012).

    — From an article by Glennda Chui, SLAC National Accelerator Laboratory

    At the Bottom of the Ocean, a Gloomy Discovery

    17 octopods congregate on the sediment free surface of Dorado Outcrop, 16 are in the brooding posture. (Credit: Phil Torres, Dr. Geoff Wheat)

    Almost two miles below the surface of the Pacific Ocean, on a lonely outcrop of bare rock 100 miles from Costa Rica, researchers on a geological expedition found something odd. As their remotely controlled submersible sank through the black waters toward the seafloor, they saw a collection of purple lumps dotting the rocky bottom.

    As they got closer, they resolved themselves into something resembling a bowling ball with suckers. It was a group of female octopuses, of the genus Muusoctopus, guarding clutches of eggs they’d carefully attached to the cracks and crevices snaking across the seafloor.

    It was odd, but the scientists weren’t there to admire the sea life. Led by geochemist Charles Geoffrey Wheat, the expedition was searching for warm seeps on the ocean floor, places where heat from Earth’s interior bleeds out through networks of cracks. Cooler, and less chemically rich than hydrothermal vents, the seeps are still little-studied and could host new and strange collections of undersea life.

    The scientists had found their mark, but it was covered in octopuses. They managed to get observations anyway, probing and poking around the observant mothers, and returned the next year, when they saw the same thing. Octopus moms had colonized the areas around the warm seeps.

    The observation of octopuses this deep would have been no more than a passing curiosity had Janet Voight, associate curator of zoology at Chicago’s Field Museum, not seen the footage. She was surprised, to say the least.

    “This just blew me away,” Voight says. “At 3,000 meters, there’s like four observations, and I’ve made three of them.”

    Finding so many of these octopuses in one place at that depth was a revelation. But, as Voight soon found, it was also a tragedy.

    Labor of Love

    A female octopus’ death is written into her genes. Octopuses only breed once throughout their lives, and the mothers-to-be die once their task is complete.

    After laying a clutch of eggs, and attaching them to a sturdy substrate with tough biological cement, female octopuses have but one task left to them. They brood their eggs with a ferocious singularity of purpose, and once she has laid her eggs, a female octopus will not leave until they hatch. This can take months, in some cases years.

    “When they’re ready to spawn eggs, all of the energy, all of the nutrients in their bodies are directed toward their eggs — they stop taking care of their bodies at a cellular level,” Voight says.

    But the suicide mission is not without merit. In the cold waters where most octopuses live, life moves slowly. Young octopuses need time to develop from helpless embryo to self-sufficient juvenile. The ultimately fatal vigil an octopus mother keeps buys precious time for her vulnerable young.

    It’s a tradeoff: By fixing their eggs to the bottom, and laying their lives on the line, octopus moms ensure that they’ll always be near to their babies. But once the eggs are laid, the octopuses’ wandering days are done. They stay in that spot until they die. If she has picked a spot that’s less than ideal, there’s no going back.

    Tethered to Their Fate

    For the more than 100 Muusoctopus that had turned the pillow basalt near the warm seep into a nursery, their decision would prove deeply unlucky. The water that trickles from deep below is both relatively warm and oxygen-poor, a combination that is ultimately deadly for developing young. The warmth speeds up cellular processes, hastening development, but also increasing the demand for energy.

    “As the embryos start to develop from fertilized cells, they’re increasing their oxygen use … and they’re confronted with less oxygen available,” Voight says. “I don’t see how they can possibly survive.”

    Of the 186 eggs Voight examined on camera footage, not one had a developing embryo inside. For a creature that goes to such extreme lengths to protect its young, the behavior was puzzling. Why set themselves to a task that will ultimately kill them if it’s doomed from the start?

    A clutch of eggs is made visible after a brooding octopod shifts her position. (Credit: Screenshot from ALVIN footage by Anne M. Hartwell)

    Voight thinks that the answer lies in the unpredictable nature of the seeps. They don’t always spew warm, embryo-killing water — in fact, sometimes the seeps can be deceptively silent. When female octopuses happened upon the bare rock uncovered by the seeps it would seem like a perfect place to brood eggs. But at some point over the course of the months the females would spend there, warm water inevitably began to seep through.

    Temperatures at the ocean floor usually hover just above freezing, but the researchers measured water at the seeps at around 50 degrees Fahrenheit — cold to us, but dangerously hot to an octopus. Because they’ve lived down deep for so long, these octopuses have likely lost the ability to thermally regulate their bodies, Voight says. The adult Muusoctopus might actually be doing OK, she thinks, because they’re tall enough to rise above the layer of warm water. Their eggs, stuck fast to the rocks, aren’t so lucky.

    It’s a sad tale, to be sure, but probably not all Muusoctopus share the same fate. If all of the octopus eggs were dying, there’s no way there could be such a large population down there, Voight says. In all likelihood, the group the researchers found represents just an unlucky few, the researchers say in a paper published in Deep Sea Research Part I: Oceanographic Research Papers. The seafloor in that area is riddled with fissures and caves perfect for egg-laying, many of which don’t lie near the warm seep. Healthy octopus babies likely frolic beneath the waves as well.

    Indeed, the researchers report seeing telltale arms waving from hidden clefts, hinting at the eight-armed denizens below, although they weren’t able to get close enough to see if the occupants were guarding any eggs.

    Sad though the discovery may be, for the researchers, it’s another reminder that the deep ocean is a wellspring of fascinating discoveries.

    “There are things in the deep ocean even the experts, and I’m supposed to be an expert in this, haven’t even imagined. And we need to find out what’s down there,” Voight says. “Because if we couldn’t even imagine that a cluster of brooding octopuses like this existed, what else is there that we haven’t even considered?”

    Heterosexual College Students Who Hookup with Same-Sex Partners

    Individuals who identify as heterosexual but engage in same-sex sexual behavior fascinate both researchers and the media. We analyzed the Online College Social Life Survey dataset of over 24,000 undergraduate students to examine students whose last hookup was with a same-sex partner (N = 383 men and 312 women). The characteristics of a significant minority of these students (12% of men and 25% of women) who labelled their sexual orientation "heterosexual" differed from those who self-identified as "homosexual," "bisexual," or "uncertain." Differences among those who identified as heterosexual included more conservative attitudes, less prior homosexual and more prior heterosexual sexual experience, features of the hookups, and sentiments about the encounter after the fact. Latent class analysis revealed six distinctive "types" of heterosexually identified students whose last hookup was with a same-sex partner. Three types, comprising 60% of students, could be classified as mostly private sexual experimentation among those with little prior same-sex experience, including some who did not enjoy the encounter; the other two types in this group enjoyed the encounter, but differed on drunkenness and desire for a future relationship with their partner. Roughly, 12% could be classified as conforming to a "performative bisexuality" script of women publicly engaging in same-sex hookups at college parties, and the remaining 28% had strong religious practices and/or beliefs that may preclude a non-heterosexual identity, including 7% who exhibited "internalized heterosexism." Results indicate several distinctive motivations for a heterosexual identity among those who hooked up with same-sex partners; previous research focusing on selective "types" excludes many exhibiting this discordance.

    My account is sending spam emails

    My account is sending spam emailsLouis Morton21-04-18 16:51

    My email account has sent out 3 spam emails in the past hour to a list of about 10 addresses that I don’t recognize. I changed my password immediately after the first one, but then it happened again 2 more times. The subject of the emails is weight loss and growth supplements for men advertisements. I have reported them as spam. Please help, what else can I do to ensure my account isn’t compromised??

    Re: My account is sending spam emailscatzrule21-04-18 17:00

    Are these emails in your "Sent" label?

    Re: My account is sending spam emailsLouis Morton21-04-18 17:03

    Yes they were. They are gone now though because I marked them as Spam

    Re: My account is sending spam emailscapt-pyro21-04-18 17:24

    this has also happened to me. I've checked access histories and there has been no outside access.

    Re: My account is sending spam emailslevans2721-04-18 17:53
    I have been getting this too. Is it 'via telus.com'? Because I think I am being spoofed. I have changed my passwords and removed all app access and haven't even logged into my account on anything but one browser since the password change. 2FA is also up so it isn't my account security.
    Re: My account is sending spam emailsAnne Victoria Clark21-04-18 17:58

    This is also happening to me right now. I just changed my password and logged out of everything, but it's still happening. I had one just come via Telus.com but the last one was via startimes2.science 

    Re: My account is sending spam emailsjkky21-04-18 18:11
    Same thing just happened to me. Exact scenario where it was sent "via telus.com" but the topics are about bitcoin and funeral insurance(??). Checked my activity and found nothing and I still have 2FA enabled.
    Re: My account is sending spam emailscapt-pyro21-04-18 18:12
    Re: My account is sending spam emailsAeyo Odinflame21-04-18 18:17
    Also me. Is there a way to see what device might have sent it?
    Re: My account is sending spam emailslevans2721-04-18 18:22

    If it is spoofing, I am fairly sure it is done externally so they arent even logging into our accounts, they are using some third party and fooling the server into thinking we sent it. If that is the case it is a Google security hole that needs to be fixed.

    Re: My account is sending spam emailsAndrewB8621-04-18 18:24
    Same thing, via telus.com.  We have any solutions yet?  I'm assuming gmail is working on it if it's happening to multiple people
    Re: My account is sending spam emailshardy young21-04-18 18:25

    Same here. Even after I changed my password and settings. How to fix?

    Re: My account is sending spam emailsElisa Litvin21-04-18 18:32
    Mine is doing the same thing - telus.com -- I've changed my password, just in case and have two-step authentication, so I'm fairly sure no one has logged into my account. Even the Google security is saying that it thinks I'm being spoofed. I hope Google is onto this quickly.
    Re: My account is sending spam emailsCarlos xu21-04-18 18:35
    Re: My account is sending spam emailslevans2721-04-18 18:37

    GOOGLE HALP! There are so many emails coming now!

    Re: My account is sending spam emailsedcasey21-04-18 18:40
    Same here. Multiple emails in my sent folder all sent "via telus"
    Tried contacting telus.com customer service, but so far nothing. Changed all my passwords and reviewed all apps, logins, and devices and don't see anything fishy... Very weird and annoying. 
    Re: My account is sending spam emailslevans2721-04-18 18:46

    Let us know how you go there. I was going to, but assumed it probably wouldnt help.

    Re: My account is sending spam emailsLiis V21-04-18 18:50
    Re: My account is sending spam emailsGareth Furber21-04-18 18:53
    yes - this is happening to me also - appears in my sent folder, but security check indicates no malicious logins to my account
    Re: My account is sending spam emailsTimothy Bragg21-04-18 18:59
    looks like another victim here
    Re: My account is sending spam emailscatzrule21-04-18 19:01

    You must properly secure the account as soon as possible. 

    To ring remotely or erase a lost phone or device:-

    Check apps connected to the account, and to revoke access:-

    Also applies to devices that are maliciously logging in, or stolen/lost etc.

    Run a full antivirus scan, and also use these free tools to find anything the antivirus may overlook:-

    Are there any missing messages?  Has a search been done, in a browser on a PC at https://mail.google.com/, in "Trash/Bin" (not IMAP/Trash), "Inbox" and any tabs that may be there, or "All Mail", for any missing messages?  The "More" label may have to be expanded to see "All Mail".

    To recover lost emails, if at all possible:-

    1. Use the Gmail security checklist to secure the account. 
    2. Fill out this form to find out whether the messages can be recovered. Some, none, or all, may be returned, depending on what is actually still available.

    (Note that step 1. must be completed first.  There is no guarantee that messages can be retrieved.)

    When using public Internet cafes or shared devices etc., be wary of key loggers, and make sure that the account is always fully and correctly signed out of the Google account at all times.

    You must be on a PC or Mac in an up to date browser for this.  If repeated requests are sent via this form without any new information, the requests may be locked out as spam.

    Friendly reminder: account and password security is the account holder's responsibility; it is not the job of Google to do this.  Google provides secure systems at their end; the account holder must do the same at their end.  https://www.google.com/safetycenter/
    Re: My account is sending spam emailsG. Astill21-04-18 19:03
    Happening to me too. I have had two-factor authentication enabled for years. Emails appear in my Inbox and my Sent folder. Also sent to "send...@justvaluerate.com" and "send...@justvaluerate.com". Titled "Who is hotter: Saasha or Clara...", "Funeral Insurance...", "Sign up Free and claim...", and "Why Bitcoin's Price is...". Started at 8:59PM tonight, 4 emails and the last one just came in after I reset my password. 
    Re: My account is sending spam emailsnickromano21-04-18 19:06

    I am also receiving these.  They are also showing up in my spam folder and Google SPF is passing when I use the header analyze tool which means there could be a security issue on Google's side.  I have reset my password, enabled 2FA, and am still receiving them.  If Google would like more information on these emails please have support email me.

    Re: My account is sending spam emailsDan K.21-04-18 19:09

    My account is totally secured and has no access from anywhere but my PC and my phone. Along with 2 factor authentication. Still getting these spam emails from "myself", come on google fix it up.

    Re: My account is sending spam emailsVishesh Goyal21-04-18 19:10

    +1 i'm having this issue as well. Telus.com

    Re: My account is sending spam emailsVishesh Goyal21-04-18 19:10

    +1 i'm having this issue as well. Telus.com

    Re: My account is sending spam emailsivan agudelo21-04-18 19:13

    Happening to me as well. I've had this account for years and have always received spam but never have seen this issue and this is really concerning.

    Re: My account is sending spam emailsNicks21-04-18 19:16
    I'm having the same issue. If it is just spoofing how are the emails appearing in my sent folder?
    Re: My account is sending spam emailsjackson budai21-04-18 19:16
    my account is totally secure with 2 factor authentication and the sent by telus.com messages are still being sent. fix your shit google
    Re: My account is sending spam emailsployrung m21-04-18 19:17
    so does two of my account, I did contact telus.com they said that there is nothing they can do and I should contact my email provider which is Gmail..... anyone be able to fix it?
    Re: My account is sending spam emailsgail gardner21-04-18 19:18
    come on Google, this issue is with telus.com. I have been getting hammered by this site for days now in my spam folder and some in my primary inbox, mostly slutty women stuff. If I shut down this gmail account and open a new one am I going to continue to have this problem???  Does Google have a fix for this???
    Re: My account is sending spam emailsdeletemarkedread21-04-18 19:20

    Same here.  Everyone, don't forget to upvote this thread in addition to commenting.  The faster this thread blows up, the faster Google notices.

    Google please help!!

    Re: My account is sending spam emailsElizabeth Belperio21-04-18 19:25
    Yes this has happened to me this morning too - there are 6 x of these spoof emails in my inbox apparently sent from my account and they are showing in my sent items folder. There is one from nore...@travellstore.com and 5 from sen...@justvaluerate.com

    I'm a little freaked out by this ....

    Re: My account is sending spam emailsF4b21-04-18 19:25

    Same here! Everything seems to be alright in terms of activities but I keep receiving spam from myself and if I look at the Sent folder the emails are actually there, like I sent them myself... I changed the password a few times and activated the 2FA. The annoying thing is that you can't label yourself as a spam in case you want to forward yourself anything in the future...

    (unknown)21-04-18 19:28 <The message has been deleted.>
    Re: My account is sending spam emailsGeorgina Hafteh21-04-18 19:28

    it is happening to me too - any solution?

    Re: My account is sending spam emailsNicks21-04-18 19:29
    Email Header below with my details removed (REDACTED)

    Received: by 10.74.137.155 with SMTP id r27csp2223549ooh;

            Sat, 21 Apr 2018 17:23:03 -0700 (PDT)

    X-Google-Smtp-Source: AIpwx48+FL5yJkYk4FqcYFfNkv/6EPpqbcbDdEEh3BjshHl63bOFLcB6ysoIhV18Lb6Px65qhIjb

    X-Received: by 10.80.214.201 with SMTP id l9mr21051084edj.67.1524356583771;

            Sat, 21 Apr 2018 17:23:03 -0700 (PDT)

    ARC-Seal: i=1; a=rsa-sha256; t=1524356583; cv=none;

            b=fV03lAGl69ctG0iUWnVL81932LC9tri/ay6aZMyUHIqCi1W7/Njm2a9Ar+nyN5h15a

             u9fPnxdLrj2sve1K6NbsYIrL/5+O1XPSULiBlWFaONCcDRxRIAPnIH38fHe+qp+VjrWW

             NMzB1++bE/KSAGJCNzzyihy2LQXTA2lQZM5KgKXcVdYJq4F4YVQGH3jEyKQEauKSH6Sk

             agXnD/beP7Y1KbKEgM49Oc2iG27b5DghMch7obe3U/3lm9TxEJ3DW8gbtmvMZMQFf/CT

             m1VtzK5cfqUC5G6vYU3T8UVNxU7sGmFtHqdJtatUcdJ41oEoSjx9kxP5t5bb5ca7BjiH

             esiQ==

    ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;

            h=date:message-id:subject:to:from:arc-authentication-results;

            bh=ILDw35f8cfqdO/SUJ436amWaKibMKgeEWgpBwz88kDw=;

            b=EcGLPpoe8hxM/sg3CfepxhQSAdcivjLPSMflK2rbga0aIpXH062TCwH8B415Ej54Mn

             6nZNMxZDmygIf1ZUpEsrou8RBi/FWYtP+bO6lYubeQ3Yem8Tyaz6lLnelVMOvbSs1Nvp

             nviGl+Wy+axeHNC31xOC2DyFdCoAQuQZW3n/fUueRtwVOHJ3ByNA6EzelNQ2wKcFb+Dr

             YpxaoHTbR0gtJE4tERLLWSaficFTv8jOQCb2GCz2WXsPhk4i3sGMsUJYFkmHvyQoHTkD

             Bjns9gg2oHgCbtsMWDzhsA12sSgcMk8yoTQHir3ZjZH4xbKEFXjq5UaA4cpgiM0LhG61

             FMrA==

           dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com
            by mx.google.com with ESMTP id t5si1160537edt.292.2018.04.21.17.23.03

            Sat, 21 Apr 2018 17:23:03 -0700 (PDT)

    Received-SPF: pass (google.com: domain of re...@telus.com designates 134.119.189.90 as permitted sender) client-ip=134.119.189.90;
           dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com
            by mx.google.com with ESMTP id n59-v6si5794010qtd.116.2018.04.20.00.37.14

            Fri, 20 Apr 2018 00:37:14 -0700 (PDT)

    Received-SPF: softfail (google.com: domain of transitioning nk...@google.com does not designate 134.119.189.90 as permitted sender) client-ip=13.58.85.245;

    Subject: Funeral Insurance With a Bonus $100 Visa Gift Card

    Content-Type: multipart/report; boundary="f4f5e80f07d80f991b056a2936a0"; report-type=delivery-status

    Date: Sat, 21 Apr 2018 20:22:28 -0400

    --f4f5e80f07d80f991b056a2936a0

    Content-Type: text/html; charset="UTF-8"

    <head>

    </head>

    <body>

    <center>

    <div id="pagewrap" align="center">

    <!--[if gte mso 9]>

    <table width="400px" align="center" cellpadding="0" cellspacing="0" border="0">

    <tr>

      <td>

    <!-->

    <table class="content" border="0" cellspacing="0" cellpadding="0" align="center" bgcolor="#ffffff" style="font-family:Arial, Helvetica, sans-serif; max-width: 400px">

    <tr>

    style="text-decoration: none; display: block; color: #00000; padding:10px 10px 30px 10px;"><p style="width:500px"><center>

    </tr>

    <tr></b></a></td>

    </tr>

    <tr>

      <td>

          <table border="0" align="center" cellpadding="0" cellspacing="0">

          <tbody>

          <tr>

    <td>

      <table border="0" cellspacing="0" cellpadding="0" align="right" style="vertical-align: middle"><br><br>

    <tr>

      <td align="center" style="vertical-align: middle; font-size: 18px; -webkit-border-radius: 5px; 

      -moz-border-radius: 30px; border-radius: 5px; font-family: Baskerville, 'Palatino Linotype', Palatino,

      blank" style=" color: #ffffff; text-decoration: none; -webkit-border-radius: 5px; -moz-border-radius: 5px; border-radius: 

      5px; padding: 12px 18px; border: 5px solid orange; display: inline-block; font-size: 22px;">&nbsp;GET A QUOTE&nbsp;</a></td>

    </tr></center>

      </table>

    </td>

      </tr>

          </tbody>

          </table>

      </td>

    </tr>

    <tr>

       <td height="10px" align="center" bgcolor="#ffffff" style="font-size: 1px; line-height: 1px" >&nbsp;</td>

    </tr>

    </table>

    <!--[if gte mso 9]>

      </td>

    </tr>

    </table>

    <!--> 

    </div>

    </center><br><br>

    <center>

    </body>

    </html>

    --f4f5e80f07d80f991b056a2936a0--

    Re: My account is sending spam emailsQin Liu21-04-18 19:30
    Same here. 5 from "telus.com". Reported spam but nothing changed.
    (unknown)21-04-18 19:31 <The message has been deleted.>
    Re: My account is sending spam emailsskysis21-04-18 19:34
    Did you read what users are complaining about? It's not an account security issue. It's a gmail problem
    Re: My account is sending spam emailsskysis21-04-18 19:35

    Same here. Just what you posted. I think gmail has been hacked.

    Re: My account is sending spam emailsjoshua moskalewicz21-04-18 19:37

    Same here first 2 in my inbox/sent while the last 3 automatically went into my spam

    Re: My account is sending spam emailsdeletemarkedread21-04-18 19:43

    I really wish they didn't mark that one response as an "expert" reply.  It's clearly a copy/paste of general ways to protect an account, but that has nothing to do with what is happening here.  A lot of us already have all of these security measures in place.  GOOGLE HELP!!

    Re: My account is sending spam emailsAshley Fletcher21-04-18 19:44

    Yes - getting lots. What is the solution?

    Re: My account is sending spam emailsMarc Fienman21-04-18 19:48

    Yeah, something appears to be going on.  It started around 7:30 EST for me.  Emails going to inbox and sent email folder.  Appear to have been sent by me.  Changed password several times and didn't change anything.  Everyone's gmail hacked?  All these people at the same time?  Appears fishy.  Changing passwords doesn't help so perhaps something on Gmail's end?  

    Re: My account is sending spam emailselectrotek21-04-18 19:48

    GOOGLE, this is widespread. It looks to be bcc'ing the sender, which is why it appears in their inbox. Maximum security is having ZERO effect.

    Re: My account is sending spam emailsSereine He21-04-18 19:51

    Same thing happened to me. Went to security process and changed password but still got the issue happening

    My account is sending spam emailsAnthony Taveras21-04-18 19:53
    I take it Google is either working on it or doesn't care enough to fix it. Either way I'm also having the same issue and it's coming from telus.com definitely not a good sign for anything. If anyone has a fix for this let us know, also that had only happened once to me but it was really suspicious so I had to look it up.
    Re: My account is sending spam emailsWill Wilkin21-04-18 19:54
    Same here, in the past few hours a few spoof emails of SPAM were supposedly sent by "me" (but not really), "via telus".  I am suspicious that a few hours ago, in order to download an article from academia.edu, I had to give permission for them to access my contacts.  I hated that but these "permissions" are more and more common just to use an app, etc --it's all so invasive....
    My account is sending spam emailshardy young21-04-18 19:58

    Changed password. 2FA enabled. These emails are still showing up in my inbox and my sent folder as coming from me? Wth is going on

    Re: My account is sending spam emailsMakeitStop21-04-18 20:00
    When you view the original message, did you all notice if it's coming from the below? It is saying it's being sent via telus.com and I'm in the USA.  
    Re: My account is sending spam emailslevans2721-04-18 20:03

    Mark his comment as spam, it doesnt address anything we are saying it is just copy paste crap when we have all said we did all of that already.

    Re: My account is sending spam emailsSamuel Hawkins21-04-18 20:04
    I have two factor authentication. I have checked all the access to my account. I have also changed my password just to be sure. No one but me has access to my account. Yet telus.com is using my email to spam me. This is not a security issue with my account. No one is logged into it. We need this to be fixed. It shouldn't be possible.
    Re: My account is sending spam emailsJosephine101021-04-18 20:04

    Me too. There are a number of other threads on this same issue, all happening in the last couple of hours (as at 22 April 2018). I've changed my password and logged out of other devices but that hasn't stopped it. Interestingly, when I delete a spam email from my inbox, it automatically disappears from my sent folder. My first one was on weight loss too. FYI, I've attached a screen shot of one of the first spam emails (where I opened up the "sent" details). I've blacked out my email address from the sent section for privacy.

    Re: My account is sending spam emailsAnthony Taveras21-04-18 20:04
    I noticed that one of the emails it's being sent to is sub...@nytimes.com not sure if it's someone showing a exploit in Google's security system and wants this blasted on the news. 🤷
    Re: My account is sending spam emailsJosephine101021-04-18 20:05
    Mine also says it was sent via telus.com

    see attached screen shot

    Re: My account is sending spam emailsJosephine101021-04-18 20:06

    I also got one via Telus.com

    Re: My account is sending spam emailsJosephine101021-04-18 20:06

    Same. Bitcoin and funeral insurance via Telus.com

    Re: My account is sending spam emailsFranck RABESON21-04-18 20:09

    The same exact thing has started happening to me. I’ve been using a long password AND two-factor authentication for years and nothing suspicious appears in my account activity, so it’s really not an account security issue.

    Re: My account is sending spam emailsJosephine101021-04-18 20:10
    Mine is also sending to those addresses 

    We're all getting the same thing here. 

    Re: My account is sending spam emailsNoura Rijab21-04-18 20:11

    i have been getting the exact emails as well. i changed my pasword as well as put a 2 step verification and i still get them. but it says the sender is my gmail account

    Re: My account is sending spam emailsNoura Rijab21-04-18 20:12

    i am gettting the exact same things. how do i stop this?

    Re: My account is sending spam emailsEldeef21-04-18 20:12

    I am not sure who catzrule is... but if you are a Google moderator, you really need to pay attention to the complaints in this and other forums as of today.  We are all making sure that our accounts are secure but these unwanted messages are still coming through!!  PLEASE FIX THIS!!

    Re: My account is sending spam emailsNoura Rijab21-04-18 20:14

    same with me. are you still getting these emails?

    Re: My account is sending spam emailsRyan Bidgoli21-04-18 20:14
    same thing is happening to me, my account is secure but sending emails with regards to funeral insurance bitcoin etc. 

    I can't block the sender because it's literally me

    Google help

    Re: My account is sending spam emailsIssues need help21-04-18 20:17

    I am gonna try the mute option and see if I can block them that way...the mute option is on the report tab... at least on my note 8 it is.

    Re: My account is sending spam emailswthintribeca21-04-18 20:20
    ME TOO. I saw an email "from" "me" in my Sent folder a couple hours ago. I have always been pretty good with security. I immediately signed out of all active sessions, changed password, enabled 2-factor with Google Authenticator, now it is happening again--the spam, telus.com, none of the emails it's going to are from my contacts, appears to be from my gmail address but not using my username. I have checked security, the only sign-ins are my desktop and phone w/the password i created just a couple hours ago.
    Re: My account is sending spam emailsMattyB21-04-18 20:21

    Same problem.  Been bothering me all morning

    Re: My account is sending spam emailsRini Nishanth21-04-18 20:22
    Re: My account is sending spam emailsNoura Rijab21-04-18 20:22
    Re: My account is sending spam emailsPchanstitch21-04-18 20:23

    Obviously not, because this exact problem is affecting thousands who did everything they were supposed to.

    Re: My account is sending spam emailsNoura Rijab21-04-18 20:24

    at this point is it still safe to use my account?

    Re: My account is sending spam emailsUserresu423456t21-04-18 20:27

    I created several filters -- funeral insurance, who is hotter, bitcoin, etc.

    Re: My account is sending spam emailsBlaze Varone21-04-18 20:31
    I keep getting spam messages sent to me by "Me". It is not a spoofed e-mail address is is actually my own e-mail address that is sending these. I changed my password and added 2 step verification but they are still coming. No outside devices were used. Help!
    Re: My account is sending spam emailsDish21-04-18 20:34
    Getting loads of spam emails, from myself. I've changed my password and it's still happening. What is going on Google? Can you stop it? Have our accounts been compromised? We can't even get in touch to get help.

    Feeling very uneasy!!! 

    Re: My account is sending spam emailsSami M21-04-18 20:36

    Have the exact same thing happening. 

    Re: My account is sending spam emailsCourtney Lynn1111121-04-18 20:40

    Same issue. Getting an email every 10 minutes is getting old.

    Re: My account is sending spam emailsLisa Ann21-04-18 20:45

    SAME THING HAPPENING TO ME! DIET PILLS AND PENIS ENHANCEMENT UGH! SEVERAL EVEN AFTER FOLLOWING ALL SECURITY MEASURES!

    Re: My account is sending spam emailsSami M21-04-18 20:47
    Interestingly, when viewing the messages in the "Sent"-folder, the following warning is present:

    This may be a spoofed message. Gmail couldn't verify that it was actually sent from your account

    I get spoofing happens all the time, but why they show up in my Sent folder is clearly an issue..

    Oh, and "my" spam is also originating from Telus.

    Re: My account is sending spam emailsGlenn Fitzpatrick21-04-18 20:47
    My limited understanding is marking as spam will not block your own legitimate emails as the gmail spam filter uses the senders IP address.

    It will however probably not help as the incoming messages seem to come from multiple different remote IP addresses.

    The real issue preventing an easy solution here is likely just that Google does not allow you to filter using the "X-Forwarded-For" part of the header preventing a simple filter that looks for the via Telus .com header entry

    Re: My account is sending spam emailsNoura Rijab21-04-18 20:51

    i checked my recent activity and it said that someone logged into my account on april the 19th. google tracked the IP address to a Windows device in Brisbane QLD Australia

    Re: My account is sending spam emailsMjcholland2721-04-18 20:58

    I'm having the same issue. If it is just spoofing how are the emails appearing in my sent folder? Mine appear to be from tiny.url.... been smashed all morning out of nowhere... antivirus etc ran... passwords changed... google security checks done.... PLEASE HELP GMAIL!

    Re: My account is sending spam emailsMike Pr21-04-18 20:59

    I am having the exact same issue, I have already changed my password 2 times, frustrating.

    Re: My account is sending spam emailsraewyndonnell21-04-18 21:07

    Same here - dozens of spam emails from myself to myself... then it tells me my name is already taken so I can't even write on this forum! WTF GOOGLE???

    Re: My account is sending spam emailsC7921-04-18 21:11

    I'm having the same issue. Same subjects/content/etc. I hope Google fixes this soon. 

    Re: My account is sending spam emailsTessa Rixon21-04-18 21:31
    Same here, one of my accounts appears to be sending and receiving spam emails about Funeral insurance, Bitcoin and missed deliveries. They have been sent "via telus.com". Passwords changed and I'm reporting each as Spam.
    Re: My account is sending spam emailsLuckyNumberSeven21-04-18 21:43

    I was talking about this in another thread too. After upping my security, I'm still getting spam emails from myself via telus. I know this is not an issue on my side, Google. Seriously, please help us.

    Re: My account is sending spam emailsAlyse Goodacre21-04-18 22:25
    This is also happening to me. Has anyone figured out how to fix it? 
    Re: My account is sending spam emails[email address]21-04-18 22:37

    Add me to this list as well this started happening today 22nd April 2018 at 9:53 Australian CST.

    Have changed my password and it is still happening. 8 recipients are listed.

    Covers a range of topics from rating a female to bible quotes

    Any ideas.

    Re: My account is sending spam emailsGreat Duck21-04-18 22:47

    Yeah I have been 'send'ing out these "telus" emails as well. This seems like a major breach of some kind. There is absolutely no reason for my account to be compromised. I do not use public wi-fi or shared computers. This is the only thing I use this password for and it shouldn't be cracked. I've changed it twice now and still seeing these in my sent box.

    Re: My account is sending spam emailsawblocker21-04-18 22:53

    I've had 4 from dzsfdef@startimes2.science as well, same issue

    Re: My account is sending spam emailsIan Auld21-04-18 23:21
    I don't have any sent emails in my account but I do have some spam messages from ` <Pm...@nieywdufb.telus.com>` confirming "my request to delete my gmail account". Related?
    Re: My account is sending spam emailsJeNny21-04-18 23:44

    +1 To the issue, happening to me too. I have upvoted the topic and several of these replies, if enough people do so hopefully Google will notice and fix it ASAP.

    My account is sending spam emailsAnthony O21-04-18 23:52

    Ditto, happening to me as well

    Re: My account is sending spam emailsAshley M. B21-04-18 23:53
    This is happening to me as well.
    Re: My account is sending spam emailsMatias King21-04-18 23:59

    Many users are in the same situation. How do we stop receiving spam sent from our own Gmail accounts via telus.com?
    Re: My account is sending spam emailsBrandi Cagle21-04-18 23:59

    I don't have the sent email issue from Telus, but I have started getting emails from myself. One started in my inbox and now it's in my spam folder. This needs to be addressed ASAP by GMAIL. I've changed my password and don't know if I'm going to have further issues. My email is used for all sorts of professional needs, so I can't "just create a new account".

    Why are we having this issue? Add another "spam agency": I just happened to dig through my spam folder, looked at the From address, and it is startimes2.science ...

    Even though there are several posts on this, and people have reportedly called in to Google about this and been told it is a known issue and they are working on it, some sort of email from Google giving us an update would be great.

    Re: My account is sending spam emailsCaroline Barker22-04-18 00:10

    I am having the same problem too, loads of emails sent by me via Telus :( help

    Re: My account is sending spam emailsGREEN FPV22-04-18 00:10

    Talked to customer service; they said this issue is Gmail-wide and they are trying to address it. Mine did the same thing. I changed my password and they stopped a few minutes later, not sure if that helped.

    Re: My account is sending spam emailsAbhinav PS Jadon22-04-18 00:13

    +1 have been getting a lot of spam mails one after the other. 

    Re: My account is sending spam emailsTim _Spencer22-04-18 00:20
    +1 Google. Rarely does anything pass through gmail's spam filter, but half a dozen messages this morning. Symptoms as above - headers say sent by me, and they come via telus.com.
    Re: My account is sending spam emailsvarun.y.sharma22-04-18 00:22

    I am getting it too.. 
    Re: My account is sending spam emailsvarun.y.sharma22-04-18 00:24

    Are you guys using Mi products, or is this happening with any specific phones?

    Re: My account is sending spam emailsNicole McKinnon22-04-18 00:31

    Also happening to me!! SO frustrating! Glad to know that I'm not alone though, I was getting pretty concerned! Hoping google addresses this quicksmart!

    Re: My account is sending spam emailsOfficialpoiuytrewq464522-04-18 00:36
    I'm having the same thing, also with the via telus.com. I believe from my current research that someone, somewhere has found a bug with Telus and is using it to send as many spam emails via people's Gmail accounts as possible. If you want to read these, I STRONGLY recommend disabling images before opening the messages, due to the way the scammers track opened emails (or you can also open it in a VM, but still turn off images).
    Re: My account is sending spam emailsAdam Jönsson22-04-18 00:38

    Same thing happening to me, upvote this thread please

    (unknown)22-04-18 00:42 <The message has been deleted.>
    Re: My account is sending spam emailsJames E. Morgan22-04-18 00:46
    @catzrule:  Obviously, that isn't the case here.  Why bother going through all the effort to make a post and be condescending when your entire reply is based off of ignorance of the issue?  Stating, essentially, "Google is infallible" is not proper troubleshooting.

    Regardless, I'm glad to see there's at least movement on this. I had two sent out last night; I checked the headers, and they show Telus as well.

    Re: My account is sending spam emailsRo.E22-04-18 00:49

    I'm getting the same problem: I'm sending e-mails to myself and a lot of other random accounts. In one email it shows my name with some weird email address behind it. The topics of the mails are hotel vouchers and Leovegas bonuses, and the hotel vouchers are written in Dutch (I am Dutch). It started happening around 3 in the morning for me.
    It really needs to get fixed, but luckily I'm not the only one with this problem, which probably means we are not hacked

    Re: My account is sending spam emailsBrad Ritter22-04-18 00:51

    Same here!! Telus.com using our gmail-name for spoofing/spamming

    No way of stopping it !?

    Not happy Google!
    Re: My account is sending spam emailsSquall4Rinoa22-04-18 00:52

    This is not a situation involving compromised accounts.

    My account is sending spam emailsMarissaa G22-04-18 00:54

    This is happening to me too. I changed my password & looked for any suspicious activity. I had an ip address from va in my recent sign ins. I revoked access on a good amount of apps but I'm still finding sent messages from myself in my spam folder.

    Re: My account is sending spam emailsPatrick van Bree22-04-18 01:10

    Same problem here since this morning. What concerns me the most is that the mails appear in my SENT folder. Checked activity but there was no suspicious activity to be found. Changed my PW. GOOGLE please FIX this asap.
    Re: My account is sending spam emailsSquall4Rinoa22-04-18 01:13
    What you are seeing is not an issue of a compromised account.
    Gmail does not use the traditional mail folder setup; the Sent item is just a label, which is determined based on the From address.
    What has happened is that an email list has been picked up by an automated spoofing bot, which is sending emails to the users in this list from other emails in that same list.
    Because you appear as the sender via the From address, the message is filtered under Sent when the undeliverable message is bounced back.
    Those of you showing logins from locations other than where you are have an issue separate from this one, but it may have been a result of passwords being available on the email list.
    Please mark this as the best answer so it overwrites the current and totally incorrect best answer.
    Re: My account is sending spam emailsGeoff Stillman22-04-18 01:19
    Re: My account is sending spam emailsBrad Ritter22-04-18 01:20

    Have chatted with Telnus.com, they are aware of the situation & realize it is going to be a big problem for them.

    Try using their live support/ chat service ;)
    They gave an email repor...@telnus.com
    Re: My account is sending spam emailsDeltaRomeo22-04-18 01:34
    It's all coming from someone using telus.com phone service... The emails look like they are coming from the owner of the gmail account but looped back to the account owner of the gmail. !! WARNING !! ... "DO NOT BLOCK SENDER" or you will block yourself out of Gmail and other accounts associated with it... I found out the hard way but was able to ethically hack back into it after hours of trying.
    Re: My account is sending spam emailsdeepak basnet22-04-18 01:45
    Hello All

    I am another victim of telus.com and I reported that email as a phishing attack. Once an email is reported as a phishing attack, the whole case or email goes to Google for review. I think all of us victims of telus.com should do that so Google can have a look at it.

    BR
    Deepak 

    Re: My account is sending spam emailsjoaultman22-04-18 01:55

    I thought it came from my scumbag husband's gutter whore in the Philippines; it said I sent it out

    Original Message

    Message ID<Nk...@google.com=Mx.google.com>
    Created at:Sat, Apr 21, 2018 at 7:59 PM (Delivered after 7389 seconds)
    From:----------------- <jo.au...@gmail.com>
    To:Arm...@ottobrotto.com, ad...@betterwithfn.com, emy@altabos.stream, laura@lobidaaa.site, dnidina@jeeraricity.website, carl@merindadb.win, no...@emag.ro, r...@olx.ro, ram...@teensnow.com, k...@alnilin.com, l...@mbank.pl, man...@elpais.com.uy
    Subject:jo.aultman About Last Night Dick
    Delivered-To: jo.au...@gmail.com
    Received: by 10.55.73.8 with SMTP id w8csp2252573qka;
            Sat, 21 Apr 2018 19:02:40 -0700 (PDT)
    X-Google-Smtp-Source: AB8JxZrXmn/Cn+tO0yQ07qDLnX5vSI0IJUEhJGP6hWzvdcwTXOcnjyamLWijeU35XjtrSF2ouESv
    X-Received: by 10.55.4.9 with SMTP id 9mr16854120qke.217.1524362560601;
            Sat, 21 Apr 2018 19:02:40 -0700 (PDT)
    ARC-Seal: i=1; a=rsa-sha256; t=1524362560; cv=none;
            d=google.com; s=arc-20160816;
            b=xBOjozeXLr5pDrTkEAGE09UXBza0YxzetHx74EFIt8qOioOpnuwQbh/iz+autVIoVK
             pRQz70zgwfHfaXx61fU836GoU8Znn3xGAabch9KoiDDveYU54Du1rF3AkNlQTzxbcpuh
             4tn8utKDQBw3VwW7PLJLvTC32/eSutDLYEGRy6H9C0tn7EhMue7Qh7COhg6ylSOr4gCq
             PSx4f3YYl4m64ALUV+W6jOu07lYBuhirtvWksVZEGc4oLXJW7oxXZk658sDFZ/q4Htxo
             QcP7lFjEn1KSACl6hK27FQDbTaaCg65/laiZlrgo9UGG6AT13jtcGtgNcfOIwckZ7HcP
             FpiA==
    ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
            h=date:message-id:subject:to:from:arc-authentication-results;
            bh=xs2jJb+i2NOy09WV9SQTVdixbMK7UY8wQHPyXaOMZ78=;
            b=Lz83xSwvBeIo4YbHogZwOuuybckXR4XJY9R++nrrDV5Ea17IkellQK0AC7So1V2bDj
             RbS6ZuZnRfvgDUqAXg4oc2Gh6vpnTjpRcQogR4hx8fKaXlXRsngTkcyPeEq+r5OJCDjA
             tRSS4axBf/c3BgvzY3N+JoYohXK9B/hBHp0FdB4VGFeOXbwj0YyvEAoAKAMMCnM80UW7
             qeFJW3bo0l+UbYdN63fNKZzKFEnijzbJKdO7FYslljW5BX75RtZ3WqjBVgxQb0p83v53
             qvVpR/m2Z+93UlFwrAcLQcPH4pwpjn4VM/dZN1L2DcvYtKBS8Va+cjK7GmGl+aSCEbOz
             nADA==
    ARC-Authentication-Results: i=1; mx.google.com;
           spf=pass (google.com: domain of s5d4f8s1...@telus.com designates 54.152.240.15 as permitted sender) smtp.mailfrom=s5d4f8s1s5qs3qs5@telus.com;
           dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com
    Return-Path: <s5d4f8s1...@telus.com>
    Received: from studyhardplayhard.win (ec2-54-152-240-15.compute-1.amazonaws.com. [54.152.240.15])
            by mx.google.com with ESMTP id f64si1547726qka.61.2018.04.21.19.02.40
            for <jo.au...@gmail.com>;
            Sat, 21 Apr 2018 19:02:40 -0700 (PDT)
    Received-SPF: pass (google.com: domain of s5d4f8s1...@telus.com designates 54.152.240.15 as permitted sender) client-ip=54.152.240.15;
    Authentication-Results: mx.google.com;
           spf=pass (google.com: domain of s5d4f8s1...@telus.com designates 54.152.240.15 as permitted sender) smtp.mailfrom=s5d4f8s1s5qs3qs5@telus.com;
           dmarc=fail (p=NONE sp=QUARANTINE dis=NONE) header.from=gmail.com
    Received-SPF: softfail (google.com: domain of transitioning nk...@google.com does not designate 13.58.85.245 as permitted sender) client-ip=13.58.85.245;
    From: ----------------- <jo.au...@gmail.com>
    To: <Arm...@ottobrotto.com>, <ad...@betterwithfn.com>, <emy@altabos.stream>, <laura@lobidaaa.site>, <dnidina@jeeraricity.website>, <carl@merindadb.win>, <no...@emag.ro>, <r...@olx.ro>, <ram...@teensnow.com>, <k...@alnilin.com>, <l...@mbank.pl>, <man...@elpais.com.uy>
    Subject: jo.aultman
      About Last Night Dick
    Message-ID: <Nk...@google.com=Mx.google.com>
    Date: Sat, 21 Apr 2018 23:59:31 +0000
    Content-Type: multipart/report; boundary="f4f5e80f07d80f991b056a2936a0"; report-type=delivery-status
    X-EMMAIL: <@googlemail.frjo.au...@gmail.com>
    
    --f4f5e80f07d80f991b056a2936a0
    Content-Type: text/html; charset="UTF-8"
    
    Dear Scumbag,
     I just wanna tell you that you suck. I will never forgive you for what you did to me, you piece of crap. ASSHOLE! Do not you dare to talk to me again. Believe me, you don't wanna see my other side, ask your goddamn sister, she has seen it. 
    Kiss my ASS,
     T.B
    
    
    Sent from my iPhone
    
    --f4f5e80f07d80f991b056a2936a0--
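    The headers quoted above show the telltale spoofing pattern: SPF passes for the actual sender (telus.com), while DMARC fails for the gmail.com address on the From line. A minimal sketch of checking a saved message for that pattern (illustrative only, not an official Gmail tool; it assumes the raw message was downloaded via "Show original" to a local .eml file, and the filename below is a placeholder):

    import email
    from email import policy

    def looks_spoofed(path):
        # Parse a raw RFC 822 message saved from Gmail's "Show original".
        with open(path, "rb") as fh:
            msg = email.message_from_binary_file(fh, policy=policy.default)

        from_header = (msg.get("From") or "").lower()
        return_path = (msg.get("Return-Path") or "").lower()
        auth_results = " ".join(msg.get_all("Authentication-Results", [])).lower()

        # A message whose From line claims gmail.com but which fails DMARC,
        # or whose envelope sender (Return-Path) is another domain entirely,
        # was almost certainly spoofed rather than sent from your account.
        claims_gmail = "gmail.com" in from_header
        dmarc_failed = "dmarc=fail" in auth_results
        foreign_envelope = bool(return_path) and "gmail.com" not in return_path

        return claims_gmail and (dmarc_failed or foreign_envelope)

    if __name__ == "__main__":
        print(looks_spoofed("suspicious_message.eml"))  # placeholder filename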
    My account is sending spam emailst0ny22-04-18 02:00
    I'm impacted too.
    From reading up online I think that it isn't that someone has compromised Gmail accounts but they are spoofing email addresses and it looks like there isn't any real fix for this.

    I think the article below covers what is happening.

    https://lifehacker.com/how-spammers-spoof-your-email-address-and-how-to-prote-1579478914/amp

    Hopefully Google will look into blocking this, but from what I understood from the article, it is possible to block a spammer's IP address, but if they get another IP address then they can keep doing it.

    Fingers crossed

    Re: My account is sending spam emailsFlavio Waechter22-04-18 02:06
    Also found telus.com mails in my spam folder, with the following notice: Why is this message in Spam? It seems to be a fake "bounce" reply to a message that you didn't actually send.
    Re: My account is sending spam emailsMarie Linder22-04-18 02:26

    I've the same issue with mail to myself from my own account but via telus com
    I've made a filter to erase all mails from telus com   but they are still coming. I changed my password. Logged out. Check the activity but it's from my own IP-address.

    When I try to send this message in this group I get a warning that I'll be sharing some other mail addresses with you. There is one I can't remove: info at autotrader dot com
    I looked it up, it's from a Auto trade business in Atlanta, Georgia, USA. Even if I've taken away the @ and the .      it's still attached to this message.

    I've also tested to send a mail to myself - and there is no "sending forward" address in it, only me, looks ok.
    I'm in Sweden and it started around 03:00, the night between 21 april and 22 april.



    What more to do or just wait and hope that Google / Gmail will fix it?
    Re: My account is sending spam emailsstadi22-04-18 02:31

    had the same issue and as with everyone else my account is secure.

    Re: My account is sending spam emailsVokun Ah22-04-18 02:44

    Also getting the same problem over a dozen different conversations from telus, I've done whatever I can to beef up account security just in case. Now just waiting to see if google can do something about it.

    Re: My account is sending spam emailsDigitalSea22-04-18 02:50

    Same thing here in Australia. Started getting tonnes of spam from myself today 22 April 2018. This is worrying and seems widespread...

    Re: My account is sending spam emailsJames Parsons22-04-18 02:55
    Re: My account is sending spam emailsJames McGregor-Coope22-04-18 03:54
    Re: My account is sending spam emailsJames McGregor-Coope22-04-18 04:00

    Inside the Linux boot process

    Take a guided tour from the Master Boot Record to the first user-space application

    In the early days, bootstrapping a computer meant feeding a paper tape containing a boot program or manually loading a boot program using the front panel address/data/control switches. Today's computers are equipped with facilities to simplify the boot process, but that doesn't necessarily make it simple.

    Let's start with a high-level view of Linux boot so you can see the entire landscape. Then we'll review what's going on at each of the individual steps. Source references along the way will help you navigate the kernel tree and dig in further.

    Overview

    Figure 1 gives you the 20,000-foot view.

    Figure 1. The 20,000-foot view of the Linux boot process

    When a system is first booted, or is reset, the processor executes code at a well-known location. In a personal computer (PC), this location is in the basic input/output system (BIOS), which is stored in flash memory on the motherboard. The central processing unit (CPU) in an embedded system invokes the reset vector to start a program at a known address in flash/ROM. In either case, the result is the same. Because PCs offer so much flexibility, the BIOS must determine which devices are candidates for boot. We'll look at this in more detail later.

    When a boot device is found, the first-stage boot loader is loaded into RAM and executed. This boot loader is less than 512 bytes in length (a single sector), and its job is to load the second-stage boot loader.

    When the second-stage boot loader is in RAM and executing, a splash screen is commonly displayed, and Linux and an optional initial RAM disk (temporary root file system) are loaded into memory. When the images are loaded, the second-stage boot loader passes control to the kernel image and the kernel is decompressed and initialized. At this stage, the kernel checks the system hardware, enumerates the attached hardware devices, mounts the root device, and then loads the necessary kernel modules. When complete, the first user-space program (init) starts, and high-level system initialization is performed.

    That's Linux boot in a nutshell. Now let's dig in a little further and explore some of the details of the Linux boot process.

    System startup

    The system startup stage depends on the hardware that Linux is being booted on. On an embedded platform, a bootstrap environment is used when the system is powered on, or reset. Examples include U-Boot, RedBoot, and MicroMonitor from Lucent. Embedded platforms are commonly shipped with a boot monitor. These programs reside in a special region of flash memory on the target hardware and provide the means to download a Linux kernel image into flash memory and subsequently execute it. In addition to having the ability to store and boot a Linux image, these boot monitors perform some level of system test and hardware initialization. In an embedded target, these boot monitors commonly cover both the first- and second-stage boot loaders.

    In a PC, booting Linux begins in the BIOS at address 0xFFFF0. The first step of the BIOS is the power-on self test (POST). The job of the POST is to perform a check of the hardware. The second step of the BIOS is local device enumeration and initialization.

    Given the different uses of BIOS functions, the BIOS is made up of two parts: the POST code and runtime services. After the POST is complete, it is flushed from memory, but the BIOS runtime services remain and are available to the target operating system.

    To boot an operating system, the BIOS runtime searches for devices that are both active and bootable in the order of preference defined by the complementary metal oxide semiconductor (CMOS) settings. A boot device can be a floppy disk, a CD-ROM, a partition on a hard disk, a device on the network, or even a USB flash memory stick.

    Commonly, Linux is booted from a hard disk, where the Master Boot Record (MBR) contains the primary boot loader. The MBR is a 512-byte sector, located in the first sector on the disk (sector 1 of cylinder 0, head 0). After the MBR is loaded into RAM, the BIOS yields control to it.

    Stage 1 boot loader

    The primary boot loader that resides in the MBR is a 512-byte image containing both program code and a small partition table (see Figure 2). The first 446 bytes are the primary boot loader, which contains both executable code and error message text. The next sixty-four bytes are the partition table, which contains a record for each of four partitions (sixteen bytes each). The MBR ends with two bytes that are defined as the magic number (0xAA55). The magic number serves as a validation check of the MBR.

    Figure 2. Anatomy of the MBR
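    As a rough illustration of this layout (a sketch only, reading a placeholder disk image path rather than a live /dev/sda, and decoding only the fields mentioned above), the following reads the first sector, verifies the 0xAA55 signature, and prints the four partition entries, including the active flag that the stage 1 loader scans for:

    import struct

    SECTOR_SIZE = 512
    PART_TABLE_OFFSET = 446   # first 446 bytes are boot code
    PART_ENTRY_SIZE = 16      # four 16-byte partition entries follow
    MAGIC_OFFSET = 510        # last two bytes are the 0xAA55 signature

    # "disk.img" is a placeholder; reading /dev/sda directly needs root.
    with open("disk.img", "rb") as f:
        mbr = f.read(SECTOR_SIZE)

    magic = struct.unpack_from("<H", mbr, MAGIC_OFFSET)[0]
    print(f"magic: {magic:#06x} ({'valid' if magic == 0xAA55 else 'invalid'})")

    for i in range(4):
        entry = mbr[PART_TABLE_OFFSET + i * PART_ENTRY_SIZE:
                    PART_TABLE_OFFSET + (i + 1) * PART_ENTRY_SIZE]
        boot_flag = entry[0]                               # 0x80 marks the active partition
        ptype = entry[4]                                   # partition type code
        lba_start = struct.unpack_from("<I", entry, 8)[0]  # starting sector (LBA)
        num_sectors = struct.unpack_from("<I", entry, 12)[0]
        active = "active" if boot_flag == 0x80 else "inactive"
        print(f"partition {i + 1}: {active}, type={ptype:#04x}, "
              f"start LBA={lba_start}, sectors={num_sectors}")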

    The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed.

    Stage 2 boot loader

    The secondary, or second-stage, boot loader could be more aptly called the kernel loader. The task at this stage is to load the Linux kernel and optional initial RAM disk.

    The first- and second-stage boot loaders combined are called Linux Loader (LILO) or GRand Unified Bootloader (GRUB) in the x86 PC environment. Because LILO has some disadvantages that were corrected in GRUB, let's look into GRUB. (See many additional resources on GRUB, LILO, and related topics in the Resources section later in this article.)

    The great thing about GRUB is that it includes knowledge of Linux file systems. Instead of using raw sectors on the disk, as LILO does, GRUB can load a Linux kernel from an ext2 or ext3 file system. It does this by making the two-stage boot loader into a three-stage boot loader. Stage 1 (MBR) boots a stage 1.5 boot loader that understands the particular file system containing the Linux kernel image. Examples include reiserfs_stage1_5 (to load from a Reiser journaling file system) or e2fs_stage1_5 (to load from an ext2 or ext3 file system). When the stage 1.5 boot loader is loaded and running, the stage 2 boot loader can be loaded.

    With stage 2 loaded, GRUB can, upon request, display a list of available kernels (defined in /boot/grub/grub.conf, with soft links from /etc/grub/menu.lst and /etc/grub.conf). You can select a kernel and even amend it with additional kernel parameters. Optionally, you can use a command-line shell for greater manual control over the boot process.

    With the second-stage boot loader in memory, the file system is consulted, and the default kernel image and initrd image are loaded into memory. With the images ready, the stage 2 boot loader invokes the kernel image.

    Kernel

    With the kernel image in memory and control given from the stage 2 boot loader, the kernel stage begins. The kernel image isn't so much an executable kernel, but a compressed kernel image. Typically this is a zImage (compressed image, less than 512KB) or a bzImage (big compressed image, greater than 512KB), that has been previously compressed with zlib. At the head of this kernel image is a routine that does some minimal amount of hardware setup and then decompresses the kernel contained within the kernel image and places it into high memory. If an initial RAM disk image is present, this routine moves it into memory and notes it for later use. The routine then calls the kernel and the kernel boot begins.
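    To make the idea concrete, here is a small sketch (illustrative only; it assumes the kernel was gzip-compressed, as described above, and that "bzImage" is a local copy of the image, for example copied from /boot) that locates the embedded gzip stream by its magic bytes and inflates the kernel proper from it:

    import zlib

    GZIP_MAGIC = b"\x1f\x8b\x08"

    with open("bzImage", "rb") as f:   # placeholder path to a kernel image
        blob = f.read()

    offset = blob.find(GZIP_MAGIC)
    if offset < 0:
        raise SystemExit("no gzip stream found (kernel may use another compressor)")

    # wbits=16+MAX_WBITS tells zlib to expect a gzip header; decompression
    # stops cleanly at the end of the embedded stream, ignoring the trailing
    # payload that follows it inside the bzImage.
    vmlinux = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS).decompress(blob[offset:])
    print(f"gzip stream at offset {offset}, decompressed kernel is {len(vmlinux)} bytes")

    Kernels built with other compressors (xz, zstd, and so on) would need the corresponding magic bytes and decompressor.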

    When the bzImage (for an i386 image) is invoked, you begin at ./arch/i386/boot/head.S in the start assembly routine (see Figure 3 for the major flow). This routine does some basic hardware setup and invokes the startup_32 routine in ./arch/i386/boot/compressed/head.S. This routine sets up a basic environment (stack, etc.) and clears the Block Started by Symbol (BSS). The kernel is then decompressed through a call to a C function called decompress_kernel (located in ./arch/i386/boot/compressed/misc.c). When the kernel is decompressed into memory, it is called. This is yet another startup_32 function, but this function is in ./arch/i386/kernel/head.S.

    In the new startup_32 function (also called the swapper or process 0), the page tables are initialized and memory paging is enabled. The type of CPU is detected along with any optional floating-point unit (FPU) and stored away for later use. The start_kernel function is then invoked (init/main.c), which takes you to the non-architecture-specific Linux kernel. This is, in essence, the main function for the Linux kernel.

    Figure 3. Major functions flow for the Linux kernel i386 boot

    With the call to start_kernel, a long list of initialization functions are called to set up interrupts, perform further memory configuration, and load the initial RAM disk. In the end, a call is made to kernel_thread (in arch/i386/kernel/process.c) to start the init function, which is the first user-space process. Finally, the idle task is started and the scheduler can now take control (after the call to cpu_idle). With interrupts enabled, the pre-emptive scheduler periodically takes control to provide multitasking.

    During the boot of the kernel, the initial-RAM disk (initrd) that was loaded into memory by the stage 2 boot loader is copied into RAM and mounted. This initrd serves as a temporary root file system in RAM and allows the kernel to fully boot without having to mount any physical disks. Since the necessary modules needed to interface with peripherals can be part of the initrd, the kernel can be very small, but still support a large number of possible hardware configurations. After the kernel is booted, the root file system is pivoted (via pivot_root) where the initrd root file system is unmounted and the real root file system is mounted.

    The initrd function allows you to create a small Linux kernel with drivers compiled as loadable modules. These loadable modules give the kernel the means to access disks and the file systems on those disks, as well as drivers for other hardware assets. Because the root file system is a file system on a disk, the initrd function provides a means of bootstrapping to gain access to the disk and mount the real root file system. In an embedded target without a hard disk, the initrd can be the final root file system, or the final root file system can be mounted via the Network File System (NFS).
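    You can peek inside such an image yourself. The sketch below assumes the common modern layout of a gzip-compressed cpio archive in "newc" format (older initrds were compressed file-system images instead, and some distributions prepend an uncompressed microcode archive that this ignores); "initrd.img" is a placeholder path. It lists the names and sizes of the files packed into the temporary root file system:

    import gzip

    def align4(n):
        return (n + 3) & ~3   # cpio pads names and data to 4-byte boundaries

    with open("initrd.img", "rb") as f:
        archive = gzip.decompress(f.read())

    pos = 0
    while pos + 110 <= len(archive):
        header = archive[pos:pos + 110]       # fixed 110-byte ASCII header
        if header[:6] != b"070701":           # "newc" magic
            raise SystemExit(f"unexpected magic at offset {pos}")
        filesize = int(header[54:62], 16)     # c_filesize field
        namesize = int(header[94:102], 16)    # c_namesize field (includes NUL)
        name = archive[pos + 110:pos + 110 + namesize - 1].decode()
        if name == "TRAILER!!!":
            break
        print(f"{filesize:10d}  {name}")
        pos += align4(110 + namesize)         # header + name, padded
        pos += align4(filesize)               # file data, padded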

    Init

    After the kernel is booted and initialized, the kernel starts the first user-space application. This is the first program invoked that is compiled with the standard C library. Prior to this point in the process, no standard C applications have been executed.

    In a desktop Linux system, the first application started is commonly/sbin/init. But it need not be. Rarely do embedded systems require the extensive initialization provided by init (as configured through /etc/inittab). In many cases, you can invoke a simple shell script that starts the necessary embedded applications.
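    For instance, a bare-bones init for such an embedded or initramfs experiment might be little more than the following sketch (hypothetical, not a drop-in replacement for a real init; it assumes a mount binary, for example from BusyBox, and /bin/sh are present in the root file system):

    import os
    import subprocess

    # Mount the pseudo-filesystems most userland tools expect.
    for fstype, target in (("proc", "/proc"), ("sysfs", "/sys"), ("devtmpfs", "/dev")):
        os.makedirs(target, exist_ok=True)
        subprocess.call(["mount", "-t", fstype, fstype, target])

    # Start the single "application" this system exists to run: a shell.
    shell = subprocess.Popen(["/bin/sh"])

    # PID 1 must reap orphaned children for the lifetime of the system.
    while True:
        try:
            os.wait()
        except ChildProcessError:
            break
    print("init: all children have exited, halting")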

    Summary

    Much like Linux itself, the Linux boot process is highly flexible, supporting a huge number of processors and hardware platforms. In the beginning, the loadlin boot loader provided a simple way to boot Linux without any frills. The LILO boot loader expanded the boot capabilities, but lacked any file system awareness. The latest generation of boot loaders, such as GRUB, permits Linux to boot from a range of file systems (from Minix to Reiser).

    Enough with the intrusive updates

    This week-end, I went to my gaming PC in my living room. The PC did not respond when I grabbed the mouse. Puzzled, I pressed the “on” button on the PC. Then I saw that Microsoft saw fit to update my PC while I wasn’t looking. I had configured this particular PC to my liking, and many of my careful settings are gone.

    Every time the PC restarts, it seems that “Steam” (a popular game store) has to self-update which can often take several minutes. Who are these engineers who think that an application should do a lengthy self-update with each reboot? That’s hostile toward your users.

    I told myself: “no matter, I will grab my Apple laptop and do some relaxing work”. Only, my Apple needed to be booted up. And rebooting triggered an update installation… So I waited and I waited… and then the laptop came up with a mysterious message… basically, the update failed and the laptop was left in an unusable state.

    Here is what you find online on this problem:

    (…) when I got to the office, and turned on my Mac, it rebooted into recovery, with the Installer Log open, and a dialog that read, “The macOS Installation couldn’t be completed”. I called Apple and they had me run a check disk, (…) Since posting this, Apple called me back and they said that I should be able to reinstall macOS Sierra without formatting the drive and I should be back to the state I was before the failure.
    (…) Fortunately I was able to simply reinstall macOS High Sierra by booting into Recovery Mode (Command ⌘ + R) and choosing Reinstall macOS High Sierra.

    Five minutes of Googling gave me a solution. A whole evening of downloads gave me back a semi-working laptop… except that it is once again bugging me to update it to the very latest version.

    Going back to the gaming PC, the Nvidia graphics card driver insisted on an update… which I did… only to be left with a machine that had the disk throughput wasted on something called Nvidia Experience (or maybe it is GeForce Experience)… I understand that this invasive Nvidia Experience software is responsible for updates. If anything should not be taxing your machine, it is a piece of software in charge of updates. Here is what you find online about this “Nvidia Experience”:

    User 1: Hello, whenever I open the GeForce Experience, I cannot use the program because of a box which says there is an update and I should install it. Doing so starts a program but nothing is shown, except in Task Manager where the GeForce program is running at 100% on the Disk. The same thing happens to the disk when I attempt to uninstall the GeForce Experience.

    User 2: solution: don’t install GFE.

    I also visited my mother-in-law. Thankfully, her PC is configured to never update anything ever. In the task bar, I found a few pieces of software that tried to update themselves, including Java and some "zip" utility. I refused the updates. Obviously. Why on Earth would my mother-in-law even need Java in the first place? And whatever this zip utility is, I am sure it can continue just fine without updates.

    All in all, the experience left me in a terrible mood. I also wasted a lot of time. I am quite good with computers, so I could fix all the problems, but I wonder… what do other people do?

    What is annoying is how intrusive all these updates are. I realize that it is hard to do updates entirely silently, without the user noticing. But what we have now is the opposite. The updates are “in your face”.

    I can excuse Microsoft more easily… because they have to support an untold number of different configurations… but Apple should do better. I understand that security updates require a reboot, but these updates should literally never fail. It is intolerable to be left with an unusable laptop after an update.

    Updates should be as unintrusive as possible. They should not disturb or annoy us.

    So how do we end up in such a sorry state of affairs?

    My theory is that within software companies, the very worst engineers end up in charge of software updates. "Well, Joe can't be trusted to work on the actual driver, but maybe he can maintain the 'Experience' software in charge of updates?" And Joe does a terrible job, but nobody cares because these are just updates, right?


    Libraries Can Be More Than Just Books (2017)

    Some might complain that such public-private partnerships do not earn the libraries enough space or money, or that the resulting buildings are too big. Such criticism ignores the complexities of building in the country’s oldest and largest metropolis. These deals do not undermine the libraries within — they underpin their futures. When cities lack housing, new libraries and capital dollars, here is a way to get all three for the nominal public investment of an underused property, one the public continues to own once it is built.

    Indeed, New York has already undertaken a number of library partnerships that underscore their promise.

    Across from the Museum of Modern Art, a new 53rd Street branch has opened beneath a luxury hotel to largely positive reviews. New residential towers, from Battery Park City to the BAM Cultural District in Fort Greene, have incorporated libraries. A similar development underway in Brooklyn Heights has drawn criticism for having only market-rate apartments; this overlooks the $52 million earned in the deal, which is underwriting the Sunset Park project, among others.

    Admirable as these are, New York has fallen well short of its potential. The city has built only 16 branches in the past two decades, a paltry 8 percent increase, and nothing compared with rival metropolitan areas.

    Other cities are much further ahead. Starting in 1995, Chicago created a master plan tying libraries to community development and has replaced more than three-quarters of its branches. In 1998, Seattle issued the largest library bond in history, allowing for the construction or replacement of all 27 branches. And Columbus, Ohio, unveiled a plan to double, and possibly triple, its system’s square footage over two decades.

    New York ought to take such an integrated approach to the billion-dollar needs of its libraries. At the very least, it should embrace the partnerships already flourishing here and foster even more.

    My organization, the Center for an Urban Future, working with the architecture firm Marble Fairbanks, has identified at least 25 libraries with surplus development rights. These could easily be redeveloped into libraries beneath housing, or even offices or manufacturing centers, depending on a community’s needs. Factoring in some smart rezonings, dozens more libraries could be upgraded in this fashion.

    Walking While Black

    “My only sin is my skin. What did I do, to be so black and blue?”

    –Fats Waller, “(What Did I Do to Be So) Black and Blue?”

    “Manhattan’s streets I saunter’d, pondering.”

    –Walt Whitman, “Manhattan’s Streets I Saunter’d, Pondering”

    My love for walking started in childhood, out of necessity. No thanks to a stepfather with heavy hands, I found every reason to stay away from home and was usually out—at some friend’s house or at a street party where no minor should be— until it was too late to get public transportation. So I walked. The streets of Kingston, Jamaica, in the 1980s were often terrifying—you could, for instance, get killed if a political henchman thought you came from the wrong neighborhood, or even if you wore the wrong color. Wearing orange showed affiliation with one political party and green with the other, and if you were neutral or traveling far from home you chose your colors well. The wrong color in the wrong neighborhood could mean your last day. No wonder, then, that my friends and the rare nocturnal passerby declared me crazy for my long late-night treks that traversed warring political zones. (And sometimes I did pretend to be crazy, shouting non sequiturs when I passed through especially dangerous spots, such as the place where thieves hid on the banks of a storm drain. Predators would ignore or laugh at the kid in his school uniform speaking nonsense.)

    I made friends with strangers and went from  being a very shy and awkward kid to being an extroverted, awkward one. The beggar, the vendor, the poor laborer—those were experienced wanderers, and they became my nighttime instructors; they knew the streets and delivered lessons on how to navigate and enjoy them. I imagined myself as a Jamaican Tom Sawyer, one moment sauntering down the streets to pick low-hanging mangoes that I could reach from the sidewalk, another moment hanging outside a street party with battling sound systems, each armed with speakers piled to create skyscrapers of heavy bass. These streets weren’t frightening. They were full of adventure when they weren’t serene. There I’d join forces with a band of merry walkers, who’d miss the last bus by mere minutes, our feet still moving as we put out our thumbs to hitchhike to spots nearer home, making jokes as vehicle after vehicle raced past us. Or I’d get lost in Mittyesque moments, my young mind imagining alternate futures. The streets had their own safety: Unlike at home, there I could be myself without fear of bodily harm. Walking became so regular and familiar that the way home became home.

    "Father and Son," from Ruddy Roye's "When Living is Protest" series.“Father and Son,” from Ruddy Roye’s “When Living is Protest” series.

    The streets had their rules, and I loved the challenge of trying to master them. I learned how to be alert to surrounding dangers and nearby delights, and prided myself on recognizing telling details that my peers missed. Kingston was a map of complex, and often bizarre, cultural and political and social activity, and I appointed myself its nighttime cartographer. I’d know how to navigate away from a predatory pace, and to speed up to chat when the cadence of a gait announced friendliness. It was almost always men I saw. A lone woman walking in the middle of the night was as common a sight as Sasquatch; moonlight pedestrianism was too dangerous for her. Sometimes at night as I made my way down from hills above Kingston, I’d have the impression that the city was set on “pause” or in extreme slow motion, and that as I descended I was cutting across Jamaica’s deep social divisions. I’d make my way briskly past the mansions in the hills overlooking the city, now transformed into a carpet of dotted lights under a curtain of stars, saunter by middle-class subdivisions hidden behind high walls crowned with barbed wire, and zigzag through neighborhoods of zinc and wooden shacks crammed together and leaning like a tight-knit group of limbo dancers. With my descent came an increase in the vibrancy of street life—except when it didn’t; some poor neighborhoods had both the violent gunfights and the eerily deserted streets of the cinematic Wild West. I knew well enough to avoid those even at high noon.

    I’d begun hoofing it after dark when I was 10 years old. By 13 I was rarely home before midnight, and some nights found me racing against dawn. My mother would often complain, “Mek yuh love street suh? Yuh born a hospital; yuh neva born a street.” (“Why do you love the streets so much? You were born in a hospital, not in the streets.”)

    * * * *

    I left Jamaica in 1996 to attend college in New Orleans, a city I’d heard called “the northernmost Caribbean city.” I wanted to discover—on foot, of course—what was Caribbean and what was American about it. Stately mansions on oak-lined streets with streetcars clanging by, and brightly colored houses that made entire blocks look festive; people in resplendent costumes dancing to funky brass bands in the middle of the street; cuisine—and aromas—that mashed up culinary traditions from Africa, Europe, Asia, and the American South; and a juxtaposition of worlds old and new, odd and familiar: Who wouldn’t want to explore this?

    On my first day in the city, I went walking for a few hours to get a feel for the place and to buy supplies to transform my dormitory room from a prison bunker into  a welcoming space. When some university staff members found out what I’d been up to, they warned me to restrict my walking to the places recommended as safe to tourists and the parents of freshmen. They trotted out statistics about New Orleans’s crime rate. But Kingston’s crime rate dwarfed those numbers, and I decided to ignore these well-meant cautions. A city was waiting to be discovered, and I wouldn’t let inconvenient facts get in the way. These American criminals are nothing on Kingston’s, I thought. They’re no real threat to me.

    What no one had told me was that I was the one who would be considered a threat.

    I wasn’t prepared for any of this. I had come from a majority-black country in which no one was wary of me because of my skin color. Now I wasn’t sure who was afraid of me.

    Within days I noticed that many people on the street seemed apprehensive of me: Some gave me a circumspect glance as they approached, and then crossed the street; others, ahead, would glance behind, register my presence, and then speed up; older white women clutched their bags; young white men nervously greeted me, as if exchanging a salutation for their safety: “What’s up, bro?” On one occasion, less than a month after my arrival, I tried to help a man whose wheelchair was stuck in the middle of a crosswalk; he threatened to shoot me in the face, then asked a white pedestrian for help.

    I wasn’t prepared for any of this. I had come from a majority-black country in which no one was wary of me because of my skin color. Now I wasn’t sure who was afraid of me. I was especially unprepared for the cops. They regularly stopped and bullied me, asking questions that took my guilt for granted. I’d never received what many of my African American friends call “The Talk”: No parents had told me how to behave when I was stopped by the police, how to be as polite and cooperative as possible, no matter what they said or did to me. So I had to cobble together my own rules of engagement. Thicken my Jamaican accent. Quickly mention my college. “Accidentally” pull out my college identification card when asked for my driver’s license.

    "Walking in Harlem," from“Walking in Harlem,” from Ruddy Roye’s “When Living is Protest” series.

    My survival tactics began well before I left my dorm. I got out of the shower with the police in my head, assembling a cop-proof wardrobe. Light-colored oxford shirt. V-neck sweater. Khaki pants. Chukkas. Sweatshirt or T-shirt with my university insignia. When I walked I regularly had my identity challenged, but I also found ways to assert it. (So I’d dress Ivy League style, but would, later on, add my Jamaican pedigree by wearing Clarks Desert Boots, the footwear of choice of Jamaican street culture.) Yet the all-American sartorial choice of white T-shirt and jeans, which many police officers see as the uniform of black troublemakers, was off limits to me—at least, if I wanted to have the freedom of movement I desired.

    In this city of exuberant streets, walking became a complex and often oppressive negotiation. I would see a white woman walking toward me at night and cross the street to reassure her that she was safe. I would forget something at home but not immediately turn around if someone was behind me, because I discovered that a sudden backtrack could cause alarm. (I had a cardinal rule: Keep a wide perimeter from people who might consider me a danger. If not, danger might visit me.) New Orleans suddenly felt more dangerous than Jamaica. The sidewalk was a minefield, and every hesitation and self-censored compensation reduced my dignity. Despite my best efforts, the streets never felt comfortably safe. Even a simple salutation was suspect.

    One night, returning to the house that, eight years after my arrival, I thought I’d earned the right to call my home,   I waved to a cop driving by. Moments later, I was against his car in handcuffs. When I later asked him—sheepishly, of course; any other way would have asked for bruises—why he had detained me, he said my greeting had aroused his suspicion. “No one waves to the police,” he explained. When I told friends of his response, it was my behavior, not his, that they saw as absurd. “Now why would you do a dumb thing like that?” said one. “You know better than to make nice with police.”

    * * * *

    A few days after I left on a visit to Kingston, Hurricane Katrina slashed and pummeled New Orleans. I’d gone not because of the storm but because my adoptive grandmother, Pearl, was dying of cancer. I hadn’t wandered those streets in eight years, since my last visit, and I returned to them now mostly at night, the time I found best for thinking, praying, crying. I walked to feel less alienated—from myself, struggling with the pain of seeing my grandmother terminally  ill; from my home in New Orleans, underwater and seemingly abandoned; from my home country, which now, precisely because of its childhood familiarity, felt foreign to me. I was surprised by how familiar those streets felt. Here was the corner where the fragrance of jerk chicken greeted me, along with the warm tenor and peace-and-love message of Half Pint’s “Greetings,” broadcast from a small but powerful speaker to at least a half-mile radius. It was as if I had walked into 1986, down to the soundtrack. And there was the wall of the neighborhood shop, adorned with the Rastafarian colors red, gold, and green along with images  of local and international heroes Bob Marley, Marcus Garvey, and Haile Selassie. The crew of boys leaning against it and joshing each other were recognizable; different faces, similar stories.

    I was astonished at how safe the streets felt to me, once again one black body among many, no longer having to anticipate the many ways my presence might instill fear and how to offer some reassuring body language. Passing police cars were once again merely passing police cars. Jamaican police could be pretty brutal, but they didn’t notice me the way American police did. I could be invisible in Jamaica in a way I can’t be invisible in the United States. Walking had returned to me a greater set of possibilities.

    And why walk, if not to create a new set of possibilities? Following serendipity, I added new routes to the mental maps I had made from constant walking in that city from childhood to young adulthood, traced variations on the old pathways. Serendipity, a mentor once told me, is a secular way of speaking of grace; it’s unearned favor. Seen theologically, then, walking is an act of faith. Walking is, after all, interrupted falling. We see, we listen, we speak, and we trust that each step we take won’t be our last, but will lead us into a richer understanding of the self and the world.

    In Jamaica, I felt once again as if the only identity that mattered was my own, not the constricted one that others had constructed for me. I strolled into my better self. I said, along with Kierkegaard, “I have walked myself into my best thoughts.”

    * * * *

    When I tried to return to New Orleans from Jamaica a month later, there were no flights. I thought about flying to Texas so I could make my way back to my neighborhood as soon as it opened for reoccupancy, but my adoptive aunt, Maxine, who hated the idea of me returning to a hurricane zone before the end of hurricane season, persuaded me to come to stay in New York City instead. (To strengthen her case she sent me an article about Texans who were buying up guns because they were afraid of the influx of black people from New Orleans.)

    This wasn’t a hard sell: I wanted to be in a place where I could travel by foot and, more crucially, continue to reap the solace of walking at night. And I was eager to follow in the steps of the essayists, poets, and novelists who’d wandered that great city before me—Walt Whitman, Herman Melville, Alfred Kazin, Elizabeth Hardwick. I had visited the city before, but each trip had felt like a tour in a sports car. I welcomed the chance to stroll. I wanted to walk alongside Whitman’s ghost and “descend to the pavements, merge with the crowd, and gaze with them.” So I left Kingston, the popular Jamaican farewell echoing in my mind: “Walk good!” Be safe on your journey, in other words, and all  the best in your endeavors.

    * * * *

    I arrived in New York City, ready to lose myself in Whitman’s “Manhattan crowds, with their turbulent musical chorus!” I marveled at what Jane Jacobs praised as “the ballet of the good city sidewalk” in her old neighborhood, the West Village. I walked up past midtown skyscrapers, releasing their energy as lively people onto the streets, and on into the Upper West Side, with its regal Beaux Arts apartment buildings, stylish residents, and buzzing streets. Onward into Washington Heights, the sidewalks spilled over with an ebullient mix of young and old Jewish and Dominican American residents, past leafy Inwood, with parks whose grades rose to reveal beautiful views of the Hudson River, up to my home in Kingsbridge in the Bronx, with its rows of brick bungalows and apartment buildings nearby Broadway’s bustling sidewalks and the peaceful expanse of Van Cortlandt Park. I went to Jackson Heights in Queens to take in people socializing around garden courtyards in Urdu, Korean, Spanish, Russian, and Hindi. And when I wanted a taste of home, I headed to Brooklyn, in Crown Heights, for Jamaican food and music and humor mixed in with the flavor of New York City. The city was my playground.

    I explored the city with friends, and then with a woman I’d begun dating. She walked around endlessly with me, taking in New York City’s many pleasures. Coffee shops open until predawn; verdant parks with nooks aplenty; food and music from across the globe; quirky neighborhoods with quirkier residents. My impressions of the city took shape during my walks with her.

    As with the relationship, those first few months of urban exploration were all romance. The city was beguiling, exhilarating, vibrant. But it wasn’t long before reality reminded me I wasn’t invulnerable, especially when I walked alone.

    One night in the East Village, I was running to dinner when a white man in front of me turned and punched me in the chest with such force that I thought my ribs had braided around my spine. I assumed he was drunk or had mistaken me for an old enemy, but found out soon enough that he’d merely assumed I was a criminal because of my race. When he discovered I wasn’t what he imagined, he went on to tell me that his assault was my own fault for running up behind him. I blew off this incident as an aberration, but the mutual distrust between me and the police was impossible to ignore. It felt elemental. They’d enter a subway platform; I’d notice them. (And I’d notice all the other black men registering their presence as well, while just about everyone else remained oblivious to them.) They’d glare. I’d get nervous and glance. They’d observe me steadily. I’d get uneasy. I’d observe them back, worrying that I looked suspicious. Their suspicions would increase. We’d continue the silent, uneasy dialogue until the subway arrived and separated us at last.

    I returned to the old rules I’d set for myself in New Orleans, with elaboration. No running, especially at night; no sudden movements; no hoodies; no objects—especially shiny ones—in hand; no waiting for friends on street corners, lest I be mistaken for a drug dealer; no standing near   a corner on the cell phone (same reason). As comfort set in, inevitably I began to break some of those rules, until a night encounter sent me zealously back to them, having learned that anything less than vigilance was carelessness.

    After a sumptuous Italian dinner and drinks with friends, I was jogging to the subway at Columbus Circle—I was running late to meet another set of friends at a concert downtown. I heard someone shouting and I looked up to see a police officer approaching with his gun trained on me. “Against the car!” In no time, half a dozen cops were upon me, chucking me against the car and tightly handcuffing me. “Why were you running?” “Where are you going?” “Where are you coming from?” “I said, why were you running?!” Since I couldn’t answer everyone at once, I decided to respond first to the one who looked most likely to hit me. I was surrounded by a swarm and tried to focus on just one without inadvertently aggravating the others.

    For a black man, to assert your dignity before the police was to risk assault.

    It didn’t work. As I answered that one, the others got frustrated that I wasn’t answering them fast enough and barked at me. One of them, digging through my already-emptied pockets, asked if I had any weapons, the question more an accusation. Another badgered me about where I was coming from, as if on the fifteenth round I’d decide to tell him the truth he imagined. Though I kept saying—calmly, of course, which meant trying to manage a tone that ignored my racing heart and their spittle-filled shouts in my face—that I had just left friends two blocks down the road, who were all still there and could vouch for me, to meet other friends whose text messages on my phone could verify that, yes, sir, yes, officer, of course, officer, it made no difference. For a black man, to assert your dignity before the police was to risk assault. In fact, the dignity of black people meant less to them, which was why I always felt safer being stopped in front of white witnesses than black witnesses. The cops had less regard for the witness and entreaties of black onlookers, whereas the concern of white witnesses usually registered on them. A black witness asking a question or politely raising an objection could quickly become a fellow detainee. Deference to the police, then, was sine qua non for a safe encounter.

    The cops ignored my explanations and my suggestions and continued to snarl at me. All except one of them, a captain. He put his hand on my back, and said to no one in particular, “If he was running for a long time he would have been sweating.” He then instructed that the cuffs be removed. He told me that a black man had stabbed someone earlier two or three blocks away and they were searching for him. I noted that I had no blood on me and had told his fellow officers where I’d been and how to check my alibi—unaware that it was even an alibi, as no one had told me why I was being held,  and  of course, I hadn’t dared ask. From what I’d seen, anything beyond passivity would be interpreted as aggression.

    The police captain said I could go. None of the cops who detained me thought an apology was necessary. Like the thug who punched me in the East Village, they seemed to think it was my own fault for running.

    Humiliated, I tried not to make eye contact with the onlookers on the sidewalk, and I was reluctant to pass them to be on my way. The captain, maybe noticing my shame, offered to give me a ride to the subway station. When he dropped me off and I thanked him for his help, he said, “It’s because you were polite that we let you go. If you were acting up it would have been different.” I nodded and said nothing.

    * * * *

    I realized that what I least liked about walking in New York City wasn’t merely having to learn new rules of navigation and socialization—every city has its own. It was the arbitrariness of the circumstances that required them, an arbitrariness that made me feel like a child again, that infantilized me. When we first learn to walk, the world around us threatens to crash into us. Every step is risky. We train ourselves to walk without crashing by being attentive to our movements, and extra-attentive to the world around us. As adults we walk without thinking, really. But as a black adult I am often returned to that moment in childhood when I’m just learning to walk. I am once again on high alert, vigilant. Some days, when I am fed up with being considered a troublemaker upon sight, I joke that the last time a cop was happy to see a black male walking was when that male was a baby taking his first steps.

    On many walks, I ask white friends to accompany me, just to avoid being treated like a threat. Walks in New York City, that is; in New Orleans, a white woman in my company sometimes attracted more hostility. (And it is not lost on me that my woman friends are those who best understand my plight; they have developed their own vigilance in an environment where they are constantly treated as targets of sexual attention.) Much of my walking is as my friend Rebecca once described it: A pantomime undertaken to avoid the choreography of criminality.

    * * * *

    Walking while black restricts the experience of walking, renders inaccessible the classic Romantic experience of walking alone. It forces me to be in constant relationship with others, unable to join the New York flâneurs I had read about and hoped to join. Instead of meandering aimlessly in the footsteps of Whitman, Melville, Kazin, and Vivian Gornick, more often I felt that I was tiptoeing in Baldwin’s—the Baldwin who wrote, way back in 1960, “Rare, indeed, is the Harlem citizen, from the most circumspect church member to the most shiftless adolescent, who does not have a long tale to tell of police incompetence, injustice, or brutality. I myself have witnessed and endured it more than once.”

    Walking as a black man has made me feel simultaneously more removed from the city, in my awareness that I am perceived as suspect, and more closely connected to it, in the full attentiveness demanded by my vigilance. It has made me walk more purposefully in the city, becoming part of its flow, rather than observing, standing apart.

    * * * *

    But it also means that I’m still trying to arrive in a city that isn’t quite mine. One definition of home is that it’s somewhere we can most be ourselves. And when are we more ourselves but when walking, that natural state in which we repeat one of the first actions we learned? Walking—the simple, monotonous act of placing one foot before the other to prevent falling—turns out not to be so simple if you’re black. Walking alone has been anything but monotonous for me; monotony is a luxury.

    A foot leaves, a foot lands, and our longing gives it momentum from rest to rest. We long to look, to think, to talk, to get away. But more than anything else, we long to be free. We want the freedom and pleasure of walking without fear—without others’ fear—wherever we choose. I’ve lived in New York City for almost a decade and have not stopped walking its fascinating streets. And I have not stopped longing to find the solace that I found as a kid on the streets of Kingston. Much as coming to know New York City’s streets has made it closer to home to me, the city also withholds itself from me via those very streets. I walk them, alternately invisible and too prominent. So I walk caught between memory and forgetting, between memory and forgiveness.

    Garnette Cadogan’s essay first appeared in issue one of Freeman’s and is forthcoming in The Fire This Time: A New Generation Speaks About Race (Scribner), ed. Jesmyn Ward.

    Featured image: “Damion,” from Ruddy Roye’s “When Living is Protest” series.

    Everyone Wants to Go Home During Extra Innings, Maybe Even the Umps


    In the top of the 10th inning in Sunday night’s nationally televised contest between the Astros and Rangers — one that will most likely be remembered as the night a 44-year-old nearly no-hit the defending World Series champs — the visiting Rangers grabbed a 3-1 lead.

    In the bottom of the frame, the home team’s hopes rested on Jake Marisnick, who, with runners at the corners, two outs, and his team still trailing by a pair of runs, worked a 3-1 count against Jake Diekman. A Marisnick walk would load the bases for the Astros, bringing reigning World Series MVP George Springer to the plate, a hit away from tying or winning the game.

    On Diekman’s fifth pitch, it appeared that Marisnick had earned a walk. “This is not a strike, this is off the plate,” ESPN broadcaster Jessica Mendoza opined as the network’s K-Zone showed the pitch a few inches outside.

    Home plate umpire Adam Hamari disagreed, however, calling the pitch strike two. Marisnick struck out swinging on the following pitch to end the game, and the outfielder slammed his bat in disgust.

    Umps miss balls and strikes all the time. But the strike two in that Marisnick at-bat is emblematic of a larger pattern of borderline calls, albeit one that umps probably produce unwittingly: In extra innings, umpires will vary ball and strike calls in ways that tend to end the game as quickly as possible.

    To find this pattern, we looked at pitches thrown in the bottom of extra innings, when the game could quickly end.1 If the away team scored in the top half of an inning and held a lead, as was the case in Marisnick’s at-bat, an umpire hoping for a faster exit would call more strikes, making it more likely that the home team will be sent down quickly. Alternatively, if the home team got a runner aboard, umps would be more likely to favor them by calling fewer strikes, giving the team more chances to get the runner across the plate and send everyone home.

    Here’s a chart showing how umps changed their behavior in these situations between 2008 and 2016, a sample of roughly 32,000 pitches. Each square shows the percentage increase or decrease in the likelihood that a pitch is called a strike in that part of the strike zone. The color of each square (green for more balls, pink for more strikes) corresponds with which side umps are favoring, while how darkly shaded the square is reflects the size of the change (in percentage points).

    The left panel shows the comparative rate of strike calls when, in the bottom of an inning in extras, the batting team is positioned to win — defined as having a runner on base in a tie game — relative to those rates in situations when there’s no runner on base in a tie game. When the home team has a baserunner, umps call more balls, thus setting up more favorable counts for home-team hitters, creating more trouble for the pitcher, and giving the home team more chances to end the game.

    The right-hand side of the chart shows squares at identical strike zone locations, but shaded according to changes in strike rates when the extra-inning scenario favors the away team. More specifically, any time the away team is trying to hold onto a lead in the bottom half of an inning after the ninth. Here, and as in the pitch to Marisnick, umps call more strikes, giving the batting team fewer chances to extend the game.

    Altogether, teams that are in a position to win get up to a 27 percentage point increase in the rate of called balls, while teams that look like they’re about to lose see increased strike rates of up to 33 percentage points. Differences are largest in fringe areas of the strike zone, where the opportunity for umpire discretion is the highest: 62 percent of these squares in the left panel are green, while 72 percent of fringe squares on the right panel are pink.2 In both settings, umps are more likely to use whatever behavior gets the game over with the quickest. That may not necessarily be a bad thing. MLB games are already slow, and extra-innings play often comes late at night, which means smaller crowds and fewer television viewers.
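    To make the comparison concrete, here is a minimal, illustrative sketch of the kind of per-zone calculation described above. It is not FiveThirtyEight's actual code or data; the synthetic pitches, the coarse strike-zone grid, and the situation labels are all assumptions invented for the example.

    # Illustrative sketch only -- not the article's methodology or data.
    # Compares called-strike rate per strike-zone cell in one extra-inning
    # situation against the rate in neutral situations.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n = 5000
    # Synthetic called pitches: location plus the game situation they were thrown in.
    pitches = pd.DataFrame({
        'zone_x': rng.uniform(-1.5, 1.5, n),   # horizontal location (ft from plate center)
        'zone_y': rng.uniform(0.5, 4.5, n),    # vertical location (ft above ground)
        'situation': rng.choice(['neutral', 'away_holding_lead'], n),
    })
    # Synthetic umpire: slightly more generous with strikes when the away team leads.
    edge = (pitches.zone_x.abs() > 0.8) | (pitches.zone_y < 1.5) | (pitches.zone_y > 3.5)
    base_p = np.where(edge, 0.25, 0.85)
    bump = np.where(pitches.situation == 'away_holding_lead', 0.10, 0.0)
    pitches['called_strike'] = rng.random(n) < np.clip(base_p + bump, 0, 1)

    def strike_rate_grid(df):
        """Called-strike rate in each cell of a coarse strike-zone grid."""
        xb = pd.cut(df.zone_x, [-1.5, -0.75, 0.0, 0.75, 1.5])
        yb = pd.cut(df.zone_y, [0.5, 1.5, 2.5, 3.5, 4.5])
        return df.groupby([xb, yb], observed=True).called_strike.mean()

    neutral = strike_rate_grid(pitches[pitches.situation == 'neutral'])
    leading = strike_rate_grid(pitches[pitches.situation == 'away_holding_lead'])
    # Percentage-point change per cell; positive means more called strikes on home batters.
    print(((leading - neutral) * 100).round(1))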

    MLB did not immediately respond to a request for comment, but the league has made no secret of its interest in shortening games. Even so, umpires may not be consciously deciding who should win. Humans are susceptible to various biases they may not be aware of, and even just a bit of fatigue could unintentionally push umpires in one direction or the other on borderline calls.

    Moreover, according to sources within the umpire union, umps don’t get paid more when games go to extra innings. In other words, MLB asks them to take on extra work without providing any extra compensation. That’s one more reason they may want the game to end early — their paycheck’s the same regardless.

    Why Stanislaw Lem’s futurism deserves attention (2015)


    I remember well the first time my certainty of a bright future evaporated, when my confidence in the panacea of technological progress was shaken. It was in 2007, on a warm September evening in San Francisco, where I was relaxing in a cheap motel room after two days covering The Singularity Summit, an annual gathering of scientists, technologists, and entrepreneurs discussing the future obsolescence of human beings.

    In math, a “singularity” is a function that takes on an infinite value, usually to the detriment of an equation’s sense and sensibility. In physics, the term usually refers to a region of infinite density and infinitely curved space, something thought to exist inside black holes and at the very beginning of the Big Bang. In the rather different parlance of Silicon Valley, “The Singularity” is an inexorably-approaching event in which humans ride an accelerating wave of technological progress to somehow create superior artificial intellects—intellects which with predictable unpredictability then explosively make further disruptive innovations so powerful and profound that our civilization, our species, and perhaps even our entire planet are rapidly transformed into some scarcely imaginable state. Not long after The Singularity’s arrival, argue its proponents, humanity’s dominion over the Earth will come to an end.

    I had encountered a wide spectrum of thought in and around the conference. Some attendees overflowed with exuberance, awaiting the arrival of machines of loving grace to watch over them in a paradisiacal post-scarcity utopia, while others, more mindful of history, dreaded the possible demons new technologies could unleash. Even the self-professed skeptics in attendance sensed the world was poised on the cusp of some massive technology-driven transition. A typical conversation at the conference would refer at least once to some exotic concept like whole-brain emulation, cognitive enhancement, artificial life, virtual reality, or molecular nanotechnology, and many carried a cynical sheen of eschatological hucksterism: Climb aboard, don’t delay, invest right now, and you, too, may be among the chosen who rise to power from the ashes of the former world!

    Over vegetarian hors d’oeuvres and red wine at a Bay Area villa, I had chatted with the billionaire venture capitalist Peter Thiel, who planned to adopt an “aggressive” strategy for investing in a “positive” Singularity, which would be “the biggest boom ever,” if it doesn’t first “blow up the whole world.” I had talked with the autodidactic artificial-intelligence researcher Eliezer Yudkowsky about his fears that artificial minds might, once created, rapidly destroy the planet. At one point, the inventor-turned-proselytizer Ray Kurzweil teleconferenced in to discuss, among other things, his plans for becoming transhuman, transcending his own biology to achieve some sort of eternal life. Kurzweil believes this is possible, even probable, provided he can just live to see The Singularity’s dawn, which he has pegged at sometime in the middle of the 21st century. To this end, he reportedly consumes some 150 vitamin supplements a day.


    Returning to my motel room exhausted each night, I unwound by reading excerpts from an old book, Summa Technologiae. The late Polish author Stanislaw Lem had written it in the early 1960s, setting himself the lofty goal of forging a secular counterpart to the 13th-century Summa Theologica, Thomas Aquinas’s landmark compendium exploring the foundations and limits of Christian theology. Where Aquinas argued for the certainty of a Creator, an immortal soul, and eternal salvation as based on scripture, Lem concerned himself with the uncertain future of intelligence and technology throughout the universe, guided by the tenets of modern science.

    To paraphrase Lem himself, the book was an investigation of the thorns of technological roses that had yet to bloom. And yet, despite Lem’s later observation that “nothing ages as fast as the future,” to my surprise most of the book’s nearly half-century-old prognostications concerned the very same topics I had encountered during my days at the conference, and felt just as fresh. Most surprising of all, in subsequent conversations I confirmed my suspicions that among the masters of our technological universe gathered there in San Francisco to forge a transhuman future, very few were familiar with the book or, for that matter, with Lem. I felt like a passenger in a car who discovers a blindspot in the central focus of the driver’s view.

    Such blindness was, perhaps, understandable. In 2007, only fragments of Summa Technologiae had appeared in English, via partial translations undertaken independently by the literary scholar Peter Swirski and a German software developer named Frank Prengel. These fragments were what I read in the motel. The first complete English translation, by the media researcher Joanna Zylinska, only appeared in 2013. By Lem’s own admission, from the start the book was a commercial and a critical failure that “sank without a trace” upon its first appearance in print. Lem’s terminology and dense, baroque style is partially to blame—many of his finest points were made in digressive parables, allegories, and footnotes, and he coined his own neologisms for what were, at the time, distinctly over-the-horizon fields. In Lem’s lexicon, virtual reality was “phantomatics,” molecular nanotechnology was “molectronics,” cognitive enhancement was “cerebromatics,” and biomimicry and the creation of artificial life was “imitology.” He had even coined a term for search-engine optimization, a la Google: “ariadnology.” The path to advanced artificial intelligence he called the “technoevolution” of “intellectronics.”

    Even now, if Lem is known at all to the vast majority of the English-speaking world, it is chiefly for his authorship of Solaris, a popular 1961 science-fiction novel that spawned two critically acclaimed film adaptations, one by Andrei Tarkovsky and another by Steven Soderbergh. Yet to say the prolific author only wrote science fiction would be foolishly dismissive. That so much of his output can be classified as such is because so many of his intellectual wanderings took him to the outer frontiers of knowledge.

    Lem was a polymath, a voracious reader who devoured not only the classic literary canon, but also a plethora of research journals, scientific periodicals, and popular books by leading researchers. His genius was in standing on the shoulders of scientific giants to distill the essence of their work, flavored with bittersweet insights and thought experiments that linked their mathematical abstractions to deep existential mysteries and the nature of the human condition. For this reason alone, reading Lem is an education, wherein one may learn the deep ramifications of breakthroughs such as Claude Shannon’s development of information theory, Alan Turing’s work on computation, and John von Neumann’s exploration of game theory. Much of his best work entailed constructing analyses based on logic with which anyone would agree, then showing how these eminently reasonable premises lead to astonishing conclusions. And the fundamental urtext for all of it, the wellspring from which the remainder of his output flowed, is Summa Technologiae.

    The core of the book is a heady mix of evolutionary biology, thermodynamics—the study of energy flowing through a system—and cybernetics, a diffuse field pioneered in the 1940s by Norbert Wiener studying how feedback loops can automatically regulate the behavior of machines and organisms. Considering a planetary civilization this way, Lem posits a set of feedbacks between the stability of a society and its degree of technological development. In its early stages, Lem writes, the development of technology is a self-reinforcing process that promotes homeostasis, the ability to maintain stability in the face of continual change and increasing disorder. That is, incremental advances in technology tend to progressively increase a society’s resilience against disruptive environmental forces such as pandemics, famines, earthquakes, and asteroid strikes. More advances lead to more protection, which promotes more advances still.


    And yet, Lem argues, that same technology-driven positive feedback loop is also an Achilles heel for planetary civilizations, at least for ours here on Earth. As advances in science and technology accrue and the pace of discovery continues its acceleration, our society will approach an “information barrier” beyond which our brains—organs blindly, stochastically shaped by evolution for vastly different purposes—can no longer efficiently interpret and act on the deluge of information.

    Past this point, our civilization should reach the end of what has been a period of exponential growth in science and technology. Homeostasis will break down, and without some major intervention, we will collapse into a “developmental crisis” from which we may never fully recover. Attempts to simply muddle through, Lem writes, would only lead to a vicious circle of boom-and-bust economic bubbles as society meanders blindly down a random, path-dependent route of scientific discovery and technological development. “Victories, that is, suddenly appearing domains of some new wonderful activity,” he writes, “will engulf us in their sheer size, thus preventing us from noticing some other opportunities—which may turn out to be even more valuable in the long run.”

    Lem thus concludes that if our technological civilization is to avoid falling into decay, human obsolescence in one form or another is unavoidable. The sole remaining option for continued progress would then be the “automatization of cognitive processes” through development of algorithmic “information farms” and superhuman artificial intelligences. This would occur via a sophisticated plagiarism, the virtual simulation of the mindless, brute-force natural selection we see acting in biological evolution, which, Lem dryly notes, is the only technique known in the universe to construct philosophers, rather than mere philosophies.

    Star power: George Clooney plays the role of Dr. Chris Kelvin in the 2002 film adaptation of Lem’s 1961 novel, Solaris. Courtesy of 20th Century Fox.

    The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us. This is the basis for Lem’s explorations of The Singularity, and in describing its consequences he reaches many conclusions that most of its present-day acolytes would share. But there is a difference between the typical modern approach and Lem’s, not in degree, but in kind.

    Unlike the commodified futurism now so common in the bubble-worlds of Silicon Valley billionaires, Lem’s forecasts weren’t really about seeking personal enrichment from market fluctuations, shiny new gadgets, or simplistic ideologies of “disruptive innovation.” In Summa Technologiae and much of his subsequent work, Lem instead sought to map out the plausible answers to questions that today are too often passed over in silence, perhaps because they fail to neatly fit into any TED Talk or startup business plan: Does technology control humanity, or does humanity control technology? Where are the absolute limits for our knowledge and our achievement, and will these boundaries be formed by the fundamental laws of nature or by the inherent limitations of our psyche? If given the ability to satisfy nearly any material desire, what is it that we actually would want?

    Lem’s explorations of these questions are dominated by his obsession with chance, the probabilistic tension between chaos and order as an arbiter of human destiny. He had a deep appreciation for entropy, the capacity for disorder to naturally, spontaneously arise and spread, cursing some while sparing others. It was an appreciation born from his experience as a young man in Poland before, during, and after World War II, where he saw chance’s role in the destruction of countless dreams, and where, perhaps by pure chance alone, his Jewish heritage did not result in his death. “We were like ants bustling in an anthill over which the heel of a boot is raised,” he wrote in Highcastle, an autobiographical memoir. “Some saw its shadow, or thought they did, but everyone, the uneasy included, ran about their usual business until the very last minute, ran with enthusiasm, devotion—to secure, to appease, to tame the future.” From the accumulated weight of those experiences, Lem wrote in the New Yorker in 1986, he had “come to understand the fragility that all systems have in common,” and “how human beings behave under extreme conditions—how their behavior when they are under enormous pressure is almost impossible to predict.”

    To Lem (and, to their credit, a sizeable number of modern thinkers), the Singularity is less an opportunity than a question mark, a multidimensional crucible in which humanity’s future will be forged.

    I couldn’t help thinking of Lem’s question mark that summer in 2007. Within and around the gardens surrounding the neoclassical Palace of Fine Arts Theater where the Singularity Summit was taking place, dark and disruptive shadows seemed to loom over the plans and aspirations of the gathered well-to-do. But they had precious little to do with malevolent superintelligences or runaway nanotechnology. Between my motel and the venue, panhandlers rested along the sidewalk, or stood with empty cups at busy intersections, almost invisible to everyone. Walking outside during one break between sessions, I stumbled across a homeless man defecating between two well-manicured bushes. Even within the context of the conference, hints of desperation sometimes tinged the not-infrequent conversations about raising capital; the subprime mortgage crisis was already unfolding that would, a year later, spark the near-collapse of the world’s financial system. While our society’s titans of technology were angling for advantages to create what they hoped would be the best of all possible futures, the world outside reminded those who would listen that we are barely in control even today.


    I attended two more Singularity Summits, in 2008 and 2009, and during that three-year period, all the much-vaunted performance gains in various technologies seemed paltry against a more obvious yet less-discussed pattern of accelerating change: the rapid, incessant growth in global ecological degradation, economic inequality, and societal instability. Here, forecasts tend to be far less rosy than those for our future capabilities in information technology. They suggest, with some confidence, that when and if we ever breathe souls into our machines, most of humanity will not be dreaming of transcending their biology, but of fresh water, a full belly, and a warm, safe bed. How useful would a superintelligent computer be if it was submerged by storm surges from rising seas or disconnected from a steady supply of electricity? Would biotech-boosted personal longevity be worthwhile in a world ravaged by armed, angry mobs of starving, displaced people? More than once I have wondered why so many high technologists are more concerned by as-yet-nonexistent threats than the much more mundane and all-too-real ones literally right before their eyes.

    Lem was able to speak to my experience of the world outside the windows of the Singularity conference. A thread of humanistic humility runs through his work, a hard-gained certainty that technological development too often takes place only in service of our most primal urges, rewarding individual greed over the common good. He saw our world as exceedingly fragile, contingent upon a truly astronomical number of coincidences, where the vagaries of the human spirit had become the most volatile variables of all.

    It is here that we find Lem’s key strength as a futurist. He refused to discount human nature’s influence on transhuman possibilities, and believed that the still-incomplete task of understanding our strengths and weaknesses as human beings was a crucial prerequisite for all speculative pathways to any post-Singularity future. Yet this strength also leads to what may be Lem’s great weakness, one which he shares with today’s hopeful transhumanists: an all-too-human optimism that shines through an otherwise-dispassionate darkness, a fervent faith that, when faced with the challenge of a transhuman future, we will heroically plunge headlong into its depths. In Lem’s view, humans, as imperfect as we are, shall always strive to progress and improve, seeking out all that is beautiful and possible rather than what may be merely convenient and profitable, and through this we may find salvation. That we might instead succumb to complacency, stagnation, regression, and extinction is something he acknowledges but can scarcely countenance. In the end, Lem, too, was seduced—though not by quasi-religious notions of personal immortality, endless growth, or cosmic teleology, but instead by the notion of an indomitable human spirit.

    Like many other ideas from Summa Technologiae, this one finds its best expression in one of Lem’s works of fiction, his 1981 novella Golem XIV, in which a self-programming military supercomputer that has bootstrapped itself into sentience delivers a series of lectures critiquing evolution and humanity. Some would say it is foolish to seek truth in fiction, or to draw equivalence between an imaginary character’s thoughts and an author’s genuine beliefs, but for me the conclusion is inescapable. When the novella’s artificial philosopher makes its pronouncements through a connected vocoder, it is the human voice of Lem that emerges, uttering a prophecy of transcendence that is at once his most hopeful—and perhaps, in light of trends today, his most erroneous:

    “I feel that you are entering an age of metamorphosis; that you will decide to cast aside your entire history, your entire heritage and all that remains of natural humanity—whose image, magnified into beautiful tragedy, is the focus of the mirrors of your beliefs; that you will advance (for there is no other way), and in this, which for you is now only a leap into the abyss, you will find a challenge, if not a beauty; and that you will proceed in your own way after all, since in casting off man, man will save himself.”

    Freelance writer Lee Billings is the author of Five Billion Years of Solitude: The Search for Life Among the Stars.

    Photograph by Forum/UIG/Getty Images

    This article was originally published online in our “Genius” issue in October, 2014.

    Programming in the Debugger


    April 20, 2018

    Jupyter presents a unique programming style where the programmer can change her code while it's running, reducing the cost of mistakes and improving the interactivity of the programming process. I discuss the benefits and limitations of this approach along with the related work.

    For the last year, I’ve been creating a lot of programs for data manipulation (specifically video analysis), and while the heavy lifting is usually in C or Rust, the lighter-weight metadata processing is all Python. As with many in the data science field, I’ve fallen in love with Jupyter notebooks, the code editing and output visualization environment. Jupyter’s most-touted feature is the ability to intertwine code, narrative, and visualization. Write code to create a graph and display it inline. Create literate documents with Markdown headers. And this is great! I love the ability to create living documents that change when your data does, like the one below I’ve been developing.

    However, after using Jupyter for a while, I’ve noticed that it has changed my programming process in a way more fundamental than simply inlining visualization of results. Specifically, Jupyter enables programmers to edit their program while it is running. Here’s a quick example. Let’s say I have some expensive computation (detect faces in a video) and I want to post-process the results (draw boxes on the faces). Normally, the development process would be that I write a first draft of the program:

    ## face.py
    import cv2

    # Takes 1 minute
    video = load_video()

    # Takes 20 minutes
    all_faces = detect_faces(video)

    # Takes 1 minute
    for (frame, frame_faces) in zip(video.frames(), all_faces):
        cv2.imwrite('frame{}.jpg', draw_faces(frame, frame_faces))

    If I run this program (python face.py), it would probably run to completion, except… oh no! A bug: I forgot to format the 'frame{}.jpg' string (note: not a bug a type system would have found, this isn’t just a dynamic typing issue). But I had to wait 22 minutes to discover this bug, and now when I fix it, I have to re-run my program and wait another 22 minutes to confirm that it works. Why? Even though the bug was in the post-processing, I have to re-run the core computation, since my program exited upon completion, releasing its contents from memory. I should be able to just change the bug, and verify my change in only a minute. How can we do that?

    Consider the same workflow, but running in Jupyter. First, I would define a separate code cell to run each part of the pipeline:
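    (The original post shows this step as a notebook screenshot, which isn't reproduced here. As a plain-text sketch, the split might look like the following, reusing the same load_video, detect_faces, and draw_faces helpers from the script above; they stand in for whatever real pipeline you have.)

    # Cell 1 -- takes about a minute
    video = load_video()

    # Cell 2 -- the expensive step, roughly 20 minutes
    all_faces = detect_faces(video)

    # Cell 3 -- cheap post-processing
    for (frame, frame_faces) in zip(video.frames(), all_faces):
        cv2.imwrite('frame{}.jpg', draw_faces(frame, frame_faces))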

    I would execute each part of the pipeline:

    Then, after inspecting the output and noticing the error, change the last code cell, and only re-run that cell:
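    (Again, the screenshot is missing from this copy. The post doesn't show its exact correction, but one plausible fix is to format the filename with a frame index, editing only the last cell and re-running just that cell.)

    # Cell 3, edited in place; only this cell is re-run
    for (i, (frame, frame_faces)) in enumerate(zip(video.frames(), all_faces)):
        cv2.imwrite('frame{}.jpg'.format(i), draw_faces(frame, frame_faces))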

    This works exactly as intended! We were able to edit our program while it was running, and then re-run only the part that needed fixing. In some sense, this is an obvious result—a REPL is designed to do exactly this, allow you to create new code while inside a long-running programming environment. But the difference between Jupyter and a REPL is that Jupyter is persistent. Code which I write in a REPL disappears along with my data when the REPL exits, but Jupyter notebooks hang around. Jupyter’s structure of delimited code cells enables a programming style where each can be treated like an atomic unit, where if it completes, then its effects are persisted in memory for other code cells to process.

    More generally, we can view this as a form of programming in the debugger. Rather than separating code creation and code execution as different phases of the programming cycle, they become intertwined. Jupyter performs many of the functions of a debugger—inspecting the values of variables, setting breakpoints (ends of code cells), providing rich visualization of program intermediates (e.g. graphs)—except the programmer can react to the program’s execution by changing the code while it runs.

    However, this style of programming with Jupyter has its limits. For example, Jupyter penalizes abstraction by removing this interactive debuggability. In the face detection example above, if we made our code generic over the input video:

    def detect_and_draw_faces(video):
        all_faces = detect_faces(video)
        for (frame, frame_faces) in zip(video.frames(), all_faces):
            cv2.imwrite('frame{}.jpg', draw_faces(frame, frame_faces))

    video = load_video()
    detect_and_draw_faces(video)

    Because a single function cannot be split up over multiple code cells, we cannot break the execution of the function in the middle, change its code, and continue to run. Interactive editing and debugging are limited to top-level code. This is actually a really common problem for me, since I’ll write straight-line code for a single instance of the pipeline (on a particular video, as originally), but then want to run it over many videos in batch. Inevitably I miss some edge case not exposed by the example video, and I can no longer debug the issue in the same way.

    Additionally, this model of debugging/editing only works for code blocks that are pure, i.e. don’t rely on global state outside the block. For example, if I have a program like:

    x = 0

    ### new code block ###
    x += 1
    print('{}'.format(y))

    Then if I fix the variable name error (format(x) instead of y) and re-run the code block, the value of x has changed, and my program output depends on the number of times I debugged the function. Not good! Essentially we need some kind of reverse debugging (also known as time-traveling debugging), where we can rewind the state of the program back to a reasonable point before the error occurred. This has been done for Python, but to my knowledge has never been integrated into Jupyter in a sensible way.
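    To spell out the failure mode, here is a sketch of the kernel state across runs, assuming y was never defined anywhere else in the notebook; the comments trace what each run does:

    x = 0                      # Cell 1: run once

    ### new code block ###
    x += 1                     # Cell 2, first run: x is now 1...
    print('{}'.format(y))      # ...then this line raises NameError (y is undefined)

    ### new code block, edited and re-run ###
    x += 1                     # x is now 2, because the first run already incremented it
    print('{}'.format(x))      # prints 2; a fresh top-to-bottom run would print 1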

    This idea has existed in many other forms, particularly in the web world where hot-swapping code is common. An ideal model is embodied in the Elm Reactor time-traveling debugger for the Elm programming language. If your language is pure and functionally reactive, then you almost get this mode of debugging for free (plus a little tooling). The interesting question, then, is if your language is impure, or if your language gradually ensured purity, how far can we go with these edit/debug interactions? Could we integrate reverse debugging into Jupyter for Python? Could I edit a function in the middle of its execution? In what scenarios would such a programming style be most useful?

    As always, let me know what you think. Either drop me a line at wcrichto@cs.stanford.edu or leave a comment on the Hacker News thread.
