Channel: Hacker News

United States v. Microsoft Corp. Dismissed [pdf]


New Story (nonprofit S15) Is Hiring a Global Operations Associate


Who We Are:

We’re a close, motivated team on a big mission to change trajectories for families, communities, and, ultimately, the larger nonprofit sector. New Story is headquartered in San Francisco, with a smaller team located in Atlanta. Since starting about 3 years ago, we’ve raised over $13M and have funded 1,300+ homes in 11 communities around the world. Our journey has taken us through Y Combinator to being named one of Fast Company’s 2017 Most Innovative Companies in the World. We are fortunate to be backed by excellent partners and advisors.
 

A Welcoming Workplace For All:

New Story is a company at the forefront of social impact and technology. We build homes for families across the globe living in survival mode. We make this happen with innovative technology that makes our work more effective and we share our tech, processes, and learnings with others in the non-profit space so they can scale and grow as well. We know we will only succeed if we have a team who brings a wide variety of perspectives and backgrounds. Our team grew up around the world, from Turkey to Georgia to Venezuela, and welcome all applicants, regardless of background, to apply. We work hard to create a welcoming culture of shared values and radical candor.
 

Benefits and Perks:

Change Trajectories: New Story started with building homes, and now we’re on a mission to change the entire nonprofit sector. As a team member, you’ll quantify, know, and share the impact of our work on the ground.

Premium Health Care: We believe in thriving communities and that starts with our team being happy and healthy. This is why we offer generous health insurance, dental and vision care. We also offer a monthly fitness stipend so you can invest in your own wellness and health.

Time Off on Your Terms: Sure, we have unlimited vacation and you can take time for sickness, family, and fun as needed. But we believe in the value of taking time to reconnect and take care of yourself. We’ll ensure you’re taking time off to better yourself (and have fun).

Travel to the Field: This person will be a liaison and steward of our partnerships and relationships with our local partners. They will travel to New Story Communities in Mexico, El Salvador, Haiti, and Bolivia once a quarter on average.

A Contagious Cancer That Jumped Between Species


Off the northwest coast of Spain, a delicious clam called the golden carpet shell is suffering from an extraordinary type of cancer—a contagious leukemia.

Almost every other case of cancer in animals—including humans—begins when a single cell in an individual starts growing and dividing uncontrollably, producing a tumor. If the tumor kills its host, it dies too. But the clam’s leukemia is caused by cancer cells that have become independent parasites; they can travel between individuals, creating fresh tumors in each new host.

And if that wasn’t astonishing enough, this transmissible tumor didn’t even originate in a golden carpet shell. Instead, its genes reveal that it first arose in a related species—the pullet shell. It’s the first known cancer that not only jumps into new hosts but has, at least once, leapt over the species barrier.

Some cancers are caused by contagious things like HPV, the virus that causes cervical cancer, or Helicobacter pylori, a bacterium that causes stomach cancer. But in these cases, the tumor cells themselves stay put. It’s so exceptional for cancers to become infectious in their own right that until a year ago, scientists knew of only two types that did so. One is a facial tumor that infects Tasmanian devils. It evolved recently, spreads through bites, and threatens the future of its hosts. The second is a far older venereal tumor that affects dogs. It arose 11,000 years ago and has spread to six continents.

A third example emerged last year. Along North America’s east coast, soft-shell clams were dying from a strange type of leukemia. Michael Metzger and Stephen Goff, scientists from Columbia University, studied these cancers and found that they were all genetically identical to each other, but genetically distinct from their hosts. That’s the same pattern seen in the Tasmanian devil and dog tumors—a clear sign that these cancers arrive in their hosts, rather than originating from them. They drift through the sea, these selfish shellfish cells, traveling from one cancer-ridden clam to another.

Intrigued, Metzger and Goff polled their marine biologist colleagues and learned that many other shellfish species are afflicted by similar rapidly spreading leukemias. They collected cockles and golden carpet shell clams from the coast of Spain, and mussels from the coast of Vancouver. In all three cases, they found the same signature pattern: a genetic match between all the tumors, and a mismatch between each tumor and its respective host.

“Prior to this, we believed that transmissible cancers were bizarre flukes of nature that happened due to a set of unfortunate coincidences in very unlucky species,” says Elizabeth Murchison, a University of Cambridge cancer researcher who studies the Tasmanian devil tumor. Instead, they are “probably relatively common, at least in some bivalves, and the processes whereby cancers become transmissible are not as rare as we previously thought.”

Indeed, Metzger and Goff found that cockles have given rise to two strains of contagious cancer. Their tumors belonged to two distinctive lineages, each of which seems to have independently arisen from a different healthy cell. That explains why the same disease presents in two distinct ways, characterized by cells that look different under the microscope. “People noted that, said, ‘Isn’t that odd?’ and moved on,” says Goff. “This explains the mystery.”

There’s precedent for a dual origin. Earlier this year, Murchison showed that the Tasmanian devil’s contagious tumor also arose twice. “We absolutely couldn’t believe it,” she told me at the time. “It’s the last thing I could have possibly imagined.”

It might now be the second-to-last thing. A bigger surprise came when Metzger and Goff studied the golden carpet shells. Their tumors were not just genetically distinct from their hosts, but wildly so, with matches as low as 78 percent for certain critical genes. “They weren’t even close,” says Goff. “We then realized they were a near perfect match to the cells of another species, the pullet shell.” The cells must have originated there before jumping into the golden carpets.

Oddly, the pullet shells themselves show no signs of the cancer. They may have given rise to it, but they no longer suffer from it. Why? “One could imagine that the species-of-origin is now resistant to the tumor,” says Goff, “but we don’t know that.”

“It would be good to know how the tumors are transmitted in nature,” says Clare Rebbeck, from the University of Cambridge.  On land, devils and dogs can only spread their tumors by biting and mating. In the water, Goff thinks that transmission might be far easier. The infected mollusks release cancer cells in their feces, so every time they poop, they seed the water with transmissible tumors. And since they are filter-feeders that sieve through huge volumes of seawater, they are also well-suited to picking up the cells from their neighbors.

This might also explain why shellfish seem to be so uniquely susceptible to contagious cancers. Goff guesses that their immune systems are involved, too. Human immune systems would usually stop incompatible foreign cells from setting up shop in our bodies; that’s why people have to take immunosuppressive drugs before receiving organ transplants. “The most fascinating aspect of transmissible cancers is their ability to avoid those immune responses,” says Hannah Siddle from the University of Southampton. “And the transmission of cells across a species barrier is even more striking.”

Fortunately—for us, if not the clams and mussels—there’s no evidence that these cells could affect humans, or that we are plagued by any contagious cancer at all. “I would only worry deeply if I was a mollusk,” Goff says. “Could it happen in rare circumstances? We’d be eager to look for that. It would presumably have to happen between genetically closely matched peers, or people who are profoundly immune-compromised.”

Still, his discovery adds to the growing realization that contagious cancers are more common than anyone assumed. There are now eight of them, and they are organisms unlike anything else. Since time immemorial, people have dreamed of immortality. We’ve filled our fiction with vampires, elves, fountains of youth, horcruxes, and the Singularity. Well, in the real world, this is what immortality looks like. Each of these eight contagious cancers represents a single animal—a dog, two Tasmanian devils, and a handful of shellfish—whose original body is long dead, but that lives on as dynasties of cells that propagate in new hosts.

Dependency injection on Android with dagger-android and Kotlin


In a previous blog post, we used plain dagger to do DI on Android. But there is another package from Google named dagger-android, and it’s tailored for Android. Things get more interesting here. Let’s see.

If you don’t know what dagger is, or what dependency injection is, and basic dagger terms like module and component seem cryptic to you, I strongly suggest you read my other blog post first, which is an implementation with plain dagger. It is easier to understand and takes just minutes. Then you can come back to this post for a more Android-flavored approach and see which pattern you like most.

1. The big picture

When Google wrote dagger-android, they wanted to reduce the boilerplate code you need to write with plain dagger, so they introduced some new abstractions. It’s very easy to get lost here, so I think it’s better we review the base pattern first. As I said in the previous blog post, in order to do DI, you need to prepare the initialization of dependencies somewhere so that you can use them later. In dagger’s terms:

  • You declare how to generate these dependencies in a @Module.
  • You use a @Component to connect the dependencies with their consumers.
  • Then, inside the consumer class, you @Inject these dependencies; dagger creates the instances for you from the @Module.

2. Add dependencies

apply plugin: 'kotlin-kapt'

dependencies {
    kapt 'com.google.dagger:dagger-compiler:2.15'
    implementation 'com.google.dagger:dagger-android:2.15'
    kapt 'com.google.dagger:dagger-android-processor:2.15'
    implementation 'com.google.dagger:dagger-android-support:2.15'
}

3. Let’s create our modules

First, let’s create an application wide @Module.

@Module
class AppModule {
    @Provides
    @Singleton
    fun provideSharedPreference(app: Application): SharedPreferences =
        PreferenceManager.getDefaultSharedPreferences(app)
}

This @Provides method is the provider for any consumer that has the @Inject annotation. Dagger matches the type for you, which means that when an @Inject asks for a SharedPreferences, dagger looks through all the @Provides methods, finds the match, and supplies that instance.

But something interesting here: who is going to supply that app parameter when provideSharedPreference() is called, and where does it come from? Well, we will see that soon.

4. Create an App component

Now, since this is an application-wide dependency, we will connect this @Module to the Android application with a @Component. For those who have React or Vue experience: it’s not that kind of component at all. :D

@Singleton
@Component(
    modules = [
        AndroidSupportInjectionModule::class,
        AppModule::class
    ]
)
interface AppComponent : AndroidInjector<App> {

    @Component.Builder
    interface Builder {
        @BindsInstance
        fun create(app: Application): Builder

        fun build(): AppComponent
    }
}

Here, something different from plain dagger is that this interface extends another interface: AndroidInjector<App>, and it needs to declare its Builder as well. The App here is your custom application class; we will create it later on.

This Builder is for dagger, so it knows how to create() the component. In our example, the caller has to pass app to create(). No magic here: as the signature indicates, you pass it when you first invoke this method.

@BindsInstance is where it shines. It takes the incoming app parameter and saves it for later use, so that in our previous AppModule, provideSharedPreference() can be called with a parameter named app. This is how it gets that parameter: the app was bound via @BindsInstance when AppComponent.Builder first called create().

AndroidSupportInjectionModule::class comes from dagger and enables injection into Android framework types other than your App.

5. Now, let’s create our custom class

class App : DaggerApplication() {
    override fun applicationInjector(): AndroidInjector<out DaggerApplication> {
        return DaggerAppComponent
            .builder()
            .create(this)
            .build()
    }
}

Don’t forget to register this App in AndroidManifest.xml with the android:name= attribute to enable it.

Something interesting is that we extend App from DaggerApplication to reduce some boilerplate, as before. The only thing you need to do is override the applicationInjector() method, then initialize and return your AppComponent there.

Then you call the create() method which you declared in the Builder interface, passing this, which fits the signature: fun create(app: Application): Builder.

The DaggerAppComponent will be unresolved until you run Make Project from the Build menu.

If you don’t want to inherit from DaggerApplication, you have to implement the HasActivityInjector interface:

class App : Application(), HasActivityInjector {

    @Inject
    lateinit var androidInjector: DispatchingAndroidInjector<Activity>

    override fun activityInjector(): DispatchingAndroidInjector<Activity> = androidInjector

    override fun onCreate() {
        super.onCreate()
        // Build the component and inject this Application instance by hand.
        DaggerAppComponent.builder().create(this).build().inject(this)
    }
}

You should know that DaggerApplication does much more for you than the several lines of boilerplate above. It also handles things like Service and BroadcastReceiver injection to make your life easier in the future.

6. Now let’s connect the activity with the module

Create a new file ActivitiesBindingModule.kt with the following code:

@Module
abstract class ActivitiesBindingModule {
    @ContributesAndroidInjector
    abstract fun mainActivity(): MainActivity
}

If you want to add more activities, just add them here using the same pattern.

Now connect the ActivitiesBindingModule to the AppComponent.

@Singleton
@Component(
    modules = [
        AndroidSupportInjectionModule::class,
        AppModule::class,
        ActivitiesBindingModule::class
    ]
)
interface AppComponent : AndroidInjector<App> {
    // Builder
}

If you went through my previous blog post, you will notice this part is different. You no longer declare a method like fun inject(activity: MainActivity); you use an ActivitiesBindingModule to do the trick. You could still write a component for that activity, but this is just much easier.

7. Now let’s inject MainActivity

Open MainActivity.kt

class MainActivity : DaggerAppCompatActivity() {

    @Inject
    lateinit var preferences: SharedPreferences

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        println("Is abc in Preferences: ${preferences.contains("abc")}")
    }
}

It’s cleaner than before: you use @Inject, and you get the instance. No more (application as MyApp).myAppComponent.inject(this).

Run the app, you should see something like this in the console:

04-18 00:34:38.980 5566-5566/? I/System.out: Is abc in Preferences: false

The magic can only happen because we inherited from DaggerAppCompatActivity; otherwise you need to call AndroidInjection.inject(this) in onCreate() yourself.
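
For reference, here is a minimal sketch of that manual route (my own illustration, assuming your App implements HasActivityInjector as in section 5; note that AndroidInjection.inject() should be called before super.onCreate()):

import android.content.SharedPreferences
import android.os.Bundle
import android.support.v7.app.AppCompatActivity
import dagger.android.AndroidInjection
import javax.inject.Inject

class MainActivity : AppCompatActivity() {

    @Inject
    lateinit var preferences: SharedPreferences

    override fun onCreate(savedInstanceState: Bundle?) {
        // Without DaggerAppCompatActivity, we ask dagger to perform
        // field injection on this activity manually.
        AndroidInjection.inject(this)
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
    }
}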

8. What about activity-scoped dependencies

Let’s say you need a dependency that is only for one activity. Here, for example, we need one such thing for the MainActivity.

class BooleanKey(
    val name: String,
    val value: Boolean
)

Then we just inject it and use it in MainActivity.kt:

class MainActivity : DaggerAppCompatActivity() {

    @Inject
    lateinit var abcKey: BooleanKey

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        println("value of abcKey: ${abcKey.value}")
    }
}

Where does this abcKey get initialized? Well, we create a MainActivityModule:

@Module
class MainActivityModule {
    @Provides
    fun provideABCKey(
        preference: SharedPreferences
    ): BooleanKey {
        return BooleanKey(
            name = "abc",
            value = preference.getBoolean("abc", false)
        )
    }
}

And connect it with the ActivitiesBindingModule:

@Module
abstract class ActivitiesBindingModule {
    @ContributesAndroidInjector(modules = [MainActivityModule::class])
    abstract fun mainActivity(): MainActivity
}

Run the app, you should see value of abcKey: false printed in the console.

Highlight

In provideABCKey(preference: SharedPreferences), a SharedPreferences is needed. How can dagger get it?

Well, with all the setup, dagger has a graph of all your dependencies. Every time it needs a parameter in a @Provides function, it checks the other @Provides functions looking for that type. In our case, it finds provideSharedPreference() and gets the instance from there. Even better, it’s a singleton, so no new instance is created!

9. Repo

You can find the repo here

10. End

Hope it helps.

Huawei, Failing to Crack U.S. Market, Signals a Change in Tactics


One recent example of reduced communication with Washington came after the discovery in January of security flaws in the microprocessors inside nearly all of the world’s computers. A Senate committee wrote to Huawei’s founder to ask what the company knew about the vulnerabilities, and how it had been affected by them. Huawei decided not to respond.

“Some things cannot change their course according to our wishes,” Eric Xu, Huawei’s deputy chairman, said at the company’s annual meeting with analysts on Tuesday. “With some things, when you let them go, you actually feel more at ease.”

Huawei’s main Chinese rival, ZTE, also hit a roadblock in Washington this week. The Commerce Department said it would ban the much smaller company from buying American components after it made false statements to the government as part of an investigation into possible violations of American sanctions.

Photo: Huawei products at the Mobile World Congress in Barcelona. The Chinese electronics giant has laid off five American employees, including a key executive, while reducing its political outreach in the United States. Credit: Josep Lago/Agence France-Presse — Getty Images

Yet Huawei’s experience also illustrates how little Washington can do to curb Chinese influence in cutting-edge industries throughout the rest of the world.

At Tuesday’s meeting with analysts, executives at the company, which says it is owned by its employees and not by the Chinese state, emphasized growth opportunities in Europe and Asia. They also described ambitions to further diversify Huawei’s business into helping organizations of all kinds — not merely wireless carriers, but factories, governments and the police — transform themselves through cloud computing and artificial intelligence.

“For Huawei, the major challenge is not how we can serve operators better,” said David Wang, a company president. Instead, he said, “we have to work harder to cope with wider challenges in all industries.”

Huawei’s troubles in the United States have been mounting since 2012, when a congressional report warned that its gear could be used to spy on Americans or to destabilize American telecom networks. The company spent $1.2 million on lobbying that year. Last year, it spent $60,000 on such efforts.

Major American carriers such as Verizon and AT&T have since shunned Huawei. The Commerce and Treasury Departments have subpoenaed it over possible violations of American sanctions on Iran and North Korea. The company’s ambitions to become a major smartphone brand — it is already the world’s third largest, after Samsung and Apple — were curtailed when AT&T abandoned a deal this year to sell its handsets. And a bill is before Congress to stop government agencies and contractors from buying Huawei products.

The company has said repeatedly that its products pose no security risk and that it complies with the law everywhere it operates. Still, the layoffs last week appear to be an acknowledgment by Huawei that it has failed to clear the political cloud around it.

Mr. Plummer, Huawei’s vice president of external affairs, had been with the company for almost eight years. He was the most senior member of Huawei’s American policy team who was not a Chinese citizen.

It is not clear whether he will be replaced. The company’s policy operations in the United States are led by a relatively recent arrival, Zhang Ruijun, who took the post nine months ago after working for the company in Mexico and Russia.

A Huawei spokesman said in a statement that any layoffs simply reflected an effort to better align resources with “business strategy and objectives.”

“Any changes to staffing size or structure are simply a reflection of standard business organization,” he said.

Founded three decades ago, Huawei made $93 billion in revenue last year — not much less than Google’s parent company, Alphabet, and more than its two main rivals in telecom gear, Nokia of Finland and Ericsson of Sweden, combined.

When it comes to the next generation of mobile internet, or 5G, Huawei has invested heavily in technology development.

Chinese carriers are likely to deploy such networks more quickly than their American counterparts are, at least in the beginning. But as 5G comes up in the United States, Japan, South Korea and Europe, Nokia and Ericsson will catch up, said Pierre Ferragu, an analyst in New York with New Street Research.

Still, Huawei’s telecom business could be dampened as other countries, particularly allies of the United States, weigh risks to their national security. The chief executive of a leading wireless provider in South Korea told the newspaper The Korea Herald last month that the company was unsure whether to use Huawei’s 5G equipment.

In the United States, Huawei customers that would be affected by the F.C.C.’s proposed rule — small carriers in rural areas — may soon need to find new equipment suppliers.

These carriers love Huawei gear, said Carri Bennet, general counsel for the Rural Wireless Association, an industry group for American telecom companies with fewer than 100,000 subscribers.

“They just love it,” she said. “It works like a charm, the customer service support is awesome,” and the price is attractive, she added.

The association’s members have even elected a Huawei executive, William Levy, to their board.

Ms. Bennet said that rather than blacklisting specific manufacturers, Washington should be creating a system for testing telecom gear for security vulnerabilities.

“These companies who are reliant on this support, they don’t have the funds to overhaul their whole network,” she said. “Public safety, getting 911 services, broadband — it all just starts falling apart.”


Carl Kasell Has Died


Kasell began practicing his newscaster voice as a child and got his first on-air job at 16. He went on to anchor NPR's newscasts for more than 30 years and later served as judge and scorekeeper for the news quiz show Wait Wait... Don't Tell Me! (Left) Courtesy Carl Kasell; (right) Chip Somodevilla/Getty Images

Every weekday for more than three decades, his baritone steadied our mornings. Even in moments of chaos and crisis, Carl Kasell brought unflappable authority to the news. But behind that hid a lively sense of humor, revealed to listeners late in his career, when he became the beloved judge and official scorekeeper for Wait Wait... Don't Tell Me!, NPR's news quiz show.

Kasell died Tuesday from complications from Alzheimer's disease in Potomac, Md. He was 84.

He started preparing for the role of newscaster as a child. "I sometimes would hide behind the radio and pretend I was on the air," he said in 2009, remembering his boyhood in Goldsboro, N.C.

He also used to play with his grandmother's windup Victrola and her collection of records. "I would sit there sometimes and play those records, and I'd put in commercials between them," he recalled. "And I would do a newscast just like the guy on the radio did."

Kasell became a real guy on the radio at age 16, DJ-ing a late-night music show on his local station.

At the University of North Carolina, Kasell was, unsurprisingly, one of the very first students to work at its brand-new station, WUNC. After graduation he served in the military. But a job was waiting for him back home at his old station in Goldsboro. He moved to Northern Virginia to spin records but a friend persuaded him to take a job at an all-news station.

"I kind of left the records behind," Kasell said. "It came at a time when so much was happening; we had the Vietnam War, the demonstrations downtown in Washington, the [Martin Luther King] and Bobby Kennedy assassinations. And so it was a great learning period even though [there were] bad times in there."

In 1975, Kasell joined NPR as a part-time employee. Four years later, he announced the news for the first broadcast of a new show called Morning Edition. Over three decades, he became one of the network's most recognized voices.

Kasell's Last Newscast

Bob Edwards, Morning Edition's former host, says he relied on Kasell, especially on days such as Sept. 11, when news broke early. "That morning and a thousand others, awful things happened in the morning," Edwards says.

Sure, Edwards was the morning host, but he says Kasell was — in every way — its anchor. "Seven newscasts, every morning ... nobody in the business does that," Edwards said. "That is incredible."

And then came a surprise second act; after decades of being super-serious, Kasell got a chance to let his hair down as the official judge and scorekeeper for Wait Wait... Don't Tell Me!

Host Peter Sagal says no one could have guessed that Kasell would be so funny. "The greatest thing about Carl was anything we came up with, he was game," Sagal says. "When we were in Las Vegas, we had him come onstage in a showgirl's headdress. No matter what we asked him to do — silly voices, or weird stunts; we had him jump out of a cake once to make his entrance onstage — he did it [with] such joy and such dignity."

Kasell decides to take a publicity photo shoot up a notch while Wait Wait... Don't Tell Me! host Peter Sagal tickles the ivories. Wait Wait... Don't Tell Me!/NPR

At the beginning, Wait Wait didn't have a budget for actual prizes, so the "prize" for listeners was to have Kasell record the outgoing message on their answering machines. He ended up recording more than 2,000 messages. (You can hear some favorites below.)

Kasell may have been known for his measured, on-air newscast persona, but behind the scenes, the kind, witty newsman had plenty of surprises. He loved magic tricks, and at one memorable company holiday party, he sawed Nina Totenberg in half.

"We laid her out on the table, got out that saw and grrrr ... ran it straight through her midsection," he recalled. "She said it tickled and she got up and walked away in one piece."

In all that he did, Carl Kasell was magic.

Kasell unleashes his powers in the lobby of NPR's headquarters in Washington, D.C. Katie Burk/NPR

This story was adapted for the Web by longtime Wait Wait Web guru Beth Novey.

I'd Like To Sing You A Little Tune

Imagine A Man Of My Stature Being Given Away As A Prize

Jane And Christian Have Imprisoned Me In A Rabbit Hutch

It's Your Contribution To Jim And Liz ...

Operant Conditioning by Software Bugs (2012)


Have you ever used a new program or system and found it to be obnoxiously buggy, but then after a while you didn’t notice the bugs anymore? If so, then congratulations: you have been trained by the computer to avoid some of its problems. For example, I used to have a laptop that would lock up to the point where the battery needed to be removed when I scrolled down a web page for too long (I’m guessing the video driver’s logic for handling a full command queue was defective). Messing with the driver version did not solve the problem and I soon learned to take little breaks when scrolling down a long web page. To this day I occasionally feel a twinge of guilt or fear when rapidly scrolling a web page.

The extent to which we technical people have become conditioned by computers became apparent to me when one of my kids, probably three years old at the time, sat down at a Windows machine and within minutes rendered the GUI unresponsive. Even after watching which keys he pressed, I was unable to reproduce this behavior, at least partially because decades of training in how to use a computer have made it very hard for me to use one in such an inappropriate fashion. By now, this child (at 8 years old) has been brought into the fold: like millions of other people he can use a Windows machine for hours at a time without killing it.

Operant conditioning describes the way that humans (and of course other organisms) adapt their behavior in response to the consequences resulting from that behavior. I drink beer at least partially because this has made me feel good, and I avoid drinking 16 beers at least partially because that has made me feel bad. Conditioning is a powerful guiding force on our actions and it can happen without our being aware of it. One of my favorite stories is the one where a psychology class trained the professor to lecture from the corner by paying attention when he stood in one part of the room and looking elsewhere when he did not. (This may or may not be only an urban legend, but it seems plausible even so.) Surely our lives are filled with little examples of this kind of unconscious guidance from consequences.

How have software bugs trained us? The core lesson that most of us have learned is to stay in the well-tested regime and stay out of corner cases. Specifically, we will:

  • periodically restart operating systems and applications to avoid software aging effects,
  • avoid interrupting the computer when it is working (especially when it is installing or updating programs) since early-exit code is pretty much always wrong,
  • do things more slowly when the computer appears overloaded—in contrast, computer novices often make overload worse by clicking on things more and more times,
  • avoid too much multitasking,
  • avoid esoteric configuration options,
  • avoid relying on implicit operations, such as the fact that MS Word is supposed to ask us if we want to save a document on quit if unsaved changes exist.

I have a hunch that one of the reasons people mistrust Windows is that these tactics are more necessary there. For example, I never let my wife’s Windows 7 machine go for more than about two weeks without restarting it, whereas I reboot my Linux box only every few months. One time I had a job doing software development on Windows 3.1 and my computer generally had to be rebooted at least twice a day if it was to continue working. Of the half-dozen Windows laptops that I’ve owned, none of them could reliably suspend/resume ten times without being rebooted. I didn’t start this post intending to pick on Microsoft, but their systems have been involved with all of my most brutal conditioning sessions.

Boris Beizer, in his entertaining but long-forgotten book The Frozen Keyboard, tells this story:

My wife’s had no computer training. She had a big writing chore to do and a word-processor was the tool of choice. The package was good, but like most, it had bugs. We used the same hardware and software (she for her notes and I for my books) over a period of several months. The program would occasionally crash for her, but not for me. I couldn’t understand it. My typing is faster than hers. I’m more abusive of the equipment than she. And I used the equipment for about ten hours for each of hers. By any measure, I should have had the problems far more often. Yet, something she did triggered bugs which I couldn’t trigger by trying. How do we explain this mystery? What do we learn from it?

The answer came only after I spent hours watching her use of the system and comparing it to mine. She didn’t know which operations were difficult for the software and consequently her pattern of usage and keystrokes did not avoid potentially troublesome areas. I did understand and I unconsciously avoided the trouble spots. I wasn’t testing that software, so I had no stake in making it fail—I just wanted to get my work done with the least trouble. Programmers are notoriously poor at finding their own bugs—especially subtle bugs—partially because of this immunity. Finding bugs in your own work is a form of self-immolation. We can extend this concept to explain why it is that some thoroughly tested software gets into the field and only then displays a host of bugs never before seen: the programmers achieve immunity to the bugs by subconsciously avoiding the trouble spots while testing.

Beizer’s observations lead me to the first of three reasons why I wrote this piece, which is that I think it’s useful for people who are interested in software testing to know that you can generate interesting test cases by inverting the actions we have been conditioned to take. For example, we can run the OS or application for a very long time, we can interrupt the computer while it is installing or updating something, and we can attempt to overload the computer when its response time is suffering. It is perhaps instructive that Beizer is an expert on software testing, despite also being a successful anti-tester, as described in his anecdote.

The second reason I wrote this piece is that I think operant conditioning provides a partial explanation for the apparent paradox where many people believe that most software works pretty well most of the time, while others believe that software is basically crap. People in the latter camp, I believe, are somehow able to resist or discard their conditioning in order to use software in a more unbiased way. Or maybe they’re just slow learners. Either way, those people would make amazing members of a software testing team.

Finally, I think that operant conditioning by software bugs is perhaps worthy of some actual research, as opposed to my idle observations here. An HCI researcher could examine these effects by seeding a program with bugs and observing the resulting usage patterns. Another nice experiment would be to provide random negative reinforcement by injecting failures at different rates for different users and observing the resulting behaviors. Anyone who has been in a tech support role has seen the bizarre cargo cult rituals that result from unpredictable failures.

In summary, computers are Skinner Boxes and we’re the lab rats—sometimes we get a little food, other times we get a shock.

Acknowledgment: This piece benefited from discussions with my mother, who previously worked as a software tester.

FDA permits marketing of AI-based device to detect diabetes-related eye problems



The U.S. Food and Drug Administration today permitted marketing of the first medical device to use artificial intelligence to detect greater than a mild level of the eye disease diabetic retinopathy in adults who have diabetes.

Diabetic retinopathy occurs when high levels of blood sugar lead to damage in the blood vessels of the retina, the light-sensitive tissue in the back of the eye. Diabetic retinopathy is the most common cause of vision loss among the more than 30 million Americans living with diabetes and the leading cause of vision impairment and blindness among working-age adults.

“Early detection of retinopathy is an important part of managing care for the millions of people with diabetes, yet many patients with diabetes are not adequately screened for diabetic retinopathy since about 50 percent of them do not see their eye doctor on a yearly basis,” said Malvina Eydelman, M.D., director of the Division of Ophthalmic, and Ear, Nose and Throat Devices at the FDA's Center for Devices and Radiological Health. “Today’s decision permits the marketing of a novel artificial intelligence technology that can be used in a primary care doctor’s office. The FDA will continue to facilitate the availability of safe and effective digital health devices that may improve patient access to needed health care.”

The device, called IDx-DR, is a software program that uses an artificial intelligence algorithm to analyze images of the eye taken with a retinal camera called the Topcon NW400. A doctor uploads the digital images of the patient’s retinas to a cloud server on which IDx-DR software is installed. If the images are of sufficient quality, the software provides the doctor with one of two results: (1) “more than mild diabetic retinopathy detected: refer to an eye care professional” or (2) “negative for more than mild diabetic retinopathy; rescreen in 12 months.” If a positive result is detected, patients should see an eye care provider for further diagnostic evaluation and possible treatment as soon as possible.

IDx-DR is the first device authorized for marketing that provides a screening decision without the need for a clinician to also interpret the image or results, which makes it usable by health care providers who may not normally be involved in eye care.

The FDA evaluated data from a clinical study of retinal images obtained from 900 patients with diabetes at 10 primary care sites. The study was designed to evaluate how often IDx-DR could accurately detect patients with more than mild diabetic retinopathy. In the study, IDx-DR was able to correctly identify the presence of more than mild diabetic retinopathy 87.4 percent of the time and was able to correctly identify those patients who did not have more than mild diabetic retinopathy 89.5 percent of the time.

Patients who have a history of laser treatment, surgery or injections in the eye or who have any of the following conditions should not be screened for diabetic retinopathy with IDx-DR: persistent vision loss, blurred vision, floaters, previously diagnosed macular edema, severe non-proliferative retinopathy, proliferative retinopathy, radiation retinopathy or retinal vein occlusion. IDx-DR should not be used in patients with diabetes who are pregnant; diabetic retinopathy can progress very rapidly during pregnancy and IDx-DR is not intended to evaluate rapidly progressive diabetic retinopathy. IDx-DR is only designed to detect diabetic retinopathy, including macular edema; it should not be used to detect any other disease or condition. Patients will still need to get a complete eye examination at the age of 40 and at the age of 60 and also if they have any vision symptoms (for example, persistent vision loss, blurred vision or floaters).

IDx-DR was reviewed under the FDA’s De Novo premarket review pathway, a regulatory pathway for some low- to moderate-risk devices that are novel and for which there is no prior legally marketed device. IDx-DR was granted Breakthrough Device designation, meaning the FDA provided intensive interaction and guidance to the company on efficient device development, to expedite evidence generation and the agency’s review of the device. To qualify for such designation, a device must provide for more effective treatment or diagnosis of a life-threatening or irreversibly debilitating disease or condition, and meet one of the following criteria: the device must represent a breakthrough technology; there must be no approved or cleared alternatives; the device must offer significant advantages over existing approved or cleared alternatives; or the availability of the device is in the best interest of patients.

The FDA is permitting marketing of IDx-DR to IDx LLC.

The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation’s food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.

###


Proof-Of-Work is a Decentralized Clock


This is an explanation of the key function of Proof-of-Work in the Bitcoin blockchain. It focuses on the one feature of Proof-of-Work that is essential, and shows that other features often talked about, such as security, are secondary side-effects: useful, but not essential.

This explanation rests on illustrating a few interesting properties of how Proof-of-Work is used in the blockchain that are not immediately obvious and sometimes are rather counter-intuitive, for example how participants collectively solve a problem without ever communicating.

Having understood each of these properties, one should conclude that Proof-of-Work is primarily a mechanism which accomplishes a distributed and decentralized system of timing, i.e. a clock.

Note that this write-up isn’t about Proof-of-Work per se; it explains how the blockchain takes advantage of it. If you do not know anything about Proof-of-Work, then this link might be a good start.

The Decentralized Ledger Time Ordering Problem

Before describing the solution, let us focus on the problem. Much of the literature around Proof-of-Work is so confusing because it attempts to explain the solution without first identifying the problem.

Any ledger absolutely needs order. One cannot spend money that has not been received, nor can one spend money that is already spent. Blockchain transactions (or blocks containing them) must be ordered, unambiguously, and without the need for a trusted third party.

Even if the blockchain were not a ledger but just data, like a log of some sort, order would still be required for every node to have an identical copy of the blockchain. A blockchain in a different order is a different blockchain.

But if transactions are generated by anonymous participants all over the world, and no central party is responsible for organizing the list, how can it be done? For example, transactions (or blocks) could include timestamps, but how could these timestamps be trusted?

Time is but a human concept, and any source of it, such as an atomic clock, is a “trusted third party”, one which, on top of everything, is slightly wrong most of the time due to network delays as well as the effects of relativity. Paradoxically, relying on a timestamp to determine event order is not possible in a decentralized system.

The “time” we are interested in is not the year, month, day, etc. that we are used to. What we need is a mechanism by which we can verify that one event took place before another or perhaps concurrently.

First, though, for the notions of before and after to be applicable, a point in time needs to be established. Establishing a point in time may seem theoretically impossible at first, because there is no technology accurate enough to measure a Planck time. But as you’ll see, Bitcoin works around this by creating its own notion of time, in which precise points in time are in fact possible.

This problem is well described in Leslie Lamport’s 1978 paper “Time, Clocks, and the Ordering of Events in a Distributed System”, which doesn’t actually provide a comprehensive solution other than “properly synchronized physical clocks”. In 1982 Lamport also described the “Byzantine Generals Problem”, and Satoshi, in one of his first emails, explained how Proof-of-Work is a solution to it. The Bitcoin paper states: “To implement a distributed timestamp server on a peer-to-peer basis, we will need to use a proof-of-work system”, suggesting that it primarily solves the issue of timestamping.

Timing is the Root Problem

It must be stressed that the impossibility of associating events with points in time in distributed systems was the unsolved problem that precluded a decentralized ledger from ever being possible until Satoshi Nakamoto invented a solution. There are many other technical details that play into the blockchain, but timing is fundamental and paramount. Without timing there is no blockchain.

Proof-of-Work Recap

Very briefly, the Bitcoin Proof-of-Work is a value whose SHA-2 hash conforms to a certain requirement, which makes such a value difficult to find. The difficulty is established by requiring that the hash be less than a specific number: the smaller the number, the rarer the conforming input value and the higher the difficulty of finding it.

It is called “Proof-of-Work” because a value with such a hash is known to be extremely rare, which means that finding such a value requires a lot of trial and error, i.e. “work”. Work, in turn, implies time.

By varying the requirement, we can vary the difficulty and thus the probability of such a hash being found. The Bitcoin difficulty adjusts dynamically so that a conforming hash is found on average once every ten minutes.
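
To make this concrete, here is a toy Kotlin sketch of the search (my own illustration, not Bitcoin’s actual code: real mining hashes an 80-byte block header twice with SHA-256 and uses a target derived from the difficulty, while this toy just appends a counter to some bytes and uses a made-up target):

import java.math.BigInteger
import java.security.MessageDigest

// Find a nonce whose SHA-256 hash, read as a 256-bit integer, falls below the target.
fun mine(data: ByteArray, target: BigInteger): Long {
    val sha = MessageDigest.getInstance("SHA-256")
    var nonce = 0L
    while (true) {
        sha.update(data)
        sha.update(nonce.toString().toByteArray())
        val hash = BigInteger(1, sha.digest()) // digest() also resets the hasher
        if (hash < target) return nonce
        nonce++
    }
}

fun main() {
    // Toy target: the hash must start with roughly 20 zero bits,
    // i.e. about one in a million attempts succeeds.
    val target = BigInteger.ONE.shiftLeft(256 - 20)
    println("found nonce: ${mine("hello block".toByteArray(), target)}")
}

Lowering the target makes conforming hashes rarer and the expected search longer; that expected duration is all the blockchain really needs from the exercise.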

Nothing Happens Between Blocks

The state of the chain is reflected by its blocks, and each new block produces a new state. The blockchain state moves forward one block at a time, and the average 10-minute block interval is the smallest measure of blockchain time.

SHA is Memoryless and Progress-Free

The Secure Hash Algorithm is what is known in statistics and probability as memoryless. This is a property that is particularly counter-intuitive for us humans.

The best example of memoryless-ness is a coin toss. If a coin comes up heads 10 times in a row, does it mean that the next toss is more likely to be tails? Our intuition says yes, but in reality each toss has a 50/50 chance of either outcome regardless of what happened immediately prior.

Memorylessness is required for the problem to be progress-free. Progress-free means that as miners try to solve blocks by iterating over nonces, each attempt is a stand-alone event and the probability of finding a solution is constant at each attempt, regardless of how much work has been done in the past. In other words, at each attempt the participant is not getting any “closer” to a solution and is making no progress. A miner who has been looking for a solution for a year is no more likely to solve a block on the next attempt than a miner who started a second ago.

The probability of finding the solution given a specific difficulty in a given period of time is therefore determined solely by the speed at which all participants can iterate through the hashes. Not the prior history, not the data, just the hashrate.
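
A small simulation illustrates progress-freeness (a sketch with a made-up per-attempt success probability, nowhere near Bitcoin’s real numbers): the average number of attempts still ahead is the same whether a miner has just started or has already failed a thousand times.

import kotlin.random.Random

// Number of failed attempts before the first success (a geometric distribution).
fun attemptsUntilSuccess(p: Double): Long {
    var n = 0L
    while (Random.nextDouble() >= p) n++
    return n
}

fun main() {
    val p = 1e-3 // assumed per-attempt success probability
    val samples = LongArray(10_000) { attemptsUntilSuccess(p) }

    // Unconditional mean vs. the mean of what remains after 1,000 failures.
    val k = 1_000
    val remaining = samples.filter { it >= k }.map { it - k }
    println("mean attempts: ${samples.average()}")
    println("mean remaining after $k failures: ${remaining.average()}")
    // Both hover around (1 - p) / p ≈ 999: past failures buy no progress.
}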

The hashrate in turn is a function of the number of participants and the speed of the equipment used to calculate the hash.

The SHA Input is Irrelevant

In the Bitcoin blockchain the input is a block header. But if we just fed it random values, the probability of finding a conforming hash would still be the same. Regardless of whether the input is a valid block header or bytes from /dev/random, it is going to take 10 minutes on average to find a solution.

Of course if you find a conforming hash but your input wasn’t a valid block, such a solution cannot be added to the blockchain, but it is still Proof-of-Work (albeit useless).

The Difficulty is Intergalactic

Curiously, the difficulty is universal, meaning it spans the entire universe. We could have miners on Mars helping out; they would not need to know about, or communicate with, the Earth miners, and the problem would still be solved every 10 minutes. (OK, they would need to somehow tell the Earth people when they solve a block, or else we’d never know about it.)

Remarkably, the distant participants are communicating without actually communicating, because they are collectively solving the same statistical problem and yet they’re not even aware of each other’s existence.

This “universal property” while at first seemingly magical is actually easy to explain. I used the term “universal” because it describes it well in one word, but really it means “known by every participant”.

The input to SHA-256 can be thought of as an integer between 0 and 2^256 (because the output is 32 bytes, i.e. also between 0 and 2^256; any larger input guarantees a collision, i.e. becomes redundant). Even though this set is extremely large (exponentially larger than the number of atoms in the observable universe), it is known by every participant, and the participants can only pick from this set.

If the input set is universally known, the function (SHA-256) is universally known, as well as the difficulty requirement is universally known, then the probability of finding a solution is also indeed “universal”.

Trying a SHA Makes You a Participant

If the stated problem is to find a conforming hash, all you have to do is to try it once, and bingo, you’ve affected the global hash rate, and for that one attempt you were a participant helping others solve the problem. You did not need to tell others that you did it (unless you actually found a solution), others didn’t need to know about it, but your attempt did affect the outcome. For the whole universe, no less.

If the above still seems suspicious, a good analogy might be the search for large prime numbers. Finding a new largest known prime is hard, and once one is found, it becomes “discovered” or “known”. There is an infinite number of primes, but only one instance of each number in the universe. Therefore whoever attempts to find the largest known prime is working on the same problem, not a separate instance of it. You do not need to tell anyone you decided to look for it, you only need to announce when you find one. If no one ever looks, it is never going to be found. Thus, participation (i.e. an attempt to find one), even if it’s in total secrecy, still affects the outcome, as long as the final discovery (if found at all) is publicized.

Taking advantage of this mind-boggling statistical phenomenon whereby any participation affects the outcome even if in complete secrecy and without success, is what makes Satoshi’s invention so remarkably brilliant.

It is noteworthy that since SHA is progress-free, each attempt could be thought of as a participant joining the effort and immediately leaving. Thus miners join and leave, quintillions of times per second.

The Participation is Revealed in Statistics

The magical secret participation property also works in reverse. The global hashrate listed on many sites is known not because every miner registered at some “miners registration office” where they report their hash rate periodically. No such thing exists.

The hash rate is known because, for a solution of a specific difficulty to be found every 10 minutes on average, this many attempts (~10^21 as of this writing) had to have been made by someone somewhere.
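
As a sanity check, a standard back-of-the-envelope relation (it follows from how Bitcoin defines difficulty, not from anything in this article) is that a block takes about difficulty × 2^32 hash attempts on average. Here is a sketch with an illustrative difficulty value, chosen only so the output lands near the ~10^21 figure above:

fun main() {
    val difficulty = 2.4e11 // illustrative value, not live network data
    val hashesPerBlock = difficulty * Math.pow(2.0, 32.0)
    val hashesPerSecond = hashesPerBlock / 600.0 // one block per ~10 minutes
    println("≈ %.1e hashes per block, ≈ %.1e hashes per second"
        .format(hashesPerBlock, hashesPerSecond))
}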

We do not know who these participants are, they never announced that they are working, those who did not find a solution (which is practically all of them) never told anyone they were working, their location could have been anywhere in the universe, and yet we know with absolute certainty that they exist. Simply because the problem continues to be solved.

Work is a Clock

And there is the crux of it: the difficulty of finding a conforming hash acts as a clock. A universal clock, if you will, because there is only one such clock in the universe, there is nothing to sync, and anyone can “look” at it.

It doesn’t matter that this clock is imprecise. What matters is that this is the same clock for everyone and that the state of the chain can be tied unambiguously to the ticks of this clock.

This clock is operated by the multi-exahash rate of an unknown number of collective participants spread across the planet, completely independent of one another.

Last Piece of the Puzzle

The solution must be the hash of a block (the block header, to be precise). As we mentioned, the input doesn’t matter, but if it is an actual block, then whenever a solution is found, it happened at the tick of our Proof-of-Work clock. Not before, not after, but exactly at. We know this unambiguously because the block was part of that mechanism.

To put it another way, if blocks weren’t the input to the SHA256 function, we’d still have a distributed clock, but we couldn’t tie blocks to the ticks of this clock. Using blocks as input addresses this issue.

Noteworthy, our Proof-of-Work clock only provides us with ticks. There is no way to tell order from the ticks alone; this is what the Merkle tree is for.

What About the Distributed Consensus?

Consensus means agreement. What all participants have no choice but to agree on is that the clock has ticked, and that everyone knows the tick and the data attached to it. This, in fact, solves the Byzantine Generals Problem, as Satoshi explained in the email referenced earlier.

There is a separate consensus in the rare but expected case of two consecutive ticks being associated with conflicting blocks. The conflict is resolved by which block gets associated with the next tick, rendering one of the disputed blocks an “orphan”. How the chain continues is a matter of chance, and so this too could probably be indirectly attributed to the Proof-of-Work clock.

And that is it

This is what Proof-of-Work does for the blockchain. It is not a “lottery” in which miners win the right to solve a block, nor is it some peculiar conversion of real energy into a valuable concept; those are all red herrings.

For example, the lottery and the miner’s reward are what encourage miners to participate, but they aren’t what makes the blockchain possible. Blocks form a Merkle tree, but again, that has nothing to do with Proof-of-Work; it cryptographically reinforces the recording of the block ordering. The Merkle tree also makes the previous ticks “more certain”, “less deniable”, or simply more secure.

Proof-of-Work is also the mechanism by which blocks become effectively immutable, and that’s a nice side-effect which makes Segregated Witness possible, but it could just as well be done by preserving the signatures (witness), so this too is secondary.

Conclusion

The Bitcoin blockchain Proof-of-Work is simply a distributed, decentralized clock.

If you understand this explanation, then you should have a much better grasp of how Proof-of-Work compares to Proof-of-Stake, and it should be apparent that the two are not comparable: Proof-Of-Stake is about (randomly distributed) authority, while Proof-of-Work is a clock.

In the context of the blockchain, Proof-of-Work is probably a misnomer. The term is a legacy from the Hashcash project, where it indeed served to prove work. In the blockchain it is primarily about verifiably taking time. When one sees a hash that satisfies the difficulty, one knows it must have taken time. The method by which the delay is accomplished is “work”, but the hash is primarily interesting because it is a proof of time.

The fact that Proof-of-Work is all about time rather than work also suggests that there may be other similar statistical challenges that are time-consuming but require less energy. It may also mean that the Bitcoin hashrate is excessive and that the Bitcoin clock we described above could operate as reliably on a fraction of the hashrate, but it is the incentive structure that drives up the energy consumption.

Figuring out a way to pace ticks with less work is a trillion dollar problem, if you find one, please do let me know!

P.S. Special thanks to Sasha Trubetskoy of UChicago Statistics for the review and suggestions for the above text.

Rich Hickey on becoming a better developer


Rich Hickey • 3 years ago

Sorry, I have to disagree with the entire premise here.

A wide variety of experiences might lead to well-roundedness, but not to greatness, nor even goodness. By constantly switching from one thing to another you are always reaching above your comfort zone, yes, but doing so by resetting your skill and knowledge level to zero.

Mastery comes from a combination of at least several of the following:

  • Knowledge
  • Focus
  • Relentless considered practice over a long period of time
  • Detected, recovered-from failures
  • Mentorship by an expert
  • Always working slightly beyond your comfort/ability zone, pushing it ever forward

Imagine your proposal recast:

  • Writing Achievements
  • Learn a variety of languages
  • Experience the ins and outs of various platforms
  • Enhance your understanding of the building blocks that we use as writers
  • Write in the open
  • Teach

These are largely the activities of beginners and students, not practitioners nor masters (or, in the case of teaching/publishing, people who should already be practitioners/masters). N.B. I am not questioning the many benefits of broadening or learning activities, just the premise that they lead to any sort of mastery.

Musicians get better by practice and tackling harder and harder pieces, not by switching instruments or genres, nor by learning more and varied easy pieces. Ditto almost every other specialty inhabited by experts or masters.

One can become a great developer in any general purpose language, in any domain, on any platform. And, most notably for the purposes of this discussion, such a developer can carry that greatness across a change in any of them. What skills then are so universally useful and transportable in software development? Two are:

the ability to acquire knowledge, and the ability to solve problems.

How does one get better at acquiring knowledge and solving problems? Not by acquiring a lot of superficial knowledge nor solving a lot of trivial problems (a la your 'achievements'), but by acquiring ever deeper knowledge and solving ever harder problems.

You should take heed your phrase 'leveling up'. You don't level up by switching games all the time, but by sticking with one long enough to gain advanced skills. And, you need to be careful to recognize the actual game involved. Programming mastery has little to do with languages, paradigms, platforms, building blocks, open source, conferences etc. These things change all the time and are not fundamental. Knowledge acquisition skills allow you to grok them as needed. I'd take a developer (or even non-developer!) with deep knowledge acquisition and problem solving skills over a programmer with a smorgasbord of shallow experiences any day.

An in-depth tour of Nikon’s Hikari Glass factory

$
0
0

I've been on a lot of factory tours with various camera and lens manufacturers before, but had never had a chance to see how the optical glass was made that goes into the lenses we use every day. So I was really happy to receive an invite from Nikon to tour their Hikari Glass factory in Akita Japan, following the annual CP+ trade show in Yokohama this year.

This was a pretty special tour, as we got to see the whole process, from start to finish, hosted by three of Hikari's top executives. Our hosts were Mr. Tatsuo Ishitoya, President-Director, Mr. Akio Arai, Corporate Vice President and Production General Manager, and Mr. Toshihiko Futami, Director and Management General Manager. Mr. Masaru Kobayashi, Assistant Manager of the Administration Section also accompanied us and contributed to the information we received. Arai-san is the person directly responsible for plant operations, and it was him who personally guided us on our extensive tour. All three executives briefed us before and after the tour itself.

Here are three of our hosts, Arai-san on the left, Futami-san in the middle, and Masaru Kobayashi, Assistant Manager of the Administration Section, on the right. It was pretty amazing, to have such high-level executives take us on a tour. I was in engineering-geek heaven; our guides were entirely up to fielding even the most technical questions I asked them. There were of course a lot of things that were too proprietary to share, but the knowledge (not to mention the degree of patience) they brought to the table was truly exceptional.

As I said, I've been on a lot of factory tours with various manufacturers, but this qualifies as one of the most interesting ever. I'd previously had only a vague idea of how glass was made; it turns out to be a lot more involved (and more fascinating) than I'd imagined. Here's the story of our tour…

This the road between the Hikari factory and Motoyu Ryokan, the amount of snow apparently pretty typical for early March. The snow was even deeper, closer to the ryokan.

The Hikari factory is in northern Japan, in Akita Prefecture, just a little south and east of the capital city of Akita itself. Akita is on the western coast of Japan, so the prevailing winds blow for hundreds of miles across the Sea of Japan before hitting the coast, picking up a load of moisture along the way. These winds drop a lot of snow even in Akita proper, but when they hit the mountains, they really cut loose. There was a LOT of snow around the factory, and even more as we wound our way further up into the mountains, to spend the night at the very traditional Motoyu Ryokan, built over a free-flowing natural hot spring.

There were actually a couple of entrances to the factory complex; this one was convenient as we ended our tour. The plant covers quite a large area, with multiple buildings.)

Even though I grew up with serious winters in New England, the amount of snow in Akita was truly impressive. Our host Arai-san told us that this is actually a public road in the summertime. In the winter, the town has to pick their shots in their battle against snow, so this road is left unplowed. That's a Japanese stop sign in the shot on the left, while the one on the right gives you some idea of how tall the snow piles were, compared to our guides. The two Nikon staff shown in this picture were pretty compact people, but the piles of snow towered over even my head, at 6' 2

This was one of our first views of the plant, as we started our tour. It wasn't terribly cold on the day that we were there, but it's obvious it stays below freezing a lot of the time. The Hikari factory seemed so much more ...

 

The basic recipe: Initial mixing and blending

Optical glass is a complex blend of ingredients, but some representative ingredients are quartz or silicon dioxide (SiO2). If you live near a beach, chances are a lot of the sand is quartz. All of Hikari's glass begins life as a blend of several basic ingredients, the main one being quartz, although Arai-san politely declined to list what they were. (He also asked that we not take photos of the area where the sacks of ingredients were piled up.)

(Deep geekery: We don't know the main components of Nikon's optical glass, but glass generally consists of SiO2, some sort of an alkali flux to lower the melting point, and stabilizers to make it insoluble in water and increase corrosion resistance. Modern glass frequently uses sodium oxide, typically added as sodium carbonate (Na2CO3) and a tiny amount of potash (K2O), added as potassium carbonate (K2CO3) for the flux. Finally, Lime (CaO) and Magnesia (MgO) are added as stabilizers, to increase corrosion resistance.)

It's important that the ingredients are blended thoroughly, which is the job of a pair of giant mixers like the one shown in the video below.

Ingredients enter the mixer from hoppers on the floor above, via the pipe you can see sticking down from the ceiling. The mixer handles batches of about 500kg at a time, or about 1,100 pounds. Once the ingredients have been delivered to the mixer, the operator sets it rotating for a fixed amount of time. The powdery mix is discharged from the bottom of the V-shaped mixing barrel into plastic bins, to be transported to the melting furnaces, in a nearby part of the facility.

Here's what the mixed raw-ingredient powder looks like, while waiting to be melted. It's pretty plain-looking, with a texture somewhere between table sugar and flour.

First Melt

We couldn't show the furnaces used to melt the mixed powers, as some parts of them were proprietary. (It's too bad, they were pretty dramatic structures!) The furnaces were quite tall, with steps providing access to a platform for servicing the dosing mechanism.

This was interesting: I'd expected that the melting process would just consist of dumping the mixed power into a crucible of some sort, then shoving the whole thing into a furnace. It turns out that this wouldn't work very well, as the unmelted powder doesn't conduct heat very well. So a huge crucible full of it would take a long time to fully melt, working from the outside in.

Instead, they start with the crucible empty, and a mechanism drops small amounts of powder into a metal box on the end of a long mechanical arm. A hatch opens in the side of the furnace, the arm extends and the box flips upside down, to dump a small amount of powder into the hot crucible. This small amount of powder melts relatively quickly, after which the next allotment can be dropped on top of it.

In this way, the crucible gradually fills with molten glass, in a process that Arai-san said can take several hours or so. 

The photo on the right has nothing to do with the furnaces at Hikari Glass, but it's at least some sort of a batch charger, albeit a part of a very high-volume commercial glass production facility (the kind that makes glass for windows, auto windshields, etc). The general idea is the same; a hopper above feeds the glass mix down to the charger, where a bucket slides in and out of the furnace periodically, to deliver doses of the mix into the furnace interior.

The primary melting furnaces were fascinating contraptions, but unfortunately, we weren't allowed to take photos of them. I found out why this was probably the case, when I went looking for an illustration image to use to break up the text here: It seems to be an unusual arrangement, or at least everyone else who uses it considers their solution proprietary was well: I couldn't find a photo of a similar dosing or "charging" system anywhere, despite a lot of Google-searching. The image at right was the closest I could come.

The crucibles used for this melting process is made of fused silica, or … quartz! But wait a minute, didn't we just learn that glass is made from quartz? What keeps the quartz crucible from melting as well?

We weren't allowed to photograph the crucibles, but they were massive, a good couple of feet in diameter by perhaps three feet long, and with walls more than an inch thick. They looked like they'd be very expensive consumable items! The photo above is from a web page by a maker of high-end kilns for glass and ceramics hobbyists. It gives you the general idea of what the crucibles look like; just imagine something that looked like the above, but was 2-3 feet tall. (Image from Paragon Kilns)

It turns out the quartz crucible does melt with each firing, but only a little bit, and the Hikari Glass engineers take this into account in their formulas for the powdery ingredient mix. They basically assume that they'll end up with a bit more quartz in their final glass than was present in the mixed powder.


Once the batch of glass has fully melted, a worker melts a hole in the bottom of the crucible, letting the molten glass (~1200C) pour into a large water tank. (~6 x 6 x 4 feet?). Note that once he's got the glass flowing, he turns on a water jet that sprays across the stream of glass, just as it hits the water surface in the tank. This fractures the glass into tiny, snowflake-like shards, called "frit". Having the glass in the form of frit helps the next step, of homogenizing the glass that's been produced.

Here's a shot of the frit, scooped up in a bucket by the worker running the operation, for us to look at. You can see how it's in the form of many fine, fractal-looking shards.

I always assumed that optical glass was made by just mixing together the various component, melting it, and pouring it out. It turns out though, that the composition of the glass can vary, depending on where it was in the crucible during melting. Parts that were up against the quartz crucible walls will have more SiO2 in them, and parts near the surface will have less of some more volatile components.

Something else I never knew about glass-making: Some of the compounds used have a higher vapor pressure than others at the melting temperature, so they actually evaporate away during the process. (Hence the need for the exhaust-gas scrubbing equipment shown at the beginning of this article.) So depending on the temperature cycle, you'll end up with less of some components than you initially mixed in, in parts of the melt that were near the surface.

If these look like cement mixers, it's because that's what they are! They're used to mix batches of frit, to make sure each batch is completely homogenous.

Between the crucible melting slightly each time and the evaporation of more volatile elements near the surface, there can be quite a bit of variation in frit coming from different parts of the melted glass. Consequently, after each melt is completed, the frit is tumbled for a while in a converted cement mixer, to homogenize the mix. The shot above shows two of the three huge mixers we saw in the room. (I'd estimate that the barrels were about 2 meters/6 feet in diameter.)

The problem with a stock cement mixer is the steel from the barrel would contaminate the frit, changing the glass' properties. Thick rubber liners prevent this from happening.

The problem with an off-the-shelf cement mixer is it has a steel barrel, and steel would contaminate the glass and change its optical properties. To avoid this, Hikari Glass fits them with thick natural-rubber liners, as shown above. Any tiny bits of rubber that abrade off into the frit mix end up burning off in the final melt so they have no effect on the glass itself.

I've shown the frit-mixing process as the next step, immediately following the melt and frit-production stage, but there's actually a step in between, where they melt a sample of the frit into a block of solid glass and measure its optical properties. Depending on where the refractive index of each batch ends up, they'll combine the output of different melts, to be able to hit the target refractive exactly on the money. (Although it occurs to me that there might be two mixing stages, one to make sure the frit from a given melt is homogenized, then a sample of it is melted and tested, after which frit from different batches is mixed together before the final melt.

This shows a generic three-zone glass melting furnace, similar in general concept to the proprietary and highly specialized ones used by Hikari Glass in their final melting process. This isn't what a furnace at Hikari Glass looks like, but it gives the general idea of a furnace with three temperature zones in it. (Image from British Glass& not from Hikari Glass)

Once the frit has been made and blended, it's time for the final or fine melt. The details of this are very proprietary, as it's the key to obtaining uniform, defect-free optical glass. Arai-san explained that every company makes its own final melting furnaces, and the details of them are very proprietary.

Unlike the initial melt, the final melt is done in platinum(!) crucibles. These must be extremely expensive, although the batch sizes for final melts are usually somewhat smaller than the initial melt. Still, a platinum crucible able to hold a couple hundred kilograms of molten glass must be a pretty pricey item! (In practice, I think they're platinum-lined rather than solid platinum. Still, they must be very costly!)

The reason they use platinum crucibles for the final melt is because the platinum won't dissolve into the molten glass and change its characteristics, the way the fused-silica crucibles do that are used for the initial melting.

Despite the use of platinum-lined crucibles, though, the composition of the glass will still change slightly due to the evaporation of some of its components, especially in the central, higher-temperature part of the furnace (see below). So this has to be taken into account, and the mixture adjusted to get the right final result.

Arai-san explains the thermal cycle in their final-melting furnace. The actual temperatures are proprietary and different than those shown, but the general idea is that the glass passes through three temperature zones, an initial melting, a higher-temperature zone, and then into a cool one before final casting.

Bubbles are trouble

One trick in final melting is making sure there are no bubbles in the glass as it's cast into its final form. A bubble in the middle of a lens element would obviously be a problem, so great care is taken to eliminate them.

I was curious how they did this. I thought they might perhaps use a vacuum furnace, so any bubbles would expand and come to the surface. The actual solution is a bit more clever than that, taking advantage of the natural properties of hot glass.

It turns out that air and other gases dissolve in hot glass, in much the same way that air dissolves in water (which is why fish can breathe underwater; they rely on the dissolved oxygen). As with water, cooler molten glass can hold more dissolved gas than hotter glass can. Hikari Glass takes advantage of this fact to eliminate dissolved gas, with a three-zone temperature profile in their final melting furnace.

(Note: The temperatures shown here are just for the sake of discussion; the actual temperatures are different and proprietary.)

The setup is shown above in the rough diagram Arai-san has drawn on the whiteboard. The temperatures shown aren't the ones Hikari Glass actually uses, but they serve to illustrate the concepts involved. On the left, glass is initially melted in an input chamber to a temperature of about 1,200C, a similar temperature to that in the first melting crucible we talked about earlier. From there, the glass flows to a second chamber, held at ~1,400C. Because it is so much hotter, dissolved gas is driven out of the glass, into the surrounding atmosphere. Passing out of the high-temperature chamber, the glass flows into a final crucible that's held at ~1,100C. At this cooler temperature, any bubbles left in the melt from the higher-temperature chamber are dissolved back into the glass, leaving behind perfectly clear, bubble-free glass that's drained from the bottom of the crucible onto the continuous casting conveyor.

As we'll see, the process isn't 100% perfect, because some bubbles and other defects still make it through, and are caught by a subsequent visual inspection.

Casting

The final casting process is pretty amazing; the glass flows very slowly from the bottom of the final melting crucible onto a conveyor belt in a long, long oven, where it's gradually cooled. The casting process is continuous, lasting until the batch of glass in the final melting furnace is exhausted. Arai-san was deliberately vague about specific details of the final casting process, as it is heavily proprietary.

Here's a long ribbon of glass, exiting the casting oven. The final melting furnaces are behind us in this shot, up on a second-floor mezzanine level, above the casting ovens. The details of those furnaces are so proprietary that we weren't allowed within 50 feet or more of them, and couldn't take any photos facing in that direction. It was kind of amazing to see finished glass creep out of the oven like this, a process that continues 24/7 until the entire batch of glass has been cast.

The glass is cast into ribbons of different widths and thicknesses, depending on the size of the lenses it will eventually be made into. We saw samples from ribbons that ranged from perhaps 125-150mm across and 15mm thick, down to maybe a 50mm across and 6-8mm thick.

At the end of the cooling tunnel, the glass ribbons very slowly inch along, propelled by an open-grid metal conveyor belt. When I asked how long it takes to complete the casting for one batch of glass, I was amazed to hear that it can take anywhere from a couple of days to a full month(!)

At the output end of the final casting line, a worker is waiting to label the glass and break it into 30cm-long chunks. He uses a small hammer and chisel to break off the pieces. (We were a little surprised that something as crude as a hammer and chisel would produce such clean breaks, without danger of cracking the slab into shards.)

Sometimes a slab of glass doesn't fracture all the way through from the chisel strike, so the worker uses a padded post to complete the break.

Visual inspection for defects

After the strips of glass come off the casting line, they're inspected visually for defects. This step involves checks for two different types of defect; bubbles and inhomogeneities. 

Bubbles are spotted by shining a strong light through the glass, peering through it at a dark background. Even tiny bubbles show up as bright specks within the glass ingot. (If you look very closely at the image above, you can see a few bright points of light within the glass that are the bubbles.) Bubbles are apparently a fairly rare occurrence, thanks to the special design of the final melting furnaces; the sample in the image above was one that Hikari Glass staff selected so we could see the defects clearly.

Once a bubble is identified by looking through the glass lengthwise, the worker turns the ingot 90 degrees and finds and marks each bubble's x/y position with red marker. This way, the defect-free parts of the ingot can be used, and the parts containing defects discarded.(This is the same sample a shown above, specially selected because it was easy to see the defects in it.)

Inhomogeneities in the glass are more subtle and a bit harder to detect. Once again, a bright light and human eyeballs do the trick.

The other thing to watch out for in optical glass is inhomogenieties caused by changes in the refractive index, resulting from the evaporation of component substances during the high-temperature portion of the three-step thermal processing used to eliminate gas bubbles. (Actually, evaporation occurs in all three thermal stages, but it is obviously most severe at the highest temperature. As mentioned earlier, parts of the melt close to the surface can become depleted of the more volatile components, and if that sort of glass makes its way to the final casting, its refractive index will be different than the rest. (Arai-san didn't give any but the most basic details of the final melting process, for obvious reasons of proprietary information, but I assume there must be some sort of mixing taking place within the three crucibles involved in the final-melt furnace. If the glass wasn't mixed, I would think there'd be a lot of homogeneity problems, or they'd have to waste a good portion of each melt, to avoid parts that had lost too much of their volatile components to evaporation.)

With a little Photoshop work, you can see what the inspection worker was looking for in the previous image. Note the curving line across the width of the slab here. That's an example of "striae", caused by variations in refractive index.

In the visual inspection, inhomogeneities are found by projecting light through the glass ingot onto a screen, and observing the light/dark patterns as the sample is rotated slightly about its long axis. The telltale optical artifacts are pretty subtle, so we cropped the image and radically adjusted the tone curve to highlight them. You can see the "striae" that the technician is looking for in the image above, as light/dark horizontal lines.

I asked whether glass ingots containing defects could be recycled by re-melting them, and was told that it depends on the type of glass involved. Some can be recycled, but my impression was that most could not. (I wonder if Hikari Glass could earn some additional revenue by selling rejected slabs of glass? I'd certainly pay a fair amount to have one as a keepsake/conversation-starter on my desk!)

Refractive index and light-transmission measurement

It's probably become clear from the preceding that refractive index is a key parameter that's controlled very precisely. It's no surprise then that it would be measured at various points throughout the production process. There's a separate room with precision optical instruments in it that measure both refractive index and optical transmission (how transparent the glass is).

This is one of the refractive-measuring machines; glass samples are loaded inside, via door on the right side (you can see the grey handle sticking up). This machine measures refractive indices at several different wavelengths, so they can tell the dispersion of the glass as well as its overall refractive index. As mentioned earlier, dispersion is a measure of how much the refractive index changes as a function of light wavelength/color. (If you're wondering about the wonkiness in the computer screen, it's because I blurred it in Photoshop, to avoid revealing any proprietary info.

There were two different machines used for measuring refractive index, one somewhat more sophisticated than the other. Both bounce light through a square block of glass, and read-out the refractive index, but one of them measures refractive index at a single wavelength, while the other measures refractive index at several different wavelengths, to also measure dispersion. These refractive-index measurements are performed on small test blocks melted directly from the raw frit we saw earlier, as well as on blocks cut from the continuously-cast glass slabs. The final check on refractive index is performed after the annealing step (see below), to make sure it precisely matches specifications.

This is the business end of the other refraction-measuring instrument, with a block of glass mounted in it. Both of us unfortunately missed getting a more interesting shot with the light shining through the sample :-/ As described throughout this article, glass samples like this may be collected at several points in the production process.

Light transmission measurement

In the same room with the two refractive index measurement instruments was another one that measured light transmission. I'm not sure what would cause glass to pass more or less light, but it's obviously an important parameter. The transmission-measuring instrument was just a large grey metal box, but there were three sets of glass samples sitting on top of it for us to see.

The problem with measuring light transmission is that light reflects off the surfaces of the glass you're trying to evaluate. And it's not just the front surface, where the light first strikes the glass; it'll reflect internally from the back surface, some of the internally reflected light will then bounce back off the front surface, etc, etc. When you're looking for very small differences in light transmission, any reflection will disturb the measurement.

Here Arai-san holds a couple of samples prepared for light-transmission measurement. By testing two different thicknesses of the same glass simultaneously, they can cancel-out the effects of reflection, and measure just the light lost to absorption within the glass itself.

The solution turns out to be pretty simple: Just prepare two identical pieces of glass, differing only in their thickness, and measure them both. The surface reflections will be the same between the two samples, so any differences in transmission will be due to the difference in thickness between them. 

Three sets of glass samples ready for transmission measurement. The thinner ones are 2mm thick, while the thicker ones are 10mm. The colored marks on them are just for keeping track which batch of glass they're from.

Cutting into "dice"

After the continuously-cast slabs of glass come off the line and are quality-checked, they're cut into chunks before being pre-formed into lens shapes. This was another surprise for me, in that the glass is fractured, rather than being cut.

Slabs of glass supported above strip-heaters, waiting to be fractured in two lengthwise. We were surprised by how perfectly clean a cut could be obtained so quickly and easily with this method.

The first step is to split the glass ingots in two lengthwise. (At last I assume they're always split just into two halves, as that's what we saw being done. I guess it's possible larger slabs might be split into thirds, but it looked like the slabs were always sized laterally to be twice the width needed for the final preforms.)

Depending on the thickness of the glass, it takes a little while for each piece to heat up to the point that it's ready to be fractured. There were two workers doing this task, each running about six stations simultaneously. They were constantly in motion, turning out a pair of glass strips every minute or so. (I'm sure they were also working very hard, with the big boss Arai-san looking on! :-)

The lengthwise splitting was done using thermal shock. The slabs of glass were laid on some sort of temperature-tolerant substrate, with a coil of nichrome resistance wire running along a slot in the middle. The slabs of glass rested over this coil for a matter of a couple of minutes, until the heat from it had had time to work through the thickness of the glass immediately above it. The worker would then just touch one end of the piece of glass with what looked like a pointed wooden stick that had been dipped in water. The sudden shock from contacting the tip of the cool stick would make the glass crack at that end, the crack instantly propagating down the length of the slab. It happened in the blink of an eye, producing smooth, straight edges every time.

Once the glass ingots had been split in two, each half was chopped up into little chunks or "dice", each approximately the right size to create a lens preform.

Amazingly, the lengths of glass were chopped into smaller pieces using a completely smooth steel "blade", with no abrasive on it, let alone teeth of any kind. As the workers held the pieces of glass against the spinning blade, friction would heat up the point of contact, producing a clean fracture in just seconds.

Here again, the chopping process was incredibly efficient, and relied on thermal shock to do the work. What looked like saw blades that the workers pressed the strips of glass against were actually smooth steel disks, with no teeth or abrasive on them at all. (Arai-san demonstrated how harmless they were by holding his hand against the edge of a "blade" while it was spinning.)


The video clip above shows a worker chopping the longer strips of glass into dice. It was interesting to watch, you could tell the moment that the fracture first formed, as a little line would suddenly become visible inside the block of glass, and the sound of the blade against the glass would change slightly. Very shortly after, the glass would split cleanly into two pieces.

Rather than using abrasive to cut the glass, pressure against the spinning steel disk produced heat from friction, concentrated at the point of contact. Thermal expansion of the glass in that immediate area resulted in a crack that then propagated almost instantly through the thickness of the block. As you can see in the video above, the process was pretty quick, with no kerf loss, powdered glass or expensive diamond blades required.

The little dice of optical glass were very pretty, glistening and sparkling in the light. Smaller scrap pieces might make nice earrings for female photography geeks ;-)

The little glass "dice" sparkled in their trays, thanks to their high refractive index. (The lead glass or "crystal" used in fine-dining glasses and chandeliers sparkles as it does because its high index of refraction bends light more, resulting in more internal reflections. The very high refractive index of cubic zirconia is why that gem sparkles so intensely as well. In fact, the too-high "fire" of CZ relative to diamonds is one give-away that this popular diamond substitute isn't the real thing.)

Weight-adjusting and rounding

While thermal and friction-cutting are very efficient, there's some variation, due both to the manual procedures involved, and slight variations in the width and thickness of the original glass ingots.

A row of large vibratory tumblers, used for grinding the glass dice down to final size. You can see a row of large rotary tumblers in the back, and there were a number of much smaller rotary-drum tumblers out of the shot to the right. There was a lot of tumbler capacity in this room; you're seeing only about a third of it here.

The final lens preforms have to be held to a fairly close weight tolerance, though, so there's a weight-adjusting step between cutting and pre-forming. This is done by grinding the glass dice in large tumblers, filled with smooth rocks, abrasive and a little water. The video above shows a large vibratory tumbler at work, performing this operation. As its name suggests, a vibratory tumbler uses vibration to grind its contents against each other, with the abrasive grit gradually abrading away the work pieces. These were pretty big units, with barrels that I'd estimate to be 2-3 feet (~70-100cm) across, and perhaps a foot (~30cm) deep. The smooth rocks that seemingly fill the barrel are just there to carry the grit and rub up against the optical glass dice that are being ground down.

Here's a shot showing a close-up of a tumbler barrel, letting you see a few of the squarish-looking dice that are being ground down, mixed in among the rocks.

Most of the tumblers being used were the vibratory types shown above. There were a number of smaller ones that we don't have pictures of, that were the more conventional rolling-drum type familiar to rockhounds, often used by hobbyists to smooth and polish colored glass, agates and semi-precious gemstones. (I had a couple of smaller versions of these as a boy, and have a tin of polished amethyst, quartz, jade and tigereye somewhere in the piles of detritus stashed in my basement. Vibratory tumblers were available even then, and were way faster than the drum-type ones that I had, but were priced way beyond my budget.)

Arai-san explains how the raw glass dice are sorted into four weight categories, which are then ground progressively, to bring them all within the necessary weight tolerance for preforming.

Arai-san told us that this tumbling process was used to adjust the weight of the dice prior to preforming, but I didn't see how just tumbling a bunch of glass dice together would work to homogenize their weights. As I suspected, it turned out that the dice are pre-sorted into weight groups. The heaviest group is loaded into the tumbler first, and once the average weight has reached that of the next-lighter group, that group is added. This process continues until all four weight groups have been added, and the lot of them reduced in weight to bring them within the final size tolerance.

Here's what the glass dice look like after they've been tumble-ground. Note the rounded edges and soft matte-finish.

Post-tumbling QC and repair

The smoothed dice are visually inspected after tumbling, to check for defects. It seems that common faults are chips, where a small fracture along an edge resulted in a chunk flaking out of the die. Provided the defects aren't too large, some of these chipped dice can be recovered by grinding-out the edges of the chip, as shown in the shot below. (I'd think that this would result in the die involved ending up under-weight, but perhaps there's enough slack in the tolerance that this sort of post-facto repair can still be applied.)

I was surprised to learn that chipped dice could be repaired to some extent, by manually grinding away the sharp edges of the chip, to prevent them from cracking during the preforming process. You can see the chip on the top edge of the die here, circled in red marker, and with its sharp edges ground into a smoother curve. Apparently, there's enough room in the weight spec to permit this sort of minor adjustment.

Preforming

Optical glass is delivered to lens companies as "preforms", chunks of glass having the general shape of the lenses they're to become. Without thinking much about it, I'd always just assumed that these preforms were made from rectangular chunks of glass by rough-grinding them.

Of course this is a glass factory, though, whose stock in trade is molten glass. So the preforms are of course made by pressing heated, softened glass into rough molds. (I mean duh, right?)


To keep the optical glass from sticking to the preforming molds, it's first coated with fine boron nitride powder. As you'd expect, this is a dusty operation, as the BN powder seemed to be about the consistency of coarse flour. The floor all around this area was a little slippery, thanks to a thin film of the powder that was ground into the cement. (They obviously kept it well-swept, but the powder settled into the fine pores of the concrete itself, making for a slick surface.

Soft glass can be kind of sticky, though, easily attaching itself to molds or tools. To prevent this from happening, the smoothed dice are covered with boron nitride powder, which acts as a mold release agent. Since preforming is carried out at a relatively low temperature (the glass is only soft, not molten), the BN doesn't contaminate the glass, and the outer layer containing it is ground away in the first stage of lens grinding.

Preform-pressing looked like something out of the Industrial Revolution, with huge, glowing ovens, open gas flames heating the preform press and molds, and buff, muscled workers laboring stripped to the waist. (Well, actually not the latter, but it certainly wouldn't have seemed out of place. While some areas of the factory were open to the outside air and quite cold, none of us felt a need for our heavy, Nikon-issued jackets in this section!)

Preform-pressing is done either automatically, by machines, or manually. Since the volume of finished preforms is made in Akita itself is lower than in the sister factory in China, most of the forming done there is manual. (Operational details of the one automated pressing machine we saw working while we were there apparently involved proprietary elements, so we weren't allowed to photograph the machine in operation.)


The preform pressing process was a closely-orchestrated dance between teams of either two or three workers. There's apparently some skill involved on the part of the press operator, who needs to judge how long to press each blank, depending on visual cues and monitoring the press operation itself.

As seen in the video above, the manual-pressing operation evoked images of early-industrial metal foundries or the like: Dull-red chunks of glass were flopped into a mold/carrier, and pressed by a pneumatic ram, which was surrounded by gas flames to keep it hot.

Here, Arai-san is showing the start of the preforming process. The tan blocks are ceramic holders that carry the boron-nitride-coated glass dice through the long oven you see the exit end of in the background. Pacing for the whole process is governed by the worker doing the pressing. After each die exits the mouth of the furnace and is handed off to the press operator, a worker puts a new die into the carrier and adds it to the head of the line, shown here. The dice travel in their carrier blocks up a conveyor, get pre-warmed along the way, then turn a corner at the far end of the furnace and proceed back down to the mouth.

Glass dice are placed in ceramic carriers, that cycle up and through a long heat-treating furnace. They exit glowing a dull red and are visibly soft and pliable as they're dumped into the bottom half of the mold. One worker pulls each softened die from the furnace and dumps it into a mold held by a second worker, who then places it beneath the heated ram, hits a treadle switch to trigger the ram, and then waits a little while, the duration determined by visual cues he's learned to judge, based on years of experience.

One of the preform-pressing stations was making two smaller preforms at a time. It may not have been clear in the movie or other shots, but lower halves of the molds that were attached to the handles had a gas tube running to them, and a ring of burners around each mold half, to keep it at the necessary temperature at all times.

I asked Arai-san how the press-operator knew how long to apply the forming pressure, and he replied that there were three factors: 1) the softness of the glass, based on its appearance, 2) how the glass feels while pressing it, and 3) how the glass feels when it drops from the mold. I can imagine that it takes a lot of experience, to be able to take all these cues into account, to produce perfectly-pressed preforms!

Here, a worker inspects just-pressed preforms after they've exited the press mold.

Once the appropriate amount of time has passed, the press-operator triggers the ram's retraction, removes the now-preformed lens element and transfers it to a third operator. From there, it seems it briefly goes into an intermediate-temperature furnace, to relieve the worst of its internal stresses, then is transferred to another, longer-cycle oven, where it's gradually cooled to room temperature.

Annealing

In metallurgy, "annealing" generally refers to refer to a thermal process that reverses the effects of hardening. Annealing can also mean a process that relieves internal stresses caused by too-rapid cooling.

This is the row of huge annealing ovens at Hikari Akita. They're stacked two-high; the tops of the bottom level are perhaps 6 feet (2 meters) tall. Notes on the front of each furnace tell: 1) What the type of glass is that's being annealed, 2) What its glass-transition temperature is, 3) What the cooling rate is, and 4/5) two other things I forget :-0

In optical glass manufacturing, though, annealing has a much different purpose, namely adjusting the refractive index. (Stress-relief is important as well, but that would occur with much shorter cooling cycles. The most important function of annealing is to change the refractive index.)

This was the first time that I'd heard that refractive index could be adjusted by thermal processing, so I asked Arai-san how it works. He deferred the question until we could be back in the conference room, with a whiteboard available for him to diagram the process for me.

The shot above shows Arai-san and the diagram he drew for us to explain annealing. The critical temperature involved is Tg, the "glass transition temperature". In simple terms, this is the point at which glass goes from being a hard, brittle substance to one that's pliable and can flow. (The full definition of Tg is beyond both the scope of this article and my own understanding, but the preceding is close enough for this discussion.)

In an annealing cycle, the preforms are heated to Tg, held at that temperature for some period of time, and then slowly cooled to some temperature beyond which no further changes in optical characteristics would occur. During this process, the refractive index will change, depending on how quickly or slowly the cooling occurs. The density of the glass is the ultimate controlling factor, and different cooling cycles affect the refractive index because of the influence they exert on density. Slower/longer cooling cycles result in more dense, higher refractive-index glass, while faster/shorter ones produce less-dense, lower refractive-index glass. The annealing cycle needed for each batch of glass is determined by the refractive-index measurements made after the melting process.

The annealing ovens were pretty big, towering over our small group. (I think two or three of us could have comfortably sat inside one, without feeling too claustrophobic. The green panel on the front is a chalkboard, where operators would make notes about annealing temperature, cooling schedule, etc, so anyone could tell at a glance what was going on. This was an empty furnace, so had no notes inscribed on it, and hence was one of the only two we were allowed to take detailed photos of.

It's not clear to me just why slower cooling cycles result in greater density; the details of that were beyond the scope of questions I was able to ask during the tour.

Take it as given, though, that slower cooling = higher refractive index, and that annealing gives Hikari Glass very fine-grained control over refractive index.

OK, so much about refractive index, but what about dispersion?

I was struck by how much emphasis was placed on fine-tuning refractive index, and how little discussion we had about dispersion. When I asked later, it turned out that this was because dispersion is a much more complicated topic, and a full discussion of it wouldn't remotely have fit in the time we had. (Especially given how many questions I ask ;-)

Dispersion refers to how much the refractive index varies based on color/wavelength. Dispersion is why a prism projects a rainbow from white light; all else being equal, high dispersion means you'll get a very wide rainbow, low dispersion means you'll see a much narrower rainbow. So-called "ED" glass is characterized by low dispersion.

This is the standard "Abbe Diagram", showing dispersion vs refractive index. It looks a bit like a map of the Japanese islands, so optical engineers will often talk about "Hokkaido", "Tokyo" or "Nagasaki" glass. The different regions labeled on the graph give some idea of how different ingredients affect the glass' properties, with barium and lanthanum appearing in the names of some glass types.

The image above shows the standard graph of Abbe number (a measure of dispersion, the thing that makes ED glass "ED") vs. refractive index that will be immediately familiar to any optical engineer. As you can see, there are a lot of different glass types, and this only shows the major categories! As an interesting side note, the general shape of this diagram calls to mind the shape of the Japanese archipelago, so Japanese optical engineers will often refer to a type of glass as a region of Japan. For instance, if an engineer is looking for a glass with low Abbe number but high refractive index (the extreme upper right of the diagram), they'd say they're looking for a "Hokkaido" glass. (Hokkaido is Japan's northernmost island.) On the other hand, a glass from the lower left-hand side of the diagram would be referred to as a "Nagasaki" glass. (Nagasaki is the capital of Kyushu province, and located at the far southwestern tip of Japan.) 

When I asked about dispersion, it sounded like it's a fairly basic quality, affected only slightly by process variations. Apparently, dispersion is somehow set by the overall mix of components in the glass recipe, while the overall refractive index is subject to fine-tuning, by mixing different batches of frit, or (seemingly more routinely) by adjusting the annealing cycle. As noted above though, even a basic understanding of how dispersion is controlled would have required far more discussion than we had time for.

(Dispersion really does seem like a very deep subject, I wasn't able to find much in Google searches beyond dozens upon dozens of pages with the same basic description of what it is, vs how different glass ingredients affect it. I'd really like to learn about it and write up an article on it at some point; maybe I can convince a glass engineer to teach me about it, on another visit to Japan someday ;-)

Visual preform QC inspection

After pressing, the lens preforms go through a visual inspection. Using a bright light in fairly dark surroundings, the workers look for any chips, cracks or other flaws. The shot below shows a preform for a binocular prism that has a crack on one side of it. (Hikari makes optical glass for all Nikon products. Camera lenses are a big part of that, but they also make glass for everything from huge semiconductor stepper lenses to microscope lenses to prisms for binoculars.)

After the preforms are pressed, they all go through a visual inspection stage, with a very bright light shone through them against a black background. This will highlight any cracks or imperfections. We have some shots of flawed lens blanks as well, but this photo of a binocular roof prism blank with a crack in it was the best example of how defects might appear.

Packing and shipping

At the end of the whole process, the finished preforms are packed for shipment to Nikon lens factories. It's a long, complex and fascinating process (at least if you're techno-geeks like us), and very impressive that Nikon maintains this entire operation, just to satisfy their internal needs for optical glass. (Or at least 90% to satisfy their internal needs; as mentioned earlier, 10% of Hikari's output is sold on to other manufacturers.)

The finished preforms are packed in vinyl trays, with a number of preforms in each carrier - at least for smaller lenses like these. I imagine that the huge front elements for the like of a 600mm f/4 might call for more robust packaging. Then there's glass for the enormous lenses inside Nikon's semiconductor "stepper" exposure systems. Those are so large they must have to be crated individually!

Epilogue: Motoyu Ryokan

It was a really long day by the time we were done; we'd departed from Tokyo's Haneda Airport fairly early in the morning (at least for a jet-lagged night owl like me), and it was a pretty intense day of asking questions and absorbing information. So we were pretty happy to roll into our overnight digs that evening, especially since it was a pretty long bus ride to get there from the Hikari factory.

Japan is on the Pacific "Rim of Fire", and many of its mountains are of relatively recent volcanic origin. (Recent in geologic terms, at least.) So there's a lot of magma close to the surface in many places, and hence a lot of natural hot springs as well. So Onsen (hot spring spas) are a significant cultural thing there, and a lot of ryokan (traditional Japanese inns) are built around them. This was probably the fourth or fifth time I've stayed in a traditional ryokan (yes, I'm truly fortunate, and realize it :-), but this was a particularly nice one. I don't know its history, but it's apparently a pretty well-known one, and as always was a great experience.

Although I don't have many photos to show from the dinner that evening, it was an unusually lavish affair, with more than a dozen little courses/small plates, all delivered more or less simultaneously to the table. We also had some of the best sake that I've ever tasted. The best was a "raw" sake, meaning it still had live yeast in it. It was one of the tastiest liquids I've ever put in my mouth, and I so wanted to bring a couple of bottles home with me. Unfortunately, though, the live yeast meant that it had to be drunk within a week or so of its manufacture, so it would have been past its prime before I even left Japan. (And I had a series of meetings scheduled for a full week following our visit to Akita, so even with my prodigious sake capacity, I wouldn't have been able to do justice to it from my hotel room in Shibuya :-/ )

Our not-so-humble abode for the evening, Motoyu ryokan.

All in all, our tour of Nikon's Hikari glass factory was an extraordinary experience. As I said at the beginning, it was easily one of the most interesting factory tours I've been on, and that covers a lot of ground. It was certainly a tour in which I learned an incredible amount, about things that I had no knowledge of previously.

Nikon obviously wanted us to draw from the experience the impression that they have a unique ability among camera manufacturers, in that they exercise an unparalleled amount of control over the quality of the most basic material that goes into their lenses, namely the glass itself. Even allowing for the obvious PR intent for the trip, though, we came away very impressed with just that: Nikon really does have a unique ability to control their own destiny and optical designs, all the way from the raw materials to their finished lenses.

Entirely apart from the intended PR message, this was a remarkably informative tour, that left us knowing far more than we did before it began. Many thanks to Nikon Tokyo and Hikari Glass for their hospitality and patience, in answering all our many (many!) questions!

A finished lens blank from the Hikari factory, ready to be polished and built into an amazing Nikkor lens. (We know it's destined for an amazing lens, because of how big this blank is; this is probably the front element for a long, large-aperture tele :-)

Thanks for reading! See more of our recent factory tours by clicking here.
 

Insomnia GraphQL Support

$
0
0

Query Completion

Autocomplete of field names and arguments makes constructing GraphQL queries a breeze.

Error Checking

Schema-based error checking prevents you from making mistakes before you even realize it.

Advanced Features

Insomnia's existing features like template tags, environments, and plugins improve productivity.

Pinebook: A $100 ARM based laptop

$
0
0
  • An Affordable 64-bit ARM based Open Source Notebook

    * With the Pinebook you are getting a lot for the asking price.
    If you are looking for a device in a convenient laptop form-factor that you wish to tinker with,
    then it is safe to say the Pinebook is the right device for you —
    in particular if you are a developer or tinkerer who is willing to document, share and give back to the community.
    This is also especially true for those of you who wish to run Linux on the device, since Linux is by-and-large a community undertaking.

    We do not wish to discourage anyone from getting a Pinebook, as it is a good piece of hardware,
    but if you are looking for a device to replace your current work or school laptop, then perhaps it’s wise to look elsewhere.

  • An Affordable 64-bit ARM based Open Source Notebook

    * With the Pinebook you are getting a lot for the asking price.
    If you are looking for a device in a convenient laptop form-factor that you wish to tinker with,
    then it is safe to say the Pinebook is the right device for you —
    in particular if you are a developer or tinkerer who is willing to document, share and give back to the community.
    This is also especially true for those of you who wish to run Linux on the device, since Linux is by-and-large a community undertaking.

    We do not wish to discourage anyone from getting a Pinebook, as it is a good piece of hardware,
    but if you are looking for a device to replace your current work or school laptop, then perhaps it’s wise to look elsewhere.

Pinebook

PINEBOOK is an 11.6″ or 14″ notebook powered by the same Quad-Core ARM Cortex A53 64-Bit Processor used in our popular PINE A64 Single Board Computer. It is lightweight and comes with a full size keyboard and large multi-touch touchpad for students and makers.

As a new open source platform, Pinebook development is an ongoing process and represents a great opportunity to get involved with computing on a different level, to customise and personalise the portable computer experience, to understand what is going on beneath the surface. Your input can help shape and define what a Pinebook can be.

Pinebook

External Display

Using the mini HDMI port, the PINEBOOK can be connected to a larger external HDMI display or TV for presentations.

PINEBOOK

Storage Expansion

Built-in MicroSD Card slot allows users to expand their data storage up to 256 GB with a microSD Card (SD, SDHC, SDXC).

Specifications
Hardware

CPU :

RAM :

Flash:

Wireless :

USB 2.0 Port :

MicroSD Card Slot :

Mini HDMI :

Headphone Jack :

Microphone :

Keyboard :

Touch-pad :

Power :

Battery :

Display :

Front Camera :

Dimension :

Weight :

Warranty :

1.2GHz 64-Bit Quad-Core ARM Cortex A53

2 GB LPDDR3 RAM Memory

16 GB eMMC 5.0 (upgradable up to 64GB)

WiFi 802.11bgn + Bluetooth 4.0

2

1

1

1

Built-in

Full Size Keyboard

Large Multi-Touch Touchpad

Input: 100~240V, Output: 5V3A

Lithium Polymer Battery (10000mAH)

11.6" or 14" TN LCD (1366 x 768)

0.3 Megapixels

11.6" : 299mm x 200mm x 12mm (WxDxH)

14" : 329mm x 220mm x 12mm (WxDxH)

11.6" : 1.04 kg (2.30 lbs)

14" : 1.26 kg (2.78 lbs)

30 days

Software

OS :

Linux Distro (Default) or Android

Decades-Old Graph Problem Yields to Amateur Mathematician

$
0
0

In 1950 Edward Nelson, then a student at the University of Chicago, asked the kind of deceptively simple question that can give mathematicians fits for decades. Imagine, he said, a graph — a collection of points connected by lines. Ensure that all of the lines are exactly the same length, and that everything lies on the plane. Now color all the points, ensuring that no two connected points have the same color. Nelson asked: What is the smallest number of colors that you’d need to color any such graph, even one formed by linking an infinite number of vertices?

The problem, now known as the Hadwiger-Nelson problem or the problem of finding the chromatic number of the plane, has piqued the interest of many mathematicians, including the famously prolific Paul Erdős. Researchers quickly narrowed the possibilities down, finding that the infinite graph can be colored by no fewer than four and no more than seven colors. Other researchers went on to prove a few partial results in the decades that followed, but no one was able to change these bounds.

Then last week, Aubrey de Grey, a biologist known for his claims that people alive today will live to the age of 1,000, posted a paper to the scientific preprint site arxiv.org with the title “The Chromatic Number of the Plane Is at Least 5.” In it, he describes the construction of a planar unit-distance graph that can’t be colored with only four colors. The finding represents the first major advance in solving the problem since shortly after it was introduced. “I got extraordinarily lucky,” de Grey said. “It’s not every day that somebody comes up with the solution to a 60-year-old problem.”

De Grey appears to be an unlikely mathematical trailblazer. He is the co-founder and chief science officer of an organization that aims to develop technologies for “reversing the negative effects of aging.” He found his way to the chromatic number of the plane problem through a board game. Decades ago, de Grey was a competitive Othello player, and he fell in with some mathematicians who were also enthusiasts of the game. They introduced him to graph theory, and he comes back to it now and then. “Occasionally, when I need a rest from my real job, I’ll think about math,” he said. Over Christmas last year, he had a chance to do that.

It is unusual, but not unheard of, for an amateur mathematician to make significant progress on a long-standing open problem. In the 1970s, Marjorie Rice, a homemaker with no mathematical background, ran across a Scientific American column about pentagons that tile the plane. She eventually added four new pentagons to the list. Gil Kalai, a mathematician at the Hebrew University of Jerusalem, said it is gratifying to see a nonprofessional mathematician make a major breakthrough. “It really adds to the many facets of the mathematical experience,” he said.

Perhaps the most famous graph coloring question is the four-color theorem. It states that, assuming every country is one continuous lump, any map can be colored using only four colors so that no two adjacent countries have the same color. The exact sizes and shapes of the countries don’t matter, so mathematicians can translate the problem into the world of graph theory by representing every country as a vertex and connecting two vertices with an edge if the corresponding countries share a border.

The Hadwiger-Nelson problem is a bit different. Instead of considering a finite number of vertices, as there would be on a map, it considers infinitely many vertices, one for each point in the plane. Two points are connected by an edge if they are exactly one unit apart. To find a lower bound for the chromatic number, it suffices to create a graph with a finite number of vertices that requires a particular number of colors. That’s what de Grey did.

De Grey based his graph on a gadget called the Moser spindle, named after mathematical brothers Leo and William Moser. It is a configuration of just seven points and 11 edges that has a chromatic number of four. Through a delicate process, and with minimal computer assistance, de Grey fused copies of the Moser spindle and another small assembly of points into a 20,425-vertex monstrosity that could not be colored using four colors. He was later able to shrink the graph to 1,581 vertices and do a computer check to verify that it was not four-colorable.
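
To get a feel for what the computer-check side of this work involves, here is a minimal sketch in Python (emphatically not de Grey's code) that treats the Moser spindle as an abstract graph, ignoring its unit-distance geometry, and brute-forces every possible coloring:

from itertools import product

# The Moser spindle: 7 vertices and 11 edges. Vertex 0 is the shared
# corner of two rhombi (each built from two equilateral triangles);
# vertices 3 and 6 are the far tips, joined by the eleventh edge.
EDGES = [
    (0, 1), (0, 2), (1, 2), (1, 3), (2, 3),  # first rhombus
    (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),  # second rhombus
    (3, 6),                                  # edge joining the two tips
]

def colorable(num_colors, num_vertices=7):
    """True if some assignment of num_colors colors to the vertices
    leaves no edge with identically colored endpoints."""
    return any(
        all(coloring[u] != coloring[v] for u, v in EDGES)
        for coloring in product(range(num_colors), repeat=num_vertices)
    )

print(colorable(3))  # False: the spindle is not 3-colorable
print(colorable(4))  # True: four colors suffice

The failure of every 3-coloring is what makes the spindle useful as a building block: in any 3-coloring, each rhombus forces its far tip to repeat the shared corner's color, while the edge between the two tips forbids exactly that. De Grey's advance was fusing copies of gadgets like this into a graph where even four colors run out.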

Major California housing bill dies in first committee hearing

SACRAMENTO — A sweeping bill that would have given the state unprecedented power over local development failed in its first committee hearing, crushing the hopes of those who saw it as the key to making housing in the state more affordable.

At a lively and crowded hearing Tuesday, the Senate Transportation and Housing Committee blocked Senate Bill 827, a bill to force cities to allow apartments and condominiums of roughly four to five stories within a half mile of rail and ferry stops — as well as denser housing near bus stops with frequent service.

The vote abruptly halted a feverish debate over one of the biggest housing proposals introduced in Sacramento this year — one which took aim at cities reluctant to embrace larger developments. Its demise also underscored the political realities and pace of change at the Capitol, even as pressure mounts for the state to respond to runaway housing costs.

“Every housing advocate should know that this was always going to be an uphill battle,” said Laura Foote Clark, executive director of San Francisco YIMBY Action, which advocates for more housing construction. “Of course there are going to be setbacks, but we are going to rally and keep fighting for it.”

Just four of the 13 committee members, including the bill’s two main authors, Sens. Scott Wiener, D-San Francisco, and Nancy Skinner, D-Oakland, supported the proposal.

Some senators said they liked the idea of housing density near public transportation, but took issue with the details: the bill didn’t make sense for smaller, more rural areas, they said, or its affordable housing provisions weren’t strong enough.

“My challenge, frankly, is the one-size-fits-all approach to the bill,” said Sen. Richard Roth, D-Riverside.

When the bill’s fate became clear, Wiener vowed to keep the idea alive — a sentiment echoed by a number of his colleagues, even some who voted against it. “Whatever happens today,” he said, “we’re going to keep working.”

As he made his case to his colleagues, Wiener had argued the ambitious proposal was long overdue, given the state’s spiraling housing costs and freeways clogged with long-distance commuters who can’t afford to live near their jobs. Wiener, a former San Francisco supervisor elected to the statehouse in 2016, says he knows firsthand the pressure on local elected officials to preserve the status quo, and that the bill would bring sorely needed housing to the places that need it most.

“In California, for decades now, we have made a conscious decision that having enough housing simply doesn’t matter,” Wiener told the committee. “SB 827 promotes exactly the kind of housing that we need.”

The bill was sponsored by California YIMBY, a coalition of pro-development Yes In My Backyard groups who are newcomers to Sacramento politics. Also backing it were Silicon Valley CEOs and development and real estate trade associations. Dozens of urban planning and housing experts have lined up in favor of the proposal, arguing it could encourage more racially integrated neighborhoods and ease the state’s housing shortage. But the proposal had an even longer list of detractors, including scores of cities and many tenants’ rights and affordable housing groups who predicted it would hasten gentrification and put tenants at even greater risk of displacement.

The state’s influential construction union, the State Building & Construction Trades Council of California, also came out against the bill, which did not include prevailing wage standards or other labor-friendly provisions.

The vice mayor of Beverly Hills, John Mirisch — a vocal opponent of SB 827 — drew cheers and laughter when he called the bill “the wrong prescription,” likening the effort to “trying to cure psoriasis with an appendectomy.”

At least twice during the hearing, the committee chairman, Sen. Jim Beall, D-Campbell, had to tell the crowd to be quiet. “No outbursts whatsoever,” he said.

Wiener twice amended the bill after its introduction in January. The second set of changes, made last week, lowered height limits, gave cities more time to prepare for the new rules to take effect, and added provisions that the senator argued would protect low- and middle-income tenants at risk of losing their homes.

But the new language did little to neutralize the opposition.

“We just think it was very, very deeply flawed from the start,” said Anya Lawler, a policy advocate for the Western Center on Law & Poverty, in an interview Tuesday. “Do we support high-density housing near transit? Of course we do. But we have to be very thoughtful about how we get there.”

In a statement issued after the vote, Wiener said the outcome was not a surprise, given the scope of the proposal.

“I have always known there was a real possibility that SB 827 – like other difficult and impactful bills that have come before – was going to take more than one year,” he said. “… I will continue to work with anyone who shares the critical goals of creating more housing for people in California, and I look forward to working in the coming months to develop a strong proposal for next year.”


GraalVM: Run Programs Faster Anywhere

GraalVM is a universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Kotlin, and LLVM-based languages such as C and C++.

GraalVM removes the isolation between programming languages and enables interoperability in a shared runtime. It can run either standalone or in the context of OpenJDK, Node.js, Oracle Database, or MySQL.
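
As a rough illustration of that cross-language interoperability, here is a minimal sketch. It assumes GraalVM's Python runtime launched with polyglot support enabled, where a polyglot module is available for evaluating code in the other installed languages; treat it as a sketch of the idea rather than canonical usage:

# Run with GraalVM's Python launcher, e.g.: graalpython --polyglot demo.py
# (the polyglot module exists only on GraalVM's Python runtime)
import polyglot

# Evaluate a JavaScript expression from Python inside the same runtime.
doubled = polyglot.eval(language="js", string="[1, 2, 3].map(x => x * 2)")

# The result comes back as a foreign array that Python can index and
# iterate over directly, with no serialization step between languages.
print(list(doubled))  # [2, 4, 6]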

Zulip 1.8: Free software Slack alternative with email-style threading

We're excited to announce the release of Zulip Server 1.8, containing hundreds of new features and bug fixes.

Zulip is the world’s most productive team chat software, an alternative to Slack, HipChat, and IRC. Zulip combines the immediacy of chat with the asynchronous efficiency of email-style threading, and is 100% free and open source software.

Zulip 1.8 is a huge release, with over 3500 new commits since October’s 1.7. A total of 131 people contributed commits to this release, bringing the Zulip server project to 412 distinct code contributors. With 34 people who’ve contributed 100+ commits to Zulip and 100+ commits merged weekly, Zulip has the most active open-source development community of any team chat software, by a wide margin.

Huge thanks to everyone who's contributed to Zulip over the last few months, whether by writing code and docs, reporting issues, testing changes, translating, posting feedback on chat.zulip.org, or just suggesting ideas! We could not do this without the hundreds of people giving back to the Zulip community.

Project highlights

Today marks a release of the Zulip server, but lots of exciting work has happened outside the server codebase, too.

Release highlights

Describing all the improvements in a Zulip release has been an impossible task for our last few releases, and this one is no different. Below are just a few highlights.

Backend
  • One can now set up a production Zulip server with just a few minutes of work! This results from a major rework of the Zulip server installer to fully integrate certbot and eliminate mandatory configuration options.
  • We now have beta support for importing a Slack organization into Zulip, including users, avatars, streams, uploaded files, message history, custom emoji, emoji reactions, and more. This has been a frequently-requested feature from organizations migrating off Slack, and we’re excited that our implementation shares a lot of code with our existing, robust Zulip→Zulip import/export tools.
  • We rewrote our API documentation to be much more friendly and expansive; it now covers most important endpoints and has code examples.
Web

  • Zulip has a new night theme for dark environments.
  • You can now configure mentionable groups of users, so you can mention @support and have that notify everyone on the support team.
  • We integrated video calls, powered by Jitsi Meet. We expect to integrate other video chat providers soon (e.g. there’s an open PR for Google Hangouts).
  • We overhauled our settings system, providing a slick system that auto-saves changes, and added a ton of useful user and organization settings.
  • Complete new translations for Turkish and Russian. Zulip now has complete or nearly-complete translations for German, Spanish, Russian, Turkish, and Czech. Partial translations for Korean, French, Chinese, Japanese, Indonesian, Finnish, and Polish each cover the majority of the total strings in the project.

See the detailed changelog for dozens of other notable changes. If you administer a Zulip server, we encourage you to read at least the list of added features at the top, since there are a number of useful new settings introduced in this release that you may want to take advantage of.

Upgrading

We highly recommend upgrading, since Zulip has made major improvements in the months since the last release. You can upgrade as usual by following the upgrade instructions.

Several of our largest installations have already upgraded to release candidates without issue, so we feel very confident in this release. But if you need help, best-effort support is available on chat.zulip.org. You can also purchase commercial support from the Zulip core team.

We’d like to highlight one behavior change, to how private streams interact with organization administrators. Now organization administrators can remove users, edit descriptions, and rename private streams, even if they are not subscribed. However, organization administrators still cannot access message content on private streams they are not members of. See Zulip's security model documentation for details.

As a final note, I'd like to advertise a few opportunities to contribute back to Zulip that don't require coding.

Thanks again to the amazing global Zulip development community for making this possible! What follows is a summary of the code contributors to this release, sorted by number of commits.

-Tim Abbott, lead developer

[email protected]:~/zulip$ git log --pretty="%an" 1.7.0..upstream/master | sort | uniq -c | sort -nr
    542 Tim Abbott
    348 Steve Howell
    334 Greg Price
    199 Eeshan Garg
    187 Vishnu Ks
    185 Brock Whittaker
    146 Neil Pilgrim
    133 Rein Zustand
    133 Rhea Parekh
    132 Yashashvi Dave
    127 Robert Hönig
    104 Shubham Dhama
     93 Rishi Gupta
     90 Umair Khan
     84 Cynthia Lin
     65 Aditya Bansal
     44 Harshit Bansal
     35 Rohitt Vashishtha
     33 Tommy Ip
     33 Balaji2198
     26 cPhost
     26 Xavier Cooney
     25 Callum Fraser
     21 Shubham Padia
     20 Alena Volkova
     18 Lyla Fischer
     17 Marco Burstein
     15 Tarun Kumar
     14 Aastha Gupta
     14 Gooca
     11 Nikhil Kumar Mishra
     11 Jerry Zhang
     11 Andy Perez
     10 Utkarsh Patil
      9 Akash Nimare      
      9 novokrest
      9 Ricky
      8 sinwar
      8 Kiy4h
      7 Yago González
      7 Viraat Chandra
      7 David Rosa Tamsen
      7 Abijith10
      6 Sampriti Panda
      6 nyan-salmon
      6 greysome
      6 Puneeth Chaganti
      5 Sarah Stringer
      5 Armaan Ahluwalia
      5 Akash Nimare
      4 Sumana Harihareswara
      4 Shreyansh Dwivedi
      4 Privisus
      4 MadElf1337
      4 Joshua Pan
      4 fredfishgames
      3 Weronika Grzybowska
      3 Vishwesh Jainkuniya
      3 synicalsyntax
      3 Shivam Gera
      3 Priyank Patel
      3 Patrick Grave
      3 Jack Weatherilt
      3 ihsavru
      3 Garvit
      3 Florian Jüngermann
      3 Eric Eslinger
      3 Ben Reeves
      3 Arseny Chekanov
      3 Archana BS
      3 Angelika Serwa
      3 Abhigyan Khaund
      3 Aayush Agrawal
      2 XavierCooney
      2 VishalCR7
      2 Vaida Plankyte
      2 Sivagiri
      2 ryan
      2 Mohd Ali Rizwi
      2 infinitelooped
      2 Henrik Pettersson
      2 guaca
      2 civilnewsnetwork
      2 Catherine Kleimeier
      2 Anurag Sharma
      2 Anupam-dagar
      2 Aman Jain
      2 Aman Agrawal
      2 amanagr
      2 Alyssa Wagenmaker
      2 Abhishek Sharma
      2 Abhijeet Kaur
      1 ViRu-ThE-ViRuS
      1 Vaibhav Sagar
      1 Symt
      1 sreenivas alapati
      1 snlkapil
      1 Sivagiri Visakan
      1 Shivamgera
      1 Shekh Ataul
      1 sandeepsajan0
      1 sagar-kalra
      1 Roman Godov
      1 retsediv
      1 Reid Barton
      1 Priscilla
      1 pradeepgangwar
      1 Patrick Naughton
      1 Nick Al
      1 Mukul Agrawal
      1 Logan Williams
      1 kunall17
      1 KiranS
      1 Josh Mandel
      1 Jeremy Zhou
      1 Jack Zhang
      1 Ivche1337
      1 itstakenalr
      1 Ghislain Antony Vaillant
      1 feorlen
      1 Felix Yan
      1 elenaoat
      1 Dennis Ludl
      1 darshanime
      1 cg-cnu
      1 Axel Tietje
      1 AmAnAgr
      1 AlloulDorian
      1 Alicja Raszkowska
      1 aedorado
      1 Aditya Shridhar Hegde

Mylo: A leather-like material made from mycelium

Unlike making leather, the process of making Mylo™ doesn’t involve raising and sacrificing livestock, or any of the associated greenhouse gases or material wastes. Those impacts are substantial: Livestock use an astonishing 30% of the earth’s entire land surface and cattle-rearing generates more global warming greenhouse gases, as measured in carbon dioxide equivalent, than all transportation methods.1

Put simply, as disposable incomes rise around the globe, we simply can’t meet the demand for meat — and leather consumer goods — using resources available on the planet.

Mylo is also produced in days versus years, without the material waste of using animal hides.

Mylo is also a more sustainable option than synthetic leathers, most of which are made from polyurethane or PVC. These so-called ‘pleathers’ are manufactured using numerous toxic chemicals. While not proven to be dangerous to humans during use, these toxic chemistries persist in the landfills and groundwater where they end up.

Because Mylo™ is made from organic matter, it is completely biodegradable and non-toxic. (Note: animal hide is also biodegradable.)

We will undertake a full lifecycle analysis of Mylo™ prior to large scale commercial rollout, and we look forward to sharing those findings with the world.


1 Source: Livestock’s Long Shadow: Environmental Issues and Options, a 2006 report published by the United Nations Food and Agriculture Organization

Facebook Container for Firefox

FDA clears first contact lens with light-adaptive technology

The U.S. Food and Drug Administration today cleared the first contact lens to incorporate an additive that automatically darkens the lens when exposed to bright light. The Acuvue Oasys Contact Lenses with Transitions Light Intelligent Technology are soft contact lenses indicated for daily use to correct the vision of people with non-diseased eyes who are nearsighted (myopia) or farsighted (hyperopia). They can be used by people with certain degrees of astigmatism, an abnormal curvature of the eye.

The National Eye Institute at the National Institutes of Health estimates that 42 percent of Americans aged 12 to 54 have myopia and 5 to 10 percent of all Americans have hyperopia. The Centers for Disease Control and Prevention estimates that as of 2014, more than 40 million Americans were contact lens wearers.

“This contact lens is the first of its kind to incorporate the same technology that is used in eyeglasses that automatically darken in the sun,” said Malvina Eydelman, director of the Division of Ophthalmic, and Ear, Nose and Throat Devices at the FDA's Center for Devices and Radiological Health.

The contact lenses contain a photochromic additive that adapts the amount of visible light filtered to the eye based on the amount of UV light to which they are exposed. This results in slightly darkened lenses in bright sunlight that automatically return to a regular tint when exposed to normal or dark lighting conditions.

For today’s clearance, the FDA reviewed scientific evidence including a clinical study of 24 patients that evaluated daytime and nighttime driving performance while wearing the contact lenses. The results of the study demonstrated there was no evidence of concerns with either driving performance or vision while wearing the lenses.

Patients with the following conditions should not use these contact lenses: inflammation or infection in or around the eye or eyelids; any eye disease, injury or abnormality that affects the cornea, conjunctiva (the mucous membrane that covers the front of the eye and lines the inside of the eyelids) or eyelids; any previously diagnosed condition that makes contact lens wear uncomfortable; severe dry eye; reduced corneal sensitivity; any systemic disease that may affect the eye or be made worse by wearing contact lenses; allergic reactions on the surface of the eye or surrounding tissues that may be induced or made worse by wearing contact lenses or use of contact lens solutions; any active eye infection or red or irritated eyes.

These contacts are intended for daily wear for up to 14 days. Patients should not sleep in these contact lenses, expose them to water or wear them longer than directed by an eye care professional. These contacts should not be used as substitutes for UV protective eyewear.

The Acuvue Oasys Contact Lenses with Transitions Light Intelligent Technology were reviewed through the premarket notification 510(k) pathway. A 510(k) is a premarket submission made by device manufacturers to the FDA to demonstrate that the new device is substantially equivalent to a legally marketed predicate device.

The FDA granted clearance of the Acuvue Oasys Contact Lenses with Transitions Light Intelligent Technology to Johnson & Johnson Vision Care, Inc.

The FDA, an agency within the U.S. Department of Health and Human Services, protects the public health by assuring the safety, effectiveness, and security of human and veterinary drugs, vaccines and other biological products for human use, and medical devices. The agency also is responsible for the safety and security of our nation’s food supply, cosmetics, dietary supplements, products that give off electronic radiation, and for regulating tobacco products.

###
