Channel: Hacker News

Nvidia-Docker – Build and Run Docker Containers Leveraging Nvidia GPUs



nvidia-docker

The full documentation is available on the repository wiki.
A good place to start is to understand why nvidia-docker is needed in the first place.

Assuming the NVIDIA drivers and Docker® Engine are properly installed (see installation):

Ubuntu distributions

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi

CentOS distributions

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker-1.0.1-1.x86_64.rpm
sudo rpm -i /tmp/nvidia-docker*.rpm && rm /tmp/nvidia-docker*.rpm
sudo systemctl start nvidia-docker

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi

Other distributions

# Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1_amd64.tar.xz
sudo tar --strip-components=1 -C /usr/bin -xvf /tmp/nvidia-docker*.tar.xz && rm /tmp/nvidia-docker*.tar.xz

# Run nvidia-docker-plugin
sudo -b nohup nvidia-docker-plugin > /tmp/nvidia-docker.log

# Test nvidia-smi
nvidia-docker run --rm nvidia/cuda nvidia-smi
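
By default, all GPUs are exposed inside the container. The wiki also documents an NV_GPU environment variable for GPU isolation; a quick sketch of its use (the GPU indices are illustrative):

# Run the container with only GPUs 0 and 1 visible
NV_GPU='0,1' nvidia-docker run --rm nvidia/cuda nvidia-smi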

A signed copy of the Contributor License Agreement needs to be provided to digits@nvidia.com before any change can be accepted.


Transparent Hugepages: measuring the performance impact


Intro

TL;DR This post explains Transparent Hugepages (THP) in a nutshell, describes techniques that can be used to measure the performance impact, and shows the effect on a real-world application.

The post was inspired by a thread about Transparent Hugepages on the Mechanical Sympathy group. The thread walks through the pitfalls, performance numbers and the current state in the latest kernel versions; a lot of useful information is there. In general, you can find many recommendations on the Internet. Many of them tell you to disable THP, e.g. Oracle Database, MongoDB, Couchbase, MemSQL, NuoDB. A few of them utilize the feature, e.g. PostgreSQL (the hugetlbpage feature, not THP) and Vertica. There are also quite a few stories of people fighting system freezes and solving them by disabling THP (1, 2, 3, 4, 5, 6). All those stories lead to a distorted view and the preconception that the feature is harmful.

Unfortunately, I couldn’t find any post that measures, or shows how to measure, the impact and consequences of enabling or disabling the feature. That is what this post is meant to address.

Transparent Hugepages in a nutshell

Almost all applications and OSes run in virtual memory. Virtual memory is mapped onto physical memory, and the mapping is managed by the OS, which maintains the page table data structures in RAM. The address translation logic (page table walking) is implemented by the CPU’s memory management unit (MMU). The MMU also has a cache of recently used pages called the Translation Lookaside Buffer (TLB).

“When a virtual address needs to be translated into a physical address, the TLB is searched first. If a match is found (a TLB hit), the physical address is returned and memory access can continue. However, if there is no match (called a TLB miss), the MMU will typically look up the address mapping in the page table to see whether a mapping exists.” The page table walk is expensive because it may require multiple memory accesses (though they may hit the CPU L1/L2/L3 caches). On the other hand, the TLB cache size is limited and typically holds only several hundred pages.

OSes manage virtual memory in pages (contiguous blocks of memory). Typically, the size of a memory page is 4 KB: 1 GB of memory is 262,144 pages; 128 GB is more than 33 million pages. Obviously, the TLB cache can’t fit all of those pages, and performance suffers from cache misses. There are two main ways to improve the situation. The first one is to increase the TLB size, which is expensive and won’t help significantly. Another one is to increase the page size and therefore have fewer pages to map. Modern OSes and CPUs support large 2 MB and even 1 GB pages. Using large 2 MB pages, 128 GB of memory becomes just 65,536 pages.
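
The arithmetic is easy to sanity-check in a shell (a quick sketch; getconf reports the base page size on Linux):

[~]# getconf PAGE_SIZE                                        # base page size in bytes
4096
[~]# echo $(( 128 * 1024 * 1024 * 1024 / 4096 ))              # 4 KB pages needed for 128 GB
33554432
[~]# echo $(( 128 * 1024 * 1024 * 1024 / (2 * 1024 * 1024) )) # 2 MB pages for the same memory
65536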

That’s the reason Transparent Hugepage Support exists in Linux. It’s an optimization! It manages large pages automatically and transparently for applications. The benefits are pretty obvious: no changes are required on the application side; the number of TLB misses goes down; page table walking becomes cheaper. The feature can logically be divided into two parts: allocation and maintenance. THP takes the regular (“higher-order”) memory allocation path, which requires that the OS be able to find a contiguous and aligned block of memory. It suffers from the same issue as regular pages, namely fragmentation. If the OS can’t find a contiguous block of memory, it will try to compact, reclaim or page out other pages. That process can cause latency spikes (up to seconds). The second part is maintenance. Even if an application touches just 1 byte of memory, it consumes a whole 2 MB large page, which is an obvious waste of memory. To counter that, a background kernel thread called “khugepaged” scans pages and tries to defragment and collapse them into huge pages.

Another pitfall lies in large page splitting: not all parts of the OS work with large pages, e.g. swap, so the OS splits large pages back into regular ones for them. The best place to read about Transparent Hugepage Support is the official documentation on the Linux Kernel website. The feature has several settings and flags that affect its behavior, and they evolve with the Linux kernel.
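
On most Linux systems, the current mode and the khugepaged knobs can be inspected through sysfs (a sketch; the exact set of files varies between kernel versions):

# Active THP mode (the bracketed value): always, madvise or never
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# khugepaged tuning knobs
cat /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
cat /sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan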

How to measure?

This is the most crucial part and the goal of this post. Basically there are two ways to measure the impact: CPU counters and kernel functions.

CPU counters

Let’s start with the CPU counters. I use perf, a great and easy-to-use tool for this purpose. Perf has built-in event aliases for the TLB: dTLB-loads, dTLB-load-misses for data load hits and misses; dTLB-stores, dTLB-store-misses for data store hits and misses.

[~]# perf stat -e dTLB-loads,dTLB-load-misses,dTLB-stores,dTLB-store-misses -a -I 1000
#           time             counts unit events
     1.006223197         85,144,535      dTLB-loads                                                    
     1.006223197          1,153,457      dTLB-load-misses          #    1.35% of all dTLB cache hits   
     1.006223197        153,092,544      dTLB-stores                                                   
     1.006223197            213,524      dTLB-store-misses                                             
...

Let’s not forget about instruction misses either: iTLB-load, iTLB-load-misses.

[~]# perf stat -e iTLB-load,iTLB-load-misses -a -I 1000
#           time             counts unit events
     1.005591635              5,496      iTLB-load
     1.005591635             18,799      iTLB-load-misses          #  342.05% of all iTLB cache hits
...

In fact, perf supports just a small subset of all events, while CPUs have hundreds of various counters to monitor performance. For Intel CPUs you can find all available counters on the Intel Processor Event Reference website, in the “Intel® 64 and IA-32 Architectures Developer’s Manual: Vol. 3B”, or in the Linux kernel sources. The Developer’s Manual also contains the event codes that we need to pass to perf.

If we take a look at the TLB-related counters, the following are the most interesting for us:

Mnemonic                              Description                                                                                    Event Num.  Umask Value
DTLB_LOAD_MISSES.MISS_CAUSES_A_WALK   Misses in all TLB levels that cause a page walk of any page size.                              08H         01H
DTLB_STORE_MISSES.MISS_CAUSES_A_WALK  Misses in all TLB levels that cause a page walk of any page size.                              49H         01H
DTLB_LOAD_MISSES.WALK_DURATION        Cycles when the page miss handler (PMH) is servicing page walks caused by DTLB load misses.    08H         10H
ITLB_MISSES.MISS_CAUSES_A_WALK        Misses in ITLB that cause a page walk of any page size.                                        85H         01H
ITLB_MISSES.WALK_DURATION             Cycles when the page miss handler (PMH) is servicing page walks caused by ITLB misses.         85H         10H
PAGE_WALKER_LOADS.DTLB_MEMORY         Number of DTLB page walker loads from memory.                                                  BCH         18H
PAGE_WALKER_LOADS.ITLB_MEMORY         Number of ITLB page walker loads from memory.                                                  BCH         28H

Perf supports the *MISS_CAUSES_A_WALK counters via aliases; for the others we need to use raw event numbers. The event numbers and umask values are CPU-specific; those listed above are for the Haswell microarchitecture, so you need to look up the codes for your own CPU.

One of the key metrics is the number of CPU cycles spent walking the page tables:

[~]# perf stat -e cycles \
       -e cpu/event=0x08,umask=0x10,name=dcycles/ \
       -e cpu/event=0x85,umask=0x10,name=icycles/ \
       -a -I 1000
#           time             counts unit events
     1.005079845        227,119,840      cycles
     1.005079845          2,605,237      dcycles
     1.005079845            806,076      icycles
...

Another important metric is the number of main memory reads caused by TLB misses; those reads miss the CPU caches and hence are quite expensive:

[root@PRCAPISV0003L01 ~]# perf stat -e cache-misses \
       -e cpu/event=0xbc,umask=0x18,name=dreads/ \
       -e cpu/event=0xbc,umask=0x28,name=ireads/ \
       -a -I 1000
#           time             counts unit events
     1.007177568             25,322      cache-misses
     1.007177568                 23      dreads
     1.007177568                  5      ireads
...

Kernel functions

Another powerful way to measure how THP affects performance and latency is tracing or probing Linux kernel functions. I use SystemTap for that: “a tool for dynamically instrumenting running production Linux kernel-based operating systems.”

The first function interesting for us is __alloc_pages_slowpath. It is executed when there’s no contiguous block of memory available for allocation. In turn, this function calls the expensive page compaction and reclamation logic that can cause latency spikes.

The second interesting function is khugepaged_scan_mm_slot. It is executed by the background “khugepaged” kernel thread. It scans memory and tries to collapse regular pages into huge pages.

I use a SystemTap script to measure a function’s execution latency. The script stores all execution timings in microseconds and periodically outputs a logarithmic histogram. It consumes a few megabytes per hour, depending on the number of executions. The first argument is a probe point; the second one is the number of milliseconds between the periodic statistics printouts.

#! /usr/bin/env stap
# Usage: func_time_stats.stp PROBE_POINT REPORT_INTERVAL_MS
global start, intervals

# Record the entry time (in microseconds) per thread
probe $1 { start[tid()] = gettimeofday_us() }

# On return, add the elapsed time to the aggregate
probe $1.return
{
  t = gettimeofday_us()
  old_t = start[tid()]
  if (old_t) intervals <<< t - old_t
  delete start[tid()]
}

# Periodically print min/avg/max and a logarithmic histogram
probe timer.ms($2)
{
    if (@count(intervals) > 0)
    {
        printf("%-25s:\n min:%dus avg:%dus max:%dus count:%d \n", tz_ctime(gettimeofday_s()),
             @min(intervals), @avg(intervals), @max(intervals), @count(intervals))
        print(@hist_log(intervals));
    }
}

Here’s an example for the __alloc_pages_slowpath function:

[~]# ./func_time_stats.stp 'kernel.function("__alloc_pages_slowpath")' 1000

Thu Aug 17 09:37:19 2017 CEST:
 min:0us avg:1us max:23us count:1538
value |-------------------------------------------------- count
    0 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  549
    1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  541
    2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@                 377
    4 |@@@@                                                54
    8 |@                                                   12
   16 |                                                     5
   32 |                                                     0
   64 |                                                     0
...

System state

It’s also worth monitoring the overall system state, for example the memory fragmentation state. /proc/buddyinfo “is a useful tool for helping diagnose these problems. Buddyinfo will give you a clue as to how big an area you can safely allocate, or why a previous allocation failed.” More information relevant to fragmentation can also be found in /proc/pagetypeinfo.

cat /proc/buddyinfo
cat /proc/pagetypeinfo

You can read more about it in the official documentation or in this post.
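
Another quick check is how much anonymous memory is actually backed by transparent hugepages. The kernel reports this system-wide in /proc/meminfo and per process in smaps (a sketch; <pid> is a placeholder):

# System-wide amount of memory backed by transparent huge pages
grep AnonHugePages /proc/meminfo

# Per-process view: sum the AnonHugePages fields in smaps
awk '/AnonHugePages/ { total += $2 } END { print total " kB" }' /proc/<pid>/smaps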

JVM

The JVM supports Transparent Hugepages via the -XX:+UseTransparentHugePages flag, although the documentation warns about possible performance problems:

-XX:+UseTransparentHugePages On Linux, enables the use of large pages that can dynamically grow or shrink. This option is disabled by default. You may encounter performance problems with transparent huge pages as the OS moves other pages around to create huge pages; this option is made available for experimentation.

It may be a good idea to combine it with the -XX:+AlwaysPreTouch option, which preallocates all the physical memory used by the heap and hence avoids any further overhead for page initialization or compaction. The trade-off is that it takes more time to initialize the JVM.

-XX:+AlwaysPreTouch Enables touching of every page on the Java heap during JVM initialization. This gets all pages into the memory before entering the main() method. The option can be used in testing to simulate a long-running system with all virtual memory mapped to physical memory. By default, this option is disabled and all pages are committed as JVM heap space fills.
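
Putting the two flags together, a launch command might look like the following sketch (the heap size and jar name are illustrative):

# 200 GB heap, THP requested from the JVM, every heap page touched at startup
java -Xms200g -Xmx200g \
     -XX:+UseTransparentHugePages \
     -XX:+AlwaysPreTouch \
     -jar app.jar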

Aleksey Shipilёv shows the performance impact in microbenchmarks in his “JVM Anatomy Park #2: Transparent Huge Pages” blog post.

A real-world case: High-load JVM

Let’s take a look at how Transparent Hugepages affect a real-world application: a high-load JVM TCP server based on Netty. The server receives up to 100K requests per second, parses them, performs a network database call for each one, then does quite a lot of computation and returns a response. The JVM application has a 200 GB heap. The measurements were done on production servers under production load. The servers were not overloaded and received half of the maximum number of requests they can handle.

Transparent Hugepages is off

Let’s turn the THP off:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

The first thing to measure is the number of TLB misses. We see ~130 million TLB misses per second. The miss/hit rate is 1% (which doesn’t look too bad at first). The numbers:

[~]# perf stat -e dTLB-loads,dTLB-load-misses,iTLB-load-misses,dTLB-store-misses -a -I 1000
#           time             counts unit events
...
    10.007352573      9,426,212,726      dTLB-loads                                                    
    10.007352573         99,328,930      dTLB-load-misses          #    1.04% of all dTLB cache hits   
    10.007352573         26,021,651      iTLB-load-misses                                              
    10.007352573         10,955,696      dTLB-store-misses                                             
...

Let’s take a look at how much those misses cost the CPU:

[~]# perf stat -e cycles \
       -e cpu/event=0x08,umask=0x10,name=dcycles/ \
       -e cpu/event=0x85,umask=0x10,name=icycles/ \
       -a -I 1000
#           time             counts unit events
...
    12.007998332     61,912,076,685      cycles
    12.007998332      5,615,887,228      dcycles
    12.007998332      1,049,159,484      icycles
...

Yes, you read that right: more than 10% of all CPU cycles were spent walking the page tables.

The following counters show that we have about 1 million main memory reads per second caused by TLB misses (each of which can cost up to 100 ns):

[~]# perf stat -e cpu/event=0xbc,umask=0x18,name=dreads/ \
       -e cpu/event=0xbc,umask=0x28,name=ireads/ \
       -a -I 1000
#           time             counts unit events
...
     6.003683030          1,087,179      dreads
     6.003683030            100,180      ireads
...

All of the above numbers are good to know, but they are quite “synthetic”. The most important metrics for an application developer are the application metrics, so let’s take a look at the application’s end-to-end latency. Here are the application latencies in microseconds, gathered over a few minutes:

  "max" : 16414.672,
  "mean" : 1173.2799067016406,
  "min" : 52.112,
  "p50" : 696.885,
  "p75" : 1353.116,
  "p95" : 3769.844,
  "p98" : 5453.675,
  "p99" : 6857.375,

Transparent Hugepages is on

The comparison begins! Let’s turn the THP on:

echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo always > /sys/kernel/mm/transparent_hugepage/defrag

And launch the JVM with the -XX:+UseTransparentHugePages -XX:+AlwaysPreTouch flags.

The quantitative metrics show us that the number of TLB misses dropped roughly sixfold, from ~130 million to ~20 million per second, and the miss/hit rate dropped from 1% to 0.15%. Here are the numbers:

[~]# perf stat -e dTLB-loads,dTLB-load-misses,iTLB-load-misses,dTLB-store-misses -a -I 1000
#           time             counts unit events
     1.002351984     10,757,473,962      dTLB-loads                                                    
     1.002351984         15,743,582      dTLB-load-misses          #    0.15% of all dTLB cache hits  
     1.002351984          4,208,453      iTLB-load-misses                                              
     1.002351984          1,235,060      dTLB-store-misses                                            

The number of CPU cycles spent walking the page tables also dropped roughly fivefold, from ~6.7 billion to ~1.3 billion. We now spend only about 2% of CPU cycles walking the page tables. Numbers below:

[~]# perf stat -e cycles \
       -e cpu/event=0x08,umask=0x10,name=dcycles/ \
       -e cpu/event=0x85,umask=0x10,name=icycles/ \
       -a -I 1000
#           time             counts unit events
...
     8.006641482     55,401,975,112      cycles
     8.006641482      1,133,196,162      dcycles
     8.006641482        167,646,297      icycles
...

And the main memory reads also dropped, from ~1 million to ~350K per second:

[root@PRCAPISV0003L01 ~]# perf stat -e cpu/event=0xbc,umask=0x18,name=dreads/ \
       -e cpu/event=0xbc,umask=0x28,name=ireads/ \
       -a -I 1000
#           time             counts unit events
...
    12.007351895            342,228      dreads
    12.007351895             17,242      ireads
...

Again, all the above numbers look good, but the most important thing is how they affect our application. Here are the latency numbers:

  "max" : 16028.281,
  "mean" : 946.232869010599,
  "min" : 41.977000000000004,
  "p50" : 589.297,
  "p75" : 1080.305,
  "p95" : 2966.102,
  "p98" : 4288.5830000000005,
  "p99" : 5918.753,

The difference at the 95th percentile is almost 1 millisecond! Here’s how the p95 difference looks on a dashboard, side by side over time:

[Grafana dashboard: p95 latency, THP off vs. THP on]

We just measured the performance improvement from having Transparent Hugepage Support enabled. But as we know, it bears some maintenance overhead and a risk of latency spikes, and we surely need to measure those too. Let’s take a look at the khugepaged kernel thread that works on hugepage defragmentation. The probing was done for roughly twenty-four hours. As you can see, the maximum execution time was 6 milliseconds, and quite a few runs took more than 1 millisecond. This is a background thread, but it locks the pages it works with. Below is the histogram:

[~]# ./func_time_stats.stp 'kernel.function("khugepaged_scan_mm_slot")' 60000 -o khugepaged_scan_mm_slot.log
[~]# tail khugepaged_scan_mm_slot.log

Thu Aug 17 13:38:59 2017 CEST:
 min:0us avg:321us max:6382us count:10834
value |-------------------------------------------------- count
    0 |@                                                   164
    1 |@                                                   197
    2 |@@@                                                 466
    4 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  6074
    8 |@@@@@@                                              761
   16 |@@                                                  318
   32 |                                                     65
   64 |                                                     13
  128 |                                                      1
  256 |                                                      3
  512 |@@@                                                 463
 1024 |@@@@@@@@@@@@@@@@@@                                 2211
 2048 |                                                     85
 4096 |                                                     13
 8192 |                                                      0
16384 |                                                      0

Another important kernel function is __alloc_pages_slowpath. It can also cause latency spikes if it can’t find a contiguous block of memory. The probing histogram looks good: the maximum allocation time was 288 microseconds. Having it run for hours or even days gives us some confidence that we won’t run into a huge spike.

[~]# ./func_time_stats.stp 'kernel.function("__alloc_pages_slowpath")' 60000 -o alloc_pages_slowpath.log
[~]# tail alloc_pages_slowpath.log

Tue Aug 15 10:35:03 2017 CEST:
 min:0us avg:2us max:288us count:6262185
value |-------------------------------------------------- count
    0 |@@@@                                                237360
    1 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@     2308083
    2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@  2484688
    4 |@@@@@@@@@@@@@@@@@@@@@@                             1136503
    8 |@                                                    72701
   16 |                                                     22353
   32 |                                                       381
   64 |                                                         7
  128 |                                                       105
  256 |                                                         4
  512 |                                                         0
 1024 |                                                         0

Conclusion

Do not blindly follow any recommendation on the Internet, please! Measure, measure and measure again! Transparent Hugepage Support affects performance, and the performance and latency impact can be significant for some applications. Measure it!

Software Engineering Leader needed to grow high performance team


ZeroCater makes food simple for companies. The best way to think about it is that we’re “Pandora for food.” We create a tight feedback loop around our customers’ food preferences and use that feedback to create a better experience for them. The longer a customer stays with ZeroCater, the more data we have to make their experience better.

Key stats:

  • Founded 2009, YC W11
  • $100+ million in sales (we're the market leader in our space)
  • 150+ people across offices in SF, NYC, Chicago, Austin
  • 62% female, 43% non-Caucasian - diversity makes us a stronger community

We’re the first company in the space and we’re growing quickly because our customers love us. Here’s how we stand out:

  • We focus on creating delightful customers. Just take a look at what they say about us.
  • We’re building a lasting business. Unlike others in our space, we have a sustainable business model with attractive profit margins. We invest those profits back into the business to fuel the pursuit of solving even more problems for our customers.
  • We’re helping hundreds of local businesses grow. We’ve partnered with more than 600 independent restaurants and caterers nationwide and provide a critical stream of revenue to our small business partners.
  • When it comes to diversity, we walk the walk. Diversity and openness makes us better as a community -- and unlike most tech companies, our numbers back it up: 62% female and 43% non-Caucasian. Applicants are considered without regards to race, color, religion, national origin, sex, marital status, ancestry, physical or mental disability, veteran status, sexual orientation, gender identity, or other protected statuses.
  • We operate in six (and growing) major metro areas in the United States.

We’re a small, smart product development team. We value communication, transparency, give-a-damn-ness, with a strong bias toward responsibility & autonomy. Expect to be challenged, learn, teach, and -- like all of us here -- grow in how you work and what you build. Working with us means you’ll gain the confidence, knowledge, and experience you need to one day build and run your own product company.

What we do:

  • Grow your team: nurture your team's skills & capabilities so they can meet your objectives.
  • Measure your results: measure your team's throughput and impact on projects. You’re in control of your own success.
  • Build & improve code across the entire stack (postgres, python, javascript, html and css) for new and existing products, though your 1st duty is to enable your team to be successful.
  • Ship code to customers every week. Nurture a culture of delivering value to customers every week.
  • Establish and participate in cross-functional teams to design and build new functionality

You:

  • 5+ years industry experience in software engineering with full-stack web applications (particularly with django, rails, or node.js)
  • 1+ years leading teams as a tech lead or engineering manager - you will be managing a small team!
  • Demonstrated track record of working across departments to drive projects to success
  • Pragmatic approach to engineering that strikes a balance between beautiful code, maintainability, and time to market
  • Solid communication chops since you'll be working with non-technical people to scope and manage projects

Questions about this job? Email our VP Engineering at huned@zerocater.com and include a link to your github profile and/or a side project. Thanks for checking us out!

Show HN: A pure Swift port of the Cassowary linear constraints solver



A Swift port of the Cassowary linear constraints solver.

let solver = Solver()
let left = Variable("left")
let mid = Variable("mid")
let right = Variable("right")

try solver.addConstraint(mid == (left + right) / 2)
try solver.addConstraint(right == left + 10)
try solver.addConstraint(right <= 100)
try solver.addConstraint(left >= 0)

solver.updateVariables()
// left.value is now 90.0
// mid.value is now 95.0
// right.value is now 100.0

try solver.addEditVariable(variable: mid, strength: Strength.STRONG)
try solver.suggestValue(variable: mid, value: 2)

solver.updateVariables()
// left.value is now 0.0
// mid.value is now 5.0
// right.value is now 10.0

Documentation can be found on CocoaDocs.

Cassowary Swift originally started as a direct port of kiwi-java by Alex Birkett.

Show HN: Minimal build system using just /bin/sh


Build systems have a tendency to create languages, and their languages have a way of growing increasingly complex over time.

They work well once you get them working in your environment, but if they don't work out of the box for some reason, they can be daunting for your users to debug.

A common source of build errors is incompatibilities between build languages on different platforms. GNU make isn't the same as BSD make. autotools can autogenerate Makefiles for endless obsolete flavors of Unix, but it only serves to hinder you on the platform you're on right now. cmake can autogenerate build files for m build tools on n platforms, which means it needs to understand m×n mappings between DSLs.

An alternative approach is to sidestep existing tools (i.e. make) rather than build atop them. This repo illustrates a way to manage a project using just plain Bourne sh, available on all Unix systems for 30 years. No place for incompatibilities to rear their heads!

Interface

Say you have a command to run:

cc input1.c input2.c -o output

To only do this work when necessary, wrap this command in an older_than block:

older_than output input1.c input2.c && {
  cc input1.c input2.c -o output
}

If the output is newer than all inputs, nothing happens.

If any of the inputs is newer than the output, the command runs and updates the output.

A sequence of such blocks can mimic any makefile, and the extra verbosity can help someone debug your project if it doesn't work for them. Every command is explicit, as is the order to try them in.
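
For instance, a hypothetical two-step build (the file names are illustrative) chains two such blocks; their order in the script plays the role of a makefile's dependency graph:

# Build the library first, then link the app against it.
older_than libgreet.a greet.c && {
  cc -c greet.c -o greet.o
  ar rcs libgreet.a greet.o
}

older_than app main.c libgreet.a && {
  cc main.c libgreet.a -o app
}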

Try it out

To see older_than at work and a more complete example of its use, try running the build script in this repo.

$ ./build
updating a.out
$ ./a.out
hello, world!
$ ./build
# no change to a.out

The build script is easy to repurpose to your needs. The older_than function in it is 16 lines of code.
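
The real implementation lives in the repo, but a minimal sketch of the idea, assuming a /bin/sh whose test builtin supports the widely available -nt extension, might look like this:

# Succeed (and announce the rebuild) when the target needs rebuilding:
# it doesn't exist yet, or at least one dependency is newer than it.
older_than() {
  target=$1; shift
  if [ ! -e "$target" ]; then
    echo "updating $target" >&2
    return 0
  fi
  for dep in "$@"; do
    if [ "$dep" -nt "$target" ]; then
      echo "updating $target" >&2
      return 0
    fi
  done
  return 1   # target is up to date; the && block is skipped
}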

Naming things [pdf] (2015)

Towards a JavaScript Binary AST


In this blog post, I would like to introduce the JavaScript Binary AST, an ongoing project that we hope will help make webpages load faster, along with a number of other benefits.

Over the years, JavaScript has grown from one of the slowest scripting languages available to a high-performance powerhouse, fast enough that it can run desktop, server, mobile and even embedded applications, whether through web browsers or other environments.

As the power of JavaScript has grown, so has the complexity of applications and their size. Whereas, twenty years ago, few websites used more than a few Kb of JavaScript, many websites and non-web applications now need to deliver and load several Mb of JavaScript before the user can start actually using the site/app.

While “several Mb of JavaScript” may sound odd, recall that a native application such as Steam weighs 3.1Mb (pure binary, without resources, without debugging symbols, without dynamic dependencies, measured on my Mac), Telegram weighs 11Mb and the Opera updater weighs 5.8Mb. I’m not including the size of a web browser, because web browsers are essentially architected out of dynamic dependencies, but I expect that both Firefox and Chromium weigh 100+ Mb.

Of course, large JavaScript source code has several costs, including:

  • heavy network transfers;
  • slow startup.

We have reached a stage at which the simple duration of parsing the JavaScript source code of a large web application such as Facebook can easily last 500ms-800ms on a fast computer – that’s before the JavaScript code can be compiled to bytecode and/or interpreted. There is very little reason to believe that JavaScript applications will get smaller with time.

So, a joint team from Mozilla and Facebook decided to get started working on a novel mechanism that we believe can dramatically improve the speed at which an application can start executing its JavaScript: the Binary AST.

The idea of the JavaScript Binary AST is simple: instead of sending text source code, what could we improve by sending binary source code?

Let me clarify: the Binary AST source code is equivalent to the text source code. It is not a new programming language, or a subset of JavaScript, or a superset of JavaScript; it is JavaScript. It is not a bytecode; rather, it is a binary representation of the source code. If you prefer, this Binary AST representation is a form of source compression, designed specifically for JavaScript and optimized to improve parsing speed. We are also building a decoder that produces perfectly readable, well-formatted source code. For the moment, the format does not preserve comments, but there is a proposal to allow comments to be preserved.

Producing a Binary AST file will require a build step and we hope that, in time, build tools such as WebPack or Babel will be able to produce Binary AST files, hence making switching to Binary AST as simple as passing a flag to the build chains already used by many JS developers.

I plan to detail the Binary AST, our benchmarks and our current status in future blog posts. For the moment, let me just mention that early experiments suggest that we can obtain both very good source compression and considerable parsing speedups.

We have been working on the Binary AST for a few months now and the project was just accepted as a Stage 1 Proposal at ECMA TC39. This is encouraging, but it will take time until you see it implemented in all JavaScript VMs and toolchains.

…compression formats

Most webservers already send JavaScript data using a compression format such as gzip or brotli. This considerably reduces the time spent waiting for the data.

What we’re doing here is a format specifically designed for JavaScript. Indeed, our early prototype uses gzip internally, among many other tricks, and has two main advantages:

  • it is designed to make parsing much faster;
  • according to early experiments, we beat gzip or brotli by a large margin.

Note that our main objective is to make parsing faster, so in the future, if we need to choose between file size and parsing speed, we are most likely to pick faster parsing. Also, the compression formats used internally may change.

…minifiers

The tool traditionally used by web developers to decrease the size of JS files is the minifier, such as UglifyJS or Google’s Closure Compiler.

Minifiers typically remove unused whitespace and comments, rewrite variable names to shorten them, and use a number of other transformations to make the program shorter.

While these tools are definitely useful, they have two main shortcomings:

  • they do not attempt to make parsing faster – indeed, we have witnessed a number of cases in which minification accidentally makes parsing slower;
  • they have the side-effect of making the JavaScript code much harder to read, by giving variables and functions unreadable names, using exotic features to pack variable declarations, etc.

By contrast, the Binary AST transformation:

  • is designed to make parsing faster;
  • maintains the source code in such a manner that it can be easily decoded and read, with all variable names, etc.

Of course, obfuscation and Binary AST transformation can be combined for applications that do not wish to keep the source code readable.

…WebAssembly

Another exciting web technology designed to improve performance in certain cases is WebAssembly (or wasm). wasm is designed to let native applications be compiled to a format that can be transferred efficiently, parsed quickly and executed at native speed by the JavaScript VM.

By design, however, wasm is limited to native code, so it doesn’t work with JavaScript out of the box.

I am not aware of any project that achieves compilation of JavaScript to wasm. While this would certainly be feasible, it would be a rather risky undertaking, as it would involve developing a compiler at least as complex as a new JavaScript VM, while making sure that it remains compatible with JavaScript (which is both a very tricky language and one whose specifications are clarified or extended at least once per year). Of course, the effort would be wasted if the resulting code were slower than today’s JavaScript VMs (which tend to be really, really fast), or so large that it makes startup prohibitively slow (because that’s the problem we are trying to solve here), or if it didn’t work with existing JavaScript libraries or (for browser applications) the DOM.

Now, exploring this would definitely be an interesting work, so if anybody wants to prove us wrong, by all means, please do it :)

…improving caching

When JavaScript code is downloaded by a browser, it is stored in the browser’s cache, so as to avoid having to re-download it later. Both Chromium and Firefox have recently improved their browsers to be able to cache not just the JavaScript source code but also the bytecode, hence side-stepping nicely the issue of parse time for the second load of a page. I have no idea of the status of Safari or Edge on the topic, so it is possible that they may have comparable technologies.

Congratulation to both teams, these technologies are great! Indeed, they nicely improve the performance of reloading a page. This works very well for pages that have not updated their JavaScript code since the last time they were accessed.

The problem we are attempting to solve with Binary AST is different: while we all have some pages that we visit and revisit often, there is a larger number of pages that we visit for the first time, in addition to the pages that we revisit but that have been updated since our latest visit. In particular, a growing number of applications get updated very, very often – for instance, Facebook ships new JavaScript code several times per day, and I would be surprised if Twitter, LinkedIn, Google Docs et al didn’t follow similar practices. Also, if you are a JS developer shipping a JavaScript application – whether web or otherwise – you want the first contact between you and your users to be as smooth as possible, which means that you want the first load (or first load since update) to be very fast, too.

These are problems that we address with Binary AST.

…we improved caching?

Additional technologies have been discussed to let browsers prefetch and precompile JS code to bytecode.

These technologies are definitely worth investigating and would also help with some of the scenarios for which we are developing Binary AST – each technology improving the other. In particular, the better resource-efficiency of Binary AST would thus help limit the resource waste when such technologies are misused, while also improving the cases in which these techniques cannot be used at all.

…we used an existing JS bytecode?

Most, if not all, JavaScript Virtual Machines already use an internal representation of code as JS bytecode. I seem to remember that at least Microsoft’s Virtual Machine supports shipping JavaScript bytecode for privileged applications.

So, one could imagine browser vendors exposing their bytecode and letting all JS applications ship bytecode. This, however, sounds like a pretty bad idea, for several reasons.

The first one affects VM developers. Once you have exposed your internal representation of JavaScript, you are doomed to maintain it. As it turns out, JavaScript bytecode changes regularly, to adapt to new versions of the language or to new optimizations. Forcing a VM to keep compatibility with an old version of its bytecode forever would be a maintenance and/or performance disaster, so I doubt that any browser/VM vendor will want to commit to this, except perhaps in a very limited setting.

The second affects JS developers. Having several bytecodes would mean maintaining and shipping several binaries – possibly several dozen if you want to fine-tune optimizations for successive versions of each browser’s bytecode. To make things worse, these bytecodes would have different semantics, leading to JS code compiled with different semantics. While this is in the realm of the possible – after all, mobile and native developers do this all the time – it would be a clear regression from the current JS landscape.

…we had a standard JS bytecode?

So what if the JavaScript VM vendors decided to come up with a novel bytecode format, possibly as an extension of WebAssembly, but designed specifically for JavaScript?

Just to be clear: I have heard people regretting that such a format did not exist but I am not aware of anybody actively working on this.

One of the reasons people have not done this yet is that designing and maintaining a bytecode for a language that changes all the time is quite complicated – doubly so for a language that is already as complex as JavaScript. More importantly, keeping interpreted JavaScript and bytecode JavaScript in sync would most likely be a losing battle, one that would eventually result in two subtly incompatible JavaScript languages, something that would deeply hurt the web.

Also, whether such a bytecode would actually help code size and performance remains to be demonstrated.

…we just made the parser faster?

Wouldn’t it be nice if we could just make the parser faster? Unfortunately, while JS parsers have improved considerably, we are long past the point of diminishing returns.

Let me quote a few steps that simply cannot be skipped or made infinitely efficient:

  • dealing with exotic encodings, Unicode byte order marks and other niceties;
  • finding out if this / character is a division operator, the start of a comment or a regular expression;
  • finding out if this ( character starts an expression, a list of arguments for a function call, a list of arguments for an arrow function, …;
  • finding out where this string (respectively string template, array, function, …) stops, which depends on all the disambiguation issues, …;
  • finding out whether this let a declaration is valid or whether it collides with another let a, var a or const a declaration – which may actually appear later in the source code;
  • upon encountering a use of eval, determining which of the 4 semantics of eval to use;
  • determining how truly local local variables are;

Ideally, VM developers would like to be able to parallelize parsing and/or delay it until we know for sure that the parsed code is actually used. Indeed, most recent VMs implement these strategies. Sadly, the numerous token ambiguities in the JavaScript syntax considerably limit the opportunities for concurrency, while the constraints on when syntax errors must be thrown considerably limit the opportunities for lazy parsing.

In either case, the VM needs to perform an expensive pre-parse step that can often backfire into being slower than regular parsing, typically when applied to minified code.

Indeed, the Binary AST proposal was designed to overcome the performance limitations imposed by the syntax and semantics of text source JavaScript.

We are posting this blog entry early because we want you, web developers and tooling developers, to be in the loop as early as possible. So far, the feedback we have gathered from both groups is pretty good, and we are looking forward to working closely with both communities.

We have completed an early prototype for benchmarking purposes (so, not really usable) and are working on an advanced prototype, both for the tooling and for Firefox, but we are still a few months away from something useful.

I will try to post more details in a few weeks’ time.


Why the Brain Needs More Downtime (2013)


Every now and then during the workweek—usually around three in the afternoon—a familiar ache begins to saturate my forehead and pool in my temples. The glare of my computer screen appears to suddenly intensify. My eyes trace the contour of the same sentence two or three times, yet I fail to extract its meaning. Even if I began the day undaunted, getting through my ever growing list of stories to write and edit, e-mails to send and respond to, and documents to read now seems as futile as scaling a mountain that continuously thrusts new stone skyward. There is so much more to do—so much work I genuinely enjoy—but my brain is telling me to stop. It's full. It needs some downtime.

Freelance writer and meditation teacher Michael Taft has experienced his own version of cerebral congestion. “In a normal working day in modern America, there’s a sense of so much coming at you at once, so much to process that you just can’t deal with it all,” Taft says. In 2011, while finalizing plans to move from Los Angeles to San Francisco, he decided to take an especially long recess from work and the usual frenzy of life. After selling his home and packing all his belongings in storage, he traveled to the small rural community of Barre, Mass., about 100 kilometers west of Boston, where every year people congregate for a three-month-long “meditation marathon.”

Taft had been on similar retreats before, but never one this long. For 92 days he lived at Insight Meditation Society’s Forest Refuge facility, never speaking a word to anyone else. He spent most of his time meditating, practicing yoga and walking through fields and along trails in surrounding farmland and woods, where he encountered rafters of turkeys leaping from branches, and once spotted an otter gamboling in a swamp. Gradually, his mind seemed to sort through a backlog of unprocessed data and to empty itself of accumulated concerns. “When you go on a long retreat like that there’s a kind of base level of mental tension and busyness that totally evaporates,” Taft says. “I call that my ‘mind being not full.’ Currently, the speed of life doesn’t allow enough interstitial time for things to just kind of settle down.”

Many people in the U.S. and other industrialized countries would wholeheartedly agree with Taft’s sentiments, even if they are not as committed to meditation. A 2010 LexisNexis survey of 1,700 white collar workers in the U.S., China, South Africa, the U.K. and Australia revealed that on average employees spend more than half their workdays receiving and managing information rather than using it to do their jobs; half of the surveyed workers also confessed that they were reaching a breaking point after which they would not be able to accommodate the deluge of data. In contrast to the European Union, which mandates 20 days of paid vacation, the U.S. has no federal laws guaranteeing paid time off, sick leave or even breaks for national holidays. In the Netherlands 26 days of vacation in a given year is typical. In America, Canada, Japan and Hong Kong workers average 10 days off each year. Yet a survey by Harris Interactive found that, at the end of 2012, Americans had an average of nine unused vacation days. And in several surveys Americans have admitted that they obsessively check and respond to e-mails from their colleagues or feel obliged to get some work done in between kayaking around the coast of Kauai and learning to pronounce humuhumunukunukuapua'a.

To summarize, Americans and their brains are preoccupied with work much of the time. Throughout history people have intuited that such puritanical devotion to perpetual busyness does not in fact translate to greater productivity and is not particularly healthy. What if the brain requires substantial downtime to remain industrious and generate its most innovative ideas? "Idleness is not just a vacation, an indulgence or a vice; it is as indispensable to the brain as vitamin D is to the body, and deprived of it we suffer a mental affliction as disfiguring as rickets," essayist Tim Kreider wrote in The New York Times. "The space and quiet that idleness provides is a necessary condition for standing back from life and seeing it whole, for making unexpected connections and waiting for the wild summer lightning strikes of inspiration—it is, paradoxically, necessary to getting any work done."

In making an argument for the necessity of mental downtime, we can now add an overwhelming amount of empirical evidence to intuition and anecdote. Why giving our brains a break now and then is so important has become increasingly clear in a diverse collection of new studies investigating: the habits of office workers and the daily routines of extraordinary musicians and athletes; the benefits of vacation, meditation and time spent in parks, gardens and other peaceful outdoor spaces; and how napping, unwinding while awake and perhaps the mere act of blinking can sharpen the mind. What research to date also clarifies, however, is that even when we are relaxing or daydreaming, the brain does not really slow down or stop working. Rather—just as a dazzling array of molecular, genetic and physiological processes occur primarily or even exclusively when we sleep at night—many important mental processes seem to require what we call downtime and other forms of rest during the day. Downtime replenishes the brain’s stores of attention and motivation, encourages productivity and creativity, and is essential to both achieve our highest levels of performance and simply form stable memories in everyday life. A wandering mind unsticks us in time so that we can learn from the past and plan for the future. Moments of respite may even be necessary to keep one’s moral compass in working order and maintain a sense of self.

The rest is history
For much of the 20th century many scientists regarded the idea that the brain might be productive during downtime as ludicrous. German neurologist Hans Berger disagreed. In 1929, after extensive studies using an electroencephalogram—a device he invented to record electrical impulses in the brain by placing a net of electrodes on the scalp—he proposed that the brain is always in “a state of considerable activity,” even when people were sleeping or relaxing. Although his peers acknowledged that some parts of the brain and spinal cord must work nonstop to regulate the lungs and heart, they assumed that when someone was not focusing on a specific mental task, the brain was largely offline; any activity picked up by an electroencephalogram or other device during rest was mostly random noise. At first, the advent of functional magnetic resonance imaging (fMRI) in the early 1990s only strengthened this view of the brain as an exquisitely frugal organ switching on and off its many parts as needed. By tracing blood flow through the brain, fMRI clearly showed that different neural circuits became especially active during different mental tasks, summoning extra blood full of oxygen and glucose to use as energy.

By the mid 1990s, however, Marcus Raichle of Washington University in Saint Louis and his colleagues had demonstrated that the human brain is in fact a glutton, constantly demanding 20 percent of all the energy the body produces and requiring only 5 to 10 percent more energy than usual when someone solves calculus problems or reads a book. Raichle also noticed that a particular set of scattered brain regions consistently became less active when someone concentrated on a mental challenge, but began to fire in synchrony when someone was simply lying supine in an fMRI scanner, letting their thoughts wander. Likewise, Bharat Biswal, now at the New Jersey Institute of Technology, documented the same kind of coordinated communication between disparate brain regions in people who were resting. Many researchers were dubious, but further studies by other scientists confirmed that the findings were not a fluke. Eventually this mysterious and complex circuit that stirred to life when people were daydreaming became known as the default mode network (DMN). In the last five years researchers discovered that the DMN is but one of at least five different resting-state networks—circuits for vision, hearing, movement, attention and memory. But the DMN remains the best studied and perhaps the most important among them.

In a recent thought-provoking review of research on the default mode network, Mary Helen Immordino-Yang of the University of Southern California and her co-authors argue that when we are resting the brain is anything but idle and that, far from being purposeless or unproductive, downtime is in fact essential to mental processes that affirm our identities, develop our understanding of human behavior and instill an internal code of ethics—processes that depend on the DMN. Downtime is an opportunity for the brain to make sense of what it has recently learned, to surface fundamental unresolved tensions in our lives and to swivel its powers of reflection away from the external world toward itself. While mind-wandering we replay conversations we had earlier that day, rewriting our verbal blunders as a way of learning to avoid them in the future. We craft fictional dialogue to practice standing up to someone who intimidates us or to reap the satisfaction of an imaginary harangue against someone who wronged us. We shuffle through all those neglected mental post-it notes listing half-finished projects and we mull over the aspects of our lives with which we are most dissatisfied, searching for solutions. We sink into scenes from childhood and catapult ourselves into different hypothetical futures. And we subject ourselves to a kind of moral performance review, questioning how we have treated others lately. These moments of introspection are also one way we form a sense of self, which is essentially a story we continually tell ourselves. When it has a moment to itself, the mind dips its quill into our memories, sensory experiences, disappointments and desires so that it may continue writing this ongoing first-person narrative of life.

Related research suggests that the default mode network is more active than is typical in especially creative people, and some studies have demonstrated that the mind obliquely solves tough problems while daydreaming—an experience many people have had while taking a shower. Epiphanies may seem to come out of nowhere, but they are often the product of unconscious mental activity during downtime. In a 2006 study, Ap Dijksterhuis and his colleagues asked 80 University of Amsterdam students to pick the best car from a set of four that—unbeknownst to the students—the researchers had previously ranked based on size, mileage, maneuverability and other features. Half the participants got four minutes to deliberate after reviewing the specs; the researchers prevented the other 40 from pondering their choices by distracting them with anagrams. Yet the latter group made far better decisions. Solutions emerge from the subconscious in this way only when the distracting task is relatively simple, such as solving an anagram or engaging in a routine activity that does not necessitate much deliberate concentration, like brushing one's teeth or washing dishes. With the right kind of distraction the default mode network may be able to integrate more information from a wide range of brain regions in more complex ways than when the brain is consciously working through a problem.

During downtime, the brain also concerns itself with more mundane but equally important duties. For decades scientists have suspected that when an animal or person is not actively learning something new, the brain consolidates recently accumulated data, memorizing the most salient information, and essentially rehearses recently learned skills, etching them into its tissue. Most of us have observed how, after a good night’s sleep, the vocab words we struggled to remember the previous day suddenly leap into our minds or that technically challenging piano song is much easier to play. Dozens of studies have confirmed that memory depends on sleep.

More recently, scientists have documented what may well be physical evidence of such memory consolidation in animals that are awake but resting. When exploring a new environment—say, a maze—a rat’s brain crackles with a particular pattern of electrical activity. A little while later, when that rat is sitting around, its brain sometimes re-creates a nearly identical pattern of electrical impulses zipping between the same set of neurons. The more those neurons communicate with one another, the stronger their connections become; meanwhile neglected and irrelevant neural pathways wither. Many studies indicate that in such moments—known as sharp-wave ripples—the rat is forming a memory.

In a 2009 study, Gabrielle Girardeau, now at New York University, and her colleagues trained rats to find Cocoa Krispies consistently placed in the same branches of an eight-armed maze. Following training sessions, while the rats were either sleeping or awake and resting, the researchers mildly zapped the brains of one group of rodents in a way that disrupted any sharp-wave ripples. Another group of rats received small electric shocks that did not interfere with ripples. The former group had a much harder time remembering where to find the food.

Several studies suggest that something similar happens in the human brain. In order to control their seizures, people with epilepsy sometimes undergo surgery that involves drilling through the skull and implanting electrodes in the brain. In such cases, some patients agree to let scientists record electrical activity picked up by those electrodes—a unique situation that avoids endangering people solely for the sake of neuroscience. In a 2008 study, Nikolai Axmacher of the University of Bonn and his colleagues showed epilepsy patients a series of photos of houses and landscapes and tested their memories of those pictures following one-hour naps. During the naps, the researchers recorded electrical activity in a region of the brain known as the rhinal cortex, which is crucial for certain kinds of memory. As expected, the more sharp-wave ripples pulsed through the rhinal cortex, the better patients remembered the pictures. And such ripples occurred most frequently not when the patients were napping, but rather when they were lying awake in bed in the dark shortly before or after falling asleep.

A 2009 study by Chris Miall of the University of Birmingham and his colleagues complements this research. Twenty-four volunteers scooted inside an fMRI scanner and attempted to move a cursor in the center of a computer screen toward various pixelated targets by twiddling a joystick. Half the volunteers worked with a straightforward setup: when they moved the joystick left, the cursor moved left. The other half was stuck with a frustratingly fickle contraption: imagine trying to get the hang of a computer mouse that continuously rotates clockwise—suddenly right is up and left is down. All the participants rested inside the scanner before and after focusing on their assigned task.

Activity in resting state networks of the former group did not change much from one break to the next. But in the brains of volunteers who had previously struggled with the trick joystick, activity in two resting state networks was much more in sync than usual. This coordination likely reflects strengthened connections between those two circuits, Miall suspects, which in turn indicates that during rest the brain was likely ingraining what it had learned about working a strange and confusing tool. In contrast, the brains of volunteers that operated the conventional joystick had not learned anything new. In a yet-to-be-published follow-up experiment in which volunteers learned to press buttons in a particular sequence—and another study in which people studied a new language—Miall and his teammates reached similar conclusions about the importance of brain activity during rest for learning.

A tantalizing piece of evidence suggests that the brain may take advantage of every momentary lapse in attention to let resting state networks take over. In a study published last year, Tamami Nakano of Osaka University recorded electrical impulses in people's brains as they watched clips of British comedian Mr. Bean. The results revealed that the brain can fire up the DMN in the blink of an eye—literally. Every time we blink, circuits we use to consciously direct attention go quiet and the DMN briefly wakes up. Exactly what the DMN accomplishes in these interludes remains unclear, but it could very well be a form of memory consolidation or a moment for attention-directing neurons to catch their breath.

All in a day’s work
That learning and memory depend on both sleep and waking rest may partially explain why some of the most exceptional artists and athletes among us fall into a daily routine of intense practice punctuated by breaks and followed by a lengthy period of recuperation. Psychologist K. Anders Ericsson of The Florida State University has spent more than 30 years studying how people achieve the highest levels of expertise. Based on his own work and a thorough review of the relevant research, Ericsson has concluded that most people can engage in deliberate practice—which means pushing oneself beyond current limits—for only an hour without rest; that extremely talented people in many different disciplines—music, sports, writing—rarely practice more than four hours each day on average; and that many experts prefer to begin training early in the morning when mental and physical energy is readily available. “Unless the daily levels of practice are restricted, such that subsequent rest and nighttime sleep allow the individuals to restore their equilibrium,” Ericsson wrote, “individuals often encounter overtraining injuries and, eventually, incapacitating ‘burnout.’”

These principles are derived from the rituals of the exceptional, but they are useful for just about anyone in any profession, including typical nine-to-fivers. Corporate America may never sanction working only four hours a day, but research suggests that to maximize productivity we should reform the current model of consecutive 40-hour workweeks separated only by two-day weekends and sometimes interrupted by short vacations.

Psychologists have established that vacations have real benefits. Vacations likely revitalize the body and mind by distancing people from job-related stress; by immersing people in new places, cuisines and social circles, which in turn may lead to original ideas and insights; and by giving people the opportunity to get a good night’s sleep and to let their minds drift from one experience to the next, rather than forcing their brains to concentrate on a single task for hours at a time. But a recent comprehensive meta-analysis by Jessica de Bloom, now at the University of Tampere in Finland, demonstrates that these benefits generally fade within two to four weeks. In one of de Bloom’s own studies, 96 Dutch workers reported feeling more energetic, happier, less tense and more satisfied with their lives than usual during a winter sports vacation between seven and nine days long. Within one week of returning to work, however, all the feelings of renewal dissipated. A second experiment involving four and five days of respite came to essentially the same conclusion. A short vacation is like a cool shower on an oppressively muggy summer day—a refreshing yet fleeting escape.

Instead of limiting people to a single weeklong vacation each year or a few three-day vacations here and there, companies should also allow their employees to take a day or two off during the workweek and encourage workers to banish all work-related tasks from their evenings. In a four-year study, Leslie Perlow of the Harvard Business School and her colleagues tracked the work habits of employees at the Boston Consulting Group. Each year they insisted that employees take regular time off, even when they did not think they should be away from the office. In one experiment each of five consultants on a team took a break from work one day a week. In a second experiment every member of a team scheduled one weekly night of uninterrupted personal time, even though they were accustomed to working from home in the evenings.

Everyone resisted at first, fearing they would only be postponing work. But over time the consultants learned to love their scheduled time off because it consistently replenished their willingness and ability to work, which made them more productive overall. After five months employees experimenting with deliberate periodic rest were more satisfied with their jobs, more likely to envision a long-term future at the company, more content with their work–life balance and prouder of their accomplishments.

Tony Schwartz, a journalist and CEO of The Energy Project, has made a career out of teaching people to be more productive by changing the way they think about downtime. His strategy relies in part on the idea that anyone can learn to regularly renew their reservoirs of physical and mental energy. "People are working so many hours that not only in most cases do they not have more hours they could work, but there's also strong evidence that when they work for too long they get diminishing returns in terms of health costs and emotional costs," Schwartz says. "If time is no longer an available resource, what is? The answer is energy."

Schwartz and his colleagues encourage workers to get seven to eight hours of sleep every night, to use all their vacation days, take power naps and many small breaks during the day, practice meditation, and tackle the most challenging task first thing in the morning so they can give it their full attention. "Many things we are suggesting are in some ways very simple and on some level are things people already knew, but they are moving at such extraordinary speed that they have convinced themselves they are not capable of those behaviors," Schwartz says.

The Energy Project’s approach was a tough sell at first—because it contradicts the prevailing ethos that busier is better—but the organization has so far successfully partnered with Google, Apple, Facebook, Coca-Cola, Green Mountain Coffee, Ford, Genentech and a wide range of Fortune 500 companies. To gauge how employees improve over time, Schwartz measures their level of engagement—that is, how much they like their jobs and are willing to go above and beyond their basic duties—a trait that many studies have correlated with performance. Admittedly, this is not the most precise or direct measurement, but Schwartz says that time and again his strategies have pushed workers' overall engagement well above the average level and that Google has been satisfied enough to keep up the partnership for more than five years.

Put your mind at rest
Many recent studies have corroborated the idea that our mental resources are continuously depleted throughout the day and that various kinds of rest and downtime can both replenish those reserves and increase their volume. Consider, for instance, how even an incredibly brief midday nap enlivens the mind.

By adulthood, most of us have adopted the habit of sleeping through the night and staying awake for most or all of the day—but this may not be ideal for our mental health and is certainly not the only way people have slept throughout history. In somewhat the same way that hobbits in Tolkien's Middle Earth enjoy a first and second breakfast, people living without electricity in preindustrial Europe looked forward to a first and second sleep divided by about an hour of crepuscular activity. During that hour, they would pray, relieve themselves, smoke tobacco, have sex and even visit neighbors. Some researchers have proposed that people are also physiologically inclined to snooze during a 2 P.M. to 4 P.M. “nap zone”—or what some might call the afternoon slump—because the brain prefers to toggle between sleep and wake more than once a day. As far back as the first century B.C. the Romans regularly took midafternoon breaks, which they called meridiari from the Latin for midday. Under the influence of Roman Catholicism, noon became known as sexta (the sixth hour, according to their clocks), a time for rest and prayer. Eventually sexta morphed into siesta.

Plenty of studies have established that naps sharpen concentration and improve the performance of both the sleep-deprived and the fully rested on all kinds of tasks, from driving to medical care. A 2004 study, for example, analyzed four years of data on highway car accidents involving Italian policemen and concluded that the practice of napping before night shifts reduced the prospective number of collisions by 48 percent. In a 2002 study by Rebecca Smith-Coggins of Stanford University and her colleagues, 26 physicians and nurses working three consecutive 12-hour night shifts napped for 40 minutes at 3 A.M. while 23 of their colleagues worked continuously without sleeping. Although doctors and nurses that had napped scored lower than their peers on a memory test at 4 A.M., at 7:30 A.M. they outperformed the no-nap group on a test of attention, more efficiently inserted a catheter in a virtual simulation and were more alert during an interactive simulation of driving a car home.

Long naps work great when people have enough time to recover from “sleep inertia”—post-nap grogginess that, in some cases, can take more than two hours to fade. In other situations micronaps may be a smarter strategy. An intensive 2006 study by Amber Brooks and Leon Lack of Flinders University in Australia and their colleagues pitted naps of five, 10, 20 and 30 minutes against one another to find out which was most restorative. Over a span of three years 24 college students periodically slept for only five hours on designated nights. The day after each of those nights they visited the lab to nap and take tests of attention that required them to respond quickly to images on a screen, complete a word search and accurately copy sequences of arcane symbols.

A five-minute nap barely increased alertness, but naps of 10, 20 and 30 minutes all improved the students’ scores. However, volunteers that napped 20 or 30 minutes had to wait half an hour or more for their sleep inertia to wear off before regaining full alertness, whereas 10-minute naps immediately enhanced performance just as much as the longer naps without any grogginess. An explanation for this finding, Brooks and Lack speculate, may involve the brain’s so-called “sleep switch.” Essentially, one cluster of neurons is especially important for keeping us awake, whereas another distinct circuit induces sleepiness. When neurons in one region fire rapidly they directly inhibit the firing of neurons in the other region, thereby operating as a sleep/wake switch. Neurons in the wake circuit likely become fatigued and slow down after many hours of firing during the day, which allows the neurons in the sleep circuit to speed up and initiate the flip to a sleep state. Once someone begins to doze, however, a mere seven to 10 minutes of sleep may be enough to restore the wake-circuit neurons to their former excitability.

Although some start-ups and progressive companies provide their employees with spaces to nap at the office, most workers in the U.S. do not have that option. An equally restorative and likely far more manageable solution to mental fatigue is spending more time outdoors—in the evenings, on the weekends and even during lunch breaks by walking to a nearby park, riverfront or anywhere not dominated by skyscrapers and city streets. Marc Berman, a psychologist at the University of South Carolina and a pioneer of a relatively new field called ecopsychology, argues that whereas the hustle and bustle of a typical city taxes our attention, natural environments restore it. Contrast the experience of walking through Times Square in New York City—where the brain is ping-ponged between neon lights, honking taxis and throngs of tourists—with a day hike in a nature reserve, where the mind is free to leisurely shift its focus from the calls of songbirds to the gurgling and gushing of rivers to sunlight falling through every gap in the tree branches and puddling on the forest floor.

In one of the few controlled ecopsychology experiments, Berman asked 38 University of Michigan students to study lists of random numbers and recite them from memory in reverse order before completing another attention-draining task in which they memorized the locations of certain words arranged in a grid. Half the students subsequently strolled along a predefined path in an arboretum for about an hour whereas the other half walked the same distance through highly trafficked streets of downtown Ann Arbor for the same period of time. Back at the lab the students memorized and recited digits once again. On average, volunteers that had ambled among trees recalled 1.5 more digits than the first time they took the test; those who had walked through the city improved by only 0.5 digits—a small but statistically significant difference between the two groups.

Beyond renewing one's powers of concentration, downtime can in fact bulk up the muscle of attention—something that scientists have observed repeatedly in studies on meditation. There are almost as many varieties and definitions of meditation as there are people who practice it. Although meditation is not equivalent to zoning out or daydreaming, many styles challenge people to sit in a quiet space, close their eyes and turn their attention away from the outside world toward their own minds. Mindfulness meditation, for example, generally refers to a sustained focus on one’s thoughts, emotions and sensations in the present moment. For many people, mindfulness is about paying close attention to whatever the mind does on its own, as opposed to directing one’s mind to accomplish this or that.

Mindfulness training has become more popular than ever in the last decade as a strategy to relieve stress, anxiety and depression. Many researchers acknowledge that studies on the benefits of mindfulness often lack scientific rigor, use too few participants and rely too heavily on people’s subjective reports, but at this point they have gathered enough evidence to conclude that meditation can indeed improve mental health, hone one’s ability to concentrate and strengthen memory. Studies comparing long-time expert meditators with novices or people who do not meditate often find that the former outperform the latter on tests of mental acuity.

In a 2009 study, for example, Sara van Leeuwen of Johann Wolfgang Goethe University in Germany and her colleagues tested the visual attention of three groups of volunteers: 17 adults around 50 years old with up to 29 years of meditation practice; 17 people of the same age and gender who were not longtime meditators; and another 17 young adults who had never meditated before. In the test, a series of random letters flashed on a computer screen, concealing two digits in their midst. Volunteers had to identify both numerals and to guess if they did not glimpse one in time; recognizing the second number is often difficult because earlier images mask it. Performance on such tests usually declines with age, but the expert meditators outscored both their peers and the younger participants.

Heleen Slagter of Leiden University in Amsterdam and her colleagues used the same type of attention test in a 2007 study to compare 17 people who had just completed a three-month meditation retreat in Barre, Mass., with 23 mindfulness-curious volunteers who were meditating around 20 minutes a day. Both groups were evenly matched before their training, but when the retreat was over the meditation marathoners trumped the novices. Judging by recordings from an electroencephalogram, 90 days of meditation likely made the brain more efficient, so that it used up less available attention to successfully complete the test.

Rather profound changes to the brain's structure and behavior likely underlie many of these improvements. Numerous studies have shown that meditation strengthens connections between regions of the default mode network, for example, and can help people learn to more effectively shift between the DMN and circuits that are most active when we are consciously fixated on a task. Over time expert meditators may also develop a more intricately wrinkled cortex—the brain’s outer layer, which is necessary for many of our most sophisticated mental abilities, like abstract thought and introspection. Meditation appears to increase the volume and density of the hippocampus, a seahorse-shaped area of the brain that is absolutely crucial for memory; it thickens regions of the frontal cortex that we rely on to rein in our emotions; and it stymies the typical wilting of brain areas responsible for sustaining attention as we get older.

Just how quickly meditation can noticeably change the brain and mind is not yet clear. But a handful of experiments suggest that a couple weeks of meditation or a mere 10 to 20 minutes of mindfulness a day can whet the mind—if people stick with it. Likewise, a few studies indicate that meditating daily is ultimately more important than the total hours of meditation over one’s lifetime.

In a 2007 study by Richard Chambers of the University of Melbourne, 40 people between the ages of 21 and 63 took various tests of attention and working memory, a collection of mental talents that allow someone to temporarily store and manipulate information. Half the volunteers completed the tests immediately before participating in an intensive 10-day meditation course—something they had never done before—and took the same tests again seven to 10 days after the course ended. The other half also took the tests on two occasions 21 days apart but did not practice any meditation. Whereas people who meditated performed quite a bit better on the tests the second time around, those who did not meditate showed no meaningful improvement. Similarly, in a 2007 study, 40 Chinese college students scored higher on attention tests after a mere 20 minutes of mindfulness-related meditation a day for five days, whereas 40 of their peers who did not meditate did not improve. And as little as 12 minutes of mindfulness meditation a day helped prevent the stress of military service from deteriorating the working memory of 34 U.S. marines in a 2011 study conducted by Amishi Jha, now at the University of Miami, and her colleagues.

"When people in the military have a gym they will work out in the gym. When they are on the side of a mountain they will make do with what they have and do push-ups to stay in shape,” Jha says. “Mindfulness training may offer something similar for the mind. It's low-tech and easy to implement." In her own life, Jha looks for any and all existing opportunities to practice mindfulness, such as her 15-minute trip to and from work each day.

Likewise, Michael Taft advocates deliberate mental breaks during "all the in-between moments" in an average day—a subway ride, lunch, a walk to the bodega. He stresses, though, that there's a big difference between admiring the idea of more downtime and committing to it in practice. "Getting out into nature on the weekends, meditating, putting away our computers now and then—a lot of it is stuff we already know we should probably do," he says. "But we have to be a lot more diligent about it. Because it really does matter."

Editor's Note: This article originally stated that researchers sometimes implant electrodes in the brains of epilepsy patients undergoing surgery. In fact, doctors implant these electrodes as part of the surgery and researchers record from them. The text has been edited to reflect this distinction.


Vulscan – Vulnerability Scanning with Nmap – Hack4Net


Vulscan is a module that enhances Nmap into a vulnerability scanner. The Nmap option -sV enables version detection per service, which is used to determine potential flaws according to the identified product. The data is looked up in an offline version of VulDB.
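
A typical invocation looks something like the sketch below; the clone location and the Nmap scripts path are assumptions that vary between systems:

# Install the module into Nmap's script directory (path may differ per system)
git clone https://github.com/scipag/vulscan scipag_vulscan
ln -s `pwd`/scipag_vulscan /usr/share/nmap/scripts/vulscan

# Version-scan a host and match detected services against the offline databases
nmap -sV --script=vulscan/vulscan.nse www.example.com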

DATABASES

The following pre-installed databases are available at the moment:

The Three Pillars of Healthy Open Source Communities

$
0
0

Earlier this month, Zalando Tech hosted a panel discussion in Berlin with industry experts on open source.

The discussion was lively and I won't try to sum it up here. But it became clear to me that healthy open source communities have three pillars:

  • Be Nice

    Nobody wants to spend time in a space that is hostile or unwelcoming. People are the lifeblood of your project and your community, and you should do your best to make them feel good.

  • Value non-code contributions

    There are many potential contributors to your project: designers, marketing people, meetup organisers, legal experts, YouTube pros, musicians, and so on. All of these contributions are important to the overall success of your project, just like code is. If you want to encourage these sorts of contributions, you should actively seek out and reward them.

  • Encourage

    Sometimes people would like to make a contribution but don’t know how, or don’t know where to start. Make it easy to find small tasks to work on, and actively support people as much as possible. If you’re lucky, some of these people may go on to become long-standing contributors.

Open source projects live and die by the communities they manage to create, grow, and nurture. Community should not be an afterthought. It should be your number one concern. Because with a healthy community, you can pretty much solve any other problem. And the bigger and the healthier your community, the more chance you have of being able to attract more contributors and grow the project.

But what’s the best way to get started? Well, thriving communities such as Rust have had a code of conduct as their foundation from the start. A code of conduct creates a safe environment where everybody knows what is and what isn’t okay. This a great place to start if you want to start taking your community seriously.

GOP lawmakers shamed on billboards for trying to repeal net neutrality rules


Pro-net neutrality activist group Fight for the Future has put up a series of billboards shaming Republican members of Congress who want to eliminate the Federal Communications Commission's net neutrality rules and classification of broadband providers as common carriers.

The billboards in the lawmakers' home states urge people to contact their elected officials and say that a net neutrality repeal will lead to "slower, censored, and more expensive Internet." The signs were paid for by hundreds of small donations, the group said. Broadband providers Comcast, Verizon, and Charter get shoutouts on the billboards as well.

“Voters from across the political spectrum all agree that they don’t want companies like Comcast and Verizon dictating what they can see and do online," Fight for the Future Campaign Director Evan Greer said in an announcement yesterday. "No one is fooled by corrupt lawmakers’ attempts to push for bad legislation while they strip Internet users of protections at the FCC. Hundreds of people donated to make these billboards possible. When you come for the Internet, the Internet comes for you.”

Fight for the Future also helped organize the recent Day of Action to Save Net Neutrality.

The group said its billboards "feature some of the few members of Congress who came out with early support for [the] FCC’s plan to repeal net neutrality rules." They include Rep. Marsha Blackburn (R-Tenn.), who chairs a telecommunications subcommittee and previously filed legislation she calls the "Internet Freedom Act" to overturn the FCC rules. After FCC Chairman Ajit Pai proposed a repeal of the rules this year, Blackburn called it a "positive step" that will make sure the Internet is not "under heavy government control."

One billboard pictures Senator John Thune (R-S.D.), chairman of the Senate Commerce Committee, who proposes overturning the common carrier classification while enacting a "permanent legislative solution for net neutrality that would ban blocking, throttling, and paid prioritization of Internet traffic."

There are also billboards for Speaker of the House Paul Ryan (R-Wisc.), House Majority Leader Kevin McCarthy (R-Calif.), Rep. Tom Graves (R-Ga.), and Sen. Roger Wicker (R-Miss.). As Fight for the Future noted, all of them have supported efforts to undo the current net neutrality rules.

"The billboards highlight the increasing scrutiny on Congress, [which has] important oversight authority over the FCC," Fight for the Future's announcement said. "With no viable legislation on the table, net neutrality supporters remain opposed to any attempt at legislation that would undermine the strong rules at the FCC, which were fought for by millions of Americans."

In May, the activist group put up similar billboards featuring Republican members of Congress who voted to eliminate broadband privacy rules.

Disclosure: The Advance/Newhouse Partnership, which owns about 13 percent of Charter, is part of Advance Publications. Advance Publications owns Condé Nast, which owns Ars Technica.

Google's stance on neo-Nazis 'dangerous', says EFF

Image caption: Events in Charlottesville have spurred a national conversation in the US about far-right groups, free speech and censorship. (Stop Fascism protest sign outside the White House; Getty Images)

Decisions by Google, GoDaddy and Cloudflare to eject a neo-Nazi site from their services were "dangerous", a US-based digital rights group has said.

The Daily Stormer had denigrated 32-year-old Heather Heyer, who died while protesting against a far-right rally in Charlottesville.

This led to a backlash in which multiple web firms kicked the site off their platforms.

But the Electronic Frontier Foundation (EFF) has now criticised this response.

"We strongly believe that what GoDaddy, Google, and Cloudflare did here was dangerous," the EFF said.

"Because internet intermediaries, especially those with few competitors, control so much online speech, the consequences of their decisions have far-reaching impacts on speech around the world."

It added that it believed "no-one" - including the government and private companies - should decide who is able to speak or not.

"We wholeheartedly agree with the concerns raised by the EFF," said Cloudflare chief executive Matthew Prince.

"They reflect the same concerns we raised in our blog."

Mr Prince explained that he made his decision after the Daily Stormer's administrators suggested that Cloudflare supported their cause.

Google and GoDaddy said earlier in the week that they were cancelling the Daily Stormer's domain registrations because the site had violated their terms of service.

In the dark

The Daily Stormer is currently inaccessible on the open web, after various domain providers and hosting firms - including one in Russia - banned it from their services.

However, it has relocated to the dark web.

Dark web network Tor has said it has no plans to stop the Daily Stormer from using its technology.

"Tor is designed to defend human rights and privacy by preventing anyone from censoring things, even us," the Tor Project explained in a blog post.

But the list of businesses that have shut out the Daily Stormer and other neo-Nazi or white nationalist sites has now grown very large.

Payment giants Mastercard, Visa, Paypal and American Express all said this week that they would take a tough stance on sites that engaged in illegal activities.

Paypal, for example, mentioned sites that incite hate, racial intolerance or violence.

And music streaming services offered by Google, Deezer and Spotify have said they would remove music that incites violence, hatred or racism.

Spotify said: "We are glad to have been alerted to this content - and have already removed many of the bands identified, while urgently reviewing the remainder."

1.8M Chicago voter records exposed online


Election Systems & Software (ES&S), the Nebraska-based voting software and election management company, confirmed the leak on Thursday.

In a blog post, the company said the voter data leak contained names, addresses, birthdates, partial social security numbers and some driver's license and state ID numbers stored in backup files on a server. Authorities alerted ES&S to the leak on Aug. 12, and the data was secured.

A security researcher from UpGuard discovered the breach.

The data did not contain any voting information, like the results of how someone voted.

Jim Allen, a spokesman for the Chicago Board of Elections, said the leak did not contain or affect anyone's voting ballots, which are handled by a different vendor.

"We deeply regret this," Allen said. "It was a violation of our information security protocol by the vendor."

Forensic experts are investigating the ES&S leak. A spokesperson for ES&S said in a statement the firm has no indication that the information had been previously accessed by people other than the researchers who discovered it.

Related: How much of my voter data is public?

UpGuard security researcher Jon Hendren found the cache of data exposed on an Amazon Web Services server Friday night. He handed it off to analyst Chris Vickery, who downloaded the information to examine the content. Vickery shared his findings with local and Illinois state authorities Saturday morning.

Amazon buckets -- where data is stored -- are private by default. This means someone at ES&S misconfigured a security setting and exposed the data online.
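
As a rough illustration of the kind of setting involved (a sketch using the standard AWS CLI; the bucket name here is hypothetical), an administrator can inspect and tighten a bucket's access control like this:

# List the bucket's access grants; a grant to the "AllUsers" group means anyone can read it
aws s3api get-bucket-acl --bucket example-ess-backups

# Reset the ACL so that only the owning account has access
aws s3api put-bucket-acl --bucket example-ess-backups --acl private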

"This data would be an identity thief's dream to find," Vickery told CNN Tech. He also said the leaked files contained some voting system administration credentials.

Researchers at UpGuard are responsible for discovering a number of major data leaks from publicly available databases online, including millions of people's information from a GOP analytics company and Verizon. It also recently discovered critical infrastructure data exposed by a Texas energy firm.

Related: 100 experts tell Congress how to improve election security

Data breaches like this happen far more frequently than the public might realize.

Vickery said when he devotes one day to looking for exposed servers, he finds dozens of data breaches. Some are not as big as schematics on energy companies or millions of partial social security numbers, but he said it's something companies need to be much more aware of.

"It's really kind of an epidemic that people don't have any idea about," Vickery said. "System administrators leaving things open and exposed to the public internet is like a cancer on security."

CNNMoney (San Francisco) First published August 17, 2017: 7:10 PM ET

Inside one of the world’s largest Bitcoin mines


One of the world’s largest bitcoin mines is located in the SanShangLiang industrial park on the outskirts of the city of Ordos, in Inner Mongolia, an autonomous region that’s part of China. It’s 400 miles from China’s capital, Beijing, and 35 miles from the city of Baotou. The mine is just off the highway, near the intersection of Latitude 3rd Road and Longitude 3rd Road. It sits amidst abandoned, half-built factories—victims of an earlier coal mining boom that fizzled out, leaving Ordos and its outlying areas littered with the shells of unfinished buildings.

The mine belongs to Bitmain, a Beijing-based company that also makes mining machines that perform billions of calculations per second to try and crack the cryptographic puzzle that yields new bitcoins. Fifty Bitmain staff, many of them local to Ordos, watch over eight buildings crammed with 25,000 machines that are cranking through calculations 24 hours a day. One of the buildings is devoted to mining litecoin, an ascendant cryptocurrency. The staff live on-site in a building with a dormitory, offices, a canteen, and a repair center. For recreation, they play basketball on an unfinished cement court.

Bitcoin mining consumes enormous amounts of electricity, which is why miners seek out locations that offer cheap energy. The Ordos mine was set up in 2014, making it China’s oldest large-scale bitcoin mining facility. Bitmain acquired it in 2015. It’s powered by electricity from coal-fired power plants. Its daily electricity bill amounts to $39,000. Bitmain also operates other mines in China’s remote areas, like the mountainous Yunnan province in the south and the autonomous region of Xinjiang in the west.

Despite the costs, bitcoin mining remains a lucrative industry. At the current bitcoin price of about $4,000 per bitcoin, miners compete for over $7 million in new bitcoins a day. The more processing power a mining operation controls, the higher its chances of winning a chunk of those millions. The Ordos mine accounts for over 4% of the processing power on the bitcoin network—a huge amount for a single facility.
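
A rough sanity check supports that figure, assuming the then-current block reward of 12.5 bitcoins and the network's average of roughly 144 blocks mined per day:

\[144 \text{ blocks/day} \times 12.5 \text{ BTC/block} \times \$4{,}000/\text{BTC} \approx \$7.2 \text{ million per day}.\]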

Quartz visited the mine in Ordos on Aug. 11.

A set of keys used to get access to the transformers on site. (Aurelien Foucault for Quartz)
View of one of the buildings in the compound. (Aurelien Foucault for Quartz)

Read next: The lives of bitcoin miners digging for digital gold in Inner Mongolia

Read next: Take a 360 walk around one of the world’s biggest bitcoin mines

Nymph: A slightly different version of C


README.md

Let's see what we can achieve by reworking C syntax.

Updates

A new parser has been implemented, and default values for object members are now supported.

Goals

...

Example

box.n

#include <stdlib.h>
#include <stdio.h>

/* A Nymph object: a struct-like type whose members carry default values. */
object Box {
    int height = 1;
    int width = 1;
    int depth = 1;
}

/* "private": an internal helper, presumably not exported from the generated box.c. */
private int add(int a, int b) {
  return a + b;
}

/* "public": presumably exposed through the generated box.h. */
public void printBox(Box *this) {
    printf("%i %i %i\n", 1+add(2+this->height+2, this->height)+2, 2+this->width+2, this->depth);
}

rect.n

#include <stdlib.h>
#include <stdio.h>

object Rect {
    int height = 3;
    int width = 3;
}

public void printRect(Rect *this) {
    printf("%i %i\n", this->height, this->width);
}

main.n

#include <stdlib.h>
#include "box.n"
#include "rect.n"

private int main(int argc, const char * argv[]) {

    Box **myBoxes = new Box*10;   /* new Box*10: allocate an array of 10 Box pointers */
    Box *myBox = new Box;         /* new Box: allocate a Box with its default member values */
    myBoxes[0] = myBox;
    Rect *myRect = new Rect;

    printBox(myBoxes[0]);
    printRect(myRect);

    free(myBox);
    free(myBoxes);
    free(myRect);

    return 0;
}

makefile

# Build the Nymph-to-C translator, run it on main.n, then compile the generated C.
nymph: nymph_compiler.c
        gcc -std=c11 nymph_compiler.c -o nymph
        ./nymph main.n main
        gcc -std=c11 -c box.c box.h
        gcc -std=c11 -c rect.c rect.h
        gcc -std=c11 -c main.c main.h
        gcc -std=c11 main.o box.o rect.o -o out
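
For orientation, the C emitted for box.n plausibly takes a shape like the following sketch. The struct layout follows from the object declaration, but the helper name Box_new and the exact lowering of new are hypothetical, not the compiler's actual output:

#include <stdlib.h>

/* hypothetical contents of the generated box.h/box.c */
typedef struct Box {
    int height;
    int width;
    int depth;
} Box;

/* one possible lowering of "new Box": allocate, then apply the defaults */
Box *Box_new(void) {
    Box *b = malloc(sizeof(Box));
    b->height = 1;
    b->width = 1;
    b->depth = 1;
    return b;
}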

Truth Values

First published Tue Mar 30, 2010; substantive revision Mon Mar 27, 2017

Truth values have been put to quite different uses in philosophy and logic, being characterized, for example, as:

  • primitive abstract objects denoted by sentences in natural and formal languages,
  • abstract entities hypostatized as the equivalence classes of sentences,
  • what is aimed at in judgements,
  • values indicating the degree of truth of sentences,
  • entities that can be used to explain the vagueness of concepts,
  • values that are preserved in valid inferences,
  • values that convey information concerning a given proposition.

Depending on their particular use, truth values have been treated as unanalyzed, as defined, as unstructured, or as structured entities.

The notion of a truth value has been explicitly introduced into logic and philosophy by Gottlob Frege—for the first time in Frege 1891, and most notably in his seminal paper (Frege 1892). Although it was Frege who made the notion of a truth value one of the central concepts of semantics, the idea of special semantical values had already been anticipated by Boole and Peirce; see the survey article on a “history of truth values” by Béziau (2012). According to Kneale and Kneale (1962: 413), Boole’s system contains all that is needed for its interpretation “in terms of truth values of propositions”, and as Church (1956: 17) remarks, the “explicit use of two truth-values appears for the first time in a paper by C.S. Peirce in the American Journal of Mathematics, vol. 7 (1885), pp. 180–202”. Frege conceived this notion as a natural component of his language analysis where sentences, being saturated expressions, are interpreted as a special kind of names, which refer to (denote, designate, signify) a special kind of objects: truth values. Moreover, there are, according to Frege, only two such objects: the True (das Wahre) and the False (das Falsche):

Every assertoric sentence … is to be regarded as a proper name, and its Bedeutung, if it has one, is either the True or the False. (Frege 1892, trans. Beaney 1997: 158)

This new and revolutionary idea has had a far-reaching and manifold impact on the development of modern logic. It provides the means to uniformly complete the formal apparatus of a functional analysis of language by generalizing the concept of a function and introducing a special kind of functions, namely propositional functions, or truth value functions, whose range of values consists of the set of truth values. Among the most typical representatives of propositional functions one finds predicate expressions and logical connectives. As a result, one obtains a powerful tool for a conclusive implementation of the extensionality principle (also called the principle of compositionality), according to which the meaning of a complex expression is uniquely determined by the meanings of its components. On this basis one can also discriminate between extensional and intensional contexts and advance further to the conception of intensional logics. Moreover, the idea of truth values has induced a radical rethinking of some central issues in the philosophy of logic, including: the categorial status of truth, the theory of abstract objects, the subject-matter of logic and its ontological foundations, the concept of a logical system, the nature of logical notions, etc.

In the following, several important philosophical problems directly connected to the notion of a truth value are considered and various uses of this notion are explained.

1. Truth values as objects and referents of sentences

1.1 Functional analysis of language and truth values

The approach to language analysis developed by Frege rests essentially on the idea of a strict discrimination between two main kinds of expressions: proper names (singular terms) and functional expressions. Proper names designate (signify, denote, or refer to) singular objects, and functional expressions designate (signify, denote, or refer to) functions. [Note: In the literature, the expressions ‘designation’, ‘signification’, ‘denotation’, and ‘reference’ are usually taken to be synonymous. This practice is used throughout the present entry.] The name ‘Ukraine’, for example, refers to a certain country, and the expression ‘the capital of’ denotes a one-place function from countries to cities, in particular, a function that maps the Ukraine to Kyiv (Kiev). Whereas names are “saturated” (complete) expressions, functional expressions are “unsaturated” (incomplete) and may be saturated by applying them to names, producing in this way new names. Similarly, the objects to which singular terms refer are saturated and the functions denoted by functional expression are unsaturated. Names to which a functional expression can be applied are called the arguments of this functional expression, and entities to which a function can be applied are called the arguments of this function. The object which serves as the reference for the name generated by an application of a functional expression to its arguments is called the value of the function for these arguments. Particularly, the above mentioned functional expression ‘the capital of’ remains incomplete until applied to some name. An application of the function denoted by ‘the capital of’ to Ukraine (as an argument) returns Kyiv as the object denoted by the compound expression ‘the capital of Ukraine’ which, according to Frege, is a proper name of Kyiv. Note that Frege distinguishes between an \(n\)-place function \(f\) as an unsaturated entity that can be completed by and applied to arguments \(a_1\),…, \(a_n\) and its course of values, which can be seen as the set-theoretic representation of this function: the set

\[\{\langle a_1, \ldots, a_n, a\rangle \mid a = f(a_1,\ldots , a_n)\}.\]

Pursuing this kind of analysis, one is very quickly confronted with two intricate problems. First, how should one treat declarative sentences? Should one perhaps separate them into a specific linguistic category distinct from the ones of names and functions? And second, how—from a functional point of view—should one deal with predicate expressions such as ‘is a city’, ‘is tall’, ‘runs’, ‘is bigger than’, ‘loves’, etc., which are used to denote classes of objects, properties of objects, or relations between them and which can be combined with (applied to) singular terms to obtain sentences? If one considers predicates to be a kind of functional expressions, what sort of names are generated by applying predicates to their arguments, and what can serve as referents of these names, respectively values of these functions?

A uniform solution of both problems is obtained by introducing the notion of a truth value. Namely, by applying the criterion of “saturatedness” Frege provides a negative answer to the first of the above problems. Since sentences are a kind of complete entities, they should be treated as a sort of proper names, but names destined to denote some specific objects, namely the truth values: the True and the False. In this way one also obtains a solution of the second problem. Predicates are to be interpreted as some kind of functional expressions, which being applied to these or those names generate sentences referring to one of the two truth values. For example, if the predicate ‘is a city’ is applied to the name ‘Kyiv’, one gets the sentence ‘Kyiv is a city’, which designates the True (i.e., ‘Kyiv is a city’ is true). On the other hand, by using the name ‘Mount Everest’, one obtains the sentence ‘Mount Everest is a city’ which clearly designates the False, since ‘Mount Everest is a city’ is false.

Functions whose values are truth values are called propositional functions. Frege also referred to them as concepts (Begriffe). A typical kind of such functions (besides the ones denoted by predicates) are the functions denoted by propositional connectives. Negation, for example, can be interpreted as a unary function converting the True into the False and vice versa, and conjunction is a binary function that returns the True as a value when both its argument positions are filled in by the True, etc. Propositional functions mapping \(n\)-tuples of truth values into truth values are also called truth-value functions.
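
Restated in symbols, the two connectives just described are functions on the set \(\{\textit{the True}, \textit{the False}\}\):

\[\neg(\textit{the True}) = \textit{the False}, \qquad \neg(\textit{the False}) = \textit{the True};\]

\[\wedge(x, y) = \textit{the True} \text{ if } x = y = \textit{the True}, \text{ and } \wedge(x, y) = \textit{the False} \text{ otherwise}.\]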

Frege thus in a first step extended the familiar notion of a numerical function to functions on singular objects in general and, moreover, introduced a new kind of singular objects that can serve as arguments and values of functions on singular objects, the truth values. In a further step, he considered propositional functions taking functions as their arguments. The quantifier phrase ‘every city’, for example, can be applied to the predicate ‘is a capital’ to produce a sentence. The argument of the second-order function denoted by ‘every city’ is the first-order propositional function on singular objects denoted by ‘is a capital’. The functional value denoted by the sentence ‘Every city is a capital’ is a truth value, the False.

Truth values thus prove to be an extremely effective instrument for a logical and semantical analysis of language. Moreover, Frege provides truth values (as proper referents of sentences) not merely with a pragmatical motivation but also with a strong theoretical justification. The idea of such justification, that can be found in Frege 1892, employs the principle of substitutivity of co-referential terms, according to which the reference of a complex singular term must remain unchanged when any of its sub-terms is replaced by an expression having the same reference. This is actually just an instance of the compositionality principle mentioned above. If sentences are treated as a kind of singular terms which must have designations, then assuming the principle of substitutivity one “almost inevitably” (as Kurt Gödel (1944: 129) explains) is forced to recognize truth values as the most suitable entities for such designations. Accordingly, Frege asks:

What else but the truth value could be found, that belongs quite generally to every sentence if the reference of its components is relevant, and remains unchanged by substitutions of the kind in question? (Geach and Black 1952: 64)

The idea underlying this question has been neatly reconstructed by Alonzo Church in his Introduction to Mathematical Logic (1956: 24–25) by considering the following sequence of four sentences:

  • C1. Sir Walter Scott is the author of Waverley.
  • C2. Sir Walter Scott is the man who wrote 29 Waverley Novels altogether.
  • C3. The number, such that Sir Walter Scott is the man who wrote that many Waverley Novels altogether is 29.
  • C4. The number of counties in Utah is 29.

C1–C4 present a number of conversion steps each producing co-referential sentences. It is claimed that C1 and C2 must have the same designation by substitutivity, for the terms ‘the author of Waverley’ and ‘the man who wrote 29 Waverley Novels altogether’ designate one and the same object, namely Walter Scott. And so must C3 and C4, because the number, such that Sir Walter Scott is the man who wrote that many Waverley Novels altogether is the same as the number of counties in Utah, namely 29. Next, Church argues, it is plausible to suppose that C2, even if not completely synonymous with C3, is at least so close to C3 “so as to ensure its having the same denotation”. If this is indeed the case, then C1 and C4 must have the same denotation (designation) as well. But it seems that the only (semantically relevant) thing these sentences have in common is that both are true. Thus, taken that there must be something what the sentences designate, one concludes that it is just their truth value. As Church remarks, a parallel example involving false sentences can be constructed in the same way (by considering, e.g., ‘Sir Walter Scott is not the author of Waverley’).

This line of reasoning is now widely known as the “slingshot argument”, a term coined by Jon Barwise and John Perry (in Barwise and Perry 1981: 395), who stressed thus an extraordinary simplicity of the argument and the minimality of presuppositions involved. Stated generally, the pattern of the argument goes as follows (cf. Perry 1996). One starts with a certain sentence, and then moves, step by step, to a completely different sentence. Every two sentences in any step designate presumably one and the same thing. Hence, the starting and the concluding sentences of the argument must have the same designation as well. But the only semantically significant thing they have in common seems to be their truth value. Thus, what any sentence designates is just its truth value.

A formal version of this argument, employing the term-forming, variable-binding class abstraction (or property abstraction) operator λ\(x\) (“the class of all \(x\) such that” or “the property of being such an \(x\) that”), was first formulated by Church (1943) in his review of Carnap’s Introduction to Semantics. Quine (1953), too, presents a variant of the slingshot using class abstraction, see also (Shramko and Wansing 2009). Other remarkable variations of the argument are those by Kurt Gödel (1944) and Donald Davidson (1967, 1969), which make use of the formal apparatus of a theory of definite descriptions dealing with the description-forming, variable-binding iota-operator (ι\(x\), “the \(x\) such that”). It is worth noticing that the formal versions of the slingshot show how to move—using steps that ultimately preserve reference—from any true (false) sentence to any other such sentence. In view of this result, it is hard to avoid the conclusion that what the sentences refer to are just truth values.

The slingshot argument has been analyzed in detail by many authors (see especially the comprehensive study by Stephen Neale (Neale 2001) and references therein) and has caused much controversy notably on the part of fact-theorists, i.e., adherents of facts, situations, propositions, states of affairs, and other fact-like entities conceived as alternative candidates for denotations of declarative sentences. Also see the supplement on the slingshot argument.

1.2 Truth as a property versus truth as an object

Truth values evidently have something to do with a general concept of truth. Therefore it may seem rather tempting to try to incorporate considerations on truth values into the broader context of traditional truth-theories, such as correspondence, coherence, anti-realistic, or pragmatist conceptions of truth. Yet, it is unlikely that such attempts can give rise to any considerable success. Indeed, the immense fruitfulness of Frege’s introduction of truth values into logic to a large extent is just due to its philosophical neutrality with respect to theories of truth. It does not commit one to any specific metaphysical doctrine of truth. In one significant respect, however, the idea of truth values contravenes traditional approaches to truth by bringing to the forefront the problem of its categorial classification.

In most of the established conceptions, truth is usually treated as a property. It is customary to talk about a “truth predicate” and its attribution to sentences, propositions, beliefs or the like. Such an understanding corresponds also to a routine linguistic practice, when one operates with the adjective ‘true’ and asserts, e.g., ‘That 5 is a prime number is true’. By contrast with this apparently quite natural attitude, the suggestion to interpret truth as an object may seem very confusing, to say the least. Nevertheless this suggestion is also equipped with a profound and strong motivation demonstrating that it is far from being just an oddity and has to be taken seriously (cf. Burge 1986).

First, it should be noted that the view of truth as a property is not as natural as it appears on the face of it. Frege brought into play an argument to the effect that characterizing a sentence as true adds nothing new to its content, for ‘It is true that 5 is a prime number’ says exactly the same as just ‘5 is a prime number’. That is, the adjective ‘true’ is in a sense redundant and thus is not a real predicate expressing a real property such as the predicates ‘white’ or ‘prime’ which, on the contrary, cannot simply be eliminated from a sentence without an essential loss for its content. In this case a superficial grammatical analogy is misleading. This idea gave an impetus to the deflationary conception of truth (advocated by Ramsey, Ayer, Quine, Horwich, and others, see the entry on the deflationary theory of truth).

However, even admitting the redundancy of truth as a property, Frege emphasizes its importance and indispensable role in some other respect. Namely, truth, accompanying every act of judgment as its ultimate goal, secures an objective value of cognition by arranging for every assertive sentence a transition from the level of sense (the thought expressed by a sentence) to the level of denotation (its truth value). This circumstance specifies the significance of taking truth as a particular object. As Tyler Burge explains:

Normally, the point of using sentences, what “matters to us”, is to claim truth for a thought. The object, in the sense of the point or objective, of sentence use was truth. It is illuminating therefore to see truth as an object. (Burge 1986: 120)

As it has been observed repeatedly in the literature (cf., e.g., Burge 1986, Ruffino 2003), the stress Frege laid on the notion of a truth value was, to a great extent, pragmatically motivated. Besides an intended gain for his system of “Basic Laws” (Frege 1893/1903) reflected in enhanced technical clarity, simplicity, and unity, Frege also sought to substantiate in this way his view on logic as a theoretical discipline with truth as its main goal and primary subject-matter. Incidentally, Gottfried Gabriel (1986) demonstrated that in the latter respect Frege’s ideas can be naturally linked up with a value-theoretical tradition in German philosophy of the second half of the 19th century; see also the recent (Gabriel 2013) on the relation between Frege’s value-theoretically inspired conception of truth values and his theory of judgement. More specifically, Wilhelm Windelband, the founder and the principal representative of the Southwest school of Neo-Kantianism, was actually the first who employed the term “truth value” (“Wahrheitswert”) in his essay “What is Philosophy?” published in 1882 (see Windelband 1915: 32), i.e., nine years before Frege 1891, even if he was very far from treating a truth value as a value of a function.

Windelband defined philosophy as a “critical science about universal values”. He considered philosophical statements to be not mere judgements but rather assessments, dealing with some fundamental values, the value of truth being one of the most important among them. This latter value is to be studied by logic as a special philosophical discipline. Thus, from a value-theoretical standpoint, the main task of philosophy, taken generally, is to establish the principles of logical, ethical and aesthetical assessments, and Windelband accordingly highlighted the triad of basic values: “true”, “good” and “beautiful”. Later this triad was taken up by Frege in 1918 when he defined the subject-matter of logic (see below). Gabriel points out (1984: 374) that this connection between logic and a value theory can be traced back to Hermann Lotze, whose seminars in Göttingen were attended by both Windelband and Frege.

The decisive move made by Frege was to bring together a philosophical and a mathematical understanding of values on the basis of a generalization of the notion of a function on numbers. While Frege may have been inspired by Windelband’s use of the word ‘value’ (and even more concretely – ‘truth value’), it is clear that he uses the word in its mathematical sense. If predicates are construed as a kind of functional expressions which, being applied to singular terms as arguments, produce sentences, then the values of the corresponding functions must be references of sentences. Taking into account that the range of any function typically consists of objects, it is natural to conclude that references of sentences must be objects as well. And if one now just takes it that sentences refer to truth values (the True and the False), then it turns out that truth values are indeed objects, and it seems quite reasonable to generally explicate truth and falsity as objects and not as properties. As Frege explains:

A statement contains no empty place, and therefore we must take its Bedeutung as an object. But this Bedeutung is a truth-value. Thus the two truth-values are objects. (Frege 1891, trans. Beaney 1997: 140)

Frege’s theory of sentences as names of truth values has been criticized, for example, by Michael Dummett who stated rather dramatically:

This was the most disastrous of the effects of the misbegotten doctrine that sentences are a species of complex singular terms, which dominated Frege’s later period: to rob him of the insight that sentences play a unique role, and that the role of almost every other linguistic expression … consists in its part in forming sentences. (Dummett 1981: 196)

But even Dummett (1991: 242) concedes that “to deny that truth-values are objects … seems a weak response”.

1.3 The ontology of truth values

If truth values are accepted and taken seriously as a special kind of objects, the obvious question as to the nature of these entities arises. The above characterization of truth values as objects is far too general and requires further specification. One way of such specification is to qualify truth values as abstract objects. Note that Frege himself never used the word ‘abstract’ when describing truth values. Instead, he has a conception of so called “logical objects”, truth values being the most fundamental (and primary) of them (Frege 1976: 121). Among the other logical objects Frege pays particular attention to are sets and numbers, emphasizing thus their logical nature (in accordance with his logicist view).

Church (1956: 25), when considering truth values, explicitly attributes to them the property of being abstract. Since then it is customary to label truth values as abstract objects, thus allocating them into the same category of entities as mathematical objects (numbers, classes, geometrical figures) and propositions. One may pose here an interesting question about the correlation between Fregean logical objects and abstract objects in the modern sense (see the entry on abstract objects). Obviously, the universe of abstract objects is much broader than the universe of logical objects as Frege conceives them. The latter are construed as constituting an ontological foundation for logic, and hence for mathematics (pursuant to Frege’s logicist program). Generally, the class of abstracta includes a wide diversity of platonic universals (such as redness, youngness, or geometrical forms) and not only those of them which are logically necessary. Nevertheless, it may safely be said that logical objects can be considered as paradigmatic cases of abstract entities, or abstract objects in their purest form.

It should be noted that finding an adequate definition of abstract objects is a matter of considerable controversy. According to a common view, abstract entities lack spatio-temporal properties and relations, as opposed to concrete objects which exist in space and time (Lowe 1995: 515). In this respect truth values obviously are abstract as they clearly have nothing to do with physical spacetime. In a similar fashion truth values fulfill another requirement often imposed upon abstract objects, namely the one of a causal inefficacy (see, e.g., Grossmann 1992: 7). Here again, truth values are very much like numbers and geometrical figures: they have no causal power and make nothing happen.

Finally, it is of interest to consider how truth values can be introduced by applying so-called abstraction principles, which are used for supplying abstract objects with criteria of identity. The idea of this method of characterizing abstract objects is also largely due to Frege, who wrote:

If the symbol a is to designate an object for us, then we must have a criterion that decides in all cases whether b is the same as a, even if it is not always in our power to apply this criterion. (Frege 1884, trans. Beaney 1997: 109)

More precisely, one obtains a new object by abstracting it from some given kind of entities, in virtue of certain criteria of identity for this new (abstract) object. This abstraction is performed in terms of an equivalence relation defined on the given entities (see Wrigley 2006: 161). The celebrated slogan by Quine (1969: 23) “No entity without identity” is intended to express essentially the same understanding of an (abstract) object as an “item falling under a sortal concept which supplies a well-defined criterion of identity for its instances” (Lowe 1997: 619).

For truth values such a criterion has been suggested in Anderson and Zalta (2004: 2), stating that for any two sentences \(p\) and \(q\), the truth value of \(p\) is identical with the truth value of \(q\) if and only if \(p\) is (non-logically) equivalent with \(q\) (cf. also Dummett 1959: 141). This idea can be formally explicated following the style of presentation in Lowe (1997: 620):

\[ \forall p\forall q[(\textit{Sentence}(p) \mathbin{\&} \textit{Sentence}(q)) \Rightarrow(tv(p)=tv(q) \Leftrightarrow(p\leftrightarrow q))], \]

where &, \(\Rightarrow, \Leftrightarrow, \forall\) stand correspondingly for ‘and’, ‘if… then’, ‘if and only if’ and ‘for all’ in the metalanguage, and \(\leftrightarrow\) stands for some object language equivalence connective (biconditional).

Incidentally, Carnap (1947: 26), when introducing truth-values as extensions of sentences, is guided by essentially the same idea. Namely, he points out a strong analogy between extensions of predicators and truth values of sentences. Carnap considers a wide class of designating expressions (“designators”) among which there are predicate expressions (“predicators”), functional expressions (“functors”), and some others. Applying the well-known technique of interpreting sentences as predicators of degree 0, he generalizes the fact that two predicators of degree \(n\) (say, \(P\) and \(Q)\) have the same extension if and only if \(\forall x_1\forall x_2 \ldots \forall x_n(Px_1 x_2\ldots x_n \leftrightarrow Qx_1 x_2\ldots x_n)\) holds. Then, analogously, two sentences (say, \(p\) and \(q)\), being interpreted as predicators of degree 0, must have the same extension if and only if \(p\leftrightarrow q\) holds, that is if and only if they are equivalent. And then, Carnap remarks, it seems quite natural to take truth values as extensions for sentences.

Note that this criterion employs a functional dependency between an introduced abstract object (in this case a truth value) and some other objects (sentences). More specifically, what is considered is the truth value of a sentence (or proposition, or the like). The criterion of identity for truth values is formulated then through the logical relation of equivalence holding between these other objects—sentences, propositions, or the like (with an explicit quantification over them).

It should also be remarked that the properties of the object language biconditional depend on the logical system in which the biconditional is employed. Biconditionals of different logics may have different logical properties, and it surely matters what kind of the equivalence connective is used for defining truth values. This means that the concept of a truth value introduced by means of the identity criterion that involves a biconditional between sentences is also logic-relative. Thus, if ‘\(\leftrightarrow\)’ stands for material equivalence, one obtains classical truth values, but if the intuitionistic biconditional is employed, one gets truth values of intuitionistic logic, etc. Taking into account the role truth values play in logic, such an outcome seems to be not at all unnatural.

Anderson and Zalta (2004: 13), making use of an object theory from Zalta (1983), propose the following definition of ‘the truth value of proposition \(p\)’ (‘\(tv(p)\)’ [notation adjusted]):

\[ tv(p) =_{df} ιx(A!x \wedge \forall F(xF \leftrightarrow \exists q((q\leftrightarrow p) \wedge F= [λ y\ q]))), \]

where \(A\)! stands for a primitive theoretical predicate ‘being abstract’, \(xF\) is to be read as “\(x\) encodes \(F\)” and [λy q] is a propositional property (“being such a \(y\) that \(q\)”). That is, according to this definition, “the extension of \(p\) is the abstract object that encodes all and only the properties of the form [λy q] which are constructed out of propositions \(q\) materially equivalent to \(p\)” (Anderson and Zalta 2004: 14).

The notion of a truth value in general is then defined as an object which is the truth value of some proposition:

\[TV(x) =_{df} \exists p(x = tv(p)).\]

Using this apparatus, it is possible to explicitly define the Fregean truth values the True \((\top)\) and the False \((\bot)\):

\[ \begin{align} \top &=_{df} ι x(A!x \wedge \forall F(xF \leftrightarrow \exists p(p \wedge F= [λ y\ p])));\\ \bot &=_{df} ιx (A!x \wedge \forall F(xF \leftrightarrow \exists p(\neg p \wedge F= [λ y\ p]))).\\ \end{align} \]

Anderson and Zalta prove then that \(\top\) and \(\bot\) are indeed truth values and, moreover, that there are exactly two such objects. The latter result is expected, if one bears in mind that what the definitions above actually introduce are the classical truth values (as the underlying logic is classical). Indeed, \(p\leftrightarrow q\) is classically equivalent to \((p\wedge q)\vee(\neg p\wedge \neg q)\), and \(\neg(p\leftrightarrow q)\) is classically equivalent to \((p\wedge \neg q)\vee(\neg p\wedge q)\). That is, the connective of material equivalence divides sentences into two distinct collections. Due to the law of excluded middle these collections are exhaustive, and by virtue of the law of non-contradiction they are exclusive. Thus, we get exactly two equivalence classes of sentences, each being a hypostatized representative of one of two classical truth values.
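This two-cell partition can be illustrated computationally. The following sketch (in Python; the sentence encodings and all names are illustrative choices, not part of Anderson and Zalta’s theory) fixes a single classical assignment and groups sentences by material equivalence, i.e., by sameness of truth value under that assignment; exactly two equivalence classes emerge, one for each classical truth value:

    # A handful of sentences over the atoms p, q, each encoded as a function
    # from an assignment (a dict mapping atoms to booleans) to a truth value.
    sentences = {
        "p or not p":  lambda a: a["p"] or not a["p"],
        "p and not p": lambda a: a["p"] and not a["p"],
        "p":           lambda a: a["p"],
        "p or q":      lambda a: a["p"] or a["q"],
        "not p":       lambda a: not a["p"],
    }

    # Fix one assignment; the material biconditional holds between two
    # sentences exactly when they receive the same truth value under it.
    a = {"p": True, "q": False}

    cells = {}
    for name, s in sentences.items():
        cells.setdefault(s(a), []).append(name)

    # Exactly two cells: one per classical truth value.
    for value, members in cells.items():
        print(value, "->", members)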

2. Truth values as logical values

2.1 Logic as the science of logical values

In a late paper Frege (1918) claims that the word ‘true’ determines the subject-matter of logic in exactly the same way as the word ‘beautiful’ does for aesthetics and the word ‘good’ for ethics. Thus, according to such a view, the proper task of logic consists, ultimately, in investigating “the laws of being true” (Sluga 2002: 86). By doing so, logic is interested in truth as such, understood objectively, and not in what is merely taken to be true. Now, if one admits that truth is a specific abstract object (the corresponding truth value), then logic in the first place has to explore the features of this object and its interrelations to other entities of various other kinds.

A prominent adherent of this conception was Jan Łukasiewicz. As he paradigmatically put it:

All true propositions denote one and the same object, namely truth, and all false propositions denote one and the same object, namely falsehood. I consider truth and falsehood to be singular objects in the same sense as the number 2 or 4 is. … Ontologically, truth has its analogue in being, and falsehood, in non-being. The objects denoted by propositions are called logical values. Truth is the positive, and falsehood is the negative logical value. … Logic is the science of objects of a special kind, namely a science of logical values. (Łukasiewicz 1970: 90)

This definition may seem rather unconventional, for logic is usually treated as the science of correct reasoning and valid inference. The latter understanding, however, calls for further justification. This becomes evident, as soon as one asks, on what grounds one should qualify this or that pattern of reasoning as correct or incorrect.

In answering this question, one has to take into account that any valid inference should be based on logical rules which, according to a commonly accepted view, should at least guarantee that in a valid inference the conclusion(s) is (are) true if all the premises are true. Translating this demand into the Fregean terminology, it would mean that in the course of a correct inference the possession of the truth value The True should be preserved from the premises to the conclusion(s). Thus, granting the realistic treatment of truth values adopted by Frege, the understanding of logic as the science of truth values in fact provides logical rules with an ontological justification placing the roots of logic in a certain kind of ideal entities (see Shramko 2014).

These entities constitute a certain uniform domain, which can be viewed as a subdomain of Frege’s so-called “third realm” (the realm of the objective content of thoughts, and generally abstract objects of various kinds, see Frege 1918, cf. Popper 1972 and also Burge 1992: 634). Among the subdomains of this third realm one finds, e.g., the collection of mathematical objects (numbers, classes, etc.). The set of truth values may be regarded as forming another such subdomain, namely the one of logical values, and logic as a branch of science rests essentially on this logical domain and on exploring its features and regularities.

2.2 Many-valued logics, truth degrees and valuation systems

According to Frege, there are exactly two truth values, the True and the False. This opinion appears to be rather restrictive, and one may ask whether it is really indispensable for the concept of a truth value. One should observe that in elaborating this conception, Frege assumed specific requirements of his system of the Begriffsschrift, especially the principle of bivalence taken as a metatheoretical principle, viz. that there exist only two distinct logical values. On the object-language level this principle finds its expression in the famous classical laws of excluded middle and non-contradiction. The further development of modern logic, however, has clearly demonstrated that classical logic is only one particular theory (although maybe a very distinctive one) among the vast variety of logical systems. In fact, the Fregean ontological interpretation of truth values depicts logical principles as a kind of ontological postulations, and as such they may well be modified or even abandoned. For example, by giving up the principle of bivalence, one is naturally led to the idea of postulating many truth values.

It was Łukasiewicz, who as early as 1918 proposed to take seriously other logical values different from truth and falsehood (see Łukasiewicz 1918, 1920). Independently of Łukasiewicz, Emil Post in his dissertation from 1920, published as Post 1921, introduced \(m\)-valued truth tables, where \(m\) is any positive integer. Whereas Post’s interest in many-valued logic (where “many” means “more than two”) was almost exclusively mathematical, Łukasiewicz’s motivation was philosophical (see the entry on many-valued logic). He contemplated the semantical value of sentences about the contingent future, as discussed in Aristotle’s De interpretatione. Łukasiewicz introduced a third truth value and interpreted it as “possible”. By generalizing this idea and also adopting the above understanding of the subject-matter of logic, one naturally arrives at the representation of particular logical systems as a certain kind of valuation systems (see, e.g., Dummett 1981, 2000; Ryan and Sadler 1992).

Consider a propositional language \(\mathcal{L}\) built upon a set of atomic sentences \(\mathcal{P}\) and a set of propositional connectives \(\mathcal{C}\) (the set of sentences of \(\mathcal{L}\) being the smallest set containing \(\mathcal{P}\) and being closed under the connectives from \(\mathcal{C})\). Then a valuation system \(\mathbf{V}\) for the language \(\mathcal{L}\) is a triple \(\langle \mathcal{V}, \mathcal{D}, \mathcal{F}\rangle\), where \(\mathcal{V}\) is a non-empty set with at least two elements, \(\mathcal{D}\) is a subset of \(\mathcal{V}\), and \(\mathcal{F} = \{f_{c _1},\ldots, f_{c _m}\}\) is a set of functions such that \(f_{c _i}\) is an \(n\)-place function on \(\mathcal{V}\) if \(c_i\) is an \(n\)-place connective. Intuitively, \(\mathcal{V}\) is the set of truth values, \(\mathcal{D}\) is the set of designated truth values, and \(\mathcal{F}\) is the set of truth-value functions interpreting the elements of \(\mathcal{C}\). If the set of truth values of a valuation system \(\mathbf{V}\) has \(n\) elements, \(\mathbf{V}\) is said to be \(n\)-valued. Any valuation system can be equipped with an assignment function which maps the set of atomic sentences into \(\mathcal{V}\). Each assignment \(a\) relative to a valuation system \(\mathbf{V}\) can be extended to all sentences of \(\mathcal{L}\) by means of a valuation function \(v_a\) defined in accordance with the following conditions:

\[ \begin{align} \forall p &\in \mathcal{P} , &v_a (p) &= a(p) ; \tag{1}\\ \forall c_i &\in \mathcal{C} , & v_a ( c_i ( A_1 ,\ldots , A_n )) &= f_{c_i} ( v_a ( A_1 ),\ldots , v_a ( A_n )) \tag{2} \\ \end{align} \]
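Conditions (1) and (2) describe a structural recursion, which can be rendered in a few lines of code. The following sketch (Python; the tuple encoding of formulas and all names are illustrative assumptions) extends an assignment to arbitrary sentences:

    def valuation(formula, a, functions):
        """Extend the assignment a to a valuation v_a: atoms are looked up
        (condition (1)); complex formulas are computed by applying the
        truth-value function of their main connective to the recursively
        obtained values of the subformulas (condition (2))."""
        if isinstance(formula, str):
            return a[formula]
        connective, *args = formula
        return functions[connective](*(valuation(x, a, functions) for x in args))

    # Classical instance, with T encoded as 1 and F as 0:
    f_cl = {"and": min, "or": max, "not": lambda x: 1 - x}
    print(valuation(("or", "p", ("not", "p")), {"p": 0}, f_cl))   # -> 1, i.e., T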

It is interesting to observe that the elements of \(\mathcal{V}\) are sometimes referred to as quasi truth values. Siegfried Gottwald (1989: 2) explains that one reason for using the term ‘quasi truth value’ is that there is no convincing and uniform interpretation of the truth values that in many-valued logic are assumed in addition to the classical truth values the True and the False, an understanding that, according to Gottwald, associates the additional values with the naive understanding of being true, respectively the naive understanding of degrees of being true (cf. also the remark by Font (2009: 383) that “[o]ne of the main problems in many-valued logic, at least in its initial stages, was the interpretation of the ‘intermediate’ or ‘non-classical’ values”, et seq.). In later publications, Gottwald has changed his terminology and states that

[t]o avoid any confusion with the case of classical logic one prefers in many-valued logic to speak of truth degrees and to use the word “truth value” only for classical logic. (Gottwald 2001: 4)

Nevertheless in what follows the term ‘truth values’ will be used even in the context of many-valued logics, without any commitment to a philosophical conception of truth as a graded notion or a specific understanding of semantical values in addition to the classical truth values.

Since the cardinality of \(\mathcal{V}\) may be greater than 2, the notion of a valuation system provides a natural foundational framework for the very idea of a many-valued logic. The set \(\mathcal{D}\) of designated values is of central importance for the notion of a valuation system. This set can be seen as a generalization of the classical truth value the True in the sense that it determines many central logical notions and thereby generalizes some of the important roles played by Frege’s the True (cf. the introductory remarks about uses of truth values). For example, the set of tautologies (logical laws) is directly specified by the given set of designated truth values: a sentence \(A\) is a tautology in a valuation system \(\mathbf{V}\) iff for every assignment \(a\) relative to \(\mathbf{V}\), \(v_a(A) \in \mathcal{D}\). Another fundamental logical notion—that of an entailment relation—can also be defined by referring to the set \(\mathcal{D}\). For a given valuation system \(\mathbf{V}\) a corresponding entailment relation \((\vDash_V)\) is usually defined by postulating the preservation of designated values from the premises to the conclusion:

\[ \tag{3} Δ\vDash_V A \textrm{ iff }\forall a[(\forall B \in Δ: v_a (B) \in \mathcal{D}) \Rightarrow v _a (A) \in \mathcal{D}]. \]

A pair \(\mathcal{M} = \langle \mathbf{V}, v_a\rangle\), where \(\mathbf{V}\) is an \((n\)-valued) valuation system and \(v_a\) a valuation in \(\mathbf{V}\), may be called an \((n\)-valued) model based on \(\mathbf{V}\). Every model \(\mathcal{M} = \langle \mathbf{V}, v_a\rangle\) comes with a corresponding entailment relation \(\vDash_{\mathcal{M}}\) by defining \(Δ\vDash_{\mathcal{M} }A\textrm{ iff }(\forall B \in Δ: v_a (B) \in \mathcal{D}) \Rightarrow v_a(A) \in \mathcal{D}\).
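For a finite set of truth values and finitely many atomic sentences, definition (3) can be checked by brute force over all assignments. A minimal sketch, reusing the valuation helper from the previous illustration (the function and parameter names are, again, illustrative):

    from itertools import product

    def entails(premises, conclusion, values, designated, functions, atoms):
        """Definition (3): whenever every premise takes a designated value,
        so must the conclusion, for every assignment to the atoms."""
        for combo in product(values, repeat=len(atoms)):
            a = dict(zip(atoms, combo))
            if all(valuation(B, a, functions) in designated for B in premises):
                if valuation(conclusion, a, functions) not in designated:
                    return False
        return True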

Suppose \(\mathfrak{L}\) is a syntactically defined logical system with a consequence relation \(\vdash_{ \mathfrak{L} }\), specified as a relation between the power-set of \(\mathcal{L}\) and \(\mathcal{L}\). Then a valuation system \(\mathbf{V}\) is said to be strictly characteristic for \(\mathfrak{L}\) just in case \(Δ\vDash_V A \textrm{ iff } Δ\vdash_{ \mathfrak{L} }A\) (see Dummett 1981: 431). Conversely, one says that \(\mathfrak{L}\) is characterized by \(\mathbf{V}\). Thus, if a valuation system is said to determine a logic, the valuation system by itself is, properly speaking, not a logic, but only serves as a semantic basis for some logical system. Valuation systems are often referred to as (logical) matrices. Note that in (Urquhart 1986) the set \(\mathcal{D}\) of designated elements of a matrix is required to be non-empty, and in (Dunn and Hardegree 2001) \(\mathcal{D}\) is required to be a non-empty proper subset of \(\mathcal{V}\). With a view on semantically defining a many-valued logic, these restrictions are very natural and have been taken up in (Shramko and Wansing 2011) and elsewhere. For the characterization of consequence relations (see the supplementary document Suszko’s Thesis), however, the restrictions do not apply.

In this way Fregean, i.e., classical, logic can be presented as determined by a particular valuation system based on exactly two elements: \(\mathbf{V}_{cl} = \langle \{T, F\}, \{T\}, \{ f_{\wedge}, f_{\vee}, f_{\rightarrow}, f_{\sim}\}\rangle\), where \(f_{\wedge}, f_{\vee}, f_{\rightarrow},f_{\sim}\) are given by the classical truth tables for conjunction, disjunction, material implication, and negation.

As an example of a valuation system based on more than two elements, consider two well-known valuation systems which determine Kleene’s (strong) “logic of indeterminacy” \(K_3\) and Priest’s “logic of paradox” \(P_3\). In a propositional language without implication, \(K_3\) is specified by the Kleene matrix \(\mathbf{K}_3 = \langle \{T, I, F\}, \{T\}, \{ f_c: c \in \{\sim , \wedge , \vee \}\} \rangle\), where the functions \(f_c\) are defined as follows:

\[ \begin{array}{c|c} f_\sim & \\\hline T & F \\ I & I \\ F & T \\ \end{array}\quad \begin{array}{c|c|c|c} f_\wedge & T & I & F \\\hline T & T & I & F \\ I & I & I & F \\ F & F & F & F \\ \end{array}\quad \begin{array}{c|c|c|c} f_\vee & T & I & F \\\hline T & T & T & T \\ I & T & I & I \\ F & T & I & F \\ \end{array} \]

The Priest matrix \(\mathbf{P}_3\) differs from \(\mathbf{K}_3\) only in that \(\mathcal{D} = \{T, I\}\). Entailment in \(\mathbf{K}_3\) as well as in \(\mathbf{P}_3\) is defined by means of (3).
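Feeding the matrices \(\mathbf{K}_3\) and \(\mathbf{P}_3\) into the sketches above makes the role of the set \(\mathcal{D}\) tangible (the string encoding of the values is an illustrative choice): the law of excluded middle is a tautology of \(P_3\) but not of \(K_3\), and a contradiction does not entail an arbitrary sentence in \(P_3\):

    # The shared truth-value functions, with F < I < T as underlying order:
    rank = {"F": 0, "I": 1, "T": 2}
    f3 = {
        "not": lambda x: {"T": "F", "I": "I", "F": "T"}[x],
        "and": lambda x, y: min(x, y, key=rank.get),
        "or":  lambda x, y: max(x, y, key=rank.get),
    }
    values3 = ["T", "I", "F"]
    lem = ("or", "p", ("not", "p"))                          # p or not-p

    print(entails([], lem, values3, {"T"}, f3, ["p"]))       # K3: False (I is undesignated)
    print(entails([], lem, values3, {"T", "I"}, f3, ["p"]))  # P3: True

    # Paraconsistency of P3: p and not-p does not entail q.
    print(entails([("and", "p", ("not", "p"))], "q",
                  values3, {"T", "I"}, f3, ["p", "q"]))      # False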

There are natural intuitive interpretations of \(I\) in \(\mathbf{K}_3\) and in \(\mathbf{P}_3\) as the underdetermined and the overdetermined value respectively—a truth-value gap and a truth-value glut. Formally these interpretations can be modeled by presenting the values as certain subsets of the set of classical truth values \(\{T, F\}\). Then \(T\) turns into \(\mathbf{T} = \{T\}\) (understood as “true only”), \(F\) into \(\mathbf{F} = \{F\}\) (“false only”), \(I\) is interpreted in \(K_3\) as \(\mathbf{N} = \{\} = \varnothing\) (“neither true nor false”), and in \(P_3\) as \(\mathbf{B} = \{T, F\}\) (“both true and false”). (Note that also Asenjo (1966) considers the same truth-tables with an interpretation of the third value as “antinomic”.) The designatedness of a truth value can be understood in both cases as containment of the classical \(T\) as a member.

If one combines all these new values into a joint framework, one obtains the four-valued logic \(B_4\) introduced by Dunn and Belnap (Dunn 1976; Belnap 1977a,b); a Gentzen-style formulation can be found in Font (1997: 7). This logic is determined by the Belnap matrix \(\mathbf{B}_4 = \langle \{\mathbf{N}, \mathbf{T}, \mathbf{F}, \mathbf{B}\}, \{\mathbf{T}, \mathbf{B}\}, \{ f_c: c \in \{\sim , \wedge , \vee \}\}\rangle\), where the functions \(f_c\) are defined as follows:

\[ \begin{array}{c|c} f_\sim & \\\hline T & F \\ B & B \\ N & N \\ F & T \\ \end{array}\quad \begin{array}{c|c|c|c|c} f_\wedge & T & B & N & F \\\hline T & T & B & N & F \\ B & B & B & F & F \\ N & N & F & N & F \\ F & F & F & F & F\\ \end{array}\quad \begin{array}{c|c|c|c|c} f_\vee & T & B & N & F \\\hline T & T & T & T & T\\ B & T & B & T & B \\ N & T & T & N & N \\ F & T & B & N & F \\ \end{array} \]

Definition (3) applied to the Belnap matrix determines the entailment relation of \(\mathbf{B}_4\). This entailment relation is formalized as the well-known logic of “first-degree entailment” (\(E_{fde}\)) introduced in Anderson and Belnap (1975).
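Encoding each of Belnap’s values as a pair that records whether it contains the classical \(T\), respectively \(F\) (in line with the subset interpretation sketched above for \(\mathbf{K}_3\) and \(\mathbf{P}_3\)), the tables reduce to simple Boolean manipulations. The following sketch reuses the entails helper from above; all encodings are illustrative:

    # N = (False, False), F = (False, True), T = (True, False), B = (True, True):
    # the first component records membership of T, the second membership of F.
    f4 = {
        "not": lambda v: (v[1], v[0]),                      # swap told-true and told-false
        "and": lambda v, w: (v[0] and w[0], v[1] or w[1]),
        "or":  lambda v, w: (v[0] or w[0], v[1] and w[1]),
    }
    values4 = [(False, False), (False, True), (True, False), (True, True)]
    designated4 = {v for v in values4 if v[0]}              # T and B: they contain T

    # First-degree entailment is paraconsistent: p and not-p does not entail q.
    print(entails([("and", "p", ("not", "p"))], "q",
                  values4, designated4, f4, ["p", "q"]))    # False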

The syntactic notion of a single-conclusion consequence relation has been extensively studied by representatives of the Polish school of logic, most notably by Alfred Tarski, who in fact initiated this line of research (see Tarski 1930a,b; cf. also Wójcicki 1988). In view of certain key features of a standard consequence relation it is quite remarkable—as well as important—that any entailment relation \(\vDash_V\) defined as above has the following structural properties (see Ryan and Sadler 1992: 34):

\[ \begin{align} \tag{4} & Δ\cup \{A\}\vDash_V A && \textrm{(Reflexivity)} \\ \tag{5} & \textrm{If } Δ\vDash_V A, \textrm{ then } Δ\cup Γ\vDash_V A && \textrm{(Monotonicity)}\\ \tag{6} & \textrm{If } Δ\vDash_V A \textrm{ for every } A \in Γ \textrm{ and } Γ\cup Δ \vDash_V B, \textrm{ then } Δ\vDash_V B && \textrm{(Cut)} \end{align} \]

Moreover, for every \(A \in \mathcal{L}\), every \(Δ \subseteq \mathcal{L}\), and every uniform substitution function \(σ\) on \(\mathcal{L}\) the following Substitution property holds (\(σ(Δ)\) stands for \(\{ σ(B) \mid B \in Δ\})\):

\[ \tag{7} Δ\vDash_V A \textrm{ implies } σ(Δ)\vDash_Vσ(A). \]

(The function of uniform substitution \(σ\) is defined as follows. Let \(B\) be a formula in \(\mathcal{L}\), let \(p_1,\ldots, p_n\) be all the propositional variables occurring in \(B\), and let \(σ(p_1) = A_1,\ldots , σ(p_n) = A_n\) for some formulas \(A_1 ,\ldots ,A_n\). Then \(σ(B)\) is the formula that results from \(B\) by substituting simultaneously \(A_1,\ldots, A_n\) for all occurrences of \(p_1,\ldots, p_n\), respectively.)

If \(\vDash_V\) in the conditions (4)–(6) is replaced by \(\vdash_{ \mathfrak{L} }\), then one obtains what is often called a Tarskian consequence relation. If additionally a consequence relation has the substitution property (7), then it is called structural. Thus, any entailment relation defined for a given valuation system \(\mathbf{V}\) presents an important example of a consequence relation, in that \(\mathbf{V}\) is strictly characteristic for some logical system \(\mathfrak{L}\) with a structural Tarskian consequence relation.

Generally speaking, the framework of valuation systems not only perfectly suits the conception of logic as the science of truth values, but also turns out to be an effective technical tool for resolving various sophisticated and important problems in modern logic, such as soundness, completeness, independence of axioms, etc.

2.3 Truth values, truth degrees, and vague concepts

The term ‘truth degrees’, used by Gottwald and many other authors, suggests that truth comes by degrees, and these degrees may be seen as truth values in an extended sense. The idea of truth as a graded notion has been applied to model vague predicates and to obtain a solution to the Sorites Paradox, the Paradox of the Heap (see the entry on the Sorites Paradox). However, the success of applying many-valued logic to the problem of vagueness is highly controversial. Timothy Williamson (1994: 97), for example, holds that the phenomenon of higher-order vagueness “makes most work on many-valued logic irrelevant to the problem of vagueness”.

In any case, the vagueness of concepts has been much debated in philosophy (see the entry on vagueness) and it was one of the major motivations for the development of fuzzy logic (see the entry on fuzzy logic). In the 1960s, Lotfi Zadeh (1965) introduced the notion of a fuzzy set. A characteristic function of a set \(X\) is a mapping which is defined on a superset \(Y\) of \(X\) and which indicates membership of an element in \(X\). The range of the characteristic function of a classical set \(X\) is the two-element set \(\{0,1\}\) (which may be seen as the set of classical truth values). The function assigns the value 1 to elements of \(X\) and the value 0 to all elements of \(Y\) not in \(X\). A fuzzy set has a membership function ranging over the real interval [0,1]. A vague predicate such as ‘is much earlier than March 20th, 1963’, ‘is beautiful’, or ‘is a heap’ may then be regarded as denoting a fuzzy set. The membership function \(g\) of the fuzzy set denoted by ‘is much earlier than March 20th, 1963’ thus assigns values (seen as truth degrees) from the interval [0, 1] to moments in time, for example \(g\)(1p.m., August 1st, 2006) \(= 0\), \(g\)(3a.m., March 19th, 1963) \(= 0\), \(g\)(9:16a.m., April 9th, 1960) \(= 0.005\), \(g\)(2p.m., August 13th, 1943) \(= 0.05\), \(g\)(7:02a.m., December 2nd, 1278) \(= 1\).

The application of continuum-valued logics to the Sorites Paradox has been suggested by Joseph Goguen (1969). The Sorites Paradox in its so-called conditional form is obtained by repeatedly applying modus ponens in arguments such as:

  • A collection of 100,000 grains of sand is a heap.
  • If a collection of 100,000 grains of sand is a heap, then a collection of 99,999 grains of sand is a heap.
  • If a collection of 99,999 grains of sand is a heap, then a collection of 99,998 grains of sand is a heap.
  • …
  • If a collection of 2 grains of sand is a heap, then a collection of 1 grain of sand is a heap.
  • Therefore: A collection of 1 grain of sand is a heap.

Whereas it seems that all premises are acceptable, because the first premise is true and one grain does not make a difference to a collection of grains being a heap or not, the conclusion is, of course, unacceptable. If the predicate ‘is a heap’ denotes a fuzzy set and the conditional is interpreted as implication in Łukasiewicz’s continuum-valued logic, then the Sorites Paradox can be avoided. The truth-function \(f_{\rightarrow}\) of Łukasiewicz’s implication \(\rightarrow\) is defined by stipulating that if \(x \le y\), then \(f_{\rightarrow}(x, y) = 1\), and otherwise \(f_{\rightarrow}(x, y) = 1 - (x - y)\). If, say, the truth value of the sentence ‘A collection of 500 grains of sand is a heap’ is 0.8 and the truth value of ‘A collection of 499 grains of sand is a heap’ is 0.7, then the truth value of the implication ‘If a collection of 500 grains of sand is a heap, then a collection of 499 grains of sand is a heap’ is 0.9. Moreover, if the acceptability of a statement is defined as having a value greater than \(j\) for \(0 \lt j \lt 1\) and all the conditional premises of the Sorites Paradox do not fall below the value \(j\), then modus ponens does not preserve acceptability, because the conclusion of the Sorites Argument, being evaluated as 0, is unacceptable.
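The arithmetic of this diagnosis is easy to make explicit. In the sketch below (Python; the linear assignment of degrees is an illustrative assumption, not part of Goguen’s proposal), every conditional premise receives a value very close to 1, while the value of the conclusion collapses:

    # Illustrative assumption: the degree of "a collection of n grains is a
    # heap" falls linearly from 1 (at 100,000 grains) towards 0 (at 1 grain).
    def heap(n, top=100_000):
        return n / top

    def luk_impl(x, y):
        """Lukasiewicz implication: 1 if x <= y, else 1 - (x - y)."""
        return 1.0 if x <= y else 1.0 - (x - y)

    # Every conditional premise "heap(n) -> heap(n-1)" is almost fully true:
    worst = min(luk_impl(heap(n), heap(n - 1)) for n in range(2, 100_001))
    print(round(worst, 5))    # 0.99999: never below a threshold of, say, j = 0.9

    # Yet repeated modus ponens does not preserve acceptability:
    print(heap(100_000))      # 1.0   (the first premise)
    print(heap(1))            # 1e-05 (the conclusion, essentially false)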

Alasdair Urquhart (1986: 108) stresses

the extremely artificial nature of the attaching of precise numerical values to sentences like … “Picasso’s Guernica is beautiful”.

To overcome the problem of assigning precise values to predications of vague concepts, Zadeh (1975) introduced fuzzy truth values as distinct from the numerical truth values in [0, 1], the former being fuzzy subsets of the set [0, 1], understood as true, very true, not very true, etc.

The interpretation of continuum-valued logics in terms of fuzzy set theory has for some time been seen as defining the field of mathematical fuzzy logic. Susan Haack (1996) refers to such systems of mathematical fuzzy logic as “base logics” of fuzzy logic and reserves the term ‘fuzzy logics’ for systems in which the truth values themselves are fuzzy sets. Fuzzy logic in Zadeh’s latter sense has been thoroughly criticized from a philosophical point of view by Haack (1996) for its “methodological extravagances” and its linguistic incorrectness. Haack emphasizes that her criticisms of fuzzy logic do not apply to the base logics. Moreover, it should be pointed out that mathematical fuzzy logics are nowadays studied not in the first place as continuum-valued logics, but as many-valued logics related to residuated lattices (see Hajek 1998; Cignoli et al. 2000; Gottwald 2001; Galatos et al. 2007), whereas fuzzy logic in the broad sense is to a large extent concerned with certain engineering methods.

A fundamental concern about the semantical treatment of vague predicates is whether an adequate semantics should be truth-functional, that is, whether the truth value of a complex formula should depend functionally on the truth values of its subformulas. Whereas mathematical fuzzy logic is truth-functional, Williamson (1994: 97) holds that “the nature of vagueness is not captured by any approach that generalizes truth-functionality”. According to Williamson, the degree of truth of a conjunction, a disjunction, or a conditional just fails to be a function of the degrees of truth of vague component sentences. The sentences ‘John is awake’ and ‘John is asleep’, for example, may have the same degree of truth. By truth-functionality the sentences ‘If John is awake, then John is awake’ and ‘If John is awake, then John is asleep’ are alike in truth degree, indicating for Williamson the failure of degree-functionality.

One way of reasoning about vagueness that is, in a certain sense, not truth-functional is supervaluationism. The method of supervaluations has been developed by Henryk Mehlberg (1958) and Bas van Fraassen (1966) and has later been applied to vagueness by Kit Fine (1975), Rosanna Keefe (2000) and others.

Van Fraassen’s aim was to develop a semantics for sentences containing non-denoting singular terms. Even if one grants that atomic sentences containing non-denoting singular terms, as well as some attributions of vague predicates, are neither true nor false, it nevertheless seems natural not to preclude that compound sentences of a certain shape containing non-denoting terms or vague predications are either true or false, e.g., sentences of the form ‘If \(A\), then \(A\)’. Supervaluational semantics provides a solution to this problem. A three-valued assignment \(a\) into \(\{T, I, F\}\) may assign a truth-value gap (or rather the value \(I\)) to the vague sentence ‘Picasso’s Guernica is beautiful’. Any classical assignment \(a'\) that agrees with \(a\) whenever \(a\) assigns \(T\) or \(F\) may be seen as a precisification (or superassignment) of \(a\). A sentence may then be said to be supertrue under assignment \(a\) if it is true under every precisification \(a'\) of \(a\). Thus, if \(a\) is a three-valued assignment into \(\{T, I, F\}\) and \(a'\) is a two-valued assignment into \(\{T, F\}\) such that \(a(p) = a'(p)\) if \(a(p) \in \{T, F\}\), then \(a'\) is said to be a superassignment of \(a\). It turns out that if \(a\) is an assignment extended to a valuation function \(v_a\) for the Kleene matrix \(\mathbf{K}_3\), then for every formula \(A\) in the language of \(\mathbf{K}_3\), \(v_a (A) = v_{a'}(A)\) if \(v_a (A) \in \{T, F\}\). Therefore, the function \(v_{a'}\) may be called a supervaluation of \(v_a\). A formula is then said to be supertrue under a valuation function \(v_a\) for \(\mathbf{K}_3\) if it is true under every supervaluation \(v_{a'}\) of \(v_a\), i.e., if \(v_{a'}(A) = T\) for every supervaluation \(v_{a'}\) of \(v_a\). The property of being superfalse is defined analogously.

Since every supervaluation is a classical valuation, every classical tautology is supertrue under every valuation function in \(\mathbf{K}_3\). Supervaluationism is, however, not truth-functional with respect to supervalues. The supervalue of a disjunction, for example, does not depend on the supervalues of the disjuncts. Suppose \(a(p) = I\). Then \(v_a(\neg p) = I\) and \(v_{a'} (p\vee \neg p) = T\) for every supervaluation \(v_{a'}\) of \(v_a\). Whereas \((p\vee \neg p)\) is thus supertrue under \(v_a\), \(p\vee p\) is not, because there are superassignments \(a'\) of \(a\) with \(a'(p) = F\). An argument against the charge that supervaluationism requires a non-truth-functional semantics of the connectives can be found in MacFarlane (2008) (cf. also other references given there).
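This failure of truth-functionality at the level of supervalues can be verified mechanically. The sketch below (illustrative encodings as before) enumerates all classical precisifications of a three-valued assignment and tests for supertruth:

    from itertools import product

    def precisifications(a):
        """All classical sharpenings of a: every I is resolved to T or F."""
        gaps = [p for p, v in a.items() if v == "I"]
        for combo in product(["T", "F"], repeat=len(gaps)):
            yield {**a, **dict(zip(gaps, combo))}

    def classical(formula, a):
        if isinstance(formula, str):
            return a[formula] == "T"
        c, *args = formula
        if c == "not":
            return not classical(args[0], a)
        if c == "or":
            return classical(args[0], a) or classical(args[1], a)
        return classical(args[0], a) and classical(args[1], a)   # c == "and"

    def supertrue(formula, a):
        return all(classical(formula, b) for b in precisifications(a))

    a = {"p": "I"}
    print(supertrue(("or", "p", ("not", "p")), a))   # True: a classical tautology
    print(supertrue(("or", "p", "p"), a))            # False: some sharpenings make p false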

Although the possession of supertruth is preserved from the premises to the conclusion(s) of valid inferences in supervaluationism, and although it might be tempting to consider supertruth an abstract object on its own, it seems that it has never been suggested to hypostatize supertruth in this way, comparable to Frege’s the True. A sentence supertrue under a three-valued valuation \(v\) just takes the Fregean value the True under every supervaluation of \(v\). The advice not to confuse supertruth with “real truth” can be found in Belnap (2009).

2.4 Suszko’s thesis and anti-designated values

One might, perhaps, think that the mere existence of many-valued logics shows that there exist infinitely, in fact, uncountably many truth values. However, this is not at all clear (recall the more cautious use of terminology advocated by Gottwald).

In the 1970s Roman Suszko (1977: 377) declared many-valued logic to be “a magnificent conceptual deceit”. Suszko actually claimed that “there are but two logical values, true and false” (Caleiro et al. 2005: 169), a statement now called Suszko’s Thesis. For Suszko, the set of truth values assumed in a logical matrix for a many-valued logic is a set of “admissible referents” (called “algebraic values”) of formulas but not a set of logical values. Whereas the algebraic values are elements of an algebraic structure and referents of formulas, the logical value true is used to define valid consequence: If every premise is true, then so is (at least one of) the conclusion(s). The other logical value, false, is preserved in the opposite direction: If the (every) conclusion is false, then so is at least one of the premises. The logical values are thus represented by a bi-partition of the set of algebraic values into a set of designated values (truth) and its complement (falsity).

Essentially the same idea has been taken up earlier by Dummett (1959) in his influential paper, where he asks

what point there may be in distinguishing between different ways in which a statement may be true or between different ways in which it may be false, or, as we might say, between degrees of truth and falsity. (Dummett 1959: 153)

Dummett observes that, first,

the sense of a sentence is determined wholly by knowing the case in which it has a designated value and the cases in which it has an undesignated one,

and moreover,

finer distinctions between different designated values or different undesignated ones, however naturally they come to us, are justified only if they are needed in order to give a truth-functional account of the formation of complex statements by means of operators. (Dummett 1959: 155)

Suszko’s claim evidently echoes this observation by Dummett.

Suszko’s Thesis is substantiated by a rigorous proof (the Suszko Reduction) showing that every structural Tarskian consequence relation and therefore also every structural Tarskian many-valued propositional logic is characterized by a bivalent semantics. (Note also that Richard Routley (1975) has shown that every logic based on a λ-categorical language has a sound and complete bivalent possible worlds semantics.) The dichotomy between designated values and values which are not designated and its use in the definition of entailment plays a crucial role in the Suszko Reduction. Nevertheless, while it seems quite natural to construe the set of designated values as a generalization of the classical truth value \(T\) in some of its significant roles, it would not always be adequate to interpret the set of non-designated values as a generalization of the classical truth value \(F\). The point is that in a many-valued logic, unlike in classical logic, “not true” does not always mean “false” (cf., e.g., the above interpretation of Kleene’s logic, where sentences can be neither true nor false).

In the literature on many-valued logic it is sometimes proposed to consider a set of antidesignated values which do not necessarily constitute the complement of the set of designated values (see, e.g., Rescher 1969, Gottwald 2001). The set of antidesignated values can be regarded as representing a generalized concept of falsity. This distinction leaves room for values that are neither designated nor antidesignated and even for values that are both designated and antidesignated.

Grzegorz Malinowski (1990, 1994) takes advantage of this proposal to give a counterexample to Suszko’s Thesis. He defines the notion of a single-conclusion quasi-consequence \((q\)-consequence) relation. The semantic counterpart of \(q\)-consequence is called \(q\)-entailment. Single-conclusion \(q\)-entailment is defined by requiring that if no premise is antidesignated, the conclusion is designated. Malinowski (1990) proved that for every structural \(q\)-consequence relation, there exists a characterizing class of \(q\)-matrices, matrices which in addition to a subset \(\mathcal{D}^{+}\) of designated values comprise a disjoint subset \(\mathcal{D}^-\) of antidesignated values. Not every \(q\)-consequence relation has a bivalent semantics.
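The difference between q-entailment and definition (3) lies solely in the condition imposed on the premises. The following variant of the earlier entails sketch (illustrative; it reuses the valuation helper and the matrix f3 from above) makes the contrast explicit and exhibits the failure of Reflexivity, one way of seeing that q-consequence is not a Tarskian consequence relation:

    from itertools import product

    def q_entails(premises, conclusion, values, d_plus, d_minus, functions, atoms):
        """If no premise is antidesignated (in d_minus), the conclusion
        must be designated (in d_plus)."""
        for combo in product(values, repeat=len(atoms)):
            a = dict(zip(atoms, combo))
            if all(valuation(B, a, functions) not in d_minus for B in premises):
                if valuation(conclusion, a, functions) not in d_plus:
                    return False
        return True

    # With V = {T, I, F}, D+ = {T}, D- = {F}, even "p q-entails p" fails:
    # a premise valued I is not antidesignated, yet as a conclusion
    # the value I is not designated.
    print(q_entails(["p"], "p", ["T", "I", "F"], {"T"}, {"F"}, f3, ["p"]))  # False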

In the supplementary document Suszko’s Thesis, Suszko’s reduction is introduced, Malinowski’s counterexample to Suszko’s Thesis is outlined, and a short analysis of these results is presented.

Can one provide evidence for a multiplicity of logical values? More concretely, is there more than one logical value, each of which may be taken to determine its own (independent) entailment relation? A positive answer to this question emerges from considerations on truth values as structured entities which, by virtue of their internal structure, give rise to natural partial orderings on the set of values.

3. Ordering relations between truth-values

3.1 The notion of a logical order

As soon as one admits that truth values come with valuation systems, it is quite natural to assume that the elements of such a system are somehow interrelated. And indeed, already the valuation system for classical logic constitutes a well-known algebraic structure, namely the two-element Boolean algebra with \(\cap\) and \(\cup\) as meet and join operators (see the entry on the mathematics of Boolean algebra). In its turn, this Boolean algebra forms a lattice with a partial order defined by \(a\le_t b \textrm{ iff } a\cap b = a\). This lattice may be referred to as TWO. It is easy to see that the elements of TWO are ordered as follows: \(F\le_t T\). This ordering is sometimes called the truth order (as indicated by the corresponding subscript), for intuitively it expresses an increase in truth: \(F\) is “less true” than \(T\). It can be schematically presented by means of a so-called Hasse-diagram as in Figure 1.

[a horizontal line segment with the left endpoint labeled 'F' and the right endpoint labeled 'T', below an arrow goes from left to right with the arrowhead labeled 't'.]

Figure 1: Lattice TWO

It is also well-known that the truth values of both Kleene’s and Priest’s logic can be ordered to form a lattice (THREE), which is diagrammed in Figure 2.

[The same as figure 1 except the line segment has a point near the middle labeled 'I'.]

Figure 2: Lattice THREE

Here \(\le_t\) orders \(T, I\) and \(F\) so that the intermediate value \(I\) is “more true” than \(F\), but “less true” than \(T\).

The relation \(\le_t\) is also called a logical order, because it can be used to determine key logical notions: logical connectives and an entailment relation. Namely, if the elements of the given valuation system \(\mathbf{V}\) form a lattice, then the operations of meet and join with respect to \(\le_t\) are usually seen as the functions for conjunction and disjunction, whereas negation can be represented by the inversion of this order. Moreover, one can consider an entailment relation for \(\mathbf{V}\) as expressing agreement with the truth order, that is, the conclusion should be at least as true as the premises taken together:

\[ \tag{8} Δ\vDash B\textrm{ iff }\forall v_a[\Pi_t\{ v_a (A) \mid A \in Δ\} \le_t v_a (B)], \]

where \(\Pi_t\) is the lattice meet in the corresponding lattice.

The Belnap matrix \(\mathbf{B}_4\) considered above also can be represented as a partially ordered valuation system. The set of truth values \(\{\mathbf{N}, \mathbf{T}, \mathbf{F}, \mathbf{B}\}\) from \(\mathbf{B}_4\) constitutes a specific algebraic structure – the bilattice FOUR\(_2\) presented in Figure 3 (see, e.g., Ginsberg 1988, Arieli and Avron 1996, Fitting 2006).

[a graph with the y axis labeled 'i' and the x axis labeled 't'. A square with the corners labeled 'B' (top), 'T' (right), 'N' (bottom), and 'F' (left).]

Figure 3: The bilattice FOUR\(_2\)

This bilattice is equipped with two partial orderings; in addition to a truth order, there is an information order \((\le_i )\) which is said to order the values under consideration according to the information they give concerning a formula to which they are assigned. Lattice meet and join with respect to \(\le_t\) coincide with the functions \(f_{\wedge}\) and \(f_{\vee}\) in the Belnap matrix \(\mathbf{B}_4\), \(f_{{\sim}}\) turns out to be the truth order inversion, and an entailment relation, which happens to coincide with the matrix entailment, is defined by (8). FOUR\(_2\) arises as a combination of two structures: the approximation lattice \(A_4\) and the logical lattice \(L_4\), which are discussed in Belnap 1977a and 1977b (see also Anderson, Belnap and Dunn 1992: 510–518).
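Definition (8) can likewise be checked exhaustively for finitely many atoms. A sketch under the pair encoding of Belnap’s values used earlier (in that encoding the meet in the truth order coincides with the conjunction function of the Belnap sketch; the helpers from the previous sketches are reused, and all names are illustrative):

    from functools import reduce
    from itertools import product

    meet4 = f4["and"]                            # truth-order meet in FOUR_2
    top4 = (True, False)                         # the value T, top of the truth order

    def leq_t(v, w):
        return meet4(v, w) == v                  # v <=_t w iff v meet w equals v

    def order_entails4(premises, conclusion, atoms):
        """Definition (8): the conclusion must be at least as true as the
        meet of the premises, under every assignment."""
        for combo in product(values4, repeat=len(atoms)):
            a = dict(zip(atoms, combo))
            bound = reduce(meet4, (valuation(B, a, f4) for B in premises), top4)
            if not leq_t(bound, valuation(conclusion, a, f4)):
                return False
        return True

    # The verdict agrees with the matrix entailment computed earlier:
    print(order_entails4([("and", "p", ("not", "p"))], "q", ["p", "q"]))  # False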

3.2 Truth values as structured entities. Generalized truth values

Frege (1892: 30) points out the possibility of “distinctions of parts within truth values”. Although he immediately specifies that the word ‘part’ is used here “in a special sense”, the basic idea seems nevertheless to be that truth values are not something amorphous, but possess some inner structure. It is not quite clear how serious Frege is about this view, but it seems to suggest that truth values may well be interpreted as complex, structured entities that can be divided into parts.

There exist several approaches to semantic constructions where truth values are represented as being made up from some primitive components. For example, in some explications of Kripke models for intuitionistic logic, propositions (identified with sets of “worlds” in a model structure) can be understood as truth values of a certain kind. Then the empty proposition is interpreted as the value false, and the maximal proposition (the set of all worlds in a structure) as the value true. Moreover, one can consider non-empty subsets of the maximal proposition as intermediate truth values. Clearly, the intuitionistic truth values so conceived are composed from some simpler elements and as such they turn out to be complex entities.

Another prominent example of structured truth values are the “truth-value objects” in topos models from category theory (see the entry on category theory). For any topos \(C\) and for a \(C\)-object Ω one can define a truth value of \(C\) as an arrow \(1 \rightarrow Ω\) (“a subobject classifier for \(C\)”), where 1 is a terminal object in \(C\) (cf. Goldblatt 2006: 81, 94). The set of truth values so defined plays a special role in the logical structure of \(C\), since arrows of the form \(1 \rightarrow Ω\) determine central semantical notions for the given topos. And again, these truth values evidently have some inner structure.

One can also mention in this respect the so-called “factor semantics” for many-valued logic, where truth values are defined as ordered \(n\)-tuples of classical truth values \((T\)-\(F\) sequences, see Karpenko 1983). Then the value \(3/5\), for example, can be interpreted as a \(T\)-\(F\) sequence of length 5 with exactly 3 occurrences of \(T\). Here the classical values \(T\) and \(F\) are used as “building blocks” for non-classical truth values.

Moreover, the idea of truth values as compound entities nicely conforms with the modeling of truth values considered above in three-valued (Kleene, Priest) and four-valued (Belnap) logics as certain subsets of the set of classical truth values. The latter approach stems essentially from Dunn (1976), where a generalization of the notion of a classical truth-value function has been proposed to obtain so-called “underdetermined” and “overdetermined” valuations. Namely, Dunn considers a valuation to be a function not from sentences to elements of the set \(\{T, F\}\) but from sentences to subsets of this set (see also Dunn 2000: 7). By developing this idea, one arrives at the concept of a generalized truth value function, which is a function from sentences into the subsets of some basic set of truth values (see Shramko and Wansing 2005). The values of generalized truth value functions can be called generalized truth values.

By employing the idea of generalized truth value functions, one can obtain a hierarchy of valuation systems starting with a certain set-theoretic representation of the valuation system for classical logic. The representation in question is built on a single initial value which serves then as the designated value of the resulting valuation system. More specifically, consider the singleton \(\{\varnothing \}\) taken as the basic set subject to a further generalization procedure. At the first stage \(\varnothing\) comes out with no specific intuitive interpretation; it is only important to take it as some distinct unit. Consider then the power-set of \(\{\varnothing \}\) consisting of exactly two elements: \(\{\{\varnothing \}, \varnothing \}\). Now, these elements can be interpreted as Frege’s the True and the False, and thus it is possible to construct a valuation system for classical logic, \(\mathbf{V}^{\varnothing}_{cl} = \langle \{\{\varnothing \}, \varnothing \}, \{\{\varnothing \}\}, \{f_{\wedge}, f_{\vee}, f_{\rightarrow}, f_{\sim}\}\rangle\), where the functions \(f_{\wedge}, f_{\vee}, f_{\rightarrow}, f_{\sim}\) are defined as follows, for \(X, Y \in \{\{\varnothing \}, \varnothing \}\) (the complements being taken relative to the basic set \(\{\varnothing \}\)):

\[ \begin{align} f_{\wedge}(X, Y) &= X\cap Y; \\ f_{\vee}(X, Y) &= X\cup Y; \\ f_{\rightarrow}(X, Y) &= (\{\varnothing \}-X)\cup Y; \\ f_{\sim}(X) &= \{\varnothing \}-X. \end{align} \]

It is not difficult to see that for any assignment \(a\) relative to \(\mathbf{V}^{\varnothing}_{cl}\), and for any formulas \(A\) and \(B\), the following holds:

\(v_a (A\wedge B) = \{\varnothing \}\Leftrightarrow v_a (A) = \{\varnothing \}\) and \(v_a (B) = \{\varnothing \}\);
\(v_a (A\vee B) = \{\varnothing \}\Leftrightarrow v_a (A) = \{\varnothing \}\) or \(v_a (B) = \{\varnothing \}\);
\(v_a (A\rightarrow B) = \{\varnothing \}\Leftrightarrow v_a (A) = \varnothing\) or \(v_a (B) = \{\varnothing \}\);
\(v_a (\sim A) = \{\varnothing \}\Leftrightarrow v_a (A) = \varnothing\).

This shows that \(f_{\wedge}, f_{\vee}, f_{\rightarrow}\) and \(f_{\sim}\) determine exactly the propositional connectives of classical logic. One can conveniently mark the elements \(\{\varnothing \}\) and \(\varnothing\) in the valuation system \(\mathbf{V}^{\varnothing}_{cl}\) by the classical labels \(T\) and \(F\). Note that within \(\mathbf{V}^{\varnothing}_{cl}\) it is fully justifiable to associate \(\varnothing\) with falsity, taking into account the virtual monism of truth characteristic for classical logic, which treats falsity not as an independent entity but merely as the absence of truth.
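The construction can be replayed directly, with frozensets standing in for \(\varnothing\) and \(\{\varnothing \}\) (an illustrative sketch; the assertions check exactly the four biconditionals just listed):

    E = frozenset()                  # the empty set, playing the role of F
    S = frozenset({E})               # its singleton, playing the role of T
    f_and = lambda X, Y: X & Y
    f_or  = lambda X, Y: X | Y
    f_neg = lambda X: S - X          # complement relative to the basic set
    f_imp = lambda X, Y: (S - X) | Y

    for X in (S, E):
        for Y in (S, E):
            assert (f_and(X, Y) == S) == (X == S and Y == S)
            assert (f_or(X, Y) == S)  == (X == S or Y == S)
            assert (f_imp(X, Y) == S) == (X == E or Y == S)
        assert (f_neg(X) == S) == (X == E)
    print("classical truth tables recovered")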

Then, by taking the set \(\mathbf{2} = \{F, T\}\) of these classical values as the basic set for the next valuation system, one obtains the four truth values of Belnap’s logic as the power-set of the set of classical values \(\mathcal{P}(\mathbf{2}) = \mathbf{4}: \mathbf{N} = \varnothing\), \(\mathbf{F} = \{F\} (= \{\varnothing \})\), \(\mathbf{T} = \{T\} (= \{\{\varnothing \}\})\) and \(\mathbf{B} = \{F, T\} (= \{\varnothing, \{\varnothing \}\})\). In this way, Belnap’s four-valued logic emerges as a certain generalization of classical logic with its two Fregean truth values. In Belnap’s logic truth and falsity are considered to be full-fledged, self-sufficient entities, and therefore \(\varnothing\) is now to be interpreted not as falsity, but as a real truth-value gap (neither true nor false). The dissimilarity of Belnap’s truth and falsity from their classical analogues is naturally expressed by passing from the corresponding classical values to their singleton-sets, indicating thus their new interpretations as false only and true only. Belnap’s interpretation of the four truth values has been critically discussed in Lewis 1982 and Dubois 2008 (see also the reply to Dubois in Wansing and Belnap 2010).

Generalized truth values have a strong intuitive background, especially as a tool for the rational explication of incomplete and inconsistent information states. In particular, Belnap’s heuristic interpretation of truth values as information that “has been told to a computer” (see Belnap 1977a,b; also reproduced in Anderson, Belnap and Dunn 1992, §81) has been widely acknowledged. As Belnap points out, a computer may receive data from various (maybe independent) sources. Belnap’s computers have to take into account various kinds of information concerning a given sentence. Besides the standard (classical) cases, when a computer obtains information either that the sentence is (1) true or that it is (2) false, two other (non-standard) situations are possible: (3) nothing is told about the sentence or (4) the sources supply inconsistent information, information that the sentence is true and information that it is false. And the four truth values from \(\mathbf{B}_4\) naturally correspond to these four situations: there is no information that the sentence is false and no information that it is true \((\mathbf{N})\), there is merely information that the sentence is false \((\mathbf{F})\), there is merely information that the sentence is true \((\mathbf{T})\), and there is information that the sentence is false, but there is also information that it is true \((\mathbf{B})\).

Joseph Camp (2002: 125–160) provides Belnap’s four values with quite a different intuitive motivation by developing what he calls a “semantics of confused thought”. Consider a rational agent, who happens to mix up two very similar objects (say, \(a\) and \(b)\) and ambiguously uses one name (say, ‘\(C\)’) for both of them. Now let such an agent assert some statement, saying, for instance, that \(C\) has some property. How should one evaluate this statement if \(a\) has the property in question whereas \(b\) lacks it? Camp argues against ascribing truth values to such statements and puts forward an “epistemic semantics” in terms of “profitability” and “costliness” as suitable characterizations of sentences. A sentence \(S\) is said to be “profitable” if one would profit from acting on the belief that \(S\), and it is said to be “costly” if acting on the belief that \(S\) would generate costs, for example as measured by failure to achieve an intended goal. If our “confused agent” asks some external observers whether \(C\) has the discussed property, the following four answers are possible: ‘yes’ (mark the corresponding sentence with \(\mathbf{Y})\), ‘no’ (mark it with \(\mathbf{N})\), ‘cannot say’ (mark it with ?), ‘yes’ and ‘no’ (mark it with Y&N). Note that the external observers, who provide answers, are “non-confused” and have different objects in mind as to the referent of ‘\(C\)’, in view of all the facts that may be relevant here. Camp conceives these four possible answers concerning epistemic properties of sentences as a kind of “semantic values”, interpreting them as follows: the value \(\mathbf{Y}\) is an indicator of profitability, the value \(\mathbf{N}\) is an indicator of costliness, the value ? is no indicator either way, and the value Y&N is both an indicator of profitability and an indicator of costliness. A strict analogy between this “semantics of confused reasoning” and Belnap’s four-valued logic is straightforward. And indeed, as Camp (2002: 157) observes, the set of implications valid according to his semantics is exactly the set of implications of the entailment system \(E_{fde}\). In Zaitsev and Shramko 2013 it is demonstrated how ontological and epistemic aspects of truth values can be combined within a joint semantical framework.

The conception of generalized truth values has its purely logical import as well. If one continues the construction and applies the idea of generalized truth value functions to Belnap’s four truth values, then one obtains further valuation systems which can be represented by various multilattices. One arrives, in particular, at SIXTEEN\(_3\) – the trilattice of 16 truth values, which can be viewed as a basis for a logic of computer networks (see Shramko and Wansing 2005, 2006; Kamide and Wansing 2009; Odintsov 2009; Wansing 2010; Odintsov and Wansing 2015; cf. also Shramko, Dunn, Takenaka 2001). The notion of a multilattice and SIXTEEN\(_3\) are discussed further in the supplementary document Generalized truth values and multilattices. A comprehensive study of the conception of generalized logical values can be found in Shramko and Wansing 2011.

4. Concluding remarks

Gottlob Frege’s notion of a truth value has become part of the standard philosophical and logical terminology. The notion of a truth value is an indispensable instrument of realistic, model-theoretic approaches to semantics. Indeed, truth values play an essential role in applications of model-theoretic semantics in areas such as knowledge representation and theorem proving based on semantic tableaux, which could not be treated in the present entry. Moreover, considerations on truth values give rise to deep ontological questions concerning their own nature, the feasibility of fact ontologies, and the role of truth values in such ontological theories. Furthermore, there exist well-motivated theories of generalized truth values that lead far beyond Frege’s classical values, the True and the False. (For various directions of recent logical and philosophical investigations in the area of truth values see Truth Values I 2009 and Truth Values II 2009.)

Bibliography

  • Anderson, Alan R. and Nuel D. Belnap, 1975, Entailment: The Logic of Relevance and Necessity, Vol. I, Princeton, NJ: Princeton University Press.
  • Anderson, Alan R., Nuel D. Belnap, and J. Michael Dunn, 1992, Entailment: The Logic of Relevance and Necessity, Vol. II, Princeton, NJ: Princeton University Press.
  • Anderson, David and Edward Zalta, 2004, “Frege, Boolos, and logical objects”, Journal of Philosophical Logic, 33: 1–26.
  • Arieli, Ofer and Arnon Avron, 1996, “Reasoning with logical bilattices”, Journal of Logic, Language and Information, 5: 25–63.
  • Asenjo, Florencio G., 1966, “A calculus of antinomies”, Notre Dame Journal of Formal Logic, 7: 103–105.
  • Barwise, Jon and John Perry, 1981, “Semantic innocence and uncompromising situations”, Midwest Studies in the Philosophy of Language, VI: 387–403.
  • Beaney, Michael (ed. and transl.), 1997, The Frege Reader, Oxford: Wiley-Blackwell.
  • Belnap, Nuel D., 1977a, “How a computer should think”, in G. Ryle (ed.), Contemporary Aspects of Philosophy, Stocksfield: Oriel Press Ltd., 30–55.
  • –––, 1977b, “A useful four-valued logic”, in: J.M. Dunn and G. Epstein (eds.), Modern Uses of Multiple-Valued Logic, Dordrecht: D. Reidel Publishing Co., 8–37.
  • –––, 2009, “Truth values, neither-true-nor-false, and supervaluations”, Studia Logica, 91: 305–334.
  • Bennett, Jonathan, 1988, Events and their Names, New York: Hackett.
  • Béziau, Jean-Yves, 2012, “A History of Truth-Values”, in D. Gabbay et al. (eds.), Handbook of the History of Logic. Vol. 11, Logic: A History of its Central Concepts, Amsterdam: North-Holland, 235–307.
  • Brown, Bryson and Peter Schotch, 1999, “Logic and aggregation”, Journal of Philosophical Logic, 28: 265–287.
  • Burge, Tyler, 1986, “Frege on truth”, in: L. Haaparanta and J. Hintikka (eds.), Frege Synthesized, Dordrecht: D. Reidel Publishing Co., 97–154.
  • –––, 1992, “Frege on knowing the Third Realm”, Mind, 101: 633–650.
  • Caleiro, Carlos, Walter Carnielli, Marcelo Coniglio, and João Marcos, 2005, “Two’s company: ‘The humbug of many logical values’”, in: J.-Y. Béziau (ed.), Logica Universalis, Basel: Birkhäuser Verlag, 169–189.
  • Camp, Joseph L., 2002, Confusion: A Study in the Theory of Knowledge, Cambridge, MA: Harvard University Press.
  • Carnap, Rudolf, 1942, Introduction to Semantics, Cambridge, MA: Harvard University Press.
  • –––, 1947, Meaning and Necessity. A Study in Semantics and Modal Logic, Chicago: University of Chicago Press.
  • Church, Alonzo, 1943, “Review of Rudolf Carnap, Introduction to Semantics”, The Philosophical Review, 52: 298–304.
  • –––, 1956, Introduction to Mathematical Logic, Vol. I, Princeton: Princeton University Press.
  • Cignoli, Roberto, Itala D’Ottaviano, and Daniele Mundici, 2000, Algebraic Foundations of Many-valued Reasoning, Dordrecht: Kluwer Academic Publishers.
  • da Costa, Newton, Jean-Yves Béziau, and Otávio Bueno, 1996, “Malinowski and Suszko on many-valued logics: on the reduction of many-valuedness to two-valuedness”, Modern Logic, 6: 272–299.
  • Czelakowski, Janusz, 2001, Protoalgebraic Logics, Dordrecht: Kluwer Academic Publishers.
  • Davidson, Donald, 1967, “Truth and meaning”, Synthese, 17: 304–323.
  • –––, 1969, “True to the facts”, Journal of Philosophy, 66: 748–764.
  • Dubois, Didier, 2008, “On ignorance and contradiction considered as truth-values”, Logic Journal of the IGPL, 16: 195–216.
  • Dummett, Michael, 1959, “Truth”, in: Proceedings of the Aristotelian Society, 59: 141–162 (Reprinted in: Truth and Other Enigmas, Cambridge, MA: Harvard University Press, 1978, 1–24).
  • –––, 1981, Frege: Philosophy of Language, 2nd ed., London: Duckworth Publishers.
  • –––, 1991, Frege and Other Philosophers, Oxford: Oxford University Press.
  • –––, 2000, Elements of Intuitionism, 2nd ed., Oxford: Clarendon Press.
  • Dunn, J. Michael, 1976, “Intuitive semantics for first-degree entailment and ‘coupled trees’”, Philosophical Studies, 29: 149–168.
  • –––, 2000, “Partiality and its dual”, Studia Logica, 66: 5–40.
  • Dunn, J. Michael and Gary M. Hardegree, 2001, Algebraic Methods in Philosophical Logic (Oxford Logic Guides, Volume 41), Oxford: Science Publications.
  • Fine, Kit, 1975, “Vagueness, truth and logic”, Synthese, 30: 265–300.
  • Fitting, Melvin, 2006, “Bilattices are nice things”, in: T. Bolander, V. Hendricks, and S.A. Pedersen (eds.), Self-Reference, Stanford: CSLI Publications, 53–77.
  • Font, Josep Maria, 1997, “Belnap’s four-valued logic and De Morgan lattices”, Logic Journal of IGPL, 5: 1–29.
  • –––, 2009, “Taking degrees of truth seriously”, Studia Logica, 91: 383–406.
  • van Fraassen, Bas, 1966, “Singular terms, truth-value gaps, and free logic”, Journal of Philosophy, 63: 481–495.
  • Frankowski, Szymon, 2004, “Formalization of a plausible inference”, Bulletin of the Section of Logic, 33: 41–52.
  • Frege, Gottlob, 1884, Grundlagen der Arithmetik. Eine logisch-mathematische Untersuchung über den Begriff der Zahl, Hamburg: Felix Meiner Verlag, 1988.
  • –––, 1891, “Function und Begriff. Vortrag, gehalten in der Sitzung vom 9. Januar 1891 der Jenaischen Gesellschaft für Medicin und Naturwissenschaft”, Jena: H. Pohle (Reprinted in Frege 1986.)
  • –––, 1892, “Über Sinn und Bedeutung”, Zeitschrift für Philosophie und philosophische Kritik, 100: 25–50. (Reprinted in Frege 1986.)
  • –––, 1893/1903, Grundgesetze der Arithmetik, 2 volumes, Jena: Verlag Hermann Pohle; reprinted, Darmstadt: Wissenschaftliche Buchgesellschaft, 1962.
  • –––, 1918, “Der Gedanke”, Beiträge zur Philosophie des deutschen Idealismus 1: 58–77. (Reprinted in Frege 1967.)
  • –––, 1967, Kleine Schriften, Ignacio Angelli (ed.), Darmstadt: Wissenschaftliche Buchgesellschaft.
  • –––, 1976, Wissenschaftlicher Briefwechsel, G. Gabriel, H. Hermes, F. Kambartel, C. Thiel, and A. Veraart (eds.), Hamburg: Felix Meiner Verlag.
  • –––, 1986, Funktion, Begriff, Bedeutung. Fünf logische Studien, G. Patzig (ed.), Göttingen: Vandenhoeck & Ruprecht.
  • –––, 1990, “Einleitung in die Logik”, in: Frege, G., Schriften zur Logik und Sprachphilosophie, Hamburg: Felix Meiner Verlag, 74–91.
  • Gabriel, Gottfried, 1984, “Fregean connection: Bedeutung, value and truth-value”, The Philosophical Quarterly, 34: 372–376.
  • –––, 1986, “Frege als Neukantianer”, Kant-Studien, 77: 84–101.
  • –––, 2013, “Truth, value, and truth value. Frege’s theory of judgement and its historical background”, in: M. Textor (ed.), Judgement and Truth in Early Analytic Philosophy and Phenomenology, Basingstoke: Palgrave Macmillan, 36–51.
  • Galatos, Nikolaos, Peter Jipsen, Tomasz Kowalski and Hiroakira Ono, 2007, Residuated Lattices: An Algebraic Glimpse at Substructural Logics, Amsterdam: Elsevier.
  • Geach, Peter and Max Black (eds.), 1952, Translations from the Philosophical Writings of Gottlob Frege, New York: Philosophical Library.
  • Ginsberg, Matthew, 1988, “Multivalued logics: a uniform approach to reasoning in AI”, Computer Intelligence, 4: 256–316.
  • Gödel, Kurt, 1944, “Russell’s mathematical logic”, in: P.A. Schilpp (ed.), The Philosophy of Bertrand Russell, Evanston and Chicago: Northwestern University Press, 125–53.
  • Goldblatt, Robert, 2006, Topoi: The Categorial Analysis of Logic, Mineola, NY: Dover Publications.
  • Gottwald, Siegfried, 1989, Mehrwertige Logik. Eine Einführung in Theorie und Anwendungen, Berlin: Akademie-Verlag.
  • –––, 2001, A Treatise on Many-valued Logic, Baldock: Research Studies Press.
  • Goguen, Joseph, 1969, “The logic of inexact concepts”, Synthese, 19: 325–373.
  • Grossmann, Reinhardt, 1992, The Existence of the World, London: Routledge.
  • Haack, Susan, 1996, Deviant Logic, Fuzzy Logic. Beyond the Formalism, Chicago: University of Chicago Press.
  • Hájek, Petr, 1998, Metamathematics of Fuzzy Logic, Dordrecht: Kluwer Academic Publishers.
  • Jennings, Ray and Peter Schotch, 1984, “The preservation of coherence”, Studia Logica, 43: 89–106.
  • Kamide, Norihiro and Heinrich Wansing, 2009, “Sequent calculi for some trilattice logics”, Review of Symbolic Logic, 2: 374–395.
  • Karpenko, Alexander, 1983, “Factor semantics for \(n\)-valued logics”, Studia Logica, 42: 179–185.
  • Keefe, Rosanna, 2000, Theories of Vagueness, Cambridge: Cambridge University Press.
  • Kneale, William and Martha Kneale, 1962, The Development of Logic, Oxford: Oxford University Press.
  • Lewis, Clarence Irving, 1943, “The modes of meaning”, Philosophy and Phenomenological Research, 4: 236–249.
  • Lewis, David, 1982, “Logic for equivocators”, Noûs, 16: 431–441.
  • Lowe, Jonathan, 1995, “The metaphysics of abstract objects”, The Journal of Philosophy, 92: 509–524.
  • –––, 1997, “Objects and criteria of identity”, in: A Companion to the Philosophy of Language, R. Hale and C. Wright (eds.), Oxford: Basil Blackwell, 613–33.
  • Łukasiewicz, Jan, 1918, “Farewell lecture by professor Jan Łukasiewicz,” delivered in the Warsaw University Lecture Hall in March, 1918, in: (Łukasiewicz 1970), 87–88.
  • –––, 1920, “O logice trójwartościowej”, Ruch Filozoficzny, 5: 170–171. (English translation as “On three-valued logic” in: (Łukasiewicz 1970), 87–88.)
  • –––, 1921, “Logika dwuwartościowa”, Przegląd Filozoficzny, 13: 189–205. (English translation as “Two-valued logic” in: (Łukasiewicz 1970), 89–109.)
  • –––, 1970, Selected Works, L. Borkowski (ed.), Amsterdam: North-Holland, and Warsaw: PWN.
  • MacFarlane, John, 2002, “Review of Stephen Neale, Facing Facts”, Notre Dame Philosophical Reviews, [available online].
  • –––, 2008, “Truth in the garden of forking paths”, in: Relative Truth, Max Kölbel and Manuel García-Carpintero (eds.), Oxford: Oxford University Press, 81–102.
  • Malinowski, Grzegorz, 1990, “Q-consequence operation”, Reports on Mathematical Logic, 24: 49–59.
  • –––, 1993, Many-Valued Logics, Oxford: Clarendon Press.
  • –––, 1994, “Inferential many-valuedness”, in: Jan Wolenski (ed.), Philosophical Logic in Poland, Dordrecht: Kluwer Academic Publishers, 75–84.
  • Mehlberg, Henryk, 1958, The Reach of Science, Toronto: University of Toronto Press.
  • Meyer, Robert K., 1978, Why I Am Not a Relevantist, Research Paper No. 1, Canberra: Australian National University (Logic Group, Research School of the Social Sciences).
  • Neale, Stephen, 1995, “The Philosophical significance of Gödel’s slingshot”, Mind, 104: 761–825.
  • –––, 2001, Facing Facts, Oxford: Oxford University Press.
  • Odintsov, Sergei, 2009, “On axiomatizing Shramko-Wansing’s logic”, Studia Logica, 93: 407–428. doi:10.1007/s11225-009-9181-6
  • Odintsov, Sergei and Heinrich Wansing, 2015, “The logic of generalized truth values and the logic of bilattices”, Studia Logica, 103(1): 91–112. doi:10.1007/s11225-014-9546-3
  • Oppy, Graham, 1997, “The Philosophical Insignificance of Gödel’s Slingshot”, Mind, 106(421): 121–141.
  • Peirce, C.S., 1885, “On the Algebra of Logic: A Contribution to the Philosophy of Notation”, American Journal of Mathematics, 7(2): 180–202. doi:10.2307/2369451
  • Perry, John, 1996, “Evading the slingshot”, in: A. Clark, J. Ezquerro, and J. Larrazabal (eds.), Philosophy and Cognitive Science. Categories, Consciousness, and Reasoning, Dordrecht: Kluwer Academic Publishers, pp. 95–114.
  • Popper, Karl, 1972, Objective Knowledge: An Evolutionary Approach, Oxford: Oxford University Press.
  • Post, Emil, 1921, “Introduction to a general theory of elementary propositions”, American Journal of Mathematics, 43: 163–185.
  • Priest, Graham, 1979, “Logic of Paradox”, Journal of Philosophical Logic, 8: 219–241.
  • Quine, Willard Van Orman, 1953, “Reference and modality”, in W.v.O. Quine, From a Logical Point of View, Cambridge, MA: Harvard University Press, 139–159.
  • –––, 1960, Word and Object, Cambridge, MA: MIT Press.
  • –––, 1969, Ontological Relativity and Other Essays, New York: Columbia University Press.
  • Reck, Erich, 2007, “Frege on truth, judgment, and objectivity”, Grazer Philosophische Studien, 75: 149–173.
  • Rescher, Nicholas, 1969, Many-Valued Logic, New York: McGraw-Hill.
  • Routley, Richard, 1975, “Universal semantics?”, The Journal of Philosophical Logic, 4: 327–356.
  • Ruffino, Marco, 2003, “Wahrheit als Wert und als Gegenstand in der Logik Freges”, in: D. Greimann (ed.), Das Wahre und das Falsche. Studien zu Freges Auffassung von Wahrheit, Hildesheim: Georg Olms Verlag, 203–221.
  • Russell, Bertrand, 1918, 1919 [1992], “The philosophy of logical atomism”, Monist, 28: 495–527; 29: 32–63, 190–222, 345–380; reprinted in his Logic and Knowledge, London: Allen and Unwin, 1956. Page numbers from the Routledge edition, 1992, pp. 175–282.
  • Ryan, Mark and Martin Sadler, 1992, “Valuation systems and consequence relations”, in: S. Abramsky, D. Gabbay, and T. Maibaum (eds.), Handbook of Logic in Computer Science, Vol. 1., Oxford: Oxford University Press, 1–78.
  • Searle, John, 1995, “Truth: A Reconsideration of Strawson’s View”, in L.E. Hahn (ed.), The Philosophy of P.F. Strawson, Chicago: Open Court.
  • Shramko, Yaroslav, 2014, “The logical way of being true: Truth values and the ontological foundation of logic”, Logic and Logical Philosophy.
  • Shramko, Yaroslav, J. Michael Dunn, and Tatsutoshi Takenaka, 2001, “The trilattice of constructive truth values”, Journal of Logic and Computation, 11: 761–788.
  • Shramko, Yaroslav and Heinrich Wansing, 2005, “Some useful 16-valued logics: how a computer network should think”, Journal of Philosophical Logic, 34: 121–153.
  • –––, 2006, “Hypercontradictions, generalized truth values, and logics of truth and falsehood”, Journal of Logic, Language and Information, 15: 403–424.
  • –––, 2009, “The Slingshot-Argument and sentential identity”, Studia Logica, 91: 429–455.
  • –––, 2011, Truth and Falsehood: An Inquiry into Generalized Logical Values, Trends in Logic Vol. 36, Dordrecht, Heidelberg, London, New York: Springer.
  • Sluga, Hans, 2002, “Frege on the indefinability of truth”, in: E. Reck (ed.), From Frege to Wittgenstein: Perspectives on Early Analytic Philosophy, Oxford: Oxford University Press, 75–95.
  • Stoutland, Frederick, 2003, “What philosophers should know about truth and the slingshot”, in Matti Sintonen, Petri Ylikoski, and Kaarlo Miller (eds.), Realism in Action: Essays in the Philosophy of the Social Sciences, Dordrecht: Kluwer Academic Publishers, 3–32. doi:10.1007/978-94-007-1046-7_1
  • Suszko, Roman, 1977, “The Fregean axiom and Polish mathematical logic in the 1920s”, Studia Logica, 36: 373–380. doi:10.1007/BF02120672
  • Tarski, Alfred, 1930a, “Über einige fundamentale Begriffe der Metamathematik”, Comptes Rendus des Séances de la Société des Sciences et des Lettres de Varsovie XXIII, Classe III: 22–29.
  • –––, 1930b, “Fundamentale Begriffe der Methodologie der deduktiven Wissenschaften, I”, Monatshefte für Mathematik und Physik, 37: 361–404.
  • Taylor, Barry, 1985, Modes of Occurrence: Verbs, Adverbs and Events, Oxford: Blackwell.
  • Truth Values. Part I, 2009, Special issue of Studia logica, Yaroslav Shramko and Heinrich Wansing (eds.), Vol. 91, No. 3.
  • Truth Values. Part II, 2009, Special issue of Studia logica, Yaroslav Shramko and Heinrich Wansing (eds.), Vol. 92, No. 2.
  • Urquhart, Alasdair, 1986, “Many-valued logic”, in: D. Gabbay and F. Guenther (eds.), Handbook of Philosophical Logic, Vol. III., D. Reidel Publishing Co., Dordrecht, 71–116.
  • Wansing, Heinrich, 2010, “The power of Belnap. Sequent systems for SIXTEEN\(_3\)”, Journal of Philosophical Logic, 39(4): 369–393. doi:10.1007/s10992-010-9139-1
  • Wansing, Heinrich and Nuel Belnap, 2010, “Generalized truth values: A reply to Dubois”, Logic Journal of the IGPL, 18(6): 921–935. doi:10.1093/jigpal/jzp068
  • Wansing, Heinrich and Yaroslav Shramko, 2008, “Suszko’s Thesis, inferential many-valuedness, and the notion of a logical system”, Studia Logica, 88: 405–429, 89: 147.
  • Williamson, Timothy, 1994, Vagueness, London: Routledge.
  • Windelband, Wilhelm, 1915, Präludien: Aufsätze und Reden zur Philosophie und ihrer Geschichte, 5. Auflage, Bd. 1, Tübingen.
  • Wójcicki, Ryszard, 1970, “Some remarks on the consequence operation in sentential logics”, Fundamenta Mathematicae, 68: 269–279.
  • –––, 1988, Theory of Logical Calculi. Basic Theory of Consequence Operations, Dordrecht: Kluwer Academic Publishers.
  • Wrigley, Anthony, 2006, “Abstracting propositions”, Synthese, 151: 157–176.
  • Zadeh, Lotfi, 1965, “Fuzzy sets”, Information and Control, 8: 338–53.
  • –––, 1975, “Fuzzy logic and approximate reasoning”, Synthese, 30: 407–425.
  • Zaitsev, Dmitry and Yaroslav Shramko, 2013, “Bi-facial truth: a case for generalized truth values”, Studia Logica, 101: 1299–1318.
  • Zalta, Edward, 1983, Abstract Objects: An Introduction to Axiomatic Metaphysics, Dordrecht: D. Reidel Publishing Co.

Other Internet Resources

[Please contact the authors with suggestions.]

Copyright © 2017 by
Yaroslav Shramko <shramko@rocketmail.com>
Heinrich Wansing <Heinrich.Wansing@rub.de>


Agorakit, an open source groupware for citizens

Agorakit, a groupware for citizens

Agorakit is a web-based, open source groupware for citizens' initiatives.
By creating collaborative groups, people can discuss, organize events, store files and keep everyone updated when needed.
Agorakit is a forum, agenda, file manager, mapping tool and email notifier.

Create groups for your projects

You can create as many groups as you like; groups can be fully open or closed (membership approval required).

Manage a collaborative agenda

Each group has an agenda, and you can display a combined agenda of all groups. Each agenda provides an iCal feed ready to be imported elsewhere.

Geolocalize groups, people and events

Put everything on a nice map automatically. Map everyone, every group and every event as needed. Wake up the paranoid inside you.

Get an overview of your unread discussions and upcoming events

Every user gets a dashboard where they can see all unread discussions. No more mailing-list horror. Keep an archive of everything for newcomers. Avoid being spammed with "me too" replies.

Receive email notifications at the rate YOU specify for each group

aka "I don't want to be spammed for each comment in each group".
Everyone can decide how often to receive notifications, for each group. Choose your level of involvement per group. Also known as "do not disturb me more than once a week".

Manage your files and links

Each group has a file repository where you can store nice pictures of cute kittens, meeting summaries, links to shared documents, etc.

In use

Agorakit has been used successfully since 2015 by several citizen initiatives, such as Tout autre chose, Hart boven hard and others.

Licence

Agorakit is released under the GPL licence. It is open source and can be freely extended and used by others.

Status

This software is in daily use; the biggest install has more than a thousand registered users.
While still in development, our hopes are high that it will be useful to other initiatives. Join the team and help us fine-tune the beast!

Contact

Please drop a line to info (at) agorakit.org to keep in touch.

Made with love in Brussels

DR70 – A dedicated machine for astrologers

This week, a real oddity from 1978, the DR70 - a dedicated machine for astrologers, and the very model President Reagan used, indirectly, to choose propitious times for important decisions and meetings. It was supplied in a hard case for lugging, with a printer in a second case. "Included are planetary routines good for several thousand years and a wide variety of house systems and current pattern systems. It can even compare charts. If all you want is personal charts it is great—especially since it is portable."
...
Battery-powered and weighing just 8 pounds, according to this advert:
http://www.deathwishindustries.com/index.php?op=home/What%20Is%20Best/Astology%20Computers%20and%20Ads

Large advert image:
https://s-media-cache-ak0.pinimg.com/originals/5d/98/36/5d98361b48fa1f63539b635307fd4efc.jpg
"It is quite a marriage of science and creativity"

We believe there's a 6502 inside, because of this press article:
https://books.google.co.uk/books?id=05wAGZQlo9QC&pg=PA603
(This page suggests the DR-70 is Z80-based, but we say [citation needed]:
http://www.mwigan.com/mrw/2_DigiComp_DR-70_Astrological_System.html
)

Digicomp made a successor, the Astrion System 80 - which might well represent a switch to the Z80 micro, or just a switch to the 1980s. (This is the same Digicomp which made two mechanical "computer" models, or toys.)

Ref:
Duncan Campbell, the investigative journalist, said: "Ronald Reagan has been secretly programmed by a computer for the past eight years".

E-commerce will evolve next month as Amazon loses the 1-Click patent


Next month, e-commerce will change forever thanks to Amazon. September 12 marks 20 years since Amazon filed for their 1-Click patent, which means the patent will expire and the technology behind it will be free for any e-commerce site to use. Starting next month, more and more sites will be offering a one-click checkout experience. Most major sites have already started development, with plans to launch soon after the patent expires.

History behind the patent

Amazon applied for the 1-Click patent in September 1997; the actual patent was granted in 1999. The idea behind the patent is that when you store a user’s credit card and address, you only need a single click to order a product. For the last 20 years Amazon has kept a tight hold on this technology, licensing it to only one company, Apple. No one knows what Apple paid for the license, but some sources have assessed the value of the patent at 2.4 billion dollars. Over the last 20 years Amazon has defended the validity of the patent in several cases, even having to revise the patent at one point. But now the wait is almost over, and this technology is about to make it into the open market.

Not a one page checkout

The one-click checkout is not to be confused with a one-page checkout. With a one-page checkout, all of the account, checkout, and payment information is on a single page. With a one-click checkout, a user is sent straight from the product (or category) page to the order confirmation page. No clicking through steps or accepting charges: one click from a product page and the order is placed.

Merchants listen up

If you are a merchant, this can be a huge opportunity for you. With the holiday season right around the corner, who does not want to offer their customers a quicker, easier way to check out? You can reduce the friction of a whole checkout process down to a single button press from a product page. Look at the image below: pressing the Buy Now button takes a user directly to an order confirmation page and charges their payment method.

[Image: the thirty bees "Buy now" button on a product page]

Not all credit card processors have the technology to support a one-click checkout system. Some that we know have the technology are:

  • Stripe
  • Authorize.net
  • First Data
  • PayPal Pro
  • Skybank

These are the ones we have worked with in the past that we know use a card vault; others likely support it too, so if you know another processor that uses a card vault, let us know. The card vault is the key to frictionless payment: customers store their card once so it can be charged later, and that is what makes the one-click checkout process possible.
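
To make the card-vault idea concrete, here is a minimal sketch using Stripe's Node library (one of the processors listed above). It assumes a customer whose card was already stored in the vault during an earlier purchase; the key, customer ID and amount are illustrative, and a real integration would add error handling and order bookkeeping:

```typescript
import Stripe from "stripe";

// Illustrative test key; the customer below already has a card in the vault.
const stripe = new Stripe("sk_test_...");

async function oneClickCharge(customerId: string, amountCents: number) {
  // The entire "one click": a single charge against the saved card.
  // No card form, no redirect, straight to the order confirmation page.
  return stripe.charges.create({
    amount: amountCents,   // e.g. 1999 for $19.99
    currency: "usd",
    customer: customerId,  // points at the vaulted card
    description: "One-click order",
  });
}

oneClickCharge("cus_XXXX", 1999).then(charge => console.log(charge.id));
```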

How serious is this?

It is serious enough that the World Wide Web Consortium (W3C) has started writing a draft proposal for one-click buying methods. They have recruited some of the top companies in the industry, like Google, Apple, and Facebook, to help come up with a set of specifications. Google has already implemented some of the standards in its Chrome and Chrome Mobile browsers, with more likely to come in the future. The proposal describes ways of storing card and address data in the browser and letting the browser communicate directly with your payment gateway to send the card or bank information. Sounds pretty useful, doesn’t it?
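
For the browser side, here is a rough sketch of what the proposed API looks like in practice, using the Payment Request interface with the basic-card method from the early drafts (labels and amounts are illustrative):

```typescript
// The browser collects the stored card and address through its own UI
// and hands the site a payment response in one step.
const request = new PaymentRequest(
  [{ supportedMethods: "basic-card" }], // payment methods the site accepts
  {
    total: {
      label: "One-click order",
      amount: { currency: "USD", value: "19.99" },
    },
  }
);

async function payWithBrowser(): Promise<void> {
  const response = await request.show(); // browser shows its payment sheet
  // Send response.details to the payment gateway here, then close the sheet.
  await response.complete("success");
}
```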

What are we doing?

We realize this technology is important to our merchants, and it is something that will change e-commerce in a major way over the next year. We have already started on a framework to extend the thirty bees 1.0.x branch to allow for single-click buying. We are developing a module that payment modules can hook into, so that developers can extend their payment modules to work with single-click buying. We are going to develop several of these modules in house, such as the Stripe module and a couple of others. We are also going to release a couple of tutorials on how to hook into the single-click checkout module, so that developers will be able to easily update their modules to support the new system.

HelenOS: portable microkernel-based multiserver operating system


[Screenshot] HelenOS features in a single image: the HelenOS compositing GUI, networking, filesystems, sound subsystem and a multithreaded, multiprocessor 64-bit kernel in action. The colorful Prague picture used in the screenshot is courtesy of Miroslav Petrasko.

HelenOS is a portable microkernel-based multiserver operating system designed and implemented from scratch. It decomposes key operating system functionality such as file systems, networking, device drivers and graphical user interface into a collection of fine-grained user space components that interact with each other via message passing. A failure or crash of one component does not directly harm others. HelenOS is therefore flexible, modular, extensible, fault tolerant and easy to understand.
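
As a toy illustration of the multiserver idea (deliberately not HelenOS's actual C API, just the general pattern): when the only way into a service is a message across an isolation boundary, a crash inside that service surfaces as a failed reply instead of taking the rest of the system down.

```typescript
// Hypothetical sketch: each OS service is reachable only via messages.
type Message = { method: string; args: unknown[] };
type Reply = { ok: true; value: unknown } | { ok: false; error: string };

class Server {
  constructor(private handlers: Record<string, (...a: unknown[]) => unknown>) {}

  // The isolation boundary: failures stay inside the component.
  call(msg: Message): Reply {
    try {
      return { ok: true, value: this.handlers[msg.method](...msg.args) };
    } catch (e) {
      return { ok: false, error: String(e) };
    }
  }
}

const fsServer = new Server({
  read: (path) => { throw new Error(`driver crashed reading ${path}`); },
});

// The client, and everything else, keeps running.
console.log(fsServer.call({ method: "read", args: ["/etc/motd"] }));
```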

HelenOS does not aim to be a clone of any existing operating system and trades compatibility with legacy APIs for a cleaner design. Most HelenOS components have been made to order specifically for HelenOS, so that its essential parts can stay free of adaptation layers, glue code, franken-components and the maintenance burden incurred by them.

HelenOS runs on seven different processor architectures and machines ranging from embedded ARM devices and single-board computers through multicore 32-bit and 64-bit desktop PCs to 64-bit Itanium and SPARC rack-mount servers.

HelenOS is open source, free software. Its source code is available under the BSD license. Some third-party components are licensed under GPL.

The latest release, HelenOS 0.7.0 (Parabolic Potassium), is out. Check out the downloads for sources and binaries, and the release notes for more information.
