
Linux 4.13 support for TLS record layer in kernel space

Overview
========

Transport Layer Security (TLS) is an Upper Layer Protocol (ULP) that runs over
TCP. TLS provides end-to-end data integrity and confidentiality.

User interface
==============

Creating a TLS connection
-------------------------

First create a new TCP socket and set the TLS ULP.

  sock = socket(AF_INET, SOCK_STREAM, 0);
  setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));

Setting the TLS ULP allows us to set/get TLS socket options. Currently
only the symmetric encryption is handled in the kernel. After the TLS
handshake is complete, we have all the parameters required to move the
data-path to the kernel. There is a separate socket option for moving
the transmit and the receive into the kernel.

  /* From linux/tls.h */
  struct tls_crypto_info {
          unsigned short version;
          unsigned short cipher_type;
  };

  struct tls12_crypto_info_aes_gcm_128 {
          struct tls_crypto_info info;
          unsigned char iv[TLS_CIPHER_AES_GCM_128_IV_SIZE];
          unsigned char key[TLS_CIPHER_AES_GCM_128_KEY_SIZE];
          unsigned char salt[TLS_CIPHER_AES_GCM_128_SALT_SIZE];
          unsigned char rec_seq[TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE];
  };

  struct tls12_crypto_info_aes_gcm_128 crypto_info;

  crypto_info.info.version = TLS_1_2_VERSION;
  crypto_info.info.cipher_type = TLS_CIPHER_AES_GCM_128;
  memcpy(crypto_info.iv, iv_write, TLS_CIPHER_AES_GCM_128_IV_SIZE);
  memcpy(crypto_info.rec_seq, seq_number_write,
         TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);
  memcpy(crypto_info.key, cipher_key_write, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
  memcpy(crypto_info.salt, implicit_iv_write, TLS_CIPHER_AES_GCM_128_SALT_SIZE);

  setsockopt(sock, SOL_TLS, TLS_TX, &crypto_info, sizeof(crypto_info));

Sending TLS application data
----------------------------

After setting the TLS_TX socket option, all application data sent over this
socket is encrypted using TLS and the parameters provided in the socket
option. For example, we can send an encrypted hello world record as follows:

  const char *msg = "hello world\n";
  send(sock, msg, strlen(msg), 0);

send() data is directly encrypted from the userspace buffer provided
to the encrypted kernel send buffer if possible.

The sendfile system call will send the file's data over TLS records of
maximum length (2^14).

  file = open(filename, O_RDONLY);
  fstat(file, &stat);
  sendfile(sock, file, &offset, stat.st_size);

TLS records are created and sent after each send() call, unless
MSG_MORE is passed. MSG_MORE will delay creation of a record until
MSG_MORE is not passed, or the maximum record size is reached.

The kernel will need to allocate a buffer for the encrypted data.
This buffer is allocated at the time send() is called, such that
either the entire send() call will return -ENOMEM (or block waiting
for memory), or the encryption will always succeed. If send() returns
-ENOMEM and some data was left on the socket buffer from a previous
call using MSG_MORE, the MSG_MORE data is left on the socket buffer.

Send TLS control messages
-------------------------

Other than application data, TLS has control messages such as alert
messages (record type 21) and handshake messages (record type 22), etc.
These messages can be sent over the socket by providing the TLS record type
via a CMSG. For example, the following function sends @data of @length bytes
using a record of type @record_type.

  /* send TLS control message using record_type */
  static int ktls_send_ctrl_message(int sock, unsigned char record_type,
                                    void *data, size_t length)
  {
          struct msghdr msg = {0};
          int cmsg_len = sizeof(record_type);
          struct cmsghdr *cmsg;
          char buf[CMSG_SPACE(cmsg_len)];
          struct iovec msg_iov;   /* Vector of data to send/receive into. */

          msg.msg_control = buf;
          msg.msg_controllen = sizeof(buf);
          cmsg = CMSG_FIRSTHDR(&msg);
          cmsg->cmsg_level = SOL_TLS;
          cmsg->cmsg_type = TLS_SET_RECORD_TYPE;
          cmsg->cmsg_len = CMSG_LEN(cmsg_len);
          *CMSG_DATA(cmsg) = record_type;
          msg.msg_controllen = cmsg->cmsg_len;

          msg_iov.iov_base = data;
          msg_iov.iov_len = length;
          msg.msg_iov = &msg_iov;
          msg.msg_iovlen = 1;

          return sendmsg(sock, &msg, 0);
  }

Control message data should be provided unencrypted, and will be
encrypted by the kernel.

Integrating into a userspace TLS library
----------------------------------------

At a high level, the kernel TLS ULP is a replacement for the record
layer of a userspace TLS library.

A patchset to OpenSSL to use ktls as the record layer is here:

  https://github.com/Mellanox/tls-openssl

An example of calling send directly after a handshake using
gnutls. Since it doesn't implement a full record layer, control
messages are not supported:

  https://github.com/Mellanox/tls-af_ktls_tool

Introduction to HTML Components


HTML Components (HTC), introduced in Internet Explorer 5.5, offer a powerful new way to author interactive Web pages. With standard DHTML, JScript, and CSS knowledge, you can define custom behaviors on elements via the “behavior” property. Let’s create a behavior for a simple kind of “image roll-over” effect. For instance, save the following as “roll.htc”:

<PUBLIC:ATTACH EVENT="onmouseover" ONEVENT="rollon()" />
<PUBLIC:ATTACH EVENT="onmouseout" ONEVENT="rollout()" />
<SCRIPT LANGUAGE="JScript">
tmpsrc = element.src;
function rollon() {
    element.src = tmpsrc + "_rollon.gif"
}
function rollout() {
    element.src = tmpsrc + ".gif";
}
rollout();
</SCRIPT>

This creates a simple HTML Component Behavior that swaps the image’s source when the user rolls over and off the image. You can “attach” such a behavior to any element using the CSS property “behavior”.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<BODY>
<IMG STYLE="behavior: url(roll.htc)" SRC="logo">
</BODY>
</HTML>

The benefit of HTML Components is that we can apply them to any element through simple CSS selectors. For instance:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<STYLE>
.RollImg {
  behavior: url(roll.htc);
}
</STYLE>
</HEAD>
<BODY>
<IMG CLASS="RollImg" SRC="logo">
<IMG CLASS="RollImg" SRC="home">
<IMG CLASS="RollImg" SRC="about">
<IMG CLASS="RollImg" SRC="contact">
</BODY>
</HTML>

This allows us to reuse them without having to copy/paste code. Wonderful! This is known as an Attached Behavior, since it is directly attached to an element. Once you’ve mastered these basic Attached Behaviors, we can move on to something a bit fancier: Element Behaviors. With Element Behaviors, you can create custom element types with custom programmable interfaces, allowing us to build a library of custom components, reusable between pages and projects. Like before, an Element Behavior consists of an HTML Component, but now we have to specify our component in <PUBLIC:COMPONENT>.

<PUBLIC:COMPONENT TAGNAME="ROLLIMG">
<PUBLIC:ATTACH EVENT="onmouseover" ONEVENT="rollon()" />
<PUBLIC:ATTACH EVENT="onmouseout" ONEVENT="rollout()" />
<PUBLIC:PROPERTY NAME="basesrc" />
</PUBLIC:COMPONENT>
<img id="imgtag" />
<SCRIPT>
img = document.all['imgtag'];
element.appendChild(img);
function rollon() {
    img.src = element.basesrc + "_rollon.gif";
}
function rollout() {
    img.src = element.basesrc + ".gif";
}
rollout();
</SCRIPT>

I’ll get to the implementation of ROLLIMG in a bit, but first, to use a custom element, we use the special <?IMPORT> tag, which imports a custom element into an XML namespace.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML XMLNS:CUSTOM>
<HEAD>
<?IMPORT NAMESPACE="CUSTOM" IMPLEMENTATION="RollImgComponent.htc">
</HEAD>
<BODY>
<CUSTOM:ROLLIMG BASESRC="logo">
<CUSTOM:ROLLIMG BASESRC="home">
<CUSTOM:ROLLIMG BASESRC="about">
<CUSTOM:ROLLIMG BASESRC="contact">
</BODY>
</HTML>

The ROLLIMG fully encapsulates the behavior, freeing the user from having to know what kind of element to use the Attached Behavior on! The implementation of the Custom Element Behavior might seem a bit complex, but it’s quite simple. When Internet Explorer parses a Custom Element, it synchronously creates a new HTML Component from this “template” and binds it to the instance. We also have two “magic global variables” here: “element” and “document”. Each instance of this HTML Component gets its own document, the children of which are reflowed to go inside the custom element. “element” refers to the custom element tag in the outer document which embeds the custom element. Additionally, since each custom element has its own document root, it also has its own script context and its own set of global variables.

We can also set up properties as an API for the document author to use when they use our custom element.

Here, we use an img tag as a “template” of sorts, adding it to our custom element’s document root.

After IE puts it together, the combined DOM sort of looks like this:

<CUSTOM:ROLLIMG BASESRC="logo">
<IMG ID="imgtag" SRC="logo.gif">
</CUSTOM:ROLLIMG>

<CUSTOM:ROLLIMG BASESRC="home">
<IMG ID="imgtag" SRC="home.gif">
</CUSTOM:ROLLIMG>

...

Unfortunately, this has one final flaw. Due to the natural cascading nature of CSS Stylesheets, such “implementation details” will leak through. For instance, if someone adds a <STYLE>IMG { background-color: red; }</STYLE>, this will affect our content. While this can sometimes be a good thing if you want to develop a styleable component, it often results in undesirable effects. Thankfully, Internet Explorer 5.5 adds a new feature, named “Viewlink”, which encapsulates not just the implementation of your HTML Component, but the document as well. “Viewlink” differs from a regular component in that instead of adding things as children of our element, we instead can provide a document fragment which the browser will “attach” to our custom element in a private, encapsulated manner. The simplest way to do this is to just use our HTML Component’s document root.

<PUBLIC:COMPONENT TAGNAME="ROLLIMG">
<PUBLIC:ATTACH EVENT="onmouseover" ONEVENT="rollon()" />
<PUBLIC:ATTACH EVENT="onmouseout" ONEVENT="rollout()" />
<PUBLIC:PROPERTY NAME="basesrc" />
</PUBLIC:COMPONENT>
<img id="imgtag" />
<SCRIPT>
defaults.viewLink = document;
var img = document.all['imgtag'];
function rollon() {
    img.src = element.basesrc + "_rollon.gif";
}
function rollout() {
    img.src = element.basesrc + ".gif";
}
rollout();
</SCRIPT>

Using the “defaults.viewLink” property, we can set our HTML Component’s private document fragment as our viewLink, rendering the children but without adding them as children of our element. Perfect encapsulation.

*cough* OK, obviously it’s 2017 and Internet Explorer 5.5 isn’t relevant anymore. But if you’re a Web developer, this should have given you some pause for thought. The modern Web Components pillars: Templates, Custom Elements, Shadow DOM, and Imports, were all features originally in IE5, released in 1999.

Now, it “looks outdated”: uppercase instead of lowercase tags, the “on”s everywhere in the event names, but that’s really just a slight change of accent. Shake off the initial feeling that it’s cruft, and the actual meat is all there, and it’s mostly the same. Sure, there’s magic XML tags instead of JavaScript APIs, and magic globals instead of callback functions, but that’s nothing more than a slight change of dialect. IE says tomato, Chrome says tomato.

Now, it’s likely you’ve never heard of HTML Components at all. And, perhaps shockingly, a quick search at the time of this article’s publishing suggests nobody else has, either.

Why did IE5’s HTML Components never quite catch on? Despite what you might think, it’s not because of a lack of open standards. As a reminder, a decent amount of today’s web API started from Internet Explorer’s DHTML initiative: contenteditable, XMLHttpRequest, and innerHTML were all carefully, meticulously reverse-engineered from Internet Explorer. Internet Explorer was the dominant platform for websites; practically nobody designed or even tested websites for Opera or Netscape. I can remember designing websites that used IE-specific features like DirectX filters to flip images horizontally, or the VML

And it’s not because of a lack of evangelism or documentation. Microsoft was trying to push DHTML and HTML Components hard. Despite the content being nearly 20 years old at this point, documentation on HTML Components and viewLink is surprisingly well-kept, with diagrams and images, sample links and all, archived without any broken links. Microsoft’s librarians deserve fantastic credit on that one.

For any browser or web developer, please go read the DHTML Dude columns. Take a look at the breadth of APIs available, and go look at some example components on display. Take a look at the persistence API, or dynamic expression properties. Besides the much-hyped-but-dated-in-retrospect XML data binding tech, it all seems relatively modern. Web fonts? IE4. CSS gradients? IE5.5. Vector graphics? VML (which, in my opinion, is a more sensible standard than SVG, but that’s for another day.)

So, again I ask: why did this never catch on? I’m sure there are a variety of complex factors, probably none of which are technical. Despite our lists of “engineering best practices” and “blub paradoxes”, computer engineering has been, and always will be, dominated by fads, marketing, and corporate politics.

The more important question is a bigger one: Why am I the first one to point this out? Searching for “HTML Components” and “Viewlink” leads to very little discussion about them online, past roughly 2004. Microsoft surely must have been involved in the Web Components Working Group. Was this discussed at all?

Pop culture and fads pop in and fade out over the years. Just a few years ago, web communities were excited about Object.observe before React proved it unnecessary. Before node.js’s take on “isomorphic JavaScript” was solidified, heck, even before v8cgi / teajs, an early JavaScript-as-a-Server project, another bizarre web framework known as Aptana Jaxer was doing it in a much more direct way.

History is important. It’s easier to point and laugh and ignore outdated technology like Internet Explorer. But tech, so far, has an uncanny ability to keep repeating itself. How can we do a better job paying attention to things that happened before us, rather than assuming it was all bad?

HomeBrew Analytics – top 1000 packages installed over last year

  #   Formula                       Events      %
  1   node                         736,243   4.31%
  2   git                          354,671   2.08%
  3   wget                         344,522   2.02%
  4   yarn                         315,442   1.85%
  5   python3                      261,815   1.53%
  6   python                       256,730   1.50%
  7   mysql                        247,474   1.45%
  8   coreutils                    241,221   1.41%
  9   openssl                      205,470   1.20%
 10   postgresql                   198,339   1.16%
 11   imagemagick                  191,070   1.12%
 12   mongodb                      188,217   1.10%
 13   pkg-config                   187,758   1.10%
 14   chromedriver                 180,794   1.06%
 15   awscli                       179,634   1.05%
 16   automake                     174,607   1.02%
 17   vim                          172,559   1.01%
 18   youtube-dl                   152,210   0.89%
 19   libtool                      150,147   0.88%
 20   cmake                        145,719   0.85%
 21   readline                     145,140   0.85%
 22   go                           128,805   0.75%
 23   maven                        127,542   0.75%
 24   libyaml                      127,403   0.75%
 25   autoconf                     127,066   0.74%
 26   watchman                     124,751   0.73%
 27   redis                        124,227   0.73%
 28   ffmpeg                       121,220   0.71%
 29   heroku                       114,854   0.67%
 30   rbenv                        114,738   0.67%
 31   gradle                       113,412   0.66%
 32   tmux                         112,599   0.66%
 33   ruby                         109,598   0.64%
 34   openssl@1.1                  106,766   0.63%
 35   libksba                      105,348   0.62%
 36   zsh                           99,093   0.58%
 37   pidof                         98,993   0.58%
 38   nginx                         90,287   0.53%
 39   selenium-server-standalone    82,648   0.48%
 40   carthage                      82,303   0.48%
 41   tree                          81,907   0.48%
 42   jq                            79,295   0.46%
 43   docker                        76,302   0.45%
 44   nmap                          76,259   0.45%
 45   htop                          74,368   0.44%
 46   nvm                           72,383   0.42%
 47   pyenv                         71,035   0.42%
 48   gcc                           70,997   0.42%
 49   gnupg                         68,471   0.40%
 50   homebrew/php/php71            67,969   0.40%

[Table truncated: the original list continues through #839 (tmux), but the remaining rows are garbled in this extract.]
--with-java2,2810.01%#771gist2,2790.01%#772py2cairo2,2780.01%#773node@0.122,2720.01%#774pypy32,2700.01%#775homebrew/php/php70-pdo-pgsql2,2660.01%#776codekitchen/dinghy/dinghy2,2610.01%#777micro2,2560.01%#778calc2,2510.01%#779gpac2,2480.01%#780percona-server2,2270.01%#781ammonite-repl2,2260.01%#782pidcat2,2220.01%#783qt@5.72,2210.01%#784pre-commit2,2190.01%#785procmail2,2170.01%#786fdk-aac2,2110.01%#787pandoc-citeproc2,2100.01%#788shopify/shopify/themekit2,2050.01%#789makedepend2,1980.01%#790berkeley-db2,1860.01%#791qpdf2,1760.01%#792aws-sdk-cpp2,1700.01%#793aws-shell2,1700.01%#794net-snmp2,1640.01%#795kops --HEAD2,1610.01%#796dark-mode2,1540.01%#797pcre22,1540.01%#798ettercap2,1530.01%#799docker-machine-completion2,1480.01%#800px4/px4/gcc-arm-none-eabi2,1440.01%#801cartr/qt4/qt2,1380.01%#802pidgin2,1380.01%#803ruby@2.32,1290.01%#804squid2,1120.01%#805mit-scheme2,1100.01%#806task2,1070.01%#807mpv --with-libcaca2,1020.01%#808saltstack2,0740.01%#809elasticsearch@1.72,0650.01%#810ngrep2,0620.01%#811ipmitool2,0610.01%#812osrf/simulation/gazebo82,0610.01%#813pdf2htmlex2,0550.01%#814giter82,0490.01%#815m-cli2,0380.01%#816shtool2,0370.01%#817theora2,0360.01%#818fltk2,0290.01%#819radare22,0270.01%#820tcpflow2,0160.01%#821source-highlight2,0140.01%#822imagemagick --with-x112,0130.01%#823gdrive2,0100.01%#824m42,0080.01%#825rbenv-default-gems2,0080.01%#826texinfo2,0040.01%#827josegonzalez/php/composer2,0020.01%#828pixman2,0000.01%#829percona-toolkit1,9980.01%#830offlineimap1,9960.01%#831cntlm1,9950.01%#832asciidoc1,9910.01%#833hyper1,9910.01%#834libcouchbase1,9910.01%#835macvim --with-override-system-vim --with-lua1,9910.01%#836git-credential-manager1,9900.01%#837re2c1,9830.01%#838lnav1,9800.01%#839tmux 
--HEAD1,9790.01%#840lua@5.11,9770.01%#841guile1,9760.01%#842zbar1,9720.01%#843cmus1,9710.01%#844berkeley-db@41,9700.01%#845git-standup1,9640.01%#846jruby1,9630.01%#847arping1,9560.01%#848imagesnap1,9560.01%#849homebrew/php/brew-php-switcher1,9520.01%#850mashape/kong/kong1,9500.01%#851grunt1,9490.01%#852libvirt1,9480.01%#853stow1,9460.01%#854mysql-utilities1,9440.01%#855godep1,9430.01%#856hbase1,9410.01%#857zopfli1,9390.01%#858expect1,9380.01%#859hunspell1,9380.01%#860homebrew/php/php-version1,9340.01%#861filebeat1,9320.01%#862v8@3.151,9320.01%#863homebrew/php/xdebug-osx1,9210.01%#864libpqxx1,9210.01%#865tomcat@8.01,9180.01%#866nikto1,9150.01%#867cloog1,9140.01%#868googler1,9110.01%#869universal-ctags/universal-ctags/universal-ctags --HEAD1,9080.01%#870libmicrohttpd1,9040.01%#871autoconf-archive1,9010.01%#872homebrew/php/php541,8990.01%#873jeffreywildman/virt-manager/virt-viewer1,8970.01%#874blackfireio/blackfire/blackfire-agent1,8950.01%#875libtasn11,8850.01%#876expat1,8840.01%#877vegeta1,8810.01%#878homebrew/nginx/nginx-full1,8800.01%#879lastpass-cli1,8770.01%#880homebrew/dupes/make1,8740.01%#881instantclienttap/instantclient/instantclient-basic1,8720.01%#882mpd1,8680.01%#883protobuf@2.61,8670.01%#884rocksdb1,8640.01%#885global1,8510.01%#886zsh --without-etcdir1,8490.01%#887homebrew/php/php56-phalcon1,8470.01%#888qrencode1,8440.01%#889p11-kit1,8430.01%#890arangodb1,8380.01%#891iterate-ch/cyberduck/duck1,8380.01%#892ccat1,8290.01%#893wine --without-x111,8240.01%#894sassc1,8230.01%#895homebrew/dupes/nano1,8200.01%#896docker-machine-nfs1,8130.01%#897freerdp1,8120.01%#898cloudfoundry/tap/bosh-cli1,8070.01%#899bradp/vv/vv1,7970.01%#900homebrew/science/pcl1,7960.01%#901lcov1,7930.01%#902lzip1,7930.01%#903pyqt --with-python1,7850.01%#904testdisk1,7820.01%#905homebrew/apache/ab1,7710.01%#906homebrew/dupes/ncurses1,7680.01%#907gnupg@1.41,7640.01%#908libpcap1,7630.01%#909multitail1,7630.01%#910duti1,7600.01%#911libtermkey1,7570.01%#912ffmpeg --with-libvorbis 
--with-libvpx1,7550.01%#913git-review1,7520.01%#914grace1,7520.01%#915homebrew/php/php531,7510.01%#916jsoncpp1,7510.01%#917autogen1,7490.01%#918sloccount1,7480.01%#919bind1,7470.01%#920qt --with-qtwebkit1,7460.01%#921sanemat/font/ricty1,7430.01%#922foremost1,7420.01%#923texi2html1,7420.01%#924gradle@2.141,7360.01%#925doctl1,7340.01%#926gst-plugins-good1,7320.01%#927homebrew/php/php56-pdo-pgsql1,7190.01%#928nuget1,7170.01%#929prometheus1,7170.01%#930gauge1,7150.01%#931cartr/qt4/qt-legacy-formula1,7130.01%#932enca1,7120.01%#933homebrew/science/root61,7100.01%#934qt5 --with-qtwebkit1,7040.01%#935gst-plugins-bad1,7010.01%#936vim --with-lua --with-luajit1,6950.01%#937d12frosted/emacs-plus/emacs-plus --HEAD1,6940.01%#938thoughtbot/formulae/parity1,6940.01%#939homebrew/php/php71 --with-pear1,6910.01%#940purescript1,6810.01%#941lzo1,6800.01%#942ncftp1,6770.01%#943atk1,6760.01%#944mkvtoolnix --with-qt51,6760.01%#945freeglut1,6730.01%#946git-cola1,6690.01%#947homebrew/php/php71-yaml1,6690.01%#948ethereum/ethereum/cpp-ethereum --devel --successful1,6660.01%#949ta-lib1,6660.01%#950libtensorflow1,6640.01%#951assimp1,6620.01%#952grip1,6600.01%#953minio1,6570.01%#954ffmpeg --with-fdk-aac --with-libass --with-tools --with-x265 --with-freetype --with-libvorbis --with-libvpx1,6550.01%#955libxml2 --with-python1,6530.01%#956flake81,6510.01%#957shopify/shopify/yarn1,6490.01%#958sysdig1,6480.01%#959emscripten1,6470.01%#960rdesktop1,6420.01%#961planck1,6410.01%#962dfu-programmer1,6400.01%#963pygobject31,6380.01%#964ipfs1,6360.01%#965openjpeg1,6360.01%#966open-ocd1,6350.01%#967platformio1,6350.01%#968josegonzalez/php/php561,6290.01%#969phrase/brewed/phraseapp1,6240.01%#970gnu-indent1,6220.01%#971homebrew/dupes/tcl-tk1,6210.01%#972sfml1,6120.01%#973rhino1,6110.01%#974dmd1,6100.01%#975minimal-racket1,6070.01%#976sysbench1,6050.01%#977git --with-brewed-openssl 
--with-brewed-curl1,6020.01%#978git-quick-stats1,6020.01%#979go@1.71,5980.01%#980gcc@61,5940.01%#981ios-sim1,5890.01%#982git-town1,5850.01%#983uber/alt/cerberus1,5850.01%#984homebrew/boneyard/pyqt1,5840.01%#985rpm2cpio1,5840.01%#986gibo1,5790.01%#987rbenv-bundler1,5790.01%#988homebrew/science/bedtools1,5720.01%#989python --with-tcl-tk1,5720.01%#990cgal1,5700.01%#991boost-python --without-python --with-python31,5680.01%#992ghq1,5680.01%#993homebrew/science/glpk1,5670.01%#994libconfig1,5670.01%#995ed --with-default-names1,5570.01%#996openssl --universal1,5540.01%#997sphinx --with-mysql1,5540.01%#998mvnvm1,5530.01%#999amazon-ecs-cli1,5500.01%#1000homebrew/versions/v8-3151,5500.01%

Watsi launches universal health coverage, funded by YC Research


We’re excited to announce that we’re expanding Watsi to provide health coverage! Together, Watsi Crowdfunding and Watsi Coverage will help create a world where everyone has access to care — whether that’s by raising money through crowdfunding or enrolling in health coverage.

Five years ago, in a Costa Rican town called Watsi, one of our founders met a woman on a hot, crowded bus who was asking passengers for donations to fund her son's healthcare.

Inspired by that woman, we created Watsi to help people access healthcare. We started by launching our crowdfunding platform, which has processed more than $9M from 23,154 donors and funded healthcare for 13,772 patients.

Ten-year-old Witcheldo before surgery.

One of those patients is Witcheldo from Haiti. Watsi donors contributed $1,500 to the cost of open-heart surgery to treat a life-threatening cardiac condition—a condition he developed due to a case of untreated strep throat.

If Witcheldo had access to health coverage, his strep throat could have been treated with an antibiotic for a few dollars. Cases like his inspired us to explore different models of providing health coverage, including community-based health insurance.

Witcheldo after life-saving heart surgery.

In the countries where we work, universal health coverage is a top priority, but it’s prohibitively expensive because up to 40% of healthcare funding is lost to inefficiencies. These inefficiencies stem from administering complex systems with pen and paper, which drives up costs and results in errors, fraud, and a lack of insight into the quality of care.

We met one government insurance administrator who couldn’t verify whether the claims she received were for patients enrolled in the insurance program, because doing so would require manually digging through thousands of enrollment records – so she approved every claim. As a result, the system was teetering on bankruptcy, requiring the government to increase premiums, and forcing members to drop out.

We believe low-income countries have an opportunity to leapfrog the inefficiencies of traditional health coverage by building their national programs with technology. Recognizing this opportunity, YC Research gave us funding to build technology that makes it easy to administer health coverage.

With their support, a few of us moved to rural Uganda and spent three months living in a convent alongside the nuns who own and operate the local clinic. We developed our system with direct input from the community, local stakeholders, and global experts.

In six weeks, we coded a mobile app to run the system and opened enrollment in March 2017. To date, 98% of the community has signed up, bringing the program’s membership to 5,880 people. Once enrolled, members can access care at the clinic. Our app streamlines enrolling members, verifying their identity with their thumbprint when they visit the clinic, collecting data on the care they receive, and reimbursing the provider for their costs.

The primary benefit of automating health coverage from end-to-end is improved efficiency. For example, our mobile app has reduced the time it takes to enroll new members and process claims from weeks to minutes. And by surfacing real-time data, we have insight into the cost and quality of care, making it possible to do things like identify unnecessary prescriptions and ensure treatment guidelines are followed. Currently, the cost of care is just $0.78 per member per month.

193 of the world's governments share a goal to achieve universal health coverage by 2030. If we prove our system can make health coverage more affordable, we believe governments will adopt our technology to accelerate progress towards universal health coverage. But most importantly, when we think back to that woman on the bus, we’re excited to be one step closer to ensuring she never has to worry about affording the care her son needs to get healthy.

We hope you’ll support our journey towards making healthcare a reality for everyone by donating to a Watsi patient.

Spyware Dolls and Intel's VPro


Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May, when Intel confirmed that a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking

  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping in to the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing though, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?

Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.

Ask HN: What's the worst Oracle can do to OpenJDK?

119 points by yaanoncoward | 7 hours ago | 50 comments
I've recently started a project using Java. From a technical perspective and after careful analysis of alternative technologies, for what I am doing it currently is the right choice.

But with the Google-Oracle lawsuit, Oracle laying off the Sun team, and my professional experience, I really have to convince myself to build anything on top of a technology stack where they are such a powerful player.

I understand that OpenJDK is GPLed with the classpath exception, but is that enough? Could Oracle somehow sabotage OpenJDK into oblivion? What are the most probable steps Google, IBM, and Red Hat could take if Oracle pulls the plug on Java, or worse, plays some dirty legal tricks?

I know my concerns are vague but I wonder if people who know better could share their thoughts?



Scaling Your Org with Microservices [slides]


Co-presented with Charity Majors

Description
Ask people about their experience rolling out microservices, and one theme dominates: engineering is the easy part, people are super hard! Everybody knows about Conway’s Law, everybody knows they need to make changes to their organization to support a different product model, but what are those changes? How do you know if you’re succeeding or failing, if people are struggling and miserable or just experiencing the discomfort of learning new skills? We’ll talk through real stories of pain and grief as people modernize their team and their stack.

Detailed review by Tanya Reilly


Video recording of talk

Tweets

Show HN: Play – A simple cli audio player


README.md

Play audio files from terminal.


Quick Start

  1. Download or clone the repo (git clone https://github.com/roecrew/play.git), then install the dependencies:
    • brew install portaudio
    • brew install libsndfile
    • brew install glfw3
    • brew install glew
  2. make
  3. ./play a.wav

Flags

-v ... opens visualization window

-l ... loops audio

Supported Filetypes

See http://www.mega-nerd.com/libsndfile/#Features


Bitcoin's Academic Pedigree

July/August issue of acmqueue




Networks


Jacob Loveless

Years ago I squandered most of my summer break locked inside my apartment, tackling an obscure problem in network theory (the bidirectional channel capacity problem). I was convinced that I was close to a breakthrough. (I wasn't.) Papers were everywhere, intermingled with the remnants of far too many 10¢ Taco Tuesday wrappers.

A good friend stopped by to bring better food, lend a mathematical hand, and put an end to my solitary madness. She listened carefully while I jumped across the room grabbing papers and incoherently babbling about my "breakthrough."

Then she somberly grabbed a pen and wrote out the flaw that I had missed, obliterating the proof.

I was stunned and heartbroken. She looked up and said, "But this is great, because what you've done here is define the problem more concretely." She continued with a simple truth that I've carried with me ever since:

"Most times, defining the problem is harder and more valuable than finding the answer. The world needs hard, well-defined problems. Generally, only one or two people work together on an answer—but hundreds work on a well-defined problem."

And so, dear reader, this is where I would like to begin. Unlike most articles in acmqueue, this one isn't about a new technique or an exposition on a practitioner's solution. Instead, this article looks at the problems inherent in building a more decentralized Internet. Audacious? Yes, but this has become a renewed focus in recent years, even by the father of the Web himself (see Tim Berners-Lee's Solid project4). Several companies and open-source projects are now focusing on different aspects of the "content-delivery" problem. Our company Edgemesh (https://edgemesh.com) is working on peer-enhanced client-side content acceleration, alongside other next-generation content-delivery networks such as Peer5 (https://peer5.com) and Streamroot (https://streamroot.io), both of which are focused on video delivery. Others, such as the open-source IPFS (InterPlanetary File System; https://ipfs.io) project, are looking at completely new ways of defining and distributing "the web."

Indeed, the concept of a better Internet has crept into popular media. In season 4 of HBO's sitcom Silicon Valley, the protagonist Richard Hendricks devises a new way to distribute content across the Internet in a completely distributed manner using a P2P (peer-to-peer) protocol. "If we could do it, we could build a completely decentralized version of our current Internet," Hendricks says, "with no firewalls, no tolls, no government regulation, no spying. Information would be totally free in every sense of the word." The story line revolves around the idea that thousands of users would allocate a small portion of their available storage on their mobile devices, and that the Pied Piper software would assemble the storage across these devices in a distributed storage "cloud." Then, of course, phones explode and hilarity ensues.

The core idea of a distributed Internet does make sense, but how would it be built? As I learned in my self-imposed solitary confinement so long ago, before diving into possible solutions, you need to define the problems more clearly.

Problems of a Distributed Internet

In his 2008 acmqueue article "Improving Performance on the Internet," Tom Leighton, cofounder of Akamai, the largest content-distribution network in the world, outlined four major architectures for content distribution: centralized hosting, "big data center" CDNs (content-delivery networks), highly distributed CDNs, and P2P networks. Of these, Leighton noted that:

"P2P can be thought of as taking the distributed architecture to its logical extreme, theoretically providing nearly infinite scalability. Moreover, P2P offers attractive economics under current network pricing structures."11

He then noted what others have found in the past, that although the P2P design is theoretically the most scalable, there are several practical issues, specifically throughput, availability, and capacity.

Throughput

The most commonly noted issue is the limited uplink capacity of edge devices, as noted by Leighton in his 2008 article:

P2P faces some serious limitations, most notably because the total download capacity of a P2P network is throttled by its total uplink capacity. Unfortunately, for consumer broadband connections, uplink speeds tend to be much lower than downlink speeds: Comcast's standard high-speed Internet package, for example, offers 6 Mbps for download but only 384 Kbps for upload (one-sixteenth of download throughput).11

This limitation is not as acute today as it was nearly a decade ago when upload speeds in the U.S. hovered around .5 Mbps. Figure 1 shows the current upload and download as taken from Speedtest.net (http://www.speedtest.net/reports/). These data points show that global "last-mile" throughput rates are nearly 30 times their 2008 counterparts. Is this enough? Would a peer with an upload rate at the lower quartile of these metrics (~4 Mbps) suffice? This question has been thoroughly explored in regard to actual webpage load time.

[Figure 1]

When Mike Belshe (of Google SPDY fame) looked at the relationship between end-client bandwidth and page-load time, he discovered that "bandwidth doesn't matter (much)."3 Once the client's available bandwidth reached 5 Mbps, the impact on the end user's page load is negligible. Figure 2 shows page-load time as a function of client bandwidth, assuming a fixed RTT (round-trip time) of 60 ms.

[Figure 2]
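Belshe's diminishing-returns effect can be illustrated with a toy model (my own simplified sketch, not Belshe's methodology): treat page-load time as a fixed latency-bound term plus a transfer term that shrinks with bandwidth. The constants below are assumptions chosen for illustration only.

```python
# Toy model of page-load time vs. client bandwidth (illustrative only;
# the constants are assumptions, not Belshe's actual data).
RTT_S = 0.060          # fixed round-trip time: 60 ms
ROUND_TRIPS = 40       # assumed number of request round trips for a full page
PAGE_BITS = 2e6 * 8    # assumed 2 MB page

def page_load_time(bw_bps: float) -> float:
    """Latency-bound cost plus bandwidth-bound transfer cost."""
    return ROUND_TRIPS * RTT_S + PAGE_BITS / bw_bps

for mbps in (1, 2, 5, 10, 20):
    print(f"{mbps:>3} Mbps -> {page_load_time(mbps * 1e6):.2f} s")
```

Even in this crude model, going from 1 Mbps to 5 Mbps saves far more time than going from 5 Mbps to 20 Mbps: past a few Mbps the RTT term dominates, which is the "bandwidth doesn't matter (much)" observation.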

Availability

The next major hurdle for the distributed Internet is peer availability. Namely, are there enough peers, and are they online and available for enough time to provide value? In the past ten years the edge device count has certainly increased, but has it increased enough? Looking at "Internet Trends 2017,"14 (compiled by Mary Meeker of the venture capital firm KPCB), you can see how much the "available peer" count has increased over the past ten years from mobile alone (see figure 3).

[Figure 3]

Today roughly 49 percent of the world's population is connected10—around 3.7 billion people, many with multiple devices—so that's a big pool of edge devices. Peter Levine of the venture capital firm Andreessen Horowitz has taken us out a few years further and predicted that we will shortly be going beyond billions and heading toward trillions of devices.12

You can get a sense of scale by looking at an Edgemesh network for a single e-commerce customer's website with a global client base, shown in figures 4 and 5.

[Figure 4]

[Figure 5]

It's probably safe to say there are enough devices online, but does the average user stay online long enough to be available? What is "long enough" for a peer to be useful?

A sensible place to start might be to want peers to be online long enough for any peer to reach any other peer anywhere on the globe. Given that, we can set some bounds.

The circumference of the earth is approximately 40,000 km. The rule of thumb is that light takes 4.9 microseconds to move 1 km through fiber optics. That would mean data could circumnavigate the globe in about one-fifth of a second (196 milliseconds). Oh, if wishing only made it so, but as Stanford University's Stuart Cheshire points out in "It's the Latency, Stupid,"6 the Internet operates at least a factor of two slower than this. This 2x slowdown would mean it would take approximately 400 milliseconds to get around the globe. Unfortunately, I have spent some time in telecom—specifically in latency-optimized businesses13—and I think this time needs to be doubled again to account for global transit routing; thus, the data can go around the world in some 800 milliseconds. If users are online and available for sub-800 millisecond intervals, this may become problematic. Since most decentralized solutions would require the user to visit the content (e.g., be on the website), the real question is, what is the average page-view time for users across the globe?

Turns out it is 3 minutes 36 seconds,24 or 216,000 milliseconds.

To double-check this, I took all peer-session times (amount of time Edgemesh peers were online and connected to the mesh) across the Edgemesh user base for the past six months (figure 6). The average was right in line at 3 minutes 47 seconds.

[Figure 6]

In either case, if the node stays online just long enough to download a single web page, that would be enough time for the data to circumnavigate the globe 270 times, certainly long enough to contact a peer anywhere on Earth.
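The back-of-the-envelope arithmetic above can be checked directly; a minimal sketch:

```python
# Sketch of the round-the-globe latency arithmetic from the text.
CIRCUMFERENCE_KM = 40_000
FIBER_US_PER_KM = 4.9          # rule of thumb: 4.9 microseconds per km through fiber

ideal_ms = CIRCUMFERENCE_KM * FIBER_US_PER_KM / 1000
internet_ms = 2 * ideal_ms     # Cheshire: the Internet is at least 2x slower
routed_ms = 2 * internet_ms    # another 2x for global transit routing (~800 ms)

page_view_ms = (3 * 60 + 36) * 1000   # average page view: 3 min 36 s

print(round(ideal_ms))                # 196
print(round(page_view_ms / 800))      # 270
```

So a peer that stays online for a single average page view could, in principle, be reached from anywhere on Earth roughly 270 times over.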

Capacity

If enough users are online for a long enough duration, and they have an acceptable egress throughput (upload bandwidth), all that remains is the question of whether there is enough spare capacity (disk space) available to provide a functional network.

If we assume a site has 20 percent of its users on mobile and 80 percent of its users on desktops—and further expand this to 500 MB of available capacity per desktop user and 50 MB per mobile user (the lower end of browser-available storage pools)—we can extract an estimated required mesh size to achieve a given cache hit rate if the requested content follows a Zipf distribution.1 Figure 7 shows estimated mesh size required for variable cache hit rates for .5 TB, 1 TB, and 2 TB active caches. These numbers certainly seem reasonable for a baseline. Essentially, a website with 500 GB of static content (about 16 million average web images) would need an online capacity of 2 million distinct nodes to achieve a theoretical offload of 100 percent of its traffic to a P2P mesh (approximately an 8:1 ratio of images to users).

[Figure 7]
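The Zipf assumption lends itself to a quick estimate. The sketch below is my own simplified model, not Edgemesh's actual sizing math: if requests over n assets follow a Zipf distribution (exponent 1) and the mesh collectively holds the k most popular assets, the expected hit rate is the ratio of harmonic numbers H_k / H_n. The catalog size is an assumption for illustration.

```python
# Minimal Zipf cache-hit sketch (illustrative assumptions, not Edgemesh's model):
# requests over n assets follow Zipf(s=1); a mesh that collectively pins the
# k most popular assets sees an expected hit rate of H_k / H_n.

def harmonic(n: int) -> float:
    """Nth harmonic number, H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

def zipf_hit_rate(k: int, n: int) -> float:
    """Expected hit rate when the top k of n Zipf-distributed assets are cached."""
    return harmonic(k) / harmonic(n)

n_assets = 1_000_000          # assumed catalog size
for frac in (0.01, 0.10, 0.50):
    k = int(n_assets * frac)
    print(f"cache {frac:4.0%} of assets -> ~{zipf_hit_rate(k, n_assets):.0%} hit rate")
```

The heavy-tailed shape is the point: caching only 1 percent of a million-asset catalog already serves roughly two-thirds of requests in this model, which is why modest per-node storage can still yield meaningful offload.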

Enabling a Distributed Internet

Now that we've better defined the problems and established the theoretical feasibility of a new solution, it's time to look at the technology available to bring to bear on the problem. To start, we can constrain our focus a bit. Implementations such as IPFS focus on distributing the entire content base, allowing you to free yourself from the restrictions of web servers and DNS entirely. This is a fantastic wholesale change, but the tradeoff is that it will require users to dramatically modify how they access content.

Since a peer-to-peer design is dependent on the total network size, this model has difficulty growing until it reaches critical mass. At Edgemesh we wanted to focus on enhancing existing web-content delivery transparently (e.g., in the browser) without requiring any changes to the user experience. This means ensuring that the technology abides by the following three restrictions:

• For users, the solution should be transparent.

• For developers, the solution should require zero infrastructure changes.

• For operations, the solution should be self-managing.

The next question is where exactly to focus.

Fully enabling peer-enhanced delivery of all content is difficult and dangerous (especially allowing for peer-delivered JavaScript to be executed). Is there an 80 percent solution? Trends posted by HTTP Archive reveal that static components (images/video/fonts/CSS) make up roughly 81 percent of the total page weight,9 as shown in figure 8.

[Figure 8]

Given these details, let's narrow the focus to enabling/enhancing edge delivery of these more traditional CDN assets and the associated challenges of moving and storing data.

Moving data: Building a new network (overlay)

To support peer-to-peer distribution, an overlay network needs to be developed to allow the peer-to-peer connections to operate within the larger existing Internet infrastructure. Luckily, such a stack is available: WebRTC (Real-Time Communications19). Started in earnest in 2011 by Google, WebRTC is an in-browser networking stack that enables peer-to-peer communication. WebRTC is primarily employed by voice and video applications (Google Hangouts/Duo/Allo, Slack, Snapchat, Amazon Chime, WhatsApp, Facebook Messenger) to facilitate peer-to-peer video- and audioconferencing.

WebRTC is a big deal; in June 2016 (only five years later) Google provided several key milestones7 from stats it collected (with some additional updates at the end of 201625):

• Two billion Chrome browsers with WebRTC.

• One billion WebRTC audio/video minutes per week on Chrome.

• One petabyte of DataChannel traffic per week on Chrome (0.1 percent of all web traffic).

• 1,200 WebRTC-based companies and projects (it was 950 in June).

• Five billion mobile app downloads that include WebRTC.

WebRTC support exists in the major browsers (Chrome, Firefox, Edge, and now Safari2). Comparing WebRTC's five-year adoption against other VoIP-style protocols shows the scale (see figure 9).8

[Figure 9]

WebRTC is a user-space networking stack. Unlike HTTP, which is dependent on TCP for transfer, WebRTC has its roots in a much older protocol—SCTP (Stream Control Transmission Protocol)—and encapsulates this in UDP (User Datagram Protocol). This allows for much lower latency transfer, removes head-of-line blocking, and, as a separate network stack, allows WebRTC to use significantly more bandwidth than HTTP alone.

SCTP is a little like the third wheel of the transport layer of the OSI (Open Systems Interconnection) model—we often forget it's there but it has some very powerful features. Originally introduced to support signaling in IP networks,22 SCTP quickly found adoption in next-generation networks (IMS and LTE).

WebRTC leverages SCTP to provide a reliable, message-oriented delivery transport (encapsulated in UDP or TCP, depending on the implementation5). Alongside SCTP, WebRTC leverages two additional major protocols: DTLS (Datagram Transport Layer Security) for security (a derivative of SSL) and ICE (Interactive Connectivity Establishment) to allow for support in NAT (network address translation) environments (e.g., firewall traversal).

The details of the ICE protocol and how it works with signaling servers (e.g., STUN and TURN) are beyond the scope of this article, but suffice it to say that WebRTC has all the necessary plumbing to enable real peer-to-peer networking.

A simple example is a WebRTC Golang implementation by Serene Han.21 Han's chat demo allows you to pass the SDP (Session Description Protocol) details between two machines (copy paste signaling) to enable peer-to-peer chat. To run this yourself (assuming you have a Docker instance locally), simply do the following:

docker run -it golang bash

Then in the Docker instance, this one-liner will get you set up:

apt-get update && apt-get install libx11-dev -y && \
go get github.com/keroserene/go-webrtc && \
cd /go/src/github.com/keroserene/go-webrtc && \
go run demo/chat/chat.go

If you prefer a browser-native starting point, look at the simple-peer module,20 originally from Feross Aboukhadijeh's work with WebTorrent (https://webtorrent.io).

Storing data: Browser storage options and asset interception

The next step is finding a method both to intercept standard HTTP requests and to develop a system for storing peer-to-peer delivered assets. For the request-intercept problem, you need look no further than the service worker.18 The service worker is a relatively new feature available in most browsers that allows a background process to run in the browser. Like a web worker (which can be used as a proxy for threads), a service worker has restrictions on how it can interact and exchange data with the DOM (Document Object Model).

The service worker does, however, have a powerful feature that was originally developed to support offline page loads: the Fetch API.16 The Fetch API allows a service worker to intercept request and response calls, similar to an HTTP proxy. This is illustrated in figure 10.
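The interception flow can be sketched in a few lines of service-worker JavaScript. This is a minimal cache-first sketch for illustration, not Edgemesh's actual code; the asset filter and cache policy are assumptions:

```javascript
// Decide whether a request looks like a static asset worth serving from
// local storage rather than the network. The extension list is illustrative.
function isCacheableAsset(url) {
  return /\.(png|jpe?g|gif|svg|css|js|woff2?)$/i.test(new URL(url).pathname);
}

// `self` is the ServiceWorkerGlobalScope; the guard keeps this file inert
// outside a service-worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    if (!isCacheableAsset(event.request.url)) return; // fall through to network
    event.respondWith(
      caches.match(event.request).then(
        (cached) => cached || fetch(event.request) // cache hit, else network
      )
    );
  });
}
```

In a full peer-to-peer deployment, the cache lookup would be backed by assets replicated from other browsers rather than only by the local HTTP cache.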

Building a decentralized web-delivery model

With the service worker online, you can now intercept traditional HTTP requests and offload these requests to the P2P network. The last remaining component will be a browser local storage model where P2P-accelerated content can be stored and distributed. Although no fewer than five different storage options exist in the browser, the IndexedDB17 implementation is the only storage API available within a service-worker context and the DOM context (where the WebRTC code can execute, which is why Edgemesh chose it as the storage base). Alternatively, the CacheStorage API may also be used within the service-worker context.15
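A sketch of what persisting a fetched asset into IndexedDB might look like, so it is reachable from both the service-worker and DOM contexts. The database name, store name, and key scheme here are invented for the example:

```javascript
// Hypothetical database/store names for this illustration.
const DB_NAME = 'p2p-assets';
const STORE = 'assets';

// Normalize a URL into a stable storage key: drop the fragment, keep the query.
function assetKey(url) {
  const u = new URL(url);
  u.hash = '';
  return u.href;
}

// Open (and lazily create) the object store.
function openAssetDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () => req.result.createObjectStore(STORE);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Store an asset blob under its normalized URL key.
async function storeAsset(url, blob) {
  const db = await openAssetDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE, 'readwrite');
    tx.objectStore(STORE).put(blob, assetKey(url));
    tx.oncomplete = resolve;
    tx.onerror = () => reject(tx.error);
  });
}
```

Because IndexedDB is visible from both contexts, an asset written here by the WebRTC code can later be served by the service worker's fetch handler.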

Implementing a Distributed Internet

We have a theoretically viable model to support peer-to-peer content delivery. We have a functional network stack to enable ad-hoc efficient peer-to-peer transfer and access to an in-browser storage medium. And so, the game is afoot!

Figure 11 is a flowchart of the Edgemesh P2P-accelerated content-delivery system. The figure shows where the service-worker framework will enable asset interception, and WebRTC (aided by a signal server) will facilitate browser-to-browser asset replication.


Returning, then, to Mike Belshe's research, we can start to dig into some of the key areas to be optimized. Unlike bandwidth, where adding incrementally more bandwidth above 5 Mbps has negligible impact on page-load time, latency (RTT) dramatically increases page-load time, as shown in figure 12.3


WebRTC is already an efficient protocol, but the peer-selection process presents opportunities for further latency reduction. For example, if you are located in New York, providing a peer in Tokyo is likely a nonoptimal choice. Figure 13 shows a sampling of WebRTC latency distributions for a collection of sessions across the Edgemesh networks. Can we do better?


A simple optimization might be to prefer peers that reside in the same network, perhaps identified by the AS (autonomous system)23 number of each peer. Even this simple optimization can cut the average latency by a factor of two. Figure 14 shows performance increase by intra-AS routing preference.
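This preference can be expressed as a tiny ranking function. The peer-object shape (`asn`, `rttMs`) is an assumption for illustration, not Edgemesh's actual API:

```javascript
// Rank candidate peers: same-AS peers first, then by measured round-trip time.
// `localAsn` is the AS number of the requesting browser's network.
function rankPeers(localAsn, peers) {
  return [...peers].sort((a, b) => {
    const aSame = a.asn === localAsn ? 0 : 1;
    const bSame = b.asn === localAsn ? 0 : 1;
    return aSame - bSame || a.rttMs - b.rttMs; // intra-AS first, then lowest RTT
  });
}
```

Sorting same-network candidates first and breaking ties on measured RTT is enough to capture the intra-AS routing preference described above.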


Another optimization is choosing which assets to replicate into a peer. For example, if a user is currently browsing the landing page of a site, we can essentially precache all the images for the next pages, effectively eliminating the latency altogether. This is a current area of research for the team at Edgemesh, but early solutions have already shown significant promise. Figure 15 shows the effective render time for Edgemesh-enabled clients (accelerated) and non-Edgemesh enabled clients (standard) for a single customer domain. The average page-load time has been reduced by almost a factor of two.
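One way to sketch this precaching step, assuming a hypothetical JSON manifest listing the next pages' asset URLs (the manifest URL, cache name, and image filter are all illustrative):

```javascript
// Keep only manifest entries that look like images worth precaching.
function imageUrls(urls) {
  return urls.filter((u) => /\.(png|jpe?g|gif|webp|svg)$/i.test(new URL(u).pathname));
}

// While the user reads the landing page, fetch and cache next-page images
// so subsequent navigations render with near-zero asset latency.
async function precacheNextPageAssets(manifestUrl) {
  const res = await fetch(manifestUrl);        // e.g. a JSON array of URLs
  const assets = imageUrls(await res.json());
  const cache = await caches.open('precache-v1');
  await cache.addAll(assets);                  // fetch and store each image
}
```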


This is most clearly seen when most of the page content can be effectively precached, as shown in the page-load time statistics of figure 16.


Conclusion

It had been a few days since I'd been outside, and a few more since I would make myself presentable. For the previous few weeks the team and I had been locked inside the office around the clock, essentially rewriting the software from scratch. We thought it would take a week, but we were now three months into 2017. The growing pile of empty delivery bags resting atop the ad-hoc whiteboard tables we were using was making it difficult to make out exactly what the big change was. We were convinced this was the breakthrough we had been looking for (turns out it was), and that this version would be the one that cracked the problem wide open. I was head down trying to get my parts to work, and I was lost. Then I heard the knock at the door. She came in and sat down, patiently moving aside the empty pizza boxes while I babbled on about our big breakthrough and how I was stuck.

Then, just like she had nearly two decades earlier, she grabbed the marker and said:

"Honey, I think I see the issue. You haven't properly defined the problem. Most times, defining the problem is harder and more valuable than finding the answer. So, what exactly are you trying to solve?"

The world is more connected than it ever has been before, and with our pocket supercomputers and IoT (Internet of Things) future, the next generation of the web might just be delivered in a peer-to-peer model. It's a giant problem space, but the necessary tools and technology are here today. We just need to define the problem a little better.

References

1. Adamic, L. A., Huberman, B. A. 2002. Zipf's law and the Internet. Glottometrics 3: 143-150; http://www.hpl.hp.com/research/idl/papers/ranking/adamicglottometrics.pdf.

2. Apple Inc. Safari 11.0; https://developer.apple.com/library/content/releasenotes/General/WhatsNewInSafari/Safari_11_0/Safari_11_0.html.

3. Belshe, M. 2010. More bandwidth doesn't matter (much); https://docs.google.com/.

4. Berners-Lee, T. Solid; https://solid.mit.edu/.

5. BlogGeek.Me. 2014. Why was SCTP selected for WebRTC's data channel; https://bloggeek.me/sctp-data-channel/.

6. Cheshire, S. 1996-2001. It's the latency, stupid; http://www.stuartcheshire.org/rants/latency.html.

7. Google Groups. 2016. WebRTC; https://groups.google.com/forum/#!topic/discuss-webrtc/I0GqzwfKJfQ.

8. Hart, C. 2017. WebRTC: one of 2016's biggest technologies no one has heard of. WebRTC World; http://www.webrtcworld.com/topics/webrtc-world/articles/428444.

9. HTTP Archive; http://httparchive.org/trends.php.

10. Internet World Stats. 2017. World Internet usage and population statistics—March 31, 2017; http://www.internetworldstats.com/stats.htm.

11. Leighton, T. 2008. Improving performance on the Internet. acmqueue 6(6); http://queue.acm.org/detail.cfm?id=1466449.

12. Levine, P. 2016. The end of cloud computing. Andreessen Horowitz; http://a16z.com/2016/12/16/the-end-of-cloud-computing/.

13. Loveless, J. 2013. Barbarians at the gateways. acmqueue 11(8); http://queue.acm.org/detail.cfm?id=2536492.

14. Meeker, M. 2017. Internet trends 2017—code conference. KPCB; http://www.kpcb.com/internet-trends.

15. Mozilla Developer Network. 2017. CacheStorage; https://developer.mozilla.org/en-US/docs/Web/API/CacheStorage.

16. Mozilla Developer Network. 2017. FetchAPI; https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API.

17. Mozilla Developer Network. 2017. IndexedDB API; https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API.

18. Mozilla Developer Network. 2017. Using service workers; https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers.

19. Real-time communication in web browsers (rtcweb) Charter for Working Group; http://datatracker.ietf.org/wg/rtcweb/charter/.

20. Simple WebRTC video/voice and data channels. Github; https://github.com/edgemesh/simple-peer.

21. WebRTC for Go. Github; https://github.com/keroserene/go-webrtc.

22. Wikipedia. Signalling System No. 7; https://en.wikipedia.org/wiki/Signalling_System_No._7.

23. Wikipedia. Autonomous system; https://en.wikipedia.org/wiki/Autonomous_system_(Internet).

24. Wolfgang Digital. 2016. E-commerce KPI benchmarks; https://www.wolfgangdigital.com/uploads/general/KPI_Infopgrahic_2016.jpg.

25. YouTube. 2016. WebRTC; https://youtu.be/OUfYFMGtPQ0?t=16504.

Jacob Loveless is chief executive officer at Edgemesh Corporation, the premier edge-network acceleration platform. Prior to Edgemesh, Loveless served as CEO of Lucera Financial Infrastructures, a global network service provider for financial institutions. He was a partner at Cantor Fitzgerald and Company, responsible for running the firm's global low-latency trading operations for nearly 10 years. Prior to Wall Street, he was a senior engineer and consultant for the Department of Defense, focused on large-scale data-analysis programs and distributed networks. Loveless focuses primarily on low-latency networks and distributed computing. His prior ACM articles are available at http://queue.acm.org/detail.cfm?id=2536492 and http://queue.acm.org/detail.cfm?id=2534976.

Copyright © 2017 held by owner/author. Publication rights licensed to ACM.


Originally published in Queue vol. 15, no. 4

Chinese man jailed for selling VPN software


China’s crackdown on VPNs appears to be continuing apace after it emerged that a man has been jailed for selling VPN software online.

Deng Jiewei, a 26-year-old man from the city of Dongguan, in Guangdong province, close to Hong Kong, had been selling VPN software through his own small independent website. Online records from China's Supreme People's Court (SPC) database [in Chinese] show that he has been convicted of the offence of “illegal control of a computer system” under Article 285 of China's Criminal Law. He has subsequently been sentenced to nine months' imprisonment.

9 months in jail for a $2,000 profit

Details of his prosecution, which have been reported on the WhatsonWeibo website, note that he set up a ‘dot com’ website in October 2016 through which users could download two types of VPN software. This software enabled users to get around China’s Great Firewall censorship system and enjoy their right to free access to the internet.

Deng had reportedly made a profit of just US$2,125 from sales, which suggests that, despite his lengthy sentence, he had not sold a great number of VPN connections.

His sentence, which was handed down in January but has only recently come to light, appears to be part of a concerted effort by the Chinese Communist regime to crack down even harder on internet freedoms in the run up to this year’s National Congress meeting.

VPN crackdown well underway

As we have reported regularly, China has announced a VPN ban which will officially come into place in February 2018. Whilst it is not yet entirely clear how the regime plans to enforce the ban, they have already taken a number of steps to make it harder for Chinese people to access VPNs.

As well as shutting down a number of local VPNs, they have also targeted various local e-commerce sites which sold VPNs either directly or indirectly.  Meanwhile, Apple is also cooperating with the Communist Party by removing many VPN apps from their Chinese App store.

For Chinese people, a VPN is their only real means of accessing the internet uninhibited by the Communist Party’s censorship apparatus and without the risk of the wide-scale online surveillance programmes which watch what every Chinese citizen is doing online at all times. With online anonymity also being targeted of late, if VPNs do disappear, it will make freedom of speech and access to information a great deal harder.

People worried by prison sentence

Understandably, Chinese people are less than impressed with the prison sentence handed down to Deng Jiewei. The story has been shared more than 10,000 times on the Chinese social media site Weibo. Some questioned how selling VPNs could be an offence under Article 285 of China's criminal law, asking “How can using a VPN be defined as ‘intruding into computer systems’?”

They suggested this law could perhaps be applied to all VPN users, with one commenter noting, “I am scared we could all be arrested now.” “The dark days are coming,” said another, and on the face of it this gloomy premonition appears to be correct.

Until efforts to ban VPNs in China come into force, it will be difficult to see how effective they can be. But the challenge appears to be akin to holding back the tide.

At the moment, VPNs are still popular in China as they still offer the best means of escaping the restriction of online censorship and surveillance. Recent tests suggested that VyprVPN offered the fastest connection speeds for Chinese internet users. But they are not the only VPN that works in China and you can read our rundown of the best providers here.

Despite the threat of imprisonment, the sale and use of VPNs are not going to go away as the people’s desire for online freedom and information will always trump the oppressive measures of the Communist Party regime.

Ethical Hacking Course: Wireless Lab Session


For wireless hacking we will first of all put our wireless card into monitor mode and the command is –

airmon-ng start wlan0

We can check that it has been started by running ifconfig. We are running this on a physical machine because not all wireless cards can be used for wireless hacking, so one should have the right hardware to do this.

airodump

The airodump command goes out and checks the network traffic. We will use the command –

airodump-ng wlan0mon

Figure1. airodump output

Here we can see the MAC addresses, the BSSIDs and other information of all the different WAPs in the region. Out of these we will attack WLAN1. For this we will isolate that. We will open another window and run the following command –

airodump-ng -c <channel> --bssid <bssid> -w dump1 wlan0mon

Figure2. airodump for wlan1

Now we are monitoring the one we want to crack. For demo purposes we will log on to the network using an iPhone. While logging in, we can see the handshake come through on the right-hand side of the first line of our output. Now open up another terminal and run ‘ls’. It will show the dump files, written in four formats: .cap, .csv, .kismet.csv and .kismet.netxml. We will run a crack on the dump to see if there are any handshakes.

aircrack-ng <filename>

We can see that there is no handshake. So now we will try to get a handshake captured on that card by logging in to the network again. Now check the dump again. This time we have the handshake –

Figure3. aircrack output showing handshake

As it is WPA encryption we will use a dictionary. Here is the command –

aircrack-ng -w words.txt dump-01.cap

Here words.txt is the dictionary which has the key in it. The key is found –

Figure4. aircrack key found

This shows that we have successfully cracked the network.

Ethical Hacking Tutorial – Wireless Lab Session Video:

HSBC is killing my business, piece by piece 


I set up my own company in 2012. After decades of working for other people I felt the time was right. I had the experience and the contacts to make it viable and I was willing to put in the monumental effort needed to embark on such a venture. It was a nerve wracking decision, but more than anything it was exciting. In April 2012 Photon Storm Ltd was incorporated and began trading.

After talking with my accountant we decided to open an HSBC Business Account here in the UK. The process was problem free. They provided me with a company debit card and standard banking services. I did my part, ensuring we were always in credit and never needing to borrow any money. My accountants looked after the paperwork, keeping everything up to date. And we’ve been profitable and maintained a healthy positive bank balance since day one.

Everything was going great until Thursday August 10th 2017. I tried to login to our internet banking to pay a freelancer and was met with this message:

A genuine WTF?! moment if ever there was one

At first I assumed it was just a mistake. Maybe the anti-fraud team were being a little overzealous and wanted to check some card payments? I called them immediately. After bouncing through multiple departments I finally end-up talking with HSBC Safeguarding. They tell me they can’t give any more information, cannot un-suspend the account and would need to call me the following week.

However, the alarm bells were already ringing. I knew exactly who the Safeguarding team were because I had talked to them earlier in the year after receiving their scary letter. The letter said they needed to conduct a business review or they would be forced to suspend my account. That’s not the kind of letter you ignore. So I duly responded and completed my business review with them. This was 2 months before the account was suspended, back in June.

Safeguarding is a process that HSBC is taking all of its accounts through in order to better understand how they are used. It’s the fallout of HSBC receiving a record $1.9 billion fine as a result of a US Senate ruling where they were found guilty of allowing money laundering to “drug kingpins and rogue nations”.

It involves them asking all kinds of questions related to what you do and the comings and goings on your account. Who are the people and companies paying you, and being paid by you? What work was invoice X for? Do you have any large sums of money coming in or going out? Which countries do you work with? Why do you hold the bank balance you do? and so on.

It takes around an hour and although marketed as offering me protection against financial crime, what it was really doing was checking I’m not a drug cartel or rogue nation. I am of course neither of these things, but I appreciate they had to check, so I answered every question as fully as I could. I was told that if they needed more information they would be in touch, otherwise it was all fine.

Fast forward to August 10th and clearly things are not “fine” at all.

My business earns income via two streams:

1) Game development. A client will request a game, usually as part of a marketing campaign, and we build it. Sometimes we supply everything: the concept, art, coding and support. Other times the client handles the design in-house and we provide the programming that glues it all together.

2) Our second method of income is from our open source software. We publish and maintain an HTML5 game framework called Phaser. The core library itself is completely free but we make money from the sale of plugins and books via our shop, as well as our Patreon.

All of this was explained to HSBC during our business review.

Then they drop the bombshell

So I wait patiently and anxiously for HSBC to ring. At the appointed time someone from the Safeguarding team calls and explains that they want to conduct the entire business review again, from scratch. No definitive reason was given as to why they needed to do this. It sounded like they were unhappy with the level of questioning asked the first time around.

Frustrated but wanting to resolve this as quickly as possible I comply and go through the entire review again, answering in even more detail than before, to make it painfully clear what we do and where our money comes from.

The second review ends. I’m told that the information is to be sent off to another department who check it, and if they want more details they’ll “be in touch”. I’d heard this same line before, back in June and I no longer trusted them. I begin calling every day to check on progress. It starts taking up to 40 minutes to get through. Clearly they’re dealing with a lot more customers now. Every time they tell me the same thing, that the “other” department hasn’t looked at it yet, but they’ll be in touch if they need more information and “your account will remain suspended in the meantime”.

No-one will admit it's a mistake that this was even happening. No-one will tell me why they didn't ever call to ask for more details back in June after the first review. No-one will tell me why they suspended the account without even notifying me in writing. I've been wrongfully lumped in with all of those who perhaps didn't reply to their initial warnings, and I have to just sit and wait it out. I've filed complaints via their official channels, which have so far elicited no response at all.

This has been going on for weeks. At the time of writing our account has been suspended for nearly 1 month and I’m still no closer to understanding how much longer it will be.

Also, it appears I am not alone:

One part of the above article in particular stood out to me:

“Inhibiting an account is always a last resort, so to get to that stage we will have done everything we can to contact the customer and get the information we need,” said Amanda Murphy, head of commercial banking for HSBC UK.

Like hell they did.

Because our account is suspended all direct debits linked to it automatically fail. All services that store our debit card and try to charge it also fail. We are unable to transfer any money out of our business account, which means we cannot pay ourselves, our freelancers, or any of our suppliers.

Like most people I’ve been in the situation before where I didn’t have much money. Running on fumes come the end of the month, eagerly awaiting my salary. But I have never been in the situation where I have all the money I need, that I spent years working hard to earn and save, but cannot access a penny of it.

It’s a uniquely frustrating feeling being this powerless.

Everything starts to break

An interesting thing happens when you run a business that relies on internet services to operate but have no means of paying them: It starts to break. Not all at once, but in small pieces. Like a stress fracture that grows bigger over time. Here is a small section of my inbox to give you an idea of the scale of the problem after a few weeks:

The first to die was GitHub. We have a bunch of private repositories and if you don’t pay your GitHub bill they eventually close access to the private repos until the account is settled. We store our entire web site in a private repo, so we had to pull some funds from our rapidly dwindling personal account to cover it, otherwise we literally couldn’t update our site.

Then Apple failed. This was a strange one — it appears you actually need a valid payment card associated with your Apple account or you cannot download free apps or update existing ones. Every time you try it just asks you to re-enter payment details. Not a show-stopper, but frustrating all the same.

And so it carries on. Photoshop, Trello, Beanstalk, Slack, GoDaddy, Basecamp, SendGrid — you name it, when the bill is past due, they all eventually fail. Some of them fail more gracefully than others. Adobe at least give you 30 days to resolve the issue before turning your software off. SendGrid give you just 48 hours to “avoid the suspension and / or limitation of your SendGrid services.”

I don’t blame any of these organisations for doing this. I have no personal relationship with them, they don’t know me from Adam. I’m just another failed bank card to them, draining their systems. They don’t understand my situation and to be fair they don’t have to care even if they did.

My web server is hosted with UK Fast, who I do have a client relationship with and was able to explain what is happening to them. So far they have been excellent and it’s only because of them that my web site is even still running and generating my only source of income right now.

But bigger services will start to fail soon. Broadband, the phone line, Vodafone, the water and electricity providers that supply the office I work from. Even the office rent is due next month. Where possible I’ve told everyone I can what is going on but it can’t last for ever.

Most harrowing of all I’ve been unable to pay a member of staff what he is owed. I pushed a personal credit card to the limit just to send him some money via TransferWise but he has had to find other employment while this mess gets sorted out. I don’t blame him at all, I would do the same thing in his situation as I’ve a mortgage to pay, a family to feed and bills too. It’s incredibly frustrating knowing the money I need to solve all of this is right there, but untouchable.

Hints and Tips for your bank screwing up

I figured that at the very least I would try and offer some words of advice based on the back of what’s happening right now:

  1. Don’t bank with HSBC. If you’re about to start a small business, think twice. The banking service is perfectly fine, but when something out of the ordinary happens they move like dinosaurs.
  2. Don’t keep all your business funds with the same bank. This one is a lot harder to arrange and can complicate your accounting, but I’d say it’s worth the hassle. Make sure you’ve enough funds set-aside in an entirely separate account, with an entirely different institution, to cover what you need for a month or more. I wish I had.
  3. If you can pay for an internet service for a year, do so. Most services offer discounts if you pre-pay anyway, so it saves money, but it would also protect you against temporary payment problems in the future, unless of course you’re incredibly unlucky and they land at the same time your yearly payment is due. Our DropBox account was paid for the year thankfully, so our files remained intact.
  4. If you don’t need a service, cancel it, or do it yourself. When everything started failing I was surprised to see a couple of subscriptions I had that weren’t even needed any longer. The payments were quite tiny but I didn’t need to be spending the money at all, so at least I got to cancel those. It’s also made me question the need for a couple of services I have that I could spend some time and do myself locally (git repo hosting for private projects for a single team is a good example of this)
  5. Keep control of your DNS with a provider separate to your web host. Although it’s a horrendous situation to be in, should you be forced into it at least you can update your DNS to point to a new host. This isn’t always possible if they manage DNS for you as well, but if your business relies on your site for income it’s a safe thing to do.
  6. Be able to redirect payments to another bank account. This was an absolute life saver for me. All of our shop sales are handled via Gumroad and we were able to change the account they pay in to each week away from the business one and into our personal one. It’s going to be a nightmare to unpick when this mess is over, but it was that or don’t buy any food. The groceries won. We also get money from advertising, affiliates and Patreon into our PayPal account. A massive shout-out to PayPal for being so excellent. They were able to issue us with a MasterCard (linked to our PayPal balance, not a credit card) and allow us to transfer money into our personal account, instead of the business one. This was quite literally the only way we managed to pay our mortgage this month. PayPal, thank you. Your support was fantastic. I only wish HSBC were more like you.
  7. If you run a small business ask yourself this: What would happen if your account was frozen and you couldn’t access a single penny in it? How would you cope? It’s an unusual predicament, but clearly not a rare one.

What next?

I really don’t know. Our account is still suspended. HSBC are still a brick wall of silence. The only income we have at the moment is from shop sales, Patreon and donations. It’s barely enough to cover our living costs, but thanks to some superb thriftiness from my wife, we’re making it work. Just. We are literally being saved by the income from our open source project, but unless HSBC hurry up, it won’t be enough to save my company as well.

I cannot wait for this nightmare to be over. Once it is, I cannot wait to transfer my business away from HSBC. Assuming I still have one left to transfer.

Until then, opening Photoshop this morning summed it up well:

With Android Oreo, Google is introducing Linux kernel requirements



Android may be a Linux-based operating system, but the Linux roots are something that few people pay much mind. Regardless of whether it is known or acknowledged by many people, the fact remains that Android is rooted in software regarded as horrendously difficult to use and most-readily associated with the geekier computer users, but also renowned for its security.

As is easy to tell by comparing versions of Android from different handset manufacturers, developers are -- broadly speaking -- free to do whatever they want with Android, but with Oreo, one aspect of this is changing. Google is introducing a new requirement that OEMs must meet certain requirements when choosing the Linux kernel they use.


Until now, as pointed out by XDA Developers, OEMs have been free to use whatever Linux kernel they wanted to create their own version of Android. Of course, their builds still had to pass Google's other tests, but the kernel number itself was not an issue. Moving forward, Android devices running Oreo must use at least kernel 3.18, but there are more specific requirements to meet as well.

Google explains on the Android Source page:

Android O mandates a minimum kernel version and kernel configuration and checks them both in VTS as well as during an OTA. Android device kernels must enable the kernel .config support along with the option to read the kernel configuration at runtime through procfs.

The company goes on to detail the Linux kernel version requirements:

  • All SoCs productized in 2017 must launch with kernel 4.4 or newer.
  • All other SoCs launching new Android devices running Android O must use kernel 3.18 or newer.
  • Regardless of launch date, all SoCs with device launches on Android O remain subject to kernel changes required to enable Treble.
  • Older Android devices released prior to Android O but that will be upgraded to Android O can continue to use their original base kernel version if desired.

The main reason for introducing the Linux kernel mandate is security -- and it's hard to argue with that.

Miscellaneous Arduino bits and pieces


I wrote most of the stuff you'll find here mainly for my own benefit, so that I won't forget what I've learned so far. My website is probably the safest place to keep it, so I thought others may as well take a look as I learn about Arduino. Hopefully, anyone with more experience who happens to land on the site will point out any errors & pitfalls.

Switching Your Site to HTTPS on a Shoestring Budget


Google's Search Console team recently sent out an email to site owners with a warning that Google Chrome will take steps starting this October to identify and show warnings on non-secure sites that have form inputs.

Here's the notice that landed in my inbox:

The notice from the Google Search Console team regarding HTTPS support

If your site URL does not support HTTPS, then this notice directly affects you. Even if your site does not have forms, moving over to HTTPS should be a priority, as this is only one step in Google's strategy to identify insecure sites. They state this clearly in their message:

The new warning is part of a long term plan to mark all pages served over HTTP as "not secure".

Current Chrome's UI for a site with HTTP support and a site with HTTPS

The problem is that the process of installing SSL certificates and transitioning site URLs from HTTP to HTTPS—not to mention editing all those links and linked images in existing content—sounds like a daunting task. Who has time and wants to spend the money to update a personal website for this?

I use GitHub Pages to host a number of sites and projects for free, including some that use custom domain names. To that end, I wanted to see if I could quickly and inexpensively convert a site from HTTP to HTTPS. I wound up finding a relatively simple solution on a shoestring budget that I hope will help others. Let's dig into that.

Enforcing HTTPS on GitHub Pages

Sites hosted on GitHub Pages have a simple setting to enable HTTPS. Navigate to the project's Settings and flip the switch to enforce HTTPS.

The GitHub Pages setting to enforce HTTPS on a project

But We Still Need SSL

Sure, that first step was a breeze, but it's not the full picture of what we need to do to meet Google's definition of a secure site. The reason is that enabling the HTTPS setting neither provides nor installs a Secure Sockets Layer (SSL) certificate to a site that uses a custom domain. Sites that use the default web address provided by GitHub Pages are fully secure with that setting, but those of us that use a custom domain have to go the extra step of securing SSL at the domain level.

That's a bummer because SSL, while not super expensive, is yet another cost and likely one you may not want to incur when you're trying to keep costs down. I wanted to find a way around this.

We Can Get SSL From a CDN ... for Free!

This is where Cloudflare comes in. Cloudflare is a Content Delivery Network (CDN) that also provides distributed domain name server services. What that means is that we can leverage their network to set up HTTPS. The real kicker is that they have a free plan that makes this all possible.

It's worth noting that there are a number of good posts here on CSS-Tricks that tout the benefits of a CDN. While we're focused on the security perks in this post, CDNs are an excellent way to help reduce server burden and increase performance.

From here on out, I'm going to walk through the steps I used to connect Cloudflare to GitHub Pages so, if you haven't already, you can snag a free account and follow along.

Step 1: Select the "+ Add Site" option

First off, we have to tell Cloudflare that our domain exists. Cloudflare will scan the DNS records to verify both that the domain exists and that the public information about the domain is accessible.

Cloudflare's "Add Website" Setting

Step 2: Review the DNS records

After Cloudflare has scanned the DNS records, it will spit them out and display them for your review. Cloudflare indicates that it believes things are in good standing with an orange cloud in the Status column. Review the report and confirm that the records match those from your registrar. If all is good, click "Continue" to proceed.

The DNS record report in Cloudflare

Step 3: Get the Free Plan

Cloudflare will ask what level of service you want to use. Lo and behold! There is a free option that we can select.

Cloudflare's free plan option

Step 4: Update the Nameservers

At this point, Cloudflare provides us with its server addresses and our job is to head over to the registrar where the domain was purchased and paste those addresses into the DNS settings.

Cloudflare provides the nameservers for updating the registrar settings.

It's not incredibly difficult to do this, but it can be a little unnerving. Your registrar likely has instructions for how to do this. For example, here are GoDaddy's instructions for updating nameservers for domains registered through their service.

Once you have done this step, your domain will effectively be mapped to Cloudflare's servers, which will act as an intermediary between the domain and GitHub Pages. However, it is a bit of a waiting game and can take Cloudflare up to 24 hours to process the request.

If you are using GitHub Pages with a subdomain instead of a custom domain, there is one extra required step. Head over to Cloudflare's DNS settings and add a CNAME record pointing to <your-username>.github.io, where <your-username> is, of course, your GitHub account handle. You will also need to add a CNAME text file to the root of your GitHub project: literally a text file named CNAME with your domain name in it.
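The CNAME file step is easy to get wrong: the file has no extension and contains nothing but the bare domain. As a quick sketch (www.example.com is a placeholder for your own domain):

```shell
# Sketch: the CNAME file GitHub Pages expects at the project root.
# "www.example.com" is a placeholder; use your own (sub)domain.
echo "www.example.com" > CNAME
cat CNAME    # the file contains only the bare domain name
```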

Here is a screenshot with an example of adding a GitHub Pages subdomain as a CNAME record in Cloudflare's settings:

Adding a GitHub Pages subdomain to Cloudflare

Step 5: Enable HTTPS in Cloudflare

Sure, we've technically already done this in GitHub Pages, but we're required to do it in Cloudflare as well. Cloudflare calls this feature "Crypto" and it not only forces HTTPS, but provides the SSL certificate we've been wanting all along. But we'll get to that in just a bit. For now, enable Crypto for HTTPS.

The Crypto option in Cloudflare's main menu

Turn on the "Always use HTTPS" option:

Enable HTTPS in the Cloudflare settings

Now any HTTP request from a browser is switched over to the more secure HTTPS. We're another step closer to making Google Chrome happy.

Step 6: Make Use of the CDN

Hey, we're using a CDN to get SSL, so we may as well take advantage of its performance benefits while we're at it. We can speed things up by having Cloudflare automatically minify files and by extending the browser cache expiration.

Select the "Speed" option in the settings and allow Cloudflare to auto minify our site's web assets:

Allow Cloudflare to minify the site's web assets

We can also set the expiration on browser cache to maximize performance:

Set the browser cache in Cloudflare's Speed settings

By setting the expiration date further out than the default option, the browser will refrain from asking for a site's resources on each and every visit, at least for resources that more than likely haven't been changed or updated. This will save visitors an extra download on repeat visits within a month's time.

Step 7: Make External Resources Secure

If you use external resources on your site (and many of us do), then those need to be served securely as well. For example, if you use a JavaScript framework and it is not served from an HTTPS source, that blows our secure cover as far as Google Chrome is concerned and we need to patch that up.

If the external resource you use does not provide HTTPS as a source, then you might want to consider hosting it yourself. We have a CDN now that makes the burden of serving it a non-issue.

Step 8: Activate SSL

Woot, here we are! SSL has been the missing link between our custom domain and GitHub Pages since we enabled HTTPS in the GitHub Pages setting and this is where we have the ability to activate a free SSL certificate on our site, courtesy of Cloudflare.

From the Crypto settings in Cloudflare, let's first make sure that the SSL certificate is active:

Cloudflare shows an active SSL certificate in the Crypto settings

If the certificate is active, move to "Page Rules" in the main menu and select the "Create Page Rule" option:

Create a page rule in the Cloudflare settings

...then click "Add a Setting" and select the "Always use HTTPS" option:

Force HTTPS on that entire domain! Note the asterisks in the formatting, which is crucial.

After that click "Save and Deploy" and celebrate! We now have a fully secure site in the eyes of Google Chrome and didn't have to touch a whole lot of code or drop a chunk of change to do it.

In Conclusion

Google's push for HTTPS means front-end developers need to prioritize SSL support more than ever, whether it's for our own sites, company sites, or client sites. This move gives us one more incentive to make the move and the fact that we can pick up free SSL and performance enhancements through the use of a CDN makes it all the more worthwhile.

Have you written about your adventures moving to HTTPS? Let me know in the comments and we can compare notes. Meanwhile, enjoy a secure and speedy site!


Run your own OAuth2 server

This is how ORY Hydra works

I originally built ORY Hydra because all other OAuth2 servers forced their user management on me. But I already had user management in place and did not want to migrate away from it. Some of the providers allowed me to integrate with LDAP or SAML, but my user management did not implement those.

ORY Hydra defines a consent flow which lets you implement the bridge between ORY Hydra and your user management easily, using only a few lines of code. If you want, you could say that ORY Hydra translates the user information you provide to OAuth2 Access Tokens and OpenID Connect ID Tokens, which are then reusable across all your applications (web app, mobile app, CRM, Mail, ...) and also by third-party developers.

Collection of generative models in Tensorflow

README.md

Tensorflow implementation of various GANs and VAEs.

Pytorch version

A PyTorch version is now available at https://github.com/znxlwm/pytorch-generative-model-collections

Generative Adversarial Networks (GANs)

Lists

Variants of GAN structure

Results for mnist

The network architecture of the generator and discriminator is exactly the same as in the infoGAN paper.
For a fair comparison of the core ideas in all GAN variants, all network-architecture implementations are kept the same, except for EBGAN and BEGAN. A small modification is made for EBGAN/BEGAN, since those adopt an auto-encoder structure for the discriminator, but I tried to keep the capacity of the discriminator comparable.

The following results can be reproduced with command:

python main.py --dataset mnist --gan_type <TYPE> --epoch 25 --batch_size 64

Random generation

All results are randomly sampled.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

InfoGAN: Manipulating two continuous codes

Results for fashion-mnist

The comments on the network architecture for mnist also apply here.
Fashion-mnist is a recently proposed dataset consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. (T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot)

The following results can be reproduced with command:

python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 40 --batch_size 64

Random generation

All results are randomly sampled.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

Without hyper-parameter tuning from the mnist version, ACGAN/infoGAN do not work as well as CGAN.
ACGAN tends to fall into mode collapse.
infoGAN tends to ignore the noise vector, so variation in style within the same class cannot be represented.

InfoGAN: Manipulating two continuous codes

Some results for celebA

(to be added)

Variational Auto-Encoders (VAEs)

Lists

Variants of VAE structure

Results for mnist

The network architecture of the decoder (generator) and encoder (discriminator) is exactly the same as in the infoGAN paper. Only the number of output nodes in the encoder differs (2 x z_dim for VAE, 1 for GAN).
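The 2 x z_dim output is the standard VAE parameterization: the encoder predicts a mean and a log-variance per latent dimension, and z is sampled with the reparameterization trick. A minimal NumPy sketch (the batch size and z_dim here are illustrative, not taken from this repo):

```python
# Sketch of why a VAE encoder needs 2 x z_dim outputs: one half is the
# latent mean, the other half the log-variance. Shapes are illustrative.
import numpy as np

z_dim = 62
enc_out = np.random.randn(4, 2 * z_dim)   # stand-in encoder output, batch of 4

mu, log_var = enc_out[:, :z_dim], enc_out[:, z_dim:]
eps = np.random.randn(*mu.shape)
z = mu + np.exp(0.5 * log_var) * eps      # reparameterization trick
print(z.shape)                            # (4, 62)
```

A GAN discriminator, by contrast, only needs a single real/fake output node, which is exactly the difference the paragraph above describes.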

The following results can be reproduced with command:

python main.py --dataset mnist --gan_type <TYPE> --epoch 25 --batch_size 64

Random generation

All results are randomly sampled.

GAN results are also given to compare images generated by VAE and GAN. The main difference (VAE generates smooth but blurry images, whereas GAN generates sharp images with artifacts) is clearly observed in the results.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

CGAN results are also given to compare images generated by CVAE and CGAN.

Results for fashion-mnist

The comments on the network architecture for mnist also apply here.

The following results can be reproduced with command:

python main.py --dataset fashion-mnist --gan_type <TYPE> --epoch 40 --batch_size 64

Random generation

All results are randomly sampled.

GAN results are also given to compare images generated by VAE and GAN.

Conditional generation

Each row has the same noise vector and each column has the same label condition.

CGAN results are also given to compare images generated by CVAE and CGAN.

Results for celebA

(to be added)

Folder structure

The following shows basic folder structure.

├── main.py # gateway
├── data
│   ├── mnist # mnist data (not included in this repo)
│   |   ├── t10k-images-idx3-ubyte.gz
│   |   ├── t10k-labels-idx1-ubyte.gz
│   |   ├── train-images-idx3-ubyte.gz
│   |   └── train-labels-idx1-ubyte.gz
│   └── fashion-mnist # fashion-mnist data (not included in this repo)
│       ├── t10k-images-idx3-ubyte.gz
│       ├── t10k-labels-idx1-ubyte.gz
│       ├── train-images-idx3-ubyte.gz
│       └── train-labels-idx1-ubyte.gz
├── GAN.py # vanilla GAN
├── ops.py # some operations on layer
├── utils.py # utils
├── logs # log files for tensorboard to be saved here
└── checkpoint # model files to be saved here

Acknowledgements

This implementation is based on this repository and was tested with Tensorflow 1.0 and above on Windows 10 and Ubuntu 14.04.

Writing a SQLite clone from scratch in C

Overview

  • What format is data saved in? (in memory and on disk)
  • When does it move from memory to disk?
  • Why can there only be one primary key per table?
  • How does rolling back a transaction work?
  • How are indexes formatted?
  • When and how does a full table scan happen?
  • What format is a prepared statement saved in?

In short, how does a database work?

I’m building a clone of sqlite from scratch in C in order to understand how it works, and I’m going to document my process as I go.

The Wonderful World of Webpack

Monday, 04 September 2017

Webpack is a JavaScript module bundler, or so the blurb goes. This is an apt name for it. However, what I would like to do in this article, is to expand on the true power of Webpack.

This article will not explain how to use Webpack. Rather, explain the reasoning behind it, and what makes it more special than just a bundler.

Webpack is still a Bundler

One of the main reasons for tools like Webpack is to solve the dependency problem: the problem caused by modules within JavaScript, specifically Node.js.

Node.js allows you to modularise code. Modularisation of code causes an issue with dependencies. Cyclic dependencies can occur, e.g., A -> B -> A referencing. What tools like Webpack can do is build an entire dependency graph of all of your referenced modules. With this graph, analysis can occur to help you alleviate the stress of such a dependency graph.

Webpack can take multiple entry points into your code, and spit out an output that has bundled your dependency graph into one or more files.

Webpack is so much more

For me, what makes webpack so special is the great extension points it provides.

Loaders

Loaders are what I like to refer to as mini-transpilers. They take a file of any kind - e.g., TypeScript, CoffeeScript, JSON, etc. - and produce JavaScript code for later addition to the dependency graph Webpack is building.

The power of loaders is that they are not in short supply. Loaders are an extension point. You can create your own loader, and there are hundreds of default and third-party loaders out there.

For example, could there be a point where we would ever want to take a statically typed language like C#, and transpile this into JavaScript for Webpack to understand?

The limits are boundless with loaders. Loaders can be chained, configured, filtered out based on file type, and more.

Custom Loader Example

As the webpack documentation explains, a loader is just a node module that exports a function:


    module.exports = function(src) {
        return src + '\n'
            + 'window.onload = function() { \n'
            + ' console.log("This is from the loader!"); \n'
            + '}';
    };

This is a trivial example of what a loader is. All this loader is doing is appending a function to write to the console on window load for the current browser session.

With this idea in mind, it becomes apparent that we now have the power to take any source input and interpret it in any way we want. So, coming back to our previous example, we could take C# as the input and create a parser that transpiles it into the native JavaScript that Webpack expects.

A C# to JavaScript transpiler is a bit far-fetched, and in all honesty slightly pointless, but I hope you appreciate how we can leverage loaders in Webpack to make it more than a bundler.
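Because a loader is just a function from a source string to a source string, the example above can even be exercised directly, without running Webpack at all, which is handy for testing. (Real loaders also receive context via `this`; this sketch ignores that.)

```javascript
// Sketch: calling the example loader above as a plain function.
// Webpack passes the file's source in and uses the returned string.
const loader = function (src) {
  return src + '\n'
    + 'window.onload = function() { \n'
    + ' console.log("This is from the loader!"); \n'
    + '}';
};

const out = loader('var greeting = "hi";');
console.log(out.includes('window.onload'));   // true: the wrapper was appended
console.log(out.startsWith('var greeting'));  // true: original source is kept
```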

Plugins

Plugins allow the customisation of Webpack on a broader scope than the file-by-file basis of loaders. Plugins are where you can add extra functionality to the core of Webpack. For example, you can add a plugin for minification, extract certain text from the output (such as CSS), use plugins for compression, and so on.

Plugins work by having access to the Webpack compiler itself. They have access to all the compilation steps that can and may occur, and can modify those steps. This means a plugin can modify what files get produced, what files to add as assets, and so on.

A small example of a plugin is the following:


file: './my-custom-plugin.js'

function MyCustomPlugin() {}

MyCustomPlugin.prototype.apply = function(compiler) {
    compiler.plugin('emit', displayCurrentDate);
    compiler.plugin('after-emit', displayCurrentDate);
}

function displayCurrentDate(compilation, callback) {
    console.log(Date());

    callback();
}

module.exports = MyCustomPlugin;

In this example, we are adding two event handlers to two separate event hooks in the Webpack compiler. The outcome of this is one date that is printed to console just before the assets are emitted to the output directory, and one date after the assets have been emitted.

This plugin can be used in the main Webpack configuration:


var MyCustomPlugin = require('my-custom-plugin');

var webpackConfig = {
    ...
    plugins: [
        new MyCustomPlugin()
    ]
}

This plugin will now run on the emit and after-emit stages of the compilation process. A good list of compiler event hooks is available on the Webpack website.

The importance of plugins, once again, is that they are an extension point. The way Webpack has been designed is to allow the user to fully extend the core of Webpack. There are many plugins to choose from, and a lot are 3rd party.

With this in mind, a plugin could take all your assets that you require, and compress them with an algorithm. In fact, there is already a plugin for this very thing.

Summary

Webpack is a module bundler; that is what the label says. It takes your dependency graph and outputs a browser-readable format.

However, webpack can be so much more.

What if we could take C# code, and transpile it into JavaScript? What if we could take a YAML configuration file, and create a working program just out of configuration? What if we took an image, and automatically made it cropped and greyscaled?

I think if you start thinking of Webpack as more of a transpiler, not just a bundler, the true power of Webpack can be seen.

Thanks for reading and hope this helps.

“Google: it is time to return to not being evil”

I have known Google longer than most. At Opera, we were the first to add their search into the browser interface, enabling it directly from the search box and the address field. At that time, Google was an up-and-coming geeky company. I remember vividly meeting with Google’s co-founder Larry Page, his relaxed dress code and his love for the Danger device, which he played with throughout our meeting. Later, I met with the other co-founder of Google, Sergey Brin, and got positive vibes. My first impression of Google was that it was a likeable company.

Our cooperation with Google was a good one. Integrating their search into Opera helped us deliver a better service to our users and generated revenue that paid the bills. We helped Google grow, along with others that followed in our footsteps and integrated Google search into their browsers.

However, then things changed. Google increased their proximity with the Mozilla foundation. They also introduced new services such as Google Docs. These services were great, gained quick popularity, but also exposed the darker side of Google. Not only were these services made to be incompatible with Opera, but also encouraged users to switch their browsers. I brought this up with Sergey Brin, in vain. For millions of Opera users to be able to access these services, we had to hide our browser’s identity. The browser sniffing situation only worsened after Google started building their own browser, Chrome.

Now, we are making the Vivaldi browser. It is based on Chromium, an open-source project led by Google, built on WebKit, which in turn has its origins in KHTML. Using Google's services should not cause any issues, but sadly, the reality is different. We still have to hide our identity when visiting services such as Google Docs.

And now things have hit a new low.

As the biggest online advertising company in the world, Google is often the first choice for businesses that want to promote their products or services on the Internet. Being excluded from using Google AdWords could be a major problem, especially for digital companies.

Recently, our Google AdWords campaigns were suspended without warning. This was the second time that I have encountered this situation. This time, however, timing spoke volumes.

I had several interviews where I voiced concerns about the data gathering and ad targeting practices – in particular, those of Google and Facebook. They collect and aggregate far too much personal information from their users. I see this as a very serious, democracy-threatening problem, as the vast targeting opportunities offered by Google and Facebook are not only good for very targeted marketing, but also for tailored propaganda. The idea of the Internet turning into a battlefield of propaganda is very far away from the ideal.

Two days after my thoughts were published in an article by Wired, we found out that all the campaigns under our Google AdWords account were suspended – without prior warning. Was this just a coincidence? Or was it deliberate, a way of sending us a message?

When we reached out to Google to resolve the issue, we got a clarification masquerading as vague terms and conditions, some of which, they admitted themselves, were not a "hard" requirement. In exchange for being reinstated in Google's ad network, their in-house specialists dictated how we should arrange content on our own website and how we should communicate information to our users.

We made an effort to understand their explanations and to work with them on their various unreasonable demands (some of which they don't follow themselves, by the way). After almost three months of back and forth, the suspension of our account has been lifted, but only when we bent to their requirements.

A monopoly both in search and advertising, Google, unfortunately, shows that they are not able to resist the misuse of power. I am saddened by this makeover of a geeky, positive company into the bully they are in 2017. I feel blocking competitors on thin reasoning lends credence to claims of their anti-competitive practices. It is also fair to say that Google is now in a position where regulation is needed. I sincerely hope that they’ll get back to the straight and narrow.
