Certificate transparency logs and how they are a gold mine to bug hunters

What is CT?

Certificate Transparency (CT) is an experimental IETF standard. The goal of CT is to allow the public to audit which certificates were created by Certificate Authorities (CA). TLS has a weakness that comes from the large list of CAs that your browser implicitly trusts. If any of those CAs were to maliciously create a new certificate for a domain, your browser would trust it. CT adds benefits to TLS certificate trust: Companies can monitor who is creating certificates for the domains they own. It also allows browsers to verify that the certificate for a given domain is in the public log record.

How can CT logs benefit Red Teams, Pentesters, and Bug Bounty Hunters?

These logs end up being a gold mine of information for penetration testers and Red Teams. I’m not the first to use this; people have been using this technique for years to gather OSINT on targets. When you start to experiment with some of the web-based CT log search sites, you’ll quickly realize this is an information leak by design.

After copying and pasting from the search results, I knew this wasn’t going to scale. Some companies have thousands of domains that show up, so I knew I needed to automate this. I also noticed that some domains either didn’t resolve or were only accessible from internal networks (internal IP space, or a DNS ‘view’ on a BIND server prevented a response).

It goes without saying that these logs can help you find targets that might not have seen many eyes, making it easier to earn a bug bounty award. In my experience, CT logs provide more subdomains than a Google dork with site:domain.com.

ct-exposer

Instead of googling to see if a tool already existed to search CT logs for unknown subdomains, I decided to make my own. In addition to querying the logs and building a subdomain list, I also wanted to run DNS queries on the domains to see which ones had records configured. When companies have one or two thousand subdomains in CT logs, it can take a while to resolve them one at a time. To speed this up I used gevents, which worked nicely: I’ve seen 2,000 resolutions processed in about a second or two. I also added the ability to export the IP addresses in a masscan input format, to help automate the tasks that follow once you’ve located IPs to investigate.
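The first step of that pipeline, turning raw CT log search results into a deduplicated subdomain list, can be sketched in a few lines. This is a hypothetical example, not ct-exposer's actual code: it assumes crt.sh-style JSON where each entry carries a newline-separated `name_value` field, and it simply strips wildcard prefixes and filters out unrelated hits.

```python
import json

def extract_subdomains(ct_json, domain):
    """Collect unique hostnames for `domain` from crt.sh-style JSON,
    where each entry's 'name_value' may hold several newline-separated names."""
    names = set()
    for entry in json.loads(ct_json):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")   # drop wildcard prefixes
            if name.endswith(domain):          # skip unrelated spam hits
                names.add(name)
    return sorted(names)

# Hypothetical sample of what a CT log search might return:
sample = json.dumps([
    {"name_value": "vpn.example.com\n*.shop.example.com"},
    {"name_value": "vpn.example.com"},   # duplicates are common across certs
    {"name_value": "spammer.net"},       # unrelated hit, filtered out
])
print(extract_subdomains(sample, "example.com"))
# ['shop.example.com', 'vpn.example.com']
```

From a list like this, concurrent DNS resolution (gevents in ct-exposer's case) sorts the names into live and internal-only hosts.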

Interesting things I’ve learned during this project

Symantec’s CT log search tool has a bug. If you search for a domain with it and then hit their ‘export as CSV’ endpoint, it does not return the correct response. The CSV is filled with spam domains that do not include the domain you’re looking for, but rather hits that look like: domain-requested.com.something.spammer.net. I suppose this would be useful for companies that want to track down phishing sites, but it is unhelpful for anyone else. The web page’s table did have the correct information, but that is a pain to scrape. It was also very slow to return search results.

While testing the gevents portion of my code, I noticed that responses from my DNS servers would come back in batches of 36 or 50. It would pause a couple of seconds before the next similarly sized batch came in. At first I thought it was my code, then I suspected Linux, but after trying on another network the resolution of 1,000 domains completed in a second or two. It seems my ISP is rate limiting DNS traffic.

I also noticed that a lot of companies have leaked domains that can only be accessed on their internal networks. Sometimes I even saw internal-only 192.168.0.0/16 addresses coming back. These can be very helpful as server-side request forgery (SSRF) targets, or for pivoting once you get inside the network.

GitHub

I’ve posted ct-exposer on my GitHub. If you have any feedback or suggestions, please contact me.

Example output

python3 ct-exposer.py -d teslamotors.com
[+]: Downloading domain list...
[+]: Download of domain list complete.
[+]: Parsed 76 domain(s) from list.

[+]: Domains found:
205.234.27.243	adfs.teslamotors.com
104.92.115.166	akamaisecure.qualtrics.com
211.147.80.202	cn.auth.teslamotors.com
211.147.88.104	cnvpn.teslamotors.com
209.10.208.24	energystorage.teslamotors.com
209.11.133.110	epc.teslamotors.com
149.14.82.93	euvpn.teslamotors.com
209.11.133.50	extconfl.teslamotors.com
209.11.133.35	extissues.teslamotors.com
209.10.208.31	fleetview.teslamotors.com
64.125.183.134	leaseapp.teslamotors.com
64.125.183.134	leaseappde.teslamotors.com
209.11.133.11	lync.teslamotors.com
211.147.80.201	mycn-origin.teslamotors.com
205.234.27.211	origin-www45.teslamotors.com
205.234.31.120	owner-api.teslamotors.com
12.201.132.70	plcvpn.teslamotors.com
205.234.27.246	quickbase.teslamotors.com
104.86.205.249	resources.teslamotors.com
209.10.208.55	sdlcvpn.teslamotors.com
209.11.133.37	service.teslamotors.com
205.234.27.226	sftp.teslamotors.com
23.227.38.64	shop.eu.teslamotors.com
209.133.79.61	shop.teslamotors.com
23.227.38.64	shop.uk.teslamotors.com
205.234.27.197	smswsproxy.teslamotors.com
209.11.133.36	supercharger.teslamotors.com
209.133.79.59	suppliers.teslamotors.com
209.133.79.61	tesla.com
209.11.133.106	teslamotors.com
205.234.27.200	teslaplm-external.teslamotors.com
209.11.133.107	toolbox.teslamotors.com
209.10.208.20	trt.teslamotors.com
205.234.27.250	upload.teslamotors.com
209.10.208.27	us.auth.teslamotors.com
205.234.27.218	vpn.teslamotors.com
211.147.80.205	wechat.teslamotors.com
205.234.27.212	wsproxy.teslamotors.com
209.133.79.54	www-origin.teslamotors.com
104.86.216.34	www.teslamotors.com
209.11.133.61	xmail.teslamotors.com
211.147.80.203	xmailcn.teslamotors.com

[+]: Domains with no DNS record:
none	cdn02.c3edge.net
none	creditauction.teslamotors.com
none	evprd.teslamotors.com
none	imail.teslamotors.com
none	jupytersvn.teslamotors.com
none	leadgen.teslamotors.com
none	lockit.teslamotors.com
none	lockpay.teslamotors.com
none	neovi-vpn.teslamotors.com
none	origin-wte.teslamotors.com
none	referral.teslamotors.com
none	resources.tesla.com
none	securemail.teslamotors.com
none	shop.ca.teslamotors.com
none	shop.no.teslamotors.com
none	sip.teslamotors.com
none	sjc04p2staap04.teslamotors.com
none	sling.teslamotors.com
none	tesla3dx.teslamotors.com
none	testimail.teslamotors.com
none	toolbox-energy.teslamotors.com
none	vpn-node0.teslamotors.com
none	wd.s3.teslamotors.com
none	www-uat2.teslamotors.com
none	www45.teslamotors.com

Smenu, a command-line advanced selection filter and a menu builder for terminal

What is it?

smenu is a selection filter just like sed is an editing filter.

This simple tool reads words from standard input, presents them in a cool interactive window below the current line on the terminal, and writes the selected word, if any, to standard output.

After having unsuccessfully searched the NET for what I wanted, I decided to try to write my own.

I have tried hard to make its usage as simple as possible. It is UTF-8 aware and should work even on an old vt100 terminal.

The wiki (https://github.com/p-gen/smenu/wiki) contains screenshots and animations that detail some of the concepts and features of smenu.

How to build it?

smenu can be built on every system where a working terminfo development environment is available. This includes every Unix and Unix-like system I am aware of.

Please use the provided build.sh to build the executable. This script accepts the same arguments as configure; type build.sh --help to see them.

The script autogen.sh is also provided if you need to generate a new configure script from configure.ac and Makefile.am. The GNU autotools will need to be installed for this script to work.

How to install it?

Once the build process has finished, a simple make install with the appropriate privileges will do it.

Some examples.

Linux example.

This program should work on most Unixes, but if you are using Linux, try typing the following line at a shell prompt (shown here as "$ "):

$ R=$(grep Vm /proc/$$/status \
      | smenu -n20 -W $':\t\n' -q -c -b -g -s /VmH)
$ echo $R

Something like this should now be displayed, with the program waiting for commands (the numbers are mine; yours will differ):

VmPeak¦    23840 kB
VmSize¦    23836 kB
VmLck ¦        0 kB
VmHWM ¦     2936 kB
VmRSS ¦     2936 kB
VmData¦     1316 kB
VmStk ¦      136 kB
VmExe ¦       28 kB
VmLib ¦     3956 kB
VmPTE ¦       64 kB
VmSwap¦        0 kB

A cursor should be under "VmHWM ".

After having moved the cursor to " 136 kB" and ended the program with <Enter>, the shell variable R should contain: " 136 kB".

Unix example.

The following command, which is Unix brand agnostic, should give you a scrolling window if you have more than 10 accounts on your Unix with a UID lower than 100:

$ R=$(awk -F: '$3 < 100 {print $1,$3,$4,$NF}' /etc/passwd \
      | smenu -n10 -c)
$ echo $R

On mine (LANG and LC_ALL set to POSIX) it displays:

at      25 25  /bin/bash      \
sys     0  3   /usr/bin/ksh   +
bin     1  1   /bin/bash      |
daemon  2  2   /bin/bash      |
ftp     40 49  /bin/bash      |
games   12 100 /bin/bash      |
lp      4  7   /bin/bash      |
mail    8  12  /bin/false     |
named   44 44  /bin/false     |
ntp     74 108 /bin/false     v

Note the presence of a scrollbar.

Testing and reporting.

The included testing system is relatively young, please be indulgent.

IMPORTANT: the testing system has some dependencies; please read tests/README.rst before going further.

WARNING: running all the tests with ./tests.sh in the tests directory will take some time (around 15 minutes for now).

NOTE: on some systems like *BSD, some tests may fail. This can be explained by differences in POSIX/libc/... implementations, notably when specific regular expressions or uncommon UTF-8 byte sequences are used.

If a test fails for unknown reason, then please send me its directory name and the relevant .bad file.

If you are hit by a bug that no test covers, then you can create a new test in an existing or new subdirectory of the tests directory. Read tests/README.rst, use an existing test as a model, create a .in file and a .tst file, and send them to me along with the produced files.

Interested?

Please use the included man page to learn more about this little program.

Why I never finish my Haskell programs (part 2 of ∞)

Here's something else that often goes wrong when I am writing a Haskell program. It's related to the problem in the previous article but not the same.

Let's say I'm building a module for managing polynomials. Say Polynomial a is the type of (univariate) polynomials over some number-like set of coefficients a.

Now clearly this is going to be a functor, so I define the Functor instance, which is totally straightforward:

      instance Functor Polynomial where
          fmap f (Poly a) = Poly $ map f a

Then I ask myself if it is also going to be an Applicative. Certainly the pure function makes sense; it just lifts a number to be a constant polynomial:

       pure a = Poly [a]

But what about <*>? This would have the type:

    (Polynomial (a -> b)) -> Polynomial a -> Polynomial b

The first argument there is a polynomial whose coefficients are functions. This is not something we normally deal with. That ought to be the end of the matter.

But instead I pursue it just a little farther. Suppose we did have such an object. What would it mean to apply a functional polynomial and an ordinary polynomial? Do we apply the functions on the left to the coefficients on the right and then collect like terms? Say for example

$$\begin{align} \left((\sqrt\bullet) \cdot x + \left(\frac1\bullet\right) \cdot 1 \right) ⊛ (9x+4) & = \sqrt9 x^2 + \sqrt4 x + \frac19 x + \frac14 \\& = 3x^2 + \frac{19}{9} x + \frac 14 \end{align}$$

Well, this is kinda interesting. And it would mean that the pure definition wouldn't be what I said; instead it would lift a number to a constant function:

    pure a = Poly [\_ -> a]

Then the ⊛ can be understood to be just like polynomial multiplication, except that coefficients are combined with function composition instead of with multiplication. The operation is associative, as one would hope and expect, and even though the ⊛ operation is not commutative, it has a two-sided identity element, which is Poly [id]. Then I start to wonder if it's useful for anything, and how ⊛ interacts with ordinary multiplication, and so forth.
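The worked example above is easy to sanity-check in code. Here is a quick sketch (in Python rather than Haskell, purely to verify the arithmetic) of ⊛ as polynomial multiplication with function application f(c) in place of the coefficient product f·c; math.isqrt stands in for the square root since the radicands here are perfect squares:

```python
from fractions import Fraction
from math import isqrt

def apply_poly(fpoly, poly):
    """⊛: convolve like polynomial multiplication, but combine a
    coefficient-function f with a coefficient c by application f(c).
    Coefficients are listed from the highest power down."""
    out = [0] * (len(fpoly) + len(poly) - 1)
    for i, f in enumerate(fpoly):
        for j, c in enumerate(poly):
            out[i + j] += f(c)
    return out

# (sqrt(.)·x + (1/.)·1) ⊛ (9x + 4)
coeffs = apply_poly([isqrt, lambda c: Fraction(1, c)], [9, 4])
print(coeffs)  # [3, Fraction(19, 9), Fraction(1, 4)], i.e. 3x² + 19/9·x + 1/4
```

This reproduces the hand computation above, including the collected middle term √4 + 1/9 = 19/9.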

This is different from the failure mode of the previous article because in that example I was going down a Haskell rabbit hole of more and more unnecessary programming. This time the programming is all trivial. Instead, I've discovered a new kind of mathematical operation and I abandon the programming entirely and go off chasing a mathematical wild goose.

How Tutanota replaced Google’s FCM with their own notification system

As mentioned in This Week In F-Droid 17, Tutanota is now on F-Droid.

In this special post, Ivan from Tutanota tells us the story.

Hi, I’m Ivan and I am developing Tutanota to help build the web of the future where our right to privacy is being respected. I believe that privacy should not be a luxury for the rich and tech-savvy, it should be a basic human right.

GCM (or, as it is now called, FCM, Firebase Cloud Messaging) is a service owned by Google. We at Tutanota used FCM for our old Android app. Unfortunately, FCM includes Google’s tracking code for analytics purposes, which we didn’t want to use. And, even more importantly: to use FCM, you have to send all your notification data to Google. You also have to use their proprietary libraries. Because of the privacy and security concerns that naturally go along with this, we did not send any information in the notification messages of the old app (which, understandably, led to complaints from our users). The push notification in the old app therefore only mentioned that you had received a new message, without any reference to the email itself or to the mailbox the message had been placed in.

FCM is quite convenient to use, and over the years Google has made changes to Android that make it harder not to use their service for notifications. On the other hand, giving up Google’s notification service would free us from requiring our users to have Google Play Services on their phones.

The challenge to replace Google’s FCM

The Tutanota apps are Libre software, and we wanted to publish our Android app on F-Droid. We wanted our users to be able to use Tutanota on every ROM and every device, without the control of a third-party like Google. We decided to take on the challenge and to build our own push notification service.

When we started designing our push system, we had several goals in mind:

  • it must be secure
  • it must be fast
  • it must be power-efficient

We researched how others (Signal, Wire, Conversations, Riot, Facebook, Mastodon) have solved similar problems. We had several options in mind, including WebSockets, MQTT, Server-Sent Events and HTTP/2 Server Push.

Replaced FCM with SSE

We settled on SSE (Server-Sent Events) because it seemed like a simple solution. By that I mean “easy to implement, easy to debug”. Debugging these types of things can be a major headache, so one should not underestimate this factor. Another argument in favour of SSE was relative power efficiency: we didn’t need upstream messages, and a constant connection was not our goal.

So, what is SSE?

SSE is a web API which allows a server to send events to connected clients. It is a relatively old API which is, in my opinion, underused. I had never heard of SSE before looking at the federated network Mastodon: they use SSE for real-time timeline updates, and it works great.

The protocol itself is very simple and resembles good old polling: the client opens a connection, and the server keeps it open. The difference from classical polling is that the connection stays open across multiple events. The server can send events and data messages; they are just separated by new lines. So the only thing the client needs to do is open a connection with a long timeout and read the stream in a loop.
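The wire format really is that simple. Here is a minimal client-side parser sketch (not Tutanota's actual code, and simplified: real SSE concatenates multiple `data:` lines per event, which this version skips): events are runs of `field: value` lines terminated by a blank line, and lines starting with `:` are comments, usable as keep-alive heartbeats.

```python
def parse_sse(stream_text):
    """Parse Server-Sent Events: each event is a run of 'field: value'
    lines ('event', 'data', ...) terminated by a blank line."""
    events, current = [], {}
    for line in stream_text.splitlines():
        if not line:                      # blank line ends the current event
            if current:
                events.append(current)
            current = {}
        elif line.startswith(":"):        # comment line, e.g. a heartbeat
            continue
        elif ":" in line:
            field, _, value = line.partition(":")
            current[field] = value.lstrip()
    return events

stream = 'event: notification\ndata: {"mailbox": "inbox"}\n\n: heartbeat\n\n'
print(parse_sse(stream))
# [{'event': 'notification', 'data': '{"mailbox": "inbox"}'}]
```

Note how the heartbeat produces no event at all; it only keeps the connection alive through firewalls and NATs.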

SSE fits our needs better than WebSockets would (it is cheaper and converges faster, because it is not duplex). We’ve seen multiple chat apps try to use WebSockets for push notifications, and it didn’t seem power efficient.

We already had some experience with WebSockets, and we knew that firewalls don’t like keep-alive connections. To solve this, we used the same workaround for SSE as we did for WebSockets: we send empty “heartbeat” messages every few minutes. We made this interval adjustable from the server side, and randomised it so as not to overwhelm the server.
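The randomised interval implies scheduling along these lines (a sketch with made-up parameter names, not Tutanota's code): the server hands the client a base interval, and the client jitters around it so that many clients never heartbeat in lockstep.

```python
import random

def next_heartbeat_delay(base_seconds, jitter=0.2):
    """Return the delay until the next heartbeat: the server-supplied
    base interval, randomised by +/- jitter to spread server load."""
    spread = base_seconds * jitter
    return base_seconds + random.uniform(-spread, spread)

delay = next_heartbeat_delay(180)   # e.g. a 3-minute base interval
assert 144 <= delay <= 216          # always within +/-20% of the base
```

Making `base_seconds` server-adjustable also lets the operator tune the trade-off between battery use and how quickly a dead connection is noticed.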

It should be noted that the Tutanota app has multi-account support, which posed a challenge: we wanted to keep only one connection open per device. After a few iterations, we found a design that satisfied us. Each device has a single identifier. When opening the connection, the client sends the list of users for which it wants to receive notifications, and the server validates this list against user records and filters out invalid entries.

Users may delete a notification token from their Settings, but doing so does not affect other logins on the device. In addition, we had to build a mechanism to track delivery when a notification is received. Unfortunately, we discovered that our server is unable to detect when a connection is broken, so we had to send confirmations from the client side.

To receive notifications, we leverage Android’s capabilities. We run a background service which keeps the connection to the server open, similar to what the FCM process does. Another difficulty was caused by Doze mode, introduced in Android M. Doze, which is turned on after a period of inactivity, among other things prevents background processes from accessing the network. As you can imagine, this prevents our app from receiving notifications.

We mitigate this problem by asking users to grant our app an exemption from battery optimisations, which has worked fairly well. A similar problem, unrelated to Doze, is vendor-specific battery optimisations. To prolong the battery life of their devices, phone manufacturers like Xiaomi enable strict battery optimisations by default. Luckily users can disable them, but we must communicate this better.

Another problem was caused by the changes in Android O. One of them is background process restrictions: unless your app is visible to the user, your background processes are stopped and you are unable to launch new ones.

Initially we thought we could solve this by showing a persistent notification with minimal priority, which would be visible in the notification drawer but not in the status bar. This didn’t work on Oreo: if you launch a background service and give its notification minimum priority, the priority is upgraded to a higher one (visible all the time) and, in addition, the system shows another persistent notification: “App X is using battery”.

We initially planned to explain to users how to hide these persistent notifications, but that wasn’t a great user experience, so we had to find a better solution. We leveraged the Android Job mechanism to launch our service periodically (at least every 15 minutes), and we also try to keep it alive afterwards. We don’t hold WakeLocks manually; the system does this for us. We were able to ditch persistent notifications altogether. Even if a notification sometimes arrives with a small delay, it is always received, and the emails are there instantly.

In the end it took some work, but it was totally worth it. Our new app is still in beta, but thanks to non-blocking IO we have been able to maintain thousands of simultaneous connections without problems. We freed our users from the Google Play Services requirement, and finally everyone is able to get the Tutanota app on F-Droid. The system now combines good power efficiency with speed.

Final thought: Every user should be able to choose a “Notification Provider” for every app

Wouldn’t it be great if the user could just pick a “push notification provider” in the phone settings, and the OS managed all these hard details by itself? Then every app which doesn’t want to be policed by the platform owner wouldn’t have to invent the system anew. It could be end-to-end encrypted between the app and the app server. There is no real technical difficulty in this, but as long as our systems are controlled by big players who do not allow it, we have to solve it ourselves.

Tell HN: Google requiring phone number to log into Chromebook

Long story short: I bought a couple of Chromebooks over the years (they're nice multi-user machines) and created Google accounts on each, but never gave a phone number. Now, after years of use, Google pops up an "unrecognized device" roadblock AFTER I enter the password to log in, with the message "enter a phone number to get a text message with a verification code".

There is no mention of suspicious activity. The only trigger I can think of is a recent modem reset that changed my Public IP, and my new IP doesn't appear to resolve to my old physical location in Google's geoip db.

Am I crazy or does this seem like an extremely cynical attempt to get more phone numbers? I don't even understand how giving them my phone number proves anything as I definitely did not ever give them one previously.

Unfortunately burner phones are not available in my country, so that's not an option.


To Restore Civil Society, Start with the Library

To appreciate why this matters, compare the social space of the library with the social space of commercial establishments like Starbucks or McDonald’s. These are valuable parts of the social infrastructure, but not everyone can afford to frequent them, and not all paying customers are welcome to stay for long.

Older and poor people will often avoid Starbucks altogether, because the fare is too expensive and they feel that they don’t belong. The elderly library patrons I got to know in New York told me that they feel even less welcome in the trendy new coffee shops, bars and restaurants that are so common in the city’s gentrifying neighborhoods. Poor and homeless library patrons don’t even consider entering these places. They know from experience that simply standing outside a high-end eatery can prompt managers to call the police. But you rarely see a police officer in a library.

This is not to say that libraries are always peaceful and serene. During the time I spent doing research, I witnessed a handful of heated disputes, physical altercations and other uncomfortable situations, sometimes involving people who appeared to be mentally ill or under the influence of drugs. But such problems are inevitable in a public institution that’s dedicated to open access, especially when drug clinics, homeless shelters and food banks routinely turn away — and often refer to the library! — those who most need help. What’s remarkable is how rarely these disruptions happen, how civilly they are managed and how quickly a library regains its rhythm afterward.

The openness and diversity that flourish in neighborhood libraries were once a hallmark of urban culture. But that has changed. Though American cities are growing more ethnically, racially and culturally diverse, they too often remain divided and unequal, with some neighborhoods cutting themselves off from difference — sometimes intentionally, sometimes just by dint of rising costs — particularly when it comes to race and social class.

Libraries are the kinds of places where people with different backgrounds, passions and interests can take part in a living democratic culture. They are the kinds of places where the public, private and philanthropic sectors can work together to reach for something higher than the bottom line.

This summer, Forbes magazine published an article arguing that libraries no longer served a purpose and did not deserve public support. The author, an economist, suggested that Amazon replace libraries with its own retail outlets, and claimed that most Americans would prefer a free-market option. The public response — from librarians especially, but also public officials and ordinary citizens — was so overwhelmingly negative that Forbes deleted the article from its website.

We should take heed. Today, as cities and suburbs continue to reinvent themselves, and as cynics claim that government has nothing good to contribute to that process, it’s important that institutions like libraries get the recognition they deserve. It’s worth noting that “liber,” the Latin root of the word “library,” means both “book” and “free.” Libraries stand for and exemplify something that needs defending: the public institutions that — even in an age of atomization, polarization and inequality — serve as the bedrock of civil society.

If we have any chance of rebuilding a better society, social infrastructure like the library is precisely what we need.

Eric Klinenberg (@EricKlinenberg), a professor of sociology and the director of the Institute for Public Knowledge at New York University, is the author of the forthcoming book “Palaces for the People: How Social Infrastructure Can Help Fight Inequality, Polarization, and the Decline of Civic Life,” from which this essay is adapted.

Time to look beyond Oracle's JDK

From Java 11, it's time to think beyond Oracle's JDK and to appreciate the depth of the ecosystem built on OpenJDK. Here are some of the key builds available.

This is a quick follow-up to my recent zero-cost Java post.

OpenJDK builds

In practical terms, there is only one set of source code for the JDK. The source code is hosted in Mercurial at OpenJDK.

Anyone can take that source code, produce a build and publish it on a URL. But there is a distinct certification process that should be used to ensure the build is valid.

Certification is run by the Java Community Process, which provides a Technology Compatibility Kit (TCK, sometimes referred to as the JCK). If an organization produces an OpenJDK build that passes the TCK, then that build can be described as "Java SE compatible".

Note that the build cannot be referred to as "Java SE" without the vendor getting a commercial license from Oracle. For example, builds from AdoptOpenJDK that pass the TCK are not "Java SE", but "Java SE compatible" or "compatible with the Java SE specification". Note also that certification is currently on a trust-basis - the results are not submitted to the JCP/Oracle for checking and cannot be made public. See Volker's excellent comment for more details.

To summarise, the OpenJDK + Vendor process turns one sourcebase into many different builds.

In the process of turning the OpenJDK sourcebase into a build, the vendor may, or may not, add some additional branding or utilities, provided these do not prevent certification. For example, a vendor cannot add a new public method to an API, or a new language feature.

Oracle JDK

From Java 11 this is a branded commercial build with paid-for support. It may be available for free for development use, but not for production. Oracle plans to provide full paid support until 2026 or later (details). Note that unlike in the past, the Oracle JDK is not "better" than the OpenJDK build (provided both are at the same security patch level).

OpenJDK builds by Oracle

These are $free pure unbranded builds of OpenJDK under the GPL license with the Classpath Exception (safe for use in companies). These builds are only available for the first six months of a release. For Java 11, the expectation is that there will be Java 11.0.0, then two security patches, 11.0.1 and 11.0.2. To continue using the OpenJDK builds by Oracle with security patches, you would have to move to Java 12 within one month of its release. (Note that the provision of security patches is not the same as support. Support involves paying someone to triage and act upon your bug reports.)

AdoptOpenJDK builds

These are $free pure unbranded builds of OpenJDK under the GPL license with the Classpath Exception. Unlike the OpenJDK builds by Oracle, these builds will continue for a much longer period for major releases like Java 11. The Java 11 builds will continue for four years, one year after the next major release (details). AdoptOpenJDK is a community group. They will provide builds as long as other groups create and publish security patches in a source repository at OpenJDK. Both IBM and Red Hat have indicated that they intend to provide those security patches.

AdoptOpenJDK OpenJ9 builds

In addition to the standard OpenJDK builds, AdoptOpenJDK will also be providing builds with OpenJ9 instead of HotSpot. OpenJ9 was originally IBM's JVM, but OpenJ9 is now Open Source at Eclipse.

Red Hat OpenJDK builds

Red Hat provides builds of OpenJDK via Red Hat Enterprise Linux (RHEL), which is a commercial product with paid-for support (details). They are very good at pushing security patches back to OpenJDK, and Red Hat has run the security-updates project for Java 6 and 7. The Red Hat build is better integrated into the operating system, so it is not a pure OpenJDK build (although you wouldn't notice the difference).

Other Linux OpenJDK builds

Different Linux distros have different ways to access OpenJDK. Here are some links for common distros: Debian, Fedora, Arch, Ubuntu.

Azul Zulu

Zulu is a branded build of OpenJDK with commercial paid-for support. In addition, Azul provides some Zulu builds for $free as "Zulu Community", however there are no specific commitments as to the availability of those $free builds. Azul has an extensive plan for supporting Zulu commercially, including plans to support Java 9, 13 and 15, unlike any other vendor (details).

IBM

IBM provides and supports a JDK for Java 8 and earlier. They also provide commercial paid-for support for the AdoptOpenJDK builds with OpenJ9.

SAP

SAP provides a JDK for Java 10 and later under the GPL+CE license. They also have a commercial closed-source JVM. I haven't found any information on support lifetimes.

Others

There are undoubtedly other builds of OpenJDK, both commercial and $free. Please contact me if you'd like me to consider adding another section.

Summary

There are many different builds of OpenJDK, all derived from the same upstream source repository. Each build offers its own unique take: $free or commercial, branded or unbranded.

Choice is great. But if you just want the "standard", currently my best advice is to use the OpenJDK builds by Oracle, AdoptOpenJDK builds or the one in your Operating System (Linux).

DIY puller RC flying wing airplane with Kline–Fogleman modified airfoil

To build the airframe I used the drawing of the Dead Simple Wing 24'' from the website RCGroups.com. This model is based on the Kline-Fogleman modified airfoil KFm2. When printing the schematics I increased the scale to 123% in Adobe Reader, so my wing has a wingspan of 75 cm, almost 30 inches. This time it is a puller twin-engine airplane. Puller, or tractor, means that the engines sit forward of the wing. A puller has some advantages over a pusher: it is quieter, easier to launch, and more stable in the air.


This time I glued a carbon rod into the upper layer of the foam board for rigidity. It is visible in the photo as a black line between the engine bays.


This aircraft launches very easily; it almost wants to fly by itself. In the air it is incredibly stable. I could even film it with a smartphone while piloting. This plane can fly both slowly and fast.


Engine firewalls are made from ordinary 4 mm plywood, strengthened with CA glue. I attached the engine bays with hot glue and packing tape, and the airframe is also built with hot glue. This time I used super-strong hot glue sticks that can withstand temperatures from -10°C to +80°C. The total cost of the foam-board sheet, hot glue sticks, and packing tape used to build this airframe is about 5 USD. I salvaged the two motors, ESCs, and propellers from a retired quadcopter.


The stickers mark the CG, the center of gravity. The KFm2 is not very sensitive to the CG, but the plane certainly flies better when the CG is balanced. It can climb almost vertically thanks to its twin-engine configuration and the remarkable lifting characteristics of the KFm2 airfoil.


The transportation box is also made from foam board and hot glue, and laminated with packing tape for strength and water resistance.


The box is strong enough to be transported on a bicycle.


These are the settings I used to print the drawing.

If you have questions about building or piloting this aircraft, please, do not hesitate to contact me. I will gladly assist you.

Warning: Hot glue reaches a temperature of about 200°C and can cause serious burns. Be very careful while working with it. When I work with hot glue I keep a cold gel pad from the refrigerator (about -18°C) nearby, to immediately cool any glue that lands on the skin. The hobby knife used to cut foam board is also very sharp and can cause serious injuries. I always keep a first aid kit and my medical insurance card nearby when working with sharp tools.

O.M. September 8, 2018


Tools of the Trade, from Hacker News


Tools of The Trade, from Hacker News.

Contents · Use · Authors

Background

In 2010, Joshua Schachter, the founder of Delicious, posted the following on Hacker News:

When I first started delicious, we had to host most of the services ourselves. CVS, mail, mailing lists, etc etc etc.

These days, lots of that stuff is available as SaaS. What are the tools and services people use instead of hosting their own?

(I'm not talking about actual production services like EC2 and Heroku and whatnot. We can go over this in another thread.)

In 2013, Sharjeel Qureshi posted the following:

Few years ago, Joshua Schachter started this thread on HN for discussing hosted useful services: https://news.ycombinator.com/item?id=1769910

The contribution in thread introduced many interesting SaaS services which can immensely help in deploying services as well as development.

It's been three years since then. What do we have today?

Many thanks to the big contributors to the previous threads, including garrettdimon, espeed, netshade, and cmadan, and many more that I haven't named.

Now

I've collected more data from Hacker News, AngelList and Quora, to make the 2015 (and hopefully beyond) version. This list also includes self-hosted as well as hosted services.

It's hosted on GitHub for a reason! Please submit pull requests.

Contents

Identity Verification

Browser/Email Testing

Bug/Issue Tracking

  • BitBucket Issues | https://bitbucket.org | @bitbucket | $10/mo - $200/mo | Unlimited private code repositories | Host, manage, and share Git and Mercurial repositories in the cloud. Free, unlimited private repositories for up to 5 developers give teams the flexibility to grow and code without restrictions.
  • BugHerd | https://bugherd.com | @bugherd | $29/mo - $180/mo | Turn client feedback into actionable tasks. | BugHerd lets you quickly see, at a glance, how your project is going and what everyone is working on. The task board lets you keep team members in sync by assigning and scheduling tasks with a simple drag and drop.
  • Bugify | $59 | https://bugify.com | @bugify | Self hosted issue management system. One-time payment. Written in PHP.
  • GitHub Issues | https://github.com | @GitHub | $7/mo - $50/mo | Build software better, together. | GitHub is the largest code host on the planet with over 13.2 million repositories. Large or small, every repository comes with the same powerful tools. These tools are open to the community for public projects and secure for private projects.
  • GitLab Issues | https://about.gitlab.com | @gitlabhq | free | GitLab is open source software to collaborate on code that is used by more than 100,000 organisations worldwide. Unlimited private repositories on GitLab.com or host your own instance. Enterprise Edition includes deep LDAP support.
  • Huboard | https://huboard.com | @huboard | $7/mo - $24/mo | GitHub issues made awesome | Instant project management for GitHub repositories
  • JIRA | https://www.atlassian.com/software/jira | @JIRA | $10/mo hosted - $10/yr self-hosted | JIRA is the tracker for teams planning and building great products. Thousands of teams choose JIRA to capture and organize issues, assign work, and follow team activity. At your desk or on the go with the new mobile interface, JIRA helps your team get the job done.
  • Lighthouse | http://lighthouseapp.com | @lighthouseapp | $25/mo - $100/mo | Whether you're a large company or a small bootstrapped team, Lighthouse is the perfect ticket tracking solution. | Collaborate effortlessly on projects. Whether you’re a team of 5 or studio of 50, Lighthouse will help you keep track of your project development with ease.
  • Pinitto.me | https://pinitto.me | Post It Notes on a virtual corkboard (OSS)
  • Post It Notes on a (Physical) Wall
  • Sifter | http://sifterapp.com | @sifterapp | $29/mo - $149/mo | Less configuring. More doing. | We've put in the time researching bug tracking to help create the simplest possible workflow for you to get work done. From time-to-time we even blog about some of our ideas around this optimal bug tracking process…
  • Usersnap | https://usersnap.com | @usersnap | $19/mo-$99/mo | Usersnap is visual bug reporting for anyone working on web projects. | Get visual feedback and precious browser information with every bug report to reproduce and fix them even faster.

Planning & Project Management

  • Aha! | https://www.aha.io | @aha_io | $69/mo, Ask about startup plan | The new way to create brilliant product strategy and visual roadmaps.
  • Sprintly | http://sprint.ly | @sprintly | $49/mo- $399/mo | Don't ask how projects are going. Watch how they're going in real-time. | Use our elegant interface to prioritize, tag, manage, estimate, and measure your software developers' progress in real-time.
  • Podio | https://podio.com | @Podio | free | Teamwork made easy | News and views from the team behind Podio - changing the way the world works since 2009.
  • Flow | https://www.getflow.com | @flowapp | $19/mo - $249/mo | Stop managing projects from your inbox. | Flow is a collaborative task management app for the web and iPhone.
  • Basecamp | https://basecamp.com | @37signals | $20/mo - $150/mo | The official account for Basecamp®. Helping Basecamp customers every Mon-Fri 9am-6pm CT! | Basecamp to help organize the store design, develop fixtures, and manage craftspeople. Primarily through word-of-mouth alone, Basecamp has become the world’s #1 project management tool.
  • Apollo | http://www.apollohq.com | @applicomhq | $23/mo - $148/mo | Integrated Project and Contact Management Done Right | Apollo is project and contact management done right. Using Apollo, you will realise that it's built to help you get things done, quickly and efficiently. With Apollo, you will always know where your projects, your contacts and your life are at and you will feel on top of everything — regardless of how hectic your schedule is.
  • Pivotal Tracker | https://www.pivotaltracker.com | @pivotaltracker | $7/mo - $175/mo | BUILD BETTER SOFTWARE FASTER | Break your project down into bite-sized stories, which get your product closer to the business goal. Use points to estimate each story’s relative complexity and prioritize it in the backlog.
  • Asana | https://asana.com | @asana | $50/mo - $800/mo | Teamwork without email | Asana is our go-to for prioritizing projects, keeping up w/orders & staying on top of a growing to-do list
  • WeekPlan | https://weekplan.net | @weekplan | $7/mo- $19/mo | Time management inspired by the "7 habits of highly effective people" Features: goals of the week, week view and quadrant matrix, pomodoro timer, shared workspaces, etc...
  • Trello | https://trello.com | @trello | $5/mo | Organize anything, together | Trello is the fastest, easiest way to organize anything, from your day-to-day work, to a favorite side project, to your greatest life plans.
  • Blossom | https://www.blossom.co | @blossom_io | $19/mo - $149/mo | Agile Project Management | Blossom gives each member of the team a clear overview of who’s doing what & why, and at the same time it helps you focus on what matters most. With Blossom you can efficiently manage your whole development process in one place, built with simplicity in mind. Blossom is based on the principles of Kanban, a way of working that emphasizes iterative delivery cycles and continuously improves the workflow of your team or organization.
  • Redmine | http://www.redmine.org | Flexible project management web application. Written using Ruby on Rails framework, it is cross-platform and cross-database.
  • JIRA Agile | https://www.atlassian.com/software/jira/agile | @jira | $10/mo - $30/mo | Dream big, work smart, deliver fast | Makers of @JIRA, @Confluence, @Bitbucket and more. Software to plan, collaborate, code, and support. Built for teams
  • Tom's Planner | https://www.tomsplanner.com | @tomsplanner | $9/mo - $19/mo | | Tom's Planner is online Gantt chart software that allows anyone to create, collaborate and share Gantt Charts online with drag and drop simplicity. It's web based, extremely intuitive and easy-to-use.
  • LeanKit | https://leankit.com | @leankit | $15/mo - $19/mo | Instant Project visibility | In LeanKit, you map your organization’s processes onto virtual whiteboards. On each board the process steps are represented as vertical and horizontal lanes. Cards represent work items, which team members update and move from across the board as they complete their share of the work. Rather than having to ask for status reports, managers and customers can just look at the board. Board updates are visible in seconds around the globe and e-mail alerts and RSS feeds are available, so you and your team can take immediate action to resolve issues before they turn into serious problems.
  • Breeze | https://www.breeze.pm | @BreezeTeam | $29/mo - $129/mo | Organize and track everything. Breeze shows you what's being worked on, who's working on what, where things are in the workflow and how much time it took.

Time Tracking

  • Toggl | https://toggl.com | @toggl | free-$59/mo | Toggl’s time tracker is built for speed and ease of use. Time logging with Toggl is so simple that you’ll actually use it.
  • Clockify | https://clockify.me | @clockify | free | Clockify is the only 100% free time tracking software that lets you and your team track time with one click. It works like Toggl but offers unlimited features and unlimited users.
  • Hubstaff | https://hubstaff.com | @hubstaff | free-$9/mo | Hubstaff is time tracking software designed to make remote team management more effective and efficient. You just have to sign up, download our intuitive desktop app and push the start button to begin tracking time.
  • Tickspot | https://www.tickspot.com/ | @tickspot | free-$149/mo | Straightforward time tracking software to help your team run more profitable projects. Whether you prefer iOS, Android, the Apple Watch, or your desktop computer, Tick is the easiest way to track your time.
  • Kimai | https://www.kimai.org/ | @kimai_org | Kimai is a free open source timetracker. It tracks work time and prints out a summary of your activities on demand. Yearly, monthly, daily, by customer, by project, by action, etc. | self-hosted

App Developer Tools

Localization & Internationalization

Business & Traffic Analytics

  • KISSmetrics | https://www.kissmetrics.com | @kissmetrics/ | $150/mo - $500/mo | KISSmetrics tells you who’s doing it. | Every last piece gets connected to a real person. All of it. It doesn’t matter if people bounce around between different browsers and devices. Or even if it takes them 6 months to come back. You’ll see what real people do.
  • Localytics | https://www.localytics.com | @localytics/ | Free up to 10k MAUs, $200/mo - $2700/mo above that | Find out what works in your mobile or web app. Do more of it. All in one place | Advanced analytics provide data and insight to help you build more successful apps. Integrated Marketing helps you easily engage and acquire more customers.
  • Mixpanel | https://mixpanel.com | @mixpanel/ | $150/mo - $2000/mo | Actions speak louder than page views. | For years, companies have pushed page views as a primary measure of success. Page view counts are popular because they are easy to report, but ultimately cannot tell you how engaged your visitors are. Mixpanel lets you measure what customers do in your app by reporting actions, not page views.
  • Amplitude | https://amplitude.com | $299/mo | Mobile Analytics for decision makers
  • Snowplow | https://snowplowanalytics.com | @SnowPlowData | | Snowplow is the most powerful, flexible, scalable web analytics platform in the world. | Snowplow enables analysts to perform a wide variety of both simple and sophisticated analytics on your web analytics data.
  • Segment | https://segment.com | | $29/mo - $349/mo | The right way to manage your tools. | The idea is simple: one pipeline for all your data. Send data to any third-party tool with a single integration.
  • Clicky | https://clicky.com | @clicky | $9.99/mo - $19.99/mo | Real Time Web Analytics | Clicky lets you see every visitor and every action they take on your web site, with the option to attach custom data to visitors, such as usernames or email addresses. Analyze each visitor individually and see their full history.
  • Google Analytics | http://www.google.com/analytics/ | Google Analytics lets you measure your advertising ROI as well as track your Flash, video, and social networking sites and applications.
  • Matomo | https://matomo.org | @matomo_org | Whether you are an individual blogger, a small business, or a large corporation, Matomo helps you gain valuable insights to help your business or readership grow.
  • Chartio | https://chartio.com | @chartio | | Visualize and explore your data with Chartio. | Create interactive charts and perfect dashboards through an intuitive drag and drop interface. Switch from basic tables to sophisticated data visualizations in a single click. Powerful filters let you slice and dice your data, and you can drill down into most charts without configuring a thing.
  • Chartbeat | https://chartbeat.com | @chartbeat | $9.95/mo - $49.95/mo | Build a loyal and valuable audience for your site. | Chartbeat's real-time traffic and audience-behavior data shows you who's on your site and how they're engaging with your content right now — so you can take action on what matters when it matters.
  • Calq | https://calq.io | @CalqAnalytics | $0 - $2500/mo | Advanced custom analytics for mobile and web applications. | Calq is an analytics platform that measures user actions rather than page views. An action can be anything a user does: reviewing a product, playing a level on a mobile game, making a purchase on your site, anything. Calq's ability to work with custom events AND custom data is what raises it above more traditional analytics platforms.
  • GoSquared | https://www.gosquared.com | @gosquared | £21.60 - £396/mo | Easy to use real-time web analytics.
  • Improvely | https://www.improvely.com | @improvelycom | $29 - $899/mo | Conversion tracking and click fraud monitoring platform. The easiest way to track the performance of marketing campaigns and monitor them for click fraud.
  • Keen IO | https://keen.io | @keen_io | $0 - $2000+/mo | Custom analytics shouldn't be a pain in the backend. Keen IO's powerful APIs do the heavy lifting for you, so you can gather all the data you want and start getting the answers you need.
  • Heap Analytics | https://heapanalytics.com | @heap | 0 - $599+ | Instant, retroactive analytics for web and iOS. No code required.
  • Gauges | https://get.gaug.es | @GaugesApp | $6-$48/mo | Gauges provides real time web analytics such as how many people visit your site, where they come from, and where they go.
  • Wisdom | https://getWisdom.io | Free - $2000+/mo | Session Replay | Wisdom is the most accurate live visitor session recorder service available. Focusing only on session replay, Wisdom reconstructs a virtual desktop screen for every visitor, across all tabs, to capture the true feel of every visitor's experience.

Conversion Optimization & A/B Testing

  • Optimizely | https://www.optimizely.com | @optimizely | $17/mo - $359/mo | A/B testing you'll actually use | Track engagement, clicks, conversions, sign ups, or anything else that matters to you and your business. Optimizely's custom goal tracking provides an endless range of measurable actions that you can define. Just tell Optimizely what to measure, and we will do the rest.
  • Visual Website Optimizer | https://vwo.com | @wingify | $49/mo - $129/mo | Increase your website sales and conversions | Using Visual Website Optimizer, they A/B test different versions of their website and landing pages to find out which one works best. Made for marketers, our tool is incredibly easy to use, and doesn't need IT resources.
  • EyeQuant | http://www.eyequant.com | @eyequant | $199/mo - $999/mo | Instantly understand what your visitors will see and miss in their first seconds on your site, and improve your conversions. Analyse live sites or mockups within seconds, no code required.

User Management

User Testing

  • Silverback 2.0 | http://silverbackapp.com | @silverbackapp | $69.95 | Guerrilla usability testing software for designers and developers | Silverback makes it easy, quick and cheap for everyone to perform guerrilla usability tests with no setup and no expense, using hardware already in your Mac.
  • HotJar | https://www.hotjar.com | @hotjar | Free - $29/mo (personal) | Records videos and collects heatmaps of your site visitor actions.
  • Wisdom | https://getWisdom.io | Free - $2000+/mo | Session Replay | Wisdom is the most accurate live visitor session recorder service available. Focusing only on session replay, Wisdom reconstructs a virtual desktop screen for every visitor, across all tabs, to capture the true feel of every visitor's experience.

HR

  • Workday | https://www.workday.com | @workday | | Workday works the way people work—collaboratively, on the go, and in real-time. Explore the product previews below to learn how Workday can change the way you work. | With powerful business applications and a user experience that's unmatched in enterprise software, Workday gives you everything you need to transform your business.
  • Lever | https://www.lever.co | @lever | A modern web app for hiring | Leverage your entire company – interviewers, managers, and recruiters – to source, vet, and close.
  • Zenefits | https://www.zenefits.com | @zenefits | $0/mo | The #1 All-In-One HR Platform | Payroll. Benefits. Time. Compliance. All online, all in one place.
  • TestDome | https://www.testdome.com/ | @TestDome | $8/candidate - $20/candidate | Automated testing of programming skills, ask candidates to write real code before calling them for an interview.
  • HackerRank | https://www.hackerrank.com/ | @hackerrank | paid | End-to-end technical recruiting platform for hiring engineers.

Payroll

  • Gusto | https://gusto.com | @gustohq | $29/mo + $6/user | Payroll and benefits that put people first, easy setup, automated tax filings and thoughtful support.
  • WagePoint | https://wagepoint.com | @wagepoint | $20 + $2/employee | The Simple, Fast & Friendly way to pay your employees.

Continuous Integration/Code Quality

  • Travis | https://travis-ci.org | @travisci | free - $489/mo | Hi I’m Travis CI, a hosted continuous integration service for open source and private projects: travis-ci.com System status updates: @traviscistatus
  • AppVeyor | https://www.appveyor.com | @appveyor | AppVeyor automates building, testing and deployment of .NET applications, helping your team to focus on delivering great apps.
  • Codeship | https://codeship.com | @codeship | Continuous Delivery as a service, start testing and deploying your code immediately | Start with 100 builds per month free, unlimited plans start at $49
  • Circle | http://circleci.com | @circleci | $19/mo - $269/mo | Ship better code, faster | Easy, fast, continuous integration and deployment for web apps.
  • Nevercode | https://nevercode.io | @nevercodehq
  • Hound | https://houndci.com | @houndci | Free | Automated Code Review | Take care of pesky code reviews with a trusty Hound
  • CodeClimate | https://codeclimate.com | @codeclimate | $0/mo - $399/mo | Automated Code Review | Code Climate hosted software metrics help you ship quality Ruby and JavaScript code faster. Get control of your technical debt with real time static analysis of your code.
  • Codacy | https://www.codacy.com | $0-$150/mo | Automated Code Review | Continuous Static Analysis designed to complement your unit tests. Similar to CodeClimate.
  • Codecov | https://codecov.io | $0-$5/mo | Hosted Code Coverage | Code coverage reporting done right.
  • Semaphore | https://semaphoreci.com | @semaphoreci | $14/mo - $899/mo | Create an Amazing Workflow. | Semaphore assumes that your private or open source project is on GitHub. There are no new dependencies, hooks or SSH keys to manage. It works without any change in source code.
  • Solano CI | https://www.solanolabs.com | @SolanoLabs | $15/mo - $100/mo | Faster Continuous Integration and Deployment with patented auto-parallelization | Solano CI sets up Continuous Integration in minutes, frees you from managing a build server, and lets you deploy software 10x - 80x faster by running tests in parallel safely and automatically. It also lets you use our massively scalable environment even before you push to CI. Seamlessly integrates into existing workflows. Free 14-day trial, no credit card required. Formerly loved as tddium.
  • Jenkins | https://jenkins.io | @jenkinsci | Jenkins provides continuous integration services for software development. It is a server-based system that supports SCM tools including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, Clearcase and RTC, and can execute Apache Ant and Apache Maven based projects as well as arbitrary shell scripts and Windows batch commands. Released under the MIT License, Jenkins is free software.
  • Bamboo | https://www.atlassian.com/software/bamboo | @atlassian | $10/mo - $1000/mo | | Bamboo does more than just run builds and tests. It connects issues, commits, test results, and deploys so the whole picture is available to your entire product team – from project managers, to devs and testers, to sys admins.
  • Buildkite (Buildbox) | https://buildkite.com | @buildkite | $15/dev/mo | Semi-hosted continuous integration and deployment | Buildkite uses your own infrastructure to run builds so you can test any language or run any deployment scripts. You can run as many parallel agents (and builds) as you want.
  • Crucible | https://www.atlassian.com/software/crucible | @atlassian | $10/mo - $8000/mo | Code review system | Review code, discuss changes, share knowledge, and identify defects with Crucible's flexible review workflow. It's code review made easy for Git, Subversion, CVS, Perforce, and more.
  • Coveralls | https://coveralls.io | @coverallsapp | $0/mo - $50/mo | Coveralls works with your continuous integration server to give you test coverage history and statistics. It integrates with any language and is free for open source.
  • Testributor | http://about.testributor.com | @testributor | Free | Testributor is an open source Continuous Integration platform. A hosted version is available for free, both for open source and private projects.
  • Wercker | http://www.wercker.com | @wercker | $0/mo - $350/mo | Wercker is a Docker-Native CI/CD Automation platform for Kubernetes & Microservice Deployments.

Dashboards

  • Geckoboard | https://www.geckoboard.com | @geckoboard | $17/mo - $899/mo | Meet Geckoboard. It's Your Key Data, In One Place. | Geckoboard monitors your business’s vital signs – don’t wait, see it live on a business dashboard as it happens. Focus on what matters and react faster to important events.
  • Telemetry | https://www.telemetrytv.com | @telemetrytv | $9/mo - $749/mo | Build realtime dashboard with powerful visualizations that look beautiful on big screen TVs, desktop computers, mobile devices, and embedded systems—all using a simple REST API that works with all modern languages.
  • Dashing | http://dashing.io | Dashing is a Sinatra based framework that lets you build beautiful dashboards.
  • Klipfolio | https://www.klipfolio.com | @klipfolio | $5/user/mo - $20/user/mo | Meet Your Business Dashboard | Connect to any data service to bring your key numbers together on one dashboard. Assign your data to visualizations to show the story behind the numbers. Cultivate a data-driven culture by sharing dashboards with everyone on your team.

Error/Exception Handling

  • Crashlytics | http://try.crashlytics.com | @crashlytics | Free | Crash reports and grouping for easier analysis. Basic analytics and reports. | iOS & Android
  • Sentry | https://sentry.io/welcome/ | @getsentry | $24/mo - $199/mo | Sentry notifies you when your users experience errors. | Know immediately when things happen in your application. Engage users before they have a chance to report a problem.
  • HoneyBadger | https://www.honeybadger.io | @honeybadgerapp | $39/mo - $249/mo | Exception, uptime, and performance monitoring for Ruby. | It tells you about errors, downtime and performance issues as they happen. And it gives you the tools you need to fix them ...without burying you in data. Without silly rate limits or per-server fees.
  • BugSnag | https://www.bugsnag.com | @bugsnag | $29/mo - $249/mo | Automatic, full-stack error monitoring | Web app monitoring for Rails, PHP, Node.js, Java, and every other leading platform.
  • Raygun | https://raygun.com | @raygunio | $14/mo - $199/mo | Exceptional Error Tracking | Your software faults get automatically sent to the Raygun service and analysis begins immediately. Raygun intelligently groups your errors so you're dealing with root causes, not every single error instance!
  • Airbrake | https://airbrake.io | @airbrake | $49-249/mo | No More Searching Log Files | Capture and Track Your Application's Exceptions in 3 Minutes | Airbrake is the leading exception reporting service, currently providing error tracking for 50,000 applications with support for 18 programming languages.
  • Atatus | https://www.atatus.com | @atatusapp | $12/mo - $159/mo | Simple JavaScript Error Tracking | Atatus is a simple error tracking and uptime monitoring system. Add two lines of code and get alerted on any errors that occurs in your application in realtime.
  • Rollbar | https://rollbar.com | @rollbar | $12/mo - $1249/mo | Take control of your errors | Rollbar is platform-agnostic and can accept data from anything that can speak HTTP and JSON. You can use our official libraries for Ruby, Python, PHP, Node.js, JavaScript, Android, iOS, or Flash, or roll your own with our API.
  • Errorception | https://errorception.com | @errorception | $5/mo - $59/mo | Painless JavaScript Error Tracking | Errorception is a simple and painless way to find out about JavaScript errors, as they occur in your users' browsers. All you need to do is insert a script tag on your page, and you will start recording errors as they happen in real-time.
  • Errbit | OSS | https://errbit.com | The open source error catcher that's Airbrake API compliant.
  • OverOps | https://www.overops.com | @overopshq | God Mode in Production Code for java and scala applications.

Application Distribution

  • HockeyApp | https://www.hockeyapp.net | @hockeyapp | Free - $129/mo depending on number of apps and number of owners | Distribution of iOS, Android, Windows Phone and Mac OS apps | Includes analytics, user feedback and crash reports.

Log Monitoring

  • Fluentd | https://www.fluentd.org | @fluentd | | Set Up Once, Collect More | Fluentd is an open source data collector designed for processing data streams. 150+ plugins instantly enable you to store the data for Log Management, Big Data Analytics, etc
  • Flume | https://github.com/cloudera/flume
  • Graylog | https://www.graylog.org | @graylog2 | Field-tested open source data analytics system used and trusted all around the world. Search your logs, create charts, send reports and be alerted when something happens. All running on the existing JVM in your datacenter.
  • LogEntries | https://logentries.com | @logentries | $16/mo - $245/mo | Log Management & Analytics Made Easy | Logentries provides an easy-to-use cloud service for log management and analytics.
  • Loggly | https://www.loggly.com | @loggly | $49/mo - $349/mo | Solve operational problems faster. | Loggly helps cloud-centric organizations—organizations that build and manage cloud-facing applications—to solve operational problems faster.
  • Logstash | https://www.elastic.co/products/logstash | @logstash | | Ship logs from any source, parse them, get the right timestamp, index them, and search them. | logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs.
  • Papertrail | https://papertrailapp.com | @papertrailapp | $7/mo - $230/mo | Frustration-free log management. Get started in seconds. | Use Papertrail's time-saving log tools, flexible system groups, team-wide access, long-term archives, charts and analytics exports, monitoring webhooks, and 45-second setup.
  • Stackify | https://stackify.com | @Stackify | $15/ mo | Connecting the dots for you | Stackify focuses on application health, magnifying critical insights for developers, operations, and support teams.
  • statsd | https://github.com/etsy/statsd/
  • Sumo Logic | https://www.sumologic.com | @SumoLogic | | Log Data is Big Data | Once enabled by the administrator, the new data will be searchable in the customer account. Sumo Logic provides an Application for Data Volume for out-of–the-box dashboards and searches that deliver a comprehensive view of data usage volume by category, collector, source name, and hosts.

Application Performance

  • AppNeta | https://www.appneta.com | @AppNeta | Free to $119 / mo | Full-stack application monitoring for web apps | Get visibility into code, network, and end user, especially for polyglot and service-oriented applications, by looking at transactions, errors, browser metrics, host metrics, and more.
  • DripStat | https://dripstat.com | @DripStat | $20/ mo | Application monitoring for Java | Next gen Java APM. Full visibility across your stack. Specificially designed for Java. Supports SQL databases, MongoDB and Cassandra.
  • New Relic | https://newrelic.com | @NewRelic | $149/ mo | Application monitoring for all your web apps. | It’s about gaining actionable, real-time business insights from the billions of metrics your software is producing, including user click streams, mobile activity, end user experiences and transactions.
  • AppSignal | https://appsignal.com | @AppSignal | $49/mo - $259/mo | Better monitoring for your Rails applications. | Get detailled statistics on your site's performance with mean and 90th percentile measurements.
  • Instrumental | https://instrumentalapp.com | @instrumental | $150/mo - $750/mo | Monitor Your App in Realtime | Instrumental’s made to monitor metrics at a ridiculously high scale. At rates of 500,000 metrics per second and higher, it doesn’t even break a sweat

Load Testing

  • Blitz | https://www.blitz.io | @blitz_io | $19.99/mo - $399.99/mo | LOAD TESTING FROM THE CLOUD | Building mobile applications, websites or APIs is an iterative process. New features and capabilities are being added constantly. Your application is rapidly and iteratively going through several distinct phases - Development, Staging, Production and Operations. At every step of the way, the ability to ensure that your application meets the highest levels of user satisfaction is critical.
  • Bees with Machine Guns! | https://github.com/newsapps/beeswithmachineguns
  • Flood.io | https://flood.io | @flood_io | Free to $399/mo | Auto setup and results summaries/graphs of JMeter and Gatling load tests. Can easily scale to 100K+ reqs/min.
  • Neustar Website Load Testing | https://www.neustar.biz/services/web-performance/load-testing | @Neustar | $80/mo | | Tackle performance problems such as bandwidth limitations, error rates exceeding thresholds, server CPU limitations and much more.
  • Loader.io | https://loader.io | Free to $100.00/mo | Loader.io is a free load testing service that allows you to stress test your web-apps/apis with thousands of concurrent connections.
  • Locust.io | https://locust.io | @locustio | Open Source

Server Monitoring

  • Server Density | https://www.serverdensity.com | @serverdensity | | Premium hosted website and server monitoring tool. | All your activity syncs in real time - from starting new instances to upgrading or deleting old ones. Work wherever you want - through web, mobile, API or directly with your provider. Everything stays in sync.
  • Datadog | https://www.datadoghq.com | @datadoghq | $0/mo - $15/host/mo | Datadog is a monitoring service for IT, Operations and Development teams who write and run applications at scale, and want to turn the massive amounts of data produced by their apps, tools and services into actionable insight.
  • Circonus | https://www.circonus.com | @circonus | $15/host/mo - $25/host/mo | Circonus combines multiple monitoring, alerting, event reporting, and analytical tools into one unified solution. Use any data, in any application, from any system, and visualize it in real-time.
  • TrueSight Pulse | http://truesightpulse.bmc.com | @truesightpulse | | Real-time visibility into cloud and server infrastructure
  • Librato | https://www.librato.com | @Librato | $0.05/metric/mo to $0.30/metric/mo | Librato provides a complete solution for monitoring and understanding the metrics that impact your business at all levels of the stack.
  • Scout | https://scoutapp.com/

Customer Support/Help Desks

  • Desk | https://www.desk.com | @desk | $3/mo - $50/mo | Deliver Customer Service That Wows | Desk.com creates the leading customer service application, which helps fast-growing companies deliver outstanding customer support. Desk.com's intuitive user interface and powerful features make solving customers' problems more efficient for the entire company. Plus, Desk.com is the only Customer Service Application backed by Salesforce, providing easy integration with other Salesforce services and robust security. There are thousands of companies using Desk.com as their help desk software application, from household names like Square and Instagram to the burrito shop down the street. Give it a try for free.
  • HelpScout | https://www.helpscout.net | @helpscout | $15/mo | Scalable customer support, no help desk headaches | Based on conditions you specify, Help Scout automatically performs one or more actions.
  • ZenDesk | https://www.zendesk.com | @zendesk | $1/mo - $195/mo | Relationships between businesses and their customers can be hard. | Better customer service starts with better communication. Zendesk brings all your customer conversations into one place.
  • Groove | https://www.groovehq.com | @groove | $15/mo | Everything you need to deliver awesome, personal support to every customer. | Your customers will never know that you’re using a helpdesk. To them, your messages look like regular emails. It feels like personal support, and it helps you build deeper relationships with your customers.
  • Intercom | https://www.intercom.com | @intercom | $49/mo - $449/mo | The easiest way to see and talk to your users | Intercom is a single platform where you can see in real-time who is using your product and send personalized messages to the right users at the right time based on their behavior.
  • Tender | http://tenderapp.com | @tenderapp | $9/mo - $99/mo | Better, Simpler, Customer Support Software. | Support your customers in the open! With public forums, you can offer a public space to your users to discuss common issues and get feedback, while still keeping certain categories private (billing, orders, ...). Power users can subscribe to categories and new discussions, and help out other customers.
  • Enchant | https://www.enchant.com | @enchanthq | $9/mo | It's like Gmail on steroids! | Enchant is a powerful helpdesk that helps your team deliver awesome support to each and every customer. To your customers, it's just email. They will never see a ticket number and will never have to log into anything!
  • Freshdesk | https://freshdesk.com | @freshdesk | $16/mo - $70/mo | Everything you need to deliver Exceptional Customer Support | Freshdesk keeps you from running behind issues blindly and gets your customer support issues under control.
  • UserDeck | https://userdeck.com | @user_deck | $0 - $25/mo | Customer support software that embeds into your existing website.
  • Sirportly | https://sirportly.com | @sirportly | £0 - £15/mo | Grow your business and provide world class customer support. Set up your helpdesk in minutes, integrate with your other software tools, and take advantage of automated rules and macros to scale your customer support, become more professional and customer focused, and turn your customers into raving fans with Sirportly.
  • Olark | https://www.olark.com/
  • SnapEngage | https://snapengage.com/
  • Get Satisfaction! | https://getsatisfaction.com/corp/ | Customer communities for social support, social marketing & customer feedback - online community software | Get Satisfaction is the leading customer engagement platform that helps companies build better relationships with their customers and prospects, through the best online customer community.
  • Reamaze | https://www.reamaze.com | @reamaze | $15/mo | Lightweight, Lightspeed Help Desk. Email, Social, Branded, Integrated. | Reamaze provides your team with helpdesk functionality that integrates with your application, as well as integrations with popular 3rd party tools in your workflow.
  • Jitbit Helpdesk | https://www.jitbit.com/helpdesk/ | @jitbithelpdesk | $29/mo - $199/mo | A help desk app that actually makes your work easier, not harder. | Comes in both hosted and on-premise versions. Very well designed and easy to use. Has all the must-have features and doesn't get in your way.
  • Drift | https://www.drift.com | @drift | free for < 100 contacts, paid from $49/mo | Stop wasting your website traffic. | Sales-oriented live chat and in-app messaging, with chatbot automation.

Transactional Email

  • Postmark | https://postmarkapp.com | @postmarkapp | $1.50 | Email delivery for web apps – done right. | Postmark removes the headaches of delivering and parsing transactional email for webapps with minimal setup time and zero maintenance. We have years of experience getting email to the inbox, so you can work and rest easier.
  • Mandrill | http://mandrill.com | @mandrillapp | $0.20/mo - $0.10/mo | THE FASTEST WAY TO DELIVER EMAIL | Mandrill is a scalable and affordable email infrastructure service. Whether you're just getting started, have some questions, or are looking for a quick reference, we've got you covered.
  • MailGun | https://www.mailgun.com | @Mail_Gun | $20.00 | The Email Service For Developers | Easy SMTP integration and a simple, RESTful API abstracts away the messy details of email. Scale quickly, whether you need to send 10 or 10 million emails.
  • AWS SES
  • SendGrid | https://sendgrid.com | @SendGrid | $9.95/mo - $399.95 | Email Delivery. Simplified. | SendGrid delivers billions of emails for companies of all sizes every month. Select the package that best fits with your sending volume, set-up your account, and let SendGrid take care of the rest!
  • CritSend | https://www.critsend.com | @critsend | $50/mo - $3000/Mo | The Best SMTP Relay for Developers | Use the most reliable infrastructure for your transactional and bulk emails. It only takes 5 minutes to setup Critsend and start enjoying fast delivery time and automatic scalability.
  • Postage | http://postageapp.com | @postagebird | $9/mo - $399/mo | The easier way to send email from web apps | PostageApp helps design, send, and analyze emails within minutes.
  • Sendwithus | https://www.sendwithus.com | @sendwithus | Free 'Hacker' plan of 1000 messages/month | Transactional email A/B testing and drip campaigns |
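Whichever provider above you pick, most expose a standard SMTP relay alongside their HTTP APIs, so a transactional message is just a well-formed MIME message handed to `smtplib`. A minimal sketch (addresses, order number, and relay host are placeholders):

```python
from email.message import EmailMessage

def build_receipt(to_addr, order_id):
    """Assemble a plain-text transactional email as a standard MIME message."""
    msg = EmailMessage()
    msg["From"] = "billing@example.com"   # placeholder sender
    msg["To"] = to_addr
    msg["Subject"] = f"Receipt for order {order_id}"
    msg.set_content(f"Thanks! Your order {order_id} has been processed.")
    return msg

msg = build_receipt("customer@example.com", 42)
# To actually send via a provider's relay (host/credentials are assumptions):
#   import smtplib
#   with smtplib.SMTP("smtp.example-provider.com", 587) as s:
#       s.starttls(); s.login("user", "password"); s.send_message(msg)
```

Building the message separately from sending it also makes the content trivially unit-testable without touching the network.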

Other APIs

  • Filestack | https://www.filestack.com | @FileStack | $0/mo - $49/mo | Easy, Powerful File Uploads | Connect, Store, and Process any file from anywhere on the Internet
  • Open Exchange Rates | https://openexchangerates.org | | $12/mo - $97/mo | Real-time exchange rates & currency conversion JSON API | A simple and easy-to-integrate exchange rates API in JSON format, with HTTPS and JSONP support, with examples, guides and full documentation.

Site Search

  • Elasticsearch | https://www.elastic.co/products/elasticsearch | @elasticsearch | | an end-to-end search and analytics platform. infinitely versatile. | By combining the massively popular Elasticsearch, Logstash and Kibana we have created an end-to-end stack that delivers actionable insights in real-time from almost any type of structured and unstructured data source. Built and supported by the engineers behind each of these open source products, the Elasticsearch ELK stack makes searching and analyzing data easier than ever before.
  • Swiftype | https://swiftype.com | @Swiftype | Free - $250/mo, Enterprise Plans available | Swiftype is a hosted website search service, available as a web crawler or as an API integration. API clients are available for major frameworks and languages, plugins are available for major third party platforms.
  • Algolia | https://www.algolia.com | @Algolia | $49/mo - $449/mo | Build Realtime Search | Algolia is a fully hosted search service, available as a REST API. API clients are also available for all major frameworks, platforms and languages.
  • Apache Solr | http://lucene.apache.org/solr/ | An open source search platform, based on and co-released with Apache Lucene.
  • Amazon Cloudsearch | https://aws.amazon.com/cloudsearch/ | Search SaaS
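Elasticsearch (and the hosted services modeled on it) accepts queries as plain JSON documents POSTed to an index's `_search` endpoint, so query construction is just dictionary building. A minimal sketch of a match query (index, field, and search term are illustrative):

```python
import json

def search_body(term, field="title", size=10):
    """Build an Elasticsearch query DSL body: a full-text match on one field."""
    return {"size": size, "query": {"match": {field: term}}}

body = search_body("wireframing")
print(json.dumps(body, indent=2))
# POST this to http://<your-cluster>/<your-index>/_search with
# Content-Type: application/json to execute it.
```

Keeping query construction in plain data structures like this makes it easy to compose filters and aggregations before serializing once at the edge.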

Email Marketing

  • MailCharts | https://www.mailcharts.com | @mailcharts | $30/mo | Track & understand how your competitors use email marketing | MailCharts tracks over a thousand companies, bringing you actionable insights to help you improve your email marketing strategy, make data-driven decisions and gain design and content inspiration.
  • Customer.io | https://customer.io | | $50/mo - $1250/Mo | Send email based on what people do or don't do in your app | Customer.io lets you send newsletters to segments of customers using data from your site.
  • Vero | https://www.getvero.com | @veroapp | $99/mo | Send emails based on what your customers do | Vero makes it easy to create segments of customers based on the attributes you capture (e.g. age, location, gender) and the actions your customers take (e.g. logged in, used feature x, checked out, etc.).
  • MailChimp | https://mailchimp.com
  • Campaign Monitor | https://www.campaignmonitor.com | @CampaignMonitor | $9/mo - $699/mo |
  • Sendy | https://sendy.co | @getsendy | $59 one-time fee | Host yourself or use a hosted Sendy instance from a variety of providers.
  • Image-Charts | https://image-charts.com | @imagecharts | free - $49/mo, self-hosted plan available | Include animated charts as image into emails, no server-side rendering, 1 URL = 1 chart, compatible with Google Image Charts.
  • Drip | https://www.drip.com | @getdrip | free for < 100 subscribers, paid plans from $49/mo | Advanced marketing automation with workflows, drip campaigns, conversion tracking etc..
  • MailerLite | https://www.mailerlite.com | @mailerlite | free - $140/month | free plan offers a lot of features

Email Collection/Landing Page Apps

  • Launchrock | https://www.launchrock.com | @launchrock | $49/mo - $199/mo | | Even if you know how to code a web page with HTML, you'll love how much faster it is with our landing page builder. Load up your logo and graphic assets, set up a few base colors from your brand palette and publish away.
  • Unbounce | https://unbounce.com | @unbounce | $49/mo - $199/mo | | Unbounce empowers marketers to act independently from technical teams, improving their efficiency and their ability to generate sales. Produce high-converting landing pages without dealing with I.T. bottlenecks. See how Unbounce can enhance your campaigns and maximize your marketing spend.
  • LeadPages | https://www.leadpages.net/ | $25/mo - $199/mo | Generate leads and increase revenue using the industry-leading landing page builder with accompanying suite of lead generation and opt-in tools.
  • Instapage | https://instapage.com/ | $29/mo - $127/mo | A landing page solution for optimizing your ad spend. Build, Integrate, Collaborate & Optimize.
  • KickoffLabs | https://kickofflabs.com | @kickofflabs | $29/mo - $99/mo | Stop building landing pages. Start building smarter campaigns. | Send customers to a tailored page that speaks to them. Keep them engaged with signup-forms, newsletters and – best of all – a very cool, unique REWARD system for customer referrals.
  • Prefinery | https://www.prefinery.com | @prefinery | $19/mo - $399/mo | Stress-free Beta Invitation and Management | Prefinery is beta invitations & management for serious product launches. You need more than just a landing page -- outsource your beta to Prefinery!

CRM/Sales Tools

  • Salesforce | http://www.salesforce.com | @salesforce | | TRANSFORM CONSUMERS INTO CUSTOMERS ACROSS ALL DIGITAL CHANNELS. | The ExactTarget Marketing Cloud, built on the Salesforce1 Platform, allows marketers to create 1:1 campaigns like never before. So you can combine traditional digital channels like email, mobile, social, and the web with any conceivable IP-addressable product to turn consumers into customers.
  • SalesforceIQ | https://www.salesforceiq.com | @salesforceiq | £17-85/user/mo | Helps businesses build stronger relationships using Relationship Intelligence.
  • SugarCRM | https://www.sugarcrm.com | @sugarcrm | $35/mo - $150/mo | Sugar's open, flexible platform easily solves real business problems. From automating sales, marketing and customer support to creating a custom CRM application, we've got you covered. | By placing the individual at the center of its solution, SugarCRM puts the "i" in CRM and empowers every customer-facing individual to create extraordinary customer relationships.
  • Insight.ly | https://www.insightly.com | @insightly | $7/mo | Small Business CRM | Manage contacts, organizations, partners, vendors and suppliers. See everything from background, email history, events, projects or opportunities
  • Close.io | https://close.io | @closeio | $59/mo - $299/mo | Sales Communication Platform | Close.io automatically logs sent and received emails with your leads. You can send/receive emails painlessly from Close.io. Or, by entering your IMAP and SMTP mail settings, Close.io can track emails that you send from Gmail or any email client.
  • Streak | https://www.streak.com | @streak | | Manage email support inside Gmail | Streak lets you keep track of all your deals right from your inbox. We let you group emails from the same customer together into one view and push that customer through your pipeline. When a new email comes in, you'll have all the context you need.
  • Base | https://getbase.com | @getbase | $15/mo - $125/mo | The Sales and CRM Software Your Team Will Actually Use | Base is designed to significantly boost your team’s sales productivity and give you the core sales tools you need to grow your business. Your leads come from a variety of sources. Lead management in Base helps you organize leads and assign them to the right sales reps so they can be followed up on and qualified quickly. After being qualified, convert a lead in Base and all of the contact information you have about your lead is transferred to your new customer contact card. Optionally, you can specify follow up tasks and even create a deal at the same time.
  • Pipedrive | https://www.pipedrive.com | | $9/mo | Pipeline software that gets you organized. | Pipedrive is built for salespeople who need to put in serious effort to turn leads into sales. It helps to organize the work and spend less time on admin.
  • Contactually | https://www.contactually.com | @Contactually | $17.99/mo - $99.99/mo | Maximize your network ROI. More referrals. More repeat business | Contactually helps businesses follow up with the right people, at the right time, to maximize relationship ROI.

Best CRM Software for Startups

Social Media Marketing

  • Buffer | https://buffer.com | @buffer | $10/mo | Buffer is the easiest way to publish on social media | Buffer helps you share to Twitter, Facebook and more. 
  • HootSuite | https://hootsuite.com | @HootSuite | $8.99/mo | Add speed and agility to your social media strategy | From one dashboard you’ll schedule Tweets and Facebook posts, monitor conversations, and more. When you need to prove your social ROI, quickly create and customize ready-to-present analytics reports.
  • Claim.io | http://claim.io/welcome | @claimio | $189/mo- $499/mo | | Owning your name on 300 Social media sites not only makes it easier for people to find you or your business online - it also works as a "social media identity insurance", protecting you from name squatting and identity fraud, minimizing risk to your brand.
  • Exacttarget Marketing Cloud/Buddy Media | http://www.salesforce.com/products/marketing-cloud/overview/ | @marketingcloud | | Bringing you closer to your social customers with an integrated social content solution. | Run integrated campaigns across Facebook, Twitter, YouTube and your websites Increase fans, followers and advocates. Publish engaging and interactive social apps. Easily create landing pages and microsites and extend social to your websites. Understand engagement trends, demographics, conversions and business metrics with powerful analytics.
  • Sprout Social | https://sproutsocial.com | @sproutsocial | $59/mo- $1500/mo | Powerful Social Media Software | Sprout lets you post messages on Facebook, Twitter, Google+ and LinkedIn simultaneously from one easy-to-use message composition tool. Shorten links, attach photos, target your audience on Facebook, customize your posts and much more.

Naming

  • Trademarkia | https://www.trademarkia.com | @trademarkia | | Trademarkia is one of the largest trademark search engines in the world. | LegalForce's network of licensed patent attorneys and agents have filed hundreds of patent applications for companies of every size.
  • NameRobot | https://www.namerobot.com | @namerobotEN | $0 - $300/mo | Find, create and check the name for your project. NameRobot offers everything you need to create suitable naming ideas in a short time.
  • DomainTools Whois Lookup | http://whois.domaintools.com | @DomainTools | free / $99/mo | Go beyond ordinary Whois to discover the people or organizations behind a domain name or IP address.

Space Rental

  • 42Floors | https://42floors.com | @42floors | | The Best Place to Find Office & Commercial Space Rentals | We're gathering listing data from everyone in the market. Even including off-market listings that landlords haven't yet posted on 42Floors.com or anywhere else.
  • Liquidspace | https://liquidspace.com | @LiquidSpace | | Optimize Real Estate for Your Enterprise | Whether planning ahead or booking on the fly, find and reserve a great place to work. Rent professional conference and meeting rooms, private offices, or coworking spaces daily or hourly.
  • PivotDesk | https://www.pivotdesk.com | @PivotDesk | | An office sharing marketplace that helps startups that need space find host companies that have excess space.

Community Tools

  • Discourse | https://www.discourse.org | @discourse | | Ready for a new discussion platform? | Discussion software is for a group of people interested in a common topic who are willing to type paragraphs to each other on a web page.

Personal Productivity

  • Tomatoes | http://www.tomato.es/ | @tomatoesapp | | Track your time and be productive with the Pomodoro Technique | Tomatoes is a "pomodoro tracker", a Pomodoro technique® driven time tracker. Track your time using 25 minutes slots called "pomodoros".
  • Do.com
  • RescueTime | https://www.rescuetime.com | @rescuetime | $9/mo | RescueTime gives you an accurate picture of how you spend your time to help you become more productive every day.
  • Qbserve | https://qotoqot.com/qbserve/ | @Qbserve | $40 one-time | Qbserve is a time tracker for Mac that does whatever RescueTime can but also tracks project time automatically, generates invoices, and stores tracked data locally.
  • Timing | https://timingapp.com/ | @TimingApp | $29 - $79 | Automatic time and productivity tracking for Mac. Helps you stay on track with your work and ensures no billable hours get lost (if you are billing hourly).
  • fman | https://fman.io | @m_herrmann | $14 | Manage and transfer your files with ease. For Windows, Mac and Linux.

Prototyping/Mockups

  • Creately | https://creately.com | @creately | Free - $750/mo | Creately | Web based diagramming tool for fast easy diagrams. Supports flowcharts, mock-ups, wire-frames, mind maps, organizational charts, network diagrams, AWS diagrams, UML diagrams and many other diagram types. Real-time collaboration plus innovative productivity features to create diagrams 3 times faster.
  • Keynote | https://www.apple.com/keynote/ | $19.99 | Easily create gorgeous presentations with the all-new Keynote, featuring powerful yet easy-to-use tools and dazzling effects that will make you a very hard act to follow. Also checkout the Keynotopia Themes to get all the common UI elements for iOS, Android etc.
  • OmniGraffle | https://www.omnigroup.com/omniGraffle | @omniGraffle | $99.99 | Diagramming Worth a Thousand Words | Need a diagram, process chart, quick page-layout, website wireframe or graphic design? OmniGraffle can help you make eye-popping graphic documents quickly by keeping lines connected to shapes even when they're moved, providing powerful styling tools, and magically organizing diagrams with just one click.
  • moqups | https://moqups.com | @moqups | $9/mo- $39/mo | Moqups is a nifty HTML5 App used to create wireframes, mockups or UI concepts, prototypes depending on how you like to call them. | The most stunning HTML5 app for creating resolution-independent SVG mockups & wireframes for your next project.
  • Balsamiq | https://balsamiq.com | @balsamiq | Life's too short for bad software! | Balsamiq Mockups is a rapid wireframing tool that helps you Work Faster & Smarter. It reproduces the experience of sketching on a whiteboard, but using a computer.
  • Proto.io | https://proto.io | @protoio | $24/mo - $199/mo | Silly-fast mobile prototyping | Build high-fidelity fully interactive mobile app prototypes in minutes. Prototypes can be viewed on browser or device giving a real experience to the user how the app will look like and behave. Multiple devices like smart phones and tablets/pads are supported including iPhone, iPad and Android devices.
  • invision | https://www.invisionapp.com | @InVisionApp | $0/mo - $100+/mo | Free Web & Mobile (iOS, Android) Prototyping and UI Mockup Tool | Transform your Web & Mobile (iOS, Android) designs into clickable, interactive Prototypes and Mockups. Share and Collaborate on them with others.
  • Sketch | https://www.sketchapp.com | @sketchapp | $99 | Professional digital design for Mac. Sketch gives you the power, flexibility and speed you always wanted in a lightweight and easy-to-use package. Finally you can focus on what you do best: Design.

Content Creation/Infographics

  • Visual.ly | https://visual.ly | @Visually | $195/mo - $994/mo | ORIGINAL VISUAL CONTENT FOR BRANDS | We only work with the best creative talent available. Thousands of designers, journalists, animators and developers are standing by to help you achieve your goals and take your project to the next level.
  • Canva | https://www.canva.com | @canva | Design great social media images with text and graphics for free or a few bucks depending on the images you select

Customer Feedback

  • PickFu | https://www.pickfu.com | @pickfu | $20/mo - $299/mo | REAL CONSUMER FEEDBACK IN MINUTES | PickFu is a tool that provides instant, unbiased and insightful public opinion on questions that you care about.
  • Promoter.io | https://www.promoter.io | @Promoter_io | $50/mo - $500/mo | Helps companies quickly gain predictive customer intelligence & insights driven by NPS (Net Promoter) to increase revenue and reduce churn.

Data

  • Factual | https://www.factual.com | @Factual | | GLOBAL DATA. LOCAL CONTEXT. | Factual’s location platform enriches mobile location signals with definitive global data, enabling personalized and contextually relevant mobile experiences. Built from billions of inputs, the data is constantly updated by Factual’s real-time data stack.

Database

  • Bulbs | http://bulbflow.com
  • Datomic | https://www.datomic.com | @datomic_team | The fully transactional, cloud-ready, immutable database. | Immutable data means strong consistency combined with horizontal read scalability, plus built-in caching. Datomic is a distributed database designed to enable scalable, flexible and intelligent applications, running on next-generation cloud architectures.
  • Tinkerpop | http://tinkerpop.apache.org | Open source software products in the graph space.
  • Vertabelo | http://www.vertabelo.com | Web-based tool for database design | @vertabelo | Vertabelo allows you to visually design database models for PostgreSQL, MySQL, Oracle, SQL Server, SQLite, and IBM DB2. You can import the existing database structure from SQL, XML, or using reverse engineering tool. After you design a model, you can generate SQL script or ready-to-use code for various ORMs (Propel, jOOQ, SQLAlchemy, or Vertabelo Mobile ORM).

Accounting/Invoicing

  • Harvest | https://www.getharvest.com | @harvest | $12/mo - $99/mo | Time Tracking Made Easy | Time tracking is simple and lightning fast with Harvest. Set up takes seconds, and there’s nothing to install. We’ve simplified the timesheet and timesheets approval process so you can stay focused on work.
  • Ballpark | https://www.getballpark.com | | $12/mo - $99/mo | Stop sending your clients ugly paper invoices. Go paperless today, with Ballpark. | Our beautiful, web-based invoices and estimates make it easier than ever to get paid and discuss projects with your clients and colleagues.
  • PaySimple | https://paysimple.com | @PaySimple | $34.95/mo | Simplify how you bill and collect | PaySimple is an industry-leading provider of payment management solutions. PaySimple simplifies billing and collection processes by enabling you to bill, collect and deposit all of your payments automatically. Our customized, secure ASP solution includes auto-recurring billing, electronic check processing, direct debit and credit card processing at some of the lowest rates available.
  • FreshBooks | https://www.freshbooks.com | @freshbooks | $19.95/mo- $39.95/mo | Accounting Made for You, the Non-Accountant | FreshBooks is simple and intuitive, so accounting isn't intimidating. Plus you can talk to a real, live person anytime you have a question, 9am-6pm EDT, Mon-Fri.
  • FreeAgent | https://www.freeagent.com | @freeagent | US$20/mo | Accounting software trusted by over 35,000 freelancers and small businesses | Reconcile money in and out of the business via your electronic bank statements and build monthly balance charts.
  • Blinksale | https://www.blinksale.com | @blinksale | $15.00 | INTRODUCING BLINKSALE UNLIMITED. ONE PLAN. ONE PRICE. UNLIMITED EVERYTHING. | Blinksale makes you look your best. With over a dozen invoice designs & thank you notes, you’re sure to put your best foot forward. Every time.
  • Cashboard | http://cashboardapp.com | @cashboard | $8.25/mo - $250/mo | FREELANCE TIME TRACKING & INVOICE SOFTWARE TRUSTED BY THOUSANDS, WORLDWIDE | Cashboard is the tool we designed to remedy that situation. It works for our software consultancy and we think it’ll work for you too. We launched Cashboard in Spring of 2007 after many sleepless nights of hard work. It was the first solution to combine estimates, invoices, time tracking, and online payments into one tool.
  • Paydirt | https://paydirtapp.com | @paydirtapp | $8/mo - $149/mo | Smart Time Tracking Easier Invoicing Online Payments | In Paydirt, you can start a timer from any page in one click. No fiddly menus. No navigating around. Just a start button for each task.

Privacy Policy, Terms & Conditions, Legal Documents

  • iubenda | http://www.iubenda.com | @iubenda | free - $27/year - customization services | The easiest way to generate a professional, customizable, self-updating privacy policy. Choose between 6 languages. Documents hosted and kept up to date. Backed by real lawyers. Additional assistance service with premium legal team for custom Privacy Policy and Terms & Conditions.

Income Analytics

Payments, Billing & Downloads

  • PayPal | https://www.paypal.com | @PayPal | PayPal is an international e-commerce business allowing payments and money transfers to be made through the Internet. Online money transfers serve as electronic alternatives to paying with traditional paper methods, such as checks and money orders
  • Gumroad | https://gumroad.com | @gumroad | | See higher conversion, lower fees, and more customer control. Sell films directly to your viewers. | Creating digital products is hard, selling them shouldn't be. We let you start selling downloads in seconds.
  • FetchApp | https://www.fetchapp.com | @fetchapp | $5/mo - $500/mo | The Simpler way to Fetch. | Simply put, FetchApp allows you to sell and digitally deliver downloadable goods.
  • Chargify | https://www.chargify.com | @chargify | $65/mo - $459/mo | Easily Manage Your Recurring Revenue Business | Customers sign up, make payments, use coupons, upgrade... You bill one-time & recurring fees using whatever pricing model you need, charge cards, send invoices & reminders, etc.
  • Recurly | https://recurly.com | @Recurly | $99/mo - $259/mo | Subscription Billing Automation | As the leading recurring billing platform, Recurly ensures setup is easy, integrations are quick, and our service scales with the needs of your business. With Recurly you'll be ready to accept payments and focus on growing your sales in no time.
  • ChargeBee | https://www.chargebee.com | @chargebee | $49/mo - $249/mo | ChargeBee is an easy to use recurring billing and invoicing solution for online businesses

Billing & Payment Processing

  • Braintree | https://www.braintreepayments.com | @braintree | | Accept payments in your app or website | Braintree handles transactions for some of the fastest growing mobile companies like Uber, Airbnb, HotelTonight and Fab. With native, easy-to-follow SDKs for iOS, Android and Windows Phone you can quickly add native payments to your app.
  • Dwolla | https://www.dwolla.com | @dwolla | 25¢ per transaction | The best way to move money. | Dwolla, Inc. is an agent of Veridian Credit Union and all funds associated with your account in the Dwolla network are held in a pooled account at Veridian Credit Union. These funds are not eligible for individual insurance, and may not be eligible for share insurance by the National Credit Union Share Insurance Fund. Dwolla, Inc. is the operator of a software platform that communicates user instructions for funds transfers to Veridian Credit Union.
  • Stripe | https://stripe.com | @stripe | 2.9% + 30¢ per successful charge. | Feature-packed payments | No need to design payment forms from scratch. Stripe Checkout offers a beautiful, customizable payment flow that works great across desktop and mobile. When you use Checkout, you’re always up-to-date, with no extra code required.
  • Pin | https://pinpayments.com | @pin_payments | 2.9% + 30¢ per successful charge. | Payments, Rebooted. | Accepting credit card payments from a global audience typically requires a merchant account. The process of establishing a merchant account for each currency can be too difficult and costly for small businesses.
  • PayMill | https://www.paymill.com | @Paymill | 0.28 € - 0.25 € | Online payments made easy | Make payments personal by customising the checkout according to the flow of your website
  • Spreedly | https://www.spreedly.com | @spreedly | $150/mo - $1500/mo | Payments as a Platform | One of Spreedly's major benefits is reaching a large number of merchant accounts by working across multiple payment gateways. As a direct merchant you can transact globally but deposit funds in unique merchant accounts based on geographic or other business rules. As a SaaS platform you can support the unique merchant accounts of your individual customers. A payment gateway token is your way to indicate to us which unique merchant account this particular transaction will go against. Each unique merchant account = one unique payment gateway token.
  • WePay | https://go.wepay.com | @wepay | 2.9% + 30¢ per transaction. | WePay is the first payments engine to offer platforms — marketplaces, crowdfunding, and business software/tools — a way to own their customer experience while still shielding them from 100 percent of fraud and regulatory risk.
  • Paddle | https://paddle.com | @PaddleHQ | 5% + 50¢ per transaction. | Payment processing and fulfillment, specialized for desktop apps and SaaS subscription services. Handles VAT and invoicing for you, so your accounting becomes easier.

Banking

Phone/PBX/SMS

System Monitoring

Search

Security

Shipping

User Feedback

Designers

Notes

Group Communication/Chat Tools

HipChat (now Stride) Alternatives

Remote Collaboration

DNS

Status Blogs/User Alerts

Forms / Surveys

  • Wufoo | https://www.wufoo.com
  • Google Forms
  • Typeform | https://www.typeform.com | @typeform | $0/mo - $25/mo | Ask Awesomely! | Typeform makes asking questions easy, human & beautiful. A user experience that makes your questions look & feel great everywhere. Stimulated, inspired, excited, happy respondents boost completion rates. Gain insights with integrated analysis tools.
  • Qualaroo | https://qualaroo.com | @qualarooinc | $63/mo -499/mo | Qualaroo website surveys uncover customer insights that lead to better business results.
  • Formcarry | https://formcarry.com | $0/mo - $99/mo | Handle forms without a single line of back-end code.

Source Code Hosting

Design Collaboration

PaaS

VPS

Heroku Tools

AWS Tools

Database-aaS

  • HumongouS.io | https://www.humongous.io | HumongouS.io is a web-based user interface (GUI) for MongoDB.
  • mLab | https://mlab.com |
  • Compose | https://www.compose.com | Compose is a fully-managed platform used by developers to deploy, host and scale databases (Elasticsearch and MongoDB.)
  • RedisLabs | https://redislabs.com | @RedisLabsInc | free - $338+/mo | RedisLabs offers fully-managed cloud service for hosting and running your redis or memcache datasets in a highly-available and scalable manner, with predictable and stable top performance.

Backend-aaS

WebSockets-aaS

Ops Alerts and Scheduling

  • PagerDuty | https://www.pagerduty.com/
  • Opsgenie | https://www.opsgenie.com | @opsgenie | $0 - $16 user/month | We make alerts work for you | We provide the tools you need to design meaningful, actionable alerts and ensure the right people are notified.

Accounting

Video Hosting

Knowledge Tracking/Wiki

Offsite Backups

Personal Machine Backups

Remote Workers

Deployment

SEO Tools

  • AccuRanker | https://www.accuranker.com/
  • Ahrefs | https://ahrefs.com | @ahrefs | $79/mo - $2500/mo | Ahrefs provide a complete digital marketing suite with tools for analyzing back links, analizing websites, rank tracking, content exploring and more.
  • SerpBook | https://serpbook.com
  • WooRank | https://www.woorank.com | @woorank | Free - $49/mo | WooRank analyzes your website for optimization best practices and shows how it ranks against your competition. Its real-time brandable reports, consisting of over 150 data-points, help you to instantly spot critical issues that impact traffic, usability and conversions. Sync your analytics, social accounts and keywords for even more robust tracking.
  • Moz | https://moz.com | @moz | $99/mo - $599/mo | Moz provides you all the tools you need to effectively do search engine optimization. On site optimization graders, competitor tracking, back link analysis, rank trakcing and many more features available to users.

API Builder

  • Deployd | http://deployd.com | @deploydapp | Free (OSS) | Design, build, and scale APIs for web and mobile apps in minutes instead of days.
  • Apiary | https://apiary.io | @apiary | Free - $99/mo | Powerful API Design Stack. Built for Developers. | Work together to quickly design, prototype, document and test APIs.

Password Management

Sources of Clicks/Ad Platforms

Storage

  • Kloudless | https://kloudless.com | @Kloudless | $10/10k API requests/month | Kloudless provides developers with a single cloud storage API in the place of many | Kloudless is the last cloud storage API you'll ever need. Integrate a single REST API instead of many and use our UI tools to quickly build cloud storage support into your app. We maintain all the integrations so you can focus on building awesome products.

Task Scheduling

  • EasyCron | https://www.easycron.com
  • IFTTT | https://ifttt.com
  • Zapier | https://zapier.com | Zapier | $99/mo - $15/mo | Superpowers to get your work done. | A Zap is a link between two apps (a "trigger" and an "action"). Zaps run automatically in the background every few minutes to move and manage data on your behalf. Only live Zaps count against your limit — you can have as many paused and unfinished Zaps as you'd like. For example, one Zap might be "Send me an SMS every time I get a new email".

Documentation

Business Cards and Print Material

Presentations / Slides

Use

The best ways to use this list are:

  • by browsing the contents
  • by using command + F to search the contents

This list also uses tags to help when searching the contents:

  • Hosted?: Hosted, Self-hosted

Authors

Chris Barber

Craig Davison

With many thanks to the contributors. 👏

Contributions are welcome! Check out the Contributing Guidelines. 🙌

Mistakes C/C++ Devs make writing Go


Nyah Check (@nyah_check, slides) is a software engineer at Altitude Networks.

Nyah comes from a C/C++ background and subsequently wrote a lot of bad Go code early on. He hopes others can learn from his mistakes.

He has classified his mistakes under 3 topics:

  • Heap vs. Stack
  • Memory & Goroutine leaks
  • Error handling

Heap vs. Stack

What is a Heap and Stack in Go? A stack is a special region of memory created to store temporary variables bound to a function. It's self-cleaning, and expands and shrinks accordingly.

A heap is a bigger region of memory, in addition to the stack, used for storing values. It's more costly to maintain, since the GC needs to clean the region from time to time, adding extra latency.


Mistake 1: New doesn't mean heap && var doesn't mean stack

An early mistake was to dismiss escape analysis and its possible implications for my program's performance.

Consider the following C++ code:

```cpp
int foo() {
    int *a = new(int);
    return *a;
}
```

Wrong assumptions:

  • In C++, we know new(int) is allocated on the heap.
  • In Go, we don't really know for sure.
  • Maybe the new keyword was borrowed from C++, and the result might likely be allocated on the heap?
  • Given my C++ bias, I thought minimizing its use would reduce heap allocation.

Let's look at some code...

```go
package main

import "fmt"

func newIntStack() *int {
    vv := new(int)
    return vv
}

func main() {
    fmt.Println(*newIntStack())
}
```

This is a program that tries to establish if allocation takes place on the heap or the stack. When he runs this program (go run -gcflags -m file.go), you see that the new(int) variable does not escape (i.e., it's on the stack, not the heap). In C++, it would be allocated on the heap.

Let's take a look at another example:

```go
package main

import "fmt"

func main() {
    x := "GOPHERCON-2018"
    fmt.Println(x)
}
```

In the above example, x escapes to the heap. That's because fmt.Println takes an interface, which means x gets transferred to the heap.

Escape analysis is not trivial in Go. You need to check the compiler's analysis report; you can't just tell by looking at the code.
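As a further sketch (a hypothetical example, not from the talk), a function that returns the address of a local variable forces that value onto the heap; building with go build -gcflags=-m reports something like moved to heap: x:

```go
package main

import "fmt"

// escapes returns the address of a local variable, so the
// compiler must move x to the heap: its address outlives the call.
func escapes() *int {
    x := 42
    return &x
}

func main() {
    fmt.Println(*escapes()) // 42
}
```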

Lessons

  • Escape analysis is very important in writing more performant Go programs, yet there's no language specification on this.
  • Some of the compiler's escape analysis decisions are counterintuitive, yet trial and error is the only way to know
  • Do not make assumptions, rather do escape analysis on the code and make informed decisions.

Conclusion: "Understand heap vs stack allocation in your Go program by checking the compiler's escape analysis report and making informed decisions, do not guess"

Memory leaks

How does memory leak in Go

  • I assumed that since there's a garbage collector, everything is fine. Not true!
  • Memory leaks are common in any language including garbage collected languages
  • It can be caused by: assigned but unused memory, synchronization issues.
  • Some of these errors can be hard to detect, but Go has a set of tools which could be very effective in debugging these bugs

Mistake 2: Do not defer in an infinite Loop

The defer statement is used to clean up resources after you open up a resource (e.g. a file, a connection, etc.)

So an idiomatic way will be:

```go
fp, err := os.Open("path/to/file.text")
if err != nil {
    // handle error
}
defer fp.Close()
```

This snippet is guaranteed to work even in cases where there’s a panic, and it’s standard Go practice.

So what's the problem? When very many files are opened this way, resources cannot be tracked and freed properly, and this becomes a problem.

Consider a file monitoring program in C where:

  • We check a specific directory for db file dumps
  • perform some operation(logging, file versioning, etc)

Something like this might work:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TIME_TO_WAIT 1

int main() {
    FILE *fp;
    clock_t last = clock();
    char *directory[2] = {"one.txt", "two.txt"};

    for (;;) {
        clock_t current = clock();
        if (current >= (last + TIME_TO_WAIT + CLOCKS_PER_SEC)) {
            for (int i = 0; i < 2; i++) {
                fp = fopen(directory[i], "r+");
                printf("\nopening %s", directory[i]);
                if (fp == NULL) {
                    fprintf(stderr, "Invalid file %s", directory[i]);
                    exit(EXIT_FAILURE);
                }
                fclose(fp);
                printf("\nclosing %s", directory[i]);
                last = current;
            }
        }
    }
}
```

This will be sure to open and close up the files (open, close, open, close, etc.) once the operations are done.

However in Go:

```go
func loggingMonitorErr(files ...string) {
    for range time.Tick(time.Second) {
        for _, f := range files {
            fp := OpenFile(f)
            defer fp.Close()
        }
    }
}
```

The output from running the program shows there is no closing of files.

Problems:

  • Deferred code never executes since the function has not returned
  • So memory cleanup never happens, and its use keeps piling up
  • Files will never be closed, therefore causing loss of data due to lack of flush.

How do I fix this?

  • Creating a function literal for each file monitoring process
  • This ensures everything is bound to the context
  • Hence files are opened and closed

The fixed solution looks like this:

```go
type file string

func OpenFile(s string) file {
    log.Printf("opening %s", s)
    return file(s)
}

func (f file) Close() { log.Printf("closing %s", f) }

func loggingMonitorFix(files ...string) {
    for range time.Tick(time.Second) {
        for _, f := range files {
            func() {
                fp := OpenFile(f)
                defer fp.Close()
            }()
        }
    }
}
```

Lessons learned:

  • Since defer is tied to the new function context, we are sure it's executed and memory is flushed when files close
  • When defer executes we are certain our function literal finished execution, so no memory leaks

Conclusion: "Do not defer in an infinite loop, since the defer statement invokes the function execution ONLY when the surrounding function returns"

Pointers to accessible parts of a slice

What's a slice?

A slice is a dynamically sized flexible view into an array.

We know arrays have fixed sizes.

There are two main features of slices to think about:

  • The length of a slice is simply the total number of elements contained in the slice
  • The capacity of a slice is the number of elements in the underlying array.

Understanding this can avoid some robustness issues.
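A two-line illustration of the distinction (a sketch, not from the talk):

```go
package main

import "fmt"

func main() {
    a := [5]int{10, 20, 30, 40, 50}

    // s is a view into a: 2 visible elements, but the capacity
    // runs from s's first element to the end of the backing array.
    s := a[1:3]

    fmt.Println(len(s), cap(s)) // 2 4
}
```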

Mistake 3: Keeping pointers to an accessible (although not visible) part of a slice

Prior to Go 1.2 there was a memory safety issue with slices:

  • You could access elements of the underlying array.
  • This could lead to unintended memory writes.
  • Cause robustness issues
  • These regions of memory are not garbage collected.

Example:

```go
func main() {
    a := []*int{new(int), new(int)}
    fmt.Println(a)

    b := a[:1]
    fmt.Println(b)

    // b has length 1, but its capacity still covers all of a.
    c := b[:2]
    fmt.Println(c)
}
```

If you run this code, the third Println shows that through c you somehow have access to elements of a that aren't accessible through b.

What are some of the problems?

  • Write regions of memory unintentionally.
  • Robustness issues: Memory is not garbage collected since there's a reference to it.
  • It's a source for potential bugs

How do you solve this then? Go 1.2 added the three-index slice operation:

  • This enables you to specify the capacity during slicing.
  • The restricted slice capacity provides a level of protection to the underlying array
  • No unintended memory writes.
  • Unused areas of the underlying array are garbage collected.

Rewriting our code gives:

```go
func main() {
    a := []*int{new(int), new(int)}
    fmt.Println(a)

    // Restrict b's capacity to 1 with a three-index slice.
    b := a[:1:1]
    fmt.Println(b)

    c := b[:2]
    fmt.Println(c)
}
```

Our output becomes:

➜  examples git:(master) ✗ go run main.go
[0xc420016090 0xc420016098]
[0xc420016090]
panic: runtime error: slice bounds out of range

goroutine 1 [running]:
main.main()
    /Users/nyahcheck/go/src/github.com/Ch3ck/5-mistakes-c-cpp-devs-make-writing-go/03-pointer-in-non-visible-slice-portion/examples/main.go:27 +0x1ae
exit status 2

Since our slice capacity was set to 1, we can't access regions of memory we don't have permission to, rightly producing a panic.

Lesson

  • Our slice capacity was set to 1, so can't access restricted regions in memory, rightly creating a panic
  • More robust programs
  • Fewer memory leaks since unused memory is garbage collected.
  • Reduce sources for potential bugs in your code.
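The capacity cap also changes how append behaves: appending past the restricted capacity allocates a fresh backing array instead of touching the original's hidden elements (a small sketch, not from the talk):

```go
package main

import "fmt"

func main() {
    a := []int{1, 2, 3}

    // Full slice expression: length 1, capacity capped at 1.
    b := a[:1:1]

    // cap(b) == 1, so append must allocate a new backing array;
    // a's hidden elements are never overwritten.
    b = append(b, 99)

    fmt.Println(a) // [1 2 3]
    fmt.Println(b) // [1 99]
}
```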

Goroutine leaks

What's a Goroutine?

  • It's a lightweight thread of execution; it consists of functions that run concurrently with other functions/methods.
  • What about channels?
  • A channel is a pipe that connects concurrent goroutines.
  • An understanding of these two concepts embodies concurrency in Go.

There's no language-level analog in C/C++. You have to use special libraries to write multi-threaded code.

  • C/C++ has libraries for multi-threaded programming.
  • Concurrency in Go materializes itself in the form of goroutines and channels.

How do goroutines leak? There are different possible causes for goroutine leaks, some include:

  • Infinite loops
  • Blocks on synchronization points (channels, mutexes), deadlocks

However when these occur the program takes up more memory than it actually needs leading to high latency and frequent crashes.

Let's take a look at an example.

Mistake 4: Error handling with channels where # channels < # goroutines

Consider:

```go
func doSomethingTwice() error {
    errc := make(chan error)
    go func() {
        defer fmt.Println("done with a")
        errc <- doSomething("a")
    }()
    go func() {
        defer fmt.Println("done with b")
        errc <- doSomething("b")
    }()
    err := <-errc
    return err
}
```

What are the problems with the code?

  • More goroutines send on the channel than the calling function receives from it
  • When one goroutine writes to the channel, the function returns and the other goroutine blocks forever, building up memory use as a result
  • That region of memory is not garbage collected

How do we fix this? We simply give the error channel a buffer with capacity 2. This makes it possible for both goroutines to send their results without blocking, even though the calling function only receives once.

```go
func doSomethingTwice() error {
    errc := make(chan error, 2)
    go func() {
        defer fmt.Println("done with a")
        errc <- doSomething("a")
    }()
    go func() {
        defer fmt.Println("done with b")
        errc <- doSomething("b")
    }()
    err := <-errc
    return err
}
```

Performing Traces on the code

Goroutine leaks are very common in Go development.

However there are some best practices you can follow to avoid some of these errors:

  • Using the context package to terminate or timeout goroutines which may otherwise run indefinitely
  • Using a done signal or timeout channel can help in terminating a running goroutine preventing leaks
  • Profiling the code, stack-trace instrumentation, and adding benchmarks can go a long way in finding these leaks
  • Take advantage of the Go tooling ecosystem: go tool trace, go tool pprof, go-torch, gops, leaktest, etc.
  • Worth checking the errgroup package for this pattern

Error handling

What are errors in Go?

Go has a built-in error type which uses error values to indicate an abnormal state.

This error type is based on an interface:

```go
type error interface {
    Error() string
}
```

The Error method in error returns a string.

A closer look at the errors package will provide some good insights into handling errors in Go.

Mistake 5: Errors are not just strings, but much more

Consider a C program with a division by zero error:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int dividend = 50;
    int divisor = 0;
    int quotient;

    if (divisor == 0) {
        fprintf(stderr, "Division by zero! Aborting...\n");
        exit(EXIT_FAILURE);
    }
    quotient = dividend / divisor;
    exit(EXIT_SUCCESS);
}
```

Handling errors in C typically consists of writing an error message to stderr and returning an exit code.

However, in Go errors are much more sophisticated than strings.

Consider this example:

```go
func main() {
    conn, err := net.Dial("tcp", "goooooooooooogle.com:80")
    if err != nil {
        fmt.Printf("%T\n", err)
        log.Fatal(err)
    }
    defer conn.Close()
}
```

Using %T in the format string, you can print the type of the error, which often provides useful information.

Wrapping Errors in Go with github.com/pkg/errors:

Consider another example

```go
func connect(addr string) error {
    conn, err := net.Dial("tcp", addr)
    if err != nil {
        switch err := err.(type) {
        case *net.OpError:
            return errors.Wrapf(err, "failed to connect to %s", err.Net)
        default:
            return errors.Wrap(err, "unknown error")
        }
    }
    defer conn.Close()
    return nil
}
```

Advantages of Wrap and Cause funcs:

  • You can preserve the error context and pass to the calling program
  • Using the errors.Cause() function call we can determine what caused this error later in the program

Nyah believes it’s a feature some developers may overlook, but if used properly it will give a better Go development experience.

Lessons learned:

  • The errors package provides a lot of powerful tools for handling errors which some devs may ignore.
  • Wrap() and errors.Cause() are very useful in preserving context of an error later in the program.

Take a look at the errors package and see elegant examples.

Conclusion

  • Understand escape analysis by looking at the compiler's decisions; do not guess.
  • Defer executes only when the function returns. Using it in an infinite loop is a mistake.
  • Three-index slices add extra robustness in Go; use them.
  • Profile your Go code to identify bottlenecks early on, it's a good practice.
  • Errors in Go are not just strings, but much more.
  • Wrap errors to preserve context and handle them gracefully.

There are many more errors C/C++ devs make. Just remember:

  • Bringing concepts from C/C++ is fine but be ready to be challenged by differences.
  • "Programming in Go is like being young again (but more productive!)."

Q&A

What tools do you miss from C/C++?

  • GDB and Valgrind
  • For GDB-lovers, there's the Delve Go debugger.

First-party isolation in Firefox: what breaks if you enable it?


First-Party Isolation (FPI) is an optional privacy feature built-in to Firefox that enforces stricter security policies to block tracking between websites. Essentially, every website you visit will store data separately and isolated from every other website you visit. So what could go wrong?

First-Party Isolation offers the same privacy protection within one browser as you would get if you open one website in Safari (or another web browser) and then open another website in Firefox. Network connections, caches of different types, cookies and other persistent data stores would work as normal but there is no known way for one website to save data in a way that could be read by any other website.

This is a more complete protection against cross-origin tracking between different websites than traditional means like the option for disabling third-party cookies and storage found in most web browsers. FPI was originally designed for use in the privacy-oriented Tor Browser, which is based on Firefox.

Mozilla have been really quiet about this feature. It was first introduced in Firefox 55 after years of development work, but wasn’t mentioned in the official release notes or marketing. I’m not sure whether that is because Mozilla consider it unsafe or impractical, or don’t want to commit to maintaining the feature in future releases.


First-Party Isolation is a fundamental change in how the web browser operates, and it breaks many assumptions made by web developers. If you want to enable the feature and gain better privacy, then you also have to pay the price of breaking those assumptions.

Here is a list of the types of problems you can expect to run into with FPI enabled:

Third-party login failures

Mozilla ran a study where they looked at how many issues people would run into if they enabled one of eight different privacy-enhancing settings in Firefox. The study found that people in the group with First-Party Isolation enabled reported the most number of issues out of any of the test groups.

The same study found that people ran into trouble with Facebook and Google domains, and had trouble logging in to websites. Both Facebook and Google provide authentication and login services for other websites that don’t always work as expected when you enable FPI.

Depending on the implementation, you click a login button on a website, which opens a window with either Facebook or Google. This new window is opened in a new security context and is isolated from the originating website. Users can poke a hole in the isolation barrier by changing the privacy.firstparty.isolate.restrict_opener_access option to false, which allows for some communication between the login provider and the original website. However, changing this option only makes the login break later in the process.

Users can sometimes work around these issues to some degree. E.g. a comment form from Disqus or Facebook is loaded inside a frame on the original page. By right-clicking on the frame and choosing This Frame: Open Frame in New Tab, you’ll end up in a new tab with just the comment form, with the origin as either a Disqus or Facebook domain. By loading the frame from its original origin, you’ll have access to the cookies for that origin and can log in and comment as normal.

To fully resolve this issue, the login service providers have to change their products to work with stricter origin controls. Unless Mozilla were to enable FPI by default, I doubt any company will invest time or money in fixing these issues.

No migration path

You lose all your cookies, caches, and data stores when you first enable First-Party Isolation in Firefox. Firefox doesn’t record the origin of data before enabling the setting, so existing user data has to go when switching to a stricter origin policy.

Firefox can’t record this data before isolating websites from each other as it otherwise wouldn’t know which changes were made by which website if multiple websites had access to read and write to the same data stores. It’s kind of an unavoidable problem, as I see it.

The initial loss of logins and website settings, and the avalanche of reappearing cookie-consent toolbars, could help explain the higher number of issue reports in Mozilla’s study. Mozilla doesn’t mention this issue specifically in any publication.

More CAPTCHAs

As I noted only recently, Google reCAPTCHA has a 99.3% global market share in CAPTCHA services.

No CAPTCHA reCAPTCHA uses Google’s knowledge and insights about you, gathered by tracking you around the web, to determine whether you’re a computer or a human, instead of asking you to pass a cognitive test. Google seem to have reduced confidence in their ability to identify you as a human with reduced tracking and an unusual number of unique users (every website is assigned a different tracking/user ID instead of sharing the same ID) from your IP address.

In my own experience, I’ve had to spend more time and energy trying to manually solve the harder reCAPTCHA options rather than bypassing it altogether with Google’s No CAPTCHA reCAPTCHA after enabling First-Party Isolation in Firefox.

Less shared caching means more data usage

Caching is good. It means you don’t have to download the same font or JavaScript library multiple times if it’s already stored in your browser’s cache. Firefox will still cache content with First-Party Isolation enabled, however it will do so far less efficiently.

Normally, your browser will download one copy of a specific font or JavaScript library from a content-delivery network (CDN), which can be shared between multiple websites. FPI disrupts this, causing the browser to download a new copy per website instead of relying on its own shared cache.

Firefox doesn’t set aside more disk storage space for its cache when enabling FPI. The browser thus has to store multiple copies of the same files in the same amount of disk space, which reduces its efficiency at prioritizing what to keep in the cache and what to delete.

You can somewhat reduce the impact of this for some types of common JavaScript libraries by installing the Decentraleyes extension. This extension acts as a local CDN provider that can bypass the First-Party Isolation by serving the content from the extension rather than a remote CDN.

Depending on your location and internet connection, enabling First-Party Isolation in Firefox may significantly increase costs and slow down page loading times.

Can’t log-in to many extensions

The extension API for OAuth2 logins, browser.identity.launchWebAuthFlow, fails to inform extensions about whether logins failed or succeeded. Extensions will wait for login attempts forever, believing the user just hasn’t completed them yet.

You can work around the problem temporarily by disabling First-Party Isolation, restarting the browser, logging in to the extension, and then re-enabling First-Party Isolation. Depending on the configuration of the service you log in to, you may need to repeat the process every few months, weeks, days, or even hours.

Update: Added information about login issues with extension API.

Other extensions assume that they can access cookies cross-domains in a way that is prohibited by First-Party Isolation.

Can’t log-in to Pocket

Pocket is Mozilla’s reading-list and article-recommendation service that you’ll find built right into Firefox. Unfortunately, you can’t log in to Pocket when First-Party Isolation is enabled.

You can work-around the problem by temporarily disabling First-Party Isolation, restarting the browser, logging in to Pocket, and then re-enabling First-Party Isolation. The process must be repeated every six months or so.

Update: Added information about Pocket login issues.

Enabling First-Party Isolation

You can enable the feature by typing about:config in the address field, changing the privacy.firstparty.isolate option to true, and restarting Firefox. You should only enable the feature if you’re prepared to run into a few issues now and then.
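If you manage your preferences with a user.js file instead, the same pref can be set there (a sketch; the commented-out opener pref is the one discussed under third-party logins, and I'm assuming its default is the restrictive setting):

```javascript
// user.js — applied on every Firefox start-up
user_pref("privacy.firstparty.isolate", true);

// Optional: poke a hole in the isolation barrier for window.opener,
// which helps some third-party logins (see the login section above).
// user_pref("privacy.firstparty.isolate.restrict_opener_access", false);
```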

I’ve enabled First-Party Isolation myself and have used it for months already. It can be annoying, but I believe it is, overall, a good security and privacy feature that I hope Firefox can one day enable by default. You can check out the Tor Browser if you wish to use a web browser based on Firefox that enables this feature by default.

XSV: A fast CSV command-line toolkit written in Rust


xsv is a command line program for indexing, slicing, analyzing, splitting and joining CSV files. Commands should be simple, fast and composable:

  1. Simple tasks should be easy.
  2. Performance trade offs should be exposed in the CLI interface.
  3. Composition should not come at the expense of performance.

This README contains information on how to install xsv, in addition to a quick tour of several commands.


Dual-licensed under MIT or the UNLICENSE.

Available commands

  • cat - Concatenate CSV files by row or by column.
  • count - Count the rows in a CSV file. (Instantaneous with an index.)
  • fixlengths - Force a CSV file to have same-length records by either padding or truncating them.
  • flatten - A flattened view of CSV records. Useful for viewing one record at a time. e.g., xsv slice -i 5 data.csv | xsv flatten.
  • fmt - Reformat CSV data with different delimiters, record terminators or quoting rules. (Supports ASCII delimited data.)
  • frequency - Build frequency tables of each column in CSV data. (Uses parallelism to go faster if an index is present.)
  • headers - Show the headers of CSV data. Or show the intersection of all headers between many CSV files.
  • index - Create an index for a CSV file. This is very quick and provides constant time indexing into the CSV file.
  • input - Read CSV data with exotic quoting/escaping rules.
  • join - Inner, outer and cross joins. Uses a simple hash index to make it fast.
  • partition - Partition CSV data based on a column value.
  • sample - Randomly draw rows from CSV data using reservoir sampling (i.e., use memory proportional to the size of the sample).
  • reverse - Reverse order of rows in CSV data.
  • search - Run a regex over CSV data. Applies the regex to each field individually and shows only matching rows.
  • select - Select or re-order columns from CSV data.
  • slice - Slice rows from any part of a CSV file. When an index is present, this only has to parse the rows in the slice (instead of all rows leading up to the start of the slice).
  • sort - Sort CSV data.
  • split - Split one CSV file into many CSV files of N chunks.
  • stats - Show basic types and statistics of each column in the CSV file. (i.e., mean, standard deviation, median, range, etc.)
  • table - Show aligned output of any CSV data using elastic tabstops.

A whirlwind tour

Let's say you're playing with some of the data from the Data Science Toolkit, which contains several CSV files. Maybe you're interested in the population counts of each city in the world. So grab the data and start examining it:

$ curl -LO http://burntsushi.net/stuff/worldcitiespop.csv
$ xsv headers worldcitiespop.csv
1   Country
2   City
3   AccentCity
4   Region
5   Population
6   Latitude
7   Longitude

The next thing you might want to do is get an overview of the kind of data that appears in each column. The stats command will do this for you:

$ xsv stats worldcitiespop.csv --everything | xsv table
field       type     min            max            min_length  max_length  mean          stddev         median     mode         cardinality
Country     Unicode  ad             zw             2           2                                                   cn           234
City        Unicode   bab el ahmar  Þykkvibaer     1           91                                                  san jose     2351892
AccentCity  Unicode   Bâb el Ahmar  ïn Bou Chella  1           91                                                  San Antonio  2375760
Region      Unicode  00             Z9             0           2                                        13         04           397
Population  Integer  7              31480498       0           8           47719.570634  302885.559204  10779                   28754
Latitude    Float    -54.933333     82.483333      1           12          27.188166     21.952614      32.497222  51.15        1038349
Longitude   Float    -179.983333    180            1           14          37.08886      63.22301       35.28      23.8         1167162

The xsv table command takes any CSV data and formats it into aligned columns using elastic tabstops. You'll notice that it even gets alignment right with respect to Unicode characters.

So, this command takes about 12 seconds to run on my machine, but we can speed it up by creating an index and re-running the command:

$ xsv index worldcitiespop.csv
$ xsv stats worldcitiespop.csv --everything | xsv table
...

Which cuts it down to about 8 seconds on my machine. (And creating the index takes less than 2 seconds.)

Notably, the same type of "statistics" command in another CSV command line toolkit takes about 2 minutes to produce similar statistics on the same data set.
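Part of what makes a fast one-pass stats command feasible is that mean and standard deviation can be accumulated while streaming, without holding the file in memory. A minimal Python sketch using Welford's online algorithm (one common technique for this; not necessarily what xsv does internally):

```python
import math

def streaming_stats(values):
    """One pass, constant memory: running mean and population stddev."""
    n = 0
    mean = 0.0
    m2 = 0.0  # accumulated sum of squared deviations from the running mean
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    stddev = math.sqrt(m2 / n) if n else float("nan")
    return mean, stddev

mean, stddev = streaming_stats([7.0, 10779.0, 31480498.0])
```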

Creating an index gives us more than just faster statistics gathering. It also makes slice operations extremely fast because only the sliced portion has to be parsed. For example, let's say you wanted to grab the last 10 records:

$ xsv count worldcitiespop.csv
3173958
$ xsv slice worldcitiespop.csv -s 3173948 | xsv table
Country  City               AccentCity         Region  Population  Latitude     Longitude
zw       zibalonkwe         Zibalonkwe         06                  -19.8333333  27.4666667
zw       zibunkululu        Zibunkululu        06                  -19.6666667  27.6166667
zw       ziga               Ziga               06                  -19.2166667  27.4833333
zw       zikamanas village  Zikamanas Village  00                  -18.2166667  27.95
zw       zimbabwe           Zimbabwe           07                  -20.2666667  30.9166667
zw       zimre park         Zimre Park         04                  -17.8661111  31.2136111
zw       ziyakamanas        Ziyakamanas        00                  -18.2166667  27.95
zw       zizalisari         Zizalisari         04                  -17.7588889  31.0105556
zw       zuzumba            Zuzumba            06                  -20.0333333  27.9333333
zw       zvishavane         Zvishavane         07      79876       -20.3333333  30.0333333

These commands are instantaneous because they run in time and memory proportional to the size of the slice (which means they will scale to arbitrarily large CSV data).
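Under the hood, such an index is essentially a table of byte offsets marking where each row starts; reaching row N is then a single seek plus one row parse. A simplified Python sketch (it assumes no embedded newlines inside fields, which a real CSV index must handle):

```python
import io

def build_index(f):
    """Record the offset at which each row begins."""
    offsets = []
    pos = 0
    for line in f:
        offsets.append(pos)
        pos += len(line)
    return offsets

def read_row(f, offsets, n):
    """Jump straight to row n, skipping all rows before it."""
    f.seek(offsets[n])
    return f.readline().rstrip("\n")

data = io.StringIO("a,1\nb,2\nc,3\nd,4\n")
idx = build_index(data)
row = read_row(data, idx, 2)  # parses only one row, not the first three
```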

Switching gears a little bit, you might not always want to see every column in the CSV data. In this case, maybe we only care about the country, city and population. So let's take a look at 10 random rows:

$ xsv select Country,AccentCity,Population worldcitiespop.csv \
    | xsv sample 10 \
    | xsv table
Country  AccentCity       Population
cn       Guankoushang
za       Klipdrift
ma       Ouled Hammou
fr       Les Gravues
la       Ban Phadèng
de       Lüdenscheid      80045
qa       Umm ash Shubrum
bd       Panditgoan
us       Appleton
ua       Lukashenkivske

Whoops! It seems some cities don't have population counts. How pervasive is that?

$ xsv frequency worldcitiespop.csv --limit 5
field,value,count
Country,cn,238985
Country,ru,215938
Country,id,176546
Country,us,141989
Country,ir,123872
City,san jose,328
City,san antonio,320
City,santa rosa,296
City,santa cruz,282
City,san juan,255
AccentCity,San Antonio,317
AccentCity,Santa Rosa,296
AccentCity,Santa Cruz,281
AccentCity,San Juan,254
AccentCity,San Miguel,254
Region,04,159916
Region,02,142158
Region,07,126867
Region,03,122161
Region,05,118441
Population,(NULL),3125978
Population,2310,12
Population,3097,11
Population,983,11
Population,2684,11
Latitude,51.15,777
Latitude,51.083333,772
Latitude,50.933333,769
Latitude,51.116667,769
Latitude,51.133333,767
Longitude,23.8,484
Longitude,23.2,477
Longitude,23.05,476
Longitude,25.3,474
Longitude,23.1,459

(The xsv frequency command builds a frequency table for each column in the CSV data. This one only took 5 seconds.)
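Building a frequency table is a straightforward counting pass over each column; in Python, collections.Counter captures the idea (the (NULL) marker for empty values mirrors the output above):

```python
import csv, io
from collections import Counter

def frequency_tables(csv_text, limit=5):
    """Count the values in each column, like `xsv frequency --limit`."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    return {
        field: Counter(row[i] or "(NULL)" for row in body).most_common(limit)
        for i, field in enumerate(header)
    }

tables = frequency_tables("Country,Population\ncn,\nru,\ncn,100\n")
```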

So it seems that most cities do not have a population count associated with them at all. No matter—we can adjust our previous command so that it only shows rows with a population count:

$ xsv search -s Population '[0-9]' worldcitiespop.csv \
    | xsv select Country,AccentCity,Population \
    | xsv sample 10 \
    | xsv table
Country  AccentCity       Population
es       Barañáin         22264
es       Puerto Real      36946
at       Moosburg         4602
hu       Hejobaba         1949
ru       Polyarnyye Zori  15092
gr       Kandíla          1245
is       Ólafsvík         992
hu       Decs             4210
bg       Sliven           94252
gb       Leatherhead      43544

Erk. Which country is "at"? No clue, but the Data Science Toolkit has a CSV file called countrynames.csv. Let's grab it and do a join against our sampled rows above (saved as sample.csv) so we can see which countries these are:

$ curl -LO https://gist.githubusercontent.com/anonymous/063cb470e56e64e98cf1/raw/98e2589b801f6ca3ff900b01a87fbb7452eb35c7/countrynames.csv
$ xsv headers countrynames.csv
1   Abbrev
2   Country
$ xsv join --no-case Country sample.csv Abbrev countrynames.csv | xsv table
Country  AccentCity       Population  Abbrev  Country
es       Barañáin         22264       ES      Spain
es       Puerto Real      36946       ES      Spain
at       Moosburg         4602        AT      Austria
hu       Hejobaba         1949        HU      Hungary
ru       Polyarnyye Zori  15092       RU      Russian Federation | Russia
gr       Kandíla          1245        GR      Greece
is       Ólafsvík         992         IS      Iceland
hu       Decs             4210        HU      Hungary
bg       Sliven           94252       BG      Bulgaria
gb       Leatherhead      43544       GB      Great Britain | UK | England | Scotland | Wales | Northern Ireland | United Kingdom

Whoops, now we have two columns called Country and an Abbrev column that we no longer need. This is easy to fix by re-ordering columns with the xsv select command:

$ xsv join --no-case Country sample.csv Abbrev countrynames.csv \
    | xsv select 'Country[1],AccentCity,Population' \
    | xsv table
Country                                                                              AccentCity       Population
Spain                                                                                Barañáin         22264
Spain                                                                                Puerto Real      36946
Austria                                                                              Moosburg         4602
Hungary                                                                              Hejobaba         1949
Russian Federation | Russia                                                          Polyarnyye Zori  15092
Greece                                                                               Kandíla          1245
Iceland                                                                              Ólafsvík         992
Hungary                                                                              Decs             4210
Bulgaria                                                                             Sliven           94252
Great Britain | UK | England | Scotland | Wales | Northern Ireland | United Kingdom  Leatherhead      43544

Perhaps we can do this with the original CSV data? Indeed we can—because joins in xsv are fast.

$ xsv join --no-case Abbrev countrynames.csv Country worldcitiespop.csv \
    | xsv select '!Abbrev,Country[1]' \
    > worldcitiespop_countrynames.csv
$ xsv sample 10 worldcitiespop_countrynames.csv | xsv table
Country                      City                   AccentCity             Region  Population  Latitude    Longitude
Sri Lanka                    miriswatte             Miriswatte             36                  7.2333333   79.9
Romania                      livezile               Livezile               26      1985        44.512222   22.863333
Indonesia                    tawainalu              Tawainalu              22                  -4.0225     121.9273
Russian Federation | Russia  otar                   Otar                   45                  56.975278   48.305278
France                       le breuil-bois robert  le Breuil-Bois Robert  A8                  48.945567   1.717026
France                       lissac                 Lissac                 B1                  45.103094   1.464927
Albania                      lumalasi               Lumalasi               46                  40.6586111  20.7363889
China                        motzushih              Motzushih              11                  27.65       111.966667
Russian Federation | Russia  svakino                Svakino                69                  55.60211    34.559785
Romania                      tirgu pancesti         Tirgu Pancesti         38                  46.216667   27.1

The !Abbrev,Country[1] syntax means, "remove the Abbrev column and remove the second occurrence of the Country column." Since we joined with countrynames.csv first, the first Country name (fully expanded) is now included in the CSV data.

This xsv join command takes about 7 seconds on my machine. The performance comes from constructing a very simple hash index of one of the CSV data files given. The join command does an inner join by default, but it also has left, right and full outer join support too.
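The "simple hash index" behind join amounts to loading one file's join column into a dictionary and probing it while streaming the other file. A minimal Python sketch of a case-insensitive inner hash join (illustrative only; xsv's implementation differs in detail):

```python
from collections import defaultdict

def hash_join(left, left_key, right, right_key):
    """Inner join: index `left` on its key column, then probe while scanning `right`."""
    index = defaultdict(list)
    for row in left:
        index[row[left_key].lower()].append(row)
    for row in right:
        for match in index.get(row[right_key].lower(), []):
            yield match + row

countries = [["ES", "Spain"], ["HU", "Hungary"]]
cities = [["es", "Barañáin"], ["hu", "Decs"], ["xx", "Nowhere"]]
joined = list(hash_join(countries, 0, cities, 0))  # "xx" has no match and is dropped
```

Because only the indexed file has to fit in memory, it pays to index the smaller file (here, countrynames.csv) and stream the larger one.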

Installation

Binaries for Windows, Linux and Mac are available from Github.

If you're a Mac OS X Homebrew user, then you can install xsv from homebrew-core:

$ brew install xsv

If you're a Nix/NixOS user, you can install xsv from nixpkgs:

$ nix-env -i xsv

Alternatively, you can compile from source by installing Cargo (Rust's package manager) and installing xsv using Cargo:

$ cargo install xsv

Compiling from this repository also works similarly:

git clone git://github.com/BurntSushi/xsv
cd xsv
cargo build --release

Compilation will probably take a few minutes depending on your machine. The binary will end up in ./target/release/xsv.

Benchmarks

I've compiled some very rough benchmarks of various xsv commands.

Motivation

Here are several valid criticisms of this project:

  1. You shouldn't be working with CSV data because CSV is a terrible format.
  2. If your data is gigabytes in size, then CSV is the wrong storage type.
  3. Various SQL databases provide all of the operations available in xsv with more sophisticated indexing support. And the performance is a zillion times better.

I'm sure there are more criticisms, but the impetus for this project was a 40GB CSV file that was handed to me. I was tasked with figuring out the shape of the data inside of it and coming up with a way to integrate it into our existing system. It was then that I realized that every single CSV tool I knew about was woefully inadequate. They were just too slow or didn't provide enough flexibility. (Another project I worked on comprised a few dozen CSV files. They were smaller than 40GB, but they were each supposed to represent the same kind of data. Yet they all had different, unintuitive column names. Useful CSV inspection tools were critical here—and they had to be reasonably fast.)

The key ingredients for helping me with my task were indexing, random sampling, searching, slicing and selecting columns. All of these things made dealing with 40GB of CSV data a bit more manageable (or dozens of CSV files).

Getting handed a large CSV file once was enough to launch me on this quest. From conversations I've had with others, CSV data files this large don't seem to be a rare event. Therefore, I believe there is room for a tool that has a hope of dealing with data that large.

More Companies That No Longer Require a Degree


With college tuition soaring nationwide, many Americans don’t have the time or money to earn a college degree. That doesn’t mean your job prospects are diminished, however: a growing number of companies offer well-paying jobs to those with non-traditional education or a high-school diploma.

“When you look at people who don’t go to school and make their way in the world, those are exceptional human beings. And we should do everything we can to find those people,” said Google’s former SVP of People Operations Laszlo Bock.

“Academic qualifications will still be taken into account and indeed remain an important consideration when assessing candidates as a whole, but will no longer act as a barrier to getting a foot in the door,” added Maggie Stilwell, Ernst and Young’s managing partner for talent.

Google and Hilton are just two of the champion companies who realize that book smarts don’t necessarily equal strong work ethic, grit and talent. Whether you have your GED and are looking for a new opportunity or charting your own path beyond the traditional four-year college route, here are 15 companies that have said they do not require a college diploma for some of their top jobs. Your dream role awaits!

1. Google
Company Rating: 4.4
Hiring For: Product Manager, Recruiter, Software Engineer, Product Marketing Manager, Research Scientist, Mechanical Engineer, Developer Relations Intern, UX Engineer, SAP Cloud Consultant, Administrative Business Partner & more.
Where Hiring: Mountain View, CA; Austin, TX; Indianapolis, IN; San Francisco, CA; Pryor, OK; Chicago, IL and more
What Employees Say: “There a huge diversity of work ranging from defending independent journalism worldwide (Google Project Shield) to crisis response during disasters (see Maps during Hurricane Sandy or Tsunamis), to the best machine learning experts and projects in the world, to more mundane revenue-driving projects in advertising, there’s really something for everybody.” —Current Software Engineer II

See Open Jobs

2. EY (UK)
Company Rating: 3.7
Hiring For: Assurance Services Senior, Risk Advisor, Experience Management Manager, Tax Services Senior, Financial Services Senior Manager, Auditor, Risk Management Operations & Quality Compliance Associate, Payroll Operations Analyst Associate & more.
Where Hiring: Alpharetta, GA; San Francisco, CA; Toronto; Boston, MA; New York, NY; Cleveland, OH; Secaucus, NJ and more
What Employees Say: “The people, the flexibility and many great assignments. This is a place that really takes care of its people and is regularly reaching out to understand what would make a better experience for us.” —Current Employee

See Open Jobs

3. Penguin Random House
Company Rating: 3.8
Hiring For: Marketing Designer, Publicity Assistant, Senior Manager of Finance, Production Assistant, Senior Editor, Production Editor, Art Director & more.
Where Hiring: New York, NY; London, England; Colorado Springs, CO and more
What Employees Say: “Being a large corporation, the benefits at PRH are great. You learn a great deal about the industry as PRH is among the top few publishing houses in the world. There can be strong mentors depending on the department you’re in and the supervisor you work for.” —Current Employee

See Open Jobs

4. Costco Wholesale
Company Rating: 3.9
Hiring For: Cashier, Stocker, Pharmacy Sales Assistant, Bakery Wrapper, Cake Decorator, Licensed Optician, Cashier Assistant, Depot Solutions Functional Analyst, Forklift Driver, Seasonal Help & more.
Where Hiring: Baton Rouge, LA; Vallejo, CA; Kalamazoo, MI; Issaquah, WA and more
What Employees Say: “Very affordable high-quality health insurance benefits even for PT employees. Great for working parents who split up child care and need coverage. The key to succeeding at Costco is to work hard, have a good attitude and be nice to people. It’s hard work and fasted paced, you have to be down for that to succeed.” —Current Cashier Assistant

See Open Jobs

5. Whole Foods
Company Rating: 3.5
Hiring For: Grocery Team Member, Cashier, Bakery Team Member, Whole Body Team member, Specialty Team Member, Part Time Grocery Team Member, Chef, Seafood Team Member & more.
Where Hiring: Napa, CA; Petaluma, CA; Tigard, OR; Wichita, KS; Austin, TX; Portland, OR and more
What Employees Say: “Autonomy, freedom to be creative, free food pretty much every day, GREAT people to work with and great customers, awesome benefits, paid time off that accumulates quickly and you can use whenever you want.” —Former Employee

See Open Jobs

6. Hilton
Company Rating: 4.0
Hiring For: Event Manager, Front Office Manager, Housekeeper, Hotel Manager, Assistant Director of Food & Beverage, On-Call Banquet Server, International Sales Coordinator, Security Officer, Barback & more.
Where Hiring: San Rafael, CA; Napa, CA; Indianapolis, IN; Tampa, FL; Madison Heights, MI; Augusta, GA; Chicago, IL and more
What Employees Say: “I started as a line level employee in the hospitality industry, joined Hilton 20 years ago and just celebrated my 5th year as a General Manager in a full-service hotel. Hilton Worldwide is a company dedicated to developing talent at every level in the organization and I owe my success and career to their continual investment in me.” —Current General Manager

See Open Jobs

7. Publix
Company Rating: 3.7
Hiring For: Pharmacist, Retail Set-Up Coordinator, Maintenance Technician, Job Fair, In-House Maintenance Technician, Prepared Food Clerks, Assistant Pharmacy Manager, Beverage Server & more.
Where Hiring: Lakeland, FL; Atlanta, GA; Deerfield Beach, FL and more
What Employees Say: “As an associate, I really am happy to see that even the managers work right next to us and are never too busy to hear your concerns. Makes for a friendly environment to work in.”—Current Employee

See Open Jobs

8. Apple
Company Rating: 4.0
Hiring For: Genius, Design Verification Engineer, Engineering Project Manager, iPhone Buyer, Apple Technical Specialist, AppleCare at Home Team Manager, Apple TV Product Design Internship, Business Traveler Specialist, Part Time Reseller Specialist & more.
Where Hiring: Santa Clara, CA; Austin, TX; Las Vegas, NV; Charleston, SC; Chapel Hill, NC; Maiden, NC and more
What Employees Say: “The company is AMAZING. There are limitless advancement opportunities. You work with some very cool people and the leadership cares about your development. You may get coaching but you never get battered or belittled. The pay is decent and the benefits include, 401(k) match, stock purchase options, product discounts and discounts on services across many different areas, education assistance, child care assistance, paid vacation, sick time and other time off options, health club Reimbursement or bike cost set off. You get 1.5 time for OT and it’s pretty much unlimited as long as you don’t exceed 12 hours in a day or 59 total in a week.” —Current At-Home Advisor

See Open Jobs

9. Starbucks
Company Rating: 3.8
Hiring For: Barista, Shift Supervisor, Store Manager & more.
Where Hiring: Dublin, GA; San Francisco, CA; Compton, CA; Seattle, WA; Chicago, IL; Arcadia, CA; Denver, CO; Boston, MA and more
What Employees Say: “The benefits are out of sight. I was offered Starbucks stock after my first year, as well as 401k through Fidelity, and a superb Blue Cross Blue Shield health insurance plan. You can cover your whole family with that plan, and it can include domestic partners. I got a pound of free coffee every week and free coffee all day (although I think that was specific to my store, which bent the rules). There’s also an Employee Assistance Hotline which you can call if you’re having issues in your personal life. And HR is really responsive–they won’t see you as a troublemaker if you’re legitimately having an issue. They will handle it. Also, sexual orientation and gender identity are included in their anti-discrimination policy. None of the gay or lesbian people on my staff got crap for it, even though about half the staff was quietly conservative Christian and Republican. If you’re a people person, you develop relationships with the regulars and it’s fun to make their day. I felt it was pretty rewarding to make drinks. I loved the artistic side of it. And again, the free coffee…just awesome.” —Former Barista

See Open Jobs

10. Nordstrom
Company Rating: 3.6
Hiring For: Retail Sales, Cleaning, Stock and Fulfillment, Bartender, Barista, Spa Esthetician, Cosmetics Beauty Stylist, Seasonal Alterations and Tailor Shop Apprentice, Sr. Site Reliability Engineer, Recruiter, Social Media Manager & more.
Where Hiring: Phoenix, AZ; Las Vegas, NV; Scottsdale, AZ; Washington, DC; Arlington, VA; Bethesda, MD and more
What Employees Say: “Very fun job to work part-time as a student. Depending on how hard you want to work, you will be compensated fairly as it is a commission vs draw pay schedule. Lots of incentives to work harder such as free clothes or even cash. Managers are usually very flexible with your personal life and hours. You will meet a lot of great people working here. If you have the desire to move up in the company, the managers will have no issues with helping you in doing so.” —Former Sales Associate

See Open Jobs

11. Home Depot
Company Rating: 3.5
Hiring For: Department Supervisor, Customer Service Sales, Store Support, Cashier, Assistant Store Manager, Outside Sales Consultant, Warehouse Associate, Product Manager, Analyst & more.
Where Hiring: Colonial Heights, VA; South Plainfield, NJ; San Diego, CA; Kennesaw, GA; Atlanta, GA; Daly City, CA and more
What Employees Say: “The Home Depot has a solid moral compass. They aren’t about sacrificing their ethics for the sake of a sale, which I love. They also hire a diverse group of people and give a lot back to the community. They offer a lot of benefits and little treats for their employees. Although they push their associates to meet quota, they’re realistic about what can and can’t be accomplished.” —Current Merchandising Execution Associate

See Open Jobs

12. IBM
Company Rating: 3.6
Hiring For: Financial Blockchain Engineer, Lead Recruiter, Contract & Negotiations Professional, Product Manager, Entry Level System Services Representative, Research Staff Member, Client Solution Executive & more.
Where Hiring: San Francisco, CA; Raleigh-Durham, NC; Austin, TX; New York, NY and more
What Employees Say: “Excellent opportunities for career advancement. Flexible working hours as long as you make your targets. People are terrific. Only the strong and motivated will survive and thrive. More training and education that you have time for. Resources are abundant if you know how to leverage them. Medical benefits are outstanding. Loved working at IBM.” —Former Sales Representative

See Open Jobs

13. Bank of America
Company Rating: 3.5
Hiring For: Client Service Representative, Client Associate, Analyst, Executive Assistant, Relationship Manager, Consumer Banking Market Manager, Treasury Solutions Analyst, Small Business Consultant & more.
Where Hiring: Tulsa, OK; Wilmington, DE; New York, NY; Plymouth, MI; Grand Rapids, MI; Des Moines, IA; Cincinnati, OH; Cleveland, OH; Lebanon, NH; Philadelphia, PA and more
What Employees Say: “Company provides great benefits: vacation time, sick time, medical insurances are decent. Most other companies aren’t able to compare to the time off benefits.” —Current Operations Team Manager

See Open Jobs

14. Chipotle
Company Rating: 3.4
Hiring For: District Manager, Kitchen Manager, Service Manager, Restaurant Team Member, General Manager, Restaurant Shift Leader & more.
Where Hiring: Sandy, UT; Woburn, MA; Pleasant Hill, CA; Kansas City, MO; Estero, FL; Colorado Springs, CO; Philadelphia, PA; Alameda, CA; Denver, CO; Minneapolis, MN; East Point, GA; Garner, NC and more
What Employees Say: “Free meals during shifts. 50% a meal to take home. Ease of requesting time off. Flexible hours. Can learn multiple positions. Room for growth if you really work towards it and want it.” —Former Employee

See Open Jobs

15. Lowe’s
Company Rating: 3.3
Hiring For: Plumbing Associate, Commercial Sales Loader, Lumber Associate, Front End Cashier-Seasonal, Internet Fulfillment, Seasonal Customer Service Associate, Delivery Puller, Installed Sales Manager & more.
Where Hiring: Westborough, MA; Omaha, NE; Mooresville, NC; Silverthorne, CO; Madison Heights, VA; San Francisco, CA; Yonkers, NY; Paris, TN; Alcoa, TN; Ridgeland, MS; Columbus, MS and more
What Employees Say: “The people are amazing and the system works pretty well. I haven’t had any bad days unless it was having to deal with a customer. Advancement was quick for me. I went from part-time to Manager within 2 years.” —Current Department Manager

See Open Jobs


Lisp for the Web


Lisp for the web - the book

The following article and code have been updated as the book Lisp for the Web. You can get it for any price you want at Leanpub.

by Adam Tornhill, April 2008

With his essay Beating the Averages, Paul Graham told the story of how his web start-up Viaweb outperformed its competitors by using Lisp. Lisp? Did I parse that correctly? That ancient language with all those scary parentheses? Yes, indeed! And with the goal of identifying its strengths and what they can do for us, I'll put Lisp to work developing a web application. In the process we'll find out how a 50-year-old language can be so well-suited for modern web development and yes, it's related to all those parentheses.

What to expect

Starting from scratch, we'll develop a three-tier web application. I'll show how to:

  • utilize powerful open source libraries for expressing dynamic HTML and JavaScript in Lisp,
  • develop a small, embedded domain specific language tailored for my application,
  • extend the typical development cycle by modifying code in a running system and executing code during compilation,
  • and finally migrate from data structures in memory to persistent objects using a third party database.

I'll do this in a live system transparent to the users of the application. Because Lisp is so high-level, I'll be able to achieve everything in just around 70 lines of code.

This article will not teach you Common Lisp (for that purpose I recommend Practical Common Lisp). Instead, I'll give a short overview of the language and try to explain the concepts as I introduce them, just enough to follow the code. The idea is to convey a feeling of how it is to develop in Lisp rather than focusing on the details.

The Lisp story

Lisp is actually a family of languages discovered by John McCarthy 50 years ago. The characteristic of Lisp is that Lisp code is made out of Lisp data structures with the practical implication that it is not only natural, but also highly effective to write programs that write programs. This feature has allowed Lisp to adapt over the years. For example, as object-oriented programming became popular, powerful object systems could be implemented in Lisp as libraries without any change to the core language. Later, the same proved to be true for aspect-oriented programming.

This idea is not only applicable to whole paradigms of programming. Its true strength lies in solving everyday problems. With Lisp, it's straightforward to build up a domain-specific language allowing us to program as close to the problem domain as our imagination allows. I'll illustrate the concept soon, but before we kick off, let's look closer at the syntax of Lisp.

Crash course in Lisp

What Graham used for Viaweb was Common Lisp, an ANSI-standardized language, which we'll use in this article too (the other main contender is Scheme, which is considered cleaner and more elegant, but has a much smaller library).

Common Lisp is a high-level interactive language that may be either interpreted or compiled. You interact with Lisp through its top-level. The top-level is basically a prompt. On my system it looks like this:

        CL-USER>

Through the top-level, we can enter expressions and see the results (the values returned by the top-level are shown after each expression):

        CL-USER>(+ 1 2 3)
        6

As we see in the example, Lisp uses prefix notation. A parenthesized expression is referred to as a form. When fed a form such as (+ 1 2 3), Lisp generally treats the first element (+) as a function and the rest as arguments. The arguments are evaluated from left to right and may themselves be function calls:

        CL-USER>(+ 1 2 (/ 6 2))
        6

We can define our own functions with defun:

        CL-USER>(defun say-hello (to)
                    (format t "Hello, ~a" to))

Here we're defining a function say-hello, taking one argument: to. The format function is used to print a greeting and resembles printf on steroids. Its first argument is the output stream; here we're using t as shorthand for standard output. The second argument is a string, which in our case contains an embedded directive ~a instructing format to consume one argument and output it in human-readable form. We can call our function like this:

        CL-USER>(say-hello "ACCU")
        Hello, ACCU
        NIL

The first line is the side effect, printing "Hello, ACCU"; NIL is the return value of our function. By default, Common Lisp returns the value of the last expression. From here we can redefine say-hello to return its greeting instead:

        CL-USER>(defun say-hello (to)
                     (format nil "Hello, ~a" to))

With nil as its destination, format simply returns its resulting string:

        CL-USER>(say-hello "ACCU")
        "Hello, ACCU"

Now we've gotten rid of the side-effect. Programming without side-effects is in the vein of functional programming, one of the paradigms supported by Lisp. Lisp is also dynamically typed. Thus, we can feed our function a number instead:

        CL-USER>(say-hello 42)
        "Hello, 42"

In Lisp, functions are first-class citizens. That means we can create them just like any other object, and we can pass them as arguments to other functions. Functions taking other functions as arguments are called higher-order functions. One example is mapcar, which takes a function as its first argument and applies it successively to the elements of one or more given lists:

        CL-USER>(mapcar #'say-hello (list "ACCU" 42 "Adam"))
        ("Hello, ACCU" "Hello, 42" "Hello, Adam")

The funny #' is just a shortcut for getting at the function object. As you see above, mapcar collects the result of each function call into a list, which is its return value. This return value may of course serve as argument to yet another function:

        CL-USER>(sort (mapcar #'say-hello (list "ACCU" 42 "Adam")) #'string-lessp)
        ("Hello, 42" "Hello, ACCU" "Hello, Adam")

Lisp itself isn't hard, although it may take some time to wrap one's mind around the functional style of programming. As you see, Lisp expressions are best read inside-out. But the real secret to understanding Lisp syntax is to realize that the language doesn't have one; what we've been entering above are basically parse trees, generated by compilers in other languages. And, as we'll see soon, exactly this feature makes Lisp suitable for metaprogramming.

The Brothers are History

Remember the hot gaming discussions 20 years ago? "Giana Sisters" really was way better than "Super Mario Bros", wasn't it? We'll delegate the question to the wise crowd by developing a web application. Our web application will allow users to add and vote for their favourite retro games. A screenshot of the end result is provided in Figure 1 below.

Retro Games front page

From now on, I start to persist my Lisp code in text files instead of just entering expressions into the top level. Further, I define a package for my code. Packages are similar to namespaces in C++ or packages in Java and help prevent name collisions (the main distinction is that packages in Common Lisp are first-class objects).

        (defpackage :retro-games
             (:use :cl :cl-who :hunchentoot :parenscript))

The new package is named :retro-games and I also specify other packages that we'll use initially:

  • CL is Common Lisp's standard package containing the whole language.
  • CL-WHO is a library for converting Lisp expressions into XHTML.
  • Hunchentoot is a web-server, written in Common Lisp itself, and provides a toolkit for building dynamic web sites.
  • ParenScript allows us to compile Lisp expressions into JavaScript. We'll use this for client-side validation.

With my package definition in place, I'll put the rest of the code inside it by switching to the :retro-games package:

        (in-package :retro-games)

Most top levels indicate the current package in their prompt. On my system the prompt now looks like this:

        RETRO-GAMES>

Representing Games

With the package in place, we can return to the problem. It seems to require some representation of a game and I'll choose to abstract it as a class:

        (defclass game ()
            ((name  :initarg  :name)
             (votes :initform 0)))

The expression above defines the class game without any user-specified superclasses, hence the empty list () as second argument. A game has two slots (slots are similar to attributes or members in other languages): a name and the number of accumulated votes. To create a game object I invoke make-instance and pass it the name of the class to instantiate:

        RETRO-GAMES>(setf many-lost-hours (make-instance 'game :name "Tetris"))
        #<GAME @ #x7213da32>

Because I specified an initial argument in my definition of the name slot, I can pass this argument directly and initialize that slot to "Tetris". The votes slot doesn't have an initial argument. Instead I specify the code I want to run during instantiation to compute its initial value through :initform . In this case the code is trivial, as I only want to initialize the number of votes to zero. Further, I use setf to assign the object created by make-instance to the variable many-lost-hours.

Now that we have an instance of game, we would like to do something with it. We could of course write code ourselves to access the slots. However, there's a more lispy way; defclass provides the possibility to automatically generate accessor functions for our slots:

        (defclass game ()
          ((name  :reader   name
                  :initarg  :name)
           (votes :accessor votes
                  :initform 0)))

The option :reader in the name slot will automatically create a read function and the option :accessor used for the votes slot will create both read and write functions. Lisp is pleasantly uniform in its syntax and these generated functions are invoked just like any other function:

        RETRO-GAMES>(name many-lost-hours)
        "Tetris"
        RETRO-GAMES>(votes many-lost-hours)
        0
        RETRO-GAMES>(incf (votes many-lost-hours))
        1
        RETRO-GAMES>(votes many-lost-hours)
        1

The only new function here is incf which, when given one argument, increases its value by one. We can encapsulate this mechanism in a method used to vote for the given game:

        (defmethod vote-for (user-selected-game)
	   (incf (votes user-selected-game)))

The top-level allows us to immediately try it out and vote for Tetris:

       RETRO-GAMES>(votes many-lost-hours)
       1
       RETRO-GAMES>(vote-for many-lost-hours)
       2
       RETRO-GAMES>(votes many-lost-hours)
       2

A prototypic back end

Before we can jump into the joys of generating web pages, we need a back end for our application. Because Lisp makes it so easy to modify existing applications, it's common to start out really simple and let the design evolve as we learn more about the problem we're trying to solve. Thus, I'll start by using a list in memory as a simple, non-persistent storage.

        (defvar *games* '())

The expression above defines and initializes the global variable (actually the Lisp term is special variable) *games* to an empty list. The asterisks aren't part of the syntax; it's just a naming convention for globals. Lists may not be the most efficient data structure for all problems, but Common Lisp has great support for lists and they are easy to work with. Later we'll change to a real database and, with that in mind, I encapsulate the access to *games* in some small functions:

        (defun game-from-name (name)
	  (find name *games* :test #'string-equal 
	                     :key  #'name))

Our first function, game-from-name, is implemented in terms of find. find takes an item and a sequence. Because we're comparing strings, I tell find to use the function string-equal for comparison (remember, #' is a shortcut to refer to a function). I also specify the key to compare. In this case, we're interested in comparing the value returned by the name function on each game object.

If there's no match, find returns NIL, which evaluates to false in a boolean context. That means we can reuse game-from-name when we want to know if a game is stored in the *games* list. However, we want to be clear about our intent:
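To see find's keyword arguments in isolation, here's a top-level session outside our application (plain strings instead of game objects, so no :key is needed). Note that string-equal compares case-insensitively, and that a miss returns NIL:

        RETRO-GAMES>(find "tetris" (list "Pong" "Tetris") :test #'string-equal)
        "Tetris"
        RETRO-GAMES>(find "Elite" (list "Pong" "Tetris") :test #'string-equal)
        NIL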

        (defun game-stored? (game-name)
	 (game-from-name game-name))

As illustrated in Figure 1, we want to present the games sorted on popularity. Using Common Lisp's sort function this is pretty straightforward; we only have to take care because, for efficiency reasons, sort is destructive. That is, sort is allowed to modify its argument. We can preserve our *games* list by passing a copy to sort. I tell sort to return a list sorted in descending order, based on the value returned by the votes function invoked on each game:

        (defun games ()
	 (sort (copy-list *games*) #'> :key #'votes))

So much for the queries. Let's define one more utility for actually adding games to our storage:

        (defun add-game (name)
	 (unless (game-stored? name)
	   (push (make-instance 'game :name name) *games*)))

push is a modifying operation and it prepends the game instantiated by make-instance to the *games* list. Let's try it all out at the top level.

      RETRO-GAMES>(games)
      NIL
      RETRO-GAMES>(add-game "Tetris")
      (#<GAME @ #x71b943c2>)
      RETRO-GAMES>(game-from-name "Tetris")
      #<GAME @ #x71b943c2>
      RETRO-GAMES>(add-game "Tetris")
      NIL
      RETRO-GAMES>(games)
      (#<GAME @ #x71b943c2>)
      RETRO-GAMES>(mapcar #'name (games))
      ("Tetris")

The values returned to the top level may not look too informative; it's basically the printed representation of a game object. Common Lisp allows us to customize how an object should be printed, but we will not go into the details. Instead, with this prototypic back end in place, we're prepared to enter the web.

Generating HTML dynamically

The first step in designing an embedded domain specific language is to find a Lisp representation of the target language. For HTML this is really simple as both HTML and Lisp are represented in tree structures, although Lisp is less verbose. Here's an example using the CL-WHO library:

       (with-html-output (*standard-output* nil :indent t)
	   (:html
              (:head
                 (:title "Test page"))
              (:body
                 (:p "CL-WHO is really easy to use"))))

This code will expand into the following HTML, which is output to *standard-output*:

        <html>
          <head>
            <title>Test page</title>
          </head>
          <body>
            <p>CL-WHO is really easy to use</p>
          </body>
        </html>

CL-WHO also allows us to embed Lisp expressions, setting the scene for dynamic web pages.
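For instance, we can loop over a list and emit one list item per element. Inside a CL-WHO body, the local macro htm switches back to HTML generation and str outputs an evaluated string (this small demo reuses our earlier say-hello function):

        (with-html-output (*standard-output* nil :indent t)
          (:ul
           (dolist (to (list "ACCU" "Adam"))
             (htm (:li (str (say-hello to)))))))

This produces a ul element with one li per name, computed at runtime.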

Macros: Fighting the evils of code duplication

Although CL-WHO does provide a tighter representation than raw HTML we're still facing the potential risk of code duplication; the html, head, and body tags form a pattern that will recur on all pages. And it'll only get worse as we start to write strict and validating XHTML 1.0, where we have to include more tags and attributes and, of course, start every page with that funny DOCTYPE line.

Further, if you look at Figure 1 you'll notice that the retro games page has a header with a picture of that lovely Commodore (photo by Bill Bertram - thanks!) and a strap line. I want to be able to define that header once and have all my pages using it automatically. The problem screams for a suitable abstraction and this is where Lisp differs from other languages. In Lisp, we can actually take on the role of a language designer and extend the language with our own syntax. The feature that allows this is macros. Syntactically, macros look like functions, but are entirely different beasts. Sure, just like functions macros take arguments. The difference is that the arguments to macros are source code, because macros are used by the compiler to generate code.

Macros can be a conceptual challenge as they erase the line between compile time and runtime. What macros do is expand into code that is actually compiled. In their expansion, macros have access to the whole language, including other macros, and may call functions, create objects, etc.

So, let's put this amazing macro mechanism to work by defining a new syntactic construct, the standard-page. A standard-page will abstract away all the XHTML boilerplate code and automatically generate the heading on each page. The macro will take two arguments: the first is the title of the page and the second is the code defining the body of the actual web page. Here's a simple usage example:

       (standard-page (:title "Retro Games")
		      (:h1 "Top Retro Games")
		      (:p "We'll write the code later..."))

Much of the macro will be straightforward CL-WHO constructs. Using the backquote syntax (the ` character), we can specify a template for the code we want to generate:

       (defmacro standard-page ((&key title) &body body)
	 `(with-html-output-to-string (*standard-output* nil :prologue t :indent t)
	   (:html :xmlns "http://www.w3.org/1999/xhtml"
		  :xml\:lang "en" 
		  :lang "en"
		  (:head 
		   (:meta :http-equiv "Content-Type" 
			  :content    "text/html;charset=utf-8")
		   (:title ,title)
		   (:link :type "text/css" 
			  :rel "stylesheet"
			  :href "/retro.css"))
		  (:body 
		   (:div :id "header" ; Retro games header
			 (:img :src "/logo.jpg" 
			       :alt "Commodore 64" 
			       :class "logo")
			 (:span :class "strapline" 
				"Vote on your favourite Retro Game"))
		   ,@body))))

Within the backquoted expression we can use , (comma) to evaluate an argument and ,@ (comma-at) to evaluate and splice a list argument. Remember, the arguments to a macro are code. In this example the first argument title is bound to "Retro Games" and the second argument body contains the :h1 and :p expressions wrapped up in a list. In the macro definition, the code bound to these arguments is simply inserted at the proper places in our backquoted template code.
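The backquote mechanism is plain Common Lisp and easy to experiment with at the top level. Here, ,x evaluates to 1 while ,@y splices the elements of the list (2 3) into place:

        CL-USER>(let ((x 1) (y (list 2 3)))
                  `(list ,x ,@y))
        (LIST 1 2 3)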

The power we get from macros becomes evident as we look at the generated code. The three lines in the usage example above expand into this (note that Lisp symbols are case-insensitive and thus usually presented in uppercase):

       (WITH-HTML-OUTPUT-TO-STRING (*STANDARD-OUTPUT* NIL :PROLOGUE T :INDENT T)  
	(:HTML :XMLNS "http://www.w3.org/1999/xhtml"
	       :|XML:LANG| "en" 
	       :LANG "en"
	       (:HEAD
		(:META :HTTP-EQUIV "Content-Type" 
		       :CONTENT "text/html;charset=utf-8")(:TITLE "Retro Games")
		(:LINK :TYPE "text/css" 
		       :REL "stylesheet" 
		       :HREF "/retro.css"))
	       (:BODY
		(:DIV :ID "header"
		      (:IMG :SRC "/logo.jpg" 
			    :ALT "Commodore 64" 
			    :CLASS "logo")
		      (:SPAN :CLASS "strapline" 
			     "Vote on your favourite Retro Game"))(:H1 "Top Retro Games")
		(:P "We'll write the code later..."))))

This is a big win; all this is code that we don't have to write. Now that we have a concise way to express web-pages with a uniform look, it's time to introduce Hunchentoot.
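By the way, you don't have to take my word for what a macro expands into; Common Lisp's standard macroexpand-1 function performs one step of expansion on a quoted form, so you can inspect it yourself at the top level:

        RETRO-GAMES>(macroexpand-1 '(standard-page (:title "Retro Games")
                                       (:h1 "Top Retro Games")))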

More than an opera

Named after a Zappa sci-fi opera, Edi Weitz's Hunchentoot is a full-featured web server written in Common Lisp. To launch Hunchentoot, we just invoke its start-server function:

        RETRO-GAMES>(start-server :port 8080)

start-server supports several arguments, but we're only interested in specifying a port other than the default port 80. And that's it - the server's up and running. We can test it by pointing a web browser to http://localhost:8080/, which should display Hunchentoot's default page. To actually publish something, we have to provide Hunchentoot with a handler. In Hunchentoot all requests are dynamically dispatched to an associated handler and the framework contains several functions for defining dispatchers. The code below creates a dispatcher and adds it to Hunchentoot's dispatch table:

        (push (create-prefix-dispatcher "/retro-games.htm" 'retro-games) *dispatch-table*)

The dispatcher will invoke the function retro-games whenever a requested URI starts with /retro-games.htm. Now we just have to define the retro-games function that generates the HTML:

       (defun retro-games ()
	 (standard-page (:title "Retro Games")
			(:h1 "Top Retro Games")
			(:p "We'll write the code later...")))

That's it - the retro games page is online. But I wouldn't be too quick to celebrate; while we took care to abstract away repetitive patterns in standard-page, we've just run into another, more subtle form of duplication. The problem is that every time we want to create a new page, we have to explicitly create a dispatcher for our handler. Wouldn't it be nice if Lisp could do that automatically for us? Basically I want to define a function like this:

       (define-url-fn (retro-games)
	 (standard-page (:title "Retro Games")
			(:h1 "Top Retro Games")
			(:p "We'll write the code later...")))

and have Lisp create a handler, associate it with a dispatcher, and put it in the dispatch table as I compile the code. Guess what: using macros, the syntax is ours. All we have to do is reformulate our wishes in a defmacro:

       (defmacro define-url-fn ((name) &body body)
	 `(progn
	    (defun ,name ()
	      ,@body)
	    (push (create-prefix-dispatcher ,(format nil "/~(~a~).htm" name) ',name) *dispatch-table*)))

Now our "wish code" above actually compiles and generates the following Lisp code (the macro arguments end up in the function name and the dispatcher's prefix string):

       (PROGN 
	(DEFUN RETRO-GAMES ()
	  (STANDARD-PAGE (:TITLE "Retro Games")
			 (:H1 "Top Retro Games")
			 (:P "We'll write the code later...")))
	(PUSH (CREATE-PREFIX-DISPATCHER "/retro-games.htm" 'RETRO-GAMES) *DISPATCH-TABLE*))

There are a few interesting things about this macro:

  1. It illustrates that macros can take other macros as arguments. The Lisp compiler will continue to expand the macros and standard-page will be expanded too, writing even more code for us.
  2. Macros may execute code as they expand. The prefix string "/retro-games.htm" is assembled with format during macro expansion time. By using comma, I evaluate the form and there's no trace of it in the generated code - just the resulting string.
  3. A macro must expand into a single form, but we actually need two forms; a function definition and the code for creating a dispatcher. progn solves this problem by wrapping the forms in a single form and then evaluating them in order.
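progn itself is easy to try at the top level; it evaluates its forms in order and returns the value of the last one:

        CL-USER>(progn (+ 1 2) (* 3 4))
        12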

Putting it together

Phew, that was a lot of Lisp in a short time. But using the abstractions we've created, we're able to throw together the application in no time. Let's code out the main page as it looks in Figure 1 above:

       (define-url-fn (retro-games)
	 (standard-page (:title "Top Retro Games")
			(:h1 "Vote on your all time favourite retro games!")
			(:p "Missing a game? Make it available for votes " (:a :href "new-game.htm" "here"))
			(:h2 "Current stand")
			(:div :id "chart" ; For CSS styling of links
			      (:ol
			       (dolist (game (games))
				 (htm  
				  (:li 
				   (:a :href (format nil "vote.htm?name=~a" (name game)) "Vote!")
				   (fmt "~A with ~d votes" (name game) (votes game)))))))))

Here we utilize our freshly developed embedded domain specific language for defining URL functions (define-url-fn) and creating standard-pages. The following lines are straightforward XHTML generation, including a link to new-game.htm, a page we haven't specified yet. We will use some CSS to style the Vote! links to look and feel like buttons, which is why I wrap the list in a div tag.

The first embedded Lisp code is dolist. We use it to create each game item in the ordered HTML list. dolist works by iterating over a list, in this case the return value from the games function, subsequently binding each element to the game variable. Using format and the accessor methods on the game object, I assemble the presentation and a destination for Vote!. Here's some sample HTML output from one session:

<div id='chart'>
  <ol>
    <li><a href='vote.htm?name=Super Mario Bros'>Vote!</a> Super Mario Bros with 12 votes</li>
    <li><a href='vote.htm?name=Last Ninja'>Vote!</a> Last Ninja with 11 votes</li>
  </ol>
</div>

As the user presses Vote! we'll get a request for vote.htm with the name of the game attached as a query parameter. Hunchentoot provides a parameter function that, just as you might expect, returns the value of the parameter named by the given string. We pass this value to our back-end abstraction game-from-name and bind the result to a local variable with let:

       (define-url-fn (vote)
	 (let ((game (game-from-name (parameter "name"))))
	   (if game
	       (vote-for game))
	   (redirect "/retro-games.htm")))

After voting for the requested game, Hunchentoot's redirect function takes the client to the updated chart.

Retro Games add game page

Now that we're able to vote, we need some games to vote for. In the code for the retro-games page above, I included a link to new-game.htm. That page is displayed in Figure 2. Basically it contains an HTML form with a text input for the game name:

       (define-url-fn (new-game)
	 (standard-page (:title "Add a new game")
			(:h1 "Add a new game to the chart")
			(:form :action "/game-added.htm" :method "post" 
			       (:p "What is the name of the game?" (:br)
				   (:input :type "text"  
					   :name "name" 
					   :class "txt"))
			       (:p (:input :type "submit" 
					   :value "Add" 
					   :class "btn")))))

As the user submits the form, its data is sent to game-added.htm:

       (define-url-fn (game-added)
	 (let ((name (parameter "name")))
	   (unless (or (null name) (zerop (length name)))
	     (add-game name))
	   (redirect "/retro-games.htm")))

The first line in our URL function should look familiar; just as in our vote function, we fetch the value of the name parameter and bind it to a local variable (name). Here we have to guard against an empty name. After all, there's nothing forcing the user to write anything into the field before submitting the form (we'll see in a minute how to add client-side validation). If we get a valid name, we add it to our database through the add-game function.

Expressing JavaScript in Lisp

Say we want to ensure that the user at least typed something before submitting the form. Can we do that in Lisp? Yes, actually. We can write Lisp code that compiles into JavaScript and we use the ParenScript library for the task.
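As a first taste, ParenScript's ps macro compiles a Lisp form into a string of JavaScript. The exact output formatting below is a sketch and may differ slightly between ParenScript versions:

        RETRO-GAMES>(ps (alert "Hello, ACCU"))
        "alert('Hello, ACCU');"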

Unobtrusive JavaScript is an important design principle and ParenScript supports that too. But in Lisp this becomes less of an issue; I'm not actually writing JavaScript, everything is Lisp. Thus I embed my event handler in the form:

      (:form :action "/game-added.htm" :method "post" 
	     :onsubmit 
	     (ps-inline
	      (when (= name.value "")
		(alert "Please enter a name.")
		(return false)))

This code will compile into the following mixture of HTML and JavaScript:

<form action='/game-added.htm' method='post' 
         onsubmit='javascript:if (name.value == "") {
            alert("Please enter a name.");
            return false;
         }'>

Persistent Objects

Initially we kind of ducked the problem of persistence. To get things up and running as quickly as possible, we used a simple in-memory list as our "database". That's fine for prototyping, but we still want to persist all added games in case we shut down the server. Further, there are some potential threading issues with the current design: Hunchentoot is multithreaded and requests may arrive on different threads. We can solve all that by migrating to a thread-safe database. And with Lisp, design decisions like that are only a macro away; please meet Elephant!

Elephant is a wickedly smart persistent object protocol and database. To actually store things on disk, Elephant supports several back ends such as PostgreSQL and SQLite. In this example I'll use Berkeley DB, simply because it has the best performance with Elephant.

The first step is to open a store controller, which serves as a bridge between Lisp and the back end:

        (open-store '(:BDB "/home/adam/temp/gamedb/"))

Here I just specify that we're using Berkeley DB (:BDB) and give a directory for the database files. Now, let's make some persistent objects. Have a look at our current game class again:

	 (defclass game ()
	   ((name :reader   name 
		  :initarg :name)
	    (votes :accessor votes 
		   :initform 0)))

Elephant provides a convenient defpclass macro that creates persistent classes. The defpclass usage looks very similar to Common Lisp's defclass, but it adds some new features; we'll use :index to specify that we want instances to be retrievable by their slot values. I also add an initial argument to votes, which I use later when transforming our old games into this persistent class:

	 (defpclass persistent-game ()
	   ((name :reader name 
		  :initarg :name 
		  :index t)
	    (votes :accessor votes 
		   :initarg :votes 
		   :initform 0 
		   :index t)))

The Elephant abstraction is really clean; persistent objects are created just like any other object:

	 RETRO-GAMES>(make-instance 'persistent-game :name "Winter Games")
	 #<PERSISTENT-GAME oid:100>

Elephant comes with a set of functions for easy retrieval. If we want all instances of our persistent-game class, it's as simple as this:

	 RETRO-GAMES>(get-instances-by-class 'persistent-game)
	 (#<PERSISTENT-GAME oid:100>)

We can of course keep a reference to the returned list or, because we know we just instantiated a persistent-game , call a method on it directly:

	 RETRO-GAMES>(name (first (get-instances-by-class 'persistent-game)))
	 "Winter Games"

We took care earlier to encapsulate access to the back end, and that pays off now. We just have to change those functions to use the Elephant API instead of working with our *games* list. The query functions are quite simple; because we indexed our name slot, we can use get-instance-by-value to get the matching persistent object:

	  (defun game-from-name (name)
	    (get-instance-by-value 'persistent-game 'name name))

Just like our initial implementation using find, get-instance-by-value returns NIL in case no object with the given name is stored. That means we can keep game-stored? exactly as it is, without any changes. But what about adding a new game? Well, we no longer need to maintain any references to the created objects; the database does that for us. But we have to change add-game to make an instance of persistent-game instead of our old game class. And even though Elephant is thread-safe, we have to ensure that the transactions are atomic. Elephant provides a nice with-transaction macro to solve this problem:

	   (defun add-game (name)
	     (with-transaction ()
			(unless (game-stored? name)
			    (make-instance 'persistent-game :name name))))

Just one final change before we can compile and deploy our new back end: the games function responsible for returning a list of all games sorted on popularity:

	    (defun games ()
	      (nreverse (get-instances-by-range 'persistent-game 'votes nil nil)))

votes is an indexed slot, so we can use get-instances-by-range to retrieve a sorted list. The last two arguments are both nil , which will retrieve all stored games. The returned list will be sorted from lowest score to highest, so I apply nreverse to reverse the list (the n in nreverse indicates that it is a destructive function).
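nreverse is standard Common Lisp and, being destructive, should only be applied to a list we own, like the fresh one returned by get-instances-by-range. At the top level:

        CL-USER>(nreverse (list 1 2 3))
        (3 2 1)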

Remembering the Games

Obviously we want to keep all previously added games. After all, users shouldn't suffer because we decide to change the implementation. So, how do we transform existing games into persistent objects? The simplest way is to map over the *games* list and instantiate a persistent-game with the same slot values as the old games:

	    RETRO-GAMES>(mapcar	#'(lambda (old-game)
				     (make-instance 'persistent-game 
						    :name (name old-game)
						    :votes (votes old-game)))
				*games*)

We could have defined a function for this task using defun but, because it is a one-off operation, I go with an anonymous function, aka a lambda function (the lambda expression in the code above). And that's it - all games have been moved into a persistent database. We can now set *games* to NIL (effectively making all old games available for garbage collection) and even make the *games* symbol history by removing it from the package:

	    RETRO-GAMES> (setf *games* nil)
	    NIL
	    RETRO-GAMES> (unintern '*games*)
	    T

Outro

This article has really just scratched the surface of what Lisp can do. Yet I hope that, if you made it this far, you have seen that behind all those parentheses there's a lot of power. With its macro system, Lisp can basically be what you want it to be.

Due to the dynamic and interactive nature of Lisp it's a perfect fit for prototyping. And because Lisp programs are so easy to evolve, that prototype may end up as a full-blown product one day.

References

The full source code for the Retro Games application is available here .

Matthew Snyder wrote a sequel. His article is available here: Lisp for the Web. Part II

Economists Gear Up to Challenge the Monopolies

UK’s worst-selling map: The empty landscape charted by OS440


A single Scots pine grows from a boulder standing in the middle of Achness waterfall in Glen Cassley in the Highlands. It is a striking sight. Isolated in the fiercely flowing river Cassley, the tree towers above a long stretch of rocks swept by torrents of water. Salmon leap upriver in summer while golden eagles swoop overhead. It is an image of Scotland at its glorious, scenic best and would be expected to attract tourists in their droves. But in Glen Cassley, 50 miles north of Inverness, visitors are conspicuous by their absence.

Indeed, according to the Ordnance Survey, its map of Glen Cassley is the least purchased item in the entire OS Explorer map series. “Getting up there is only for the more hardy of us, perhaps, but it is still not clear why the map should be so unpopular,” said Nick Giles, the managing director of Ordnance Survey Consumer.

Area covered by map 440

The Ordnance Survey now sells 1.7m paper maps a year (an increase on previous years) but is coy about sales of individual maps “for reasons of commercial sensitivity”. However, it recently revealed that its most popular map – Explorer OL17 of Snowdonia and Conwy Valley – sells about 180 times more copies than its worst seller, Explorer map 440: Glen Cassley and Glen Oykel. In other words, for every person who uses a map to explore the waterfalls and moorland of these two glens, there are 180 who would rather make the most of the crags and tracks of Snowdonia.

Certainly some of this disparity can be blamed on remoteness. Glasgow is more than 200 miles to the south. Nevertheless, the area still seems curiously unloved even closer to home.

Inverness tourism office had stackloads of local Explorer maps during my visit – with one exception, map 440. Similarly, the WH Smith nearby also had shelves groaning with cartographic offerings but only one of Glen Cassley. This may not be the map that time forgot, but it is not far off.

“Essentially we are dealing with the least populated place in Britain,” said Dave Robertson, an OS surveyor for the Highlands. “There are a few dozen houses inside the land covered by map 440 but many of these are only sporadically inhabited holiday homes or shooting lodges.”

Robertson estimates that there are fewer than a couple of hundred people living in the 826 square kilometres covered by map 440. “By contrast, there are other OS Explorer maps which cover areas with millions of inhabitants,” he added. “Essentially the Glen Cassley area is an empty zone.”

It is Robertson’s job to update OS maps when new roads are built, mobile phone masts put up, houses constructed or wind farms erected on hill tops.

Mapping these features was once a laborious process involving theodolites and other instruments but has been transformed by digital technology. Now Robertson uses a two-metre pole – known as a Global Navigation Satellite System receiver – that is fitted with GPS sensors. They can pick up a combination of US, Russian and Chinese satellite signals that allow him to pinpoint his position on Earth’s surface with centimetre accuracy.

“You follow the line of the road or track that you are mapping and the GPS receiver marks your route on your laptop. Then you record what the surface is made of – grass, or tarmac, or soil.” All that information is recorded and is then used to generate a new map of the area. It is a constant business, even in the Highlands. Or at least in most parts.

“The one exception to all that activity is Glen Cassley,” said Robertson. “I have very little to do there. It doesn’t change and nothing much happens there to require new mapping.” That lack of activity and isolation gives the area its grand, bleak beauty. There are no villages within its borders, and only a handful of farms, a couple of hotels, and a few roads, nearly all of them single track. By contrast, there are acres of blanket bog covered with blaeberries (bilberries), heather, bog cotton, tormentil and an exotic range of fungi including the purple amethyst deceiver.

Robin McKie, right, with Dave Robertson of the Ordnance Survey in Glen Cassley. Photograph: Robin McKie for the Observer

Some of Scotland’s finest fishing rivers cut through this boggy land and there are stunning waterfalls and salmon leaps. Bird life includes buzzards, stonechats and an occasional golden eagle.

On my visit last week, Dave Robertson and I strolled through these wonders that were only intermittently blighted by rain or midges. We met only one set of fellow walkers – who looked aghast when I explained that I would be writing about the region. “Please don’t let everyone else know about this place,” they pleaded.

In fact, the Ordnance Survey says it is very keen to get more and more people to know about lost national treasures such as Glen Cassley. At the end of this month, on Sunday 30 September, the OS will be promoting National GetOutside Day when it hopes to get as many as a million people to take trips, walk and enjoy the outdoors. Thousands of routes around Britain will be promoted in the process.

“Once people realise what is on offer in places like Glen Cassley, they could make a lot of difference,” said Giles. “Certainly, it would be good if we could get Glen Cassley off the bottom of this year’s sales charts though that would only mean we would have to try to do the same for the current second bottom selling map – Peterhead and Fraserburgh – next year. And that might be harder.”

Lost and found

The origin of the Ordnance Survey can be traced to the government’s decision to map the Highlands in the wake of the Jacobite rebellion. British troops, in pursuit of the defeated rebels, found they had no proper maps to help them locate their enemies. So the government launched a mapping exercise that produced the first detailed maps of the Highlands and later the rest of the country.

Today the OS has two main series of British maps: the Landranger with red covers and the Explorer with orange covers. The latter are scaled 1:25,000, in which 4cm represent 1 km. Landranger maps at 1:50,000 scale have less detail but more coverage on a single sheet.

The top 10 most popular Explorer maps all cover areas of England and Wales. The 10 least popular all cover areas of Scotland.

The top three are:

OL17 Snowdonia and Conwy Valley.

OL7 the Lake District, South Eastern section.

OL24 the Peak District.

The three least popular are:

OS440 Glen Cassley and Glen Oykel.

OS427 Peterhead and Fraserburgh.

OS333 Kilmarnock and Irvine.

Porcupine: A fast linearizability checker written in Go


Porcupine is a fast linearizability checker for testing the correctness of distributed systems. It takes a sequential specification as executable Go code, along with a concurrent history, and it determines whether the history is linearizable with respect to the sequential specification.

Porcupine implements the algorithm described in Faster linearizability checking via P-compositionality, an optimization of the algorithm described in Testing for Linearizability.

Porcupine is faster and can handle more histories than Knossos's linearizability checker. Testing on the data in test_data/jepsen/, Porcupine is generally 1,000x-10,000x faster and has a much smaller memory footprint. On histories where it can take advantage of P-compositionality, Porcupine can be millions of times faster.

Usage

Porcupine takes an executable model of a system along with a history, and it runs a decision procedure to determine if the history is linearizable with respect to the model. Porcupine supports specifying history in two ways, either as a list of operations with given call and return times, or as a list of call/return events in time order.

See model.go for documentation on how to write a model or specify histories. Once you've written a model and have a history, you can use the CheckOperations and CheckEvents functions to determine if your history is linearizable.

Example

Suppose we're testing linearizability for operations on a read/write register that's initialized to 0. We write a sequential specification for the register like this:

type registerInput struct {
    op    bool // false = write, true = read
    value int
}

// a sequential specification of a register
registerModel := porcupine.Model{
    Init: func() interface{} {
        return 0
    },
    // step function: takes a state, input, and output, and returns whether it
    // was a legal operation, along with a new state
    Step: func(state, input, output interface{}) (bool, interface{}) {
        regInput := input.(registerInput)
        if regInput.op == false {
            return true, regInput.value // always ok to execute a write
        } else {
            readCorrectValue := output == state
            return readCorrectValue, state // state is unchanged
        }
    },
}

Suppose we have the following concurrent history from a set of 3 clients. In a row, the first | is when the operation was invoked, and the second | is when the operation returned.

C0:  |-------- Write(100) --------|
C1:      |--- Read(): 100 ---|
C2:          |- Read(): 0 -|

We encode this history as follows:

events := []porcupine.Event{
    // C0: Write(100)
    {Kind: porcupine.CallEvent, Value: registerInput{false, 100}, Id: 0},
    // C1: Read()
    {Kind: porcupine.CallEvent, Value: registerInput{true, 0}, Id: 1},
    // C2: Read()
    {Kind: porcupine.CallEvent, Value: registerInput{true, 0}, Id: 2},
    // C2: Completed Read -> 0
    {Kind: porcupine.ReturnEvent, Value: 0, Id: 2},
    // C1: Completed Read -> 100
    {Kind: porcupine.ReturnEvent, Value: 100, Id: 1},
    // C0: Completed Write
    {Kind: porcupine.ReturnEvent, Value: 0, Id: 0},
}

We can have Porcupine check the linearizability of the history as follows:

ok := porcupine.CheckEvents(registerModel, events) // returns true

Now, suppose we have another history:

C0:  |------------- Write(200) -------------|
C1:    |- Read(): 200 -|
C2:                        |- Read(): 0 -|

We can check the history with Porcupine and see that it's not linearizable:

ok := porcupine.CheckEvents(registerModel, events) // returns false
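To build intuition for these two verdicts, the sketch below brute-forces the same register histories without Porcupine. It is not Porcupine's algorithm (which is vastly more efficient); it simply tries every ordering of the operations consistent with real-time order and replays each against the sequential register spec. The `op` struct and the timestamps are invented for illustration.

```go
package main

import "fmt"

// op is one client operation with wall-clock call/return times.
type op struct {
	write bool // true = Write, false = Read
	value int  // value written, or value the Read observed
	call  int  // invocation time
	ret   int  // return time
}

// step applies op o to the sequential register state and reports legality.
func step(state int, o op) (int, bool) {
	if o.write {
		return o.value, true // a write is always legal
	}
	return state, state == o.value // a read must observe the current state
}

// linearizable tries every permutation of ops that respects real-time
// order (a must precede b if a returned before b was called) and replays
// it against the sequential spec. Exponential in history length: fine
// for tiny histories, which is exactly why tools like Porcupine exist.
func linearizable(history []op) bool {
	n := len(history)
	used := make([]bool, n)
	var search func(state, done int) bool
	search = func(state, done int) bool {
		if done == n {
			return true
		}
		for i := 0; i < n; i++ {
			if used[i] {
				continue
			}
			// real-time constraint: cannot pick i while some unpicked
			// operation j returned before i was even called.
			ok := true
			for j := 0; j < n; j++ {
				if !used[j] && history[j].ret < history[i].call {
					ok = false
					break
				}
			}
			if !ok {
				continue
			}
			if next, legal := step(state, history[i]); legal {
				used[i] = true
				if search(next, done+1) {
					return true
				}
				used[i] = false
			}
		}
		return false
	}
	return search(0, 0) // register initialized to 0
}

// First history from above: all three operations overlap in time.
var history1 = []op{
	{write: true, value: 100, call: 0, ret: 100},  // C0: Write(100)
	{write: false, value: 100, call: 10, ret: 90}, // C1: Read() -> 100
	{write: false, value: 0, call: 20, ret: 80},   // C2: Read() -> 0
}

// Second history: Read() -> 200 returns before Read() -> 0 is called.
var history2 = []op{
	{write: true, value: 200, call: 0, ret: 100}, // C0: Write(200)
	{write: false, value: 200, call: 5, ret: 30}, // C1: Read() -> 200
	{write: false, value: 0, call: 60, ret: 90},  // C2: Read() -> 0
}

func main() {
	fmt.Println(linearizable(history1)) // true
	fmt.Println(linearizable(history2)) // false
}
```

The second history fails because the Read() -> 200 must be linearized before the Read() -> 0, yet the first read requires the write to have taken effect and the second requires that it has not.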

See porcupine_test.go for more examples on how to write models and histories.

Notes

Porcupine's API is not stable yet. Please vendor this package before using it.

Citation

If you use Porcupine in any way in academic work, please cite the following:

@misc{athalye2017porcupine,
  author = {Anish Athalye},
  title = {Porcupine: A fast linearizability checker in {Go}},
  year = {2017},
  howpublished = {\url{https://github.com/anishathalye/porcupine}},
  note = {commit xxxxxxx}
}

License

Copyright (c) 2017-2018 Anish Athalye. Released under the MIT License. See LICENSE.md for details.

How social-media platforms dispense justice


EVERY other Tuesday at Facebook, and every Friday at YouTube, executives convene to debate the latest problems with hate speech, misinformation and other disturbing content on their platforms, and decide what should be removed or left alone. In San Bruno, Susan Wojcicki, YouTube’s boss, personally oversees the exercise. In Menlo Park, lower-level execs run Facebook’s “Content Standards Forum”.

The forum has become a frequent stop on the company’s publicity circuit for journalists. Its working groups recommend new guidelines on what to do about, say, a photo showing Hindu women being beaten in Bangladesh that may be inciting violence offline (take it down), a video of police brutality when race riots are taking place (leave it up), or a photo alleging that Donald Trump wore a Ku Klux Klan uniform in the 1990s (leave it up but reduce distribution of it, and inform users it’s a fake). Decisions made at these meetings eventually filter down into instructions for thousands of content reviewers around the world.

Seeing how each company moderates content is encouraging. The two firms no longer regard making such decisions as a peripheral activity but as core to their business. Each employs executives who are thoughtful about the task of making their platforms less toxic while protecting freedom of speech. But that they do this at all is also cause for concern; they are well on their way to becoming “ministries of truth” for a global audience. Never before has such a small number of firms been able to control what billions can say and see.

Politicians are paying ever more attention to the content these platforms carry, and to the policies they use to evaluate it. On September 5th Sheryl Sandberg, Facebook’s number two, and Jack Dorsey, the boss of Twitter, testified before the Senate Select Intelligence Committee on what may be the companies’ most notorious foul-up, allowing their platforms to be manipulated by Russian operatives seeking to influence the 2016 presidential election. Mr Dorsey later answered pointed questions from a House committee about content moderation. (In the first set of hearings Alphabet, the parent of Google, which also owns YouTube, was represented by an empty chair after refusing to make Larry Page, its co-founder, available.)

Scrutiny of Facebook, Twitter, YouTube et al has intensified recently. All three faced calls to ban Alex Jones of Infowars, a conspiracy theorist; Facebook and YouTube eventually did so. At the same time the tech platforms have faced accusations of anti-conservative bias for suppressing certain news. Their loudest critic is President Donald Trump, who has threatened (via Twitter) to regulate them. Straight after the hearings, Jeff Sessions, his attorney-general, said that he would discuss with states’ attorneys-general the “growing concern” that the platforms are hurting competition and stifling the free exchange of ideas.

Protected species

This turn of events signals the ebbing of a longstanding special legal protection for the companies. Internet firms in America are shielded from legal responsibility for content posted on their services. Section 230 of the Communications Decency Act of 1996 treats them as intermediaries, not publishers—to protect them from legal jeopardy.

When the online industry was limited to young, vulnerable startups this approach was reasonable. A decade ago content moderation was a straightforward job. Only 100m people used Facebook and its community standards fitted on two pages. But today there are 2.2bn monthly users of Facebook and 1.9bn monthly logged-on users of YouTube. They have become central venues for social interaction and for all manner of expression, from lucid debate and cat videos to conspiracy theories and hate speech.

At first social-media platforms failed to adjust to the magnitude and complexity of the problems their growth and power were creating, saying that they did not want to be the “arbiters of truth”. Yet repeatedly in recent years the two companies, as well as Twitter, have been caught flat-footed by reports of abuse and manipulation of their platforms by trolls, hate groups, conspiracy theorists, misinformation peddlers, election meddlers and propagandists. In Myanmar journalists and human-rights experts found that misinformation on Facebook was inciting violence against Muslim Rohingya. In the aftermath of a mass shooting at a school in Parkland, Florida, searches about the shooting on YouTube surfaced conspiracy videos alleging it was a hoax involving “crisis actors”.

In reaction, Facebook and YouTube have sharply increased the resources, both human and technological, dedicated to policing their platforms. By the end of this year Facebook will have doubled the number of employees and contractors dedicated to the “safety and security” of the site, to 20,000, including 10,000 content reviewers. YouTube will have 10,000 people working on content moderation in some form. They take down millions of posts every month from each platform, guided by thick instruction manuals—the guidelines for “search quality” evaluators at Google, for example, run to 164 pages.

Although most of the moderators work for third-party firms, the growth in their numbers has already had an impact on the firms’ finances. When Facebook posted disappointing quarterly results in July, causing its market capitalisation to drop by over $100bn, higher costs for moderation were partly implicated. Mark Zuckerberg, the firm’s chief executive, has said that in the long run the problem of content moderation will have to be solved with artificial intelligence (AI). In the first three months of 2018 Facebook took some form of action on 7.8m pieces of content that included graphic violence, hate speech or terrorist propaganda, twice as many as in the previous three months (see chart), mostly owing to improvements in automated detection. But moderating content requires wisdom, and an algorithm is only as judicious as the principles with which it is programmed.

At Facebook’s headquarters in Menlo Park, executives instinctively resist making new rules restricting content on free-speech grounds. Many kinds of hateful, racist comments are allowed, because they are phrased in such a way as to not specifically target a race, religion or other protected group. Or perhaps they are jokes.

Fake news poses different questions. “We don’t remove content just for being false,” says Monika Bickert, the firm’s head of product policy and counterterrorism. What Facebook can do, instead of removing material, she says, is “down-rank” fake news flagged by external fact-checkers, meaning it would be viewed by fewer people, and show real information next to it. In hot spots like Myanmar and Sri Lanka, where misinformation has inflamed violence, posts may be taken down.

YouTube’s moderation system is similar to Facebook’s, with published guidelines for what is acceptable and detailed instructions for human reviewers. Human monitors decide quickly what to do with content that has been flagged, and most such flagging is done via automated detection. Twitter also uses AI to sniff out fake accounts and some inappropriate content, but it relies more heavily on user reports of harassment and bullying.

As social-media platforms police themselves, they will change. They used to be, and still see themselves as, lean and mean, keeping employees to a minimum. But Facebook, which has about 25,000 people on its payroll, is likely soon to keep more moderators busy than it has engineers. It and Google may be rich enough to absorb the extra costs and still prosper. Twitter, which is financially weaker, will suffer more.

More profound change is also possible. If misinformation, hate speech and offensive content are so pervasive, critics say, it is because of the firms’ business model: advertising. To sell more and more ads, Facebook’s algorithms, for instance, have favoured “engaging” content, which can often be the bad kind. YouTube keeps users on its site by offering them ever more interesting videos, which can also be ever more extreme ones. In other words, to really solve the challenge of content moderation, the big social-media platforms may have to say goodbye to the business model which made them so successful.

Isochronous Curves

From Wikipedia, the free encyclopedia
Four balls slide down a cycloid curve from different positions, but they arrive at the bottom at the same time. The blue arrows show the points' acceleration along the curve. On the top is the time-position diagram.
Objects representing tautochrone curve

A tautochrone or isochrone curve (from the Greek prefixes tauto- meaning same and iso- meaning equal, and chronos meaning time) is the curve for which the time taken by an object sliding without friction in uniform gravity to its lowest point is independent of its starting point. The curve is a cycloid, and the time is equal to π times the square root of the radius (of the circle which generates the cycloid) over the acceleration of gravity. The tautochrone curve is the same as the brachistochrone curve for any given starting point.

The tautochrone problem

It was in the left hand try-pot of the Pequod, with the soapstone diligently circling round me, that I was first indirectly struck by the remarkable fact, that in geometry all bodies gliding along the cycloid, my soapstone for example, will descend from any point in precisely the same time.

Moby Dick by Herman Melville, 1851

The tautochrone problem, the attempt to identify this curve, was solved by Christiaan Huygens in 1659. He proved geometrically in his Horologium Oscillatorium, originally published in 1673, that the curve was a cycloid.

On a cycloid whose axis is erected on the perpendicular and whose vertex is located at the bottom, the times of descent, in which a body arrives at the lowest point at the vertex after having departed from any point on the cycloid, are equal to each other...[1]

Huygens also proved that the time of descent is equal to the time a body takes to fall vertically the same distance as the diameter of the circle that generates the cycloid, multiplied by π/2. In modern terms, this means that the time of descent is T = \pi \sqrt{r/g}, where r is the radius of the circle which generates the cycloid, and g is the gravity of Earth.

Five isochronous cycloidal pendulums with different amplitudes

This solution was later used to attack the problem of the brachistochrone curve. Jakob Bernoulli solved the problem using calculus in a paper (Acta Eruditorum, 1690) that saw the first published use of the term integral.[2]

Schematic of a cycloidal pendulum

The tautochrone problem was studied by Huygens more closely when it was realized that a pendulum, which follows a circular path, is not isochronous, and thus his pendulum clock would keep different time depending on how far the pendulum swung. After determining the correct path, Christiaan Huygens attempted to create pendulum clocks that used a string to suspend the bob and curb cheeks near the top of the string to change the path to the tautochrone curve. These attempts proved unhelpful for a number of reasons. First, the bending of the string causes friction, changing the timing. Second, there were much more significant sources of timing error that overwhelmed any theoretical improvement from traveling on the tautochrone curve. Finally, the "circular error" of a pendulum decreases as the length of the swing decreases, so better clock escapements could greatly reduce this source of inaccuracy.

Later, the mathematicians Joseph Louis Lagrange and Leonhard Euler provided an analytical solution to the problem.

Lagrangian solution

If the particle's position is parametrized by the arclength s(t) from the lowest point, the kinetic energy is proportional to \dot{s}^2. The potential energy is proportional to the height y(s). One way the curve can be an isochrone is if the Lagrangian is that of a simple harmonic oscillator: the height of the curve must be proportional to the arclength squared,

y(s) = s^2

where the constant of proportionality has been set to 1 by changing units of length. The differential form of this relation is

dy = 2s\,ds \qquad dy^2 = 4s^2\,ds^2 = 4y\,(dx^2 + dy^2)

which eliminates s, and leaves a differential equation for dx and dy. To find the solution, integrate for x in terms of y:

\frac{dx}{dy} = \frac{\sqrt{1-4y}}{2\sqrt{y}} \qquad x = \int \frac{\sqrt{1-4y}}{2\sqrt{y}}\,dy

This integral is the area under a circle, which can be naturally cut into a triangle and a circular wedge:

x = \frac{1}{4}\arcsin(2\sqrt{y}) + \frac{1}{2}\sqrt{y}\sqrt{1-4y}

To see that this is a strangely parametrized cycloid, change variables to disentangle the transcendental and algebraic parts by defining the angle \theta = \arcsin(2\sqrt{y}). This yields

x = \frac{1}{8}(2\theta + \sin 2\theta) \qquad y = \frac{1}{8}(1 - \cos 2\theta)

which is the standard parametrization, except for the scale of x, y and θ.

"Virtual gravity" solution

The simplest solution to the tautochrone problem is to note a direct relation between the angle of an incline and the gravity felt by a particle on the incline. A particle on a 90° vertical incline feels the full effect of gravity, while a particle on a horizontal plane feels effectively no gravity. At intermediate angles, the "virtual gravity" felt by the particle is g sin θ. The first step is to find a "virtual gravity" that produces the desired behavior.

The "virtual gravity" required for the tautochrone is simply proportional to the distance remaining to be traveled:

\frac{d^2 s}{dt^2} = -k^2 s

which admits a simple solution:

s = A \cos(kt)

It can be easily verified both that this solution solves the differential equation and that a particle will reach s = 0 at time π/(2k) from any starting amplitude A. The problem is now to construct a curve that will produce a "virtual gravity" proportional to the distance remaining to travel, i.e., a curve that satisfies:

g \sin\theta = k^2 s

The explicit appearance of the distance remaining is troublesome, but we can differentiate to obtain a more manageable form:

g \cos\theta\,d\theta = k^2\,ds

This equation relates the change in the curve's angle to the change in the distance along the curve. We now use the Pythagorean theorem, the fact that the slope of the curve is equal to the tangent of its angle, and some trigonometric identities to obtain ds in terms of dx:

ds = \frac{dx}{\cos\theta}

Substituting this into the first differential equation lets us solve for x in terms of θ:

dx = \frac{g}{k^2}\cos^2\theta\,d\theta \qquad x = \frac{g}{4k^2}(2\theta + \sin 2\theta)

Likewise, we can also express dy in terms of dx and solve for y in terms of θ:

dy = dx\,\tan\theta = \frac{g}{k^2}\sin\theta\cos\theta\,d\theta \qquad y = \frac{g}{4k^2}(1 - \cos 2\theta)

Substituting \phi = 2\theta and r = \frac{g}{4k^2}, we see that these equations for x and y are those of a circle rolling along a horizontal line — a cycloid:

x = r(\phi + \sin\phi) \qquad y = r(1 - \cos\phi)

Solving for k and remembering that T = \frac{\pi}{2k} is the time required for descent, we find the descent time in terms of the radius r:

k = \frac{1}{2}\sqrt{\frac{g}{r}} \qquad T = \pi\sqrt{\frac{r}{g}}

(Based loosely on Proctor, pp. 135–139)

Abel's solution

Niels Henrik Abel attacked a generalized version of the tautochrone problem (Abel's mechanical problem), namely, given a function T(y) that specifies the total time of descent for a given starting height, find an equation of the curve that yields this result. The tautochrone problem is a special case of Abel's mechanical problem when T(y) is a constant.

Abel's solution begins with the principle of conservation of energy: since the particle is frictionless, and thus loses no energy to heat, its kinetic energy at any point is exactly equal to the difference in gravitational potential energy from its starting point. The kinetic energy is \frac{1}{2}mv^2, and since the particle is constrained to move along a curve, its velocity is simply ds/dt, where s is the distance measured along the curve. Likewise, the gravitational potential energy lost in falling from an initial height y_0 to a height y is mg(y_0 - y), thus:

\frac{1}{2}m\left(\frac{ds}{dt}\right)^2 = mg(y_0 - y) \qquad dt = -\frac{1}{\sqrt{2g(y_0 - y)}}\,\frac{ds}{dy}\,dy

In the last equation, we've anticipated writing the distance remaining along the curve as a function of height, s(y), recognized that the distance remaining must decrease as time increases (thus the minus sign), and used the chain rule in the form ds = \frac{ds}{dy}\,dy.

Now we integrate from y = y_0 to y = 0 to get the total time required for the particle to fall:

T(y_0) = \frac{1}{\sqrt{2g}} \int_0^{y_0} \frac{1}{\sqrt{y_0 - y}}\,\frac{ds}{dy}\,dy

This is called Abel's integral equation and allows us to compute the total time required for a particle to fall along a given curve (for which ds/dy would be easy to calculate). But Abel's mechanical problem requires the converse: given T(y_0), we wish to find ds/dy, from which an equation for the curve would follow in a straightforward manner. To proceed, we note that the integral on the right is the convolution of ds/dy with 1/\sqrt{y}, and thus take the Laplace transform of both sides:

\mathcal{L}[T(y_0)] = \frac{1}{\sqrt{2g}}\,\mathcal{L}\!\left[\frac{1}{\sqrt{y}}\right]\mathcal{L}\!\left[\frac{ds}{dy}\right]

Since \mathcal{L}\!\left[\frac{1}{\sqrt{y}}\right] = \sqrt{\frac{\pi}{z}}, we now have an expression for the Laplace transform of ds/dy in terms of the Laplace transform of T(y_0):

\mathcal{L}\!\left[\frac{ds}{dy}\right] = \sqrt{\frac{2g}{\pi}}\,\sqrt{z}\;\mathcal{L}[T(y_0)]

This is as far as we can go without specifying T(y_0). Once T(y_0) is known, we can compute its Laplace transform, calculate the Laplace transform of ds/dy and then take the inverse transform (or try to) to find ds/dy.

For the tautochrone problem, T(y_0) = T_0 is constant. Since the Laplace transform of 1 is 1/z, we continue:

\mathcal{L}\!\left[\frac{ds}{dy}\right] = \sqrt{\frac{2g}{\pi}}\,T_0\,z^{-1/2}

Making use again of the Laplace transform above, we invert the transform and conclude:

\frac{ds}{dy} = \frac{T_0}{\pi}\sqrt{\frac{2g}{y}}

It can be shown that the cycloid obeys this equation.

(Simmons, Section 54).

See also

References

  1. ^Blackwell, Richard J. (1986). Christiaan Huygens' The Pendulum Clock. Ames, Iowa: Iowa State University Press. ISBN 0-8138-0933-9. Part II, Proposition XXV, p. 69.
  2. ^Jeff Miller (20 July 2010). "Earliest Known Uses of Some of the Words of Mathematics (I)". Retrieved 28 June 2012.

Bibliography

External links

Mezcal Is Born of Time, Tradition and a Slow-Growing Plant


This cousin of tequila is handcrafted by farmers in small batches from the maguey plant, which takes 7 to 30 years to reach maturity.

Photographs and Text by Brett Gundlock

Mezcal is a drink like no other. “El elíxir de los dioses” (the elixir of the gods) is a potent and largely handcrafted libation that has been consumed at quinceañeras, weddings and funerals for generations in Oaxaca.

Unlike its cousin tequila, mezcal is not easy to produce commercially, limiting its export. And even with a boom in international interest, local mezcal maestros have focused on quality production in small batches. Witnessing the traditional process at a palenque, or artisanal distillery, is one of the few ways to understand mezcal’s cultural significance.

Maguey, or the agave plant used to make mezcal, can take seven to 30 years to mature. There are roughly 30 different species used to make mezcal in Oaxaca, each with a distinct flavor: tobalá, which takes an average 15 years to grow, has a smooth, fruity taste, while tepeztate, which matures in about 25 years, is strong and earthy; you can really taste the plant.

When a maguey plant is harvested, its sugar-rich base, the piña, is dug out of the ground; this “pineapple” is the key to mezcal. The piñas will be covered with rocks in an embers-lined pit and roasted for hours, giving mezcal its famously smoky taste. They are crushed and fermented; the mixture is then distilled several times over wood-burning ovens, yielding a spirit that is rated between 35 and 90 percent alcohol. I find that between 45 and 50 percent is the sweet spot.

Much of my work in Mexico has focused on campesinos in the mountains, who struggle against poverty and drug violence. But the story of mezcal is a positive one, about the opportunity for farmers to be autonomous. While shooting for this article, I slept on cement floors in a storage room, rode in the back of pickup trucks through blistering sun, hiked through rugged sierras near unmarked ancient Zapotec ruins and drank magical, handcrafted mezcal under the stars.
