I get a lot of email. I'm also pretty sure you get a lot of email. However, email is still not a solved problem (this is evidenced by the fact that a quick Google search yields no fewer than ten viable options for email clients on my Mac). Each potential email client is acceptable on its own, yet none of them satisfied all of my desired features:
The ability to access my email without an internet connection (I travel quite a lot, so this was very important to me).
Easily move messages between different folders, which is how I keep all of my emails organized by project.
Quick yet powerful search of all my mail messages.
An auto-updating status indicator that shows me how many unread messages I have.
Managing multiple accounts (Gmail for personal emails and Microsoft Exchange for work emails) and syncing local changes so that my phone can still be up-to-date.
If you follow this blog, you'll recognize that I've gotten a bit carried away with migrating the different aspects of my life to operate within the Emacs environment. So it was only a matter of time until I finally decided to give it a shot, and I converged upon a solution which happily satisfies all of the above constraints. Every email service is a bit different so YMMV, but this setup works for me.
Here's a screenshot of what we'll be setting up:
A screenshot of the mu4e interface after searching for recent emails from Amazon. Notice that I've "marked" a number of messages for deletion d, archiving r, and moving m. I also have an icon at the bottom right corner that shows I have 17 unread messages.
My brief adventure with Gnus
There are a ton of tutorials available for reading one's email with Gnus, so it was a natural starting point for my quest. After setting it up, Gnus starts into the standard group summary list, which displays all of the folders it discovered in your various mail accounts. To me, this seemed a bit much, since I have a ton of folders, but this alone wasn't enough to deter me. Unfortunately, I also found that it was quite slow (in fact, the slowness of Gnus is rather well documented on the Emacs Wiki) and that the interface was rather cumbersome. The suggested solution to this problem is to host a local email server (groan), which syncs with Gmail and your other email accounts.
In order to get Gnus working properly, Sacha Chua recommends installing two tools: offlineimap for the email synchronization and dovecot for hosting a local IMAP server, since that's how Gnus is able to read the messages. I was able to get offlineimap working relatively quickly (more on that later), and before too long I had a local copy of all my emails since the dawn of time. By contrast, dovecot had me scratching my head. Not only could I not get it to work, but it seemed like an unnecessary amount of complexity; I was hosting a local mail server just so that my Emacs mail client could read emails that were already saved to my system. So it was at this point that I moved on in search of a better way.
An introduction to mu
After a bit of searching around, I came across a fantastic tool called mu. At its core, mu is a simple command line tool for searching through emails (see the mu "cheatsheet" for examples of more powerful search features within mu). Simply type mu find $SEARCH into the terminal to query your emails.
It's a cute little tool, and is especially nice for allowing you to quickly check for any new email (you can easily search for unread emails with flag:unread) without leaving the terminal. Yet this still doesn't solve all my problems: sure, I have an offline copy of my messages and I can search them with ease, but how do I read them, move them around, or interact with them in other ways?
This is where mu4e comes in: the Emacs email client included with mu. It provides all of the functionality that I desire: searching an offline copy of my emails, easily moving them around, and sending/replying via different mail servers.
In addition, mu4e has the ability to auto-complete email addresses from names, follow rules about where to archive mail that matches certain filters (like keywords in the subject line) and, via an Emacs package, display a status icon in the modeline when I have new mail messages. In the next few sections, I'll describe how I got everything to work, and any pitfalls I encountered along the way.
Getting set up with mu and OfflineIMAP
As advertised, mu is really just for indexing and searching emails, and relies on other software to maintain a local copy of your messages, which it can then use. To do this, I chose to use the popular OfflineIMAP (on macOS, I installed this with brew install offlineimap; on Ubuntu, this can be done with apt), since it's relatively easy to set up. I have OfflineIMAP manage two different accounts, Gmail and Exchange, and sync changes between the online services every 5 minutes. Rather than ramble on about how everything should be set up, I'll just reproduce some of the important parts of my configuration file here (taken from my ~/.offlineimaprc):
An example OfflineIMAP configuration
[general]
accounts = Gmail, Exchange
maxsyncaccounts = 2
[Account Gmail]
localrepository = LocalGmail
remoterepository = RepositoryGmail
autorefresh = 5
quick = 10
postsynchook = mu index --maildir ~/Maildir
status_backend = sqlite
[Repository LocalGmail]
type = Maildir
localfolders = ~/Maildir/Gmail
[Repository RepositoryGmail]
type = Gmail
maxconnections = 2
remoteuser = YOUR_GMAIL_USERNAME
remotepass = YOUR_GMAIL_PASSWORD
folderfilter = lambda foldername: foldername not in ['[Gmail]/All Mail', '[Gmail]/Important']
sslcacertfile = /usr/local/etc/openssl/cert.pem # This will only work for macOS
## Try one of the following for Ubuntu or Arch:
# sslcacertfile = /etc/ssl/certs/ca-certificates.crt
# sslcacertfile = OS-DEFAULT
# These are effectively the same as the above
[Account Exchange]
[Repository LocalExchange]
[Repository RemoteExchange]
You'll notice a few things about this configuration (you may also notice that arbitrary Python code can be specified as part of it). First, as I have it listed above, you have to enter your password directly into this file, which you probably don't want to do; there's a great Stack Exchange post on how to use GPG and Python to encrypt your password. Second, I have included a folderfilter to avoid storing the All Mail and Important folders that Gmail annoyingly creates. Finally, I call mu index whenever the sync is complete, via postsynchook, to ensure that my mu database stays as up-to-date as possible (by default, mu looks to ~/Maildir for mail, but I like to include the --maildir flag for clarity).
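To make the folderfilter line concrete: it is ordinary Python (OfflineIMAP configuration files can embed arbitrary Python), and OfflineIMAP calls it once per remote folder, skipping any folder for which it returns False. A quick sketch of its behavior:

```python
# The folderfilter from the configuration above: OfflineIMAP calls this
# with each remote folder name and skips folders where it returns False.
folderfilter = lambda foldername: foldername not in [
    '[Gmail]/All Mail',
    '[Gmail]/Important',
]

# Ordinary folders are synced...
print(folderfilter('INBOX'))             # True
print(folderfilter('[Gmail]/Sent Mail')) # True
# ...while the two noisy Gmail folders are skipped.
print(folderfilter('[Gmail]/All Mail'))  # False
```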
Once this is set up, calling offlineimap from the command line will sync with the remote repositories every 5 minutes. However, this requires keeping the terminal window open. This can be solved by creating a daemon process. On macOS, this is built in to brew, and calling brew services start offlineimap will get everything started; for Linux, you can follow these instructions on the Arch Linux wiki.
With this step complete, the command line version of mu should now be syncing with the remote server(s) without any issues.
Configuring mu4e
Before even getting to the Emacs configuration file, you should ensure that mu4e is properly installed. mu4e is included with the installation of mu (on macOS, this is only partially true, so check that your install actually includes mu4e), but you still need to add it to your load path. This can be done with something like (add-to-list 'load-path "/usr/local/share/emacs/site-lisp/mu/mu4e"). Now, upon reopening Emacs, M-x mu4e should open a simple window with some shortcuts. Typing J will bring up a menu for selecting a mail folder. Choose one, and you should be presented with something resembling the screenshot above.
When in the headers view, which displays your email messages, you can easily navigate through different messages using n and p, and hitting return will open a message, allowing you to read it. In addition, mu4e includes some very useful marking capabilities: d marks a message for deletion, r for refiling/archiving, and m for moving (after a target directory is specified). Simply press x to "execute" the marks. You can also "bulk mark" emails with *; pressing x after some messages have been marked this way will allow you to perform an action on all of them. See the mu4e user manual for more details.
I mentioned above that I have two different email addresses and rely on mu4e to manage them both. In the previous screenshot, you can see that I've marked messages for archiving with r and for deletion with d (see the section below for a caveat about deletion, to avoid prematurely deleting your messages!), yet the behavior of these marks changes depending on each message's mu4e context. By setting the mu4e-contexts variable, mu4e will search through the list of contexts, check whether the message of interest matches :match-func, and set some local variables, like mu4e-refile-folder. For example, I check whether the mail directory (:maildir) includes /Gmail and, if it does, set the trash and refile folders accordingly.
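Such a context can be sketched as follows (a sketch only: the trash and archive folder names here are assumptions, so match them to your own Maildir layout):

```lisp
;; A sketch of a Gmail context: if a message's maildir contains
;; "/Gmail", use the Gmail-specific trash and refile folders.
;; The folder names below are assumptions.
(setq mu4e-contexts
      (list (make-mu4e-context
             :name "Gmail"
             :match-func (lambda (msg)
                           (when msg
                             (string-match-p "/Gmail"
                                             (mu4e-message-field msg :maildir))))
             :vars '((mu4e-trash-folder  . "/Gmail/[Gmail].Trash")
                     (mu4e-refile-folder . "/Gmail/[Gmail].Archive")))))
```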
For my Exchange server, I have a slightly more complicated procedure; rather than specifying a fixed refile folder, I define a function exchange-mu4e-refile-folder which does some more filtering. I don't want any emails from this (fictitious) mailing list going to the typical archive folder. So, whenever I get a message which includes "[some-mailing-list]" in the subject, I can still refile the message with r and know that it will go to the correct folder.
A custom refiling function
(defun exchange-mu4e-refile-folder (msg)
  "Function for choosing the refile folder for my Exchange email.
MSG is a message p-list from mu4e."
  (cond
   ;; Mailing-list messages
   ((string-match "\\[some-mailing-list\\]"
                  (mu4e-message-field msg :subject))
    "/Exchange/mailing-list")
   (t "/Exchange/Archive")))
Alerts for new mail
Now that we can receive email, move it around, and keep everything in sync with our different IMAP servers, the next task is to ensure that we're alerted whenever new mail arrives. Fortunately, there's another Emacs package for doing just this: mu4e-alert. The procedure for using mu4e-alert is relatively simple: whenever you call mu4e-alert-enable-mode-line-display, your modeline will be updated to include a little envelope icon and the current count of unread messages (the format of the modeline display can be changed by customizing mu4e-alert-modeline-formatter).
At this point you may be a bit annoyed, thinking I thought the icon would update itself! Fortunately, Emacs has the run-with-timer function for just this purpose. However, there remains a small issue: whenever mu4e is open, it maintains a connection to the server. This means that mu index cannot be run by the OfflineIMAP process whenever mu4e is left open, and new mail will not appear. This is far from ideal. Again, I have a slightly hacky solution: by calling mu4e~proc-kill periodically, we can sever mu4e's connection to the server. The only consequence is that I may occasionally try to archive messages in my inbox that I've already moved on my phone, an issue which is easily remedied by refreshing my mu4e buffer.
My mu4e-alert configuration relies on John Wiegley's use-package and ties these pieces together in just a few lines.
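A sketch of a use-package setup along these lines (the one-minute refresh interval and the inbox maildir names are assumptions; adjust them to your own accounts):

```lisp
;; A sketch of a mu4e-alert setup with use-package. The inbox maildir
;; names and the refresh interval are assumptions.
(use-package mu4e-alert
  :after mu4e
  :config
  ;; Only count unread mail sitting in the inboxes, so messages that
  ;; Gmail re-flags as unread when moved or trashed are ignored.
  (setq mu4e-alert-interesting-mail-query
        (concat "flag:unread AND "
                "(maildir:/Gmail/INBOX OR maildir:/Exchange/INBOX)"))
  ;; Show the envelope icon and unread count in the modeline.
  (mu4e-alert-enable-mode-line-display)
  ;; Periodically kill the mu process (so OfflineIMAP's postsynchook
  ;; can run `mu index') and refresh the unread count; mu4e restarts
  ;; the process the next time it needs the database.
  (defun my-refresh-mu4e-alert-mode-line ()
    (interactive)
    (mu4e~proc-kill)
    (mu4e-alert-enable-mode-line-display))
  (run-with-timer 0 60 'my-refresh-mu4e-alert-mode-line))
```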
There's one other hiccup that I haven't yet mentioned; some email servers (cough Gmail cough) will mark messages as unread whenever they are moved to other folders, including the trash. As a result, I've customized my mu4e-alert-interesting-mail-query variable to check for unread messages in only my inbox folders.
Using mu4e to send mail
Unfortunately, IMAP, the protocol for checking email and moving messages around, cannot be used to send email; for that you need to configure SMTP. This process isn't particularly difficult, but it does involve a fair amount of code, most of which is adapted from the mu4e documentation (if you only have a single account, most of this is unnecessary). After setting the default values for many of the SMTP parameters, we create a list of account-specific parameter values, which are loaded upon composing a message by the my-mu4e-set-account function. I've included most of my configuration here for the sake of completeness.
Configuration for sending mail
;; I have my "default" parameters from Gmail
(setq mu4e-sent-folder "/Users/Greg/Maildir/sent"
      ;; mu4e-sent-messages-behavior 'delete ;; Unsure how this should be configured
      mu4e-drafts-folder "/Users/Greg/Maildir/drafts"
      user-mail-address "gregory.j.stein@gmail.com"
      smtpmail-default-smtp-server "smtp.gmail.com"
      smtpmail-smtp-server "smtp.gmail.com"
      smtpmail-smtp-service 587)

;; Now I set a list of account-specific parameters
(defvar my-mu4e-account-alist
  '(("Gmail"
     (mu4e-sent-folder "/Gmail/sent")
     (user-mail-address "YOUR.GMAIL.USERNAME@gmail.com")
     (smtpmail-smtp-user "YOUR.GMAIL.USERNAME")
     (smtpmail-local-domain "gmail.com")
     (smtpmail-default-smtp-server "smtp.gmail.com")
     (smtpmail-smtp-server "smtp.gmail.com")
     (smtpmail-smtp-service 587))
    ;; Include any other accounts here ...
    ))

(defun my-mu4e-set-account ()
  "Set the account for composing a message.
This function is taken from:
https://www.djcbsoftware.nl/code/mu/mu4e/Multiple-accounts.html"
  (let* ((account
          (if mu4e-compose-parent-message
              (let ((maildir (mu4e-message-field mu4e-compose-parent-message :maildir)))
                (string-match "/\\(.*?\\)/" maildir)
                (match-string 1 maildir))
            (completing-read (format "Compose with account: (%s) "
                                     (mapconcat #'(lambda (var) (car var))
                                                my-mu4e-account-alist "/"))
                             (mapcar #'(lambda (var) (car var)) my-mu4e-account-alist)
                             nil t nil nil (caar my-mu4e-account-alist))))
         (account-vars (cdr (assoc account my-mu4e-account-alist))))
    (if account-vars
        (mapc #'(lambda (var)
                  (set (car var) (cadr var)))
              account-vars)
      (error "No email account found"))))
(add-hook 'mu4e-compose-pre-hook 'my-mu4e-set-account)
Pitfalls and additional tweaks
I already touched upon a few of the minor issues I encountered when getting everything here to work properly, including how moved messages will occasionally be marked as unread. The biggest problem I had to deal with stemmed from some unexpected behavior in OfflineIMAP. Apparently, whenever a message is marked with the trash label T, which happens whenever you 'delete' a message with d, OfflineIMAP won't sync it back to the server and, worse still, may delete it entirely. Even when I've marked an item for deletion, I'd like the comfort of knowing I can recover the message if I moved it to the trash by accident.
Avoiding this issue requires modifying the way the delete mark d operates. I simply replaced +T-N with -N in the definition of the trash mark. It was a simple (if rather verbose) fix, so I've included it here in its entirety.
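The fix can be sketched like so (based on mu4e's default definition of the trash mark; the only substantive change is passing "-N" instead of "+T-N" to mu4e~proc-move, so the message loses its "new" flag but never gains the trash label T):

```lisp
;; Redefine the trash mark so that trashed messages are moved to the
;; trash folder without the T flag, letting OfflineIMAP sync the move
;; back to the server instead of deleting the message outright.
(setf (alist-get 'trash mu4e-marks)
      '(:char ("d" . "▼")
        :prompt "dtrash"
        :dyn-target (lambda (target msg) (mu4e-get-trash-folder msg))
        :action (lambda (docid msg target)
                  ;; "-N" instead of the default "+T-N"
                  (mu4e~proc-move docid
                                  (mu4e~mark-check-target target)
                                  "-N"))))
```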
Finally, here are a few more tweaks to the mu4e settings that I frequently use.
Other tweaks
;; Include a bookmark to open all of my inboxes
(add-to-list 'mu4e-bookmarks
(make-mu4e-bookmark
:name "All Inboxes"
:query "maildir:/Exchange/INBOX OR maildir:/Gmail/INBOX"
:key ?i))
;; This allows me to use 'helm' to select mailboxes
(setq mu4e-completing-read-function 'completing-read)
;; Why would I want to leave my message open after I've sent it?
(setq message-kill-buffer-on-exit t)
;; Don't ask for a 'context' upon opening mu4e
(setq mu4e-context-policy 'pick-first)
;; Don't ask to quit... why is this the default?
(setq mu4e-confirm-quit nil)
Wrapping Up
I'll try to keep this document up-to-date as I experiment more, but I'm already quite happy with my setup after a couple of weeks of trying it out. There are plenty of features that I haven't touched upon as well, including the ability to link to email messages via org-mode, in which I do much of my work. At any rate, it's just another excuse for me to never leave my Emacs environment.
Everyone’s favorite database, PostgreSQL, has a new release coming out soon: Postgres 11
In this post we take a look at some of the new features that are part of the release, and in particular review the things you may need to monitor, or can utilize to increase your application and query performance.
Just-In-Time compilation (JIT) in Postgres 11
Just-In-Time compilation (JIT) for query execution was added in Postgres 11. It's not going to be enabled for queries by default, similar to parallel query in Postgres 9.6, but can be very helpful for CPU-bound workloads and analytical queries.
Specifically, JIT currently aims to optimize two essential parts of query execution: Expression evaluation and tuple deforming. To quote the Postgres documentation:
Expression evaluation is used to evaluate WHERE clauses, target lists, aggregates and projections. It can be accelerated by generating code specific to each case.
Tuple deforming is the process of transforming an on-disk tuple into its in-memory representation. It can be accelerated by creating a function specific to the table layout and the number of columns to be extracted.
Often you will have a workload that is mixed, where some queries will benefit from JIT, and some will be slowed down by the overhead.
Here is how you can monitor JIT performance using EXPLAIN and auto_explain, as well as how you can determine whether your queries are benefiting from JIT optimization.
Monitoring JIT with EXPLAIN / auto_explain
First of all, you will need to make sure that your Postgres packages are compiled with JIT support (--with-llvm configuration switch). Assuming that you have Postgres binaries compiled like that, the jit configuration parameter controls whether JIT is actually being used.
For this example, we’re working with one of our staging databases, and pick a relatively simple query that can benefit from JIT:
SELECT COUNT(*) FROM log_lines
WHERE log_classification = 65 AND (details->>'new_dead_tuples')::integer >= 0;
For context, the table log_lines is an internal log event statistics table of pganalyze, which is typically indexed per-server, but in this case we want to run an analytical query across all servers to count interesting autovacuum completed log events.
First, if we run the query with jit = off, we will get an execution plan and runtime like this:
Note the usage of EXPLAIN's BUFFERS option so we can compare whether any caching behavior affects our benchmarking. We can also see that I/O time was 1,098 ms out of 3,499 ms, so this query is definitely CPU bound.
For comparison, when we enable JIT, we can see the following:
In this case, JIT yields about a 25% speed-up, due to spending less CPU time, without any extra effort on our end. We can also see that JIT tasks themselves added 79 ms to the runtime.
You can fine-tune whether JIT is used for a particular query via the jit_above_cost parameter, which applies to the total cost of the query as determined by the Postgres planner. The cost is 649724 in the above EXPLAIN output, which exceeds the default jit_above_cost threshold of 100000. In a future post we'll walk through more examples of when using JIT can be beneficial.
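To experiment with this in a single session, the relevant settings can be toggled per connection. A sketch (the query is the one from earlier; 100000 is simply the default threshold restated):

```sql
-- Enable JIT for this session; with jit_above_cost at its default of
-- 100000, our example query (total cost 649724) qualifies for JIT.
SET jit = on;
SET jit_above_cost = 100000;

EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*) FROM log_lines
WHERE log_classification = 65
  AND (details->>'new_dead_tuples')::integer >= 0;
```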
You can gather these JIT statistics either for individual queries that you are interested in (using EXPLAIN), or automatically collect it for all of your queries using the auto_explain extension. If you want to learn more about how to enable auto_explain we recommend reviewing our guide about it: pganalyze Log Insights - Tuning Log Config Settings.
Fun fact: As part of the writing of this article we ran experiments with JIT and auto_explain, and discovered that JIT information wasn’t included with auto_explain, but only with regular EXPLAINs. Luckily, we were able to contribute a bug fix to Postgres, which has been merged and will be part of the Postgres 11 release.
Preventing cold caches: Auto prewarm in Postgres 11
A neat feature that will help you improve performance right after restarting Postgres, is the new autoprewarm background worker functionality.
If you are not familiar with pg_prewarm, it's an extension that's bundled with Postgres (much like pg_stat_statements) that you can use to preload data that's on disk into the Postgres buffer cache.
It is often very useful to ensure that a certain table is cached before the first production query hits the database, to avoid an overly slow response due to data being loaded from disk.
Previously, you needed to manually specify which relations (i.e. tables) and which page offsets to preload, which was cumbersome, and hard to automate.
Caching tables with autoprewarm
Starting in Postgres 11, you can instead have this done automatically, by adding pg_prewarm to shared_preload_libraries like this:
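Concretely, this is a one-line change in postgresql.conf (note that changing shared_preload_libraries requires a server restart):

```
# postgresql.conf -- load pg_prewarm and its autoprewarm background worker
shared_preload_libraries = 'pg_prewarm'
```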
Doing this will automatically save information on which tables/indices are in the buffer cache (and which parts of them) every 300 seconds to a file called autoprewarm.blocks, and use that information after Postgres restarts to reload the previously cached data from disk into the buffer cache, thus improving initial query performance.
Stored procedures in Postgres 11
Postgres has had database server-side functions for a long time, with a variety of supported languages. You might have used the term “procedures” before to refer to such functions, as they are similar to what’s called “Stored Procedures” in other database systems such as Oracle.
However, one detail that is sometimes missed, is that the existing functions in Postgres were always running within the same transaction. There was no way to begin, commit, or rollback a transaction within a function, as they were not allowed to run outside of a transaction context.
Starting in Postgres 11, you will have the ability to use CREATE PROCEDURE instead of CREATE FUNCTION to create procedures.
Benefits of using stored procedures
Compared to regular functions, procedures can do more than just query or modify data: They also have the ability to begin/commit/rollback transactions within the procedure.
Particularly for those moving over from Oracle to PostgreSQL, the new procedure functionality can be a significant time saver. You can find some examples of how to convert procedures between those two relational database systems in the Postgres documentation.
How to use stored procedures
First, let’s create a simple procedure that handles some tables:
CREATE PROCEDURE my_table_task() LANGUAGE plpgsql AS $$
DECLARE
BEGIN
CREATE TABLE table_committed (id int);
COMMIT;
CREATE TABLE table_rolled_back (id int);
ROLLBACK;
END $$;
We can then call this procedure like this, using the new CALL statement:
=# CALL my_table_task();
CALL
Time: 1.573 ms
Here you can see the benefit of procedures: despite the rollback, the overall execution is successful, and the first table got created, but the second one was not, since that transaction was rolled back.
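A quick way to verify the outcome is to ask the catalog which of the two tables exist; only the committed one should be there:

```sql
-- After CALL my_table_task(), only the first table survives;
-- table_rolled_back was undone by the ROLLBACK inside the procedure.
SELECT table_name
FROM information_schema.tables
WHERE table_name IN ('table_committed', 'table_rolled_back');
-- returns a single row: table_committed
```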
Be careful: Transaction timestamps and xact_start for procedures
Expanding on how transactions work inside procedures, there is currently an oddity with the transaction timestamp, which for example you can see in xact_start. When we expand the procedure like this:
CREATE PROCEDURE my_table_task() LANGUAGE plpgsql AS $$
DECLARE
clock_str TEXT;
tx_str TEXT;
BEGIN
CREATE TABLE table_committed (id int);
SELECT clock_timestamp() INTO clock_str;
SELECT transaction_timestamp() INTO tx_str;
RAISE NOTICE 'After 1st CREATE TABLE: % clock, % xact', clock_str, tx_str;
PERFORM pg_sleep(5);
COMMIT;
CREATE TABLE table_rolled_back (id int);
SELECT clock_timestamp() INTO clock_str;
SELECT transaction_timestamp() INTO tx_str;
RAISE NOTICE 'After 2nd CREATE TABLE: % clock, % xact', clock_str, tx_str;
ROLLBACK;
END $$;
And then call the procedure, we see the following:
Despite there being two transactions in the procedure, the transaction start timestamp is that of when the procedure got called, not when the embedded transaction actually started.
You will see the same problem with the xact_start field in pg_stat_activity, causing monitoring scripts to potentially detect false positives for long running transactions. This issue is currently in discussion and likely to be changed before the final release.
How often does my stored procedure get called?
Now, if you want to monitor the performance of procedures, it gets a bit difficult. Whilst regular functions can be tracked using track_functions = on, there is no such facility for procedures. You can however track the execution of CALL statements using pg_stat_statements:
=# SELECT query, calls, total_time FROM pg_stat_statements WHERE query LIKE 'CALL%';
┌────────────┬───────┬────────────┐
│ query │ calls │ total_time │
├────────────┼───────┼────────────┤
│ CALL abc() │ 4 │ 5.62299 │
└────────────┴───────┴────────────┘
Postgres 11 is going to be the best Postgres release yet, and we are excited to put it into use.
Whilst common wisdom is to not upgrade right after a release, we encourage you to try out the new release early, help the community find bugs (just like we did!), and make sure that your performance monitoring systems are ready to handle the new features that were added.
When Anand Kalelkar started a new job at a large insurance company, colleagues flooded him with instant messages and emails and rushed to introduce themselves in the cafeteria.
He soon learned his newfound popularity came with strings attached. Strings of code. Many of Mr. Kalelkar’s co-workers had heard he was a wizard at
Microsoft
Excel and were seeking his help in taming unruly spreadsheets and pivot tables gone wrong.
“People would come up to me and say, ‘Hey, I hear you’re the Excel guy,’ ” said the 37-year-old metrics consultant from Oak Brook, Ill. Mr. Kalelkar said he has become “a little more passive-aggressive,” warning help-seekers, “Don’t come to me, go to Google first.”
‘I’ve been asked to help a whole lot of people on many occasions with the simplest tasks,’ says Anand Kalelkar.
Excel buffs are looking to lower their profiles. Since its introduction in 1985 by Microsoft Corp., the spreadsheet program has grown to hundreds of millions of users world-wide. It has simplified countless office tasks once done by hand or by rudimentary computer programs, streamlining the work of anyone needing to balance a budget, draw a graph or crunch company earnings. Advanced users can perform such feats as tracking the expenditures of thousands of employees.
At the same time, it has complicated the lives of the office Excel Guy or Gal, the virtuosos whose superior skills at writing formulas leave them fighting an endless battle against the circular references, merged cells and mangled macros left behind by their less savvy peers.
“If someone tells you that they ‘just have a few Excel sheets’ that they want help with, run the other way,” tweeted 32-year-old statistician Andrew Althouse. “Also, you may want to give them a fake phone number, possibly a fake name. It may be worth faking your own death, in extreme circumstances.”
The few Excel sheets in question, during one recent encounter, turned out to have 400 columns each, replete with mismatched terms and other coding no-nos, said Mr. Althouse, who works at the University of Pittsburgh. The project took weeks to straighten out.
“Let’s just say that was a poor use of time,” he said. He advises altruistic Excel mavens to “figure out what you’re getting into” before offering to lend a hand.
Microsoft’s Jared Spataro, a corporate vice president for Office and Windows marketing, wrote in a recent blog post that “Excel’s power comes from its simplicity,” calling it “an incredibly flexible app.”
A company spokeswoman said the program has recently added artificial intelligence features that are “opening up new possibilities for all users.”
Nevertheless, years of dealing with colleagues’ Excel emergencies have taught John Mechalas to keep his mastery of spreadsheets a secret.
The trouble often starts with a group email asking if there is anyone who knows Excel really well, said Mr. Mechalas, a 48-year-old software engineer at Intel Corp. in Hillsboro, Ore.
“People say, ’Oh, this is just a really quick thing,’ ” he said. “Then I look at it, and it’s not a quick thing.”
These days, Mr. Mechalas will lie low until someone has a dire need before offering his expertise. His willpower was put to the test earlier this week, as he suppressed the urge to yell "just come to me for help" while staring at a badly tangled spreadsheet during a presentation.
“I’m an altruist, but it’s not my job to save the world,” he said.
Colin McIllece, 36, a New York purchasing analyst, said being good at Excel has benefits. “It’s kind of like being a wizard,” he said. “You say, ‘I can think of a spreadsheet for that,’ and it’s like you performed a magic trick.”
Mr. McIllece recalls one fiasco where a colleague presented him with a huge document saved into a jumble of folders and teeming with dreaded # symbols, usually an indication of an Excel error.
Like Mr. Kalelkar, he is now more likely to show colleagues they can find answers to their problems through Google searches—a method even the most experienced Excel users often fall back on. People who keep bothering him get their instant messages ignored.
As an Excel expert, “you become indispensable, and that’s a double-edged sword,” Mr. McIllece said.
Jen Lipschitz, a 32-year-old data analyst and project manager from Quincy, Mass., says colleagues often turn to her and the rest of her department for help with their Excel travails.
People say, “ ‘This is Jen, she’s in the smart department,’ ” Ms. Lipschitz said. “If they can’t figure out why the data is being weird, they’ll just go ask Jen down the hall.”
Ms. Lipschitz’s solution: “I’ll just stand there,” she said. As co-workers are explaining the problem, they will frequently figure it out for themselves.
She believes some people get overwhelmed by the possibilities of Excel, a program that manages to be at once simple and mind-bogglingly complex.
“People get intimidated that Excel can do so many things,” she said. “They forget that they need to try.”
CNXSoft: Guest post by Blu about Baikal T1 development board and SoC, potentially one of the last MIPS consumer grade platforms ever.
It took me a long time to start writing this article, even though I had been poking at the test subject for months, and I felt during that time that there were findings worth sharing with fellow embedded devs. What was holding me back was the thought that I might be seeing one of the last consumer-grade specimens of a paramount ISA that once turned the CPU world upside-down. That thought was giving me mixed feelings of part sadness, part hesitation ‒ to not do some injustice to a possibly last-of-its-kind device. So it was with these feelings that I took to writing this article. But first, a short personal story.
Two winters ago I was talking to a friend of mine over beers. We were discussing CPU architectures and hypothesizing on future CPU developments in the industry, when I mentioned to him that the latest Imagination Technologies’ MIPS P5600 ‒ a MIPS32r5 ‒ hosted an interesting SIMD extension ‒ a previously-unseen one in the MIPS world. I had just skimmed through the docs for that extension ‒ MIPS SIMD Architecture (MSA) ‒ and I was impressed with how clean and practical this new vector instruction set looked in comparison to the SIMD ISAs of the day, particularly to those by a very venerable CPU manufacturer. We discussed how the P5600 had found its way into a SoC by the Russian semiconductor vendor Baikal Electronics, and how they were releasing a devboard, which, thanks to limited-series manufacturing, would be well out-of-reach for mortal devs.
Fast forward to this summer, when I got a ping from my friend ‒ he was in St. Petersburg, Russia, browsing the online store of a Moscow computer shop, and there was the Baikal T1 BFK 3.1 board, for the equivalent of 500 EUR. If I ever wanted to get one, now was the time.
Did I want one? Last MIPS I had an encounter with was the Imagination CI20 board, hosting an Ingenic JZ4780 application SoC ‒ a dual-core MIPS32r2 implementation, and that was a mixed experience. I just had higher expectations of that SoC, as neither the SoC vendor nor Imagination did a good job setting the user expectations of what the XBurst MIPS cores actually were ‒ short in-order pipelines, with a non-pipelined scalar FPU, and an obscure integer-only SIMD specialized for video codecs. The one interesting part in that SoC, from my perspective, was the fully-fledged GLESv2/EGL stack for the aging SGX540. What I was looking for this time around was a “meatier” MIPS, one which was closer to the state of the art of this ISA, and the P5600 was precisely that.
So, yes, I very much wanted one. That price was very close to my threshold of ‘buy for science’, but I still had to keep in check my overgrown annual ‘scientific budget’ (as I refer to my devboard expenses in front of my wife), so I hesitated for a moment. To which my friend suggested ‘Listen, your birthday occurs annually, so how about I get you a birthday present, with some credit from future birthdays?’ [A huge thank you, Mitia, for your ingenuity, kindness and generosity!]
The BFK 3.1 is a sub-uATX board ‒ namely of the flexATX factor ‒ a bit larger than mini-ITX, which means it’s compact ‒ not RPi compact, mind you, but still compact for a devboard. Baikal T1 itself is a compact SoC ‒ not much larger than the Ingenic JZ4780. The latter is 17x17mm BGA390 (40nm), vs 25x25mm BGA576 (28nm) for the T1. But the T1 is a proper SoC that contains everything needed for a small gen-purpose computer (sans a GPU), which is what the BFK 3.1 seeks to be. Combined with the versatile MCU STM32F205 (ARM Cortex-M3 @ 120MHz), the T1 allows for an essentially two-chip devboard. Aside from the SoC and its companion MCU, the BFK 3.1 hosts a PCIe x16 connector (x4 active lanes), a SO-DIMM slot, an ATX power connector, 2x 1Gb Ethernet and 2x SATA 3 connectors, a USB2.0, a UART (via mini-USB) and what appears to be a USB OTG, a couple of JTAGs and even an RPi GPIO connector ‒ the rest of the board’s top surface is nearly pristine clean. Ok, there’s one more connector ‒ a proprietary one for the optional 10Gb Ethernet add-on, but that comes more as a curiosity from my current perspective.
Getting the board live was practically uneventful. BFK 3.1 power delivery is via a 24-pin ATX connector ‒ no barrel connectors of any kind, which in my case made two large drawers worth of PSUs useless, but I also had a 20-pin ATX picoPSU at hand (80W DC-DC, 12V input) and a spare AC-DC 12V converter (60W) ‒ that improvised power delivery covered the board plus a SSD more than fine ‒ actually it was overkill, given the manufacturer’s TDP rating of the SoC of 5W. I also had a leftover 4GB DDR3 SO-DIMM from a decommissioned notebook, so I thought I had the RAM covered as well. A “minor” detail had escaped my attention ‒ that SO-DIMM was of the 1333MT/s (667MHz) variety, whereas the board took 1600MT/s (800MHz) sharp ‒ my first boot of the board took me as far as RAM controller negotiations.
Board fitted with “wrong” SO-DIMM @ 667 MHz
One facepalm and a visit to the local store later, the board was hosting shiny-new 8GB of DDR3, to specs and all.
Yet another minor detail about the RAM had originally escaped my attention, but that detail was not crucial to the booting of the board, and I found it out only after the first boot: the SoC had a 32-bit RAM bus, so it was seeing half the capacity of the 64-bit DIMM. Perhaps it could be arranged for such a bus to see the full DIMM capacity ‒ I’m not a hw engineer to know such things, and the designers of the BFK 3.1 clearly did not arrange for that. Which is a bit unfortunate for a devboard. Oh well ‒ back to square ‘4GB of RAM’.
Apropos, as it turned out, I really did need the RAM, since exposing the full potential of the P5600 meant some compiler building lay ahead of me, and I always self-host builds when possible. But I’m getting ahead of myself.
The board arrives with a Busybox in SPI flash, and Baikal Electronics provide two revisions of Debian Stretch images with kernel 4.4 for day-to-day uses from a SATA drive. All available boot media are exposed via the cleanest U-Boot menu interface I’ve seen yet.
Footnote: aside from dd-ing the Debian image to the SSD, all interactions with the BFK 3.1 were done without involvement of PCs ‒ the above screengrab is from my trusty chromebook.
The obligatory dump of basic caps follows:
Linux baikal 4.4.100-bfk3 #4 SMP Thu Feb 15 17:25:02 MSK 2018 mips GNU/Linux
Whether the kernel saw this as a MIPS32r2 machine or it made use of the address extensions ‒ all that was beyond the scope of this first reconnaissance. I wanted to examine uarch performance, and as long as compilers were in the clear about the CPU’s true ISA capabilities I was set.
The VZ extension is a virtualization thing ‒ far from my interests. The EVA and XPA are addressing extensions ‒ Enhanced Virtual Address and Extended Physical Address, respectively. The former allows more efficient virtual-space mapping between kernel and userspace for the 32-bit/4GB process-addressable memory space. And the latter is, well, a physical address extension. From the P5600 manual:
Extended Physical Address (XPA) that allows the physical address to be extended from 32-bits to 40-bits.
Clearly both addressing extensions could be of good use to kernel developers. As for me, of the listed ISA extensions, MSA was the one I truly cared about.
Timing buffered disk reads: 1206 MB in 3.00 seconds = 401.89 MB/sec
As wise men say, ‘Have decent SATA performance ‒ will use for a build machine.’
And finally, an interrupts-related observation that might help me obtain cleaner benchmarking results:
blu@baikal:~$ cat /proc/interrupts
           CPU0       CPU1
  1:      16906       9097  MIPS GIC Local   1  timer
  2:          0          0  MIPS GIC Local   0  watchdog
  8:       5428          0  MIPS GIC   8  IPI resched
  9:          0       4970  MIPS GIC   9  IPI resched
 10:       4693          0  MIPS GIC  10  IPI call
 11:          0      15118  MIPS GIC  11  IPI call
 23:          0          0  MIPS GIC  23  be-apb
 31:          0          0  MIPS GIC  31  timer0
 38:          0          0  MIPS GIC  38  1f200000.pvt
 40:          0          0  MIPS GIC  40  1f046000.i2c0
 41:        191          0  MIPS GIC  41  1f047000.i2c1
 47:          5          0  MIPS GIC  47  dw_spi0
 48:          0          0  MIPS GIC  48  dw_spi1
 55:       2464          0  MIPS GIC  55  serial
 56:         10          0  MIPS GIC  56  serial
 63:          0          0  MIPS GIC  63  dw_dmac
 71:      21832          0  MIPS GIC  71  1f050000.sata
 75:          0          0  MIPS GIC  75  xhci-hcd:usb1
 79:        652          0  MIPS GIC  79  eth1
 87:          0          0  MIPS GIC  87  eDMA-Tx-0
 88:          0          0  MIPS GIC  88  eDMA-Tx-1
 89:          0          0  MIPS GIC  89  eDMA-Tx-2
 90:          0          0  MIPS GIC  90  eDMA-Tx-3
 91:          0          0  MIPS GIC  91  eDMA-Rx-0
 92:          0          0  MIPS GIC  92  eDMA-Rx-1
 93:          0          0  MIPS GIC  93  eDMA-Rx-2
 94:          0          0  MIPS GIC  94  eDMA-Rx-3
 95:          0          0  MIPS GIC  95  MSI PCI
 96:          0          0  MIPS GIC  96  AER PCI
103:          0          0  MIPS GIC 103  emc-dfi
104:          0          0  MIPS GIC 104  emc-ecr
105:          0          0  MIPS GIC 105  emc-euc
134:          0          0  MIPS GIC 134  be-axi
ERR:          0
Notice how all serial and SATA interrupts are serviced by the 1st core? We could put that to some use.
Now the actual fun could begin! Being the control freak that I am, I tend to run a couple of micro-benchmarks when testing new uarchitectures ‒ one on the ‘gen-purpose’ side of performance, and one on the ‘sustained fp’ side of performance. Both of them being single-threaded, and the CPU at hand not featuring SMT, that meant I could focus on the details of the uarch by isolating all tests to the relatively-uninterrupted 2nd core.
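Isolating a run to the second core is straightforward on Linux with taskset; a minimal sketch (the md5sum pipeline below is just a stand-in for the actual micro-benchmark binary):

```shell
# Run a workload pinned to CPU 1 (the second core), away from the
# serial/SATA interrupt traffic that the dump above shows landing on CPU 0.
# The md5sum pipeline stands in for the real benchmark binary.
taskset -c 1 sh -c 'head -c 1000000 /dev/zero | md5sum'
```

The same effect is available programmatically via sched_setaffinity(2), but for one-off benchmark runs the wrapper is all you need.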
Unfortunately, there was one last obstacle before me ‒ Debian Stretch comes with gcc-6.3, which does not know of the MSA extension in the P5600. For that I needed a compiler one major revision newer ‒ gcc-7.3 was fully aware of the novel instruction set, and so my next step was building gcc-7.3 for the platform. Easy-peasy. Or so I thought.
A short rant: I have difficulties understanding why a compiler’s default-settings self-hosted build would fail with an ‘illegal instruction’ in the bootstrap phase. But that’s the case with g++-7.3 on Debian Stretch when doing a self-hosted --target=mipsel-linux-gnu build on the BFK 3.1, and that’s what made me approach the gcc-dev mailing list with the wrong kind of support question, to which, luckily, I still got helpful responses.
Back to the BFK 3.1, where I eventually got a good g++-7.3 build via the following config, largely copied over from Debian’s g++-6.3:
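Purely as an illustration (every flag below is an assumption, modeled on common Debian-style gcc builds rather than the actual config used), a self-hosted configure for this target might look along these lines:

```shell
../gcc-7.3.0/configure \
    --prefix=/opt/gcc-7.3 \
    --target=mipsel-linux-gnu \
    --enable-languages=c,c++ \
    --with-arch=mips32r5 \
    --disable-multilib
```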
Yay, got MSA compiler support! Now I could do all the fp32 (and not only) SIMD I wanted.
But first I stumbled upon a surprise coming from the non-SIMD micro-benchmark ‒ a Mandelbrot plot written in the language Brainfuck, and run through a home-grown Brainfuck interpreter.
Running that before and after upgrading the compiler showed the following results:
Brainfuck Mandelbrot ‒ three versions of the code, across two compilers:
g++-6.3.0: 0m43.539s (vanilla)
g++-6.3.0: 0m38.176s (alt)
g++-6.3.0: 0m38.176s (alt^2)
Notice how, for the exact-same code and the exact-same optimization flags, the two compilers produced a performance delta as large as 20% in favor of the newer g++? That was not due to some new, smarter P5600 instructions utilized by the newer compiler ‒ nope, the generated code in both cases used the same ISA. It’s just that the newer compiler produced notably better-quality code ‒ fewer branches, more linear control flow. Yay for better compilers!
Those g++-7.3 results positioned the P5600 firmly between the AMD A8-7600 and the Intel Core2 Duo P8600 in the clock-normalized Mandelbrot performance charts (where the Penryn also takes advantage of the custom Apple clang compiler, which generally outperforms gcc at this combination of CPU and task).
Per-clock, the P5600 also scored ahead of the Cortex-A15, which I believe is the closest competitor in the category of the P5600. Where the P5600, or perhaps its incarnation in the Baikal T1, fell short, was in absolute performance due to low clocks. Should that core reach clocks closer to 2GHz, we’d be seeing much more interesting absolute-performance results.
Ok, it was time to see how the P5600 did at fp32 SIMD. For that I used an SGEMM matrix multiplier. Making use of the novel MSA ISA took minimal effort, partially thanks to gcc’s support for generic vectors, partially thanks to the simplicity of the MSA ISA. The MSA version of the matmul code, dubbed ‘ALT=8’, took less than an hour to code and tune, and resulted in ~3.9 flop/clock for the small, cache-fitting dataset (64×64 matrices), and 2.1 flop/clock for the large dataset (512×512 matrices). Those results placed the P5600 firmly between Intel Merom and Intel Penryn for the small dataset, and slightly below the level of ARM Cortex-A72 and Intel Merom for the large dataset. The large dataset, though, exhibited rather erratic behavior ‒ run-times varied considerably even when pinned to the 2nd core. It was as if the memory subsystem, past L2D, was behaving inconsistently doing 128-bit-wide accesses. That warranted further investigation, which would happen on a better day.
But let me finish my BFK 3.1 story here, and give my subjective, not-guaranteed-impartial opinion of the test subject.
My impressions of the P5600 in the Baikal T1 are largely positive. Using my limited micro-benchmark set as a basis, that uarchitecture does largely deliver on its promises of good gen-purpose IPC and good SIMD throughput per clock, and could be considered a direct competitor to the best of the 32-bit ARM Cortex designs. That said, Baikal T1 could use higher clocks, which would position it in absolute-performance terms right in the group of the Core2 lineup by Intel and the Cortex-A12/15/17 lineup by ARM. Which, if one thinks of it in the grand scheme of things, would be nothing short of a great achievement for the Baikal Warrior (Imagination aptly named the P-series MIPS designs ‘Warrior’ ‒ they’d have to fight for the survival of their ISA). If we ever live to see another Baikal T-series, that is ‒ Baikal Electronics are also developing their Baikal M-series ‒ ARM Cortex-A57 designs.
MIPS once turned the CPU world around. Can it survive its darkest hour (at least in the West ‒ in the East the Chinese have their Loongson) and step into a renaissance, or will it perish into oblivion? I, for one, would love to see the former, but I’m just an old coder, and old coders don’t get much say these days.
We’re excited to announce a project that we’ve been working on for quite some time: Over the next three months, we will be combining our two Linux infrastructures into a single platform.
This is a big user experience improvement, making it easier to set up and maintain projects on Travis CI. In particular, it will simplify setup for folks new to Travis CI, who currently have to make a decision based on the finer points of containers vs. vms. We would like our platform to be as accessible as possible to all users, and see a combined Linux infrastructure as an important step in that direction.
Going forward, we will slowly transition the container-based environment out, in favor of a build environment that is entirely vm-based. Folks using container-based infrastructures will be the only ones affected, and this transition will roll out slowly, depending on whether you specify sudo: false in your .travis.yml. Repositories created before January 2015 are also already routed to the vm-based infrastructure (if you don’t specify sudo: false). If you’re currently using the vm-based Linux infrastructure (or run your own Travis CI Enterprise installation), this change will not affect your projects.
We know the container-based infrastructure has quite a few fans - in fact, it runs about 45% of all Travis CI builds. However, we also know that more and more folks are building and deploying Docker images, and using Docker within a container-based build is not a great experience. This is especially true since the privileges within that container are significantly reduced to maintain security. In addition, we’ve also noticed that as folks move toward more automation on top of CI builds, it takes too many steps to change settings to switch infrastructures. What should be an easy step in adopting a tool requires more config down the road than is necessary. We’re pretty excited to offer a better user experience with the single infrastructure.
Also, this change will reduce duplicate maintenance and monitoring work our build infrastructure team has to do - meaning these fabulous humans will have more time to work on other projects to improve your experience of Travis CI.
Migration Timeline
Since combining infrastructures impacts lots of folks, we’re making changes incrementally. The process will happen in phases:
Changes to the Default Behavior (Starting Now)
Starting this week, we have begun the process to move repositories on travis-ci.org to using the vm-based infrastructure. If you do not specify sudo: false in your .travis.yml, your next build will run on the vm-based infrastructure.
The process for travis-ci.com will be a bit more measured. Starting in two weeks, we’ll begin randomly sampling a percentage of these repositories to redirect to the combined infrastructure, until everyone has moved over.
Temporary Rollback
While we expect fairly limited impact to most builds, folks that depend on the container-based IPs should expect to see these IPs changing as projects move over to the new infrastructure (you can check what new IPs to expect in the docs). In addition, if you use a specific build environment group and do not specify sudo: required, it’s possible you’ll see slight variations in your build environment as projects are migrating.
You can roll back any time during this phase by adding sudo: false to your .travis.yml. We encourage you to make any updates you need soon, though, as sudo: false will be fully phased out shortly.
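Concretely, the temporary rollback is a one-line addition to your build config:

```yaml
# .travis.yml
# Opt back into the container-based infrastructure during the
# transition window (note: this key is being phased out).
sudo: false
```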
Changing Behavior for Explicit sudo: false (November)
Starting in the second week of November (we’ll post another blog post), we will start to migrate repositories that use sudo: false to the vm-based infrastructure. Open source repositories will be rerouted first, with closed-source to follow. We’ll keep you posted via this blog, the changelog, and Twitter - keep an eye out for updates!
If you have feedback or anything you’d like to ask, give us a shout on the community forum, travis-ci.community or email us at support@travis-ci.com. We’re looking forward to helping out.
Thank yous!
Last of all, we wanted to give a big shout-out to our community - you all are amazing, and we appreciate your suggestions, patience, and comments! Also, we owe a huge thanks to our friends at GCE who are incredibly helpful in this project. Finally, major props to the Build Infrastructure and Support & Success teams who are making this all happen! 💚
The digital marketing manager will be responsible for executing and optimizing search engine marketing campaigns that dramatically increase the top of our lead funnel. You’ll work in coordination with the Director of Marketing, the Sales Team, and Director of Product to make sure campaigns are constantly improving and in line with our sales and strategic company plans. Campaign activities include search engine optimization (SEO), local search optimization, content development, link building, paid search campaign management (SEM), and marketing research and analysis for individual company campaigns. You’ll also be responsible for making certain all our web properties such as the marketing site and blogs are search engine optimized (SEO). Additionally, you’ll be responsible for helping with technical audits, content marketing strategy, and ongoing contributions to the evolution of Submittable’s search marketing methodology.
Core Responsibility:
Generate consistent and high-quality leads via SEO and SEM
Experience and Skills Required:
2+ years SEM experience managing accounts in Google AdWords and Bing Ads
2+ years SEO experience
Ability to forecast and manage digital spends and stay within approved budget
Ability to handle multiple projects at once and perform under tight deadlines
Excellent analytical, research, written and verbal communication skills
Excellent at Excel and/or Google Sheets
Ability to work in the Missoula, MT, headquarters office during core business hours, 8:30a-5:30p Monday through Friday
Nice to have:
Bachelor's Degree in something interesting
Google Analytics, Google AdWords, and Bing Ads certified
Knowledge of web technologies such as HTML & JavaScript
Experience with CMS and CRM platforms such as HubSpot
About Submittable:
Founded in 2010, Submittable is a submission management platform used by 11,000+ clients in over 63 countries, including CBS, MIT, National Geographic, and The Kennedy Center. Started by a novelist, a musician, and a filmmaker who all also happened to be engineers, Submittable is now a rapidly growing company of 60+ valued and thriving employees with diverse backgrounds and interests. The software is used by over a million people every month.
Based in Missoula, Montana, Submittable’s office is 1 block from surfing, 7 miles from skiing, and 5 miles from the Rattlesnake Wilderness. If you’re not into sports or the outdoors, many of us have other interests, too--we also have, among our staff, professional musicians, accomplished artists, published writers (even poets!), former chefs, and a quilter who once won third place in the county fair. Our average "commute" time is 10 minutes, with many staff biking or walking to our office in downtown Missoula. Housing costs ⅓ that of NYC and SF (10% of Submittable employees are first-time home buyers). The average TSA wait time at Missoula International Airport is 5 minutes.
We offer highly competitive benefits for full-time employees, including:
Health insurance, 401K, and optional HSA and DCA accounts
Flexible hours, including flexible vacations and sick leave
Generous paid parental leave policy for mothers, fathers, and adoptive parents
Discounted fitness memberships, personal development stipends, and book purchase reimbursement
Involvement in community outreach programs for all employees, including company volunteer outings at local nonprofits
Fully stocked kitchen with complimentary snacks and beverages for all employees
As a product used globally, we're very motivated to hire and support employees who are representative of different and diverse backgrounds and experiences, including but not limited to diversity of ethnicity, sexual orientation, gender, religion, ability, culture, and socioeconomics.
The ability to log in as one of your users is one of the highest value features you can develop to support your customers.
The ability to log in as one of your users is one of the most dangerous features you can develop to support your customers.
With that pithy introduction out of the way…
No, actually, let’s back up a minute because I’m not sure that you’ve fully appreciated what you’re about to do: you are creating a security hole in your app.
If your app is Helm’s Deep, then impersonating users is like adding a small unguarded culvert that bypasses the main fortifications. You should expect Orcs… and add the appropriate defences.
There’s a lot of things to consider when implementing a feature like this and the technical details are possibly the least interesting. They also vary considerably between apps, frameworks, and languages.
Technically, logging in as another user is probably as simple as session[:current_user] = user.id or something similar. Whatever. You probably know how this works.
Logging in as another user is not the hard part.
Here’s some more important things to consider:
Do users need to give explicit permission for support staff to impersonate them?
Who is authorised to impersonate users?
How have you authenticated the support staff?
How long does the impersonation last?
How does the impersonator know they are impersonating another user?
What unintentional effects do you need to avoid?
Getting permission
This might not be required in every application, but if you’re dealing with sensitive or financial data you might need to ask the user’s permission before viewing their account. I’ve seen this implemented by FreeAgent as a special code visible in the user’s settings which must be provided directly to the support staff.
The user can also opt in or out of allowing support staff to access their account.
Who can impersonate users?
This is really an internal company process but you should be clear about who can and cannot impersonate a user, under what circumstances, and for what purposes. You probably want your support staff to impersonate users so they can fix/debug an ongoing issue. You probably don’t want your sales people impersonating users out of idle curiosity.
One feature I’ve previously built is some form of accountability. You might build an audit log in the database recording each time a member of the support staff impersonated a user. Personally, I think audit logs are great for analysing abuse after it’s occurred but do little to act as a deterrent. Instead, I think good behaviour can be enforced by announcing the impersonation publicly — posting to a Slack channel each time someone is impersonated is a simple method of ensuring accountability.
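As a sketch of that announce-it-publicly idea (the helper name and fields are made up, and the actual Slack delivery is left out), the audit record and the public message can come from one small function:

```ruby
require 'time'

# Build both the audit-log entry and the public announcement for an
# impersonation session. Hypothetical helper -- adapt to your own models.
def impersonation_event(admin_email:, user_email:, reason:, at: Time.now.utc)
  {
    audit: { admin: admin_email, user: user_email, reason: reason, at: at.iso8601 },
    slack_message: "#{admin_email} is impersonating #{user_email} (#{reason})"
  }
end
```

Persisting `:audit` and posting `:slack_message` to a team channel on every call is what turns the log from a forensic tool into a deterrent.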
Authentication
Now that your admin accounts are a backdoor to every user account, it’s time to take another look at their security.
First, I think it’s important to have separate Admin and User models, as the simplest way to avoid privilege escalation attacks.
Next, we should ensure that it really is an admin impersonating the user. But don’t we just check that they’re logged-in as an admin?
Ha! Er… no. What happens if your support staff laptop is stolen? Or they’ve reused a password? You need another means of verifying it’s really an admin user. A sort of second password…a two-factor authentication if you will. 2FA. Top tip: just use https://www.twilio.com/authy to generate and confirm a confirmation code. It’s dead simple and will take a few hours at most.
This ensures that the logged-in admin account is being operated by the member of staff you think it is.
How long does the impersonation last?
A fairly common problem occurs when you impersonate a user on Friday, and then on Monday you open the app and forget you’re logged into that user’s account. Hopefully you realise in time before you do anything too… permanent like send a newsletter out with the wrong account.
A simple solution is to expire the impersonation much quicker than normal session cookies. If your user sessions normally last 30 days, I’d reduce the session timeout for impersonations to something like 1 hour.
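A sketch of that shortened timeout, assuming you stamp the session when the impersonation starts (the names here are illustrative, not from any particular framework):

```ruby
IMPERSONATION_TTL = 3600 # 1 hour, vs. ~30 days for a normal session

# True when an impersonation begun at `started_at` should be expired.
def impersonation_expired?(started_at, now: Time.now, ttl: IMPERSONATION_TTL)
  (now - started_at) > ttl
end
```

A before-request hook can call this with something like `session[:impersonation_started_at]` and drop back to the admin account whenever it returns true.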
How does the admin know they are impersonating another user?
Even if you limit the duration, you’ll still want to display some indication that they’re impersonating another user.
In one app, I added a large/prominent ghost 👻 fixed in the left-hand corner which would end the session when clicked. It was a fun but important feature. A banner at the top works just as well.
What unintentional effects do you need to avoid?
It’s only after you’ve built an impersonation feature that you discover all the unintended side-effects. Try to shortcut this process by considering where else you send your user information.
Some of my hard-won lessons include:
turn off Intercom when impersonating a user! Otherwise, you’ll send a message and end up reading it yourself in the impersonated session… and the user will never get a notification!
disable all analytics or you’ll develop a very suspicious hotspot of user activity around your support staff’s location!
if possible, disable user notifications/emails when an account is being impersonated — or remind staff that impersonating a user will still generate emails, notifications, and dashboard events.
Multi-tenant applications
It’s slightly more complex to impersonate users when they’re on different subdomains or custom domains. The basic process isn’t too arduous though:
Generate a secure token attached to the target user’s account
Redirect the admin to the user’s (sub)domain, passing the token along
Verify the token and sign them in using whatever version of session[:current_user] = user.id your app requires
Remove the token from the user account so the impersonation can’t be replayed
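The token dance above can be sketched like this (an in-memory hash stands in for your database table; everything here is illustrative):

```ruby
require 'securerandom'

# In-memory stand-in for a tokens table: token => user_id.
TOKENS = {}

# On the admin domain: mint a single-use token for the target user.
# The returned token goes into the redirect URL to the tenant's domain.
def issue_impersonation_token(user_id)
  token = SecureRandom.urlsafe_base64(32)
  TOKENS[token] = user_id
  token
end

# On the tenant domain: trade the token for a user id, exactly once.
# Returns nil if the token is unknown or has already been used.
def consume_impersonation_token(token)
  TOKENS.delete(token)
end
```

Deleting the token as part of consuming it is what makes the sign-in link single-use and non-replayable.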
Recap
So here’s the outline process for impersonating a user:
In your admin dashboard, let staff choose a user account to impersonate
Request a 2FA verification code to confirm the identity of the admin user
Once you’ve confirmed their identity, create the user session. In a simple web app, this might be just session[:current_user] = user.id. Or you might do the more complex multi-tenant dance with tokens and redirects.
Record the impersonation session in an audit log
Notify a team Slack channel with the details of the session
Add a session variable indicating that the account is being impersonated session[:impersonating] = user.id
Display a banner with a warning message, the name of the current user, and a way to end the session
Disable all user analytics, both JavaScript and server-side
If necessary & possible, disable user notifications like account activity emails
Anyone manufacturing an internet-connected device in California will, from 2020, have to give it a unique password in an effort to increase overall online security.
That's the main impact of a new bill recently signed into law by Cali governor Jerry Brown, SB-327 called "Security of connected devices."
The law is the US state's effort to deal with an ever-increasing problem: sloppy security on millions of new consumer devices that are being sold and attached to home wireless networks.
In recent years, automated malware has wreaked havoc across the globe, from NotPetya to WannaCrypt, shutting down everything from an average user's PC to entire hospital networks. As well as hacking systems and grabbing sensitive information, miscreants have also managed to create huge global networks of compromised devices to carry out denial-of-service attacks.
The new law is intended to deal with one of the more common routes to mass infection: default or hardcoded passwords.
It is much easier and simpler for a manufacturer of, say, security cameras to have a single password on all their devices and prod users to change it. But, with depressing predictability, most consumers don't bother – they just fire up their device, connect it to their wireless and leave it be. That leaves the device – multiplied by millions – wide open to attack.
Do I look bovered?
Manufacturers know this, and they know the answer is to give each device its own unique password, but many still don’t bother because it costs money and, once the device is out of their hands, it is no longer their responsibility.
The law changes that to a degree, without adding extra liability, by requiring that manufacturers either include "a preprogrammed password unique to each device manufactured" or "a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time."
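Minting a per-device credential at manufacture time is cheap; a minimal sketch (the format and length below are arbitrary choices for illustration, not anything the law prescribes):

```ruby
require 'securerandom'

# Generate a unique, human-typable default password for one device,
# e.g. to print on the device's label at the factory.
def device_default_password
  SecureRandom.alphanumeric(12)
end
```

The marginal cost is a random draw per unit and a label printer, which is part of why the "single hardcoded password" shortcut is so hard to defend.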
It will kick in on January 1, 2020 and will "require a manufacturer of a connected device… to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device… and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure, as specified."
Which is all great.
But it is also a massive missed opportunity and a sign that there remains a dangerous lack of decent technical knowledge in the corridors of power.
The idea of the bill is to address – and get ahead of – massive and increasing insecurity in the internet overall. Seemingly everything connects to the internet these days – even baths and showers – in large part because a huge percentage of the population in Western economies now have smartphones and internet access. It helps that it is easier and cheaper than ever to put internet connectivity into a device.
But while requiring that every device have its own unique password is a step forward, it represents the lowest-hanging fruit on the security tree and may even give lawmakers their own false sense of security that they have fixed the problem. They have not.
Update
While default passwords are a particular problem, a bigger one is the failure to update software. There are many ways to access an electronic product – and a username/password is only one of them.
New security holes are being discovered all the time and they typically take advantage of the various authentication systems that exist in such products but which are invisible to consumers.
Even when a manufacturer does go to the trouble to update their software to deal with the latest security threats, it often falls to the consumer to run updates on their system to install it. And if consumers can't be bothered to change a default password, they almost certainly can't be bothered to periodically update their devices' software.
The largest companies – like Apple, for example – go to some trouble to prod their users into downloading and installing updates where security fixes are often mixed with new or improved features. But you only have to look at the long delays in security updates with Google and Android to see that without some kind of persistent prodding or shiny incentive, updates do not happen.
And that's phones and computers: things that people typically look at and directly interface with multiple times a day. Updating is a much, much bigger problem with things like internet routers, security cameras, smart lights, smart sockets and other smart-whatevers that you rarely interact with.
It would likely be a mistake to mandate automated security updates, because that would make the system companies set up to provide those automated updates a prime target for attack by hackers. If you could hack a manufacturer's system, you could install whatever you wanted on every device in one fell swoop.
Alert! Alert!
But using alerts and transparency could achieve the same goal: getting people to pay attention to their insecure devices. Battery-powered smoke alarms go off every year when the battery runs out – what if other devices emitted a similar alarm once a year, requiring you to check and install any updates before the noise stops?
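The smoke-alarm idea is simple enough to sketch in code. The following is a hypothetical illustration of such a nag policy, not any real device's firmware; the threshold and function names are invented:

```python
from datetime import date, timedelta

# Nag once a year, like a smoke-alarm battery chirp (threshold is invented).
ALERT_AFTER = timedelta(days=365)

def needs_alert(last_update: date, today: date) -> bool:
    """True if the device has gone too long without an update check."""
    return today - last_update > ALERT_AFTER

def acknowledge(last_update: date, today: date, update_installed: bool) -> date:
    """The alarm only resets once the user actually installs updates."""
    return today if update_installed else last_update

# A device last updated 400 days ago should be chirping:
print(needs_alert(date(2017, 8, 1), date(2018, 9, 5)))  # True
```

The point of the sketch is that the device, not the user, carries the reminder: silencing it requires action, the same way a smoke alarm demands a fresh battery.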
Many device manufacturers also have clunky update interfaces that put people off using them – but that would almost certainly change if consumers had little choice but to use them to get the device working optimally again.
Similarly, there are other lazy shortcuts that manufacturers take that leave their devices insecure. Leaving unused ports open, for example. Or allowing their devices to communicate with anything and everything else on the same network.
If manufacturers were forced to adopt a GDPR-style minimization effort where the philosophy is that only what is needed is allowed, then it would not only make everything more secure but would force companies to put greater thought into the security of their device. Or how about two-factor authentication?
Not that these proposals are perfect – they are just ideas – but they are the sort of ideas that should be doing the rounds in Sacramento and Washington DC, with consumer groups, technical experts and internet-connected device manufacturers all asked to supply their viewpoints and suggestions. There's also the underestimated impact of actually educating people about the risks.
We need an Internet Device Security Bill with a clear goal to improve overall online security in a real way, with proper examination of all the issues and political will driving to a series of real, effective changes.
We need a new Ralph Nader and an internet seat-belt law. And we need it before the next wave of malware causes even greater problems.
California's SB-327 is one step on that path, but it is only one step and it's not clear anyone is planning to take another one anytime soon. ®
Danish authorities have been trying to unravel Mr. Shah’s handiwork for over three years. Much of his modus operandi was revealed, experts believe, in 2017 when police in Germany, who were acting at the behest of the Danes, used a search warrant to sift through the records of North Channel Bank, a small bank in Mainz, a city outside Frankfurt. A team of 60 investigators found that the bank was used by 27 of the American pension plans, which were ultimately paid a total of about $168 million by SKAT.
Investigators found that the accounts didn’t actually own any shares of Danish companies, said Prof. Christoph Spengel, who served as an adviser to Germany’s Parliament during an inquiry into the questionable trades. He studied the results of the North Channel investigation, issued in a report by a German district attorney. He said that the 27 plans primarily traded with one another. One would place an order to short a chunk of shares of Danish stock — essentially, a sale of shares the plan did not yet own, to be delivered later.
Soon after, an order was placed by another of the 27 plans to buy the order for the shorted shares. That open buy order — essentially, a promise to purchase shares that the other plan still didn’t own — was proof enough for SKAT to approve a refund. Once the refund was issued, the buy order was canceled.
“This wasn’t a transaction, this wasn’t tax planning,” Professor Spengel said. “This was fraud.”
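As described above, the mechanism is circular: the open buy order alone, not any actual shareholding, triggered the refund. The toy model below paraphrases that description (the rate, amount, and function name are invented for illustration; this is not a statement about any party's actual books):

```python
# Toy model of the circular trade described above: two pension plans
# trade with each other, yet no shares ever change hands.
REFUND_RATE = 0.27  # illustrative dividend-withholding rate

def circular_trade(notional):
    shares_owned = 0          # neither plan ever holds any Danish stock
    buy_order_open = True     # one plan "promises" to buy the other's short
    # The tax authority treats the open buy order as proof of ownership:
    refund = REFUND_RATE * notional if buy_order_open else 0.0
    buy_order_open = False    # the buy order is canceled once the refund lands
    return refund, shares_owned

refund, shares = circular_trade(1_000_000)
print(refund, shares)  # 270000.0 0
```

The output makes the absurdity concrete: a six-figure refund issued against a position of exactly zero shares.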
A spokeswoman for North Channel said the bank was cooperating with the authorities and had no comment.
After funds were wired to North Channel Bank, Professor Spengel said, they were shunted through two more banks, first one in London, then another in Germany. Finally, he said, they were sent to accounts controlled by Mr. Shah and his wife, Usha.
Jack Irvine, Mr. Shah’s spokesman, said none of this was true.
“Neither Solo nor Sanjay have had anything to do with North Channel Bank,” he wrote in an email, “so there appears to be confusion, which is not unusual in this case.”
There has been outrage in Denmark over the SKAT scandal, but so far the repercussions have been surprisingly limited. No ministers have been fired. The director of SKAT was dismissed in August 2016, though Mr. Shah’s machinations were only one of several causes. A new investigation into the cum-ex disaster, ordered by the justice minister in February, could last years. For now, politicians here seem to emphasize pragmatism over finger-pointing.
When Amazon rolled out its membership-based two-day shipping service in 2005, e-commerce and customer expectations around fulfillment speed changed forever.
Today, more than 100 million people use Amazon Prime. That means 100 million people are fully accustomed to two-day shipping, and if they can’t have it, they shop elsewhere. As Christopher Mims recently put it: “Alongside life, liberty and the pursuit of happiness, you can now add another inalienable right: two-day shipping on practically everything.”
To power these Prime-like delivery options, online marketplaces and the Canadian e-commerce business Shopify are relying on a little upstart.
One-year-old Deliverr helps businesses offer rapid delivery experiences to their customers. Today, the company is announcing a $7.1 million Series A led by Joe Lonsdale’s 8VC, with participation from Zola founder Shan-Lyn Ma, Flexport chief executive officer Ryan Peterson and others.
The San Francisco-based startup uses machine learning and predictive intelligence to determine in which of its warehouses to store its clients’ goods.
Currently, Deliverr operates out of more than 10 warehouses in Texas, Missouri, Pennsylvania, Ohio and New Jersey, among other states, though co-founder Michael Krakaris says that number is growing every week. Its customers typically store inventory in three to five different locations based on Deliverr’s predictive algorithms.
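Deliverr's actual models are proprietary, but the placement idea can be illustrated with a toy example: score each candidate warehouse by demand-weighted distance to historical orders and stock the top few. Everything below (the coordinates, volumes, and the Manhattan-distance metric) is invented for illustration:

```python
# Hypothetical sketch: pick the k warehouses closest, on average, to demand.
def pick_warehouses(warehouses, orders, k=3):
    """warehouses: {name: (x, y)}; orders: list of ((x, y), volume)."""
    def score(loc):
        wx, wy = loc
        # Demand-weighted Manhattan distance to every historical order.
        return sum(v * (abs(wx - ox) + abs(wy - oy))
                   for (ox, oy), v in orders)
    ranked = sorted(warehouses, key=lambda name: score(warehouses[name]))
    return ranked[:k]

warehouses = {"TX": (0, 0), "NJ": (10, 1), "OH": (6, 3), "MO": (3, 1), "PA": (8, 2)}
orders = [((9, 1), 50), ((1, 0), 20), ((7, 2), 30)]
print(pick_warehouses(warehouses, orders, k=3))  # ['PA', 'NJ', 'OH']
```

A production system would optimize over real carrier zone maps, SKU-level demand forecasts, and storage costs rather than a flat 2-D grid, but the objective (placing stock near where orders historically come from) is the same.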
Unlike Amazon, which owns more than 75 fulfillment centers, Deliverr doesn’t own its warehouses. Krakaris describes the company’s strategy as a sort of Uber for fulfillment.
“Uber didn’t change the physical infrastructure of cars. They didn’t build their own taxis. What they did was create software that could connect excess capacity drivers,” Krakaris told TechCrunch. “Most warehouses aren’t going to be full. We are going in and filling that extra space they wouldn’t otherwise fill.”
One of the startup’s tricks is to use brand-neutral packaging so any and all marketplaces could theoretically power fulfillment through Deliverr. Amazon, of course, sticks a Prime sticker on all its outgoing packages. And because Amazon’s fulfillment service is used by some sellers, eBay items are known to show up at customers’ homes in Amazon-branded packaging. Not a great look for eBay.
“You need an independent fulfillment service that can handle all these different fulfillment channels and be neutral,” Krakaris said.
Deliverr plans to use the investment to scale its team and ink partnerships with additional online retailers.
The sales intelligence firm Apollo sent a notice to its customers last week disclosing a data breach it suffered over the summer. "On discovery, we took immediate steps to remediate our systems and confirmed the issue could not lead to any future unauthorized access," cofounder and CEO Tim Zheng wrote. "We can appreciate that this situation may cause you concern and frustration." In fact, the scale and scope of the breach has a lot of people concerned.
Apollo is a data aggregator and analytics service aimed at helping sales teams know who to contact, when, and with what message to make the most deals. "No one ever drowned in revenue," the company says on its site. Apollo also claims in its marketing materials to have 200 million contacts and information from over 10 million companies in its vast reservoir of data. That's apparently not just spin. Night Lion Security founder Vinny Troia, who routinely scans the internet for unprotected, freely accessible databases, discovered Apollo's trove containing 212 million contact listings as well as nine billion data points related to companies and organizations. All of which was readily available online, for anyone to access. Troia disclosed the exposure to the company in mid-August.
"There is always a high risk for fraud, spam, or other even harmful actions when these types of data sets leak."
Vinny Troia, Night Lion Security
As Apollo noted in its letter to customers, it draws a lot of its information from public sources around the web, including names, email addresses, and company contact information. But it also scrapes Twitter and LinkedIn. In fact, the information in the profiles Apollo compiles is so detailed that Troia originally mistook it for a trove from LinkedIn. Some of Troia's methods of investigating the Apollo breach have been called into question, though, particularly that he posted a listing for the exposed data on a dark web marketplace. Troia claims he never planned to actually sell the data, and that he made the post as a ruse to aid other ongoing research.
For its part, LinkedIn issued a firm rebuke. "Our investigation into this claim found that a third-party sales intelligence company that is not associated with LinkedIn was compromised and exposed a large set of data aggregated from a number of social networks, websites, and the company’s own customers," the company said in a statement.
Combining all of that public data in one easily accessible location creates inherent risk; if it leaks, as the Apollo data has, it enables scammers, fraudsters, and phishers to craft compelling targeted attacks against a huge number of people. But the Apollo breach has an additionally problematic layer. "Some client-imported data was also accessed without authorization," Zheng wrote in the disclosure to customers last week.
Customers access Apollo's data and predictive features through a main dashboard. They also have the option to connect other data tools they might use, for example authorizing their Salesforce accounts to port data into Apollo. Troia found that more than seven million pieces of internal "opportunity" data, information about impending sales commonly associated with Salesforce, were exposed in the breach. One Apollo client alone had almost a million records exposed.
"There is always a high risk for fraud, spam, or other even harmful actions when these types of data sets leak," Troia says. "People already receive phishing and voice-phishing messages every day. Now you are talking about exposing potentially hundreds of millions of people to more avenues for phishing and fraud. Meanwhile, Apollo seems to have about 530 clients who each had different amounts of valuable opportunity data caught up in this leak."
Apollo cofounder and CTO Ray Li told WIRED that the company is investigating the breach and has reported it to law enforcement. The data does not include financial data, Social Security numbers, or account credentials. Apollo said in its initial letter to customers that, "an unidentified third party accessed our systems without authorization before our remediation efforts," which could mean that the data is already in the hands of scammers.
Troia also provided the contact data included in the breach to security researcher Troy Hunt, who runs the data breach tracking service HaveIBeenPwned. Hunt has added the Apollo data to the repository, and plans to notify the HaveIBeenPwned network about the incident.
"It's just a staggering amount of data. There were 125,929,660 unique email addresses in total. This will probably be the most email notifications HaveIBeenPwned has ever sent for one breach," Hunt says. "Clearly this is all about 'data enrichment,' creating comprehensive profiles of individuals that can then be used for commercial purposes. As such, the more data an organization like Apollo can collect, the more valuable their service becomes."
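Hunt's "unique email addresses" figure implies deduplication across the dump. A minimal sketch of how such a count might be produced (the sample addresses are made up):

```python
def unique_emails(records):
    """Case-fold and strip addresses so duplicates collapse to one entry."""
    seen = set()
    for addr in records:
        addr = addr.strip().lower()
        if "@" in addr:          # skip records that aren't email addresses
            seen.add(addr)
    return len(seen)

dump = ["Alice@example.com", "alice@example.com ", "bob@example.org", "not-an-email"]
print(unique_emails(dump))  # 2
```

Real breach processing does more normalization than this (malformed encodings, surrounding punctuation, and so on), but the case-fold-and-deduplicate step is the essential part of arriving at one count per person.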
Apollo's core product not only collects publicly available information, but creates a web of business and employee connections out of it. In addition to names, contact information, and job titles for employees, the data also includes things like the dates companies were founded, revenue numbers, keywords associated with the work companies do, number of employees, and website ranking by the Amazon-owned analytics company Alexa. The service then uses all of this information to try to draw connections between companies and identify possible sales opportunities.
"It's just a staggering amount of data."
Troy Hunt, HaveIBeenPwned
The Salesforce data pulled into the Apollo breach raises the stakes, since that information was never meant to be public, and many clients rely on Salesforce as an internal tool for business development. During his research, Troia became even more concerned when he noticed that when a user authorizes Salesforce to connect with Apollo, they apparently can't authorize Apollo to only pull specific types of data. Choosing to connect the two services seems to initiate total access.
This doesn't mean Apollo grabbed all of a given company's Salesforce data, but Troia notes that Apollo may have held more private opportunity data than some clients realized. Salesforce declined to comment for this story about the breach or how third-party authorizations work. Apollo's Li told WIRED that, "Customers have full and customizable control and management of the data they’ve imported to Apollo."
Apollo is far from the first data aggregator to have a breach, and as all the incidents compound, the threat of having all of that curated information so easily accessible becomes even more pressing.
"What almost worries me more [than the raw data exposure] is the mapping of social identities to email address and other personal data, because there's now so much more you can pull on a person," Hunt says. "We're continually seeing massive breaches of data aggregators who hold information on people who have no idea their personal information has been used in this fashion. I understand that it's Apollo's customers who provided access to their customers, but the fact remains that there are north of 100 million people out there who have no idea who Apollo is nor that their information was exposed."
"The Awful German Language" by Mark Twain

[This is Appendix D from Twain's 1880 book A Tramp Abroad. This text is basically an HTML conversion of the plain ASCII e-text formerly found at gopher://english.hss.cmu.edu:70/0F-2%3A2607%3AThe%20Awful%20German%20Language, with some further editing. Report errors to churchh@uts.cc.utexas.edu; note that the German orthography is that of the late 19th century.]
I went often to look at the collection of curiosities in Heidelberg Castle,
and one day I surprised the keeper of it with my German. I spoke entirely in
that language. He was greatly interested; and after I had talked a while he
said my German was very rare, possibly a "unique"; and wanted to add it to his
museum.
If he had known what it had cost me to acquire my art, he would also have
known that it would break any collector to buy it. Harris and I had been hard
at work on our German during several weeks at that time, and although we had
made good progress, it had been accomplished under great difficulty and
annoyance, for three of our teachers had died in the mean time. A person who
has not studied German can form no idea of what a perplexing language it
is.
Surely there is not another language that is so slipshod and systemless,
and so slippery and elusive to the grasp. One is washed about in it, hither
and thither, in the most helpless way; and when at last he thinks he has
captured a rule which offers firm ground to take a rest on amid the general
rage and turmoil of the ten parts of speech, he turns over the page and reads,
"Let the pupil make careful note of the following exceptions." He runs
his eye down and finds that there are more exceptions to the rule than
instances of it. So overboard he goes again, to hunt for another Ararat and
find another quicksand. Such has been, and continues to be, my experience.
Every time I think I have got one of these four confusing "cases" where I am
master of it, a seemingly insignificant preposition intrudes itself into my
sentence, clothed with an awful and unsuspected power, and crumbles the ground
from under me. For instance, my book inquires after a certain bird -- (it is
always inquiring after things which are of no sort of consequence to anybody):
"Where is the bird?" Now the answer to this question -- according to the book
-- is that the bird is waiting in the blacksmith shop on account of the rain.
Of course no bird would do that, but then you must stick to the book. Very
well, I begin to cipher out the German for that answer. I begin at the wrong
end, necessarily, for that is the German idea. I say to myself, "Regen
(rain) is masculine -- or maybe it is feminine -- or possibly neuter -- it is
too much trouble to look now. Therefore, it is either der (the) Regen,
or die (the) Regen, or das (the) Regen, according to which
gender it may turn out to be when I look. In the interest of science, I will
cipher it out on the hypothesis that it is masculine. Very well -- then the rain is der Regen, if it is simply in the quiescent state of
being mentioned, without enlargement or discussion -- Nominative case;
but if this rain is lying around, in a kind of a general way on the ground, it
is then definitely located, it is doing something -- that is, resting (which is one of the German grammar's ideas of doing
something), and this throws the rain into the Dative case, and makes it dem Regen. However, this rain is not resting, but is doing something actively, -- it is falling -- to interfere with the bird, likely -- and
this indicates movement, which has the effect of sliding it into the
Accusative case and changing dem Regen into den Regen." Having
completed the grammatical horoscope of this matter, I answer up confidently
and state in German that the bird is staying in the blacksmith shop "wegen (on
account of) den Regen." Then the teacher lets me softly down with the
remark that whenever the word "wegen" drops into a sentence, it always
throws that subject into the Genitive case, regardless of consequences
-- and that therefore this bird stayed in the blacksmith shop "wegen des Regens."
N. B. -- I was informed, later, by a higher authority, that there was an
"exception" which permits one to say "wegen den Regen" in certain
peculiar and complex circumstances, but that this exception is not extended to
anything but rain.
There are ten parts of speech, and they are all troublesome. An average
sentence, in a German newspaper, is a sublime and impressive curiosity; it
occupies a quarter of a column; it contains all the ten parts of speech -- not
in regular order, but mixed; it is built mainly of compound words constructed
by the writer on the spot, and not to be found in any dictionary -- six or
seven words compacted into one, without joint or seam -- that is, without
hyphens; it treats of fourteen or fifteen different subjects, each inclosed in
a parenthesis of its own, with here and there extra parentheses which
reinclose three or four of the minor parentheses, making pens within pens:
finally, all the parentheses and reparentheses are massed together between a
couple of king-parentheses, one of which is placed in the first line of the
majestic sentence and the other in the middle of the last line of it -- after which comes the VERB, and you find out for the first time what
the man has been talking about; and after the verb -- merely by way of
ornament, as far as I can make out -- the writer shovels in "haben sind
gewesen gehabt haben geworden sein," or words to that effect, and the
monument is finished. I suppose that this closing hurrah is in the nature of
the flourish to a man's signature -- not necessary, but pretty. German books
are easy enough to read when you hold them before the looking-glass or stand
on your head -- so as to reverse the construction -- but I think that to learn
to read and understand a German newspaper is a thing which must always remain
an impossibility to a foreigner.
Yet even the German books are not entirely free from attacks of the
Parenthesis distemper -- though they are usually so mild as to cover only a
few lines, and therefore when you at last get down to the verb it carries some
meaning to your mind because you are able to remember a good deal of what has
gone before. Now here is a sentence from a popular and excellent German novel
-- with a slight parenthesis in it. I will make a perfectly literal
translation, and throw in the parenthesis-marks and some hyphens for the
assistance of the reader -- though in the original there are no
parenthesis-marks or hyphens, and the reader is left to flounder through to
the remote verb the best way he can:
"But when he, upon the street, the
(in-satin-and-silk-covered-now-very-unconstrained-after-the-newest-fashioned-dressed)
government counselor's wife met," etc., etc. [1]
1. Wenn er aber auf der Strasse der in Sammt und Seide
gehüllten jetzt sehr ungenirt nach der neusten Mode gekleideten
Regierungsräthin begegnet.
That is from The Old Mamselle's Secret, by Mrs. Marlitt. And
that sentence is constructed upon the most approved German model. You observe
how far that verb is from the reader's base of operations; well, in a German
newspaper they put their verb away over on the next page; and I have heard
that sometimes after stringing along the exciting preliminaries and
parentheses for a column or two, they get in a hurry and have to go to press
without getting to the verb at all. Of course, then, the reader is left in a
very exhausted and ignorant state.
We have the Parenthesis disease in our literature, too; and one may see
cases of it every day in our books and newspapers: but with us it is the mark
and sign of an unpracticed writer or a cloudy intellect, whereas with the
Germans it is doubtless the mark and sign of a practiced pen and of the
presence of that sort of luminous intellectual fog which stands for clearness
among these people. For surely it is not clearness -- it necessarily
can't be clearness. Even a jury would have penetration enough to discover
that. A writer's ideas must be a good deal confused, a good deal out of line
and sequence, when he starts out to say that a man met a counselor's wife in
the street, and then right in the midst of this so simple undertaking halts
these approaching people and makes them stand still until he jots down an
inventory of the woman's dress. That is manifestly absurd. It reminds a
person of those dentists who secure your instant and breathless interest in a
tooth by taking a grip on it with the forceps, and then stand there and drawl
through a tedious anecdote before they give the dreaded jerk. Parentheses in
literature and dentistry are in bad taste.
The Germans have another kind of parenthesis, which they make by splitting
a verb in two and putting half of it at the beginning of an exciting chapter
and the other half at the end of it. Can any one conceive of anything
more confusing than that? These things are called "separable verbs." The
German grammar is blistered all over with separable verbs; and the wider the
two portions of one of them are spread apart, the better the author of the
crime is pleased with his performance. A favorite one is reiste ab --
which means departed. Here is an example which I culled from a novel and
reduced to English:
"The trunks being now ready, he DE- after kissing his
mother and sisters, and once more pressing to his bosom his adored Gretchen,
who, dressed in simple white muslin, with a single tuberose in the ample folds
of her rich brown hair, had tottered feebly down the stairs, still pale from
the terror and excitement of the past evening, but longing to lay her poor
aching head yet once again upon the breast of him whom she loved more dearly
than life itself, PARTED."
However, it is not well to dwell too much on the separable verbs. One is
sure to lose his temper early; and if he sticks to the subject, and will not
be warned, it will at last either soften his brain or petrify it. Personal
pronouns and adjectives are a fruitful nuisance in this language, and should
have been left out. For instance, the same sound, sie, means you, and it means she, and it means her, and it means it, and it means they, and it means them. Think of the
ragged poverty of a language which has to make one word do the work of six --
and a poor little weak thing of only three letters at that. But mainly, think
of the exasperation of never knowing which of these meanings the speaker is
trying to convey. This explains why, whenever a person says sie to me,
I generally try to kill him, if a stranger.
Now observe the Adjective. Here was a case where simplicity would have
been an advantage; therefore, for no other reason, the inventor of this
language complicated it all he could. When we wish to speak of our "good
friend or friends," in our enlightened tongue, we stick to the one form and
have no trouble or hard feeling about it; but with the German tongue it is
different. When a German gets his hands on an adjective, he declines it, and
keeps on declining it until the common sense is all declined out of it. It is
as bad as Latin. He says, for instance:
SINGULAR
Nominative -- Mein guter Freund, my good friend.
Genitives -- Meines guten Freundes, of my good
friend.
Dative -- Meinem guten Freund, to my good friend.
Accusative -- Meinen guten Freund, my good friend.
PLURAL
N. -- Meine guten Freunde, my good friends.
G. -- Meiner guten Freunde, of my good friends.
D. -- Meinen guten Freunden, to my good friends.
A. -- Meine guten Freunde, my good friends.
Now let the candidate for the asylum try to memorize those variations, and
see how soon he will be elected. One might better go without friends in
Germany than take all this trouble about them. I have shown what a bother it
is to decline a good (male) friend; well this is only a third of the work, for
there is a variety of new distortions of the adjective to be learned when the
object is feminine, and still another when the object is neuter. Now there
are more adjectives in this language than there are black cats in Switzerland,
and they must all be as elaborately declined as the examples above suggested.
Difficult? -- troublesome? -- these words cannot describe it. I heard a
Californian student in Heidelberg say, in one of his calmest moods, that he
would rather decline two drinks than one German adjective.
The inventor of the language seems to have taken pleasure in complicating
it in every way he could think of. For instance, if one is casually referring
to a house, Haus, or a horse, Pferd, or a dog, Hund, he
spells these words as I have indicated; but if he is referring to them in the
Dative case, he sticks on a foolish and unnecessary e and spells them Hause, Pferde, Hunde. So, as an added e often
signifies the plural, as the s does with us, the new student is likely
to go on for a month making twins out of a Dative dog before he discovers his
mistake; and on the other hand, many a new student who could ill afford loss,
has bought and paid for two dogs and only got one of them, because he
ignorantly bought that dog in the Dative singular when he really supposed he
was talking plural -- which left the law on the seller's side, of course, by
the strict rules of grammar, and therefore a suit for recovery could not
lie.
In German, all the Nouns begin with a capital letter. Now that is a good
idea; and a good idea, in this language, is necessarily conspicuous from its
lonesomeness. I consider this capitalizing of nouns a good idea, because by
reason of it you are almost always able to tell a noun the minute you see it.
You fall into error occasionally, because you mistake the name of a person for
the name of a thing, and waste a good deal of time trying to dig a meaning out
of it. German names almost always do mean something, and this helps to
deceive the student. I translated a passage one day, which said that "the
infuriated tigress broke loose and utterly ate up the unfortunate fir forest"
(Tannenwald). When I was girding up my loins to doubt this, I found out
that Tannenwald in this instance was a man's name.
Every noun has a gender, and there is no sense or system in the
distribution; so the gender of each must be learned separately and by heart.
There is no other way. To do this one has to have a memory like a
memorandum-book. In German, a young lady has no sex, while a turnip has.
Think what overwrought reverence that shows for the turnip, and what callous
disrespect for the girl. See how it looks in print -- I translate this from a
conversation in one of the best of the German Sunday-school books:
"Gretchen.
Wilhelm, where is the turnip?
Wilhelm.
She has gone to the kitchen.
Gretchen.
Where is the accomplished and beautiful English
maiden?
Wilhelm.
It has gone to the opera."
To continue with the German genders: a tree is male, its buds are female,
its leaves are neuter; horses are sexless, dogs are male, cats are female --
tomcats included, of course; a person's mouth, neck, bosom, elbows, fingers,
nails, feet, and body are of the male sex, and his head is male or neuter
according to the word selected to signify it, and not according to the
sex of the individual who wears it -- for in Germany all the women wear either male
heads or sexless ones; a person's nose, lips, shoulders, breast, hands, and
toes are of the female sex; and his hair, ears, eyes, chin, legs, knees,
heart, and conscience haven't any sex at all. The inventor of the language
probably got what he knew about a conscience from hearsay.
Now, by the above dissection, the reader will see that in Germany a man may think he is a man, but when he comes to look into the matter closely,
he is bound to have his doubts; he finds that in sober truth he is a most
ridiculous mixture; and if he ends by trying to comfort himself with the
thought that he can at least depend on a third of this mess as being manly and
masculine, the humiliating second thought will quickly remind him that in this
respect he is no better off than any woman or cow in the land.
In the German it is true that by some oversight of the inventor of the
language, a Woman is a female; but a Wife (Weib) is not -- which is
unfortunate. A Wife, here, has no sex; she is neuter; so, according to the
grammar, a fish is he, his scales are she, but a fishwife is
neither. To describe a wife as sexless may be called under-description; that
is bad enough, but over-description is surely worse. A German speaks of an
Englishman as the Engländer; to change the sex, he adds inn, and that stands for Englishwoman -- Engländerinn. That
seems descriptive enough, but still it is not exact enough for a German; so he
precedes the word with that article which indicates that the creature to
follow is feminine, and writes it down thus: "die
Engländerinn," -- which means "the she-Englishwoman." I
consider that that person is over-described.
Well, after the student has learned the sex of a great number of nouns, he
is still in a difficulty, because he finds it impossible to persuade his
tongue to refer to things as "he" and "she," and "him"
and "her," which it has been always accustomed to refer to as
"it." When he even frames a German sentence in his mind, with the hims
and hers in the right places, and then works up his courage to the
utterance-point, it is no use -- the moment he begins to speak his tongue
flies the track and all those labored males and females come out as
"its." And even when he is reading German to himself, he always calls
those things "it," whereas he ought to read in this way:
2. I capitalize the nouns, in the German (and ancient English)
fashion.
It is a bleak Day. Hear the Rain, how he pours, and the Hail, how he
rattles; and see the Snow, how he drifts along, and of the Mud, how deep he
is! Ah the poor Fishwife, it is stuck fast in the Mire; it has dropped its
Basket of Fishes; and its Hands have been cut by the Scales as it seized some
of the falling Creatures; and one Scale has even got into its Eye, and it
cannot get her out. It opens its Mouth to cry for Help; but if any Sound
comes out of him, alas he is drowned by the raging of the Storm. And now a
Tomcat has got one of the Fishes and she will surely escape with him. No, she
bites off a Fin, she holds her in her Mouth -- will she swallow her? No, the
Fishwife's brave Mother-dog deserts his Puppies and rescues the Fin -- which
he eats, himself, as his Reward. O, horror, the Lightning has struck the
Fish-basket; he sets him on Fire; see the Flame, how she licks the doomed
Utensil with her red and angry Tongue; now she attacks the helpless Fishwife's
Foot -- she burns him up, all but the big Toe, and even she is partly
consumed; and still she spreads, still she waves her fiery Tongues; she
attacks the Fishwife's Leg and destroys it; she attacks its Hand and
destroys her also; she attacks its poor worn Garment and destroys her also; she attacks its Body and consumes him; she wreathes
herself about its Heart and it is consumed; next about its Breast, and
in a Moment she is a Cinder; now she reaches its Neck -- he
goes; now its Chin -- it goes; now its Nose -- she goes. In
another Moment, except Help come, the Fishwife will be no more. Time presses
-- is there none to succor and save? Yes! Joy, joy, with flying Feet the
she-Englishwoman comes! But alas, the generous she-Female is too late: where
now is the fated Fishwife? It has ceased from its Sufferings, it has gone to a
better Land; all that is left of it for its loved Ones to lament over, is this
poor smoldering Ash-heap. Ah, woeful, woeful Ash-heap! Let us take him up
tenderly, reverently, upon the lowly Shovel, and bear him to his long Rest,
with the Prayer that when he rises again it will be a Realm where he will have
one good square responsible Sex, and have it all to himself, instead of having
a mangy lot of assorted Sexes scattered all over him in Spots.
There, now, the reader can see for himself that this pronoun business is a
very awkward thing for the unaccustomed tongue. I suppose that in all
languages the similarities of look and sound between words which have no
similarity in meaning are a fruitful source of perplexity to the foreigner.
It is so in our tongue, and it is notably the case in the German. Now there is
that troublesome word vermählt: to me it has so close a
resemblance -- either real or fancied -- to three or four other words, that I
never know whether it means despised, painted, suspected, or married; until I
look in the dictionary, and then I find it means the latter. There are lots
of such words and they are a great torment. To increase the difficulty there
are words which seem to resemble each other, and yet do not; but they
make just as much trouble as if they did. For instance, there is the word vermiethen (to let, to lease, to hire); and the word verheirathen (another way of saying to marry). I heard of an Englishman
who knocked at a man's door in Heidelberg and proposed, in the best German he
could command, to "verheirathen" that house. Then there are some words which
mean one thing when you emphasize the first syllable, but mean something very
different if you throw the emphasis on the last syllable. For instance, there
is a word which means a runaway, or the act of glancing through a book,
according to the placing of the emphasis; and another word which signifies to associate with a man, or to avoid him, according to where you
put the emphasis -- and you can generally depend on putting it in the wrong
place and getting into trouble.
There are some exceedingly useful words in this language. Schlag,
for example; and Zug. There are three-quarters of a column of Schlags in the dictionary, and a column and a half of Zugs. The
word Schlag means Blow, Stroke, Dash, Hit, Shock, Clap, Slap, Time,
Bar, Coin, Stamp, Kind, Sort, Manner, Way, Apoplexy, Wood-cutting, Enclosure,
Field, Forest-clearing. This is its simple and exact meaning -- that is
to say, its restricted, its fettered meaning; but there are ways by which you
can set it free, so that it can soar away, as on the wings of the morning, and
never be at rest. You can hang any word you please to its tail, and make it
mean anything you want to. You can begin with Schlag-ader, which means
artery, and you can hang on the whole dictionary, word by word, clear through
the alphabet to Schlag-wasser, which means bilge-water -- and including Schlag-mutter, which means mother-in-law.
Just the same with Zug. Strictly speaking, Zug means Pull,
Tug, Draught, Procession, March, Progress, Flight, Direction, Expedition,
Train, Caravan, Passage, Stroke, Touch, Line, Flourish, Trait of Character,
Feature, Lineament, Chess-move, Organ-stop, Team, Whiff, Bias, Drawer,
Propensity, Inhalation, Disposition: but that thing which it does not
mean -- when all its legitimate pennants have been hung on, has not been
discovered yet.
One cannot overestimate the usefulness of Schlag and Zug.
Armed just with these two, and the word also, what cannot the foreigner
on German soil accomplish? The German word also is the equivalent of
the English phrase "You know," and does not mean anything at all -- in talk, though it sometimes does in print. Every time a German opens his
mouth an also falls out; and every time he shuts it he bites one in two
that was trying to get out.
Now, the foreigner, equipped with these three noble words, is master of the
situation. Let him talk right along, fearlessly; let him pour his indifferent
German forth, and when he lacks for a word, let him heave a Schlag into
the vacuum; all the chances are that it fits it like a plug, but if it doesn't
let him promptly heave a Zug after it; the two together can hardly fail
to bung the hole; but if, by a miracle, they should fail, let him
simply say also! and this will give him a moment's chance to think of
the needful word. In Germany, when you load your conversational gun it is
always best to throw in a Schlag or two and a Zug or two,
because it doesn't make any difference how much the rest of the charge may
scatter, you are bound to bag something with them. Then you blandly say also, and load up again. Nothing gives such an air of grace and
elegance and unconstraint to a German or an English conversation as to scatter
it full of "Also's" or "You knows."
In my note-book I find this entry:
July 1. -- In the hospital yesterday, a word of thirteen
syllables was successfully removed from a patient -- a North German from near
Hamburg; but as most unfortunately the surgeons had opened him in the wrong
place, under the impression that he contained a panorama, he died. The sad
event has cast a gloom over the whole community.
That paragraph furnishes a text for a few remarks about one of the most
curious and notable features of my subject -- the length of German words.
Some German words are so long that they have a perspective. Observe these
examples:
Freundschaftsbezeigungen.
Dilettantenaufdringlichkeiten.
Stadtverordnetenversammlungen.
These things are not words, they are alphabetical processions. And they
are not rare; one can open a German newspaper at any time and see them
marching majestically across the page -- and if he has any imagination he can
see the banners and hear the music, too. They impart a martial thrill to the
meekest subject. I take a great interest in these curiosities. Whenever I come
across a good one, I stuff it and put it in my museum. In this way I have
made quite a valuable collection. When I get duplicates, I exchange with
other collectors, and thus increase the variety of my stock. Here are some
specimens which I lately bought at an auction sale of the effects of a
bankrupt bric-a-brac hunter:
Generalstaatsverordnetenversammlungen.
Alterthumswissenschaften.
Kinderbewahrungsanstalten.
Unabhängigkeitserklärungen.
Wiedererstellungbestrebungen.
Waffenstillstandsunterhandlungen.
Of course when one of these grand mountain ranges goes stretching across
the printed page, it adorns and ennobles that literary landscape -- but at the
same time it is a great distress to the new student, for it blocks up his way;
he cannot crawl under it, or climb over it, or tunnel through it. So he
resorts to the dictionary for help, but there is no help there. The
dictionary must draw the line somewhere -- so it leaves this sort of words
out. And it is right, because these long things are hardly legitimate words,
but are rather combinations of words, and the inventor of them ought to have
been killed. They are compound words with the hyphens left out. The various
words used in building them are in the dictionary, but in a very scattered
condition; so you can hunt the materials out, one by one, and get at the
meaning at last, but it is a tedious and harassing business. I have tried
this process upon some of the above examples.
"Freundschaftsbezeigungen" seems to be "Friendship demonstrations,"
which is only a foolish and clumsy way of saying "demonstrations of
friendship." "Unabhängigkeitserklärungen" seems to be
"Independencedeclarations," which is no improvement upon "Declarations of
Independence," so far as I can see.
"Generalstaatsverordnetenversammlungen" seems to be
"General-statesrepresentativesmeetings," as nearly as I can get at it -- a
mere rhythmical, gushy euphuism for "meetings of the legislature," I judge. We
used to have a good deal of this sort of crime in our literature, but it has
gone out now. We used to speak of a thing as a "never-to-be-forgotten"
circumstance, instead of cramping it into the simple and sufficient word
"memorable" and then going calmly about our business as if nothing had
happened. In those days we were not content to embalm the thing and bury it
decently, we wanted to build a monument over it.
But in our newspapers the compounding-disease lingers a little to the
present day, but with the hyphens left out, in the German fashion. This is
the shape it takes: instead of saying "Mr. Simmons, clerk of the county and
district courts, was in town yesterday," the new form puts it thus: "Clerk of
the County and District Courts Simmons was in town yesterday." This saves
neither time nor ink, and has an awkward sound besides. One often sees a
remark like this in our papers: "Mrs. Assistant District Attorney
Johnson returned to her city residence yesterday for the season." That is a
case of really unjustifiable compounding; because it not only saves no time or
trouble, but confers a title on Mrs. Johnson which she has no right to. But
these little instances are trifles indeed, contrasted with the ponderous and
dismal German system of piling jumbled compounds together. I wish to submit
the following local item, from a Mannheim journal, by way of illustration:
"In the daybeforeyesterdayshortlyaftereleveno'clock Night, the
inthistownstandingtavern called `The Wagoner' was downburnt. When the fire to
the onthedownburninghouseresting Stork's Nest reached, flew the parent Storks
away. But when the bytheraging, firesurrounded Nest itself caught
Fire, straightway plunged the quickreturning Mother-stork into the Flames and
died, her Wings over her young ones outspread."
Even the cumbersome German construction is not able to take the pathos out
of that picture -- indeed, it somehow seems to strengthen it. This item is
dated away back yonder months ago. I could have used it sooner, but I was
waiting to hear from the Father-stork. I am still waiting.
"Also!" If I had not shown that the German is a difficult language,
I have at least intended to do so. I have heard of an American student who
was asked how he was getting along with his German, and who answered promptly:
"I am not getting along at all. I have worked at it hard for three level
months, and all I have got to show for it is one solitary German phrase --
`Zwei Glas'" (two glasses of beer). He paused for a moment,
reflectively; then added with feeling: "But I've got that solid!"
And if I have not also shown that German is a harassing and infuriating
study, my execution has been at fault, and not my intent. I heard lately of a
worn and sorely tried American student who used to fly to a certain German
word for relief when he could bear up under his aggravations no longer -- the
only word whose sound was sweet and precious to his ear and healing to his
lacerated spirit. This was the word Damit. It was only the sound that helped him, not the meaning; [3] and so, at last, when he
learned that the emphasis was not on the first syllable, his only stay and
support was gone, and he faded away and died.
3. It merely means, in its general sense,
"herewith."
I think that a description of any loud, stirring, tumultuous episode must
be tamer in German than in English. Our descriptive words of this character
have such a deep, strong, resonant sound, while their German equivalents do
seem so thin and mild and energyless. Boom, burst, crash, roar, storm,
bellow, blow, thunder, explosion; howl, cry, shout, yell, groan; battle, hell.
These are magnificent words; they have a force and magnitude of sound befitting
the things which they describe. But their German equivalents would be ever so
nice to sing the children to sleep with, or else my awe-inspiring ears were
made for display and not for superior usefulness in analyzing sounds. Would
any man want to die in a battle which was called by so tame a term as a Schlacht? Or would not a consumptive feel too much bundled up, who was
about to go out, in a shirt-collar and a seal-ring, into a storm which the
bird-song word Gewitter was employed to describe? And observe the
strongest of the several German equivalents for explosion -- Ausbruch. Our word Toothbrush is more powerful than that. It seems to
me that the Germans could do worse than import it into their language to
describe particularly tremendous explosions with. The German word for hell --
Hölle -- sounds more like helly than anything else; therefore, how
unnecessarily chipper, frivolous, and unimpressive it is. If a man were told in
German to go there, could he really rise to the dignity of feeling
insulted?
Having pointed out, in detail, the several vices of this language, I now
come to the brief and pleasant task of pointing out its virtues. The
capitalizing of the nouns I have already mentioned. But far before this virtue
stands another -- that of spelling a word according to the sound of it. After
one short lesson in the alphabet, the student can tell how any German word is
pronounced without having to ask; whereas in our language if a student should
inquire of us, "What does B, O, W, spell?" we should be obliged to reply,
"Nobody can tell what it spells when you set it off by itself; you can only
tell by referring to the context and finding out what it signifies -- whether
it is a thing to shoot arrows with, or a nod of one's head, or the forward end
of a boat."
There are some German words which are singularly and powerfully effective.
For instance, those which describe lowly, peaceful, and affectionate home
life; those which deal with love, in any and all forms, from mere kindly
feeling and honest good will toward the passing stranger, clear up to
courtship; those which deal with outdoor Nature, in its softest and loveliest
aspects -- with meadows and forests, and birds and flowers, the fragrance and
sunshine of summer, and the moonlight of peaceful winter nights; in a word,
those which deal with any and all forms of rest, repose, and peace; those
also which deal with the creatures and marvels of fairyland; and lastly and
chiefly, in those words which express pathos, is the language surpassingly
rich and effective. There are German songs which can make a stranger to the
language cry. That shows that the sound of the words is correct -- it
interprets the meanings with truth and with exactness; and so the ear is
informed, and through the ear, the heart.
The Germans do not seem to be afraid to repeat a word when it is the right
one. They repeat it several times, if they choose. That is wise. But in
English, when we have used a word a couple of times in a paragraph, we imagine
we are growing tautological, and so we are weak enough to exchange it for some
other word which only approximates exactness, to escape what we wrongly fancy
is a greater blemish. Repetition may be bad, but surely inexactness is
worse.
There are people in the world who will take a great deal of trouble to
point out the faults in a religion or a language, and then go blandly about
their business without suggesting any remedy. I am not that kind of person. I
have shown that the German language needs reforming. Very well, I am ready to
reform it. At least I am ready to make the proper suggestions. Such a course
as this might be immodest in another; but I have devoted upward of nine full
weeks, first and last, to a careful and critical study of this tongue, and
thus have acquired a confidence in my ability to reform it which no mere
superficial culture could have conferred upon me.
In the first place, I would leave out the Dative case. It confuses the
plurals; and, besides, nobody ever knows when he is in the Dative case, except
he discover it by accident -- and then he does not know when or where it was
that he got into it, or how long he has been in it, or how he is going to get
out of it again. The Dative case is but an ornamental folly -- it is better
to discard it.
In the next place, I would move the Verb further up to the front. You may
load up with ever so good a Verb, but I notice that you never really bring
down a subject with it at the present German range -- you only cripple it. So
I insist that this important part of speech should be brought forward to a
position where it may be easily seen with the naked eye.
Thirdly, I would import some strong words from the English tongue -- to
swear with, and also to use in describing all sorts of vigorous things in a
vigorous way. [4]
4. "Verdammt," and its variations and enlargements, are words
which have plenty of meaning, but the sounds are so mild and
ineffectual that German ladies can use them without sin. German ladies who
could not be induced to commit a sin by any persuasion or compulsion, promptly
rip out one of these harmless little words when they tear their dresses or
don't like the soup. It sounds about as wicked as our "My gracious." German
ladies are constantly saying, "Ach! Gott!" "Mein Gott!" "Gott in Himmel!"
"Herr Gott" "Der Herr Jesus!" etc. They think our ladies have the same
custom, perhaps; for I once heard a gentle and lovely old German lady say to a
sweet young American girl: "The two languages are so alike -- how pleasant
that is; we say `Ach! Gott!' you say `Goddamn.'"
Fourthly, I would reorganize the sexes, and distribute them according to
the will of the creator. This as a tribute of respect, if nothing else.
Fifthly, I would do away with those great long compounded words; or require
the speaker to deliver them in sections, with intermissions for refreshments.
To wholly do away with them would be best, for ideas are more easily received
and digested when they come one at a time than when they come in bulk.
Intellectual food is like any other; it is pleasanter and more beneficial to
take it with a spoon than with a shovel.
Sixthly, I would require a speaker to stop when he is done, and not hang a
string of those useless "haben sind gewesen gehabt haben geworden
seins" to the end of his oration. This sort of gewgaws undignify a
speech, instead of adding a grace. They are, therefore, an offense, and
should be discarded.
Seventhly, I would discard the Parenthesis. Also the reparenthesis, the
re-reparenthesis, and the re-re-re-re-re-reparentheses, and likewise the final
wide-reaching all-inclosing king-parenthesis. I would require every
individual, be he high or low, to unfold a plain straightforward tale, or else
coil it and sit on it and hold his peace. Infractions of this law should be
punishable with death.
And eighthly, and last, I would retain Zug and Schlag, with
their pendants, and discard the rest of the vocabulary. This would simplify
the language.
I have now named what I regard as the most necessary and important
changes. These are perhaps all I could be expected to name for nothing; but
there are other suggestions which I can and will make in case my proposed
application shall result in my being formally employed by the government in
the work of reforming the language.
My philological studies have satisfied me that a gifted person ought to
learn English (barring spelling and pronouncing) in thirty hours, French in
thirty days, and German in thirty years. It seems manifest, then, that the
latter tongue ought to be trimmed down and repaired. If it is to remain as it
is, it ought to be gently and reverently set aside among the dead languages,
for only the dead have time to learn it.
Gentlemen: Since I arrived, a month ago, in this old wonderland, this vast
garden of Germany, my English tongue has so often proved a useless piece of
baggage to me, and so troublesome to carry around, in a country where they
haven't the checking system for luggage, that I finally set to work, and
learned the German language. Also! Es freut mich dass dies so ist, denn es
muss, in ein hauptsächlich degree, höflich sein, dass man auf ein
occasion like this, sein Rede in die Sprache des Landes worin he boards,
aussprechen soll. Dafür habe ich, aus reinische Verlegenheit -- no,
Vergangenheit -- no, I mean Höflichkeit -- aus reinische Höflichkeit habe ich
resolved to tackle this business in the German language, um Gottes willen!
Also! Sie müssen so freundlich sein, und verzeih mich die interlarding
von ein oder zwei Englischer Worte, hie und da, denn ich finde dass die
deutsche is not a very copious language, and so when you've really got
anything to say, you've got to draw on a language that can stand the
strain.
Wenn haber man kann nicht meinem Rede Verstehen, so werde ich ihm
später dasselbe übersetz, wenn er solche Dienst verlangen wollen
haben werden sollen sein hätte. (I don't know what "wollen haben werden
sollen sein hätte" means, but I notice they always put it at the end of a
German sentence -- merely for general literary gorgeousness, I suppose.)
This is a great and justly honored day -- a day which is worthy of the
veneration in which it is held by the true patriots of all climes and
nationalities -- a day which offers a fruitful theme for thought and speech;
und meinem Freunde -- no, meinen Freunden -- meines
Freundes -- well, take your choice, they're all the same price; I don't
know which one is right -- also! ich habe gehabt haben worden gewesen sein, as
Goethe says in his Paradise Lost -- ich -- ich -- that is to say
-- ich -- but let us change cars.
Also! Die Anblich so viele Grossbrittanischer und Amerikanischer hier
zusammengetroffen in Bruderliche concord, ist zwar a welcome and inspiriting
spectacle. And what has moved you to it? Can the terse German tongue rise to
the expression of this impulse? Is it
Freundschaftsbezeigungenstadtverordnetenversammlungenfamilieneigenthümlichkeiten?
Nein, o nein! This is a crisp and noble word, but it fails to pierce the
marrow of the impulse which has gathered this friendly meeting and produced
diese Anblick -- eine Anblich welche ist gut zu sehen -- gut für die
Augen in a foreign land and a far country -- eine Anblick solche als in die
gewöhnliche Heidelberger phrase nennt man ein "schönes Aussicht!"
Ja, freilich natürlich wahrscheinlich ebensowohl! Also! Die Aussicht auf
dem Königsstuhl mehr grösser ist, aber geistlische sprechend nicht
so schön, lob' Gott! Because sie sind hier zusammengetroffen, in
Bruderlichem concord, ein grossen Tag zu feirn, whose high benefits were not
for one land and one locality, but have conferred a measure of good upon all
lands that know liberty today, and love it. Hundert Jahre vorüber, waren
die Engländer und die Amerikaner Feinde; aber heute sind sie herzlichen
Freunde, Gott sei Dank! May this good-fellowship endure; may these banners
here blended in amity so remain; may they never any more wave over opposing
hosts, or be stained with blood which was kindred, is kindred, and always will
be kindred, until a line drawn upon a map shall be able to say: "This
bars the ancestral blood from flowing in the veins of the descendant!"
Microsoft unveiled a bunch of Surface hardware during a press event in New York City last night. While matte black Surfaces, headphones with Cortana, and a new Surface Studio were the highlights of the hardware side, Microsoft unveiled an interesting change to its Windows operating system. Windows 10 will soon fully embrace Android to mirror these mobile apps to your PC.
The Android app mirroring will be part of Microsoft’s new Your Phone app for Windows 10. This app debuts this week as part of the Windows 10 October 2018 Update, but the app mirroring part won’t likely appear until next year. Microsoft briefly demonstrated how it will work, though: you’ll be able to simply mirror your phone screen straight onto Windows 10 through the Your Phone app, which will have a list of your Android apps. You can tap to access them and have them appear in the remote session of your phone.
Android app mirroring on Windows 10
We’ve seen a variety of ways of bringing Android apps to Windows in recent years, including Bluestacks and even Dell’s Mobile Connect software. This app mirroring is certainly easier to do with Android, as it’s less restricted than iOS. Still, Microsoft’s welcoming embrace of Android in Windows 10 with this app mirroring is just the latest in a number of steps the company has taken recently to really help align Android as the mobile equivalent of Windows.
Microsoft Launcher is designed to replace the default Google experience on Android phones, and bring Microsoft’s own services and Office connectivity to the home screen. It’s a popular launcher that Microsoft keeps updating, and it’s even getting support for the Windows 10 Timeline feature that lets you resume apps and sites across devices.
All of this just reminds me of Windows Phone. It’s only been three years since Microsoft launched its Lumia 950 Windows 10 Mobile device at a packed holiday hardware event. Windows Phone has vanished in the last couple of years, and Microsoft finally admitted Windows Phone was dead nearly a year ago. The software maker has now embraced the reality that people don’t need Windows on a phone. Instead, it’s embracing Android as the mobile version of Windows.
Microsoft’s Lumia 950
Microsoft’s best mobile work is debuting on Android right now, and if you’re a Windows user then Google’s operating system has always felt like the natural companion anyway. As Microsoft can’t replicate a lot of Your Phone functionality on iPhones, Android now feels like the only choice if you want a close mobile connection to a Windows PC.
We’re only at the early stages of Microsoft’s new mobile strategy of making iOS and Android better at connecting to Windows, but it’s clear the company won’t hold back on features to ensure they’re available on iPhones too. Bringing Android apps to Windows 10 PCs through a remote window into your phone is a useful and clever way of keeping Windows 10 users focused on using their PCs more.
This is all part of Microsoft’s bigger productivity push, and a renewed focus on “prosumers” that use Windows for both work and home. It’s encouraging that Microsoft is willing to embrace a rival operating system to deliver mobile functionality that we’d never see from Apple and Google unless you bought a MacBook or a Chromebook. Microsoft’s “for the people” fluffy message always feels like marketing, but this new mobile push is a good example of doing something that will actually benefit Windows 10 users, Android owners, and iPhone users.
libnop is a header-only library for serializing and deserializing C++ data
types without external code generators or runtime support libraries. The only
mandatory requirement is a compiler that supports the C++14 standard.
Note: This is not an officially supported Google product at this time.
Goals
libnop has the following goals:
Make simple serialization tasks easy and complex tasks tractable.
Remove the need to use code generators and schema files to describe data
types, formats, and protocols: perform these tasks naturally within the C++
language.
Avoid additional runtime support requirements for serialization.
Provide contemporary features such as bidirectional binary compatibility,
data validation, type safety, and type fungibility.
Handle intrinsic types, common STL types and containers, and user-defined
types with a minimum of effort.
Produce optimized code that is easy to analyze and profile.
Avoid internal dynamic memory allocation when possible.
Getting Started
Take a look at Getting Started for an introduction to
the library.
Quick Examples
Here is a quick series of examples to demonstrate how libnop is used. You can
find more examples in the repository under examples/.
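Since the examples themselves did not survive extraction here, the following is a minimal sketch of a serialize/deserialize round trip based on the library's documented `NOP_STRUCTURE` macro and stream adapters; the `Person` type and its fields are illustrative inventions, not part of libnop, and the exact accessor names should be checked against the repository's own examples.

```cpp
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

#include <nop/serializer.h>
#include <nop/utility/stream_reader.h>
#include <nop/utility/stream_writer.h>

// An illustrative user-defined type. NOP_STRUCTURE lists the members to
// serialize, so no external schema file or code generator is needed.
struct Person {
  std::string name;
  std::uint8_t age;
  std::vector<std::string> hobbies;
  NOP_STRUCTURE(Person, name, age, hobbies);
};

int main() {
  // Serialize into a std::stringstream through the StreamWriter adapter.
  nop::Serializer<nop::StreamWriter<std::stringstream>> serializer;
  serializer.Write(Person{"Somebody", 28, {"reading", "hiking"}});

  // Deserialize the same bytes back through the StreamReader adapter.
  nop::Deserializer<nop::StreamReader<std::stringstream>> deserializer{
      serializer.writer().stream().str()};
  Person person;
  deserializer.Read(&person);

  std::cout << person.name << " is " << +person.age << "\n";
  return 0;
}
```

Because the serialized members are declared inline with the type, the compiler sees the full layout and can optimize the encode/decode paths, which is what the "easy to analyze and profile" goal above refers to.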
CNXSoft: Guest post by Blu about Baikal T1 development board and SoC, potentially one of the last MIPS consumer grade platforms ever.
It took me a long time to start writing this article, even though I had been poking at the test subject for months, and I felt during that time that there were findings worth sharing with fellow embedded devs. What was holding me back was the thought that I might be seeing one of the last consumer-grade specimen of a paramount ISA that once turned upside-down the CPU world. That thought was giving me mixed feelings of part sadness, part hesitation ‒ to not do some injustice to a possibly last-of-its-kind device. So it was with these feelings that I took to writing this article. But first, a short personal story.
Two winters ago I was talking to a friend of mine over beers. We were discussing CPU architectures and hypothesizing on future CPU developments in the industry, when I mentioned to him that the latest Imagination Technologies’ MIPS P5600 ‒ a MIPS32r5 ‒ hosted an interesting SIMD extension ‒ a previously-unseen one in the MIPS world. I had just skimmed through the docs for that extension ‒ MIPS SIMD Architecture (MSA), and I was impressed with how clean and practical this new vector instruction set looked in comparison to the SIMD ISAs of the day, particularly to those by a very venerable CPU manufacturer. We discussed how the P5600 had found its way into a SoC by the Russian semiconductor vendor Baikal Electronics, and how they were releasing a devboard, which, thanks to limited-series manufacturing, would be well out-of-reach for mortal devs.
Fast forward to this summer, when I got a ping from my friend ‒ he was currently in St. Petersburg, Russia, and he was browsing the online store of a Moscow computer shop, and there was the Baikal T1 BFK 3.1 board, for the equivalent of 500 EUR, so if I ever wanted to get one, now was the time.
Did I want one? Last MIPS I had an encounter with was the Imagination CI20 board, hosting an Ingenic JZ4780 application SoC ‒ a dual-core MIPS32r2 implementation, and that was a mixed experience. I just had higher expectations of that SoC, as neither the SoC vendor nor Imagination did a good job setting the user expectations of what the XBurst MIPS cores actually were ‒ short in-order pipelines, with a non-pipelined scalar FPU, and an obscure integer-only SIMD specialized for video codecs. The one interesting part in that SoC, from my perspective, was the fully-fledged GLESv2/EGL stack for the aging SGX540. What I was looking for this time around was a “meatier” MIPS, one which was closer to the state of the art of this ISA, and the P5600 was precisely that.
So, yes, I very much wanted one. That price was very close to my threshold of ‘buy for science’, but I still had to keep in check my overgrown annual ‘scientific budget’ (as I refer to my devboard expenses in front of my wife), so I hesitated for a moment. To which my friend suggested ‘Listen, your birthday occurs annually, so how about I get you a birthday present, with some credit from future birthdays?’ [A huge thank you, Mitia, for your ingenuity, kindness and generosity!]
The BFK 3.1 is a sub-uATX board ‒ namely of the flexATX form factor ‒ a bit larger than mini-ITX, which means it’s compact ‒ not RPi compact, mind you, but still compact for a devboard. Baikal T1 itself is a compact SoC ‒ not much larger than the Ingenic JZ4780: the latter is 17x17mm BGA390 (40nm), vs 25x25mm BGA576 (28nm) for the T1. But the T1 is a proper SoC that contains everything needed for a small gen-purpose computer (sans a GPU), which is what the BFK 3.1 seeks to be. Combined with the versatile STM32F205 MCU (ARM Cortex-M3 @ 120MHz), the T1 allows for an essentially two-chip devboard. Aside from the SoC and its companion MCU, the BFK 3.1 hosts a PCIe x16 connector (x4 active lanes), a SO-DIMM slot, an ATX power connector, 2x 1Gb Ethernet and 2x SATA 3 connectors, a USB 2.0 port, a UART (via mini-USB) and what appears to be a USB OTG port, a couple of JTAGs and even a RPi GPIO connector ‒ the rest of the board’s top surface is nearly pristine. Ok, there’s one more connector ‒ a proprietary one for the optional 10Gb Ethernet add-on, but that comes more as a curiosity from my current perspective.
Getting the board live was practically uneventful. BFK 3.1 power delivery is via a 24-pin ATX connector ‒ no barrel connectors of any kind, which in my case made two large drawers’ worth of PSUs useless, but I also had a 20-pin ATX picoPSU at hand (80W DC-DC, 12V input) and a spare AC-DC 12V converter (60W) ‒ that improvised power delivery covered the board plus an SSD more than fine ‒ actually it was overkill, given the manufacturer’s 5W TDP rating for the SoC. I also had a leftover 4GB DDR3 SO-DIMM from a decommissioned notebook, so I thought I had the RAM covered as well. A “minor” detail had escaped my attention ‒ that SO-DIMM was of the 1333MT/s (667MHz) variety, whereas the board took 1600MT/s (800MHz) sharp ‒ my first booting of the board took me only as far as RAM controller negotiations.
Board fitted with “wrong” SO-DIMM @ 667 MHz
One facepalm and a visit to the local store later, the board was hosting shiny-new 8GB of DDR3, to specs and all.
Yet another minor detail about the RAM had originally escaped my attention, but that detail was not crucial to the booting of the board, and I found it out only after the first boot: the SoC had a 32-bit RAM bus, so it was seeing half the capacity of the 64-bit DIMM. Perhaps it could be arranged for such a bus to see the full DIMM capacity ‒ I’m not a hw engineer to know such things, and the designers of the BFK 3.1 clearly did not arrange for that. Which is a bit unfortunate for a devboard. Oh well ‒ back to square ‘4GB of RAM’.
Apropos, as it turned out, I really did need the RAM, since exposing the full potential of the P5600 meant some compiler building lay ahead of me, and I always self-host builds when possible. But I’m getting ahead of myself.
The board arrives with a Busybox in SPI flash, and Baikal Electronics provide two revisions of Debian Stretch images with kernel 4.4 for day-to-day use from a SATA drive. All available boot media are exposed via the cleanest U-Boot menu interface I’ve seen yet.
Footnote: aside from dd-ing the Debian image to the SSD, all interactions with the BFK 3.1 were done without involvement of PCs ‒ the above screengrab is from my trusty chromebook.
The obligatory dump of basic caps follows:
Linux baikal 4.4.100-bfk3 #4 SMP Thu Feb 15 17:25:02 MSK 2018 mips GNU/Linux
Whether the kernel saw this as a MIPS32r2 machine or it made use of the address extensions ‒ all that was beyond the scope of this first reconnaissance. I wanted to examine uarch performance, and as long as compilers were in the clear about the CPU’s true ISA capabilities I was set.
The VZ extension is a virtualization thing ‒ far from my interests. The EVA and XPA are addressing extensions ‒ Enhanced Virtual Address and Extended Physical Address, respectively. The former allows more efficient virtual-space mapping between kernel and userspace for the 32-bit/4GB process-addressable memory space. And the latter is, well, a physical address extension. From the P5600 manual:
Extended Physical Address (XPA) that allows the physical address to be extended from 32-bits to 40-bits.
Clearly both addressing extensions could be of good use to kernel developers. Of the listed ISA extensions, though, MSA was the one I truly cared about.
Timing buffered disk reads: 1206 MB in 3.00 seconds = 401.89 MB/sec
As wise men say, ‘Have decent SATA performance ‒ will use for a build machine.’
And finally, an interrupts-related observation that might help me obtain cleaner benchmarking results:
blu@baikal:~$ cat /proc/interrupts
           CPU0       CPU1
  1:      16906       9097   MIPS GIC Local   1  timer
  2:          0          0   MIPS GIC Local   0  watchdog
  8:       5428          0   MIPS GIC   8  IPI resched
  9:          0       4970   MIPS GIC   9  IPI resched
 10:       4693          0   MIPS GIC  10  IPI call
 11:          0      15118   MIPS GIC  11  IPI call
 23:          0          0   MIPS GIC  23  be-apb
 31:          0          0   MIPS GIC  31  timer0
 38:          0          0   MIPS GIC  38  1f200000.pvt
 40:          0          0   MIPS GIC  40  1f046000.i2c0
 41:        191          0   MIPS GIC  41  1f047000.i2c1
 47:          5          0   MIPS GIC  47  dw_spi0
 48:          0          0   MIPS GIC  48  dw_spi1
 55:       2464          0   MIPS GIC  55  serial
 56:         10          0   MIPS GIC  56  serial
 63:          0          0   MIPS GIC  63  dw_dmac
 71:      21832          0   MIPS GIC  71  1f050000.sata
 75:          0          0   MIPS GIC  75  xhci-hcd:usb1
 79:        652          0   MIPS GIC  79  eth1
 87:          0          0   MIPS GIC  87  eDMA-Tx-0
 88:          0          0   MIPS GIC  88  eDMA-Tx-1
 89:          0          0   MIPS GIC  89  eDMA-Tx-2
 90:          0          0   MIPS GIC  90  eDMA-Tx-3
 91:          0          0   MIPS GIC  91  eDMA-Rx-0
 92:          0          0   MIPS GIC  92  eDMA-Rx-1
 93:          0          0   MIPS GIC  93  eDMA-Rx-2
 94:          0          0   MIPS GIC  94  eDMA-Rx-3
 95:          0          0   MIPS GIC  95  MSI PCI
 96:          0          0   MIPS GIC  96  AER PCI
103:          0          0   MIPS GIC 103  emc-dfi
104:          0          0   MIPS GIC 104  emc-ecr
105:          0          0   MIPS GIC 105  emc-euc
134:          0          0   MIPS GIC 134  be-axi
ERR:          0
Notice how all serial and SATA interrupts are serviced by the 1st core? We could put that to some use.
Now the actual fun could begin! Being the control freak that I am, I tend to run a couple of micro-benchmarks when testing new uarchitectures ‒ one on the ‘gen-purpose’ side of performance, and one on the ‘sustained fp’ side of performance. Both of them being single-threaded, and the CPU at hand not featuring SMT, that meant I could focus on the details of the uarch by isolating all tests to the relatively-uninterrupted 2nd core.
Unfortunately, there was one last obstacle before me ‒ Debian Stretch comes with gcc-6.3, which does not know of the MSA extension in the P5600. For that I needed a compiler one major revision newer ‒ gcc-7.3 was fully aware of the novel instruction set, and so my next step was building gcc-7.3 for the platform. Easy-peasy. Or so I thought.
A short rant: I have difficulties understanding why a compiler’s default-settings self-hosted build would fail with an ‘illegal instruction’ in the bootstrap phase. But that’s the case with g++-7.3 on Debian Stretch when doing a self-hosted --target=mipsel-linux-gnu build on the BFK 3.1, and that’s what made me approach the gcc-dev mailing list with the wrong kind of support question, to which, luckily, I still got helpful responses.
Back to the BFK 3.1, where I eventually got a good g++-7.3 build via the following config, largely copied over from Debian’s g++-6.3:
Yay, got MSA compiler support! Now I could do all the fp32 (and not only) SIMD I wanted.
But first I stumbled upon a surprise coming from the non-SIMD micro-benchmark ‒ a Mandelbrot plot written in the language Brainfuck, and run through a home-grown Brainfuck interpreter.
Running that before and after upgrading the compiler showed the following results:
Brainfuck Mandelbrot ‒ three versions of the code, across two compilers:
g++-6.3.0: 0m43.539s (vanilla)
g++-6.3.0: 0m38.176s (alt)
g++-6.3.0: 0m38.176s (alt^2)
Notice how for the exact-same code and the exact-same optimization flags the two compilers produced performance delta for the resulting binary as large as 20% in favor of the newer g++? That was not due to some new, smarter P5600 instructions utilized by the newer compiler ‒ nope, the generated codes in both cases used the same ISA. It’s just that the newer compiler produced notably better-quality code ‒ fewer branches, more linear control flow. Yay for better compilers!
Those g++-7.3 results positioned the P5600 firmly between the AMD A8-7600 and the Intel Core2 Duo P8600 in the clock-normalized Mandelbrot performance charts (where the Penryn also takes advantage of the custom Apple clang compiler, which generally outperforms gcc at this combination of CPU and task).
Per-clock, the P5600 also scored ahead of the Cortex-A15, which I believe is the closest competitor in the category of the P5600. Where the P5600, or perhaps its incarnation in the Baikal T1, fell short, was in absolute performance due to low clocks. Should that core reach clocks closer to 2GHz, we’d be seeing much more interesting absolute-performance results.
Ok, it was time to see how the P5600 did at fp32 SIMD. For that an SGEMM matrix multiplier was to be used. Making use of the novel MSA ISA took minimal effort, partially thanks to gcc’s support for generic vectors, partially thanks to the simplicity of the MSA ISA. The MSA version of the matmul code, dubbed ‘ALT=8’, took less than an hour to code and tune, and resulted in ~3.9 flop/clock for the small, cache-fitting dataset (64×64 matrices), and 2.1 flop/clock for the large dataset (512×512 matrices). Those results placed the P5600 firmly between Intel Merom and Intel Penryn for the small dataset, and slightly below the level of ARM Cortex-A72 and Intel Merom for the large dataset. The large dataset, though, exhibited a rather erratic behavior ‒ run-times varied considerably even when pinned to the 2nd core. It was as if the memory subsystem, past L2D, was behaving inconsistently doing 128-bit-wide accesses. That warranted further investigation, which would happen on a better day.
But let me finish my BFK 3.1 story here, and give my subjective, not-guaranteed-impartial opinion of the test subject.
My impressions of the P5600 in the Baikal T1 are largely positive. Using my limited micro-benchmark set as a basis, that uarchitecture does largely deliver on its promises of good gen-purpose IPC and good SIMD throughput per clock, and could be considered a direct competitor to the best of the 32-bit ARM Cortex designs. That said, Baikal T1 could use higher clocks, which would position it in absolute-performance terms right in the group of the Core2 lineup by Intel and the Cortex-A12/15/17 lineup by ARM. Which, if one thinks of it in the grand scheme of things, would be nothing short of a great achievement for the Baikal Warrior (Imagination aptly named the P-series MIPS designs ‘Warrior’ ‒ they’d have to fight for the survival of their ISA). If we ever live to see another Baikal T-series, that is ‒ Baikal Electronics are also developing their Baikal M-series ‒ ARM Cortex-A57 designs.
MIPS once turned the CPU world around. Can it survive its darkest hour (at least in the West ‒ in the East the Chinese have their Loongson) and step into a renaissance, or will it perish into oblivion? I, for one, would love to see the former, but I’m just an old coder, and old coders don’t get much say these days.
As the seventh annual js13kGames competition comes to a close, a grand total of 274 games were submitted. Even more impressive, each one was created in a single month, using less than 13 kB.
We rounded up a few of our favorites featuring a number of different styles and genres. From dark shooters and pixelated beat ‘em ups to perplexing puzzle and platform games—enjoy some downtime this weekend and play them all (or fork and hack on them with your own customizations)!
UNDERRUN
UNDERRUN is a twin-stick shooter “in 256 shades of brown,” using WebGL, from @phoboslab. In this game, you must defend yourself from predators while figuring out how to restore power to fix all system failures. Sounds simple enough, right? See for yourself when you play this highly-addictive shooter (and enjoy the haunting music). Read more about how the game was created in the retrospective.
@DennisBengs created the challenging puzzle game, Envisionator. The goal of the game is to escape a building on lockdown by giving a robot commands. What’s the catch? The robot needs you to give it each and every direction, step by step—one false move, and…well, you’ll see! Play Envisionator to see if you can escape.
Things aren’t as black and white as they appear in ONOFF. Dodge spikes, jump over pits, and toggle between dimensions. Think you can overcome each level of traps? You’re in for a treat with this mind-boggling, fast-paced platformer from @starzonmyarmz. Play it to see what we mean!
The Chroma Incident by @Rybar is also a twin-stick shooter, but with a few more colors than UNDERRUN. The problem is, the color’s been stolen by the Achromats, and it’s up to you to bring it back. Shoot your way through areas to reclaim those colors—give it a go!
Get nostalgic and relive some of the intense fight scenes with Neo from The Matrix. Use the arrow keys, S to kick, and D to punch your way through this JavaScript matrix from @agar3s. Can you find a way to the end of the rabbit hole before it’s too late? Play The Matri13k and test your combat skills.
Not to be confused with 2048(!), 1024 Moves is a polished puzzle game from @GregPeck. Get the ball, and avoid the holes—what’s the catch? See if you can solve the entire game in less than 1,024 moves. Play and test your problem-solving skills.
Think you know a little bit about world geography? Or are you lost with even the simplest of directions? Prove how much of a geography all-star you are by playing Geoquiz2—or brush up on your worldly knowledge. You can even read about how @xem made the game in the GeoQuiz2 retrospective.
@tricsi’s Spacecraft challenges you to collect as many data tokens as possible from the planets and moons of the Solar System. It’s easy—until gravity accelerates your ship, and you have to avoid obstacles along the way in, “space, the final frontier.” How far can you go before your probe goes offline? The only way to find out is to play on.
How are your gaming reflexes? You’ll quickly find out when you jump Off the Line to collect coins in this arcade tapper from @regularkid. Take your time to figure out the best way to collect coins, or go crazy with a timed, ultra difficult ULTRA MEGA MODE (if you’re feeling lucky). Play it and see how many coins you can collect.
You are the commander of a long-forgotten expedition to a distant star, and there are forces out to get you. Survive wave upon wave of enemies in Exo, a space-based tower defence game brought to you by @scorp200. Play Exo to unravel the story, arm your base, and reclaim your expedition.
You are in control of your destiny in this space-based exploration game. Will you fight for the good of all or make enemies by being evil? Forge alliances, study star systems, fight against enemy combatants, and more in Everyone’s Sky from @remvst.
In @herebefrogs’s Submersible Warship 2063, enemy submarines are invading, fast. Make strategic use of your sonar to identify targets and evade torpedoes. Can you beat them before they beat you? Stay off enemy radar, and fight on by playing Submersible Warship 2063.
If you enjoy playing high-stakes puzzles, Re-wire was made for you. Bring the system back online by rewiring power nodes, but watch out for the traps! This game from @JMankopf will have you… wired to it for hours.
Post Graduate in Computer Science, 10 years of data analytics experience
"This is the company with a vision of promoting learning. I am proud to say that after 2 years, I’ve helped numerous students and professionals because of the training, opportunity, and belief that this company provides."
Arnoldas
Big passion for technology, 3 years of experience in Excel and C++ programming
"I’ve been with Got It Pro for 6 months, and I thoroughly enjoy the experience of helping people through Excel problems - not to mention the awesome team of our fellow Experts!"
Viedite
Dedicated teacher with 2 years of Excel experience
"I will genuinely teach you how to excel at Excel. It’s easy to get deceived by Excel, but I will lead you through any difficulties with useful tips and tricks to help you expand your Excel skills!"
Last month, we asked Atlas Obscura readers to tell us about their favorite treehouses. Why treehouses? Because we love almost everything about them—the childlike sense of wonder they inspire, the quirks and secret cubbyholes that make each one unique. Also, we’re nosy. Treehouses are often hidden in backyards, stubbornly refusing to reveal themselves to passersby. We want to see them!
The submissions we received revealed magical tree-based structures of all sorts, from an elevated fort inspired by young love to a hanging shelter that required more than a little engineering know-how. Overall, you also told us how your favorite treehouses are all the more impressive for the memories they represent.
Below you’ll find a selection of some of our favorite submissions. Every treehouse has the potential to make the world a little more wondrous—with any luck, one of these stories will inspire you to look up at the leaves and dream.
Mike Caveney
An Inspired Getaway
Pasadena, California
“Built it myself after seeing an article in Smithsonian Magazine. Solar power runs lights, radio, and TV.” — Mike Caveney, Pasadena, California
Michael Plank
Building Memories
Lanett, Alabama
“During my doctoral program, my boys dreamed it up while watching Treehouse Masters. ‘We could do that!’ So I let them design it. It took two years of weekends, several friends, and family, but we finally completed it in April. We reclaimed as much wood as possible. The siding is from an old fence at my in-laws’. It’s magical at night with all the lights on. But my most favorite part is that I built it with my boys. A forever memory.” — Michael Plank, Lanett, Alabama
Kevin Tracy
Hanging Hideaway
Central Oregon
“My brother and I built it over the summer of 2002 in a trio of Ponderosa pines on my off-grid property in Oregon. All hand tools, no electricity, or even a cordless drill. It’s about 25-feet up, suspended with cables so it sways with the trees in the wind. We built the floor platform on the ground, then hoisted it up into place using a large pulley and my pickup truck. We then added the walls and roof up there, swinging around in rock climbing harnesses and pulling materials up with the pulley. Only way up is to climb a tree and hoist yourself up through a trapdoor in the porch floor. I sleep on the porch up there whenever I can make it out to my property. Because the trees grow at different rates, we need to re-level it every few years using turnbuckles in the suspension cables.” — Kevin Tracy, Michigan
C. Hope Clark
Grandson’s House
Chapin, South Carolina
“It was our present to our two-year-old grandson who just turned five. We wanted to construct something he could grow up with and enjoy into adulthood. It is also big enough to put lawn chairs for the adults to sit back and enjoy. It overlooks the chicken coops on one side, our garden on the other, and the biggest view is downhill to the lake. My husband has used it to watch deer at dawn. It’s constructed around a hickory tree and under the canopy of other hickories, pines, and even one dogwood, and has a coach lamp outside the stairs. We built it with stairs and a landing, dedicated it to our grandson, naming it Fort Jackson. The goal is to install a drop down ladder to the underneath of it when he is 7 years old. It’s equally a deer stand and an adult watering hole.” — C. Hope Clark, Chapin, South Carolina
Deb Kreutz
A Stranger’s Passion
Thailand
“I was told that the gentleman who designs, builds, and owns these treetop escapes had a career as a professional in the city. When that job came to an end, he became, for whatever reason, a chicken farmer. Apparently, he was also a dreamer and he began building treehouses that he imagined as being in the trees of his rural farm, located in the forest outside of Chiang Rai, Thailand. Each treehouse is unique and each is rented as a bed and breakfast unit. Lying safe and cozy in a leafy bower listening to the song of tropical birds and the gentle gurgle of the stream below… magic.” — Deb Kreutz, California
Martin Schmidt
Child-Size and Carpenter-Built
Pacific Grove, California
“My father was a professional carpenter and he constructed the treehouse in a group of four oak trees which grew closely together in our front yard. He built a sturdy wooden platform about five feet above ground level, then constructed the walls and roof of the treehouse out of cedar roof shakes which had been left over from the construction of our ‘real’ house. My mother was very creative and she served as art director for the creation of the treehouse, suggesting features such as the diamond-pane windows and the crooked stovepipe on the roof. One Christmas she made a pair of elves out of styrofoam, coat hanger wire, and oilcloth. She positioned the elves on the roof with a string of lights in their hands as if they were decorating the treehouse. The treehouse was small but cozy and a great place to spend an afternoon reading or just dreaming away the time. Not many treehouses look like a fairytale cottage with a crooked stovepipe on the roof. It was built in the mid-1960s and dismantled in 1972 when we moved away.” — Martin Schmidt, Carmel, California
Sonja Peshkoff
Arboreal Architecture
Bad Harzburg, Germany
“Architectural design turned reality through a treehouse hotel project, organized by the landowner and developer. The roof is curved.” — Sonja Peshkoff, Hamburg, Germany
Monica Paxson
Honeymoon Cottage
Julian, California
“It was created as a ‘honeymoon cottage’ by the owners when they married. At night, coyotes would climb the spiral staircase to the tin roof and dance around, with their nails clicking on the tin. It had a tiny galley kitchen and a wood-burning stove. ‘Something’ would chew on the house at night and I would throw shoes in the direction of the chewing.” — Monica Rix Paxson, Cuernavaca, Mexico
Emma Clifford
A Dream Come True
Vermont
“My husband, Shane Clifford, designed and built it. He is a teacher, and woodworking is one of his hobbies. He dreamed of a treehouse like this when he was a kid, and wanted to build it for our own three kids. It took two summers to build and required some technical maneuvering with ropes and harnesses. Eventually he’d like to add a spiral staircase winding up the tree to the opening in the railing. I’d like to add a twisty tunnel slide someday! It sleeps six people and each bed has a special animal name and painting adorning it: Heron’s Hideaway (folds down out of the wall from a chalkboard station), Rabbit’s Rest, Coyote’s Cot, Fox’s Featherbed, Bear’s Bungalow, and The Crow’s Nest (tucked up in the peak of the roof).” — Emma Clifford, Sharon, Vermont
To some, Vim is a beautiful relic from the past. To others, it’s that annoying thing you have to escape whenever you need to write a message for a merge commit.
Today I’ll introduce you to this picturesque text editor and its wonders, and show you why we’re still using it 26 years after its first release.
Whenever you open Vim on a file, there are three basic things you may want to do:
Read the file’s contents
Write to the file
Quit the program
Navigation
To navigate a file in Vim, use the letters h, j, k, and l. These commands are called motions, as they move the cursor.
The keys h and l will move your cursor horizontally (one character at a time), while j and k move vertically (one line at a time). If you put your hand on them, the layout sorta makes sense.
Some people have trouble remembering which key goes up and which goes down. Pro tip: j sorta looks like a downwards pointing arrow.
As a general note, it is considered bad practice (even though it’s possible) to use the arrow keys for moving in Vim. Get used to using hjkl, and I promise you’ll see a significant boost in speed.
Repetition
Once you’re confident moving through a file one character or line at a time, try pressing a number (any number, it could have many digits) before moving. You’ll jump as many times as the number you entered.
This is a very powerful concept in Vim: Repetition. Have you ever found yourself editing a text file and doing the same thing over and over? Especially something very mundane, like deleting quotes and replacing them with commas? Vim’s got you covered: Just do the thing once, and press . to repeat it. Enter a number and press . again if you want to repeat it as many times.
Editing
Moving around in a text file and reading what’s in it is good and all, but what if we need to change some of its contents? Do not despair: editing a file is as easy as pressing the i key. That will take you from normal mode into editing mode (what Vim’s own documentation calls insert mode).
Most Vim commands are only available in normal mode, and lingering in editing mode is usually frowned upon ‒ drop back to normal mode as soon as you’re done typing. That said, while in editing mode Vim behaves like any other text editor (with syntax highlighting on), making its functionality a superset of your typical notepad’s.
To exit editing mode, press the ESC key.
Quitting
To quit Vim, enter normal mode and type :wq if you want to save your changes (write and quit), or :q! if you want to leave without saving.
More commands: Actually useful features.
Editing files from the terminal might make you look like a cool hacker or something, but why should we use this text-based program instead of good old Sublime Text? The answer is commands.
A thousand ways of deleting text
Want to delete part of your file? You could enter editing mode and press backspace once for every character. It doesn’t really beat using Sublime and pressing ctrl+shift+left to select a whole word before deleting it.
If you really want to harness the power of Vim, you should try pressing the d key. Pressed once, it won’t do anything. But it’s not sitting idle: it is expectant, waiting for an order. Pass it a motion like the ones we learned today (l, 5j, whichever you feel like) and it will gobble those characters up. For instance, dNl, for any number N, will delete the N characters starting at the cursor.
Introducing new motions
e : Moves the cursor to the end of the current word (defined as a concatenation of letters, numbers and underscores).
w : Moves it to the beginning of the next word.
So if I have this text:
Hello there, general.
And my cursor is standing on the H. When I press de in normal mode, the line will end up looking like this:
 there, general.
While using dw will leave it like this:
there, general.
Notice how in the second example, there’s no space before ‘there’.
We could then press i to insert some replacement word after deleting ‘Hello’. Luckily, there’s an even more fluid way of doing that: the c command (for change). Pressing c followed by a motion is roughly equivalent to pressing d, then the motion, then i.
This is starting to look nicer, but it still doesn’t beat pressing shift+home/end and deleting a whole line in a few keystrokes, right? Well, I see that, and raise you the $ and 0 motions.
0: moves the cursor to the first character in the current line.
$: moves the cursor to the last character in the line.
There’s an even faster way of deleting the whole current line though: dd. And if you want to delete several lines? Prefix it with a count: xdd deletes x lines, starting from the current one.
Generally useful Vim commands
By now, the usefulness of vim when editing code (and just code — I wouldn’t encourage you to use vim for other text editing) should start to become apparent.
A few other commands you may want to check out are:
o and O: create a new line below or above the current one, respectively, and enter editing mode.
v : enter visual mode to select text to which you may then apply more commands.
y or Y: yank (copy) the selected text, or the current line, respectively.
p : put the yanked content. Notice that yanking will move text to a special Vim reserved buffer, and not to your usual clipboard. This way, you can effectively manage two different clipboards! One you can paste from with ctrl+shift+v as usual (in editing mode), and the other with p (in normal mode).
* : find the next occurrence of the current word.
When writing software, I find myself duplicating lines to change a few words quite often, so I think Yp is an amazing command.
I’ve barely scratched the surface with this introduction, but I hope I’ll at least have persuaded you into trying Vim out for yourself. It may not replace an IDE if you’re coding in Java or C++, especially if you’re using Frameworks and auto-complete is helping you. But when coding in C or Python, I usually pick it as my editor of choice. And sometimes when I need to transform a string quickly, editing it from Vim is faster than coding a script in Bash or Python.
If you want to learn more, let me know, as I’ll probably keep coming back to this as a series. But I also encourage you to try the software on your own, and run the vimtutor program from your shell (it usually comes preinstalled with Linux and on Macs). If you want to really learn how to optimize your Vim use after going through vimtutor, this very geeky, very awesome site may be of interest to you as well.
I hope you found this article useful or interesting, and as usual any feedback will be welcome, whether anything I said was plain wrong, or you actually liked some part of this tutorial.
Follow me for more Programming tutorials, tips and tricks, and consider also following me on Twitter to keep up to date with my articles.