I’ve been working on a small change to the open source project I mentioned, Zulip. The code required was trivial, but I spent most of my time figuring out a rational order of operations and being confused about how to accomplish them with the version control system.

I’m certainly no stranger to source code version control, but my experience is mainly with commercial systems. Git, used by many open source projects, isn’t much like the others I’ve worked with. The Zulip project has documentation about how they use git and github, the website that provides an online home for git repositories. But, of course, not everything can be anticipated (and documentation can have bugs too.)

The funny thing is, I’ve used github some before. But only in a very narrow way. If you have a private repository where you directly submit changes to a main branch, there’s not a lot to it. Combining different changes can be a nuisance, but that’s going to happen in any system like this. And if there’s only one or a few people making changes, the need for it can be nearly avoided with a tiny amount of discipline.

Where things get complicated is when you have a bunch of people working all at once. Even more when they are only loosely coordinated. Git assumes developers will work on what they want, and a handful of administrators will direct traffic with incoming requests to include new code in the main repository.

One thing I was misunderstanding is how (little) git thinks about branches. Branches are normal things in version control: you make one to take a copy of existing code so you can safely modify it away from the central, primary version.

In some systems, this is a resource-intensive operation where each branch is literally a copy of everything. Git doesn’t work that way. Since it functionally costs little to make a branch, branching is encouraged. You have your own copy of the code at a particular point in time. Both you and other people can make changes independently on different branches. You make some more branches. In the git universe, that’s no big deal. Time marches forward.
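To make that concrete, here’s the shape of it (the branch name is made up):

$ git checkout -b wild-experiment

You’re now on a new branch. Hop back with git checkout master, and when the experiment is over, git branch -D wild-experiment throws it away.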

After you do your thing with your branch, you probably want to somehow get it back into the main repository. I’m most familiar with merging, where the system compares two parallel but not identical sets of source code and figures out if the changes are neatly separated enough for it to safely mash them together for you. Some automagical stuff happens, and the result becomes the latest version. (This latest revision is typically called “HEAD”.)
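As a tiny sketch, with made-up branch names, a merge looks like this:

$ git checkout master
$ git merge my-feature

If the changes are separated cleanly enough, git commits the combined result on its own.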

If not, you get to do it by hand. Use a merge-intensive version control system for a while, and you will absolutely find yourself dealing with a horrific mess to unravel. Merging is ugly but, if you are used to it, it’s a known ugly. That’s a certain kind of comfort. You can do that with git if you want. Many people do.

And many people don’t.

One thing about branches: many systems consider HEAD the be-all and end-all picture of reality. You might not be happy with the most recent version of your branch, and you can keep a pointer to the revision you’d rather have, but HEAD is always the most recent version. If you don’t like it, you make a change and now you have a new HEAD. Time always moves forward. Rewriting history, to the extent that it can be done at all, is only for the most dire of emergencies.

Git has something called “rebase.” You can use it in a couple of different ways, but it’s basically the version control equivalent of a Constitutional Convention: everything is on the table. You don’t like the commit message from three changes ago? Rebase. Want those 47 typo-fixing revisions you created to disappear? Rebase. It’s also an alternative to merging: instead of mashing two branches together, your branch’s changes are replayed on top of the other branch’s HEAD, so any changes made between the time you branched and now end up underneath your own. (If you want a real explanation, here’s a PDF that helped me understand how rebase works.)
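A small sketch of both uses, with made-up names and numbers. To revise your last several commits, squashing typo fixes and rewording messages along the way:

$ git rebase -i HEAD~5

This opens an editor where you mark each commit to keep, squash, reword, or drop. To use rebase instead of a merge, replaying your branch on top of the main line:

$ git checkout my-feature
$ git rebase master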

Coming from a merge-land where HEAD is sacred, this terrifies me. You are going into the past and messing with history, and that Just Isn’t Done. Admit that you checked in something with the commit message “shit is broke” and move on.

When branches are expensive and you don’t want to make too many of them, you have to protect the integrity of the ones you have. The idea of something like rebase is dangerous, and with great power comes great responsibility.

When branches are cheap, and you make one because you feel like watching what happens when you delete the database maintenance subsystem? Well, have fun. Clean up after yourself when you are done. It’s not exactly a different universe, but you think about some things in different ways. I’m not entirely there yet, but rewriting history is apparently one of those things.

In making my code change, I ran into a bunch of small things I didn’t understand. I was concerned that I’d do something that would make a mess, and it would be hard to clean up. I didn’t yet know the commands that would have helped. I didn’t understand the multiple purposes of others. I was entirely terrified by the idea of rebase. (I still mostly am, to be honest.)

I made a small mess attempting to merge in an environment that was expecting a rebase. Then, halfway through, I attempted to cancel, but the change was applied anyway. There were a few mysteries as things seemed to behave inconsistently. Some of it would have been easier if I had thought to create a branch to try things out, against my previous conditioning.
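For anyone else caught mid-mess: both operations have standard escape hatches, assuming you reach for them at the right moment:

$ git merge --abort
$ git rebase --abort

And git reflog lists where your branches pointed before each step, which makes even a botched rebase mostly recoverable.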

So that happened.

I’ve, of course, considered such services for a long time. My first serious identity theft episode (besides credit cards) was about 15 years ago, when I was informed by my mortgage loan officer that I would not be getting that top-tier rate we had previously discussed.

There were items sent to collections that I had never heard of. Addresses were reported where I had never lived. One unscrupulous collections agency took my report of fraud, attached to their record the full, correct contact info they required me to give them, and submitted it all over again to the credit agencies as valid.

Among other things, the thieves signed up for local telephone service. But the phone company had No Earthly Idea where they might be located and apologized that they would be unable to help me on that issue, Thank You And Have A Nice Day. A police department in a state I had never lived in refused to accept a report except in person. I couldn’t get anyone to tell me whether the driver’s license number on one of the credit applications meant someone had applied for one in my name. My own state and local authorities wanted nothing to do with it, because the visible crime happened elsewhere. “You could try calling the FBI, but they are only interested in cases over a million dollars.”

At one point, when I was having a rather convoluted “discussion” with one of the credit bureaus, I offered to come to their office with paper copies of documents supporting my request to remove the fraudulent items. The main corporate office was ten minutes’ walk from my workplace. They offered to call the police if I explored that possibility.

This took several years to fully clean up, continuing even after I moved to California. I still have to assume that my personal information is sitting out there, waiting for someone else to abuse it. For all practical purposes, I have a lifetime subscription to credit reports on demand.

So let’s just say I’ve gotten pretty good at this. It’s a giant pain in the ass, but not enough to pay someone a monthly fee for the rest of my life (and probably after.) Particularly when the services available consisted of little more than automated credit report checking. Once in a while something happens, I spend a few weeks arguing about it with various companies, and then it goes away. (Until next time.)

So what changed?

Well, you might have noticed I know a thing or two about computers. Keeping them safe and secure, to the best of my abilities and time available. You would not be surprised to learn that I like backups. Backups! Backups as far as the eye can see! Backups that run hourly. Backups that are swapped out whenever something has the slightest suggestion of a hardware blip. Backups that live in my travel bag. Backups that live at my mother’s house. And backups that live in my car.

My usual “offsite backup” stays in the car glovebox. Every so often (I try for at least monthly) I take it inside and refresh it. We do have a storage unit, and I could keep it there, but it’s far less convenient. That means it would be updated less often, and monthly is already not that great.

My laptop backup is encrypted, as are all of my USB hard drives if possible. My server backup is one of those that is not, because the OS version is too old. So my glovebox backup is one USB drive with two volumes, one encrypted and one not.

The unencrypted server backup always concerns me a bit. If someone knowledgeable got it, it has all the information necessary to royally screw with my server. That’s a problem. But eventually that server will be going away, replaced with something better. And it’s a basic machine that runs a few websites and processes my outbound email. (I haven’t hosted my own inbox in years.) Yeah, having some archived files of ancient email released would not be fun. But that’s the extent of anything that would impact my actual personal life.

I’d rather not have my backup drive stolen out of the car, sure. It would be annoying, both for the car and having to lock down my server. But it wouldn’t be the end of the world.

So that’s not it, what else? (I’m guessing, at this point, you have some idea that there will be a car chapter to this story.)

A few weeks ago, my spouse decided that this offsite backup thing wasn’t such a bad idea. The thought of having to use it, because the house burned down or all our stuff was stolen, is not pretty. But it’s better to have something in that situation than have nothing. And it’s not that difficult to remember to update and put back once in a while. So he did.

Given that he’s the inspiration for the “tinfoil hat” userpic I have on one of my social media accounts, I presumed it was encrypted. He has many years’ experience in professional system administration and is far, far more paranoid than I am. Nothing with a name or address is discarded intact. He insists the shredding goes to a facility where he can watch it being shredded. When I moved to California, he would not use the cheap 900 MHz cordless phone I brought with me because it was insecure. He doesn’t like my passwords because sometimes I have to choose ones that are capable of being manually typed within two or three tries.

Guess what. Oops.

A few days ago, someone broke into our car and ransacked the glovebox. The only things taken were a small bag of charging cables and two hard drives, mainly because there was nearly nothing else to be had. (This is, by far, not my first rodeo.) Car documents, paper napkins, and some random receipts were scattered about.

One of those hard drives is my spouse’s unencrypted laptop backup.

First I dealt with the immediate problem of filing a police report, which took about 20 minutes on the phone. The process is at least highly efficient, even if it is almost certainly useless for getting our stuff back or identifying a suspect. But to be able to discuss this with my insurance company, it needed to be done.

Then came the discussion on what, exactly, was on that hard drive: it’s a copy of his user directory. So it didn’t contain system passwords, but that was about the only good thing that could be said. He uses a password manager for many things, but it’s not possible to protect everything that way. Years of email, confidential documents, client project details, credit card statements, tax returns, the medical documents I needed him to scan for me while I was out of town. All there. I handle most of the household finances, so a great many more items are instead on my machine. But sometimes you have to share, and things get passed around.

It’s almost certain that the thief didn’t care about the data. But wherever those drives get dumped, or whoever they are sold to, somebody very easily could. Names, addresses past and present, names and addresses of family members, birth dates, social security numbers, financial account numbers, everything necessary to utterly ruin our financial lives.

I’ll have more to say in other posts: which service I chose, what happened with the car, and how this story develops. But that explains why, after many years of not being impressed with paid monitoring services, I have now forked over my money for one.

This past week I started looking at Zulip, an open source group communication tool. It has web and mobile clients and a Python back end. I ran into a few speedbumps getting my development environment set up, so this is my collection of notes on that process. If you aren’t interested in Linux or Python, you might want to skip this post; it’s full of sysadmin stuff.

The Zulip development setup instructions are good, but assume you are running it on your local machine. There are instructions for several different Unix platforms; the simplest option is Ubuntu 14.04 or 16.04. (The production instructions assume you want a real production box, and Zulip requires proper certs to support SSL. Dev is plain old HTTP.)

The standard dev setup walks you through installing a virtual environment with Vagrant. But I’m using my Ubuntu test box, an Intel Next Unit of Computing (NUC). Many folks use these for small projects like home media controllers because they are inexpensive, low power, and self-contained. But hefty they are not. I have 2 GB of RAM and a 256 GB SSD, so I decided to go with the direct Zulip install without Vagrant. It isn’t complicated, but there isn’t a nice uninstall process if you want to remove it later. (I’m not worried about that for a test machine.)

I installed in my home directory, as my own user, and started with the suggested run-dev.py with no options. The standard configuration listens only on localhost, which was problem number one. I could fetch a page with wget, so I knew something was working, but the box has no web browser and I couldn’t access it with one from another machine.
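A quick way to confirm that, assuming the documented port of 9991 (more on ports in a moment), is to fetch the page locally:

$ wget -qO- http://localhost:9991/ | head

If that spits out HTML, the server is alive and the problem is reachability, not the service itself.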

I looked through the docs, which are pretty good on developer topics but have some thin spots, but didn’t find anything that looked like a command-line reference. There was one mention of --interface='' buried in the Vagrant instructions, but with a null argument its purpose wasn’t obvious. I asked in the Zulip dev forum (which is actually a channel, or “stream,” on a public Zulip instance) and learned that this is where I should specify my machine’s address.

So my start command looks like this:

$ ./tools/run-dev.py --interface=<your machine's IP address>

This is where I get to speedbump number two. (I’ll skip over some random poking around here.)

The instructions say the server starts up on port 9991. Ok, great. The last part of the log on startup ends with this:

Starting development server at http://<your-ip>:9992/
Quit the server with CONTROL-C.

This, to me, says that it’s running on port 9992. Having seen previous cases of services failing to promptly release their ports, and then working around that by incrementing the number and trying again, I didn’t think much of it. I had stopped and started the process a bunch of times. This is a development install of an open source project under active development. Ok, whatever, I’ll investigate that later. 9992 it is.

Except it wasn’t. The web UI was indeed listening on 9991 as advertised, but I didn’t realize it. The closest thing I saw in the logs suggesting this is the line

webpack result is served from http://<your-ip>:9991/webpack/

but since I’m brand new to this project and have no idea what webpack is, that didn’t mean much. It took a couple stupid questions to work out, but eventually I got all the parts together.

So, to summarize:

Read the install directions. They work.

If you are installing on another host, set that host’s IP address with an option when you start the process:

run-dev.py --interface=<your machine's IP address>

And look at port 9991 on that host for the web interface.


From Stolen Wallet to ID Theft, Wrongful Arrest

I saw this article today and it reminded me of one of the identity theft disasters I went through many years ago. While I was investigating accounts that had been opened in my name, I found that one had a driver’s license number associated with it. It obviously wasn’t mine, because it was from a state I had never lived in. But if it had been, things could have gone very differently.

This person discovered it the hard way, as he was arrested for crimes committed by someone pretending to be him. And that was even after having reported the theft to the local police.

The blog post goes on to discuss things to do after a wallet is stolen. It’s a list worth reading.

I try to make it difficult to grab my bag, but as we’ve noticed, that doesn’t always help. Isolating the valuable and important things I do have to carry around with me did, though. They only got my phone, cash, and a few other minor things that were in my transit wallet, and the effort required to get past the security features of my bag meant I knew immediately.

While I was writing the previous post, I came across this:

I got hacked mid-air while writing an Apple-FBI story

A journalist, working on a story, was shocked to have a fellow passenger quote back to him emails he had written while using the onboard network. It changed his mind about the “nothing to hide” argument, which holds that privacy and encryption aren’t a big deal, so why make such a fuss about it. (You can likely guess my opinion on that.)

A couple of weeks ago I finally paid for wifi on a flight, mostly to check it out. And the very first thing I did was make sure I could turn on my VPN. Just as on any public network.

Now I’m not always the most diligent about ensuring no unencrypted communications leak out, but I try. Sometimes I forget to shut down apps, and they send and receive data before the VPN finishes coming up. That’s where I need to try harder. Turning off wifi before closing the laptop is also part of it. (I could configure my machine to block anything not using the VPN, but that is annoying when I’m home.)

Now what I don’t know is what is visible when I’m connected to the aircraft’s access point but don’t have a real Internet connection. I do that a lot to check the flight status, but without actual Internet there’s no way to enable my VPN. Other applications may be trying to send data anyway.

There’s a smaller group of possible snoopers on an airplane, but aside from that it’s no different from any other public network. That’s an important point to remember.

I did say that this blog would avoid getting into political issues and stick to practical concerns. But the events of the past week with Apple and the FBI are pretty disturbing and I want to talk about why.

First, nothing about the technical matters involved in the conversation (with one exception) is anything that you or I or any other private individual can do anything about. It’s all taking place in the rarefied air of law enforcement vs public policy, among those who believe they know what is good for us and have the power to change how others are allowed to access our personal data. We can lobby our elected officials and hope somebody can get past the fear mongering enough to listen.

Next, there is one technical thing you can do to protect your personal device: choose a strong passcode. I’m going to assume you already use a passcode, but the default four or six digit number isn’t going to stand up to a brute force attempt to break it. Make it longer. Make it numbers and letters if you can stand to (it’s a real pain to enter, I know.) Do it not because you “have something to hide” but because you will always have something you don’t want shared and it’s not possible to know in advance what, when, or how that might come about. Make protecting your personal data a normal activity. The longer and more complicated your passcode, the more effort it will take to guess. As long as we continue to use passcodes this will be true, and the goalpost is always moving.

Now, on with the real subject of this post. Get comfortable, this will take a while.

Folks who have followed this issue know that Apple (along with other companies) has routinely responded to search warrants and other official requests for customer data. From a practical standpoint, they have to. But they have also been redesigning parts of their systems to make such access less feasible. (It’s important to note that recovering data from a locked device is not the same as unlocking it.) Not only is it now more difficult for people outside Apple to access private data on iOS devices, it’s also more difficult for Apple itself to do.

Discussion of the two current court cases, with detail on what is possible to recover from a locked device for various iOS versions
No, Apple Has Not Unlocked 70 iPhones For Law Enforcement

Court order requiring Apple to comply

The reason for this has many parts, and one very important part is of course to make their product more attractive to customers. Apple is in the business of selling equipment; that’s what they do. When it came out that we tinfoil hats hadn’t just been making up the stuff we suspected the NSA was snooping on (the reality far exceeded our speculations), US companies suddenly had a huge problem: international customers. Foreign organizations, businesses and governments alike, were none too keen to have confirmed in excruciating detail the extent to which the US government was spying on everyone. If US companies want to continue to sell products, they have to be able to convince security-conscious customers that they aren’t just a lapdog for the NSA.

When somebody says “Apple is only doing this because of marketing” consider what that means. People don’t buy your product without “marketing.” Unless you have somehow managed a sweet exclusive deal that can never be taken away, your company depends on marketing for its continued existence. And your marketing and the products it promotes have to appeal to customers. All over the world, more and more people are saying “You know, I don’t much like the idea that the US Government could walk in and see my stuff.”

Strong cryptography is not just for protecting business interests. Livelihoods, and sometimes lives, also depend on the ability to keep private things private. For years people have claimed that products built in China are untrustworthy because the Chinese government can force their makers to provide a way in to protected data. It’s better to buy from trusted companies in enlightened countries where that won’t happen. Who is left on that list?

And what about terrorism? Of course, the things terrorists have done are awful. Nobody is contesting that. But opening everyone up to risk so governments have the ability to sneak up and overhear a potential terror plot doesn’t change how threats are discovered. The intelligence agencies already have more data than they are able to handle; the process is what’s broken, not that they suddenly have nothing to look at. There have been multiple cases where pronouncements of “This would have never happened without encryption” were quickly followed by the discovery that the perpetrators were using basic non-encrypted communications that were not intercepted or correctly analyzed. “Collect more data because we can” is not a rational proposal to improve the intelligence process, even if the abuse of privacy could be constitutionally justified.

There is no such thing as a magic key that only authorized users are permitted to use and all others will be kept out forever. If there’s a way in, someone will find it. Nothing is perfect, a defect will eventually be found, maybe even those authorized users will slip up and open the door. Also, state actors are hardly trustworthy when they say these powers will only be used to fight the most egregious terror threats and everybody else will be left alone. Even if they could prevent backdoors from being used without authorization, their own histories belie their claims.

The dangers of “secret” entry enforced only by a policy of not giving out the key
TSA Doesn’t Care That Its Luggage Locks Have Been Hacked

Intelligence agencies claim encryption is the reason they can’t identify terror plots, when the far larger problem is that mass surveillance generates vast quantities of data they don’t have the ability to use effectively
5 Myths Regarding the Paris Terror Attacks

Officials investigating the San Bernardino attack report the terrorists used encrypted communication, but the Senate briefing said they didn’t
Clueless Press Being Played To Suggest Encryption Played A Role In San Bernardino Attacks

What the expanded “Sneak-and-Peek” secret investigatory powers of the Patriot Act, claimed to be necessary because of terrorism, are actually being used for
Surprise! Controversial Patriot Act power now overwhelmingly used in drug investigations

TSA ordered searches of cars valet parked at airports
TSA Is Making Airport Valets Search Your Trunk

What is being asked of Apple in this case?

Not to unlock the phone, because everyone agrees that’s not technically possible. Not to provide data recoverable from the locked device by Apple in their own labs, which they could do for previous iOS versions but not now. What the court order actually says is that they must create a special version of the operating system that prevents data from being wiped after 10 incorrect passcodes, provide the means to rapidly try new passcodes in an automated fashion, and find a way to install this software on the target device (which will only accept OS updates via Apple’s authorized standard mechanism.)

What would happen if Apple did this?

The government says Apple would be shown to be a fine, upstanding corporate citizen, this one single solitary special case would be “solved,” and we all go on with our lives content to know that justice was served. Apple can even delete the software they created when they are done. The FBI claims the contents of this employer-owned phone are required to know if the terrorists were communicating with other terrorists in coordinated actions. No other evidence has suggested this happened, so it must be hidden on that particular phone (and not, for example, on the non-work phones that were destroyed or in any of the data on Apple’s servers that they did provide.)

How the law enforcement community is reacting to the prospect of the FBI winning this case
FBI Says Apple Court Order Is Narrow, But Other Law Enforcers Hungry to Exploit It

Apple would, first and foremost, be compelled to spend considerable effort on creating a tool to be used by the government. Not just “we’ll hack at it and see what we find” but a testable piece of software that can stand up to being verified at a level sufficient to survive court challenges of its accuracy and reliability. Because if the FBI did find evidence they wanted to use to accuse someone else, that party’s legal team would absolutely question how it was acquired. If that can’t be done, all this effort is wasted.

A discussion of the many, many requirements for building and maintaining a tool suitable for use as a source of evidence in criminal proceedings.
Apple, FBI, and the Burden of Forensic Methodology

Next, the probability of this software escaping the confines of Apple’s labs is high. The 3rd-party testing necessary to make the results admissible in court, at absolute minimum, gives anyone in physical possession of a test device access to reverse-engineer the contents. If the FBI has the target device, it too can give it to their security researchers to evaluate. Many people will need access to the software during the course of the investigation.

Finally, everyone in the world would know that Apple, who said they had no way to do this thing, now does. And now that it does, more people will want it. Other governments would love to have this capability, and Apple, as a global company, would be pressured to give in. What would that pressure be? In the US, it’s standard for the government to threaten crushing fines or imprisonment of corporate officers for defying the courts. Governments can forbid Apple to sell products in their countries or assess punitive import taxes. Any of these can destroy a company.

Non-technical people often decry security folks as eggheads crying wolf over petty concerns when there are so many more important things to discuss. That’s fine, and our role as professionals includes the responsibility to educate and explain how what we do impacts others.

I encourage you to consider this issue for yourself and what it would mean if you were the person at the other end of that search warrant. Ubiquitous communication and data collection have fundamentally changed how much of the world lives and works, and there are plenty of governments far less principled than our own who want access to private data of mobile phone users. We should not be encouraging them by saying it’s just fine, no problem, go ahead, just hand over the (cryptographic) keys and everything will be ok.

A while back, I stopped paying attention to anything at forbes.com. It wasn’t on purpose (a friend of mine blogs there) but because without JavaScript it serves up a big, blank, nothing. I tried a few times to selectively allow scripts via the Firefox extension NoScript, but no combination of what I considered reasonable permissions would work. I gave up.

Then a security researcher, casually web browsing with (for a security researcher) a normal setup that includes an ad blocker, found malicious software (malware) coming from an advertisement on the Forbes website.

When easy to use tools to block web ads became available, some bemoaned the end of the Free (Internet) World because sites would no longer be able to rely on ads for revenue. Of course users, subjected to ever more annoying advertisements, disagreed.

But whether or not you believe blocking ads is a communist plot to destroy the Internet, there is another problem that this Forbes experience neatly points out: security.

The trouble is that those ads now usually include dynamic content: code sent to your browser that causes windows to open or move around, makes stuff dance on your screen, and generally creates a nuisance. But since you can’t know exactly what is sent, there could be other things. Popular at the moment is installing what’s called “ransomware,” software that encrypts files on your computer until you pay up.

Here’s a report of the Angler Exploit Kit, the one found in a previous Forbes malware discovery, being used for just that.

I don’t use a specific ad blocker because I’m already blocking dynamic content with NoScript. It’s basically the nuclear option, and isn’t for everyone. I still get ads, but without the singing and dancing (or malware.) If you want to try an actual ad blocker, here are some resources to look at:

The New York Times tests ad blockers for iOS 9
A survey of ad blocking browser plug-ins
Adblock Plus, a very popular plug-in for Firefox

One of the things I do to protect myself is vigorously restrict disclosure of my physical address. I use a mailbox service and provide only that address unless I am compelled otherwise. For example, to register to vote I was required to give my actual residence so I can receive the correct ballot (which arrives at my mailing address.)

Then this happens:

Report: 191M voter records exposed online

Some organization that holds copies of US voter records, through a monumental database screw-up, has allowed public access on the Internet to all of the data. Nobody knows exactly how, or by whom, or even for how long, because the most likely actors are falling over themselves to disclaim any association with the breach.

The California Secretary of State reports that there were 17.7 million registered California voters in 2015. The author of the above article quotes a security researcher who verified access to “over 17 million California voters.” I will leave as an exercise for the reader the percent chance of my information having been exposed.

The problem with secret information is that once it’s released there’s no way to pull it back. Access to voter information varies by state, but many states restrict who can access it and for what purposes. California is particularly strict in that it can only be used for campaign or government purposes. Without question, this disclosure is violating the law. There will be investigations, and charges, and lawyers will wrangle over this for years to come. Maybe, eventually, some person or organization will be held to account.

But for some people, none of that will matter. It’s not just an academic discussion when I have friends and colleagues who regularly receive threats of death and other abuse of the most vile nature. Even those who have been similarly assiduous about protecting their physical addresses will have to face the possibility that the only way to protect themselves from their harassers is to move.

For those friends and colleagues, I can at least report that the State of California has a program that provides a free Post Office Box to qualifying abuse victims, which can legally be used to register to vote and access other government services. So if it comes to that horrible decision, perhaps you can get some help to protect yourself afterward.

For me, and everybody else, we are on our own. If you live in California and want to express an opinion in this matter, here are some suggestions:

Governor Edmund G. Brown Jr.
Secretary of State Alex Padilla
Senator Barbara Boxer
Senator Dianne Feinstein
Find Your California Representative

For other states:
Find Your Senators and Representatives – OpenCongress

This, friends, is the future.

You may recall my previous post about Apple’s two-step verification and how I reluctantly disabled it for a long trip outside the US. Now I find out that the government of Australia came to the same conclusion. Only one of us seems to be troubled by it, however.

Australian government tells citizens to turn off two-factor authentication
When going abroad, turn off additional security. What could possibly go wrong?

I’m not going to get into any conspiracy theories about why the Australian government might wish to discourage the use of better authentication methods. If they wanted to get into someone’s government services account, I presume they have other ways to do it than hope they can guess at their lousy password.

But putting out the suggestion that two factor auth is something maybe not so important? There’s the real offense. “Go ahead and enjoy your holiday, don’t bother your pretty little head about that complicated security thing.”

Yes, the problems of handling two factor auth when swapping SIMs are a concern. A concern for the people who design these systems, which are complex and cumbersome to use, and who seem to forget that real people don’t conveniently stay put all the time. But how about we talk about that instead of discouraging people from using them?

I wanted a dedicated server to experiment with Swift development on Linux, so I set it up on an Intel NUC (“Next Unit of Computing”) embedded box similar to a Mac Mini. It’s a DE3815TYKHE kit I got from a Tizen developer event a while back. It comes with an Atom E3815 CPU and 2 GB of RAM. I’m not using the onboard 4 GB of flash storage but installed a 256 GB SSD.

Taking advice I found from other users, I updated the BIOS to something known to work as a headless server (without monitor and keyboard) and installed Ubuntu Server 14.04.3 LTS. I could have used the latest 15.10 version, but since Ubuntu has designated 14.04 as a Long Term Support release it’s safe to use for several more years without concern I will be forced to upgrade.

After getting the box set up, the next question was where to install the Swift dev tools. All the comments I’ve seen seem to expect you to put them in your own home directory, which fits with the fact that the file permissions for the contents of the tar package allow access only by the owner. That’s fine if you are doing this on a VM that only you will be using, but I wanted the option of sharing this install with another developer on my server. The only reasonable way to do that is to put it in a system location and make it owned by root.

The topic of where to actually install a package on a Unix-type server is a religious discussion on the order of which editor to use, so I’ll just say that I put it in /usr/local. (I changed the versioned package directory name to “swift” for convenience.)

The install directions on the Swift download page are good and easy to follow if you are already comfortable with average command-line system administration tasks. (Don’t forget to add the install path to your user’s PATH as described.) Additionally, I installed clang 3.6 as suggested on the github page for anyone on Ubuntu 14.04 LTS.
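With my renamed directory, that means adding a line something like this to your shell profile:

export PATH=/usr/local/swift/usr/bin:$PATH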

The directions don’t talk about the install path much. I discovered I had a problem when I got permission errors trying to compile a trivial “Hello, World” example. root could compile, but not anybody else. The solution was to modify all the file permissions so other users can read and execute the needed files. Since I untarred into my install location as root, root already owned all the files so the owner permissions were fine. I didn’t want to universally change everything when adding group and other permissions (plain text files don’t need to be executable, after all) so I did that by hand.

First give group and other users read permissions. Even text files need this, so it’s safe to do it with one recursive command from the top level of my install directory.

chmod -R og+r *

Now locate all the directories and add execute permissions so regular users can traverse the filesystem.

find . -type d -exec chmod og+x {} \;

Finally, identify the remaining files that should be executable by searching for the original owner permissions in a detailed directory listing of everything.

ls -lR | grep rwx

These are the ones I found that only had “rwx” in positions 2-4 indicating permissions for the file owner:

in swift/usr/bin:

-rwxr--r-- 1 root root 56959 Dec 18 23:36 lldb-3.8.0
-rwxr--r-- 1 root root 86318 Dec 18 23:36 lldb-argdumper
-rwxr--r-- 1 root root 927980 Dec 18 23:36 lldb-mi-3.8.0
-rwxr--r-- 1 root root 63672187 Dec 18 23:36 lldb-server-3.8.0
-rwxr--r-- 1 root root 9177 Dec 18 23:35 repl_swift
-rwxr--r-- 1 root root 73808411 Dec 18 23:32 swift
-rwxr--r-- 1 root root 1754089 Dec 18 23:39 swift-build
-rwxr--r-- 1 root root 7683691 Dec 18 23:36 swift-build-tool
-rwxr--r-- 1 root root 856388 Dec 18 23:31 swift-demangle

in swift/usr/lib/swift/linux:

-rwxr--r-- 1 root root 7287250 Dec 18 23:39 libFoundation.so
-rwxr--r-- 1 root root 5037507 Dec 18 23:33 libswiftCore.so
-rwxr--r-- 1 root root 15373 Dec 18 23:33 libswiftGlibc.so
-rwxr--r-- 1 root root 172853 Dec 18 23:39 libXCTest.so

in swift/usr/lib/swift/pm:

-rwxr--r-- 1 root root 284768 Dec 18 23:39 libPackageDescription.so

Add execute permissions to these files individually with chmod og+x.
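For the files in swift/usr/bin, that works out to something like this:

$ cd /usr/local/swift/usr/bin
$ chmod og+x lldb-3.8.0 lldb-argdumper lldb-mi-3.8.0 lldb-server-3.8.0 repl_swift swift swift-build swift-build-tool swift-demangle

and likewise for the .so files under swift/usr/lib/swift/linux and swift/usr/lib/swift/pm.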

After all this, I was able to compile from a regular user’s home directory.
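A quick smoke test from a regular account looks like this (the file name is a throwaway, and it assumes the toolchain’s swiftc is on your PATH):

$ echo 'print("Hello, world!")' > hello.swift
$ swiftc hello.swift
$ ./hello
Hello, world!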