A while back, I stopped paying attention to anything at forbes.com. It wasn’t on purpose (a friend of mine blogs there); it’s just that without JavaScript the site serves up a big, blank nothing. I tried a few times to selectively allow scripts via the Firefox extension NoScript, but no combination of what I considered reasonable permissions would work. I gave up.

Then a security researcher, casually web browsing with (for a security researcher) a normal setup that includes an ad blocker, found malicious software (malware) coming from an advertisement on the Forbes website.

When easy-to-use tools to block web ads became available, some bemoaned the end of the Free (Internet) World because sites would no longer be able to rely on ads for revenue. Of course users, subjected to ever more annoying advertisements, disagreed.

But whether or not you believe blocking ads is a communist plot to destroy the Internet, there is another problem that this Forbes experience neatly points out: security.

The trouble is that those ads now usually include dynamic content: code sent to your browser that opens and moves windows, makes stuff dance on your screen, and generally creates a nuisance. But since you can’t know exactly what is sent, there could be other things. Popular at the moment is installing what’s called “ransomware,” software that encrypts the files on your computer until you pay up.

Here’s a report of the Angler Exploit Kit, the one found in a previous Forbes malware discovery, being used for just that.

I don’t use a specific ad blocker because I’m already blocking dynamic content with NoScript. It’s basically the nuclear option, and isn’t for everyone. I still get ads, but without the singing and dancing (or malware.) If you want to try an actual ad blocker, here are some resources to look at:

The New York Times tests ad blockers for iOS 9
A survey of ad blocking browser plug-ins
Adblock Plus, a very popular plug-in for Firefox

One of the things I do to protect myself is vigorously restrict disclosure of my physical address. I use a mailbox service and provide only that address unless I am compelled otherwise. For example, to register to vote I was required to give my actual residence so I can receive the correct ballot (which arrives at my mailing address.)

Then this happens:

Report: 191M voter records exposed online

Some organization that holds copies of US voter records, through a monumental database screw-up, has allowed public access on the Internet to all of the data. Nobody knows exactly how, or by whom, or even for how long, because the most likely actors are falling over themselves to disclaim any association with the breach.

The California Secretary of State reports that there were 17.7 million registered California voters in 2015. The author of the above article quotes a security researcher who verified access to “over 17 million California voters.” I will leave as an exercise for the reader the percent chance of my information having been exposed.

The problem with secret information is that once it’s released there’s no way to pull it back. Access to voter information varies by state, but many states restrict who can access it and for what purposes. California is particularly strict: voter data can be used only for campaign or government purposes. Without question, this disclosure violates the law. There will be investigations, and charges, and lawyers will wrangle over this for years to come. Maybe, eventually, some person or organization will be held to account.

But for some people, none of that will matter. It’s not just an academic discussion when I have friends and colleagues who regularly receive threats of death and other abuse of the most vile nature. Even those who have been similarly assiduous about protecting their physical addresses will have to face the possibility that the only option to protect themselves from their harassers is to move.

For those friends and colleagues, I can at least report that the State of California has a program that provides a free Post Office Box to qualifying abuse victims, which can legally be used to register to vote and access other government services. So if it comes to that horrible decision, perhaps you can get some help to protect yourself afterward.

For me, and everybody else, we are on our own. If you live in California and want to express an opinion in this matter, here are some suggestions:

Governor Edmund G. Brown Jr.
Secretary of State Alex Padilla
Senator Barbara Boxer
Senator Dianne Feinstein
Find Your California Representative

For other states:
Find Your Senators and Representatives – OpenCongress

This, friends, is the future.

You may recall my previous post about Apple’s two-step verification and how I reluctantly disabled it for a long trip outside the US. Now I find out that the government of Australia came to the same conclusion. Only one of us seems to be troubled by it, however.

Australian government tells citizens to turn off two-factor authentication
When going abroad, turn off additional security. What could possibly go wrong?

I’m not going to get into any conspiracy theories about why the Australian government might wish to discourage the use of better authentication methods. If they wanted to get into someone’s government services account, I presume they have other ways to do it than hoping to guess a lousy password.

But putting out the suggestion that two-factor auth is something maybe not so important? There’s the real offense. “Go ahead and enjoy your holiday, don’t bother your pretty little head about that complicated security thing.”

Yes, the problems of handling two-factor auth when swapping SIMs are a real concern. A concern for the people who design these systems, which are complex and cumbersome to use and seem to forget that real people don’t conveniently stay put all the time. But how about we talk about fixing that instead of discouraging people from using them?

I wanted a dedicated server to experiment with Swift development on Linux, so I set it up on an Intel NUC (“Next Unit of Computing”) embedded box similar to a Mac Mini. It’s a DE3815TYKHE kit I got from a Tizen developer event a while back. It comes with an Atom E3815 CPU and 2 GB of RAM. I’m not using the onboard 4 GB of flash storage but installed a 256 GB SSD.

Taking advice I found from other users, I updated the BIOS to something known to work as a headless server (without monitor and keyboard) and installed Ubuntu Server 14.04.3 LTS. I could have used the latest 15.10 version, but since Ubuntu has designated 14.04 as a Long Term Support release it’s safe to use for several more years without concern I will be forced to upgrade.

After getting the box set up, the next question is where to install the Swift dev tools. All the comments I’ve seen seem to expect that you will put them in your own home directory, supported by the fact that the file permissions for the contents of the tar package are set to allow access only by the owner. That’s fine if you are doing this on a VM that only you will be using, but I wanted to allow the option of sharing this with another developer on my server. The only reasonable way to do that is to put it in a system location and make it owned by root.

The topic of where to actually install a package on a Unix-type server is a religious discussion on the order of which editor to use, so I’ll just say that I put it in /usr/local. (I changed the versioned package directory name to “swift” for convenience.)

The install directions on the Swift download page are good and easy to follow if you are already comfortable with average command-line system administration tasks. (Don’t forget to add the install path to your user’s PATH as described.) Additionally, I installed clang 3.6 as suggested on the GitHub page for anyone on Ubuntu 14.04 LTS.
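In my case that PATH addition looks something like this; /usr/local/swift is just my renamed install directory from above, so adjust to your own location:

# add the Swift toolchain to PATH; put this in ~/.profile or ~/.bashrc
export PATH=/usr/local/swift/usr/bin:"$PATH"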

The directions don’t say much about the install location. I discovered I had a problem when I got permission errors trying to compile a trivial “Hello, World” example: root could compile, but nobody else could. The solution was to modify the file permissions so other users can read and execute the needed files. Since I untarred into my install location as root, root already owned all the files, so the owner permissions were fine. I didn’t want to universally change everything when adding group and other permissions (plain text files don’t need to be executable, after all), so I did that part by hand.

First give group and other users read permissions. Even text files need this, so it’s safe to do it with one recursive command from the top level of my install directory.

chmod -R og+r *

Now locate all the directories and add execute permissions so regular users can traverse the filesystem.

find . -type d -exec chmod og+x {} \;

Finally, identify the remaining files that should be executable by searching for the original owner permissions in a detailed directory listing of everything.

ls -lR | grep rwx
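That pattern is a little noisy, since it also matches directories and files with rwx in the group or other positions. If you’d rather let the filter do the work, anchoring the pattern at the start of the line matches only regular files whose owner bits are rwx:

ls -lR | grep '^-rwx'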

These are the ones I found that only had “rwx” in positions 2-4 indicating permissions for the file owner:

in swift/usr/bin:

-rwxr--r-- 1 root root 56959 Dec 18 23:36 lldb-3.8.0
-rwxr--r-- 1 root root 86318 Dec 18 23:36 lldb-argdumper
-rwxr--r-- 1 root root 927980 Dec 18 23:36 lldb-mi-3.8.0
-rwxr--r-- 1 root root 63672187 Dec 18 23:36 lldb-server-3.8.0
-rwxr--r-- 1 root root 9177 Dec 18 23:35 repl_swift
-rwxr--r-- 1 root root 73808411 Dec 18 23:32 swift
-rwxr--r-- 1 root root 1754089 Dec 18 23:39 swift-build
-rwxr--r-- 1 root root 7683691 Dec 18 23:36 swift-build-tool
-rwxr--r-- 1 root root 856388 Dec 18 23:31 swift-demangle

in swift/usr/lib/swift/linux:

-rwxr--r-- 1 root root 7287250 Dec 18 23:39 libFoundation.so
-rwxr--r-- 1 root root 5037507 Dec 18 23:33 libswiftCore.so
-rwxr--r-- 1 root root 15373 Dec 18 23:33 libswiftGlibc.so
-rwxr--r-- 1 root root 172853 Dec 18 23:39 libXCTest.so

in swift/usr/lib/swift/pm:

-rwxr--r-- 1 root root 284768 Dec 18 23:39 libPackageDescription.so

Add execute permissions to these files individually with chmod og+x.
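If you’d rather not pick through listings by hand, the whole cleanup can be scripted in one pass. This is just a sketch of the same steps above, assuming you untarred as root into /usr/local/swift like I did:

cd /usr/local/swift
# give group and other users read on everything
chmod -R og+r .
# make every directory traversable
find . -type d -exec chmod og+x {} \;
# let group and other execute any regular file the owner can execute
find . -type f -perm -u+x -exec chmod og+x {} \;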

After all this, I was able to compile from a regular user’s home directory.
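For the record, my smoke test was about as small as it gets; run it as a regular user, assuming the toolchain’s bin directory is on your PATH:

# compile and run a one-line program
echo 'print("Hello, World")' > hello.swift
swiftc hello.swift -o hello
./hello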

Recently some folks with Tor, the open source project behind the global decentralized anonymizing network, released a beta version of a new chat client. It’s designed to be secure by default, but usable by normal people. That combination has escaped many previous efforts, so it’s a welcome development.

It encrypts messages with OTR (so only you and the person you are chatting with can see them) and sends them via the Tor network (to hide where you are on the Internet.) These are very, very good things and I’m happy to see user-friendly applications building on the excellent work Tor has been doing.

The difficulty for me is how it fits into the way I use chat, specifically that it’s impossible to save chat transcripts. While that has a benefit for the purest high-security use cases (what doesn’t exist can’t be compromised), it is exactly the opposite of how I use chat.

It seems that many people use instant messaging only for one-off communications. I treat it like email and constantly go back to reference something I’ve sent or information I received. This is a major reason I’m still using Apple’s Messages client, because it makes searching chats trivially easy.

But despite Messages allowing you to use a whole collection of different chat services, it doesn’t provide encryption for anything other than Apple’s own service. (Which I don’t use for reasons too long to go into right now.) I’ve tried other clients, but haven’t been thrilled. Even without getting into whether or how they use encryption, I’ve found them clunky. And, most importantly, it’s hard to reference old messages. The best of them, Adium, has a custom viewer usable only from inside the app, and the archived chats use a tiny fixed-size font that can’t be changed. That makes it useless for me.

Between encryption by default and using the Tor network, I really, really want to like Tor Messenger. I dug around and, with some help from the Tor folks, figured out how to re-enable chat logs, but the results were not usable for several reasons:

First, it creates files in JSON format, something designed to be easily readable by computers. While it’s true that JSON contains text, it isn’t human-readable by any rational definition: the required formatting and other control structures get in the way of human understanding.
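To make that concrete, here’s roughly what a single saved message ends up looking like. This is a made-up entry to show the shape, not the client’s exact schema:

{"date":"2016-01-05T18:22:07.481Z","who":"alice@example.org","text":"see you at 7","flags":["incoming"]}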

Next, that file is overwritten every time the program starts. Unless you have your own way to save the contents automatically (and this is a far more difficult problem than it sounds), you lose your history anyway.

Finally, it’s located deep inside the app’s install directory. This is not a problem for me, but would certainly be an issue for anyone not very familiar with technical aspects of OS X. And that also means it’s excluded from Spotlight, Apple’s disk searching tool.

I still have hope, because it’s early and also because it’s open source. When they are able to release the Mac build instructions, I can just go change whatever annoys me. (And if I’m going to choose an open source project to work on, I’m thinking I might prefer the more security-focused Tor over Adium. Sorry, Adium friends.)

But for the moment, unless I’m willing to forge onward into the wilderness of creating my own custom version of something, I’m still stuck with the choice between secure but annoying, or insecure but fitting the way I work.

I wish I could make a joke and say this is some new country music dance I’ve invented. But authentication problems are not very funny, particularly when they involve something that is supposed to be helping me.

I’m going out of the country for a while, so in addition to the usual figuring out how to fit 10 pounds of travel gear into a 5-pound suitcase, I’m preparing my digital equipment as well. It started off simply enough, making sure I have the latest operating systems on all my devices. (Well, not really, but I’ll spare you the tedious Genius Bar conversations.)

The real problem is with my Apple ID and Apple’s two-step verification.

I have been using two-step verification (what the security world calls two-factor authentication), which means that when I do certain things involving logging in with my Apple ID, I have to enter a code that is sent to my phone. That’s all well and good, to make sure the person logging in is actually me.

But what happens when you don’t have that phone? Or, relevant to my situation, when you’ve replaced your usual SIM with one you bought in another country? Suddenly you can’t get those messages anymore, and you aren’t allowed to do whatever it was you were trying to do.

In theory, I could just register my other SIM as a “new device.” But to do that you need access to both devices at the same time: the old one to log in to your account and make changes, and the new one to authorize it. But I don’t know what my phone number will be when I get there (my SIM from the last trip might have expired), so I can’t do it before I leave. And my home SIM may or may not work (or may be hideously expensive to use) in my destination country. And in either case, since there is only one physical phone, I can’t have both SIMs active at the same time. I have other devices, but this process requires one that can receive SMS, and the wifi-only devices can’t.

Because of all this, I decided to disable two-step verification while I’m away.

Hugely Important Reminder: you should make any updates to your Apple ID before you leave, while you still have access to your regular phone number.

So I log in and disable two-step verification. Now that I’m not using it, I’m required to set security questions for my account. Security questions are horrible, and the way they are used makes your account less secure, not more. (Here’s an article about that: Study: password resetting ‘security questions’ easily guessed.) But this is what Apple requires, so here I am making up yet more passwords that I have to remember.

I pick the set of questions I’m going to answer, open up my trusty password manager, and generate a bunch of random text strings like I do for passwords. I copy the first string and paste it into the appropriate field. I copy the second string, but when I switch back to the browser, it resets what I just put in for the first one. This means that I have to actually type each security question answer. That is a recipe for fail if I manage to mess up the complicated string I’ve just generated. So it’s back to using real words that I can (usually) get correct the first time.
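For reference, the kind of random string I mean is trivial to generate; a password manager’s generator does the equivalent of this OpenSSL one-liner (your output will differ every run):

# 18 random bytes, base64-encoded: 24 characters of high-entropy text
openssl rand -base64 18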

If you want to know why I don’t like to do this, you can read this on Wikipedia about a particular type of password cracking: Dictionary attack.

I then compose a phrase for each question and save those in my password manager. I go to type one into the form and I can’t, because it’s too long. The answer field is size-limited, so my carefully crafted phrases are useless. I have to come up with shorter (less secure) phrases that I can type without errors, and they must be unique. I make one, and then start sticking numbers in it for each question. Of course, I can’t do anything helpful like include information about the individual question, because that reduces the randomness. In a short string particularly, any predictable part severely reduces its strength: an eight-character random lowercase string has 26^8 possibilities, but fix three of those characters and you’re down to 26^5, a 17,576-fold reduction.

Now I decide to set a recovery email, which Apple will use to notify you of authentication matters regarding your Apple ID. It’s a good idea, because if somehow you lose access to your primary email you can check a different account and get the alerts. I make up a new email address (because I can do that) and save everything.

I’m not really done, because I haven’t responded to the recovery email verification message, but I’ll get to that in a minute. Now I get to repeat the process for my second Apple ID. (FYI: very not recommended, it has been nothing but problems and I wish I hadn’t been forced to.)

I get through everything for my second Apple ID, and go to set the recovery email. I use the same email address that I created earlier, which it happily accepts. Now I go look at the emails generated by this process: most are confirmations, but the ones about the recovery email need to have the address verified. Ok, fine.

First ID recovery email: I go to the link in the message, type in Apple ID number one, and it says that email address can’t be used. What?? So now they tell me that I can’t use the same recovery email for multiple accounts. And because of this, I can’t verify it. I have to go back, log in as each ID (answering the security questions which, thankfully, can indeed be pasted from the password manager), and change the recovery email addresses to something else. Yet another thing that has to be saved in my password manager, in such a way that I don’t later confuse them between the two accounts.

NOW finally I’ve disabled two-step verification. I have six new unique passphrases and two new email addresses to keep track of, and my accounts are less secure than before. Win?

I’m packing for a trip and came across this article about RFID blocking wallets and such:

The Skimming Scam: RFID-blocking wallets can work. But do you really need one?

They block RF signals from reaching passports, credit cards, and other contactless data sources that can, in theory, be accessed remotely by anybody nearby with the appropriate reader. I have a bunch of shielded stuff, and I use it. Why bother?

“What’s less clear is whether RFID skimming is a threat worth worrying about in the first place. For all the hype about the theoretical danger, there have been few if any reports of actual crimes involving RFID skimming. The technique appears to be far more popular among security researchers than it is among thieves, and for good reason: There are much easier and more effective ways to steal people’s money and data.”

I don’t think they are a complete waste for average people, but they’re certainly a marketing thing for the manufacturers. I do it because I’d rather share less data than more and, more importantly, because I hang out in places with security researchers.

Now, I do buy bags with security features, many of which come with RFID-blocking pockets. I like that they do, but it’s the other locks, clips, security straps, and so on that are the reason I’m willing to pay more for them. (I look for them on sale or discontinued.) These kinds of physical security features are my primary interest, and they are absolutely worth it for me.

And I thought a single highly disturbing security story was enough for one day. I’m not even all the way through reading the article from The Intercept about how GCHQ and NSA have the keys to decrypt a huge swath of the world’s mobile phone communications, and already I have the urge to throw away all my computers and hide under a rock.

The Great SIM Heist: How Spies Stole the Keys to the Encryption Castle

Normally I’m not prone to hyperbolic statements like “There is nowhere to hide,” but for people who use any communication technology it’s more and more true. You are being monitored and archived. Maybe you are boring and uninteresting to government spooks. At the moment. Maybe forever. But how does it make you feel knowing that could, by deliberate action or entirely by accident, change at any time? It certainly doesn’t make me happy.

I woke up this morning to see that an actual computer hardware manufacturer has shipped machines with actual deliberately included “To improve customer experience” adware that compromises SSL for the user. Because capitalism, I presume.

Even with my non-expert understanding of digital security, I found this researcher’s discoveries terrifying. And the manufacturer is claiming the impact is minimal because “Superfish was preloaded on to a select number of consumer models only.”

So far I haven’t seen cries of “just re-install the operating system from a trusted source.” Perhaps they are out there and I’m (thankfully) missing those kinds of people from my social media sphere. These are low-end machines intended for average users. And while I can’t comment on how it is in PC-Land, certainly for OS X users the process of re-installing a clean operating system has been made absurdly difficult. I don’t even always do it these days. But this surely points out that I should.

So this came out today:

“Severe” password manager attacks steal digital keys and data en masse

There are lots of nifty, helpful password manager tools out there that will seamlessly let you create and use your passwords across all your devices. I don’t use any of them.

I do use a password manager (DataVault, if you are interested.) It has some nifty cloud features that I ignore. I sync by WiFi, on my local network. Only. I had problems with the browser integration, so I don’t use auto-fill. I go to the app, find the password I want, and copy it.

It takes some effort, but it’s not going to be compromised by someone’s poor site security. It’s not that I think all those other people are bad programmers; it’s that they are people and they are programmers, and bugs and other problems happen. Even without the slightest bit of poor coding, there could be a weakness in a third-party library, operating system, hosting service, or some other thing the system depends on.

If you think I’m a horrible Luddite disparaging the wonders of modern hosted services, consider that Spouse keeps his passwords as a text file saved in his system keychain and only accesses certain sites from his primary laptop in a secure location. Another perfectly acceptable solution.