I was not able to participate in a healthcare rally today, so instead I wrote about something that literally made it possible for me to have the career I do. Something that won’t be there for other women if, by law or by economics, they are denied access to hormonal contraceptives.

When I was a teenager, every menstrual cycle brought nausea. I never knew how many hours I’d be too ill to do anything but lie down, or when it would happen. This was, apparently, “a thing that happens sometimes.” Supposedly I would grow out of it. My parents, who could barely discuss the rudiments of sex and female anatomy with me, seemingly weren’t interested in visits with any doctor other than my pediatrician.

When I started community college, I acquired both the legal freedom of being an adult and modest financial freedom from a part-time job. A romantic interest encouraged me to visit the local Planned Parenthood for birth control pills, “just to be prepared.” The boyfriend didn’t last, but the knowledge that hormonal birth control could control my cycle and reduce the nausea was amazing.

I tried to stay on the pill, but between sneaking out to go to the clinic and spending a good chunk of my tiny paycheck, it didn’t last. Back to trying to hide behind “No, really, I’m fine.” I didn’t grow out of it.

A few years later, it was time to actually go to a real university and move out of my parents’ house. That meant campus health services, covered by a mandatory student fee. It wasn’t insurance, exactly, but I could go to the clinic and see a doctor. I didn’t really think about it much, being more concerned with suddenly managing my own schedule, living with dorm roommates, and all the other normal things young people do going from family home to university campus. I had a Differential Equations class to pass.

The evening before my Diff Eq final, I had a particularly awful menstrual episode. I managed to drive back to the dorm from study group, but fell over vomiting outside the building. I wasn’t even surprised when people walked by saying only that I needed to sober up. (I don’t drink.) I crawled to my room and called a friend, who came right over.

And immediately called campus emergency.

I narrowly avoided being transported by the nice EMTs because I was able to muddle through the name, address, and number of fingers quizzes. They made me swear I would go to the clinic as soon as I could. The next afternoon, when I could walk without nausea again, I first told my Diff Eq prof I wasn’t going to contest the F for missing that morning’s final (I already wasn’t doing that great) and then went over to campus health.

When I signed in for a drop-in appointment, I said I had been ill but also wanted to speak to someone about birth control. That didn’t seem to have made it back to the doctor, however. After going through my history and what happened, she asked if I had ever considered birth control pills. “That’s what I’m here for.” I needed to make an appointment with the Nurse Practitioner, who did the pelvic exams and dispensed pills, but they would get me set up right away.

The next semester was so much different. For the first time, I knew when I was going to get my period. Better yet, no vomiting! Ever! I could plan trips without concern I might get ill. I didn’t have to sit in class wondering if everyone (90% men) could tell that I wanted to puke. A whole part of my brain stopped having to worry about that anymore. That was 27 years ago. I have been on hormonal birth control continuously ever since.

In school, and later at work, my schedule was no longer unexpectedly interrupted by “Female Things.” If you think this is somehow all needless drama over trivial matters, try staying on good terms with your job when you can’t show up to work all the time, every time. There is a quiet horror in knowing that you’ll always have to tell your boss (male) and your co-workers (male) that you can’t come in today because of “Female Things.” It’s already difficult enough explaining that you, too, have an engineering degree and, no, you weren’t planning to go get coffee. Using all your combined vacation and sick leave for being ill is not a great way to look like a reliable, hard-working member of the team, worthy of desirable projects and promotions.

In the years since, I’ve had to go through all kinds of machinations to keep access to this medication. Doctors would write “nonspecific vaginitis” so the exam would be covered by my insurance. Occasionally one would ask why I wanted birth control if I wasn’t married. (I didn’t stay long with those doctors.) I’d move from job to job, from state to state, and from insurance company to insurance company, never entirely sure if I could get it covered until I came to California (where it was already required by state law.) Even now, it’s not a cheap medication to buy without insurance coverage. Back then there wasn’t even a generic version of the one I use.

The Affordable Care Act changed that. All of that. Contraception is a normal service, as are routine medical visits for preventive care. I don’t have to explain why and I don’t have to wonder if. It’s there. It’s a non-issue. I can apply my full attention to the important things in my life. Not where I’m going to come up with hundreds of dollars a year for something that allows me to work in my field, where a 40 hour week is laughably unrealistic.

There are so many other reasons the Affordable Care Act changed people’s lives for the better. This is just one part of my own story. Good health isn’t something nice to have if you can afford it, it’s the foundation on which we build a sustainable society where everyone gets a chance to find their own success. Don’t let it vanish.

I’ve been having this ongoing argument with my test environment over my github ssh key. First it was just when I used a particular dev housekeeping script my project needs, but now every time I log out of a shell, my github ssh key goes missing. I have to remember to ssh-add before I want to do something involving github.

I tracked down the problem to ssh-agent, which apparently works fine for most people but for me requires constantly adding my key back. I found a solution in the keychain package for Ubuntu, which manages ssh-agent across logins.

I already had my key, so I only had to install the package and add the appropriate configuration to my .bashrc. Now, at login, ssh-agent is started if needed, and my github ssh key added. I can logout and login all I like, access github from my shell, and only re-enter my passphrase if I reboot the machine. (I haven’t tested that part yet.)
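
For reference, this is roughly what that setup looks like. It’s a sketch that assumes a key named id_rsa, so adjust to match whatever your key is actually called.

sudo apt-get install keychain

and then at the end of .bashrc:

# start ssh-agent if needed and load the key; the passphrase is only
# asked for the first time after a reboot
eval $(keychain --eval --agents ssh id_rsa)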

I had started to wonder why I had bothered with an ssh key in the first place. But with this, it now works as expected. So much nicer.

Now that I know my YAML file works, because the generated curl command succeeds, it’s time to look at why the interactive version doesn’t.

The way this is supposed to go is that you click the little red dot on the right in your spiffy new doc section, enter your credentials, and then fill in the blanks to test right there from the documentation. Magic! But why would this fail when the similarly auto-generated curl command succeeds? It’s because the browser and server restrict access to resources requested from a different origin than the one that served the page. This is good for safety, but sometimes cross-origin access is exactly what you need. The mechanism for allowing it is called cross-origin resource sharing (CORS), and it was mentioned in the Swagger docs. But until I ran into a problem, I wasn’t sure what I was supposed to do with this information.

Swagger UI sends an OPTIONS request, a CORS “preflight,” to the server as the opening of a handshake; if the server answers with the right headers, the browser allows the actual request to follow. (That is still a rough description; hopefully someone who actually understands this can do better.) I saw those OPTIONS requests, but then nothing happened: there was no POST, and my new message in Zulip was never created.

I still can’t fully explain it, but basically some extra headers are required from the server to verify that the requesting domain should be allowed to request the desired resource. The generated sample back-end already took care of that and it wasn’t a problem. But for a real API like Zulip, in an environment not expecting what Swagger needs, it means changing something with Django. I know nothing about Django.
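
As far as I understand it, the header exchange looks roughly like this. The origin here is just a placeholder, and the exact set of headers varies.

OPTIONS /api/v1/messages HTTP/1.1
Origin: http://docs.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: authorization, content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://docs.example.com
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: authorization, content-type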

I did some research and found a Django app (I had no idea there was such a thing) that handled this: django-cors-headers. I installed and configured it and suddenly everything worked. The bad part is that means a 3rd party package must be installed on any Zulip server that wants to work with Swagger UI documentation.
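
For the record, the configuration itself is small. This is approximately what went into the Django settings; treat it as a sketch, since the exact setting names can differ between versions of the package, and a real deployment would whitelist only the host serving the docs instead of allowing everything.

# settings.py additions for django-cors-headers (sketch)
INSTALLED_APPS = [
    # ... the existing apps ...
    'corsheaders',
]

MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',  # near the top of the list
    # ... the existing middleware ...
]

# wide open for this experiment; restrict the origins for anything real
CORS_ORIGIN_ALLOW_ALL = True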

I looked into what django-cors-headers does, and it is a lot more than just creating the necessary headers. Much of it is the tests, administrative features, and other things that make it a useful piece of software. If I continued to investigate, I could certainly figure out the actual minimal change I’d need on the server side. But why do that when there’s already a nice package available? Except not everyone has the luxury of just installing something. That is a question I will need to answer before deploying this to production.

But now I have reached the end of my proof-of-concept experiment, and I have nice interactive documentation to demonstrate how my API works. There’s much work yet to cover all the endpoints and all the features, but this shows that it can be done.

The basic steps are these:

  • Download the swagger-ui repo from github and drop the dist directory it contains in an appropriate location on your webserver
  • Build a YAML file that describes your API, according to the OpenAPI/Swagger spec
  • Update index.html to point to your file and make any other desired changes (like I did for language)
  • Configure your webserver, if needed, to handle CORS correctly

This post builds on what we’ve learned already to create a working Swagger front-end that talks to a real server. (See Testing Swagger and Making Swagger work for real, part 1)

The previous example used generated server code for the back end, so it worked nicely with the Swagger UI front end by design. The next step is to make it work with a real Zulip server, which was not designed with Swagger in mind.

For the codegen-based test, a curl command to create a new item looked like this:

curl -X POST --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
-d '{ "id": 5, "name": "potato" }' \
'http://10.2.3.4:8080/api/items'

But creating a new message in Zulip looks like this:

curl -X POST \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Accept: application/json' \
--header 'Authorization: Basic MyAuthHash' \
-d 'type=stream&content=bar&to=test&subject=foo' \
'http://10.2.3.4:9991/api/v1/messages'

There are two differences to account for here. The first is that the Zulip endpoint is expecting url-encoded text, not JSON, in the POST data. The second is that it requires authentication. Both of these need to be correctly described in the YAML description of the API.

Building the YAML file

The store example specified that the endpoint both accepts and produces “application/json” data:

consumes:
  - application/json
produces:
  - application/json

and “in: body” in the parameter list means the endpoint expects the data to arrive in the body of the POST, in the format described by the newItem schema.

post:
  description: Creates a new item in the store.  Duplicates are allowed
  operationId: addItem
  produces:
    - application/json
  parameters:
    - name: item
      in: body
      description: Item to add to the store
      required: true
      schema:
        $ref: '#/definitions/newItem'

For Zulip, the endpoint still returns JSON, but is expecting to receive x-www-form-urlencoded data (just like an HTML form.)

consumes:
  - application/x-www-form-urlencoded
produces:
  - application/json

The individual parameters must be defined as “formData”, which doesn’t use a separate schema definition:

parameters:
  - name: type
    in: formData
    description: type of message to create
    required: true
    type: string
  - name: content
    in: formData
    description: content of message to create
    required: true
    type: string
  - name: to
    in: formData
    description: recipient of message to create
    required: true
    type: string
  - name: subject
    in: formData
    description: subject of message to create
    required: true
    type: string

Now that the incoming data is defined, we need to make it use Basic Auth to authenticate with the Zulip back end. The first part of that is to add a “securityDefinitions” section:

securityDefinitions:
  basicAuth:
    type: basic
    description: HTTP Basic Auth

and, on the operation itself, specify that it requires this security:

security:
  - basicAuth: []

Here’s the full YAML file:

swagger: '2.0'
info:
  version: '1.0.0'
  title: Sample API
  description: Some Stuff I wrote
  termsOfService: http://example.com
  contact:
    name: Feorlen
    email: nobody@example.com
    url: http://example.com
  license:
    name: Foo
    url: http://example.com
host: 10.2.3.4:9991
basePath: /api/v1
schemes:
  - http
consumes:
  - application/x-www-form-urlencoded
produces:
  - application/json
securityDefinitions:
  basicAuth:
    type: basic
    description: HTTP Basic Auth
paths:
  /messages:
    post:
      description: Creates a new Zulip message
      operationId: addMessage
      produces:
        - application/json
      parameters:
        - name: type
          in: formData
          description: type of message to create
          required: true
          type: string
        - name: content
          in: formData
          description: content of message to create
          required: true
          type: string
        - name: to
          in: formData
          description: recipient of message to create
          required: true
          type: string
        - name: subject
          in: formData
          description: subject of message to create
          required: true
          type: string
      security:
        - basicAuth: []
      responses:
        '200':
          description: message response
          schema:
            $ref: '#/definitions/messageResponse'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
definitions:
  messageResponse:
    type: object
    required:
      - msg
      - result
      - id
    properties:
      msg:
        type: string
      result:
        type: string
      id:
        type: string
  errorModel:
    type: object
    required:
      - code
      - message
    properties:
      code:
        type: integer
        format: int32
      message:
        type: string

With this configuration, Swagger UI generates a curl command that can connect to the server, authenticate, and create a new post. The problem is, however, that the nice clickable demo that Swagger UI offers doesn’t.

I’ll get into that with the next post.

When we last left our intrepid explorer, there were bits of YAML all over the floor, but the shape of a functioning API doc could be seen emerging through the debris. Or at least I remember it that way. (Here’s the previous blog post.) With the basic Swagger functionality proven, the next step is to make it work in something like the desired real environment. It does, but getting there was a bit of a winding path through parts of webapp development I hadn’t looked at yet. That’s another long story, and I’ll get to it.

Before I do, I want to talk a little about file formats. As I make this project happen for real, I need to choose how I’m going to write the spec file that drives it. That doesn’t just affect me, but anyone who later needs to read or edit it. It’s not something to be taken lightly, and is a hotly contested topic.

Here’s a nice discussion illustrating the concerns: Tom Limoncelli on TOML vs JSON. (No, TOML isn’t named for Tom L., but he does find that point amusing. If you like servers, you should read his stuff sometime.)

The Swagger (or OpenAPI) specification says you can use either YAML or JSON. The Swagger editor works in YAML, so clearly they have an opinion. (You can import and export JSON.)

My choice is also YAML, for these reasons:

  • It allows comments. You can annotate what something is, why you added it, or comment out work in progress.
  • It is primarily whitespace-delimited. That makes it easier to visually scan than JSON’s braces, brackets, and quotes.
  • That whitespace hierarchy, as annoying as it can be, is already familiar to Python developers. Zulip is mostly written in Python, so the developers maintaining it will already be accustomed to this.

Neither of these formats is really “human readable” if by that you mean “can be reliably maintained by average computer-using humans.” They can’t. They have strict format requirements, and any deviation results in cryptic errors and non-obvious failures. I think YAML is better for humans than JSON, but even that isn’t saying much.
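
For a tiny concrete example, here is the same fragment, the kind of thing these spec files are full of, written both ways. The content is identical; only the notation differs.

consumes:
  - application/x-www-form-urlencoded
produces:
  - application/json

versus

{
  "consumes": ["application/x-www-form-urlencoded"],
  "produces": ["application/json"]
}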

I started this experiment with YAML, and I see no reason not to stick with it, despite much cursing in the process of building the file for the endpoint I was trying to model (and it isn’t even complete). The tools are lousy. The failure modes are opaque. Some of that was my inexperience with the Swagger standard, but not all of it.

At one point I was making changes to my file in a text editor and pasting it into an empty Swagger editor window to see if it would validate. Because the errors didn’t make sense. Something, I’m still not sure what, introduced a tab character I couldn’t see. Between the four editors I had open, copying and pasting between them (one explicitly configured to not use tab characters) somehow it happened. Welcome to being a developer.

JSON, on the other hand, has so much visual clutter that I can’t read any non-trivial example without reformatting to add whitespace. Fans like that it’s more compact than YAML. (Whitespace characters are characters, tabs are forbidden, and more characters mean more bytes.) There are also more tools that work with it, because much of the web runs on data exchanged as JSON. But if I don’t have a reason to need maximum efficiency and performance, I’m going to choose the option that makes it easier on the human developer every time. (For the opposite end of this decision, see protocol buffers, a decidedly not human-readable format. It’s also not used for configuration files.)

So YAML it is. In the next post, I’ll talk about what I did with it.

Swagger is a language for defining an API, with an ecosystem of tools to generate both code and documentation. I’m experimenting with it to see how much I can use it to automate creating API docs for Zulip. Its full power is in defining your API functionality in one file, and then generating both documentation and skeleton code for multiple programming languages. You still have to implement the actual behavior behind it, but it generates the structure to do that more easily.

As with most code generators, I’m not sure how useful this is going to be for an already existing system not designed with the same structure. But it’s useful enough as a doc tool if you are willing to create the spec by hand. Then you get some nice pretty pages with, in theory, the ability for 3rd party developers to test API samples right from the documentation.

My first pass is to set up some static docs, and then a basic generated API and docs with functional demos.

The simplest way to have docs is Swagger UI, a viewing front-end. I thought, from reading blog posts and tutorials from other users, that I had to modify Swagger UI itself to create docs from my own spec. That is a mess of installing a bunch of other things that I never got working. After some advice from the IRC channel, I learned I didn’t actually need to do that.

You can literally take a single directory out of the downloadable source from GitHub, add your custom YAML or JSON file defining your API, and stick it on a webserver. (For more, go to the Swagger docs and look for “Swagger UI Documentation”.) You don’t get nice interactive examples this way, but you do get a well-structured description of your API.

The default uses the Swagger pet store example, so in index.html you will need to change the line

url = "http://petstore.swagger.io/v2/swagger.json";

to be where you have your definition file. I put mine in the same directory; it can be either YAML or JSON. That’s it.
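
Concretely, with the definition file sitting next to index.html, that line ends up as something like this (the filename is just an illustration):

url = "zulip.yaml";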

If you don’t want functioning examples (which won’t work in this simplistic demo), also change which methods are allowed for the “Try it out!” button. Remove all of them to disable the button altogether, or only some if you need to disallow testing certain types of requests.

supportedSubmitMethods: ['get', 'post', 'put', 'delete', 'patch'],
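
So, presumably, disabling the button entirely looks like this:

supportedSubmitMethods: [],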

You can also specify which language you want Swagger UI to use for its static content (look in the lang directory), although from my testing it appears you can only pick one. I haven’t figured out how to actually provide localized content in the user’s own language.

<!-- Some basic translations -->
<script src='lang/translator.js' type='text/javascript'></script>
<script src='lang/it.js' type='text/javascript'></script>

So that’s it for the super-simple option.

To generate a working API from your spec is a little more complicated, but not too much. In addition to creating your YAML or JSON file, you need to generate server code for it and install it on your own server. I had to install several additional packages, too.

For this, you need to start with Swagger Editor. You can install it locally, but I just used the web version.

I started with one of the samples available in the web version, a simplified pet store.

File-> Open Example -> petstore_simple.yaml

Edit the sample file as desired; the important things to note are the hostname and port of your server, and the base path where your API endpoint URLs will start. Here’s mine:

swagger: '2.0'
info:
  version: '1.0.0'
  title: Swagger test store
  description: A sample API that uses the swagger-2.0 specification
  termsOfService: http://feorlen.org
  contact:
    name: Feorlen
    email: foo@example.com
    url: http://swagger.io
  license:
    name: MIT
    url: http://opensource.org/licenses/MIT
host: 10.2.3.4:8080
basePath: /api
schemes:
  - http
consumes:
  - application/json
produces:
  - application/json
paths:
  /items:
    get:
      description: Returns all items from the system that the user has access to
      operationId: findItems
      produces:
        - application/json
        - application/xml
        - text/xml
        - text/html
      parameters:
        - name: tags
          in: query
          description: tags to filter by
          required: false
          type: array
          items:
            type: string
          collectionFormat: csv
        - name: limit
          in: query
          description: maximum number of results to return
          required: false
          type: integer
          format: int32
      responses:
        '200':
          description: item response
          schema:
            type: array
            items:
              $ref: '#/definitions/item'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
    post:
      description: Creates a new item in the store.  Duplicates are allowed
      operationId: addItem
      produces:
        - application/json
      parameters:
        - name: item
          in: body
          description: Item to add to the store
          required: true
          schema:
            $ref: '#/definitions/newItem'
      responses:
        '200':
          description: item response
          schema:
            $ref: '#/definitions/item'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
  /items/{id}:
    get:
      description: Returns a user based on a single ID, if the user does not have access to the item
      operationId: findItemById
      produces:
        - application/json
        - application/xml
        - text/xml
        - text/html
      parameters:
        - name: id
          in: path
          description: ID of item to fetch
          required: true
          type: integer
          format: int64
      responses:
        '200':
          description: item response
          schema:
            $ref: '#/definitions/item'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
    delete:
      description: deletes a single item based on the ID supplied
      operationId: deleteItem
      parameters:
        - name: id
          in: path
          description: ID of item to delete
          required: true
          type: integer
          format: int64
      responses:
        '204':
          description: item deleted
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
definitions:
  item:
    type: object
    required:
      - id
      - name
    properties:
      id:
        type: integer
        format: int64
      name:
        type: string
      tag:
        type: string
  newItem:
    type: object
    required:
      - name
    properties:
      id:
        type: integer
        format: int64
      name:
        type: string
      tag:
        type: string
  errorModel:
    type: object
    required:
      - code
      - message
    properties:
      code:
        type: integer
        format: int32
      message:
        type: string

I edited this elsewhere and then pasted it back into the online editor. The right pane will tell you if you messed up anything required. Now generate server code based on this API definition. I’m using Python Flask, but there are many to choose from.

Generate Server -> Python Flask

Download the resulting zip file and put its contents on your server somewhere. Since I’m using port 8080, I can do this in my own home directory without running as root. (I’m making my changes on the server directly, but edit locally and then upload if you like.)

First, customize the generated code:

In app.py I added my correct host, some extra logging (use “INFO” instead of “DEBUG” for less output), and changed the title. I haven’t figured out where this title is actually used; maybe I’ll find it eventually. It now looks like this:

#!/usr/bin/env python3                                                                              

import connexion
import logging

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.add_api('swagger.yaml', arguments={'title': 'Sample Swagger server'})
    app.run(host='10.2.3.4', port=8080)

I also had to edit the generated implementations in controllers/default_controller.py to remove type hints because my version of Python doesn’t support them.

def add_item(item):
    return 'do some magic!'

Those are the basic changes. The code doesn’t do anything besides return a static message, but is otherwise functional. I haven’t figured out how to change the language for Swagger UI, but maybe that is possible.

Now install any necessary packages. I already have Python 3 and Flask, but I need Connexion to handle HTTP. To get that, I first have to install a version of pip that can handle packages for Python 3.

sudo apt-get install python3-pip
sudo pip3 install -U connexion

Next add execute permissions to the server script

chmod 744 app.py

and run the server

./app.py

And then the UI is available on port 8080 (note my base path is used here.)

http://10.2.3.4:8080/api/ui/


I can interact with the API using this page, or by copying the curl examples and executing them in a shell. Click on the example values to fill the sample payload into the appropriate field.

curl -X GET --header 'Accept: text/html' 'http://10.2.3.4:8080/api/items'
do some magic!
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{ "id": 5, "name": "potato" }' 'http://10.2.3.4:8080/api/items'
"do some magic!"

The next step of this example is to actually implement the endpoint functionality. For my purposes, my next task is to figure out how to make the web UI work with an existing API that was not designed to use it.

I’ve been working on a small change to the open source project I mentioned, Zulip. The code required was trivial, but I spent most of my time figuring out a rational order of operations and being confused about how to accomplish them with the version control system.

I’m certainly no stranger to source code version control, but mainly with commercial systems. Git, used by many open source projects, isn’t much like the others I have experience with. The Zulip project has documentation about how they use git and GitHub, the website that provides a hosted version of that system. But, of course, not everything can be anticipated in advance (and documentation can have bugs too.)

The funny thing is, I’ve used github some before. But only in a very narrow way. If you have a private repository where you directly submit changes to a main branch, there’s not a lot to it. Combining together different changes can be a nuisance, but that’s going to happen in any system like this. And if there’s only one or a few people making changes, the need for it can be nearly avoided with a tiny amount of discipline.

Where things get complicated is when you have a bunch of people working all at once. Even more when they are only loosely coordinated. Git assumes developers will work on what they want, and a handful of administrators will direct traffic with incoming requests to include new code in the main repository.

One thing that I was misunderstanding is how (little) git thinks about branches. Branches are normal things in version control: you make one to take a copy of existing code so you can safely modify it away from the central, primary version.

In some systems, this is a resource-intensive operation where each branch is literally a copy of everything. Git doesn’t work that way. Since it functionally costs little to make a branch, branching is encouraged. You have your own copy of the code at a particular point in time. Both you and other people can make changes independently on different branches. You make some more branches. In the git universe, that’s no big deal. Time marches forward.

After you do your thing with your branch, you probably want to somehow get it back into the main repository. I’m most familiar with merging, where the system compares two parallel but not identical sets of source code and figures out if the changes are neatly separated enough for it to safely mash them together for you. Some automagical stuff happens, and the result becomes the latest version. (This latest revision is typically called “HEAD”.)

If not, you get to do it by hand. Use a merge-intensive version control system for a while, and you will absolutely find yourself dealing with a horrific mess to unravel. Merging is ugly but, if you are used to it, it’s a known ugly. That’s a certain kind of comfort. You can do that with git if you want. Many people do.

And many people don’t.

One thing about branches: many systems consider HEAD the be-all and end-all picture of reality. You might not be happy with the most recent version of your branch, and you can keep a pointer to the revision you’d rather have, but HEAD is always the most recent version. If you don’t like it, you make a change and now you have a new HEAD. Time always moves forward. Rewriting history, to the extent that it can be done, is only for the most dire of emergencies.

Git has something called “rebase.” You can use it in a couple different ways, but it’s basically the version control equivalent of a Constitutional Convention: everything is on the table. You don’t like the commit message from three changes ago? Rebase. Want to not have those 47 typo-fixing revisions you created? Rebase. It’s also an alternative to merging: instead of mashing two histories together, your branch’s commits are reapplied on top of the other branch’s current HEAD, so the changes made between the time you branched and now end up underneath your work. (If you want a real explanation, here’s a PDF that helped me understand how rebase works.)
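
For what it’s worth, both uses boil down to a couple of commands. The branch and remote names here are only placeholders.

# rewrite the last few commits on the current branch: squash, reorder,
# or edit the commit messages
git rebase -i HEAD~3

# replay this branch's commits on top of the project's current master
git fetch upstream
git rebase upstream/master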

Coming from a merge-land where HEAD is sacred, this terrifies me. You are going into the past and messing with history, and that Just Isn’t Done. Admit that you checked in something with the commit message “shit is broke” and move on.

When branches are expensive and you don’t want to make too many of them, you have to protect the integrity of the ones you have. The idea of something like rebase is dangerous, and with great power comes great responsibility.

When branches are cheap, and you make one because you feel like watching what happens when you delete the database maintenance subsystem? Well, have fun. Clean up after yourself when you are done. It’s not exactly a different universe, but you think about some things in different ways. I’m not entirely there yet, but rewriting history is apparently one of those things.

In making my code change, I ran into a bunch of small things I didn’t understand. I was concerned that I’d do something that would make a mess, and it would be hard to clean up. I didn’t yet know the commands that would have helped. I didn’t understand the multiple purposes of others. I was entirely terrified by the idea of rebase. (I still mostly am, to be honest.)

I made a small mess attempting to merge in an environment that was expecting a rebase. And then halfway in I attempted to cancel but it was applied anyway. There were a few mysteries as things seemed to behave inconsistently. Some of it would have been easier if I had thought to create a branch to try something, against my previous conditioning.

So that happened.

I’ve, of course, considered such services for a long time. My first serious identity theft episode (besides credit cards) was about 15 years ago, when I was informed by my mortgage loan officer that I would not be getting that top-tier rate we had previously discussed.

There were items sent to collections I had never heard of. Addresses reported where I had not lived. There was an unscrupulous collections agency who took my report of fraud, attached to their record the full correct contact info they required me to give them, and submitted it again to the credit agencies as valid.

Among other things, the thieves signed up for local telephone service. But the phone company had No Earthly Idea where they might be located and apologized that they would be unable to help me on that issue, Thank You And Have A Nice Day. A police department in a state I never lived in refused to accept a report except in person. I couldn’t get anyone to tell me if the drivers license number on one of the credit applications meant someone applied for one in my name. My own state and local authorities wanted nothing to do with it, because the visible crime happened elsewhere. “You could try calling the FBI, but they are only interested in cases over a million dollars.”

At one point, when I was having a rather convoluted “discussion” with one of the credit bureaus, I offered to come to their office with paper copies of documents supporting my request to remove the fraudulent items. The main corporate office was ten minutes’ walk from my workplace. They offered to call the police if I explored that possibility.

This took several years to fully clean up, continuing even after I moved to California. I still have to assume that my personal information is sitting out there, waiting for someone else to abuse it. For all practical purposes, I have a lifetime subscription to credit reports on demand.

So let’s just say I’ve gotten pretty good at this. It’s a giant pain in the ass, but not enough to pay someone a monthly fee for the rest of my life (and probably after.) Particularly when the services available consisted of little more than automated credit report checking. Once in a while something happens, I spend a few weeks arguing about it with various companies, and then it goes away. (Until next time.)

So what changed?

Well, you might have noticed I know a thing or two about computers. Keeping them safe and secure, to the best of my abilities and time available. You would not be surprised to learn that I like backups. Backups! Backups as far as the eye can see! Backups that run hourly. Backups that are swapped out whenever something has the slightest suggestion of a hardware blip. Backups that live in my travel bag. Backups that live at my mother’s house. And backups that live in my car.

My usual “offsite backup” stays in the car glovebox. Every so often (I try for at least monthly) I take it inside and refresh it. We do have a storage unit and I could keep it there, but it’s far less convenient. That means it would be updated less often, and monthly is already not that great.

My laptop backup is encrypted, as are all of my USB hard drives if possible. My server backup is one of those that is not, because the OS version is too old. So my glovebox backup is one USB drive with two volumes, one encrypted and one not.

The unencrypted server backup always concerns me a bit. If someone knowledgeable got it, it has all the information necessary to royally screw with my server. That’s a problem. But eventually that server will be going away, replaced with something better. And it’s a basic machine that runs a few websites and processes my outbound email. (I haven’t hosted my own inbox in years.) Yeah, having some archived files of ancient email released would not be fun. But that’s the extent of anything that would impact my actual personal life.

I’d rather not have my backup drive stolen out of the car, sure. It would be annoying, both for the car and having to lock down my server. But it wouldn’t be the end of the world.

So that’s not it, what else? (I’m guessing, at this point, you have some idea that there will be a car chapter to this story.)

A few weeks ago, my spouse decided that this offsite backup thing wasn’t such a bad idea. The thought of having to use it, because the house burned down or all our stuff was stolen, is not pretty. But it’s better to have something in that situation than have nothing. And it’s not that difficult to remember to update and put back once in a while. So he did.

Given that he’s the inspiration for the “tinfoil hat” userpic I have on one of my social media accounts, I presumed it was encrypted. He has many years’ experience in professional system administration and is far, far more paranoid than I am. Nothing with a name or address is discarded intact. He insists the shredding goes to a facility where he can watch it being shredded. When I moved to California, he would not use the cheap 900 MHz cordless phone I brought with me because it was insecure. He doesn’t like my passwords because sometimes I have to choose ones that are capable of being manually typed within two or three tries.

Guess what. Oops.

A few days ago, someone broke into our car and ransacked the glovebox. The only things taken were a small bag of charging cables and two hard drives, mainly because there was nearly nothing else to be had. (This is, by far, not my first rodeo.) Car documents, paper napkins, and some random receipts were scattered about.

One of those hard drives is my spouse’s unencrypted laptop backup.

First I dealt with the immediate problem of filing a police report, which took about 20 minutes on the phone. It is a process that is at least highly efficient, since it is almost certainly useless in getting our stuff back or even in identifying a suspect. But to be able to discuss this with my insurance company, it needed to be done.

Then came the discussion on what, exactly, was on that hard drive: it’s a copy of his user directory. So it didn’t contain system passwords, but that was about the only good thing that could be said. He uses a password manager for many things, but it’s not possible to protect everything that way. Years of email, confidential documents, client project details, credit card statements, tax returns, the medical documents I needed him to scan for me while I was out of town. All there. I handle most of the household finances, so a great many more items are instead on my machine. But sometimes you have to share, and things get passed around.

It’s almost certain that the thief didn’t care about the data. But wherever those drives get dumped, or whoever they are sold to, somebody very easily could. Names, addresses past and present, names and addresses of family members, birth dates, social security numbers, financial account numbers, everything necessary to utterly ruin our financial lives.

I’ll have more to say in other posts: which service I chose, what happened with the car, and how this story develops. But that explains why, after many years of not being impressed with paid monitoring services, I have now forked over my money for one.

The past week I started looking at Zulip, an open source group communication tool. It has web and mobile clients, and a Python back end. I ran into a few speedbumps getting my development environment set up, so this is my collection of notes on that process. If you aren’t interested in Linux or Python, you might want to skip this post as it’s full of sysadmin stuff.

The Zulip development setup instructions are good, but assume you are running it on your local machine. There are instructions for several different Unix platforms, the simplest option is Ubuntu 14.04 or 16.04. (The production instructions assume you want a real production box, and Zulip requires proper certs to support SSL. Dev is plain old HTTP.)

The standard dev setup walks you through installing a virtual environment with Vagrant. But I’m using my Ubuntu test box, an Intel Next Unit of Computing (NUC). Many folks use these for small projects like home media controllers because they are inexpensive, low power, and self-contained. But hefty they are not. I have 2 GB of RAM and a 256 GB SSD, so I decided to go with the direct Zulip install without Vagrant. It isn’t complicated, but there isn’t a nice uninstall process if you want to remove it later. (I’m not worried about that for a test machine.)

I installed in my home directory, as my own user, and started with the suggested run-dev.py with no options. The standard configuration listens only on localhost, which was problem number one. I could wget http://127.0.0.1:9991 so I knew something was working, but I didn’t have a web browser and I couldn’t access it with one from another machine.

I looked through the docs, which are pretty good on developer topics but have some thin spots, and didn’t find anything that looked like a command-line reference. There was one mention of --interface='' buried in the Vagrant instructions, but with a null argument its purpose wasn’t obvious. I asked in the Zulip dev forum (which is actually a channel, or “stream,” on a public Zulip instance) and learned that this is where I should specify my machine’s address.

So my start command looks like this:

$ ./tools/run-dev.py --interface=192.168.30.110

This is where I get to speedbump number two. (I’ll skip over some random poking around here.)

The instructions say the server starts up on port 9991. Ok, great. The last part of the log on startup ends with this:

Starting development server at http://127.0.0.1:9992/
Quit the server with CONTROL-C.

This, to me, says that it’s running on port 9992. Having seen previous cases of services failing to promptly release their ports, and then working around that by incrementing the number and trying again, I didn’t think much of it. I had stopped and started the process a bunch of times. This is a development install of an open source project under active development. Ok, whatever, I’ll investigate that later. 9992 it is.

Except it wasn’t. The web UI was indeed listening on 9991 as advertised, but I didn’t realize it. The closest thing I saw in the logs suggesting this was the line

webpack result is served from http://127.0.0.1:9991/webpack/

but since I’m brand new to this project and have no idea what webpack is, that didn’t mean much. It took a couple stupid questions to work out, but eventually I got all the parts together.

So, to summarize:

Read the install directions. They work.

If you are installing on another host, set that host’s IP address with an option when you start the process:

run-dev.py --interface=192.168.30.110

And look at port 9991 on that host for the web interface.

http://192.168.30.110:9991

From Stolen Wallet to ID Theft, Wrongful Arrest

I saw this article today and it reminded me of one of the identity theft disasters I went through many years ago. While I was investigating accounts that had been opened in my name, I found that one had a drivers license number associated with it. It obviously wasn’t mine, because it was from a state I never lived in. But if it had been, things could have gone very differently.

This person discovered it the hard way, as he was arrested for crimes committed by someone pretending to be him. And that was even after having reported the theft to the local police.

The blog post goes on to discuss things to do after a wallet is stolen. It’s a list worth reading.

I try to make it difficult to grab my bag, but as we’ve noticed, that doesn’t always help. Isolating the valuable and important things I do have to carry around with me did, though. They only got my phone, cash, and a few other minor things that were in my transit wallet, and the effort required to get past the security features of my bag meant I knew immediately.