
This week my internship with Zulip and Outreachy officially ends. I don’t intend to stop working on Zulip; one of the reasons I applied was that it was something I could see myself continuing to contribute to. I’ve learned new technologies, applied well-practiced skills, met new people, and explored things I knew would be a challenge. I’ve used this blog to talk about what I have been working on, but I haven’t said much about why I applied. I wanted to focus on the work at hand.

So now would be a good time for that. But first, a little background.

I’ve known about Outreachy for a while; I briefly considered applying back when it was still the Outreach Program for Women. At that time, I was a maintenance engineer, fixing software bugs for a medium-sized tech company. I had been there a while and was thinking about different directions I could take my career.

I started out on a pretty typical CS path, with a degree and jobs on engineering teams. But things rarely go as planned, and eventually I landed in support and maintenance. I was writing code, but I wasn’t doing what many of my engineering peers were: automated testing, cloud services, iterative Agile development. I had looked at some open source projects as a way to try new things, but few seemed approachable. And the timing for OPW was inconvenient.

Instead I joined a tiny, tiny startup. I began comfortably in C code but rapidly picked up anything in the product that needed to get done: server features, network problems, mobile clients, monitoring, you name it. I wrote JavaScript and fixed Android bugs. I did all kinds of things I knew nothing about. Some of it stuck, and for some of it I still can’t explain what I thought was going on.

But the company didn’t go far, and I found it complicated to talk about work where little of it was visible and the code was proprietary. I have a hard time with portfolio projects because I can’t stay excited about an abstract problem solved in a theoretical vacuum. I’m much more interested in how interconnected parts work together, and that’s not something that shows well in a 30-second demo.

I knew Outreachy was not just for students, although mainly students and recent grads apply for it. It’s the nature of the thing: if you are an established working professional, it’s hard to take off a bunch of time to try something new. If you have other responsibilities besides work, doubly so. But I was able to, and saw it as an opportunity to explore a new area and build a visible record of my work. It’s an excellent professional opportunity, one that I’m fortunate to be able to consider. Even better that it improves open source software in the process.

There was one little word, however. “Intern.”

I’ve been an intern before. My school strongly encouraged all engineering students to do two “co-op” semesters, and I did. (I wrote documentation for a software company.) But as a middle-aged professional, sometimes when I mentioned I was applying for an open source internship program I’d get a funny look and a one-word response: “Why?” Wasn’t I a career software engineer already? I’d explain that it’s an opportunity to move into a new area, that I’m excited about the possibilities, and then everyone understood. But it was awkward. I was already questioning a culture where “rockstar” new grads land huge compensation packages while experienced engineers struggle through interviews about abstract CS theory. So, yes. Awkward. I had to think about that to be comfortable with it.

The application process was challenging, not only because I was learning a new codebase and new tools, but because I had to prepare a proposal for something I knew almost nothing about. I approached it as I would a professional task: spec and estimate new features appropriately scoped for a three-month deadline. And how would I know what was reasonable? I had no idea. Yet experienced people answered my questions and encouraged me to build a solid but flexible plan where the schedule and tasks could be revised later. That was good to know. I was excited to learn I was selected. I was paired with two mentors: Sumana Harihareswara, already an active Zulip contributor, and Tollef Fog Heen, who has experience with services and APIs.

I knew I had signed up to do a lot of engineering work, and was confident I could execute to plan (for some value of “plan,” at any rate). There were new things to learn, a new codebase to become familiar with, and all sorts of stuff that you deal with again and again when changing jobs. And this was a job: a full-time commitment over the course of the program. I wasn’t too concerned about that part.

The other things I would learn, I didn’t really know. Not in a “I have no clue” way, but more in that every new environment has things that come up or happen in unexpected ways. One new part in this was the open source component. I’ve worked on plenty of engineering teams: generally there is an overall design, and individual areas are parceled out to developers or small teams to refine and implement. There are many decisions to be made, but most of the big ones are (hopefully) at least sketched out in some kind of architecture plan. Often lead engineers have strong opinions about how and why and where.

My few interactions with other open source projects suggested that outside contributions were a nice thing as long as they weren’t too taxing for the core team. Clearly this was a different situation and I wouldn’t be left to my own devices, but it took some time to sort out where I was comfortable between working mostly on my own and seeking input beyond basic questions. After all, everyone was busy working on their own tasks, usually between other responsibilities. I was adding new functionality rather than working in an already established area, so I was unlikely to break a core feature. But I wanted it to fit with established standards and match overall goals. This was an area where my mentors were especially helpful: how often to ask busy people for feedback, and what sorts of things are generally left to individual developers to handle.

Something I didn’t consider at first, and well into the program really, was learning as a specific goal. Of course, learning was a desired outcome: new skills that can be applied to other projects. Yet I’m accustomed to the task being the focus, and any necessary learning adjunct to that. I discounted the value of the effort I was putting into understanding new tools and environments, and was sometimes frustrated by my productivity. Was I hitting milestones fast enough? Sometimes chasing down problems made me question whether I was accomplishing anything meaningful. But then, in conversation with my mentors, I realized that was the point.

The biggest surprise over the course of the program had nothing to do with code. I’ve always been a strong writer, but I am best when I can edit and revise. Sometimes speaking to people face-to-face is challenging, but there is enough room in the back and forth of a live conversation that I can get my point across most of the time. (Stressful situations less so.) Zulip is a group chat system, so I was hardly surprised that I was going to spend a lot of time sending short messages back and forth. At a modest pace, this isn’t a problem.

What I was entirely unprepared for was having status meetings in chat. Attempting to convey complete thoughts about where I was on a task, while at the same time tracking questions about multiple things, was extremely difficult. It was like having an important conversation in a loud room, where so much cognitive effort is required to parse the words that there is little space left to compose a response. Chat is such a central part of the project that I kept trying until everyone was clearly frustrated. It took a phone call to sort things out, and then we agreed to handle status reports by email. Any needed discussion can still happen in chat, but most of the information has already been provided. That entirely changed the regular meetings, from something I struggled to get through to an orderly sharing of information.

There were many other things besides the technical tasks originally in my plan. At the suggestion of my mentors (and to no great surprise) I was encouraged to submit a talk to a conference. It was accepted just a few days ago, so now I can go on and actually write the full presentation for the event in May. I also added career tasks to my plan, like updating my resume and attending community events.

The visible GitHub activity will certainly be an advantage when looking for my next job. I’m happy to have found a project I enjoy participating in, and now I have several complete features I can show as code samples. I expect there will be more.

I just finished a big task, significantly expanding some documentation. The original page was a summary of several ways to integrate with 3rd party systems, and an example of one of these methods. I used this document when I first tried to create an integration, and found it didn’t cover a lot of things I needed to know. So I wanted to improve it.

The document is now two pages, the example having been moved to its own page. I added more detail to the example itself, and a new section for additional topics it didn’t cover. The revised page is now online, and I’m excited to see it available for people to use.

Getting there sometimes made me wonder how much progress I was actually making on my goal of a better docs page. There were other tasks to be done while this was happening, so it wasn’t all in one stretch. I had to learn to do the things I wanted to describe, which meant asking lots of questions of more knowledgeable folks and experimenting to confirm my understanding. I made tons of notes, written in the hope they would still make sense later when I needed them. Editing involved re-testing code examples to make sure I accurately described how they worked, at an appropriate level of detail. By the end, in addition to the written material, I had expanded one existing integration and written two entirely new ones.

The final result couldn’t have happened without preliminary exploratory work. The necessary information existed mostly in the personal experience of people who had done it before. Only some was documented. And, part way through, we decided my original idea of an entirely new document (with a new example) would be better incorporated into the existing material instead. So some partly completed writing was discarded, and extra work was needed to make the new code itself usable on its own. Wasn’t that inefficient? Does that matter? I see two different ways of looking at it.

One is that “All this is what brought me to this place.” The idea that the exploratory work was not only necessary, but important. Details that weren’t included in the final document shaped those that were, and tangents identified what was not central enough to the topic to merit space. The result wouldn’t be what it is without that work. To reference the joke in my title, this is like Carl Sagan’s apple pie, which could not exist without the entire history of cosmology that preceded it. (“If you wish to make an apple pie from scratch, you must first invent the universe,” from the original Cosmos TV show.)

The other way to think of it is that the preliminary work, or at least some portion of it, was an unfortunate necessity but not part of the actual work. Such yak shaving is often thought best to get through as quickly as possible. (“I want to install an app, but I have to upgrade my operating system first.”)

Sometimes it’s clear which camp something falls into. (I don’t need to write more OS upgrade documentation, for example.) Other times it depends on what the actual goal is. Is the information available, but in a less usable form? Then if the goal were doc updates as fast as possible, experimentation would be less relevant. (Or if less detail were expected.) But if the new working integrations are included, getting that code working would be very relevant. If personal learning is a goal, a wider range of things are fair game. Specific things may need to be time-limited, to keep on schedule.

My work for this doc task is on the pie end of this spectrum. The working code is relevant, not just a side effect of research. The research was needed in any case, because the information wasn’t otherwise available. Learning is a specific expected outcome. That hasn’t always been the situation in a traditional job, something I have to remind myself of here. My project updates now specifically include research and learning; they didn’t at first.

It sometimes felt odd to be dealing with a lot of “distractions,” or at least things that would normally count as distractions in an average work environment. (“If it’s not in the plan, why are you working on it?”) But this is a different sort of thing, closer to a research project in some ways. Not knowing how tangential tasks would be viewed caused some stress. Yak shaving isn’t considered a good thing. But pies are ok. Particularly after I started thinking of them as first-class tasks themselves.

I’m deep into preparing submissions for conference talks. This isn’t the first time, but it’s been a while and these talks have a different focus from the ones I’ve done before. I have a fair amount of prospective content together. But getting it into a cohesive abstract that is engaging, addresses the right audience, and, most of all, is short, has been a struggle. I’m grateful for assistance from folks who are much better writers than I am.

You may vaguely remember from a long-ago high school class the four types of writing: expository, descriptive, persuasive, and narrative.

Expository writing? I’m all over that. I’ve got a thing that I know about, and then I write about what that thing is, or what it does, or how to do it. There’s pages and pages (and pages and pages) of me doing this all over the internet. It’s classic technical writing: explaining something.

Those other kinds? I know about them, and sometimes use elements of them. Most people do. Where it varies is in how effective one is about it.

I want to be persuasive when I try to convince my spouse that we really should go back to Japan. Because it was so much fun, and there are so many neat things we didn’t see. And we’re gonna lose our elite airline status if we don’t get on the stick about it. (That’s pretty persuasive in my household.) I can pull out a bunch of supporting evidence, but mostly it’s that evidence that’s doing the persuading.

Descriptive writing is sometimes close to expository writing. You can explain how something looks or feels in a realistic, physical sense. But that’s a very narrow view of it. It includes the grand, sweeping words that paint pictures in your mind as they set the scene. “Rosy-fingered dawn” is a particularly elegant descriptive statement about a metaphorical sunrise. But someone else wrote that, as I’m unlikely to be mistaken for the progenitor of epic poetry.

I’m not sure how much I can describe my writing as narrative. Yes, I can do the “Who, What, Where, When, Why, and How” and that could be thought of as telling a story. But stories presume one has characters playing their parts in a recognizable plot, and that’s usually where I miss it. Narrative is expected to have a coherent structure, even if it isn’t necessarily linear. (For me, longer writing often starts to look like “paragraph salad,” as if someone dumped my index card notes on the floor.)

So back to editing.

My previous talks have been about explaining things. You want to know about log files? I can explain log files for hours. I might occasionally have an opinion to share, but mostly what I’m going to say is “See, this thing here? Here’s where it comes from and what it means.” My talk submissions were two paragraphs of “This is what I am going to explain to you.” There’s some element of “And this is why you should care,” but only enough to not sound like a documentation-generating robot.

The current conference, however, likes stories, opinions, and feelings to go with the facts. Those are basically anti-expository, so I’m having a tough time of it. I managed to write something, but it was closer to an outline for the entire talk than an advertisement saying “Please accept my talk for your conference.” And it was way, way too long.

I tried. I fiddled with this, and reworded that, and managed to cut about 30 percent. Still too long. I was holding on to the idea that I had to describe what I was going to explain, rather than persuade anyone that I had something practical, interesting, or enlightening to listen to. It’s not even that I was so attached to my words that I didn’t want to let any of them go. It’s that I was optimizing for the wrong thing. And the right thing is something I’m still working on even recognizing.

I sat down with a few writer friends, people who do this all day, every day and not just while waiting for a compile to finish. They took my paragraphs, put them in a blender, and out came something that even I could see was better. It wasn’t my voice, and I didn’t just wholesale use their results, but it gave me a way to see what my words could have been. I could then take my original text and move it closer to where it needed to be, even if I couldn’t explain why I was not able to get there on my own.

As people who have “Engineer” in their titles go, I’m a fairly decent writer. I’m proud of that. But that doesn’t mean I’m a great writer. I had gotten to a place where I couldn’t see the way out, and needed a professional intervention. I hope next time this will be easier.

Last weekend I attended a workshop about preparing to speak at technical conferences. It was somewhat more than that, but I’ll start there. That is something I’m interested in right now, as I’m working on proposals to submit to an upcoming conference.

It was organized by Write/Speak/Code, a group that runs several events around women, technology, and open source. (There is also a larger annual conference.) This event, Own Your Expertise, was focused on preparing women to submit talks to conferences and participate in open source communities. It is one of several workshops Write/Speak/Code offers, and thanks to GitHub, tickets were free. There was even a professional photographer, so everybody looks good on conference web pages. (Mine will also be used for foreign job applications, but I won’t make this post any longer by getting into that here.)

So on to the day’s content. Yes, it’s about conference talks. The presenters take a somewhat different path to get there, however. While the workshop does get into mechanics like what “CFP” means, it starts with convincing yourself you can actually do this. For me, I’ve presented at conferences before, so it’s not unknown territory. But I’m hardly jumping at opportunities to do so because, surprise, I have a problem figuring out what I can talk about and convincing myself I have something relevant to say.

There are topics where I’m comfortable with my expertise, but in textiles rather than my professional work. The dynamics of textile communities are different for me than work, first and foremost because I don’t depend on textiles to make a living. My visibility and activity in that community have no bearing on whether or not I can pay rent or buy food, and can vary as circumstances change. (This is not the case for some of my friends.) Without that pressure, it’s easier to talk about what I do. I have trouble carrying that over to paid work, however.

The first part of the day was group exercises around speaking more comfortably about one’s own expertise (hence the title), different areas each of us can influence and educate, and words we can use to describe what we have to offer. In honor of the occasion (while many of our friends were at Women’s Marches around the country) one of the exercises was “If you could be nominated for a Cabinet post you were patently unqualified for, which one would it be?” I volunteered for Health and Human Services, given my extensive experience in Yelling At Insurance Companies.

As a less gregarious person, sometimes the exercises seemed a bit silly. (And given limited time, rushed.) But everything was focused on putting together people who don’t know each other and getting them talking about areas of their own knowledge and experience. Sprinkled in with more than a bit of “You Go Girl!” chicks-can-do-this cheerleading. (Which sometimes is too much large group socializing for me, but I got through it.)

The second part got into the details of how to actually go about this, with breakout sessions focusing on different parts of the process. I was in the one about writing a proposal, since that’s exactly what I need right now. We read our preliminary talk proposals (about a paragraph each) to the group and discussed ways to improve them. Everybody exchanged contact info to keep working together on our talks.

As might be expected of something hosted at a Bay Area startup, there was much socializing, food, and following the programmed events, alcohol. The bartenders also concocted no-alcohol fancy drinks by request, so that was cool. (Yes, GitHub has a full bar in their cafeteria/event space. They are hardly alone in that. And I have Opinions about the role of alcohol in startups. Another time.)

I had a good time, I got some useful ideas in framing my topic, and met a bunch of people. I actually wrote down contact info and followed up with five people. That’s a lot for me. Go me. I hope I can keep in touch with a few (that is often where things fall down, on both ends.) I’ve already heard back from one, and we will probably meet up next week.

I’ve been having an ongoing argument with my test environment over my GitHub SSH key. First it was only when I used a particular dev housekeeping script my project needs, but now every time I log out of a shell, the key goes missing. I have to remember to run ssh-add before I do anything involving GitHub.

I tracked the problem down to ssh-agent, which apparently works fine for most people but for me requires constantly re-adding my key. I found a solution in the keychain package for Ubuntu.

I already had my key, so I only had to install the package and add the appropriate configuration to my .bashrc. Now, at login, ssh-agent is started if needed and my GitHub SSH key is added. I can log out and log in all I like, access GitHub from my shell, and only re-enter my passphrase if I reboot the machine. (I haven’t tested that part yet.)
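For reference, the .bashrc addition is essentially a one-liner. The key filename here is an assumption; use whatever your key file is actually called:

```shell
# In ~/.bashrc: start ssh-agent if needed and load the key once per login.
# "id_rsa" is an assumption; substitute your actual key filename.
eval $(keychain --eval --agents ssh id_rsa)
```

The keychain script remembers a running agent across logins, which is exactly the behavior plain ssh-agent wasn’t giving me.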

I had started to wonder why I had bothered with an SSH key in the first place. But with this, it now works as expected. So much nicer.

Now that I know my YAML file works, because the generated curl command succeeds, it’s time to look at why the interactive version doesn’t.

The way this is supposed to go is that you click the little red dot on the right in your spiffy new doc section, enter your credentials, and then fill in the blanks to test right there from the documentation. Magic! But why would this fail when the similarly auto-generated curl command succeeds? It’s because browsers restrict a page’s scripts from requesting resources from a different origin than the page came from, unless the server explicitly allows it. This is good for safety, but sometimes a cross-origin request is exactly what you need. The mechanism for allowing it is called cross-origin resource sharing (CORS), and it was mentioned in the Swagger docs. But until I ran into a problem, I wasn’t sure what I was supposed to do with this information.

Swagger UI sends an OPTIONS request to the server as the opening of a handshake: the browser first asks whether the cross-origin request will be allowed, and only sends the real request if the answer is yes. (That’s a rough description; someone who actually understands CORS can surely do better.) I saw those OPTIONS requests, but then nothing happened: there was no POST, and my new message in Zulip was never created.

I still can’t fully explain it, but basically some extra headers are required from the server to verify that the requesting domain should be allowed to access the desired resource. The generated sample back end already took care of that, so it wasn’t a problem there. But for a real API like Zulip’s, in an environment not expecting what Swagger needs, it means changing something in Django. I know nothing about Django.
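As I understand it, the exchange looks something like this. This is a hand-written illustration of a CORS preflight, not a capture from my environment, and the exact headers will vary:

```
OPTIONS /api/v1/messages HTTP/1.1
Origin: http://10.2.3.4:8080
Access-Control-Request-Method: POST
Access-Control-Request-Headers: authorization, content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://10.2.3.4:8080
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: authorization, content-type
```

Only after a response like the second half arrives does the browser go ahead and send the actual POST. In my case, those Access-Control-Allow-* headers were missing, so it never did.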

I did some research and found a Django app (I had no idea there was such a thing) that handles this: django-cors-headers. I installed and configured it, and suddenly everything worked. The downside is that a third-party package must be installed on any Zulip server that wants to serve Swagger UI documentation.
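The configuration I ended up with looks roughly like this. This is a sketch of a Django settings.py fragment, not the exact Zulip settings, and the whitelisted host is a placeholder:

```python
# Sketch of a settings.py fragment for django-cors-headers.
# The host below is a placeholder; in practice it is wherever
# the Swagger UI pages are served from.
INSTALLED_APPS = [
    # ... the existing apps ...
    "corsheaders",
]

MIDDLEWARE = [
    # CorsMiddleware must come before CommonMiddleware so the
    # CORS headers get added to responses (including preflights).
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    # ... the rest of the middleware stack ...
]

# Only allow the origin actually serving the Swagger UI pages:
CORS_ORIGIN_WHITELIST = ["10.2.3.4:8080"]
```

With that in place, the server answers the preflight OPTIONS request with the headers the browser wants to see.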

I looked into what django-cors-headers does, and there is a lot more to it than just creating the necessary headers. Much of it is the tests, administrative features, and other things that make it a useful piece of software. If I kept investigating I could certainly figure out the minimal server-side change I’d actually need. But why do that when there’s already a nice package available? Except not everyone has the luxury of just installing something. That’s a question I’ll need to answer before deploying this to production.

But now I have reached the end of my proof-of-concept experiment, and I have nice interactive documentation to demonstrate how my API works. There’s much work yet to cover all the endpoints and all the features, but this shows that it can be done.

The basic steps are these:

  • Download the swagger-ui repo from GitHub and drop the dist directory it contains in an appropriate location on your webserver
  • Build a YAML file that describes your API, according to the OpenAPI/Swagger spec
  • Update index.html to point to your file and make any other desired changes (like I did for language)
  • Configure your webserver, if needed, to handle CORS correctly
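The index.html change in the third step is mostly swapping the default spec URL for your own. Here’s a sketch of that edit with stand-in paths; everything here, including the zulip.yaml filename, is made up for illustration:

```shell
# Hypothetical sketch: point swagger-ui's index.html at your own spec file.
# DOCROOT stands in for wherever you dropped the dist directory.
DOCROOT=$(mktemp -d)

# Stand-in for the line in the stock index.html that names the default spec:
echo 'url: "http://petstore.swagger.io/v2/swagger.json",' > "$DOCROOT/index.html"

# Swap the default petstore spec for your own YAML file:
sed -i 's|http://petstore.swagger.io/v2/swagger.json|zulip.yaml|' "$DOCROOT/index.html"

cat "$DOCROOT/index.html"   # prints: url: "zulip.yaml",
```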

This post builds on what we’ve learned already to create a working Swagger front-end that talks to a real server. (See Testing Swagger and Making Swagger work for real, part 1)

The previous example used generated server code for the back end so it worked nicely with the Swagger UI front end by design. The next step is to make it work with a real Zulip server, which does not.

For the codegen-based test, a curl command to create a new item looked like this:

curl -X POST --header 'Content-Type: application/json' \
--header 'Accept: application/json' \
-d '{ "id": 5, "name": "potato" }' \
'http://10.2.3.4:8080/api/items'

But creating a new message in Zulip looks like this:

curl -X POST \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Accept: application/json' \
--header 'Authorization: Basic MyAuthHash' \
-d 'type=stream&content=bar&to=test&subject=foo' \
'http://10.2.3.4:9991/api/v1/messages'

There are two differences to account for here. The first is that the Zulip endpoint is expecting url-encoded text, not JSON, in the POST data. The second is that it requires authentication. Both of these need to be correctly described in the YAML description of the API.
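To see the two differences concretely, here’s how the same request might be assembled in Python with only the standard library. The credentials are placeholders; Zulip actually uses a bot email and API key for Basic Auth:

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder credentials; Zulip uses bot-email:api-key for Basic Auth.
auth = base64.b64encode(b"bot@example.com:api-key").decode()

# Difference one: url-encoded form data, not JSON, in the POST body.
body = urlencode({
    "type": "stream",
    "content": "bar",
    "to": "test",
    "subject": "foo",
}).encode()

# Difference two: an Authorization header on the request.
req = Request(
    "http://10.2.3.4:9991/api/v1/messages",  # address from the curl example
    data=body,
    headers={
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/json",
        "Authorization": "Basic " + auth,
    },
)

print(req.get_method())     # POST (Request defaults to POST when data is set)
print(req.data.decode())    # type=stream&content=bar&to=test&subject=foo
```

This is just the request object; actually sending it would use urllib.request.urlopen (or a library like requests) against a live server.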

Building the YAML file

The store example specified that the endpoint both accepts and produces “application/json” data:

consumes:
  - application/json
produces:
  - application/json

and “in: body” in the parameter list means the endpoint expects the parameters in the body of the POST, in the format described by the newItem schema.

post:
  description: Creates a new item in the store.  Duplicates are allowed
  operationId: addItem
  produces:
    - application/json
  parameters:
    - name: item
      in: body
      description: Item to add to the store
      required: true
      schema:
        $ref: '#/definitions/newItem'

For Zulip, the endpoint still returns JSON, but it expects to receive x-www-form-urlencoded data (just like an HTML form).

consumes:
  - application/x-www-form-urlencoded
produces:
  - application/json

The individual parameters must be defined as “formData”, which doesn’t use a separate schema definition:

parameters:
  - name: type
    in: formData
    description: type of message to create
    required: true
    type: string
  - name: content
    in: formData
    description: content of message to create
    required: true
    type: string
  - name: to
    in: formData
    description: recipient of message to create
    required: true
    type: string
  - name: subject
    in: formData
    description: subject of message to create
    required: true
    type: string

Now that the incoming data is defined, we need to make it use Basic Auth to authenticate with the Zulip back end. The first part of that is to add a “securityDefinitions” section:

securityDefinitions:
  basicAuth:
    type: basic
    description: HTTP Basic Auth

and in each operation that requires authentication, add a security requirement referencing it:

security:
  - basicAuth: []

Here’s the full YAML file:

swagger: '2.0'
info:
  version: '1.0.0'
  title: Sample API
  description: Some Stuff I wrote
  termsOfService: http://example.com
  contact:
    name: Feorlen
    email: nobody@example.com
    url: http://example.com
  license:
    name: Foo
    url: http://example.com
host: 10.2.3.4:9991
basePath: /api/v1
schemes:
  - http
consumes:
  - application/x-www-form-urlencoded
produces:
  - application/json
securityDefinitions:
  basicAuth:
    type: basic
    description: HTTP Basic Auth
paths:
  /messages:
    post:
      description: Creates a new Zulip message
      operationId: addMessage
      produces:
        - application/json
      parameters:
        - name: type
          in: formData
          description: type of message to create
          required: true
          type: string
        - name: content
          in: formData
          description: content of message to create
          required: true
          type: string
        - name: to
          in: formData
          description: recipient of message to create
          required: true
          type: string
        - name: subject
          in: formData
          description: subject of message to create
          required: true
          type: string
      security:
        - basicAuth: []
      responses:
        '200':
          description: message response
          schema:
            $ref: '#/definitions/messageResponse'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
definitions:
  messageResponse:
    type: object
    required:
      - msg
      - result
      - id
    properties:
      msg:
        type: string
      result:
        type: string
      id:
        type: string
  errorModel:
    type: object
    required:
      - code
      - message
    properties:
      code:
        type: integer
        format: int32
      message:
        type: string

With this configuration, Swagger UI generates a curl command that can connect to the server, authenticate, and create a new message. The problem, however, is that the nice clickable demo Swagger UI offers doesn’t.

I’ll get into that with the next post.

When we last left our intrepid explorer, there were bits of YAML all over the floor, but the shape of a functioning API doc could be seen emerging through the debris. Or at least I remember it that way. (Here’s the previous blog post.) With the basic Swagger functionality proven, the next step is to make it work in something like the desired real environment. It does, but getting there was a bit of a winding path through parts of webapp development I hadn’t looked at yet. That’s another long story, and I’ll get to it.

Before I do, I want to talk a little about file formats. As I make this project happen for real, I need to choose how I’m going to write the spec file that drives it. That doesn’t just affect me, but anyone who later needs to read or edit it. It’s not something to be taken lightly, and is a hotly contested topic.

Here’s a nice discussion illustrating the concerns: Tom Limoncelli on TOML vs JSON. (No, TOML isn’t named for Tom L., but he does find that point amusing. If you like servers, you should read his stuff sometime.)

The Swagger (or OpenAPI) specification says you can use either YAML or JSON. The Swagger editor works in YAML, so clearly they have an opinion. (You can import and export JSON.)

My choice is also YAML, for these reasons:

  • It allows comments. You can annotate what something is, why you added it, or comment out work in progress.
  • It is primarily whitespace-delimited. That makes it easier to visually scan than JSON’s braces, brackets, and quotes.
  • That whitespace hierarchy, as annoying as it can be, is already familiar to Python developers. Zulip is mostly written in Python, so the developers maintaining it will already be accustomed to this.

Neither of these formats is really “human readable” if by that you mean “can be reliably maintained by average computer-using humans.” They can't be. They have strict format requirements, and any deviation results in cryptic errors and non-obvious failures. I think YAML is better for humans than JSON, but even that isn't saying much.
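To illustrate the “cryptic errors” point with the tools at hand: one stray trailing comma is enough to make Python's standard JSON parser reject an entire document, pointing only at a character offset. The sample input here is made up:

```python
import json

almost_json = '{"msg": "", "result": "success",}'  # one stray trailing comma
try:
    json.loads(almost_json)
    error = None
except json.JSONDecodeError as err:
    error = str(err)
# The parser rejects the whole document over one character.
print(error)
```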

I started this experiment with YAML, and I see no reason not to stick with it, despite much cursing in the process of building the file for the endpoint I was trying to model (and it isn't even complete). The tools are lousy. The failure modes are opaque. Some of that was my inexperience with the Swagger standard, but not all of it.

At one point I was making changes to my file in a text editor and pasting it into an empty Swagger editor window to see if it would validate, because the errors didn't make sense. Something, I'm still not sure what, introduced a tab character I couldn't see. Between the four editors I had open (one explicitly configured to not use tab characters), somewhere in the copying and pasting it happened anyway. Welcome to being a developer.
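For what it's worth, a few lines of Python would have found the culprit faster than four editors did. Here's a quick sketch of a tab hunter (the sample input is made up, since tabs are forbidden in YAML indentation):

```python
def find_tabs(text):
    """Report (line, column) positions of tab characters, which YAML forbids."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        col = line.find("\t")
        while col != -1:
            hits.append((lineno, col + 1))
            col = line.find("\t", col + 1)
    return hits

sample = "swagger: '2.0'\ninfo:\n\ttitle: oops\n"
print(find_tabs(sample))  # the invisible tab is on line 3, column 1
```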

JSON, on the other hand, has so much visual clutter that I can’t read any non-trivial example without reformatting to add whitespace. Fans like that it’s more compact than YAML. (Whitespace characters are characters, tabs are forbidden, and more characters mean more bytes.) There are also more tools that work with it, because much of the web runs on data exchanged as JSON. But if I don’t have a reason to need maximum efficiency and performance, I’m going to choose the option that makes it easier on the human developer every time. (For the opposite end of this decision, see protocol buffers, a decidedly not human-readable format. It’s also not used for configuration files.)
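The reformatting itself is easy enough: a round-trip through Python's json module adds the whitespace a human needs. The compact string below is a made-up example response, not real API output:

```python
import json

# A compact one-liner, the way JSON often arrives over the wire.
compact = '{"msg":"","result":"success","id":"42"}'

# Round-trip through the parser to add the whitespace humans need.
pretty = json.dumps(json.loads(compact), indent=2, sort_keys=True)
print(pretty)
```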

So YAML it is. In the next post, I’ll talk about what I did with it.

Swagger is a language for defining an API, with an ecosystem of tools to generate both code and documentation. I'm experimenting with it to see how much I can use it to automate creating API docs for Zulip. Its full power is in defining your API functionality in one file, and then generating both documentation and skeleton code for multiple programming languages. You still have to implement the actual behavior behind it, but it generates the structure to do that more easily.

As with most code generators, I'm not sure how useful this is going to be for an already existing system not designed with the same structure. But it's useful enough as a doc tool if you are willing to create the spec by hand. Then you get some nice pretty pages with, in theory, the ability for third-party developers to test API samples right from the documentation.

My first pass is to set up some static docs, and then a basic generated API and docs with functional demos.

The simplest way to have docs is Swagger UI, a viewing front-end. I thought, from reading blog posts and tutorials from other users, that I had to modify Swagger UI itself to create docs from my own spec. That is a mess of installing a bunch of other things that I never got working. After some advice from the IRC channel, I learned I didn’t actually need to do that.

You can literally take a single directory out of the downloadable source from GitHub, add your custom YAML or JSON file defining your API, and stick it on a webserver. (For more, go to the Swagger docs and look for “Swagger UI Documentation”.) You don’t get nice interactive examples this way, but you do get a well-structured description of your API.

The default uses the Swagger pet store example, so in index.html you will need to change the line

url = "http://petstore.swagger.io/v2/swagger.json";

to be where you have your definition file. I put mine in the same directory; it can be either YAML or JSON. That's it.

If you don’t want functioning examples (which won’t work in this simplistic demo), also change which methods are allowed for the “Try it out!” button. Remove all of them to disable the button altogether, or only some if you need to disallow testing certain types of requests.

supportedSubmitMethods: ['get', 'post', 'put', 'delete', 'patch'],

You can also specify which language you want Swagger UI to use for its static content (look in the lang directory), although from my testing it appears you can only pick one. I haven’t figured out how to actually provide localized content in the user’s own language.

<!-- Some basic translations -->
<script src='lang/translator.js' type='text/javascript'></script>
<script src='lang/it.js' type='text/javascript'></script>

So that’s it for the super-simple option.

To generate a working API from your spec is a little more complicated, but not too much. In addition to creating your YAML or JSON file, you need to generate server code for it and install it on your own server. I had to install several additional packages, too.

For this, you need to start with Swagger Editor. You can install it locally, but I just used the web version.

I started with one of the samples available in the web version, a simplified pet store.

File-> Open Example -> petstore_simple.yaml

Edit the sample file as desired; the important things to note are the hostname and port of your server, and the base path where your API endpoint URLs will start. Here’s mine:

swagger: '2.0'
info:
  version: '1.0.0'
  title: Swagger test store
  description: A sample API that uses the swagger-2.0 specification
  termsOfService: http://feorlen.org
  contact:
    name: Feorlen
    email: foo@example.com
    url: http://swagger.io
  license:
    name: MIT
    url: http://opensource.org/licenses/MIT
host: 10.2.3.4:8080
basePath: /api
schemes:
  - http
consumes:
  - application/json
produces:
  - application/json
paths:
  /items:
    get:
      description: Returns all items from the system that the user has access to
      operationId: findItems
      produces:
        - application/json
        - application/xml
        - text/xml
        - text/html
      parameters:
        - name: tags
          in: query
          description: tags to filter by
          required: false
          type: array
          items:
            type: string
          collectionFormat: csv
        - name: limit
          in: query
          description: maximum number of results to return
          required: false
          type: integer
          format: int32
      responses:
        '200':
          description: item response
          schema:
            type: array
            items:
              $ref: '#/definitions/item'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
    post:
      description: Creates a new item in the store.  Duplicates are allowed
      operationId: addItem
      produces:
        - application/json
      parameters:
        - name: item
          in: body
          description: Item to add to the store
          required: true
          schema:
            $ref: '#/definitions/newItem'
      responses:
        '200':
          description: item response
          schema:
            $ref: '#/definitions/item'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
  /items/{id}:
    get:
      description: Returns a single item based on the ID supplied, if the user has access to the item
      operationId: findItemById
      produces:
        - application/json
        - application/xml
        - text/xml
        - text/html
      parameters:
        - name: id
          in: path
          description: ID of item to fetch
          required: true
          type: integer
          format: int64
      responses:
        '200':
          description: item response
          schema:
            $ref: '#/definitions/item'
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
    delete:
      description: deletes a single item based on the ID supplied
      operationId: deleteItem
      parameters:
        - name: id
          in: path
          description: ID of item to delete
          required: true
          type: integer
          format: int64
      responses:
        '204':
          description: item deleted
        default:
          description: unexpected error
          schema:
            $ref: '#/definitions/errorModel'
definitions:
  item:
    type: object
    required:
      - id
      - name
    properties:
      id:
        type: integer
        format: int64
      name:
        type: string
      tag:
        type: string
  newItem:
    type: object
    required:
      - name
    properties:
      id:
        type: integer
        format: int64
      name:
        type: string
      tag:
        type: string
  errorModel:
    type: object
    required:
      - code
      - message
    properties:
      code:
        type: integer
        format: int32
      message:
        type: string

I edited this elsewhere and then pasted it back into the online editor. The right pane will tell you if you messed up anything required. Now generate server code based on this API definition. I’m using Python Flask, but there are many to choose from.

Generate Server -> Python Flask

Download the resulting zip file and put its contents on your server somewhere. Since I’m using port 8080, I can do this in my own home directory without running as root. (I’m making my changes on the server directly, but edit locally and then upload if you like.)

First, customize the generated code:

In app.py I added my correct host, some extra logging (use “INFO” instead of “DEBUG” for less verbose output), and changed the title. I haven’t figured out where this title is actually used; maybe I’ll find it eventually. It now looks like this:

#!/usr/bin/env python3

import connexion
import logging

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    app = connexion.App(__name__, specification_dir='./swagger/')
    app.add_api('swagger.yaml', arguments={'title': 'Sample Swagger server'})
    app.run(host='10.2.3.4', port=8080)

I also had to edit the generated implementations in controllers/default_controller.py to remove type hints because my version of Python doesn’t support them.

def add_item(item):
    return 'do some magic!'

Those are the basic changes. The code doesn’t do anything besides return a static message, but it is otherwise functional.
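To sketch where real behavior would go, here's a hypothetical in-memory version of the stubs in controllers/default_controller.py. Everything in it (the ITEMS store, the error shape, returning a (body, status) pair on a miss, which Connexion accepts) is illustration only, not how Zulip or a production server would do it:

```python
import itertools

# Hypothetical in-memory store standing in for a real database.
ITEMS = {}
_ids = itertools.count(1)

def add_item(item):
    """Store a new item and return it, as the addItem operation promises."""
    new = dict(item)
    new["id"] = next(_ids)
    ITEMS[new["id"]] = new
    return new

def find_item_by_id(id):
    """Return the matching item, or the spec's errorModel shape with a 404."""
    if id in ITEMS:
        return ITEMS[id]
    return {"code": 404, "message": "item not found"}, 404

created = add_item({"name": "potato", "tag": "vegetable"})
print(created)
```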

Now install any necessary packages. I already have Python 3 and Flask, but I need Connexion to handle HTTP. To get that, I first have to install a version of pip that can handle packages for Python 3.

sudo apt-get install python3-pip
sudo pip3 install -U connexion

Next, add execute permissions to the server script

chmod 744 app.py

and run the server

./app.py

The UI is then available on port 8080 (note that my base path is part of the URL):

http://10.2.3.4:8080/api/ui/


I can interact with the API using this page, or by copying the curl examples and executing them in a shell. Click on the example values to paste the sample payload into the appropriate field.

curl -X GET --header 'Accept: text/html' 'http://10.2.3.4:8080/api/items'
do some magic!
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{ "id": 5, "name": "potato" }' 'http://10.2.3.4:8080/api/items'
"do some magic!"
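The same POST can be sketched with Python's standard urllib. Here the request is only constructed and inspected, not actually sent, since 10.2.3.4 is my private test server:

```python
import json
import urllib.request

# Mirror of the curl POST above: same URL, headers, and JSON body.
payload = json.dumps({"id": 5, "name": "potato"}).encode("utf-8")
req = urllib.request.Request(
    "http://10.2.3.4:8080/api/items",
    data=payload,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; skipped since the host is private.
print(req.get_method(), req.full_url)
```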

The next step of this example is to actually implement the endpoint functionality. For my purposes, my next task is to figure out how to make the web UI work with an existing API that was not designed to use it.

I’ve been working on a small change to the open source project I mentioned, Zulip. The code required was trivial, but I spent most of my time figuring out a rational order of operations and being confused about how to accomplish them with the version control system.

I’m certainly no stranger to source code version control, but mainly with commercial systems. Git, used by many open source projects, isn’t much like the others I have experience with. The Zulip project has documentation about how they use Git and GitHub, the website that hosts Git repositories online. But, of course, not everything can be anticipated in advance (and documentation can have bugs too.)

The funny thing is, I’ve used GitHub some before. But only in a very narrow way. If you have a private repository where you directly submit changes to a main branch, there’s not a lot to it. Combining different changes can be a nuisance, but that’s going to happen in any system like this. And if there’s only one or a few people making changes, the need for it can be nearly avoided with a tiny amount of discipline.

Where things get complicated is when you have a bunch of people working all at once. Even more when they are only loosely coordinated. Git assumes developers will work on what they want, and a handful of administrators will direct traffic with incoming requests to include new code in the main repository.

One thing I was misunderstanding is how (little) git thinks about branches. Branches are normal things in version control: you make one to take a copy of existing code so you can safely modify it away from the central, primary version.

In some systems, this is a resource-intensive operation where each branch is literally a copy of everything. Git doesn’t work that way. Since it functionally costs little to make a branch, branching is encouraged. You have your own copy of the code at a particular point in time. Both you and other people can make changes independently on different branches. You make some more branches. In the git universe, that’s no big deal. Time marches forward.

After you do your thing with your branch, you probably want to somehow get it back into the main repository. I’m most familiar with merging, where the system compares two parallel but not identical sets of source code and figures out if the changes are neatly separated enough for it to safely mash them together for you. Some automagical stuff happens, and the result becomes the latest version. (This latest revision is typically called “HEAD”.)

If not, you get to do it by hand. Use a merge-intensive version control system for a while, and you will absolutely find yourself dealing with a horrific mess to unravel. Merging is ugly but, if you are used to it, it’s a known ugly. That’s a certain kind of comfort. You can do that with git if you want. Many people do.

And many people don’t.

One thing about branches: many systems consider HEAD the be-all and end-all picture of reality. You might not be happy with the most recent version of your branch, and you could keep a pointer to the revision you’d rather have, but HEAD is always the most recent version. If you don’t like it, you make a change and now you have a new HEAD. Time always moves forward. Rewriting history, to the extent that it can be done, is only for the most dire of emergencies.

Git has something called “rebase.” You can use it in a couple different ways, but it’s basically the version control equivalent of a Constitutional Convention: everything is on the table. You don’t like the commit message from three changes ago? Rebase. Want to get rid of those 47 typo-fixing revisions you created? Rebase. It’s also an alternative to merging: your branch’s changes are replayed on top of HEAD, and any changes made between the time you branched and now are patched into your code. (If you want a real explanation, here’s a PDF that helped me understand how rebase works.)

Coming from a merge-land where HEAD is sacred, this terrifies me. You are going into the past and messing with history, and that Just Isn’t Done. Admit that you checked in something with the commit message “shit is broke” and move on.

When branches are expensive and you don’t want to make too many of them, you have to protect the integrity of the ones you have. The idea of something like rebase is dangerous, and with great power comes great responsibility.

When branches are cheap, and you make one because you feel like watching what happens when you delete the database maintenance subsystem? Well, have fun. Clean up after yourself when you are done. It’s not exactly a different universe, but you think about some things in different ways. I’m not entirely there yet, but rewriting history is apparently one of those things.

In making my code change, I ran into a bunch of small things I didn’t understand. I was concerned that I’d do something that would make a mess, and it would be hard to clean up. I didn’t yet know the commands that would have helped. I didn’t understand the multiple purposes of others. I was entirely terrified by the idea of rebase. (I still mostly am, to be honest.)

I made a small mess attempting to merge in an environment that was expecting a rebase. Then, halfway in, I attempted to cancel, but the change was applied anyway. There were a few mysteries as things seemed to behave inconsistently. Some of it would have been easier if I had thought to create a branch to try things out, against my previous conditioning.