At Atlassian, we have a quarterly company-wide hackathon called ShipIt. On the last one (a few days ago), I got together with two of my colleagues and decided to try and build a HipChat addon. We wanted to explore the new glances and addons capabilities and see if we could build something cool with them.

After a brief 5 minutes of research (we only had one day to build the addon, after all), atlassian-connect-express looked like the most user-friendly and advanced option. Its getting started guide is fantastic: you get a running addon that says hello in 5-10 minutes. It also uses NodeJS, a platform I wanted to learn more about, so this gave me the perfect excuse.

The getting started guide asked for node, redis, and ngrok. I didn’t have the first two installed on my work machine, and I didn’t want to deal with the hassle of installing and configuring them. Why do all that when I already have Docker installed? Instead, I wrote this docker-compose file:

version: '2'
services:
    redis:
        command: redis-server --appendonly yes
        image: redis
        ports:
        - "6379:6379"
        volumes:
        - ./data:/data
    web:
        build: .
        env_file: .env
        ports:
        - "8080:8080"
        depends_on:
        - redis
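
The build: . line assumes a Dockerfile at the root of the project. A minimal sketch of one (the Node version is illustrative; the /usr/src/app path matches the volume mounts used later in this post) would be:

FROM node:6
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY . /usr/src/app
EXPOSE 8080
CMD ["npm", "start"]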

With this, running docker-compose up --build gave me a fully functional working environment in no time. But as soon as we started to write our addon code, we hit a big problem in our workflow: every time we wanted to try new code, we had to stop the containers, run docker-compose up --build again, and wait the 2-3 seconds it took for everything to start over.

Waiting 3 seconds per debugging cycle isn’t a lot, but it gets annoying very quickly. Fortunately, a quick Google search showed that the Node community had already solved this problem with nodemon. I followed the instructions and updated my package.json scripts to look like this:

{
  "name": "hello-world",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node app.js",
    "watch": "nodemon"
  },
  "dependencies": {
    "atlassian-connect-express": "^1.0.10",
    "atlassian-connect-express-hipchat": "^0.3.5",
    "atlassian-connect-express-redis": "^0.1.6",
    "body-parser": "^1.15.2",
    "compression": "^1.6.2",
    "cors": "^2.7.1",
    "errorhandler": "^1.4.3",
    "express": "^4.14.0",
    "express-hbs": "^1.0.1",
    "lodash": "^4.13.1",
    "morgan": "^1.7.0",
    "request": "^2.72.0",
    "rsvp": "^3.2.1",
    "static-expiry": "^0.0.11",
    "uuid": "^2.0.2",
    "nodemon": "^1.10.0"
  }
}

I restarted my containers, wrote some more code, saved my files, and nothing happened. nodemon refused to do its magic. I spent a few minutes reading the documentation, double-checking all my configuration, and restarting my containers and the Docker service a few times (hey, you never know) until it hit me: since we were using Docker, the source files were copied into the image at build time, so nodemon was watching the files inside the container, not the ones on my machine.

I added the source directories as volumes in my docker-compose file, restarted the containers, and nodemon started reloading the process every time I saved a file. Note that the volumes below mount the individual source folders rather than the whole project, which leaves the node_modules directory installed inside the container untouched.

version: '2'
services:
    redis:
        command: redis-server --appendonly yes
        image: redis
        ports:
        - "6379:6379"
        volumes:
        - ./data:/data
    web:
        build: .
        command: npm run watch
        env_file: .env
        ports:
        - "8080:8080"
        depends_on:
        - redis
        volumes:
        - ./app.js:/usr/src/app/app.js
        - ./routes:/usr/src/app/routes
        - ./public:/usr/src/app/public
        - ./views:/usr/src/app/views
        - ./lib:/usr/src/app/lib

Once we had this up and running, our workflow became much more efficient. Every time we reached a milestone, it was only a matter of pushing the image to our registry and letting the container service reload our containers on the production instances, hands-free.
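
The push step is just the standard docker commands; something like this (the image name and registry address are placeholders):

docker build -t registry.example.com/hipchat-addon:latest .
docker push registry.example.com/hipchat-addon:latest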

Hope it helps!

This took me a while to figure out, so I’m sharing it here in case you ever need a Twitter OAuth2 token (e.g. to re-enable previews on your HipChat Server):

import base64
import urllib

import requests

# The key and secret come URL-encoded from the Twitter application
# console, so decode them before use (Python 2: urllib.unquote)
api_key = urllib.unquote("YOUR_KEY")
api_secret = urllib.unquote("YOUR_API_SECRET")

# Basic auth credentials are "key:secret", base64-encoded
bearer_creds = "{}:{}".format(api_key, api_secret)
base64encoded_creds = base64.b64encode(bearer_creds)

twitter_api_url="https://api.twitter.com/oauth2/token"

headers = {
    "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
    "Authorization": "Basic {}".format(base64encoded_creds),
}

response = requests.post(
    twitter_api_url,
    data="grant_type=client_credentials",
    headers=headers)

print("Your token is: {}".format(response.json()))
# Output would be something like Your token is:
# {u'token_type': u'bearer', u'access_token': u'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA%2FAAAAAAAAAAAAAAAAAAAA%3DAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'}
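
Once you have the token, you use it as a Bearer authorization header on subsequent API calls. For example (a quick sketch; the search endpoint and query are only illustrative):

token = response.json()["access_token"]
search = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": "hipchat"},
    headers={"Authorization": "Bearer {}".format(token)})
print(search.json())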

Hope it helps!

Sharing files via HipChat is a very useful feature. However, it is, in my opinion, one of the hardest HipChat APIs to use. Here’s an example of how to do it in Python:

#!/usr/bin/env python

import argparse
import json
import os.path
from email.mime.multipart import MIMEMultipart
from email.mime.nonmultipart import MIMENonMultipart

import requests

# The server may use a self-signed certificate (e.g. a private HipChat
# Server instance), so silence urllib3's TLS warnings
requests.packages.urllib3.disable_warnings()

def post_file(token, url, msg_body, file_path):
    headers = {"Authorization": "Bearer {}".format(token)}

    # The API expects a multipart/related payload: a JSON part named
    # "body" with the message, and the raw file in a part named "file"
    related = MIMEMultipart("related")

    part_body = MIMENonMultipart("application", "json", charset="utf8")
    part_body.add_header("Content-Disposition", 'attachment; name="body"')
    part_body.set_payload(json.dumps(msg_body))
    related.attach(part_body)

    file_name = os.path.basename(file_path)
    with open(file_path, 'rb') as f:
        file_data = f.read()

    part_file = MIMENonMultipart("application", "octet-stream")
    part_file.set_payload(file_data, "utf-8")
    part_file.add_header("Content-Disposition", 'attachment; name="file"; filename="{}"'.format(file_name))
    related.attach(part_file)

    # as_string() renders the whole MIME message, headers first; drop the
    # header block and keep only the multipart body
    body = related.as_string().split('\n\n', 1)[1]

    # Copy the generated MIME headers (Content-Type with its boundary,
    # MIME-Version) into the HTTP request headers
    headers.update(dict(related.items()))

    resp = requests.post(url, data=body, headers=headers, verify=False)
    resp.raise_for_status()

def share_file_with_room(api_url, room_id, token, file_path, message):
    url = "{}/room/{}/share/file".format(api_url, room_id)
    post_file(token, url, {"message": message}, file_path)    

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Share a file with a room')

    parser.add_argument('-r', '--room', dest='room', type=str, help='The room id', required=True)
    parser.add_argument('-t', '--token', dest='token', type=str, help='The HipChat authorization token', required=True)
    parser.add_argument('-f', '--file', dest='file_path', type=str, help='The path to the file', required=True)
    parser.add_argument('-m', '--message', dest='message', type=str, help='The message to include with the file', required=False, default='')
    parser.add_argument('--api-url', dest='api_url', type=str, help='The HipChat API url', required=False, default="https://api.hipchat.com/v2")
    user_options = parser.parse_args()

    share_file_with_room(user_options.api_url, user_options.room, user_options.token, user_options.file_path, user_options.message)
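
Assuming you save the script as share_file.py (the file name, room id, and token below are placeholders), sharing a file looks like this:

python share_file.py --room 12345 --token YOUR_TOKEN --file ./screenshot.png --message "Here you go"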

Hope it helps!

Once you have a module that can be pip-installed from git, the next logical step is to make it available directly from PyPI. Doing this allows your users to install your module by simply running pip install your-module, instead of having to deal with git repository access and URLs.

Create and set up your PyPI account

The first thing to do is to create a PyPI account. All you need is an email address, a username, and a password. I also recommend creating an account on PyPI’s test server, so we can validate the module before pushing it to production.

We’re going to create a settings file for our PyPI credentials, so we don’t have to type the URLs and usernames again and again. Create a ~/.pypirc file like this one (replace my username with yours):

[distutils]
index-servers =
    pypi
    test

[pypi]
repository = https://pypi.python.org/pypi
username = MY_USERNAME
password = MY_PASSWORD

[test]
repository = https://testpypi.python.org/pypi
username = MY_USERNAME
password = MY_PASSWORD

I chose to keep the passwords there for simplicity. If you don’t think this is secure enough, you can remove the password lines and type them manually every time you run the upload commands.
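
If you do keep the passwords in the file, it’s a good idea to at least make it readable only by you:

chmod 600 ~/.pypirc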

Generate the source distribution

In order to generate a source distribution, we need a version. Edit your setup.py file and update its version to the one you want to release. Remember that it needs to follow the public version identifier scheme.
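
For reference, the version lives in the setup() call of setup.py; a minimal sketch (the name and version are placeholders) looks like this:

from setuptools import setup, find_packages

setup(
    name="your-module",
    version="0.1.0",  # must follow the public version identifier scheme (PEP 440)
    packages=find_packages(),
)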

On your terminal, go to the root of your module and create the distribution. I recommend using a source distribution, unless you really know what you’re doing and think that a wheel is a better fit:

python setup.py sdist

If the command ran successfully, a dist folder will be created, with the source distribution file inside. It’s the one named YOUR_MODULE-YOUR_VERSION.tar.gz.

Once you have a source distribution file, it’s time to upload it. First, you need to register your module on PyPI by filling out this form. Do the same on the test server. Then, use twine (pip install twine if you don’t have it) to upload the distribution to the test server:

twine upload --repository test dist/YOUR_MODULE-YOUR_VERSION.tar.gz

Once the file is uploaded, install it locally at least once to make sure everything works as expected:

virtualenv pip_test --clear
source pip_test/bin/activate
pip install -i https://testpypi.python.org/pypi YOUR_MODULE==YOUR_VERSION

If everything goes well, you will end up with a virtual environment that has your module installed. I recommend double-checking the installed version and running some tests, just to be sure.
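
A quick way to double-check the version, assuming your module exposes a __version__ attribute:

python -c "import your_module; print(your_module.__version__)"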

Finally, we push to the production PyPI:

twine upload --repository pypi dist/YOUR_MODULE-YOUR_VERSION.tar.gz

If the upload finishes successfully, your module will be available to the world in the next 2-3 minutes.

Hope it helps!

If you’re anything like me, you always have multiple terminal tabs open at any given time. As soon as I have 5 or 6 tabs open, it becomes impossible to remember what I have running on each tab.

Here’s a script that will allow you to update the title of a terminal tab and/or window:

#!/bin/bash
# Join all the arguments, so multi-word titles work without quotes
title="$*"
# The \033]0;...\007 escape sequence sets the icon name and window/tab
# title in xterm-compatible terminals
echo -n -e "\033]0;$title\007"

Copy the script above into /usr/bin/title and give it executable permissions (chmod +x /usr/bin/title). After that, typing title My Title will update the title of your current terminal tab or window to the value provided. It works with both ASCII and Unicode characters.

Hope it helps!