After a few months on beta, Stride’s developer API is finally open to the world. Since I work at Atlassian, I’ve had access to it for a while, but I was still curious to see how it had changed since the early days.

I looked around, and the Stride API tutorial seemed like the easiest way to get started. I cloned the repo, and then realized that I didn’t have Node installed on my machine. I was at a coffee shop and didn’t feel like downloading gigs and gigs of stuff onto my machine. I was about to give up when I remembered reading about Glitch. It’s supposed to be a great dev environment for Node, so it seemed like a perfect fit for my project.

I signed up using my GitHub account (no new password, yay!), copied the files of the bot app, and updated my secrets. But then nothing happened. It took me a couple of seconds to realize that the app’s dependencies were being installed, and that there was a Logs button flashing. I clicked on it, and it said /tmp/.__glitch__start.sh: line 1: null: command not found. After googling for a minute or two, I found the culprit: my package.json was missing a start script.

I went back to my code in the browser, and updated package.json with a start script:

"scripts": {
    "start": "node app.js"
  }

After a second or two, the logs tab showed me this:

[screenshot: the Glitch logs tab]

Really? That simple? I clicked on the Show button in the top right, and I was welcomed by my bot’s descriptor JSON. It worked!

[screenshot: the bot’s descriptor JSON]

Once the bot was running, I followed the rest of the instructions on the getting started guide, and after a couple of minutes I had my very own Stride bot up and running. Success!

[screenshot: the bot running in Stride]

15 minutes, a browser and a bunch of APIs. That’s all it took to get a chatbot up and running. Mind-blowing. But then I found two features that are even more interesting than that.

First, it auto-reloads every time you stop typing. No need to redeploy anything, it just magically works in the background. It’s probably not the best pattern for a mission-critical system (and even this is debatable), but it makes for an insanely fast dev cycle.

Second, and way more important: it makes sharing your running code super easy (they call it a remix). Find it directly on Glitch, link to it, or just click on the button below.

[Remix on Glitch button]

I only spent a few minutes playing with this, but the possibilities are exciting: sharing samples, live documentation, teaching others to code. And all you need is a browser!

Remix the Stride bot, extend the code and play with it. It’s a lot of fun.

The other night I realized that I had been committing to one of my personal open source projects using an email that I no longer use. I had been doing this for a while without noticing, until I enabled GPG signing on my GitHub repos. At that point, GitHub realized that the email I had configured in git didn’t match the one associated with my key, and didn’t let me commit anymore. It took me some time to figure out what was going on, until I looked at my local git history and noticed the old email (facepalm).
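
If you want to check which email your commits were made with, something like this does the trick:

git log --pretty=format:'%h %an <%ae> (committer: %ce)'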

Once I realized the issue, I fixed my email configuration and it was all good. But my commit history looked messy, so I decided to fix it.
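
The fix itself is a one-liner (the address below is a placeholder; add --global if you want it to apply to every repo on the machine):

git config user.email "correct-email@example.com"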

git filter-branch lets you rewrite your commit history, and it has a very powerful filtering system that lets you script your conditions. After searching online, and a bit of trial and error, I ran into this. I’m putting the script here for future reference.

 git filter-branch -f --env-filter '

OLD_EMAIL="old-email@example.com"
CORRECT_EMAIL="correct-email@example.com"

if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ]
then
    export GIT_AUTHOR_EMAIL="$CORRECT_EMAIL"
fi

if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ]
then
    export GIT_COMMITTER_EMAIL="$CORRECT_EMAIL"
fi
'

When you run this script, git will go over every commit, compare the author and committer emails, and fix them if needed. After it’s done, review your history, and if you’re happy with it, force push your changes (since you’re rewriting history) and you’re all done.
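
For example, assuming your remote is origin and your branch is master:

git push --force origin master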

I ended up making this a bit more generic, and adding it as a git alias in my ~/.gitconfig for future use:

[alias]
 change-commits = "!fix() { var=$1; old=$2; new=$3; shift 3; git filter-branch --env-filter \"if [[ \\\"$`echo $var`\\\" = '$old' ]]; then export $var='$new'; fi\" $@; }; fix"
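
For example, to fix the author email on the current branch (the addresses are placeholders, and any extra arguments are passed straight through to git filter-branch):

git change-commits GIT_AUTHOR_EMAIL "old-email@example.com" "correct-email@example.com"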

Do keep in mind that this script is changing your commit history, so DO NOT RUN THIS ON A SHARED REPOSITORY, unless you really know what you are doing.

Hope it helps!

I’m a huge fan of Amazon Echo (aka Alexa). I bought my first one in the early days of the dev preview, and have been using it daily ever since. Keeping track of my shopping and to-do lists, setting up timers while cooking, playing music around the house, you name it. I like it so much that I even bought an extra dot for the bedroom. Can’t recommend it enough.

After having Alexa in the house for a few months, the whole home automation bug bit me hard. Wouldn’t it be cool to be able to control other stuff around the house? Well, enter Philips Hue and a bunch of their ‘smart’ lightbulbs. They work great together, and you can turn them on and off with your voice via Alexa. You can even dim them, set up color schemes, and group them together (e.g. ‘Alexa, turn the living room off’ or ‘Alexa, turn the bedroom on’). Cool stuff!

I was a happy user until I ran into a very first world problem. At night we like to watch TV in the bedroom, with the light on the nightstand on (a Hue Bloom) but all the other lights off (a few normal light bulbs). Making this happen through Alexa required two different commands in sequence (‘Alexa, turn the lamps off’, followed by ‘Alexa, turn the nightstand on’). Functional, but clunky and very error prone. All I wanted was to be able to shout ‘Alexa, turn movie mode on’ across the room and have everything set up for me (I know, first world problems). Sadly, I found out that Alexa doesn’t support this sort of orchestration just yet.

After a couple of hours of internet searches, I ended up discovering a very exciting open-source home automation project: openHAB. openHAB is a beast of a project, with a lot of home automation features. But it supported three key things that I needed to solve my problem: Alexa integration, virtual devices, and custom orchestrations.

Install and configure openHAB

It seems like you can run openHAB on a Raspberry Pi, but I already have a server running at home, so I went the lazy way and used their Docker container:

version: "2"
services:
  openHAB:
    image: 'openhab/openhab:amd64-online'
    restart: always
    ports:
      - 8080:8080
      - 8443:8443
      - 5555:5555
    network_mode: "host"
    volumes:
      - '/etc/localtime:/etc/localtime:ro'
      - '/etc/timezone:/etc/timezone:ro'
      - './opt/openHAB/userdata:/openhab/userdata'
      - './opt/openHAB/conf:/openhab/conf'
    command: "server"

Before starting it, I ran the following commands to create the initial directory structure. This was so that I wouldn’t lose my configurations when relaunching my container.

mkdir -p $(pwd)/opt/openHAB/userdata
mkdir -p $(pwd)/opt/openHAB/conf
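
With those in place, starting the container is a single command (assuming the compose file above is saved as docker-compose.yml in the same directory):

docker-compose up -d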

After this, I just started my Docker container, waited a couple of minutes, and my openHAB instance was up and running. Browse to http://${fqdn}:8080 and follow the setup instructions. I went with the most basic option, since I only needed the basics.

After the wizard finished, I selected the Paper UI option, went to the extensions menu, and installed the following extensions:

  • On the bindings tab, the ‘Hue Binding’ (version binding-hue - 0.9.0.SNAPSHOT)
  • On the misc tab, the ‘Hue Emulation’ (version misc-hueemulation - 2.0.0.SNAPSHOT) and the ‘Rule Engine’ (version misc-ruleengine - 2.0.0.SNAPSHOT)

Once the bindings are ready, go to the main page and click on the ‘SEARCH FOR THINGS’ button to discover your devices. You’ll need to push the button on your Hue hub to allow pairing. Once they have paired (it took a few seconds for me), you will see all the available devices in the UI. Add the ones that you want to control, and give them friendly names (if you run into any issues, I recommend looking at the logs under $(pwd)/opt/openHAB/userdata/logs).
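
If you do need to dig into the logs, tailing the main log file is the easiest way (openhab.log is the default file name; events.log is also worth a look):

tail -f $(pwd)/opt/openHAB/userdata/logs/openhab.log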

Make sure that you can control all your devices from the openHAB UI before going to the next step.

Create your virtual switch

To add our virtual switch, go to $(pwd)/opt/openHAB/conf/items, and create a file called home.items with the following content:

Switch  MovieToggler     "Movie Time Toggle" [ "Switchable" ]

The line follows the format Type Name FriendlyName ListOfCapabilities. Feel free to customize the name and friendly name if you want. Make sure the file has the ‘.items’ extension or openHAB won’t pick it up.

Create the rule

Next, we are going to create our rule. Rules in openHAB are very straightforward: they are composed of a trigger (e.g. when our switch is turned on) and a series of actions (e.g. ‘turn on a series of lights’). All the commands require the item name (not the friendly one). I couldn’t figure out a way to get them via the UI, but the REST API was very handy for this. Just browse to http://${fqdn}:8080/rest/items to get a full list.
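
For example, this dumps every item as JSON (piping it through json.tool just makes the output readable):

curl -s http://${fqdn}:8080/rest/items | python -m json.tool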

To create the rule, go to $(pwd)/opt/openHAB/conf/rules, and create a file called home.rules with the following content:

rule "Movie time ON"
when
  Item MovieToggler received command ON
then
  sendCommand(MY_NIGHTSTAND_DEVICE_NAME, ON)
  sendCommand(MY_LAMP_DEVICE_NAME,  OFF)
end

rule "Movie time OFF"
when
  Item MovieToggler received command OFF
then
  sendCommand(MY_NIGHTSTAND_DEVICE_NAME, OFF)
  sendCommand(MY_LAMP_DEVICE_NAME,  ON)
end

Before saving the file, don’t forget to update the commands with your device names. Make sure the file has the ‘.rules’ extension or openHAB won’t pick it up.

You can do more complex lookups and rules, but I decided to keep it simple and straightforward. To test that your rule works, you can make the following POST requests:

curl --header "Content-Type: text/plain" -XPOST  http://${fqdn}:8080/rest/items/MovieToggler --data "OFF"
curl --header "Content-Type: text/plain" -XPOST  http://${fqdn}:8080/rest/items/MovieToggler --data "ON"

If everything is set up correctly, you should see the two lights toggle when you send the requests.

Integrate with Alexa

Finally, it’s time to put everything together. The first thing you need to do is enable discovery on your openHAB server. To do this, go to Configuration -> Services -> Hue Emulation -> Configure, and enable device pairing by selecting the ‘Pairing Enabled’ option. Remember to turn this off once you’re done pairing your devices, for security reasons.

After pairing is enabled, go to your Alexa app -> Smart Home -> Discover Devices. If everything is configured properly, you should see all the devices configured on your openHAB there. Once I had the devices listed, I created a new group (just so that I could give it a friendly name), added the “Movie Time Toggle” device and saved it as ‘Movie Time’.

After all of this, I was able to shout a single ‘Alexa, turn movie time on’, and all the right lights turned off and on with one magical voice command. Success!

Hope it helps!

For the past few months, I’ve been working a lot with clustered VMs (wink wink ;)). Over and over again, I would run into this scenario:

  • For each node in the cluster:
    • SSH into the node
    • Fire up vim and update a file
    • Restart services and test changes

This cycle gets old pretty quickly. I was chatting about this problem over lunch when one of my coworkers introduced me to the most excellent CSSHX.

CSSHX is a Perl-based cluster SSH tool for Mac OS X. It’s very straightforward to use: you just call it, passing the USER@HOST:PORT of each server in your cluster.
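
For example, with three made-up servers it would look something like this:

csshx admin@node1.example.com:22 admin@node2.example.com:22 admin@node3.example.com:22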

With that command, CSSHX will open 3 terminals (one per server), and a master terminal. Anything you type in the master terminal will be automatically typed on all the other terminals. Simple, and effective.

If you have a series of clusters you interact with a lot, CSSHX also supports using a configuration file (it defaults to /etc/cluster). Every line of the file has to follow the format ‘name USER@HOST:PORT USER@HOST:PORT USER@HOST:PORT…’. You can have as many lines as you want.

If your /etc/cluster looks like this:
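
(the cluster below uses made-up hosts)

dogfood admin@dogfood1.example.com:22 admin@dogfood2.example.com:22 admin@dogfood3.example.com:22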

You could control the dogfood cluster like this:

csshx dogfood

One drawback I’ve found is that, when connecting to new servers, the ‘clustered terminals’ fail to connect when SSH prompts to accept a new server’s host key. As a workaround, I connect to each server directly once, accept the host key, and then use CSSHX for any future connections.

Hope it helps!

My team relies heavily on CI for our development process. However, as our test corpus has grown, so has the amount of time we have to wait before we get our test results. And that’s not good. While better than not having tests, a very long test cycle tends to be counter-productive, as developers grow restless and stop leveraging CI during the dev process (if you have to wait 20 minutes for a suite, you’ll never run it locally).

Last week, I had some time on my hands, and decided to look a bit into it. My obvious question was, “why does it take this long?”. Fortunately, our test driver of choice, nose, records how long each test takes to run:

<testcase classname="test.integration.test_api_app.AppRest" name="test_info_requires_internal" time="0.003"/>
<testcase classname="test.integration.test_api_app.AppRest" name="test_info_requires_internal_no_query_param" time="0.004"/>
<testcase classname="test.integration.test_api_app.AppRest" name="test_info_requires_internal_no_query_param_with_header" time="0.003"/>

However, the test cases are listed in execution order, not sorted by how long they take. Well, we code mostly in Python, and if Python is good for one thing, it’s string manipulation, analysis and scripting:

import argparse
import sys
import xml.etree.ElementTree as ET

def getChildValue(val):
    # Numeric attributes (like 'time') are compared as floats; anything else
    # (like 'name' or 'classname') falls back to plain string comparison.
    try:
        return float(val)
    except (TypeError, ValueError):
        return val

def sortchildrenby(parent, attr):
    parent[:] = sorted(parent, key=lambda child: getChildValue(child.get(attr)))

parser = argparse.ArgumentParser(description='Sort junit files')
parser.add_argument('--field', dest='field', default='time',
                    help='attribute to sort tests by (default: time)')

args = parser.parse_args()

# Read the JUnit XML report from stdin and parse it.
lines = sys.stdin.readlines()
tree = ET.fromstring("".join(lines))

# Sort the test suites, then the test cases inside each suite.
sortchildrenby(tree, args.field)
for child in tree:
    sortchildrenby(child, args.field)

print(ET.tostring(tree, encoding='utf8', method='xml'))

With this script, I was able to post-process my test result files, and discovered that 4 of our tests were taking about 10% of the execution time of the entire suite (facepalm).

cat ~/Downloads/test-results.xml | python junit-sort.py

Hope it helps!

PS I am aware that nose has a really good plugin for this. We run a very diverse stack, so we needed a more generic solution.