Convox

I stumbled upon Convox a couple weeks ago, and found it pretty interesting. It’s led by a few people formerly from Heroku, and it certainly feels like it. A simple command-line interface to manage your applications on AWS, with almost no AWS-specific configuration required.

An example of how simple it is to deploy a new application:

$ cd ~/my-new-application
$ convox apps create
$ convox apps info
Name       my-new-application
Status     creating
Release    (none)
Processes  (none)
Endpoints  
$ convox deploy
Deploying my-new-application
Creating tarball... OK
Uploading... 911 B / 911 B  100.00 % 0       
RUNNING: tar xz
...
... wait 5-10 minutes for the ELB to be registered ...
$ convox apps info
Name       my-new-application
Status     running
Release    RIIDWNBBXKL
Processes  web
Endpoints  my-new-application-web-L7URZLD-XXXXXXX.ap-northeast-1.elb.amazonaws.com:80 (web)

Now, you can access your application at the ELB hostname listed in the “Endpoints” section.

I haven’t used Convox with more complex applications, but it definitely looks interesting. It uses a little more infrastructure than I would like for small personal projects (a dedicated ELB for the service manager, for example). However, when you’re managing multiple large deploys of complex applications, the time saved by letting Convox do the infrastructure work for you seems like it would pay for itself.

The philosophy behind Convox:

The Convox team and the Rack project have a strong philosophy about how to manage cloud services. Some choices we frequently consider:

  • Open over Closed
  • Integration over Invention
  • Services over Software
  • Robots over Humans
  • Shared Expertise vs Bespoke
  • Porcelain over Plumbing

I want to focus on “Robots over Humans” here — one of AWS’s greatest strengths is that almost every single thing can be automated via an API. However, I feel like its greatest weakness is that the APIs are not very user-friendly — they’re disjointed and inconsistent between services. The AWS-provided GUI “service dashboard” is packed with features, and you can see some similarity in UI elements between services, but the consistency basically stops there. Look at the Route 53, ElastiCache, and EC2 dashboards — they’re completely different.

Convox, in my limited experience, abstracts all of this unfriendliness away and presents you with a simple command line interface to allow you to focus on your core competency — being an application developer.

I, personally, am an application / infrastructure developer (some may call me DevOps, but I’m not particularly attached to that title), and Convox excites me because it has the potential to eliminate half of the work necessary to get a secure, private application cluster running on AWS.

A month with Linux on the Desktop

It’s been a bit over a month since I installed Linux as my main desktop OS, on a PC I built to replace my (cylinder) Mac Pro running OS X. The distribution I went with is Ubuntu MATE 16.04.

Here are my general thoughts:

  • Linux has come a long way in the 6 years since I last used it full-time on the desktop.
  • There are Linux versions of popular software that is vital to my workflows — Firefox, Chrome, Dropbox, Slack, Sublime Text, etc.
    • When there isn’t a direct equivalent, there is usually a clone that gets the job done. Zeal and Meld, for example.
  • It still is definitely not for the casual user.
  • Btrfs.
  • Lacks the behavioral consistency of OS X.
  • Some keyboard shortcuts take some getting used to (but most of the time, they’re completely configurable).
  • Steam is available for Linux! (10 of the 11 titles in my library run on Linux. Does that say something about the games I play, or are Linux ports popular these days?)
  • If something is broken, it can be fixed*.

(*) maybe, probably. Sometimes. It depends.

Some thoughts specific to the development work I do:

  • Docker is as easy to use as it is on a Linux server. Because the kernel is exactly the same. 🙂
  • I can quickly reproduce server environments locally with minimal effort.
  • Configuration files are in the same place as any Ubuntu 16.04 server.

Some things really surprised me. For example, I plugged my iPhone into a USB port to charge it, and it automatically launched the photo importer and started the tethering connection. I did not expect that on a clean install.

It hasn’t been all peaches and roses, though — there are some specific complaints I have about the file browser (Caja, a Nautilus fork) and the MATE Terminal — so much so that I have replaced the MATE Terminal with GNOME 3’s terminal emulator. I haven’t gotten around to trying other file browsers because most of the time I’m browsing files, I’m in the terminal.

Other nice-to-have things that don’t relate to the OS itself, but rather to building your own PC (I’m aware of Hackintosh-ing, but my issues were mainly with software, not hardware):

  • The particular case I’m using has 2 large (optical drive-sized) bays and 8 3.5-inch hard drive bays. That’s a lot of potential storage. It currently holds 2 SATA SSDs (and one M.2 SSD, but that doesn’t take up any room in the case).
  • Access to equipment that is much newer / faster than anything you can get via the Apple Store. (I’m planning on getting the Nvidia GTX 1080 at some point in the future, and I’m currently using the i7-6700K quad-core CPU at 4.0GHz.)

Conclusion: I’m enjoying it. I realize that I’m a special case, and I strongly discourage anyone from using Linux on the desktop unless they really know what they’re doing. In my case, I regularly manage Linux servers professionally, so I know how to fix things when they go wrong (most of the time). I still use a MacBook Pro with OS X when I’m on the go or need something Mac-specific, but it stays asleep most of the time.

Elixir anonymous function shorthand

Elixir’s Getting Started guides go over the &Module.function/arity and &(...) anonymous function shorthand, but there are a couple neat tricks that are not immediately apparent about this shorthand.
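
As a quick refresher, the two forms those guides cover look like this (standard Elixir, nothing exotic yet):

iex> upcase_fun = &String.upcase/1
iex> upcase_fun.("hello")
"HELLO"
iex> add = &(&1 + &2)
iex> add.(1, 2)
3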

One neat trick: you can do something like &"...".

iex> hello_fun = &"Hello, #{&1}"
iex> hello_fun.("Keita")
"Hello, Keita"

Let’s have some more fun.

iex> fun = &~r/hello #{&1}/
iex> fun.("world")
~r/hello world/
iex> fun = &~w(hello #{&1})
iex> fun.("world")
["hello", "world"]
iex> fun.("world moon mars")
["hello", "world", "moon", "mars"]
iex> fun = &if(&1, do: "ok", else: "not ok")
iex> fun.(true)
"ok"
iex> fun.(false)
"not ok"

You can even use defmodule to create an anonymous function that defines a new module.

iex> fun = &defmodule(&1, do: def(hello, do: unquote(&2)))
iex> fun.(Hello, "hello there")
{:module, Hello, <<...>>, {:hello, 0}}
iex> Hello.hello
"hello there"

(Note that I don’t recommend overusing it like I did here! The only one that has been really useful to me was the first example, &"...")

Elixir: A year (and a few months) in

In the beginning of 2015, I wrote a blog post about how my then-current programming language of choice (Ruby) was showing itself to not be as future-proof as I would have liked it to be.

A lot has changed since then, but a lot has remained the same.

First: I have started a few open-source Elixir projects:

  • Exfile — a file upload handling, persistence, and processing library. Extracted from an image upload service I’m working on (also in Elixir).
  • multistream-downloader — a quick tool to monitor and download HTTP-based live streams.
  • runroller — a redirect “unroller” API, see the blog post about it.

The initial push to get me into Elixir was indeed its performance, but that’s not what kept me. Around the same time, I also tried learning Go, and more recently, Rust has caught my attention.

In most cases, languages like Go and Rust can push more raw performance out of a simple “Hello World” benchmark — exactly the kind of test I initially used to compare Ruby on Rails to Elixir / Phoenix. But the more I used Elixir, the more I gained an appreciation for what I now regard as critical language features (yes, most of these apply to Erlang and other functional languages too).

Immutability

This is a big one. Manipulating a mutable data structure means changing the data in memory directly, which leaves it susceptible to the classic race condition bug programmers encounter when writing multithreaded programs:

  • Thread 1 reads “1” from “a”
  • Thread 2 reads “1” from “a”
  • T1 increments “1” to “2” and writes it to “a”
  • T2 increments “1” to “2” and writes it to “a”

In this case, the intended value is “3” because the programmer incremented “a” two times, but the actual value in memory is “2” due to the race condition.

In languages with immutable data, this class of bug doesn’t exist: immutability forces the programmer to be explicit about shared state.
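
To make that concrete, here is a small iex sketch of my own (not from the original post): plain data never changes underneath you, and any state you genuinely want to share has to go through a process, here via Agent, Elixir’s simple state-holding abstraction.

iex> original = %{count: 1}
iex> updated = Map.put(original, :count, 2)
iex> updated
%{count: 2}
iex> original
%{count: 1}
iex> {:ok, counter} = Agent.start_link(fn -> 1 end)
iex> Agent.update(counter, &(&1 + 1))
:ok
iex> Agent.update(counter, &(&1 + 1))
:ok
iex> Agent.get(counter, & &1)
3

Because the Agent is a single process handling messages one at a time, both increments land and the counter reads 3, the value the programmer intended in the scenario above.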

Lightweight Processes

A typical Erlang (Elixir) node can have millions of processes running on it. No, these are not your traditional OS processes — they are not threads, either. The Erlang VM uses its own scheduler and thread pool to execute code. The memory overhead of a single process is usually very light — on my machine, it’s 2.6kb (SMP and HiPE enabled).
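
For a rough sense of scale, here is a quick iex experiment of my own (the memory figure shown is illustrative; it will vary with your VM build and options):

iex> pids = for _ <- 1..100_000 do
...>   spawn(fn ->
...>     receive do
...>       :stop -> :ok
...>     end
...>   end)
...> end
iex> length(pids)
100000
iex> Process.info(hd(pids), :memory)
{:memory, 2688}

A hundred thousand idle processes, each weighing in at a few kilobytes, is an unremarkable afternoon for the Erlang VM.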

Inter-Process Communication

Another big one.

Want to send a message to another process?

send(pid, :hello)

Want to handle a message from another process?

receive do
  :hello -> IO.puts "I received :hello"
end

This works on processes that are running in the current node — but it also works across nodes, and the syntax is exactly the same. The “pid” variable in the example can refer to a process anywhere in the cluster of nodes. The other node doesn’t even have to be running Erlang; it just needs to speak the same language: the distribution protocol and the external term format.
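
Putting the two together, here is a complete round trip you can paste into iex (my own sketch, not from the original post): spawn a process, send it :hello, and have it reply to its parent.

parent = self()

pid = spawn(fn ->
  receive do
    :hello -> send(parent, {:reply, "I received :hello"})
  end
end)

send(pid, :hello)

receive do
  {:reply, text} -> IO.puts text
end
# prints "I received :hello"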

OTP

You can’t talk about Erlang or Elixir without bringing up OTP. OTP stands for “Open Telecom Platform” (Erlang was initially developed by Ericsson for use in telecom systems). OTP is a framework with many battle-tested tools to help you build your application — for example,

  • gen_server – an abstraction of the client-server model
  • gen_fsm – a finite state machine
  • supervisor – a supervisor process that automates recovery from temporary failures

OTP is, for all intents and purposes, part of the Erlang standard library. Thus, it is automatically included in any Elixir application as well. The nuts and bolts of OTP are out of the scope of this blog post, but having such a rich toolbox is like a breath of fresh air coming from Ruby (I thought the same when switching full-time from PHP to Ruby).
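
To give a taste of what that toolbox feels like from Elixir, here is a minimal counter of my own built on GenServer, Elixir’s wrapper around OTP’s gen_server (a sketch for illustration, not an example from the OTP documentation):

defmodule Counter do
  use GenServer

  # Client API
  def start_link(initial \\ 0) do
    GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  end

  def increment, do: GenServer.cast(__MODULE__, :increment)
  def value, do: GenServer.call(__MODULE__, :value)

  # Server callbacks
  def init(initial), do: {:ok, initial}
  def handle_cast(:increment, count), do: {:noreply, count + 1}
  def handle_call(:value, _from, count), do: {:reply, count, count}
end

Start it with Counter.start_link(), call Counter.increment() a few times, and Counter.value() returns the current count. Put it under a supervisor and a crash simply means it gets restarted with a fresh state; recovery is automated, no human required.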

Conclusion

I’ve learned a lot in this past year, and yet I feel like I’ve only scratched the surface. Thanks for reading!

Playing around with AWS Certificate Manager

I’m a big Let’s Encrypt fan. They provide free SSL certificates for your web servers so you can protect the traffic from prying eyes. In fact, the connection between your web browser and my blog server is made private thanks to Let’s Encrypt.

Using Let’s Encrypt requires some setup and automation on your part if you want to use it in the AWS cloud, but AWS recently launched something called the AWS Certificate Manager or “ACM”. ACM takes care of issuing, renewing, and provisioning certificates for you — which is great because uploading SSL certificates to CloudFront and Elastic Load Balancers is not the most fun thing to do. I would pay for this, but Amazon has decided to give it to everyone for free. 🙂

As with anything AWS, this has a couple catches, but if you run your cloud resources in AWS you probably won’t be worried about them:

  • You don’t have access to the private key, which means you can’t use the same certificate elsewhere.
  • ACM was initially only available in the us-east-1 region, but it is now available in all major AWS regions¹.
  • You can’t use ACM certificates across regions (with the exception of CloudFront, which doesn’t have a region — note that the certificate used for a CloudFront distribution must be located in the us-east-1 region, though).

So, I decided to test it out on a silly little CloudFront distribution I have running — one I made for a blog post demonstrating how you can use S3 and CloudFront to serve a single-page JavaScript app with “proper” URLs.

Test it out for yourself:

https://single-page-test.kkob.us


  1. As of this update (2016/06/07), ACM is available in all regions except AWS GovCloud (US) and China (Beijing).

JavaScript Unit Tests in a Phoenix Application

There’s a guide to writing browser acceptance tests for Phoenix. Acceptance tests are nice, but sometimes you want to have unit tests. This is very easy to do with your Elixir code, but what about your JavaScript code that lives inside your Phoenix application?

I couldn’t find a good guide on this, so I’ll go over what I have set up for one of my latest Phoenix projects.

Setup

First, install mocha if you haven’t already. I’ll be using mocha, but you can use whatever test runner you want to. You’ll also need babel-register — this will allow you to use Babel while the tests are being run in Node.

$ npm install --save-dev mocha
$ npm install --save-dev babel-register

Set up mocha to run your tests in package.json. The default Phoenix & Brunch installation doesn’t include a “scripts” section, so you’ll probably have to create it. If it’s already there, just add the “test” line.

{
  "dependencies": {
    ...
  },
  "scripts": {
    "test": "mocha --compilers js:babel-register test/js/**/*.js"
  }
}

Now, set up Babel to pick up the default preset (if you’re setting a different preset in your brunch-config.js, you just replace es2015 with what you have there). Put this in .babelrc in the root directory of your project.

{
  "presets": ["es2015"]
}

Now, you can put JavaScript unit tests in test/js and by running npm test, they will run inside Node.

Note that npm test will not be automatically run when you run mix test, so you’ll have to change the way tests are run in CI, either by changing the test command to mix test && npm test or by adding the npm test command to your CI configuration.

Example Test

Say we have a module that does nothing but export a function that returns the string "something":

export default function() {
  return "something";
}

This file lives in web/static/js/something.js.

Now, let’s write a test for it in test/js/something_test.js:

import assert from 'assert';
import something from '../../web/static/js/something';

describe('something()', function() {
  it('does something', function () {
    assert.equal('something', something());
  });
});

Now, run the tests:

$ npm test

> @ test /Users/keita/personal/phoenix-mocha-example
> mocha --compilers js:babel-register test/js/**/*.js



  something()
    ✓ does something


  1 passing (8ms)

This example project is available on GitHub if you would like to take a closer look at it. As always, please leave a comment or get in touch if you’d like to provide some input!