Learning how to make apps is not the same thing as learning to program

Learning how to develop iOS apps is not the same thing as learning to program. Sure, you’ll need some programming to develop an iOS app, but successfully creating an app involves integrating a bunch of things that have nothing to do with programming per se. Programming per se means taking some computational problem and formulating a program that solves it.

“But doesn’t making anything useful always involve things beyond pure programming? You need a platform, some UI framework...”, you’ll say. Well, it depends on your definition of useful. Programs that accept some input and calculate a pure result are useful if that’s what you need. Furthermore, even if the “useful” you’re interested in involves an app built on an iOS-like framework, the distinction is still significant for one reason: there is a point at which pure programming ends and the “iOS-ness” begins. Applications involve data and transformations on that data to calculate results and implement business logic. But if you’re new to programming and your starting point is making iOS apps, you conflate all the challenges and knowledge required. Understanding delegates, how to update the UI based on the model, and so on is obviously very useful for iOS development, but it’s important to realize that this is a highly specialized skill that is relevant only to iOS development (or similar frameworks like Android).

More importantly, it’s simply good design to separate concerns: write the pure program first, and only when that’s complete, figure out how to interact with or present that program in some UI, if that’s what you want. So, if the stated goal were to learn how the iOS MVC scheme or UIKit works, then learning these things would make sense. But if the goal is to learn programming, then it doesn’t make sense, and it only serves to frustrate and confuse.

People will argue that in order to motivate beginners, it’s important to have them working on something useful, something that excites them. This may be true, but it’s still important to compartmentalize knowledge and to be clear on precisely what it is one is trying to accomplish. Here’s an example: suppose a beginner wants to create a Tic-tac-toe game. Before she even thinks about an iOS app, she ought first to understand the problem domain: figure out how to properly model the game and what functions are required for a proper game of Tic-tac-toe to proceed (like transforming the state of the board, calculating the winner, etc.). The point is that there’s a whole class of knowledge to be gained that has nothing to do with iOS whatsoever. To start by thinking about “the app” in this case would be like wanting to calculate the density of small business owners under 40 in New England and doing so by diving into JavaScript, D3.js, etc. without first figuring out what data you need, how to analyze that data, and so on. D3.js and such are all very useful once you have a program. The UI and presentation live at the edge of your “app”; the pure program has nothing to do with them.
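To make the Tic-tac-toe point concrete, here is a minimal sketch of the pure part of the game, written in Python purely for illustration (any language works, and every name here is made up for the example): the entire game is just data plus functions that transform it, with no UI anywhere.

```python
# Pure Tic-tac-toe model: a board, a move function, and a winner function.
# No framework, no UI -- just data and transformations on it.

LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def empty_board():
    """The board is a tuple of 9 cells, each None, 'X', or 'O'."""
    return (None,) * 9

def place(board, cell, player):
    """Pure move: returns a new board, or None if the cell is taken."""
    if board[cell] is not None:
        return None
    return board[:cell] + (player,) + board[cell + 1:]

def winner(board):
    """Returns 'X' or 'O' if some line is fully owned, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None
```

Only once functions like these exist does it make sense to ask how a view controller or a web page should present the board.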

If you want to learn programming, start with problems you want to solve and figure out how to compute them. Want to deliver some representation of that or offer interactivity with such a program? Learn iOS development. But you can’t get to the latter without the former, and if you conflate the two you’ll be spinning your wheels for a long time, learning a little bit about a bunch of different things that are conceptually unrelated.

How we deploy from Slack using Jenkins, Terraform, Packer, & Ansible

In this post, I wanted to give an overview of the tools and services we use to automate our infrastructure and deployment of Grasswire. I should mention that, up until recently, I didn’t have a ton of experience with systems administration or “devops”, but I’ve had to learn a few things in order to allow our team to be as effective as possible.

By the end of this post, you’ll see how we were easily able to get to a point where we can just type commands in Slack to kick off builds and deploys:

[Screenshot: kicking off builds and deploys from Slack]

Our application topology is fairly straightforward. We have several services that each perform a piece of our product’s functionality: an API service, a service that ingests and analyzes information from various social networks, a service that processes user actions coming into our system, a website, etc. Almost all of the services connect to RabbitMQ, Redis, and Postgres. These services are all written in Scala, so the deployment goal is relatively simple: a jar needs to be built, wrapped up in an init script, and deployed to a machine somewhere. We use AWS, so for us that means EC2.

There are many ways to go about automating infrastructure and deployments. There’s a wide range of levels of automation and a lot of very exotic options. For Grasswire, our goal was to figure out something tractable and scalable. Ultimately, we wanted to make it as easy as possible for anyone to make code changes and then build and/or deploy those changes. We tinkered with various solutions. Here’s the one that’s working quite nicely right now:

Since we use AWS, we needed a way to easily set up VPCs, load balancers, DNS records, security groups, the works. To do this, we use Terraform, which serves the same function as CloudFormation but is a bit more powerful and has a much simpler declaration style. It definitely makes setting up the network topology a lot easier. We can create a plan and simply apply it to have instances started, placed in an ELB, and have a Route 53 record set up to point to the ELB. Terraform will keep track of the state of your cloud and only make the changes necessary to bring it to the declared state.

Terraform isn’t something we use every day because we’re not constantly changing the structure of our cloud. But it’s very helpful for bootstrapping a cloud setup.

Here’s a simple plan for setting up most of what’s needed for our API networking:
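A sketch along these lines, in the Terraform syntax of the time (every resource name, AMI ID, zone ID, and domain below is an illustrative placeholder, not our actual values):

```hcl
# Illustrative only -- not the actual Grasswire plan.
resource "aws_security_group" "api" {
  name        = "api"
  description = "Allow HTTP to the API service"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "api" {
  count           = 2
  ami             = "ami-xxxxxxxx"
  instance_type   = "m3.medium"
  security_groups = ["${aws_security_group.api.name}"]

  tags {
    name = "api"
    env  = "prod"
  }
}

resource "aws_elb" "api" {
  name               = "api"
  availability_zones = ["us-east-1a", "us-east-1b"]
  instances          = ["${aws_instance.api.*.id}"]

  listener {
    instance_port     = 8080
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

resource "aws_route53_record" "api" {
  zone_id = "ZXXXXXXXXXXXXX"
  name    = "api.example.com"
  type    = "CNAME"
  ttl     = "300"
  records = ["${aws_elb.api.dns_name}"]
}
```

Applying this plan creates the security group, the instances, the ELB in front of them, and the DNS record pointing at the ELB, in dependency order.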

Next we need a way to build and deploy our services. In our case, all of our services can be built and deployed in a very similar manner. Our services are all Scala apps, which means we can deploy everything as jars wrapped inside of an init script on a machine somewhere. Initially, we used Packer to create AMIs anew each time. We’d have Jenkins pull the latest changes from our repo, create a package, and then bake that directly into an AMI, which we’d then feed into Terraform, which, in turn, would remove the existing instances and start up new ones. This quickly started to feel like overkill because the only thing (literally) that was changing was that a new release artifact was available and needed to replace the existing one running inside a service on our instance. Baking a new AMI is also a lot slower than just deploying the changes in place.

So we moved to using Ansible to help us get our latest builds into our staging and/or production environments. We looked at Chef and Puppet, but Ansible seemed to be the most approachable. Everything works via SSH, and you write playbooks in YAML, which are very easy to read. Run a playbook targeting a set of hosts and you’re done.

Here’s a sample playbook:
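A deploy playbook in this style can be as small as the following, in the Ansible 1.x syntax of the time (the hosts pattern, bucket, paths, and service name are placeholders, not our actual values):

```yaml
# Illustrative deploy playbook -- names and paths are placeholders.
---
- hosts: tag_name_api:&tag_env_stage
  sudo: yes
  tasks:
    - name: Fetch the latest release artifact from S3
      s3: bucket=releases object=/api/api-latest.deb dest=/tmp/api-latest.deb mode=get

    - name: Install the package
      command: dpkg -i /tmp/api-latest.deb

    - name: Restart the API service
      service: name=api state=restarted
```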

Now that we’re using Ansible, the provisioning that takes place with Packer is very minimal. Our templates look something like this:
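Since JSON doesn’t allow comments, a caveat up front: this is an illustrative reconstruction rather than our actual template, and the AMI ID, region, and paths are placeholders. The shell provisioner inserts the callback script into /etc/rc.local just before its trailing exit 0, so the script runs on every boot:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "m3.medium",
    "ssh_username": "ubuntu",
    "ami_name": "api-base {{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "file",
      "source": "scripts/setup_callback.sh",
      "destination": "/tmp/setup_callback.sh"
    },
    {
      "type": "shell",
      "inline": [
        "sudo install -m 0755 /tmp/setup_callback.sh /usr/local/bin/setup_callback.sh",
        "sudo sed -i 's|^exit 0|/usr/local/bin/setup_callback.sh\\nexit 0|' /etc/rc.local"
      ]
    }
  ]
}
```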

Notice how we’re adding that setup_callback script to rc.local? What’s that all about? Let me explain:

We use Ansible Tower, which provides a nice GUI and a centralized place where even non-devs can go to see what’s happened recently and deploy changes with the click of a button. Ansible has a nice module for working with AWS which lets you maintain a “dynamic inventory” (as opposed to a hardcoded inventory file with a list of host IPs). We provide Ansible with our AWS credentials, and every time we run a playbook, in addition to Ansible pulling our latest playbook changes, it will refresh our inventory so that it knows about all of the machines currently running in EC2. With proper EC2 tags in place, we can easily target a service and an environment (all machines are tagged with name={service_name} and env={stage|prod}). Instead of thinking about machines, we can express the idea that we’d like to deploy our latest API service in stage, for example.
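Concretely, the stock EC2 dynamic inventory script (ec2.py) turns tags into Ansible groups named tag_&lt;key&gt;_&lt;value&gt;, so “the API service in stage” is just an intersection of two groups (the playbook and tag names here are illustrative):

```shell
# Run the deploy playbook only against hosts tagged name=api AND env=stage.
ansible-playbook -i ec2.py deploy_api.yml --limit "tag_name_api:&tag_env_stage"
```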

Here’s a screenshot of Ansible Tower, displaying the history of every job run:

[Screenshot: Ansible Tower job history]

Ansible Tower also has a very nice feature called “provisioning callbacks” which brings everything together quite nicely. We simply set up a job using Ansible Tower which is backed by a playbook. After we enable callbacks, we can have our instances “dial in” to Ansible, which will then run the playbook on the requesting machine. This allows us to do very little provisioning with Packer itself. We just bake an image, provision it to dial into Tower on launch, and Ansible takes care of the rest. So an API image, as an example, is simply an image that’s been provisioned to dial in and kick off the corresponding API provisioning job in Tower. New instances launched in an autoscaling group will also dial in to Ansible Tower when they come online, so no manual intervention is necessary.
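As a sketch, the setup_callback script can be little more than a curl to the job template’s callback endpoint; Tower authenticates the request with a host config key and then runs that template’s playbook against the calling host. The hostname, template ID, and key below are placeholders:

```shell
#!/bin/sh
# Dial in to Ansible Tower's provisioning callback on boot.
# Tower looks up the calling host in its inventory and runs the
# job template's playbook against it.
curl -s --data "host_config_key=REPLACE_WITH_KEY" \
  https://tower.example.com/api/v1/job_templates/42/callback/
```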

To make things even easier, there’s a Tower CLI that lets us kick off a job from the command line. We took advantage of this to create various Jenkins jobs which use tower-cli. To bring it all together, we also created an outgoing webhook for our #ops channel in Slack that triggers these Jenkins jobs. In the end, anyone can simply type “deploy {service} {environment}” or whatever in our chatroom and the service will be live within seconds.
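For illustration, each Jenkins job can boil down to a one-line tower-cli invocation; the template ID is a placeholder, and tower-cli reads the Tower host and credentials from its own config:

```shell
# Launch the Tower job template that deploys a given service/environment.
tower-cli job launch --job-template=42
```

The Slack webhook just maps “deploy api stage” to the Jenkins job that runs the corresponding tower-cli command.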

[Screenshot: deploying a service from Slack]

The release stages end up being: make code changes, push, tell Jenkins to build via Slack (which will upload new artifacts to S3), kick off deploy via Slack, done.

Hope this post gave you some insight into what (I believe) is a pretty simple automation setup with very little overhead.