Comic book legend Michael Golden shares his secrets for successful storytelling


This past Saturday, a group of fans and aspiring creatives of all kinds gathered in the basement level of Silicon Valley Comic Con. Outside this room, the convention bustles noisily with costumed characters, video games, and the murmur of human voices. Inside is a rather drab and gray setting for one of the most prolific and influential creators in the comic book industry, who, incidentally, is running just a few minutes late.

Michael Golden is probably best known for creating the X-Men character “Rogue” while at Marvel, but his comic works also include “The ‘Nam,” “Micronauts,” “G.I. Joe Yearbook,” “Dr. Strange,” and much more. His diverse commercial portfolio ranges from NASCAR to NASA, to Universal and Warner Brothers. In a few moments, the man himself approaches the stage. I suddenly recall that a wizard is never late, nor is he early; he arrives precisely when he means to.

“Rule number one is: people are stupid,” Golden says in a deep voice that is both gravel and soft butter all at once. This is not a man who wastes a lot of time in coming to his point. He is older, gray-bearded with glasses, and firmly clutching a Starbucks cup in his right hand. He smiles, knowing that he has hooked us with surprise, and continues, “Whoever you are trying to sell is ignorant of your story, and it’s up to you to give them the information. Make it dramatic, concise, involving, and understandable. People get bored easily, and they don’t come back.” There are only two explicit rules for success, apparently, and the second is a rule of no rules. “You hook them by sticking to who, what, when, where, why, how.”

Now 65, with a career spanning more than 40 years, Golden has unusual longevity in this industry. “This industry will chew you up and spit you out,” he says. He credits his ability to stay ahead of the twenty-somethings to his discipline to keep learning and stay productive. “With commercial work, you no longer have the option to be in the mood.” A typical work day starts at 4:00 AM and ends at 8:00 PM. Golden forces himself to restrict creative work to the early part of the day. “By around 2:00 I really start to slow down. By 8:00 PM I am done, tired, wasted, and needing a drink.” He used to take Saturday and Sunday off without fail, but thanks to the Internet, Golden now works seven days a week.

When asked how he learns and draws inspiration, Golden says that he learns by doing, but he admonishes the audience to do as he says and not as he does. “Learn everything you can, go to school. Learn design, learn layouts, and learn color theory. There will be plenty of time later for Photoshop and Illustrator. Technology is just a tool; it can’t make you great.”

Although he does not say this, I start to realize there is something else going on here too. It is evident to me that, from his full work schedule to his penchant for learning and problem-solving, Golden is immersed in his work. You begin to get the sense that his favorite part of the day is in the pure flow of the creative process. This seems like it could be the real secret — being in love with your work — even after years and years.

Of course, being uniquely talented and globally recognized probably helps in the love-your-work department, and it may even account for some of Golden’s lack of burnout. But you can hear the underlying truth tinged everywhere in Golden’s story. Talent is only part of the equation. Success came after a lot of sweat, dedication, discipline, hard work, and long hours. “There is no formula!” says Golden.

Are Developers Coming Back to Microsoft?


Developer interest in C# has declined sharply since its peak in 2012, according to data from the TIOBE index. As one former .NET developer wrote back in 2015, this precipitous decline is due to the rise of cloud-native application frameworks and mobile platforms where Microsoft developers are not first-class citizens.

The reach of Microsoft’s developer ecosystem has declined in the past five years due to the rise of non-Microsoft web frameworks and mobile platforms. Android and iOS control 90% of the world wide smartphone market and .net developers aren’t first class citizens on those platforms. – Justin Angel, former .NET developer

TIOBE Index C#



Microsoft’s mobile device market share slipped below 1% again earlier this year. Yet there are growing signs that Microsoft’s long-ball strategy of adopting open source and doubling down on developer tools may be starting to pay off. Last year Microsoft surprised everyone when it announced a partnership with longtime rival Red Hat to deliver Linux on the Microsoft Azure cloud computing service. Earlier this year Microsoft acquired Xamarin, which helps .NET developers build mobile apps for Android and iOS.

Microsoft seems to be listening to IT leaders, who have been under tremendous pressure to create agility for the enterprise application developers who are driving innovation. Open-sourcing .NET through the .NET Foundation and creating Windows Server Nano are designed to put a stop to enterprises re-writing .NET applications in order to migrate to Linux.

Microsoft has also moved quickly to adopt Docker and containerization, adding native support for Docker containers in Windows Server 2016 and recently hiring Google’s lead developer on Kubernetes. The widespread adoption of containers is a boon for Microsoft because it enables developers to easily move applications from one environment to another. As developers increasingly use container-centric tools, they become less reliant on the management interfaces of cloud rival Amazon Web Services and can more easily move applications to Azure. At the annual Ignite conference in September 2016, Microsoft showed a new demo of Docker in Visual Studio.

The results seem to suggest that Microsoft’s strategy is working. According to a recent article, data from Synergy Research Group puts Azure cloud growth at 100% year-over-year, making Azure the #2 public cloud behind Amazon AWS. Amazon’s lead in the public cloud has so far been insurmountable, but in 2017 enterprises with large investments in .NET may find these advances very compelling as they continue to seek a viable cloud strategy. Do my eyes deceive me, or does the most recent data from Indeed show a modest increase in C# job postings over the last 12 months? In 2017 we may finally see a real contest.

C# Job Postings


Silicon Valley Ageism Versus the Productivity of Famous Inventors


A few weeks ago I was having lunch with a friend who half-jokingly asked me if I was ready to retire yet. I half-jokingly quipped that I was well past the age of “fundable” established by Silicon Valley venture capitalists, and would therefore be relocating to Puerto Rico in the near future. Jokes aside, ageism in the technology industry is a real phenomenon, and these perceptions are unfair on two counts. First, venture capitalists with any common sense do in fact frequently fund entrepreneurs of all ages, although there are more than a few seemingly without any common sense. Second, productivity and age are not correlated, but productivity, health, and wealth probably are.

I took a wager with my friend that a cursory analysis of famous inventors would show no correlation of age to productivity.  I wanted to minimize the distortions of the modern market on intellectual property, so I just took the first few off the list of famous inventors from the last century.  I cannot claim that this is scientific or fully conclusive, but I do claim that someone owes me $20.  The data is actually a little difficult to find because the USPTO database is not searchable before 1976.  If someone wants to do a complete analysis of the famous or prolific inventors of the last century, I would be willing to reward you with the proceeds of my $20 wager.  Suffice it to say that you would be unwise to “hire young” as some people have suggested, even if you were comfortable with breaking the law.






Automating application deployments across clouds with Salt and Docker


If you have not had the chance to work with Salt yet, it is a very exciting new configuration management system that is easy to get up and running, powerful enough to support distributed command execution and complex configuration management, and scalable enough to support thousands of servers simultaneously.

Recently, I wrote about how a de facto containerization standard will enable a whole new generation of management tools. Back in January, SaltStack announced several awesome new features in the Salt 2014.1.0 (Hydrogen) release, including support for the life-cycle management of Docker containers. SaltStack is very early in its Docker support, and Docker still does not consider itself production-ready (tell that to Yelp and Spotify), but together these tools offer an out-of-the-box solution for getting started with immutable infrastructure.

The deployment and management of an application across multiple virtual machines, using multiple public clouds, is a use case that would have been considered categorically “hard” just a year ago. Companies like Google, Facebook, and Ning spent many years developing this kind of orchestration technology internally in order to deal with their scale challenges. Today, using Docker containers together with Salt-Cloud and a few Salt states, we can do this from scratch with a few tens of minutes of effort. And, because we are using Salt’s declarative configuration management, we can scale this pattern to actually operate our production environment.

Use Case


The core use case is one or more application containers which we want to deploy on one or more virtual machines, using one or more public cloud providers.

For the sake of simplicity, we will restrict this use case to:

  • Assume some familiarity with Salt
  • Assume some familiarity with Docker
  • Assume that you have a Salt master already installed
  • Assume that you want to do this on a single public cloud, using Digital Ocean (since adding new clouds to Salt-Cloud is dead simple)
  • Simulate a real-world application with a dummy apache service

Demo Use Case


In order to simulate a real-world application, we will create a Docker container with the Apache web server. Conceptually, this container could be a front-end proxy, a middle-tier web service, a database, or virtually any other type of service we might need to deploy in our production application. To do that, we simply create a Dockerfile in a directory in the normal way, build the container, and push it to the Docker registry.

Step 1: Create a Dockerfile

root@host:/some/dir/apache# cat Dockerfile
# A basic apache server. To use either add or bind mount content under /var/www
FROM ubuntu:12.04

MAINTAINER Kimbro Staken version: 0.1

RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV APACHE_LOG_DIR /var/log/apache2


CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

Step 2: Build the container

root@host:/some/dir/apache# docker build -t jthomason/apache .
Uploading context  2.56 kB
Uploading context
Step 0 : FROM ubuntu:12.04
 ---> 1edb91fcb5b5
Step 1 : MAINTAINER Kimbro Staken version: 0.1
 ---> Using cache
 ---> 534b8974c22c
Step 2 : RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*
 ---> Using cache
 ---> 7d24f67a5573
Successfully built 527ad6962e09

Step 3: Push the container

root@host:/tmp/apache# docker push jthomason/apache
The push refers to a repository [jthomason/apache] (len: 1)
Pushing tag for rev [527ad6962e09] on {}

With our demo application successfully pushed to the Docker registry, we are now ready to proceed with orchestrating its deployment.  As mentioned previously, we assume you have a Salt master installed somewhere. If not, you’ll need to follow the documentation to get a Salt master installed.  The next step then is to configure Salt-Cloud for your choice of public cloud provider.  Configuring Salt-Cloud is simple.  We need to create an SSH key pair that Salt-Cloud will use to install the Salt Minion on newly created VMs, add that keypair to our choice of public cloud provider, and create a Salt-Cloud configuration file with the API credentials for our public cloud.

Step 4: Create an SSH Key Pair

root@host:/etc/salt/keys# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa): digital-ocean-salt-cloud
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in digital-ocean-salt-cloud.
Your public key has been saved in digital-ocean-salt-cloud.pub.
The key fingerprint is:
06:8f:6f:e1:97:5a:5a:48:ce:09:f3:b6:33:42:48:9a root@host
The key's randomart image is:
+--[ DSA 1024]----+
|                 |
|                 |
|      .          |
|    .  +         |
|   + .+ S        |
|  E . .=@ + .    |
|     .  @ =      |
|      .ooB       |
|       .+o       |
+-----------------+
root@host:/etc/salt/keys#

Step 5: Upload SSH Key Pair


With Digital Ocean now enabled with our SSH key, the next steps before provisioning are to configure salt-cloud with the Digital Ocean API key for our account, and to define profiles for virtual machine sizes, geographic locations, and images. The salt-cloud configuration for credentials is kept on the salt-master in /etc/salt/cloud.providers.d/, while the profiles for each public cloud are kept in /etc/salt/cloud.profiles.d/. See the salt-cloud documentation for more details on configuration options.

Step 6: Configure Salt-Cloud

# /etc/salt/cloud.providers.d/digital_ocean.conf
do:
  provider: digital_ocean
  # Digital Ocean account keys
  client_key: <YOUR KEY HERE>
  api_key: <YOUR API KEY HERE>
  # Directory & file name on your Salt master
  ssh_key_file: /etc/salt/keys/digital-ocean-salt-cloud

# /etc/salt/cloud.profiles.d/digital_ocean.conf
# Official distro images available for Arch, CentOS, Debian, Fedora, Ubuntu
ubuntu_512MB_sf1:
  provider: do
  image: Ubuntu 12.04.4 x64
  size: 512MB
#  script: Optional Deploy Script Argument
  location: San Francisco 1
  script: curl-bootstrap-git
  private_networking: True

ubuntu_1GB_ny2:
  provider: do
  image: Ubuntu 12.04.4 x64
  size: 1GB
#  script: Optional Deploy Script Argument
  location: New York 2
  script: curl-bootstrap-git
  private_networking: True

ubuntu_2GB_ny2:
  provider: do
  image: Ubuntu 12.04.4 x64
  size: 2GB
#  script: Optional Deploy Script Argument
  location: New York 2
  script: curl-bootstrap-git
  private_networking: True

It is now time to configure Salt to provision our application container.  To do that, we need to create two Salt states, one to provision Docker on a newly created VM, and another to provision the application container.  Salt states are Salt’s declarative configuration states, which are executed on target hosts by the salt-minion.  States are an incredibly rich feature of Salt, one that could hardly be covered with any sufficient level of detail in a tutorial like this.  This example is not a particularly smart or optimal use of states, but it is simple. You’ll want to read up on Salt states to develop the best practices for your environment.
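Before looking at the states for this tutorial, it helps to see the declarative shape states take in general. The following is a generic illustration only, not one of this tutorial's states: a state ID, the module functions to apply, and their arguments, with ordering expressed through require.

```yaml
# example.sls -- a hypothetical, minimal Salt state
apache2:                  # state ID; doubles as the default 'name' argument
  pkg.installed: []       # ensure the apache2 package is present
  service.running:        # ensure the apache2 service is up
    - require:
      - pkg: apache2      # only start after the package is installed
```

The salt-minion reads this description and converges the host to match it, which is what makes the pattern repeatable across many VMs.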

Step 7: Create a Salt state for Docker

# docker/init.sls -- state IDs inferred from the require references below
docker-python-apt:
  pkg.installed:
    - name: python-apt

docker-python-pip:
  pkg.installed:
    - name: python-pip

docker-py:
  pip.installed:
    - name: docker-py
    - repo: git+
    - require:
      - pkg: docker-python-pip

docker-dependencies:
  pkg.installed:
    - pkgs:
      - iptables
      - ca-certificates
      - lxc

docker-repo:
  pkgrepo.managed:
    - repo: 'deb docker main'
    - file: '/etc/apt/sources.list.d/docker.list'
    - key_url: salt://docker/docker.pgp
    - require_in:
      - pkg: lxc-docker
    - require:
      - pkg: docker-python-apt
      - pkg: docker-python-pip

lxc-docker:
  pkg.installed:
    - require:
      - pkg: docker-dependencies

This first Salt state defines the dependencies and configuration for installing Docker on a newly created VM.

Step 8: Create Salt state for the application container

# apache/init.sls -- state IDs inferred from the require references below
apache-image:
  docker.pulled:
     - name: jthomason/apache
     - require_in: apache-container

apache-container:
  docker.installed:
     - name: apache
     - hostname: apache
     - image: jthomason/apache
     - require_in: apache

apache:
  docker.running:
     - container: apache
     - port_bindings:
         "80/tcp":
             HostIp: ""
             HostPort: "80"
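One detail the post leaves implicit is how these states get assigned to new minions: that happens through the top file, which is not shown here. A minimal sketch, assuming the Docker state lives at docker/init.sls and the application state at apache/init.sls under the Salt file root:

```yaml
# top.sls -- hypothetical; maps minions to the states above
base:
  '*':          # target every minion, including freshly provisioned VMs
    - docker    # install Docker first
    - apache    # then pull and run the apache container
```

With a mapping like this in place, any VM that salt-cloud brings up and attaches to the master will converge to a running apache container on its first highstate run.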

Now that configuration is complete, we are ready to provision 1..n virtual machines, each with a running instance of our application container. Before we do that, let us first verify that the Salt master is actually working. We know there is at least one agent that should be talking to this salt-master at this point: the agent running on the salt-master itself.

Step 9: Verify that Salt is working

root@host:~# salt '*' test.ping
root@host:~#

Satisfied that everything is in working order with the salt installation, we can now provision our first virtual machine with an instance of our container using salt-cloud.

Step 10: Provision a VM with an instance of the container

root@host:# salt-cloud --profile ubuntu_512MB_sf1
[INFO    ] salt-cloud starting
[INFO    ] Creating Cloud VM
[INFO    ] Rendering deploy script: /usr/lib/python2.7/dist-packages/salt/cloud/deploy/
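To provision several VMs in one shot rather than one at a time, salt-cloud also supports map files. A hypothetical sketch reusing the profile above, with made-up VM names:

```yaml
# /etc/salt/cloud.maps.d/apache.map -- hypothetical example
ubuntu_512MB_sf1:
  - web1
  - web2
```

Running salt-cloud -m /etc/salt/cloud.maps.d/apache.map -P would then create web1 and web2 in parallel.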

After the salt-cloud run completes, it emits a YAML blob containing information about the newly created VM instance. Let’s use the IP address of the instance to see if our application is running.

Step 11: Verify application is running 
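One simple check, substituting the instance IP reported in the salt-cloud output (INSTANCE_IP below is a placeholder), is to ask Apache for its response headers:

```
# Expect "HTTP/1.1 200 OK" and a "Server: Apache" header if the container is up
curl -I http://INSTANCE_IP/
```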


Great success!

We have established the basic setup and management pattern for our infrastructure. Adding additional public clouds is easy, thanks to salt-cloud, providing a single control interface for our entire application infrastructure. But where to go from here? A starting point is to consider how Salt states can be used to manage VM and container life-cycles, in the context of the overall continuous integration and deployment process. I plan to share some of my thoughts on that specifically in a future post. Obviously, there is a lot of thought to be given to your specific objectives, since that will ultimately determine the deployment and operations architecture for your application. However, Salt is an incredibly powerful tool that, when combined with Docker, provides a declarative framework for managing the application life-cycle in the immutable infrastructure paradigm right out of the box. That versatility puts a whole lot of miles behind you, allowing you to focus on other core challenges with application deployment and operations.