My favorite article on salary negotiation of all time talks about “fully-loaded costs” of an employee. The idea is that when figuring out what it costs a company to employ an engineer (or whoever), it’s short-sighted to just take their salary and multiply by time. Patrick suggests that "a reasonable guesstimate is between 150% and 200% of their salary" and that the “extra” tends upward as salary does. Of course it depends on benefits and whatnot.

Many people think that’s complete baloney. Specifically, they tend to think that the “extra” is fixed (e.g. $30k extra) rather than a large and increasing percent of salary.

But when you’re negotiating salary, or otherwise asking, “what does an employee’s time cost a company?” he’s right. Let me explain why.

It’s not just the equity, bonus and benefits, which are already a significant and increasing percent of salary. When you ask, “what does DynaCo lose when it loses ten hours of engineer time,” you’re also including all the support staff that are required for them. That can be admins and project managers, but please don’t forget folks like HR who exist primarily to hire other employees and deal with their benefits and grievances.

For that matter, don’t forget middle management. If much of their purpose is to increase the hour-by-hour effectiveness of an employee (or manage them at a fixed effort per employee per year) then costing the company ten hours of engineer time is also costing a chunk of management time.

That’s ignoring other expenses like equipment, office rent, software services and whatnot — but they’re cheap on this scale. As a rule, a software company spends far more on support staff than on material goods or software.

It’s not always the case, but often you’re talking about what a company loses by not getting the time. For any good hire, that is by definition more than what the company pays for that time — an employee should make more money for the company than they cost it. And as a rule, a more senior employee will tend to have a higher multiplier, if they’re a good hire. A junior engineer could be awesome by adding 50% more revenue than they cost — the company expects them to improve, and that’s already pretty good. But a senior engineer should certainly be adding two or three times their salary in revenue (or reduced costs, or future revenue, or reduced risk, or…)

Treating the employee’s fully-loaded cost, particularly their opportunity cost, as 150%-200% of their salary is quite reasonable in most cases.
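
To make that arithmetic concrete, here is a toy calculation in Ruby. Every number in it is invented for illustration; the point is the structure of the estimate, not the figures.

```ruby
# Toy fully-loaded-cost estimate. All numbers are invented for
# illustration, not taken from any real company.
salary           = 150_000.0       # base salary, per year
benefits_equity  = 0.35 * salary   # bonus, equity, insurance, etc.
support_overhead = 0.40 * salary   # admins, PMs, HR, management time
materials        = 15_000.0        # laptop, rent, software seats

fully_loaded = salary + benefits_equity + support_overhead + materials
multiplier   = fully_loaded / salary   # lands in the 150%-200% range

# With roughly 2000 working hours per year:
hourly_cost = fully_loaded / 2000
```

With these made-up inputs the multiplier comes out around 1.85, squarely inside Patrick’s 150%-200% range, and that’s before counting opportunity cost at all.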

Have you already read Patrick’s article that I linked at the beginning? Remember why you care?

Breaking down a giant monolithic Rails app (colloquially a “monorail”) is a very hot topic right now. I’ll give you the boring, accepted advice first: extract obvious bits by breaking apart obvious sub-apps, take nontrivial logic and extract it into /lib and/or external gems, take repetitive models/controllers/views and, if possible, extract those into separate Rails-specific gems (“engines”) as well.

There’s compromise in all of that, but it’s all basically solid advice. It only takes you so far, but it’s clearly in the direction of better code.
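
For instance, the “take nontrivial logic and extract it into /lib” step can be as small as turning a fat-model calculation into a plain Ruby object. A minimal sketch, with all names invented for illustration:

```ruby
# lib/price_quote.rb -- a plain Ruby object extracted from a fat
# ActiveRecord model. It has no Rails dependency at all, so it can
# be tested and reused without loading the whole app.
class PriceQuote
  TAX_RATE = 0.08  # hypothetical flat tax rate for the example

  def initialize(unit_price:, quantity:, discount: 0.0)
    @unit_price = unit_price
    @quantity   = quantity
    @discount   = discount  # fraction, e.g. 0.1 for 10% off
  end

  # Total in the same units as unit_price, after discount and tax.
  def total
    subtotal = @unit_price * @quantity * (1.0 - @discount)
    (subtotal * (1.0 + TAX_RATE)).round(2)
  end
end
```

Because it never touches Rails, it loads and tests in milliseconds, and it can later move into a gem unchanged.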

So let’s talk about some of the newer, less-accepted and dodgier advice, since you’re probably already quite familiar with the previous boring advice :-)

You can break back-end stuff into separate services, then call to them via HTTP from the front end. This is higher-latency but scales better, and lets you build not-specifically Railsy logic outside of Rails, where it belongs. This is an especially good choice if you have components that don’t fit clearly into an HTTP server, such as those that want to run their own threads, processes and/or background jobs. That’s the SOA bit, which is called “microservices” if you’re hip and/or the services are smallish.
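
As a sketch of that front-end-to-service call, here is a minimal client using only Ruby’s standard library. The service, host and paths are hypothetical:

```ruby
require "net/http"
require "json"
require "uri"

# Minimal HTTP client for a hypothetical back-end recommendations
# service. A real one would add retries and error handling; the
# timeouts below are there because remote services die.
class RecommendationClient
  def initialize(base_url)
    @base = URI(base_url)
  end

  # Builds the GET request for a user's recommendations.
  def request_for(user_id)
    req = Net::HTTP::Get.new("/users/#{user_id}/recommendations")
    req["Accept"] = "application/json"
    req
  end

  def recommendations_for(user_id)
    res = Net::HTTP.start(@base.host, @base.port) do |http|
      http.open_timeout = 1  # fail fast if the service is down
      http.read_timeout = 2
      http.request(request_for(user_id))
    end
    JSON.parse(res.body)
  end
end
```

Short timeouts matter here: every such call adds latency to the front end, and the remote service can be down at any moment.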

Microservices/SOA adds a bunch of interesting issues, though:

1) How do you deploy them? Separately or together?
2) How do you make sure services are running locally on your dev box?
3) The more processes you have, the more likely one has died. How do you keep them all running? Same in prod as in dev, or different?
4) How do you handle versioning in the API? How do you handle non-backward-compatible upgrades in services?
5) How do the services communicate? HTTP is inefficient, latency-heavy and error-prone, but message-passing is complex and has ugly failure cases.
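
For the versioning question in item 4, one common answer (certainly not the only one) is to put the version in the URL path and keep old versions routable while clients migrate. A toy, framework-free sketch of the idea:

```ruby
# Toy version router: dispatches "/v1/..." and "/v2/..." paths to
# different handlers, so a non-backward-compatible change ships as
# v2 while v1 keeps answering for old clients.
class VersionedRouter
  def initialize
    @handlers = {}
  end

  def register(version, &handler)
    @handlers[version] = handler
  end

  def call(path)
    # Expect paths like "/v1/widgets"
    version, rest = path.match(%r{\A/(v\d+)(/.*)\z})&.captures
    handler = @handlers[version]
    return [404, "unknown version"] unless handler
    handler.call(rest)
  end
end
```

In Rails itself you would express the same thing with namespaced routes, but the principle is identical: the breaking change lives at v2 while v1 stays up until the last client moves off it.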

That’s not all the issues, but it’s a good start. We’re doing the same at my job, so it’s front-of-mind :-)

You’re better off if you look for one piece that’s already pretty separable and make it its own service first. Then slowly extract others after it’s up and working. That lets you ease into a lot of the issues above. It also lets you answer the questions when you need a specific answer, not a general one. One of the big advantages of services is that you can answer these questions in the way that works best for a specific service — which means it’s not always the same answer for each service.

Hope this helps!

I’ve gotten this question several times in several forms, so here’s a typical one, and my current answer…

I’m a mid-level Rails engineer with strengths in Ruby, Rails and TDD. I understand OOP and REST, but I am relatively weak when it comes to deploying a Rails application. Do you recommend any resources on learning how to deploy a Rails app, grow/maintain its deployed environment, and optimize for your application’s performance and scaling abilities?

In general, not really. This turns out to be a gaping hole in the landscape.

Part of the problem is that there’s not one standard way to do this, and the methods change constantly. You can find tutorials and (sometimes) books on specific, individual tools like Chef, Ansible, Capistrano, Docker and so on, but they’re terrible about providing an overview. You have to already have a good idea of what the top level looks like, which is difficult given that the tools make different assumptions and are for different scenarios, but don’t spell that out.

You can see an example of one way to set it up in my Ruby Mad Science open-source software, but please don’t purchase the associated class. I’m in the process of winding it down.

For pretty much the same reason, it turns out. Deployment is something everybody currently does custom. It’s possible that a Rails-style entrant (and/or Rails itself) will eventually standardize this enough to allow one flavor of deployment instead of thousands or millions of flavors, but we’re absolutely not there yet. Heroku is the current closest.

Certainly look at Heroku if you haven’t already. It’s the only example of a simple standard method of deployment, and it’s the gold standard for “it just works.” It’s also expensive and not very configurable, but it’s still worth looking at.

Right now, people go out and learn by doing, with highly variable results :-(

When people talk about Google ruling the roost, it’s common to compare them to Microsoft. I’m an old guy, and I remember Microsoft as our overlord. So I find that comparison pretty darn funny.

But if you haven’t been doing this since, oh, call it 2005… That doesn’t necessarily mean much to you. Microsoft? They’re not anybody special any more. Certainly they’re not unusually evil. They’re not especially powerful, even. Yahoo is probably as strong as current Microsoft, and Yahoo doesn’t intimidate people into anything.

What was Old Microsoft like that was so scary?

Old Boys

Microsoft specialized in a few specific things:

  • Packaging and contract-writing rivals to their core businesses (OS, Office) out of existence, often through straight-up illegal pricing.
  • Leveraging their OS and Office monopoly to bundle other products, killing rivals with ‘free’ included versions (e.g. Internet Explorer, leveraged to support their web servers, back when those cost money).
  • Suing rivals out of existence, often just because it was more convenient than competing.
  • Cozying up to small companies, hiring their primary engineers, cloning their product and folding it into Windows — even if the product was stupid in the first place. Amusingly, that’s how we got Clippy.
  • Out-marketing technically superior companies, killing them because they’d been outbid in the marketing channels.
  • Putting MS-only optimizations into the OS and/or specifically adding “screw this one other company” sabotage. Remember, everybody had to use Windows back then.

Google is a “Good Old Boys” style company like GM — “what’s good for General Motors is good for America.” In fact, you’ll literally hear Google folks say things like, “what’s good for the Internet is good for Google.”

Now, Google is an ad company and they act like it. But they’re in a wide-open space, they can literally make lots more money by getting more people using the Internet, and nothing they care about is particularly competitive. Blue ocean all the way.

This is much, much better than “carpet bomb potential competitors, eat young companies that might do well, sue and market everybody into oblivion” old Microsoft. Microsoft had a weird persecuted streak: they were the big dog, but they still believed they had to maul any little dog that they thought might get big.

Which meant if you were a little dog back in the days of packaged software, you had to get past a paranoid mortal rival (Microsoft) just to sell your software — everybody used Windows, so you had to deploy to a Microsoft platform. And I promise you, they played very rough sabotaging some of their competitors with the OS. There was no digital distribution platform. There wasn’t a browser you’d want to deploy to. Just the desktop.

Microsoft didn’t come down on everybody, but when they came down, they came down hard. You were always praying that they didn’t see your market as too profitable or too strategic. Only the biggest companies could fight them — and even the biggest companies couldn’t usually win, just fight longer before they sank.

Paul Graham (figuratively) sang a lovely little ode to the end of that era when he noticed it was over.

I recently wrote about good project managers — and I mean it.

But there’s a particular project manager meeting that is usually a bad sign, a sign that you’re not dealing with a good project manager.

I’ll call it the “How You’re Going to Use JIRA” meeting. It doesn’t have to be JIRA, though JIRA is designed with this (awful) meeting in mind. It won’t be phrased quite that way, though that’s exactly what they mean.

To see why this meeting is bad, let’s look at a much better meeting — the Engineering “here’s an internal tool” meeting. Imagine a senior engineer sitting with (internal, usually) customers, explaining something they’ve just finished prototyping (or building) and showing off the various features. “Here’s the new XYZ workflow that you wanted” or “this is a new method of auditing ABC.”

In a good meeting of this type, you’ll alternate “yes, that’s what we wanted” with “that’s not quite right” with “uh, not sure what’s up with that” with “okay, we’ll try that, not sure if it’s what we want.” A good engineer will be taking notes to see which features get which reactions.

A good engineer will also understand that while you can explain a few things to the customer (“well, we actually intended…”), the customer is basically right. If your customer (paid or not, internal or not) says “no, we won’t use this”, you’re basically going to have to rethink the feature.

In other words, each feature is intended for the customer’s convenience, and if the customer disagrees about his/her own convenience, s/he is right and you, the designer of that feature, are wrong.

Now let’s talk about Project Manager Meetings.

When your project manager comes to you and explains the (invariably complex) new JIRA workflow, s/he is presenting you with how you’ll be doing things. And if you disagree, the (unchecked) feature designer is right and you (the customer, who already does this work) are wrong.

It’s like a customer meeting with engineering, except the engineer is right and the customer is wrong.

Ever been through that literally? Like the engineering meeting above, but they’re allowed to tell you how to do it?

It’s marginally more pleasant for the engineers holding the whip, but it’s a really bad sign for the company.

It means that efficiency and in-the-trenches experience don’t matter, but the opinion of appointed people who don’t understand the work does matter.

It means that work is going to go badly from here on out.

Which, not coincidentally, is what it means when your project managers tell you how to do it for their convenience, too.

Recently Chad Fowler wrote a great job description for a software engineer. I replied that a person like that is currently un-hireable in Silicon Valley.

Another fellow asked me, how would I change the list for technical instructors rather than straight-up engineers?

And if they offered something like 20% time to let somebody keep their skills that sharp, what are the really key components of that program?

Those are great questions. Here’s how I answered:

I’d have nearly the same list of qualities for a great instructor. It might change which items were most important — teaching and learning are obvious choices to put higher on the list, clearly. But that list is already focused on somebody who is big on communicating and instructing, and that’s part of why it doesn’t feel like standard developer job descriptions.

A lot of what killed 20% time at Google was that it was at the manager’s discretion. And “at the manager’s discretion” turns instantly into “is awarded politically.” As made the rounds on Twitter recently, most things at Google (and, to be fair, elsewhere) are like that, which is why Peer Bonuses have the same problem: they’re at the manager’s discretion, so they can be turned down by the manager. So they become power games.

Unfortunately, “don’t hire people who play power games” is a terrible solution to this problem, because everybody defines “power games” as “the kind of politics I don’t like.” Which means they have a huge blind spot for the kind of politics they benefit from. There’s not really a way around it. I’m not being holier-than-thou, for the record — my blind spots are as big as anybody else’s.

To answer your question as directly as I can: make 20% time sacrosanct. Don’t give recommendations on what to do with it, don’t require a focus on a technology being taught, don’t make it conditional on performance (and “performance” is a highly political and subjective thing, alas.) You can consider requiring some kind of report on what was done. But even that can be abused, and most people have blind spots about how it’s abused.

Basically, make it very hard for managers to pressure people to work in particular directions with 20% time. The problems with that are the worst with the employees who fit in the least — who tend to be disliked by their managers, who tend to be called “not a culture fit”, and who tend to have the most actually different ideas. Those, in other words, with the greatest potential benefit to themselves and your company from 20% time.

They are also the ones that your smartest employees look to as a bellwether of whether you’re cracking down — whether smart people need to “look productive”, which is usually the enemy of “being productive on the important stuff.”

Managers dictating terms also makes it clear that 20% time is part of the standard company work. That would mean it’s useless for reinforcing most of what you want for the employee described in the blog post. Much like current Hackathons, you’re making it clear that it’s for the company’s benefit, not for yours. Except the company needs a bunch of stuff that isn’t (directly) for its own benefit from employees like the ones in the blog post. 20% time becomes one more thing added on top of your current job description, not a way for employees to feel okay about expanding their own competence outside of current business goals.

And as I wrote in my response, Chad Fowler’s job description requires doing three or four full-time jobs. Anything else you add on top of it is going to seriously reduce the applicant pool. 3-4 jobs’ worth of skills is ridiculous, the equivalent of “I only date ballerinas over 5’10” with a Ph.D.” You’re also requiring people who are, by their nature, perfectly capable of starting their own company, which pays much better on average than you’re willing to, and comes with way more respect.

The 20% time I have described is clearly not doable at almost any large company. It requires making a lot of managers act against their natural inclinations. When you have something “fun” like 20% time, why would you not use it as a reward and motivator?

(For the answer, look up “intrinsic motivation” or watch the Dan Pink TED talk.)

20% time is valuable because often your company is doing many of the wrong things, and smart employees can fix that somewhat. Which means the more it’s affected by your existing culture, the more of the advantage you’re throwing away in favor of conformity.

Managers are in the business of generating conformity. Also of reducing risks, increasing repeatability and increasing consistency. In other words, they specialize in all of the mortal enemies of 20% time.

(Don’t get me wrong. The results of 20% time can help those same things. But 20% time itself is a highly creative activity, and it dies from repeatability, de-risking and extrinsic motivation.)