Monday, January 22, 2018

Interview Circus - the technical section

Typical of many developer interviews is the technical portion - a section which presumably is designed to gauge the technical ability of the applicant to accomplish the tasks associated with the position being interviewed for. On the surface, a reasonable... nay... a completely necessary portion if one cares about any level of organizational success. In practice? The technical circus.

The problem is that the manner in which far too many organizations attempt to evaluate technical skill has little to no correlation with what actually makes a good or bad developer. You will all too often find this section of the interview littered with absurd questions such as the all-too-common Fibonacci.


I once shot back at an interviewer after completing one of these circus tests - what makes a bad developer, and how does this test help you identify said traits? It was a deer-in-headlights moment - a very obvious realization (on their part; confirmation on mine) that very little, if any, thought had been put into how to use the limited time they had to judge whether an individual would be competent at completing the tasks at hand.

I'll answer my own question. I'll tell you what, in my experience, makes a bad developer 100% of the time I've encountered someone I would label as such. The list is short: sloppiness, lack of discipline, and an inability to think beyond today - to realize that someone other than themselves might one day have to change/update the code, or that THEY might have to.

Tests such as the Fibonacci sequence and the rest of these silly 'brain teaser' exercises are simply testing what I like to call the 'aha' or 'Eureka' moments of software development: how quickly you figure out 'how' to solve a specific problem. The thing is, software engineering isn't about the Eureka moment; it's about everything that happens AFTER the Eureka moment. Do you think clearly through the requirements and ensure that your solution properly solves all of them? Do you think through any/all potential edge cases and discuss with stakeholders what the expected behavior is? Does your design display that coveted attribute of software that thinks about the future without designing for it? Is your solution not just code, but functionally modular to an appropriate degree, allowing minimal interdependency and maximum maintainability?
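To make the contrast concrete, here's a minimal sketch (in Python, my choice of language here purely for illustration) of the gap between the interview 'eureka' answer and everything that should come after it:

```python
# The interview "eureka" answer: it works, and it tells you almost
# nothing about the candidate as an engineer.
def fib_quick(n):
    return n if n < 2 else fib_quick(n - 1) + fib_quick(n - 2)


# What comes AFTER the eureka moment: a documented contract, validated
# input (the kind of edge case a stakeholder should weigh in on), and a
# linear-time implementation that won't blow the stack on large n.
def fib(n: int) -> int:
    """Return the n-th Fibonacci number, with fib(0) == 0 and fib(1) == 1."""
    if n < 0:
        raise ValueError(f"n must be non-negative, got {n}")
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```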

This is what I want to know about a new developer, and silly in-person quizzes simply don't enable me to assess that. In fact, the irony is that these 'tests' require a developer to do something that I generally discourage: start typing out a solution immediately, without taking substantial time to let the requirements marinate and come up with a true production-grade solution rather than some silly hack.

Any 'one hour' coding challenge is nothing more than technical hazing. You simply cannot, in the span of an hour start to finish, meaningfully gauge the technical chops of an individual.

I don't claim to have all the answers, but I do know there's a much more productive way to conduct technical interviews - one which a number of organizations are progressively adopting, one which enables organizations to get a much better glimpse into the developer's actual coding style and habits, and one which much more closely resembles the actual software engineering work that we do: a simple mini-project, such as this one that a previous director came up with, which allows you to see the developer's approach to creating actual production-grade software around specific constraints/requirements.

Upon completion of said project, you can (if it's completed to your satisfaction) then have them talk through their solution - what they were thinking, what assumptions (if any) were made, what the known limitations are, etc. You can introduce changes to the requirements and see how they would handle a requirement change that conflicts to some degree with a fundamental assumption they made. This is just a much better approach to evaluating ability. Simpler, more comprehensive... but hey, what do I know?

Monday, January 15, 2018

Leadership Philosophy

I've been a tech lead for several years now, and I'm often asked (in interviews or general discussions) about my leadership philosophy. The running joke in the places I've worked has always been that I run a tight ship - but this, while true, can often be misleading.

It's true - my software projects have always met deadlines and have always had some of the lowest bugs-per-release rates of the places I've worked. Some may jump to the conclusion that I do this by dictating every move of my development team, which couldn't be further from the truth. Far from it; the developers that have worked on my systems will readily tell you that they enjoyed more individual freedom working under me than they did under most anyone else.

How do I achieve this? Simple: I focus on and create an extreme culture of accountability around the what, not the how. What I want is simple: robust, maintainable software that conforms to spec (functional and performant), delivered on time. How individual developers achieve this is their concern, and while as lead I'm always willing to suggest ways to improve an individual developer's quality and speed (hello vim... haha), I'm certainly not overly (or much at all) concerned with individual preferences. Focusing on HOW they deliver their code is micromanaging. Yes, it is. Insisting on pair programming? Micromanaging. Insisting on TDD? Micromanaging. The worst thing about micromanaging (apart from the fact that everybody fucking hates it) is that time spent focusing on the unimportant (the how) is time spent not policing and focusing on the important (what is actually being produced).

Look, TDD is awesome - I find it especially useful in making me focus on requirements and not code when writing tests, which is pivotal to writing quality tests. However, if a developer is giving me quality code with well-written tests without using TDD, why on earth would I force them to use TDD?

It's important to distinguish the things that constitute individual working styles/process from the work they produce, and to focus on the latter. Try as hard as possible NOT to dictate personal process - it's not just unhelpful, it's often counterproductive.

Now, if a specific developer is failing to meet clearly defined team standards - e.g. an unacceptable number of bugs introduced by said individual, failing to meet deadlines, failure to write proper tests, code not meeting specs (all things that you should be religiously tracking on your team) - then and ONLY then should you sit down with said developer and go over their process, insisting on changes that you feel would result in better software.

Monday, January 8, 2018

Writing Tests - Why do we actually do it?

I recently had a discussion about whether or not people should test private methods. One of my colleagues exclaimed that the problem with the rule of not testing private methods is that it is essentially tantamount to saying you should not have anything complex enough to test in your private methods, which on its face would appear to be a bad rule.

My colleague's response illuminated a common, and dare I say almost crippling, problem with our industry: a failure to properly distinguish between preference, philosophy, and fact, and to properly convey said differences when communicating; a failure to take the time to understand what the other person is saying, and why, before drawing conclusions.

Not testing private methods is not a rule so much as it is a logical conclusion - one that stems from a fundamental difference in the philosophy of testing.

We do not write tests because the code is complex; inversely, and INFINITELY more important, we DO NOT neglect to write tests simply because the code is simple. We write tests as a means to programmatically state and verify system requirements. We test system requirements/specs, NOT implementation. Good tests, as much as possible, refrain from directly touching implementation details (and let's be clear - a private method is the definition of an implementation detail). An excellent test suite should be able to endure significant system extension/refactoring/implementation changes and still run successfully - this, in fact, is one of the major advantages of a good test suite: the ability to refactor with confidence.

If you’re testing a private method, it means you are doing one of two things:

1) Neglecting to test the actual feature/requirement which said private method was written to fulfill
2) Testing a piece of code that is already being tested via the test written to verify the feature/requirement that said private method was written to fulfill - this is called testing noise.

Both cases are obviously a violation of the philosophy behind testing as stated above, and as a result, both are equally dangerous in the long run. While the dangers of the former are obvious, the latter, apart from rapidly making the test suite unwieldy, makes the development team numb to failing tests (tests are now failing not because the system isn't working properly anymore, but just because 'I cleaned things up and removed this unnecessary private method') and makes developers comfortable deleting tests when they fail.
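To illustrate the point, here's a minimal sketch (the PriceCalculator class and its discount requirement are hypothetical, invented for this example): the tests below state the requirement purely in terms of observable behavior, so the private helper can be renamed, inlined, or deleted without a single test failing.

```python
import unittest


class PriceCalculator:
    """Hypothetical requirement: orders of $100 or more get a 10% discount."""

    def total(self, subtotal: float) -> float:
        return subtotal - self._discount(subtotal)

    def _discount(self, subtotal: float) -> float:
        # Implementation detail: free to change or vanish in a refactor.
        return subtotal * 0.10 if subtotal >= 100 else 0.0


class TestDiscountRequirement(unittest.TestCase):
    # Each test is a programmatic statement of the spec, not of the code.
    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertEqual(PriceCalculator().total(200), 180)

    def test_smaller_orders_pay_full_price(self):
        self.assertEqual(PriceCalculator().total(50), 50)


if __name__ == "__main__":
    unittest.main()
```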

This is one of the fundamental strengths of TDD. TDD forces you to write tests before implementation, which ensures that the tests focus on requirements and not implementation. TDD is a process that lends itself to significantly better tests, which in turn result in significantly better software.
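As a sketch of why that ordering matters (again a hypothetical requirement, this time assuming pytest): the test below is written before Account exists, so the only vocabulary available to it is the requirement itself.

```python
import pytest


# Red: the test comes first, phrased entirely in the language of the
# requirement ("a withdrawal may not exceed the balance").
def test_withdrawal_cannot_exceed_balance():
    account = Account(balance=100)
    with pytest.raises(InsufficientFunds):
        account.withdraw(150)


# Green: only now is the simplest passing implementation written.
class InsufficientFunds(Exception):
    pass


class Account:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount
```

(Names resolve at call time in Python, so defining Account after the test function is fine when pytest runs it.)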

Tuesday, January 2, 2018

Welcome

Here you will find the simple musings of a longtime technologist, as I take a much needed break.

Part perspectives on specific issues, part industry-level disgruntlement from far too many years creating software professionally.
