I am angry.
I just spent about an hour fiddling with indirect dependencies in my Maven configuration.
One issue was that while my version of Hibernate's entity manager depended on javassist:javassist in version 3.4, the new Sesame dependency I added pulled in jboss:javassist in version 3.7 -- with the same package names, of course, so I ended up with a NoSuchMethodError. That wasn't too hard to figure out, but it was still a pain.
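The usual way out of such a clash -- a sketch only, and the Sesame coordinates below are illustrative rather than exact -- is to exclude the duplicate artifact from whichever dependency drags it in, so only one copy of the javassist packages ends up on the classpath. Running mvn dependency:tree beforehand shows which transitive dependency is the culprit.

```xml
<!-- pom.xml sketch: keep only one javassist on the classpath.
     groupId/artifactId/version of the Sesame dependency are illustrative. -->
<dependency>
  <groupId>org.openrdf</groupId>
  <artifactId>sesame</artifactId>
  <version>2.2</version>
  <exclusions>
    <exclusion>
      <!-- same packages as javassist:javassist 3.4, so one of them has to go -->
      <groupId>jboss</groupId>
      <artifactId>javassist</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```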
Worse was the whole "Simple Logging Facade for Java (slf4j)" crap. Everyone seems to think Java's built-in logging facility is not good enough for them, so they want to add something better. Or maybe there is this dream of letting your users plug their logging facility of choice into your application, which is a nice thing to aim for, but commons-logging seems to me to have proven well enough that it doesn't work.
What the slf4j folks did was introduce a breaking change between versions 1.5.5 and 1.5.6: a previously public static member suddenly became private. Just the thing to do in a patch release if you want to annoy the world and waste the time of people who surely have nothing better to do. If you also make sure that people can depend on your artifacts in lots of variants, then suddenly you find yourself writing FAQ entries like this, which are only partially helpful.
The real question for me is this: "Why do all these projects insist on adding these dependencies in the first place?" If you are providing libraries intended to be used by a wide range of people, I would think you would be very cautious about adding any dependency at all. And since I still haven't heard any compelling reason not to use the JDK logging facilities, I wonder why people keep avoiding them.
Yes, JDK logging is not the best -- but it seems good enough for me.
Yes, the Handlers provided don't compete with e.g. log4j's Appenders -- but you can write your own Handlers, you know?
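To make that point concrete, here is a minimal sketch of a custom java.util.logging Handler; the class name and formatting choices are mine, not taken from any particular project.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

/** A minimal JDK logging Handler that writes every record to stderr. */
public class StderrHandler extends Handler {

    public StderrHandler() {
        setFormatter(new SimpleFormatter());
        setLevel(Level.ALL);
    }

    @Override
    public void publish(LogRecord record) {
        if (isLoggable(record)) {
            System.err.print(getFormatter().format(record));
        }
    }

    @Override
    public void flush() {
        System.err.flush();
    }

    @Override
    public void close() {
        flush();
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false);   // skip the default console handler
        log.addHandler(new StderrHandler());
        log.info("plain JDK logging, custom handler, no extra dependency");
    }
}
```

Install it once via Logger.addHandler() (or declare it in logging.properties) and the rest of the code keeps using plain JDK loggers.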
And yes, there are some features such as the notion of a logging context that are missing from the JDK logging -- but I still haven't seen any code actually using those.
So if you have a good reason why one should use those logging libraries: please tell me. If not: please stop adding unnecessary dependencies that cause pain further down the track. And if you are in charge of managing releases for a public library: please turn on your brain and maybe try something like sigtest to help you with your decisions on API changes.
Wednesday, March 18, 2009
Tuesday, February 17, 2009
ORM: The Leaky Abstraction
I strongly dislike ORM.
Object-Relational-Mapping that is, I quite like the other kind.
The main reason for this dislike is that ORM is one of the worst cases of leaky abstractions I've ever encountered. Again and again I find myself having to jump out of the object world, identify the particular query I want to run in the relational world, and then figure out how I can convince my JPA persistence layer to do exactly that. Instead of just formulating a query in some relational query language I now have to understand not only the query but also how my JPA provider of choice maps objects and their annotations into the relational world. Life certainly didn't get easier this way.
My current problem is the way Hibernate does eager fetching.
All I want is a fully initialized object that I can pass out of my JPA session and that will keep working. This object has a few one-to-many relationships to small objects, all of which should be available. Some of these sets can be reasonably large, but not large enough to be a concern for in-memory storage. Unfortunately Hibernate tries to fetch them all in a single query, which means that instead of fetching first N1 entries, then N2 entries, ... then Nn entries, it creates a single query for a cross-product with N1*N2*...*Nn rows -- enough to run out of half a gigabyte of heap space on a database that is less than half a megabyte of plain-text SQL.
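A stripped-down sketch of the situation (the entity and field names are invented for the example): two eagerly fetched collections on one entity are enough to trigger the cross-product, because a single outer-join query returns one row per combination of child rows.

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.OneToMany;

@Entity
public class Document {
    @Id @GeneratedValue
    private Long id;

    // Two eagerly fetched collections: Hibernate may join both into the same
    // SELECT, producing sections.size() * keywords.size() result rows
    // instead of sections.size() + keywords.size().
    @OneToMany(fetch = FetchType.EAGER)
    private Set<Section> sections = new HashSet<Section>();

    @OneToMany(fetch = FetchType.EAGER)
    private Set<Keyword> keywords = new HashSet<Keyword>();
}

// Minimal stand-ins for the small child entities mentioned in the text.
@Entity
class Section {
    @Id @GeneratedValue
    private Long id;
}

@Entity
class Keyword {
    @Id @GeneratedValue
    private Long id;
}
```

Depending on the Hibernate version, annotating the collections with @org.hibernate.annotations.Fetch(FetchMode.SUBSELECT), or leaving them lazy and loading each one with its own JOIN FETCH query, avoids the cross-product.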
I could try another JPA provider, but I strongly suspect that it is not going to help, and thanks to some omissions in the JPA spec I'm kind of committed to Hibernate already. The JPA spec actually doesn't define what "eager fetching" or "eager loading" means: both terms are used quite a bit but never defined -- at least I didn't find a definition searching through the document.
I suspect the JPA crowd is going to tell me not to use eager fetching then. If my session lived at least as long as the object, that would be OK, but it doesn't. So now I'll have to write code that traverses everything I need to fetch myself, maybe even invent my own annotation so I can maintain that with reasonable effort across multiple entry vectors. What a pain.
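A hedged sketch of what that traversal code tends to look like, assuming the Document entity from the sketch above grew getters for its collections and those collections were switched to lazy loading: Hibernate.initialize() forces each lazy collection to load while the session is still open, one query per collection instead of one cross-product query.

```java
import org.hibernate.Hibernate;

/** Sketch only: assumes a Document entity with lazy collections and getters. */
public final class DocumentLoader {

    private DocumentLoader() {
    }

    /**
     * Loads the lazy collections of a Document while the persistence session
     * is still open, so the object can safely be handed out after the
     * session closes -- one query per collection instead of one cross-product.
     */
    public static Document fetchFully(Document doc) {
        Hibernate.initialize(doc.getSections());
        Hibernate.initialize(doc.getKeywords());
        // ... repeat for every relationship that must survive detachment
        return doc;
    }
}
```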
Maybe it is time for me to try using some object database technology. There should be some way out of the ORM pain.
Labels: cross-product, hibernate, jpa, mapping, object-relational-mapping, orm, out-of-memory, pain, scalability
Thursday, November 27, 2008
Is Scala the new C++?
When the Scala buzz started a while ago I was quite interested, since it sounded like a way out of Java as a language without losing the Java ecosystem. It also promises more expressiveness in the type system, all checked at compile time, about which I'd love to say "who wouldn't love this?" -- but of course that would get all the fanboys of those so-called "dynamic languages" angry, so I won't. I certainly thought this sounded like a great new language that I wanted to try.
So I went and did my share of reading and then had the opportunity to work with Tony Morris for a bit, mostly applying ScalaCheck to some of his ADT code, but also playing around with the Lift framework. Working with Tony was quite insightful, and while I have a CS degree and studied quite a bit of maths, including some universal algebra, his understanding of functional programming is certainly beyond mine. But while he seems happy using Scala as the next-best thing to Haskell, I just didn't catch fire.
My main problem with Scala is that it always offers many ways to do the same thing, and that there are quite a few language features that seem to make life easier but whose subtlety frankly scares me -- implicits are probably number one on that list. It seems that the authors of the language have a strong focus on the writability of code and are willing to pay for that by making it potentially less readable. Maybe that's what it takes to win people over in times where dynamic languages are all the rage, but for someone like me who has worked on code that survived a few years of maintenance, readability is the first priority. And it seems I'm not alone; in fact this post was inspired by Cedric Beust talking about Guido van Rossum feeling similarly about Scala.
It's not surprising that Guido feels that way, since I used to describe Scala as "too much like Perl" in regard to its attitude of being easy to write and allowing multiple ways of doing the same thing (TMTOWTDI), which stands in classic opposition to Python's attitude. And when it comes to scripting I've always preferred Python over Perl.
But recently I changed my opinion a bit: Scala is less like Perl than it is like C++. As far as I can tell Perl is deliberately designed to be a mix; people even refer to it as a language for postmodern programming. Scala, on the other hand, claims to be much cleaner and more expressive. It also tries to replace Java as the language of choice for a large professional developer community. In many ways it seems to try to do for functional programming what C++ did for object-oriented programming.
Unfortunately it seems to be turning out about as messy. Scala feels a lot like C++ in that it gives you all these powerful new features and allows you to express yourself in new ways, but it also gives you plenty of new ways to shoot yourself in the foot. C++ was (and still is) a language of choice for people who want to become gurus, knowing all the esoteric details and twists of their language and denouncing everyone who asks for more simplicity as not worthy of such a powerful language. Scala seems to have everything set up to entice that type of audience.
Maybe in a few years' time the equivalent of Scott Meyers' "Effective C++" books will come around and a new generation of programmers will ask themselves: do we really need this? Understanding his books was the point where I decided to move on from C++ to Java, and I've hardly looked back. Somehow going from Java to Scala feels like looking back in many ways, and I'd rather go forward.
Unfortunately I still haven't found the forward. Maybe Scala has to become popular first in the same way C++ had to be reasonably popular for Java to rise. Maybe I will do some Scala coding before the next round happens, but unfortunately I am not very likely to enjoy it as much as others since I will spend too much time dreaming of how things could be much better. Poor me.
Thursday, July 31, 2008
Open Source vs. commercial development
I am a big fan of OpenSource products.
Unlike others I'm not very political about it, and I belong to the category of people who prefer something like an Apache or BSD licence over the GPL licences. In fact I'd love to release code straight into the public domain; it's just the idea of being sued over it that stops me from doing that.
But that's not the point of this post, it is just setting the tone. The point I will try to make is why I believe OpenSource products tend to be better and how that can actually be applied to commercial development. I will start with the dual question: why do many commercial products evolve into something hardly anyone likes anymore?
I believe a main problem with commercial development is the business model of selling a new version every N years. The way you get someone to pay for a new version is to convince them that the new version has new features that are worth the price. Therefore from a business perspective your main goal has to be developing new features that can sell this new version. This is quite a different goal from fixing problems with the existing codebase or adding robustness. It is not suitable for designing a good user interface, either -- that might involve removing features or redesigning the existing ones. But any changes to existing features (including bugfixes) will make your customers ask why you didn't do that in the first place and they will not be happy to pay for the change.
Overall this business model leads to featuritis: features start being added for the sake of adding features, not for the good of the product as a whole. In the first few versions of a product this effect will not be too visible, since there will be genuinely useful features to add and the overall complexity of the product will not be too big yet. But with every new version it gets harder to find new useful features to add and a spot in the user interface to put them. After a decade or two you get a product that looks like MS Word and similar programs.
While featuritis can easily be a problem for an OpenSource product too (adding new features is more fun, so people tend to do that), the commercial world suffers from an additional problem: there is not much incentive to fix issues with the existing feature set. The only incentive is to avoid annoying your customers so much that they stop buying the next version or your other products. But quite often it is easier to fix that with marketing, particularly if the buying decision does not lie with the actual user.
Again: OpenSource can suffer here, too. The incentives for fixing bugs can be low in an OSS project, but that is not necessarily so. In many projects fixing a long-standing bug earns a lot of respect, which creates an incentive to do it. Additionally, the feedback options for users are much better, and developers tend to see negative feedback from users through forums, mailing lists and issue trackers. This is another way incentives to fix problems are created: the developers might decide to fix the problem either to be nice to someone who is asking or just to stop them from posting more annoying comments. Either way, the problem gets fixed.
Compare this to many commercial off-the-shelf products, where user feedback is often routed through multiple levels of support and filing a problem directly is not even an option (I have a bug that crashes Visio and is easy to reproduce, if anyone is interested). Of course it is in the best interest of the company not to allow this direct feedback, since the developers are supposed to create new features to sell the next version of the product. But it means that the developers are detached from the users' perspective on the product they are creating.
So is this a problem with all commercial software product development? Not really: the problem lies with the business model. The way out for commercial development is a subscription model where people do not pay for versions but pay a regular fee that includes support and upgrades. That way the pressure for new features is lower: if the customer stays happy with the existing version, you have a constant income stream. And since support is included in the offer, there is now an incentive to solve existing problems: any problem that stays in the code creates support requests, which create cost. Fixing the problem not only makes the customer happier (thus increasing the chance they stay on the subscription), it also reduces support costs.
Quite a few smaller companies use this business model successfully. Sadly, companies like Microsoft find themselves unable to change. While I strongly believe both Microsoft and their customers would win by changing the model, no one is willing to pay a subscription fee for products they already bought licences for, even if the cost is equal to or less than their usual upgrade costs over the long term.
I believe OpenSource development has some other advantages as well, but in my opinion this is one of the most relevant ones. There are others, such as the low entry barrier (no buying decision needed) and the fact that some people just don't want to pay for software. But if you care about good-quality products and are happy to pay for them, then it makes sense to look at the business models of the commercial vendors and compare them with the support options available for the OpenSource products.
Wednesday, July 30, 2008
Robustness features
Since we are living on a bit of a smaller budget at the moment, my wife and I don't spend much money on new items. But we both like to see any expense as a bit of a long-term investment, and so we ended up buying a Miele washing machine after the old one gave up. Admittedly this choice was based solely on anecdotal evidence (such as my mum still using her 30-year-old machine), but recently two incidents made me believe we made the right choice.
The first one was that we accidentally used washing powder not meant for a front loader. The problem seems to be that these washing powders foam too much and can harm the machine that way. The interesting bit is that our machine actually noticed the problem: after the cycle had ended it started alternating a "Check detergent" message with the normal "Finished", which is what made me notice the mistake.
The second incident was a power failure: while the machine was running, it lost power for a few minutes. This didn't seem to worry it at all; it just continued from where it was before the power went out, as if nothing had happened.
Both of these features fall into a category I like to call "robustness features". Admittedly it sounds a bit clumsy, and I was considering the catchier but inaccurate "quality features", but let's keep it for correctness' sake until someone finds something smarter.
A robustness feature is something that has been added to a product with the sole purpose of increasing its robustness, i.e. the chances that it will behave well under some erroneous condition. In the examples above Miele spent time on (a) adding a sensor and logic to detect the use of inadequate washing powder and (b) adding some kind of non-volatile memory that allows the machine to remember its state through a power failure. Neither of these features seems trivial; they probably add significantly to both development and production costs.
What makes these features interesting for me is that they show a certain commitment to producing high-quality products. These are features that are not easy to use in marketing. People tend to think "I wouldn't use the wrong detergent" or "power rarely fails", so these features are often ignored when comparing products. Additionally, it is easy for a vendor to pass blame if someone complains: the owner should just not have used the wrong detergent, and it is certainly not the manufacturer's fault if the power fails. Together, these two effects mean that many companies do not care about putting such features into their products -- which in turn makes me believe our choice of washing machine was a good one, since Miele seems to be one of the few companies around that still care.
Note that this also applies to software products. If you have ever written code to deal with external input, you know that a lot of time can go into avoiding, detecting and handling errors. I once wrote an input filter for a reasonably small XML format that ended up having more than one hundred different error messages -- it would have been a lot easier to use fewer error messages and group multiple errors together, but that would have meant that whenever an error occurred the user had to guess what was wrong. Since I have been in the role of that guessing user much too often, I tend to write my code with very detailed error messages, and in that case the company I was working for was willing to make that investment not only in my time but also in maintenance and the additional cost of localization.
I believe it is good to put this extra effort into writing robust code that avoids failing by not letting bad things happen in the first place, detects errors early and accurately, and handles them with detailed error messages and ideally some decent recovery mechanism. This applies not only to parsing input formats, but also to user interfaces, library design and even general application code where resources might run out and similar problems can occur.
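As a small illustration of the difference (the value being parsed and the messages are invented for the example), compare telling the user exactly what was wrong and what was expected against a generic "invalid input":

```java
/** Tiny sketch of input checking with specific, actionable error messages. */
public final class PortParser {

    private PortParser() {
    }

    /**
     * Parses a TCP port number, failing with a message that says what was
     * wrong and what would have been acceptable, rather than a generic
     * "invalid input".
     */
    public static int parsePort(String text) {
        if (text == null || text.trim().length() == 0) {
            throw new IllegalArgumentException(
                "Port is missing: expected a number between 1 and 65535.");
        }
        final int port;
        try {
            port = Integer.parseInt(text.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                "Port '" + text + "' is not a number: expected a value between 1 and 65535.", e);
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException(
                "Port " + port + " is out of range: expected a value between 1 and 65535.");
        }
        return port;
    }
}
```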
This effort might not be that visible to your potential clients, but the clients you have will learn to appreciate it sooner or later -- after all, things do go wrong every now and then. They might never know exactly what you did and how much time you spent thinking about what errors can occur and what to do about them, but they will feel good about your product. And that is what counts for me.
Sunday, June 29, 2008
Hiring software developers
Going through my usual dose of tech blogs I stumbled across Alex Miller's post on the "mismatch problem" in hiring which took me back to my studies in organisational psychology back in uni days (psychology was my minor).
One detail I remember vividly is that informal job interviews tend to show negative validity with respect to various measures of job performance. Meaning: the more you like someone in such an interview, the better off you are sending them away.
It gets better the more you structure the interview, so you should always plan your interviews well beforehand and stay on topic. The only criteria I remember doing well in studies are looking at work samples and the expensive option: assessment centers (although they can vary a bit).
How you get work samples for positions in software engineering is another question that isn't easy to answer, though. Of course you can have candidates code a bit or try to debug something, but unless you really need a hard-core coder, that only tests an aspect of software development which isn't all that important in a modern team-based environment. For an architect position you can have them sketch roughly what they would do for a given set of requirements, but again that is usually only part of the job description.
One thing I like to look at is an existing portfolio, which you get if the candidate has been involved in open source development. That includes not only code they have written, but also documentation, mailing list posts and commit messages. The nice thing about such an open source portfolio is that it covers the technical side as well as writing style and social behaviour. But of course not every candidate has such a portfolio to look at.
In the end it is just plain hard. And as in project management, people like to pretend it gets easier by introducing numbers, even though those numbers are usually quickly proven useless. Like SLOC counts, all these grades and test results just seem too pretty not to be good.
Having said all that: it has been years since I really looked at the scientific work in this area, and my memory is not always the best. So take this with a large grain of salt and feel free to correct me if you can -- I'm happy to learn more about this topic.
Labels: hiring, job interviews, organisational psychology, portfolio, tests
Wednesday, May 28, 2008
I ❤ Manifest Typing
Lately I have been reading a lot of discussions about the pros and cons of "dynamic languages" vs. "static languages". Without trying to dissect what people actually mean by these terms, let me make a statement about one aspect of the spectrum -- a statement which seems to put me in a minority position:
I love manifest typing.
Manifest typing means that you actually write down the types you expect parts of your code to have -- be they local variables, members, parameters or return values. Quite often this is referred to as "static typing", although the latter also includes other forms such as type inference or structural typing.
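A trivial Java illustration (the names are invented for the example): the declared types are my written-down expectations, and the compiler checks them against whatever the right-hand side actually returns.

```java
import java.util.ArrayList;
import java.util.List;

public class ManifestTypingDemo {

    // The declared return type is a promise I wrote down; callers and the
    // compiler can both hold me to it.
    static List<String> loadNames() {
        List<String> names = new ArrayList<String>();
        names.add("Ada");
        names.add("Grace");
        return names;
    }

    public static void main(String[] args) {
        // Manifest typing: the expectation is stated at the use site, too.
        // If loadNames() ever changes to return something else, this line
        // stops compiling and I get to vet the change.
        List<String> names = loadNames();
        // With type inference the equivalent declaration would silently adapt:
        //   var names = loadNames();   // (later-Java syntax, for comparison)
        System.out.println(names.size() + " names loaded");
    }
}
```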
Why do I love manifest typing? Because it lets me express what I want right in place and I think that is unique to manifest typing. There are other advantages it shares with static typing in general such as certain types of tool support, but only manifest typing allows me to write down what I think should be true.
That step of writing down my expectations is extremely important for me. Not doing so seems to make my code brittle: if the type of whatever I'm currently looking at changes over time it might cause problems with further assumptions I made at the time of writing. Even if type inference ensures that the compiler will detect certain incompatibilities, I somehow still prefer to be told about that change so I can vet it. After all I might know about further assumptions I made that the compiler does not know about.
It's even worse if I have to rely on tests. While I do believe that tests are a valuable tool, I think people who believe they replace specifications are dangerous. A specification is inherently universally quantified, i.e. what is specified has to be true in every case. A test, by contrast, is always existentially quantified, i.e. it checks only particular instances. Maybe you get all the interesting cases, maybe you don't. Code coverage tools can help, but taking that to the extreme just isn't easier than specifying what you want in the first place. If I can state a hard rule I prefer that over listing example cases; only the aspects I can't specify do I test.
My problem with modern programming languages is not that they force me to write down my expectations; my problem is that their type language is not expressive enough (and sometimes, as in the case of Java, also broken). Add to that a culture that is happy to ignore basic principles of specification (e.g. the JDK is happy in multiple places to let subtypes break the specification of their supertypes) and you get a big mess. But my answer to this mess is not to abandon the idea; it is to try again and hopefully get it right next time.
To make manifest typing really useful it should be expressive enough to document expectations and promises quite clearly. This is a core notion of the Design by Contract approach, and while Eiffel didn't take off, I think there is a lot to learn there. There is also the expressivity you find in XML Schema Part 2, with its language of restrictions on the (value) types. Just imagine how much validation code would become unnecessary if you had that kind of expressivity in the language.
In fact any validation code is nothing but an additional type system on top of what your language offers, which seems to be an indication of how weak our programming languages are. You cannot even express the contents of a basic user registration form in the type system of any programming language I know. I consider that a big shortcoming, since there is clearly a need for that kind of specification on every single tier of our architectures.
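To make that concrete, here is a hedged sketch of what such a "type" looks like today: a value type whose restriction (think of an XML Schema facet like pattern or maxLength) has to be enforced at runtime by hand-written validation code, because the language's type system cannot express it.

```java
import java.util.regex.Pattern;

/** A value type whose real "type" lives in runtime checks, not the type system. */
public final class Username {

    // The restriction I would like to state as part of the type itself:
    // 3 to 20 characters, lower-case letters and digits only.
    private static final Pattern VALID = Pattern.compile("[a-z0-9]{3,20}");

    private final String value;

    public Username(String value) {
        if (value == null || !VALID.matcher(value).matches()) {
            throw new IllegalArgumentException(
                "Username must be 3-20 lower-case letters or digits, got: " + value);
        }
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}
```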
And to round things off: I also love manifest typing (and in this case static typing in general) because it allows me to forget about certain details of my code. In a way my brain is always full -- like the Linux kernel, I try to use every last bit for something useful. Having to remember the types of variables (or even worse: of parameters and return values) takes up space that could be used for thinking about the conceptual model, the data flow in my program, the overall architecture or a hundred other things that are more interesting than that type information. Let my IDE handle this for me, and let it do so in a way that I can be sure I don't have to care unless I want to.