Tomorrow, December 5th 2008, marks the 20th anniversary of my starting work in “the industry.”
This calls for five minutes of reminiscing.
I turned up for my first day of work as a trainee systems programmer at a big Australian bank’s EDP department. I recall being more than a little shocked at having to be at work before 8:06am each day. I was introduced to everyone I’d be working with shortly before being sent off to North Sydney to do MVS and IBM System/360 Assembler training for a few weeks.
At training I learned that the most powerful instruction in Assembly language was the no-op. The coding standard dictated that you sprinkle them throughout your code so that smarter programmers than you could patch your code, in memory, while running by overwriting your no-ops with useful code and then adding a statement to branch to the patch code over the defective instructions.
The bank had some great people. Some were consummate professionals and some were real cowboys.
Towards the end of my time at the bank I was introduced to the pointy-end of the economics of software development and process improvement.
A colleague returned from a long liquid lunch and let me in on the “big secret.”
He said only fools write good code. Code has to break for you to get called in. Being called in gives you overtime and visibility. Overtime is extra money. Being called in is heroism. Develop skill in writing bugs that are serious enough to call you in about, yet easy enough to fix soon after you get into the office. Overtime was paid for in four hour minimum units. Nobody notices people who write reliable code because they never get to perform heroic acts. Notice that the people who get promoted are those that handle high stress situations. Notice that the people handling these high stress situations are generally responsible for creating the high stress situation in the first place.
It was good motivation to find a new job.
Once upon a time in 2002 or 2003, Mr Ed asked me if I had any interest in writing an essay for a new web site he was thinking of setting up called HackNot! I had a bunch of mostly sardonic and simple thoughts in a file somewhere and I sent it off to him asking if this was the kind of thing he was after. He replied yes, and I duly got busy with something else, forever edging towards completing it.
I think Ed grew tired of waiting for me, edited my incoherent rambling into an ordered list, and put it up for public view.
I thought that now, nearly 5 years later, would be a good time to re-examine some of those thoughts. I’m way kinder, way gentler, and way more verbose now. The process of looking at what I thought 5 years ago is bound to ignite a flame war with myself.
I’ve taken a copy of the HackNot! article and added notes along the way.
In 1995 the Unabomber’s manifesto was published in The New York Times. In 2001 the Agile Alliance published their Agile Manifesto. Now our own Tedious Soporific gets in on the act. His personal manifesto leaves no area of software development untouched – from the hazards of frameworks to the role of “doof doof” music in requirements elicitation, it’s all here. Truly a heartbreaking work of staggering genius.
Heartbreaking and staggering but not genius.
- Humans read requirements.
- Humans lose interest if they can’t understand the requirement, or why it’s there.
- Requirements numbers should never contain the section number of the document they are in.
- A requirement should only mandate an implementation in the rare circumstances where you need to require a concrete point of integration (shall run inside IE 4.0 and up, etc.)
- Requirements should be testable. If you write a requirement that may be hard to test, add supporting notes about how you envisage it will be tested.
This is a bit of a crude list of things that have irked me about requirements in the past. I would say, without fail, that poorly written or conceived requirements cause me the most pain in my work. Both reading other people’s and re-reading my own hastily-prepared requirements can be a trial. These days I think I would just point to Karl Wiegers’ two excellent texts and leave it at that.
According to Capers Jones, best practice is to spend 10% of a “systems” software development project’s effort on systems engineering.
- User interfaces are design, not requirements.
- Track requirements met during development by mapping requirement coverage to test results.
I stand by my statement, but the reason it’s a manifesto entry is that UI design is not a requirements generation activity. It’s a design activity, and a vitally important one.
One of the most interesting diagrams I’ve seen about the importance of user interface design and its relationship to estimation is on page 39 of Steve McConnell’s “Software Estimation: Demystifying the Black Art” (see a spreadsheet I prepared earlier here), where he discusses the cone of uncertainty. The diagram shows how the variability of estimates narrows as a project progresses.
Before “product definition,” a project may take from one quarter to four times the time estimated at that point. Once detailed requirements are available, the project is likely to take between two thirds and one and a half times the estimate. The next milestone is user interface design: once the UI is designed, the likely variation narrows to between 0.8x and 1.25x of the estimate. If you’re not doing a UI design and re-estimating at each milestone of project definition, you’re not serious about estimates.
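A minimal Python sketch of that arithmetic, using the multipliers quoted above (the milestone names are paraphrased and the table is illustrative, not a faithful reproduction of McConnell’s figure):

```python
# Cone-of-uncertainty multipliers, paraphrased from the ranges above.
CONE = [
    ("product definition",    0.25, 4.00),
    ("detailed requirements", 0.67, 1.50),
    ("UI design complete",    0.80, 1.25),
]

def estimate_range(nominal_months, milestone):
    """Return the (low, high) likely outcome for an estimate made at a milestone."""
    for name, low, high in CONE:
        if name == milestone:
            return nominal_months * low, nominal_months * high
    raise ValueError(f"unknown milestone: {milestone!r}")

# A 12-month estimate made before product definition could really mean
# anything from 3 to 48 months; after UI design it tightens considerably.
print(estimate_range(12, "product definition"))   # → (3.0, 48.0)
print(estimate_range(12, "UI design complete"))
```

The point of re-estimating at each milestone is simply to move down this table, where the same nominal number carries much less risk.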
I’d go further than tracking requirements to tests and say that requirements should be mapped to architecture and design so the relationship between design decisions and requirements is obvious to developers and archaeologists who look upon your project in later years.
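To make the idea concrete, here is a minimal sketch of such a traceability mapping in Python; the requirement IDs, test names and design elements are hypothetical, invented purely for illustration:

```python
# Hypothetical traceability matrix: each requirement is linked to the
# tests that verify it and the design elements that realize it.
requirements = {
    "REQ-001": {"tests": ["test_login_ok", "test_login_bad_password"],
                "design": ["AuthService"]},
    "REQ-002": {"tests": [],                      # no coverage yet
                "design": ["ReportModule"]},
}

def untested(reqs):
    """Requirements with no tests mapped to them."""
    return [rid for rid, links in reqs.items() if not links["tests"]]

def unrealized(reqs):
    """Requirements with no design elements mapped to them."""
    return [rid for rid, links in reqs.items() if not links["design"]]

print(untested(requirements))    # → ['REQ-002']
print(unrealized(requirements))  # → []
```

Even a structure this simple makes coverage gaps visible to developers now and to project archaeologists later.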
- Because a customer asks for a feature to be implemented, that alone doesn’t make it a good feature.
- Moral time: A man walks into a hospital having already diagnosed himself with prostate cancer. He demands that a surgeon operate immediately to remove the cancer. The surgeon operates. The man is caused inconvenience, discomfort and pain for the rest of his life from side-effects of the operation. The surgeon could have refused to operate, citing that 80% of men die with, and not because of, prostate cancer. The surgeon gave the man what he asked for and not what he needed.
What a smarmy bastard I was.
A simpler way of putting this is that the job of a software professional is to tell your customer when they’re asking you to do something they shouldn’t do. Sure, they can go ahead and do whatever the silly thing is anyway, but you shouldn’t really let someone demand that your team build a content management system when there are free and commercial versions out there that may meet your customers’ needs. Customers tend to talk about how the system should look, so it’s easy to fall down a rabbit-hole of having the customer design everything for you. If you are mindlessly implementing everything your customer tells you to, you’re somewhere on the spectrum between working with a very capable customer and not behaving as a professional software developer.
- Q: What can you brush your teeth with, sit on, and telephone people with?
A: A toothbrush, a chair and a telephone.
This is a bit too subtle and clever, which contradicts something else in the manifesto (see below). I advocate looking at the problem to see if you’re solving one problem or several. Consider whether it’s really appropriate to solve two or three distinct problems with one development or a monolithic product. Any way a project can be made smaller, or divided into several projects, is advantageous. In software we know from Barry Boehm’s work that there is a diseconomy of scale in software development, so three small projects have a much greater chance of succeeding and coming in near to time than one big project does.
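Boehm’s diseconomy of scale is easy to illustrate with a COCOMO-style effort formula, effort = a × size^b with b > 1. The coefficient and exponent below are nominal COCOMO II-flavoured values used purely as a sketch, not a calibrated model:

```python
def effort_person_months(ksloc, a=2.94, b=1.10):
    """COCOMO-style effort model: superlinear in size because b > 1."""
    return a * ksloc ** b

# One 90 KSLOC project costs more effort than three 30 KSLOC projects,
# even though the total amount of code is identical.
one_big = effort_person_months(90)
three_small = 3 * effort_person_months(30)
print(f"one 90 KSLOC project:    {one_big:.0f} person-months")
print(f"three 30 KSLOC projects: {three_small:.0f} person-months")
```

With b = 1 the two figures would be identical; the gap that opens up as b rises above 1 is exactly the diseconomy this manifesto entry is pointing at.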
- Specify performance early.
- Optimise late.
Specify performance requirements as early and as realistically as you can. If you find your customer pulling very ambitious figures out of thin air, mark the performance requirement for later. If you find there is hard and believable data behind the performance requirement, note the source and breathe easy that the requirement is believable.
Optimising late is a reaction to developers who get carried away making code perfect up front. If you have a believable performance requirement that is unprecedented then you should see this as a project risk and consider prototyping, benchmarking and discovering early if your project needs to be sent to an early grave for being infeasible before you spend a lot of money and reputation on something impossible. Pay attention to performance and don’t optimize things that aren’t a bottleneck or won’t help you meet your project goals.
- Styles in Microsoft Word are your friend. If you want a Word processor, use Word. If you want a typewriter, use Notepad.
There are many things that may be reused in software development. Documents are one of the most commonly reused development artifacts. If you use Word to do your documentation, learn about styles, automation, cross-references and footnotes, and keep your source material close to the document and in source control. Don’t be too proud to go on a Word course – if you’re indenting with tabs or hitting Enter twice to make paragraph spacing, please go on a course.
- Beware the “framework.”
Perhaps this should read “Beware the project with no end and no clear customer,” as this entry was about projects that allow themselves the luxury of thinking that everyone wants what they’re producing; they just don’t know it yet. I’m sure many successful frameworks came from over-resourced projects with an ambition to meet requirements they just invented for customers that “don’t even exist yet,” but I’m also sure there’s a 20:1 ratio of failures to modestly successful frameworks born from over-generous budget allocation.
- Spurn the “reusable component.”
Reuse was big at the end of the ’90s, resurfaced as Product Line Engineering in the early ’00s, and seems to have died down prior to resurfacing as SOA around about now. I’ve written before about how I think that design, experience and plain-old code-stealing are some of the most effective forms of reuse. Backed by tools that support findability and developer communication, code reuse will blossom in an organization. Building a repository of carefully curated reusable components and controlling their use and limiting their mutations is intended to reduce testing requirements and defect propagation, but it also stifles innovation and discourages reuse.
- It’s hard to specify a framework because what users might require rarely becomes what they’ll need for sure: “You ain’t gonna need it.” Your customer wants to pay for a solution for their problem, not everyone else’s.
Would you like to be a customer who wanted to pay for an SQL query and got an ORM framework for only twice the price?
- XML: it’s just a verbose way of representing structured data.
- SIP: it’s just a signalling protocol.
The road to hell is paved with overreaching hyperbole about the potential of new technologies. Both XML and SIP are useful technologies; both brilliant and compromised. Before them was WAP, Token-Ring, OpenDoc and a lot of other promising technology that was way over-hyped.
- Top Ten Lessons from the Dot Com Meltdown
- Try not to build R2 before R1 has any customers.
Most projects have version two of a product loaded up with features a long time before version 1.0 has been seen by customers. The absolute best resource for requirements is customers, and their best ideas don’t come in focus groups or interviews about the problems they have. Customers’ best ideas come when they’ve seen version 1.0 and hate it enough to tell you what they’d really like. If the next release is already full of requirements that are sourced from marketing or product management, customers will be upset with 2.0 as well. If you can’t make them happy with release 1.0, make them happy the second time they see a release. Customers you listen to become your ally and make marketing a whole lot easier.
- When someone says “I know this is a death march, but you will be rewarded well if you succeed or fail,” run (away) like the wind.
This was a lie told to me once. Use the experience to remain as professional as you can, or run.
- Habitable development environments.
This was a placeholder for something I read once and could never find again. The idea is that, like share accommodation, teams need to find their level of process and code hygiene. Some brilliant developers work with little infrastructure and formality and others work best with lots of structure. When building a team consider what each developer finds habitable and make sure you get a team that can accept the coding standards, process rigour and meeting load that you intend to inflict on them. In a share house, some people like to leave the dishes until there’s a good pile, others like to wash everything on a regular basis. If you have lived with people at either end of the cleanliness and habitability spectrum, you know how important this can be.
- Give directives in positive terms.
- Avoid saying what shouldn’t be done.
- Toddlers and software engineers want to please you, and do the right thing.
- Toddlers and software engineers hear “Don’t do X” and become paralysed with uncertainty because they now know for sure what they shouldn’t do, but can’t figure out what you do want them to do.
How patronising! It’s a subtle message.
If you want people to change, tell them what the outcome should look like in terms they understand. If an executive stands up and says “don’t be evil,” your expectation is that you can go right up to the line, lean over and smell the sulphur, and still be in the clear. If your executive says “from now on we’re not a services company” middle management might set about firing your services staff with no vision for what you really want to be.
- Usable interfaces should not be innovative. If it’s clever or tricky, then it’s probably confusing.
- Users don’t use the right mouse button.
Let’s back off to saying that you can’t expect users to rely on the right mouse button. Watch some regular users use applications sometime.
- It’s hard to know when to double-click unless someone shows you.
Watch some users sometime. A lot of older users double-click hyperlinks because someone once showed them that the way you get a computer to respond to you was to double-click.
- Users don’t use tree views. Users don’t get trees.
- Users only (very) rarely see trees on computers.
- Developers love tree-views.
- DevStudio [and Eclipse have] trees.
- Windows file explorer shows a tree.
- Most users never see or use tree-views when they’re using Windows (or Macs) and don’t find them comfortable.
- Think about where the Windows explorer is located in the Start menu (it’s an “Accessory”) and where the “My Computer” icon is (on the Desktop) and what happens when you double-click it.
- You have to configure Outlook to show a tree view of your folders on the left.
- Standard Windows application file (save/open) dialog does not show a tree.
- A tree is not an easy metaphor. When was the last time you saw a real live tree of folders?
I guess I was tired of seeing lots of new applications that looked just like the IDEs used to build them. The tree has thankfully been supplanted by the Google-like search & results list.
- It’s hard to write requirements unless your ears are being pounded by “doof doof” music.
Actually “doof doof, chikka, doof doof” music is better, I’ve found.
- Unfinished Sympathy is the finest pop song ever written.
I have to revise this one day. But the combination of serious-sounding nonsense lyrics, orchestral pomp and an addictive hook makes it hard to beat.
- “Refactoring” is not synonymous with “fixing bugs”.
Like the XML and SIP thing, this was written at a time when the term was overused. “I’m going to refactor that bug report” or “I’m going to refactor my performance problems” were not uncommon.
I’ll follow up later with “Epiphanies about software” posts, but I think I’m done with manifestos.
“Because I have nothing to show, nothing to say, we shall try to speak about something else.”
I’ve watched a lot of TED videos, but this one is the most fun. Philippe Starck designed that citrus juicer, and a whole lot of other very special everyday items. Even though he has nothing to say, his humility, outlook on life and motivation are well worth 20 minutes of your time.
I’m a bit of a tinkerer in my leisure time. I like messing around with different software, hardware and writing little programs to try stuff out and keep current with a variety of fun-looking technologies. I also do a little bit of development for friends and family, but mostly this is fun-driven as well. This makes me someone who knows jack about most trades and a master of not so many.
My modus operandi is to prepare a list of things to do, and then work through them in an order prioritized by fun and tempered by expense.
It’s like I’ve been running my home leisure time as a consultancy practice with myself as my biggest customer. Although I find tinkering endlessly enjoyable I rarely end up with a finished product because I’m more into the journey than the destination. I’ve been thinking about how much more satisfying tinkering would be if it occasionally resulted in something that was more like the dictionary definition of completion. My inspiration has been a recent post by Marc Andreessen on personal productivity, and a recent re-reading of the PSP and TSP management frameworks.
I remember reading that the most important thing about a consultancy practice (apart from customers) is to have a “pitch-able” methodology and a knowledge base. Inspired by this, I thought I’d create a list of things I think I need to develop a sufficiently documented, scrutinized, and understood “home tinkering” methodology with an emphasis on achieving a collection of finished products. I never thought I’d yearn for efficient leisure time, but there it is, I do.
Getting my home tinkering consultancy practice outlined should help crystallize my random thoughts into more consistent processes than I have now. It also opens it up for improvement through helpful suggestions.
My list of areas to discuss, document and make consistent is currently:
- Tasks and To-dos
- Package, Test, Deploy
- Source configuration management
- Password Management
- Data Backup
Over time I’ll write a page on each.
A while back I purchased and read a book by Jeffrey Pfeffer and Robert Sutton called “Hard Facts, Dangerous Half-Truths & Total Nonsense,” subtitled “Profiting from Evidence-Based Management,” which seeks to examine many deeply ingrained business beliefs that aren’t necessarily backed by any study or evidence.
I have had an interest in performance management programmes and software engineers for a long time. This is probably because all of the systems of assessing and rewarding performance I have participated in over the years have seemed flawed in different ways. This post is an examination, a random walk through the Hard Facts chapter on incentive programmes.
Performance management programmes typically consist of processes for goal-setting, development, performance appraisal and reward. While the reward is expected to be commensurate with the amount by which performance exceeds expectations, when objectives are not met a programme of correction can be undertaken instead of a reward. As carrots are said to be a better control of behaviour than metaphorical sticks, most companies opt for a system of financial incentives to reward good performance.
In their book Hard Facts Pfeffer and Sutton examine the question “Do Financial Incentives Drive Company Performance?”
Hard Facts lays out the theory of how organizational performance can be improved with incentives.
1. Motivation effects: Even though incentives don’t have an immediate effect on ability, individuals will work harder if they know they will be rewarded for harder work.
2. Information effects: Incentives inform individuals about what the organization considers important.
3. Selection effects: An incentive program can help drive away the wrong type of person and attract the right type for the organization.
In IT and related fields, financial incentives are the norm. Hard Facts makes the point that not everyone is motivated by money, and that people motivated by money aren’t necessarily the right people to have in an organization. Still, it seems undeniable that money motivates humans to apply more effort to their work.
It turns out that most people wildly overestimate the effect of financial incentives. Hard Facts quotes a General Social Survey of US citizens where the same people who rated pay as the third most important aspect of their jobs (“important work” and “a feeling of accomplishment” were first and second) thought other people were much more motivated by pay than themselves. “73 percent thought that large differences in pay were necessary to get people to work hard…”
Software Development industries are filled with knowledge workers, whose day-to-day activities are a mix of engineering practices and inspired invention. The activities necessary to release or service software are complex and are often subtly inter-related. Setting easy-to-measure goals that have a surface-level relationship with the behaviour to be encouraged is both seductive and commonplace, but it is rarely able to define broader organizational goals.
Most software developers have a story that parallels the legendary Dilbert cartoon above. A decision is made to set goals or provide incentives in a way that was intended to solve a problem, but introduces disastrous side-effects: you’ve been told that you need to write more code, so you set up a cron job to generate zillions of lines of perfect template code each day and check it into source control; or you’ve been asked to cut down on software costs, so you’ve illegally copied software rather than raise a purchase order.
The example that comes to my mind is when projects in my part of an organization were measured based on how long problem reports stayed open. Developers would take emails and phone calls and write problem details in their journals, working on the problem off the books. When the problem was fixed, they would report the problem in the problem reporting database and shortly after they would close the problem with the fix they had prepared earlier. While this made for some very impressive metrics, it was not the most efficient use of developer resources. Hard Facts summarizes this problem as “Be careful what you wish for, you might just get it.” The question is, is there a better way?
A simple plan
Peter Drucker, in his essay on “Management by Objectives and Self-Control” talks about a device called a “management letter.” This is part of a system he used in his organizations to make sure there was a correctly-communicated understanding of what each person needed to do to help the organization reach its goals in a top-down manner, and allow each person in the organization to reflect the best way to achieve those goals back up the hierarchy — bottom up. Each manager’s subordinate writes a “management letter” twice a year to their manager.
“In this letter to his superior, each manager first defines the objectives of his superior’s job and of his own job as he sees them. He then sets down the performance standards that he believes are being applied to him. Next, he lists the things his superior and the company do that help him and the things that hamper him. Finally he outlines what he proposes to do during the next year to reach his goals.”
And so, at every level of the organization an understanding is negotiated, made, and remade every 6 months with useful information about what needs to change, and how the organization has been perceived to be changing along with some goal-setting in the language of the person that needs to reach them. The oft-forgotten part of Drucker’s title of this seminal essay is “…and Self-Control” which emphasizes the optimal state of an organization — one where supervisors understand their staff, and trust them to work with minimal supervision.
This kind of objective alignment and goal setting doesn’t lend itself to being put in an organization-wide database, and relies on prioritizing effective communication and the elimination of misdirection over measurable goals.
Another potential pitfall Hard Facts emphasizes is differential reward systems. Most reward systems divide the world into three groups of people: high achievers, people not worth mentioning, and people who need help. They observe that very few organizations fund their rewards significantly enough to provide substantial differences in the rewards offered to people in the three categories, but the social cost of this tiered approach is enormous. Small differences in salary can cause huge damage to self-esteem or provoke counterproductive behaviour.
Hard Facts warns us to be very careful about using financial incentives and to try very hard to use non-financial rewards. The origins of being blinded to other forms of incentive go back at least to Taylor in the 1900s and probably beyond. Simple goals and incentives can work, particularly for workers who perform tasks with a straightforward relationship between effort and productivity. Multi-dimensional roles like software and other knowledge work are very tricky to design performance incentives for. In both cases, incentives must be carefully designed not to diminish other important organizational goals.
Making goal-setting a meeting of the minds with understanding of goals flowing up and down an organizational hierarchy seems a good approach. This approach places a priority on subjective assessment against established goals and doesn’t make reward decisions any easier.
Hard Facts doesn’t offer a recipe for providing fair and worry-free compensation while avoiding the risks of poorly directed incentive systems, but it does highlight some significant issues to consider. No wonder there are so many compensation consultants available to regularly change an organization’s performance management and incentive programmes. This stuff is hard.
I’ve been occasionally scratching at a blog post for a while (ok, 12 months) which tries to distil the usability wisdom of Tog, Spolsky and Krug into a nugget-sized blog post. Tonio’s advice seems far more practical and usable.
Excerpting, directly from here and adding the later rule:
- Consistency. It’s the bugbear of small minds, but guess what, a lot of users have those small minds. Don’t just be consistent with yourself — be consistent with as many things as possible.
- Progressive Disclosure. Show the stuff they probably want/need to see and allow the rest to be disclosed if needed. Show the functionality they probably want/need to use and allow the rest to be disclosed if needed. Make the stuff you show as powerful and general as possible and you may not need to hide much at all.
- Forgiveness. Make it hard to screw up (try to detect and prevent errors before they’re made), make it easy not to screw up (give useful feedback), and give people a way out if they screw up anyway (undo).
- Visibility. If you can’t see it, it might as well not be there. More advanced users will look in more places, so make the stuff idiots need to see bleeding obvious.
- Beauty & Simplicity. Ugliness is distracting. We don’t like ugly things for a reason, often “ugly” is shorthand for real problems — disease in organisms and inconsistency and carelessness in software. A consistent program is generally a tidy program and untidiness is the easiest form of ugliness to eradicate.
- Maximise Generality, Minimise Steps. These (often conflicting) goals are powerful tools for rethinking and improving an interface. If you can do more with less you’re almost certainly improving your UI. Improving generality (e.g. providing a dialog that does more things) is only good if it doesn’t increase steps and vice versa — that’s the key. (Imagine you could easily put all the Photoshop filters into a single dialog, but it would have a menu of all the filters in it … so you’ve created a very general dialog, but you haven’t saved any steps.)
- Smart Defaults. When something needs to have a default value, try to pick that default intelligently (but make it easy to change). Defaulting to the user’s last choice is often a simple, effective option.
- User Errors are Crashes. If a user makes a mistake it’s equivalent to (and often more damaging than) a crash. Treat it like one.
- Avoid Preferences. Configuration choices are often a design failure. Is there a way to make this not an option?
- Wizards. These are generally a sign of design failure. Why isn’t it obvious how to do what it is the Wizard helps you do?
- Online Help. It ought to be good and largely unnecessary.
- Frequent tasks should be efficient. (a late addition from here) [...] operations you do all the time don’t need to be mnemonic, they need to be efficient
Comments off to direct comments to the original posts:
Most organizations wake up one day and wonder how much more competitive they would be if they never wrote the same code twice. Many organizations undertake significant organizational changes to implement their new found software reuse religion. Generally, reuse initiatives like these are enthusiastically evangelised, resisted by staff, provide dubious economic benefit when they’re in operation, and take years of investment before they are renamed according to the fashion of the day and re-launched.
At the moment web services (WS) and service oriented architecture (SOA) define the vocabulary of software reuse. In previous years it was object models (distributed and plain-ol’), product line engineering (PLE) or component-oriented architecture leading the charge to re-usability nirvana. My software reuse strategy isn’t an interface definition language or a step-by-step methodology. It is a policy suggestion.
I think the easiest and most effective way to achieve effective software reuse for any organization is to institute this policy:
Every software project must publish all of its development artifacts — code, test, requirements, help, training, bug reports and design information — such that any developer in the organization can search and view any development artifact or identify and contact the people working on the project without restriction. For bonus marks, we will standardise on a configuration management, bug and task tracking, development and documentation environment that makes looking at any other project’s artifacts as close to zero-cost as possible.
This may not be a revelation for some organizations. Some developers already enjoy unfettered access to their colleagues’ project files and can search across the entire organization’s software assets for code, design and knowledge they can make use of. The tendency for large organizations is to Balkanize and tool up in such a way that code and design become invisible outside a project. I’m advocating that no matter what the political structure, make sure development environments present no barrier to searching and copying.
I think too much is made of the value of generating reusable software that is packaged in nice, neat, black boxes lined with soft archetype-affirming design documentation, and set to rest in a software asset mausoleum. Standardised look and feel and avoiding embarrassing contradictions in different software produced by the same organization are just some good reasons to formalise some reuse rules and methodologies, but the way to start down that road is to make sure there is a culture of cooperation and sharing. If you are a CTO looking to implement change to foster software reuse and you can’t see and search all of the software you’re responsible for, I doubt your reuse initiative is going to be more than a PR exercise anyway.
While I think there is merit in the approaches advocated in many texts, I think the strategy most likely to succeed is the one that requires no more investment than an important and influential person writing down that, from now on, no project shall have code or any other artifact that is inaccessible to any other developer in the organization. All project artifacts should be available to be indexed and searched by the organization’s chosen (preferably competent) search engine.
The benefit of this policy is that reuse will occur. Developers like to branch and merge more than they like to use half-thought-out libraries, on someone else’s schedule, full of set-in-concrete code designed by people who never imagined your specific problem would be the one they needed to solve. Fertile open source projects like Linux are surrounded by mini projects that branch from the “main base” and contribute the benefits back in a time-frame that suits both the main and branched bases, opening up opportunities for future branched projects. With an open internal development environment, developers can find and talk to each other, search and review the available code and design, copy what they need, share experiences, and avoid unnecessary rework. All of this is possible without any explicit software reuse initiative.
The drawbacks? A rogue developer will be able to release all of your organization’s code (and not just the projects she was working on) to the known universe when things go sour between you and her. So sue Sue.
According to the signature image at the NASA Goddard Space Flight Center’s software reuse site, “It should be as easy to find a good quality reusable software asset as it is to find a book on the Internet.”
Jon Udell cites an InfoWorld Programming Survey from 2003, in which the biggest obstacles to software reuse, as perceived by developers, were:
- “Effort required to design software for reuse” — 29%,
- “Lack of awareness of which software is available to reuse” — 28%,
- “Effort required to learn and effectively apply software available for reuse” — 21%,
- “Programmer disinclination to package software for reuse” — 10%,
- “Effort required to package software for reuse” — 7%, and
- “Other” — 5%.
Clearly an open internal development environment is applicable to four of the five points here. Perhaps these respondents already have completely open work environments and were looking beyond sharing and cooperation for the perfect reuse technologies.
If not? Perhaps making software available for reuse without any effort spent designing it for reuse will help. Perhaps making all software easily searchable and available for reuse at no cost will help. Perhaps never asking programmers to package software for reuse will help.
I have recently taken a not-random-enough-for-my-liking walk through management self-help text books, management consulting residue and organizational change activities. I was reading my usual suspect blogs looking for contrarian refreshment when a comment referred me to a site that half the universe probably knows about already and has been hiding from me.
Malcolm Gladwell writes pretty well.
The article I stumbled upon was “The Talent Myth” from July 2002. “The Talent Myth” is a fascinating examination of one aspect of the pathology of Enron’s downfall — emphasizing talent without effectively rewarding performance.
Gladwell begins by noting that McKinsey consultants convinced Enron that they should pursue a policy of employing only the most talented and intelligent people and letting them find their own way of contributing. “They believe in stars, because they don’t believe in systems.”
“The only thing that differentiates Enron from our competitors is our people, our talent,” Lay, Enron’s former chairman and C.E.O., told the McKinsey consultants when they came to the company’s headquarters, in Houston. Or, as another senior Enron executive put it to Richard Foster, a McKinsey partner who celebrated Enron in his 2001 book, “Creative Destruction,” “We hire very smart people and we pay them more than they think they are worth.”
There are many organizations that pride themselves on hiring the “top X%” of available talent. I have worked for a couple who claim different top percentages. Joel Spolsky has an interesting short essay explaining why he thinks this is a commonly held delusion:
It’s pretty clear to me that just because you’re hiring the top 0.5% of all applicants for a job, doesn’t mean you’re hiring the top 0.5% of all software developers. You could be hiring from the top 10% or the top 50% or the top 99% and it would still look, to you, like you’re rejecting 199 for every 1 that you hire.
While Joel cautions against deluding yourself that you’re employing the top X% of developers, Gladwell sets out to explore a different thesis:
But what if Enron failed not in spite of its talent mind-set but because of it? What if smart people are overrated?
What if McKinsey consultants were wrong? What if hiring smart people and letting the best performing and most talented people pursue their own interests didn’t provide the right outcome for the business?
The article contains uncomfortably familiar scenarios and observations for anyone who has experienced a gamut of management styles:
Wagner and Robert Sternberg, a psychologist at Yale University, have developed tests of this practical component, which they call “tacit knowledge.” Tacit knowledge involves things like knowing how to manage yourself and others, and how to navigate complicated social situations. Here is a question from one of their tests:
“You have just been promoted to head of an important department in your organization. The previous head has been transferred to an equivalent position in a less important department. Your understanding of the reason for the move is that the performance of the department as a whole has been mediocre. There have not been any glaring deficiencies, just a perception of the department as so-so rather than very good. Your charge is to shape up the department. Results are expected quickly. Rate the quality of the following strategies for succeeding at your new position.
a) Always delegate to the most junior person who can be trusted with the task.
b) Give your superiors frequent progress reports.
c) Announce a major reorganization of the department that includes getting rid of whomever you believe to be “dead wood.”
d) Concentrate more on your people than on the tasks to be done.
e) Make people feel completely responsible for their work.”
Wagner finds that how well people do on a test like this predicts how well they will do in the workplace: good managers pick (b) and (e); bad managers tend to pick (c). Yet there’s no clear connection between such tacit knowledge and other forms of knowledge and experience. The process of assessing ability in the workplace is a lot messier than it appears.
The article also notes a potential consequence of emphasising intelligence over performance that I was surprised by:
[...] Dweck gave a class of preadolescent students a test filled with challenging problems. After they were finished, one group was praised for its effort and another group was praised for its intelligence. Those praised for their intelligence were reluctant to tackle difficult tasks, and their performance on subsequent tests soon began to suffer. Then Dweck asked the children to write a letter to students at another school, describing their experience in the study. She discovered something remarkable: forty per cent of those students who were praised for their intelligence lied about how they had scored on the test, adjusting their grade upward. They weren’t naturally deceptive people, and they weren’t any less intelligent or self-confident than anyone else. They simply did what people do when they are immersed in an environment that celebrates them solely for their innate “talent.” They begin to define themselves by that description, and when times get tough and that self-image is threatened they have difficulty with the consequences.
I think I learned more from reading this article than from two weeks of cringing at tortured metaphors in best-seller management texts.
I think I’ll spend a while longer at Malcolm Gladwell’s site.
Decades of reading poorly written, difficult-to-navigate, absent, and out-of-date help content have conditioned computer users to believe that resorting to the help in an application can only be a humiliating waste of time. I have come to the conclusion that help content should simply be placed online so it can be indexed by Google.
Unusually, perhaps, for a software guy I have created large quantities of help content in my time. In fact, on one application I was the primary creator of help content. I was quite proud of my help content, but I acknowledge that I had an excellent editor who spent her time chastising me, removing my idioms, tightening up the language and transforming it into Microsoft Windows HTML Help – according to the custom of the time. The resulting content was brilliant – succinct, well-written, nicely structured, copiously referenced and accurate (modesty prevents me from using hyperbole). We sweated blood to make sure that the content would be helpful, relevant and, most importantly, able to prevent our users from having to make support calls.
When users finally got hold of our application it became apparent from the support call topics that nobody ever read the help.
“That’s not fair,” I thought. “People who use the help probably don’t make support calls.”
I was wrong.
This application had spawned a little mailing-list discussion forum with a few hundred subscribers. When users asked for help on the list, those who answered never said “The answer is in the help.” Not ever. I knew most of the answers were right at their fingertips — right there in the help content! Nobody ever read our fabulous help content.
Help content – the state of the art
Is help really that bad? Here is a short list of my impressions of applications I’ve sampled the help content from:
Microsoft Visual Studio
Visual Studio has a pretty nice, integrated help system. Decent IDEs tend to have good linkage to keyword and API information and Visual Studio is probably one of the nicer examples of help integration. Content is voluminous. The style is good and very navigable in the main topics. Updates are frequent. You can search online for updated topics. There isn’t much to complain about here.
Mozilla Firefox 1.5
The Firefox team have developed their own help system, which is a curiously half-hearted WinHelp replacement. Annoyingly, it has “always on top” behaviour and is designed according to the tree-view-on-left, content-on-right pattern of sleep-walked MFC applications. I have no evidence that it is an MFC application; I’m just giving you my impressions. Content is light, but it does a good job of getting to the point quickly.
Google Picasa
Typical of a Google acquisition, Picasa’s help menu item takes you to the web for answers. http://picasa.google.com/support/bin/topic.py?hl=en&topic=0 is the Picasa “Knowledge base” featuring a search function to help answer your questions. Sadly, the knowledge base appears to be a “walled garden” with a simple keyword search engine rather than a clever use of the Google search engine armed with all of the world’s published Picasa knowledge. On a whim I asked about getting the “color[sic] middle” effect, which I know to be called “Focal B&W” in Picasa.
“Your search – color middle – did not match any answers in our knowledge base. Please edit your search terms and try again.”
Ok. Searching on the term “Focal B&W” used in Picasa turned up two results, the second seeming more relevant than the first, but not too bad.
WordPress
Web applications are a law unto themselves. They don’t have help per se, but they often (as in the case of WordPress) have a central touch-point site and gazillions of users ready to help with problems on various forums and blogs. Beyond links to the WordPress Codex, there’s plain old Google to help solve problems, with search hits in abundance.
Flickr
Yahoo!’s Flickr (who would have thought, 10 or 20 years ago, that a possessive noun could have a “!” in it or be missing a vowel or two?) is *cough* very Web 2.0 in its approach to help. Click on the Help link and you’re taken to a search page. The trick is that the search page only searches discussions on Flickr. This is actually quite useful and goes well beyond the topics you’d expect. For instance, looking for “macro focus” you get a result list headed by “Sorry to be a newbie, but what is macro?” which, when clicked, leads you to helpful answers contributed by users. This top forum question was followed by someone who could be channeling Salvador Dali:
y flickr is allowing nude photos?
im against it!
A better solution for help
Help isn’t as dire as I remember it being, but it’s not universally helpful or consistent, and as a result users don’t care to read it anyway. What can you do?
My proposal is that all applications put their help content online in a form that’s easily searched with (for instance) Google: boring old HTML. If you can’t publish to the Internet, or if your application has nothing to do with being network connected, then at least put a local mirror of the HTML content on the PC for desktop search applications and browser access, with an option for Internet-connected users to search for related topics using a search engine like Google. (I don’t want to sound too much like a Google fanboi… next year AltaVista might be back in the search game with a vengeance!)
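As a minimal sketch of that suggestion (the topic names, the `MyApp` product name, and the output layout are all made up for illustration), this generates the local HTML mirror: one plain, crawlable page per help topic plus an index, each page carrying a link that hands the topic over to a web search engine.

```python
import html
import pathlib
from urllib.parse import quote_plus

# Hypothetical help topics; in practice these come from your help source files.
TOPICS = {
    "printing": "Choose File > Print, then select a printer and click OK.",
    "shortcuts": "Press Ctrl+K to open the keyboard shortcut editor.",
}

PAGE = """<html><head><title>{title}</title></head>
<body><h1>{title}</h1><p>{body}</p>
<p><a href="https://www.google.com/search?q={query}">Search the web for this topic</a></p>
</body></html>"""

def write_help(topics, outdir, product="MyApp"):
    """Emit one plain HTML file per topic, plus an index page."""
    out = pathlib.Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    links = []
    for name, body in sorted(topics.items()):
        page = PAGE.format(
            title=html.escape(name),
            body=html.escape(body),
            query=quote_plus(f"{product} {name}"),  # e.g. "MyApp printing"
        )
        (out / f"{name}.html").write_text(page, encoding="utf-8")
        links.append(f'<li><a href="{name}.html">{html.escape(name)}</a></li>')
    index = "<html><body><h1>Help</h1><ul>{}</ul></body></html>".format("".join(links))
    (out / "index.html").write_text(index, encoding="utf-8")
```

Because the output is nothing but static HTML, a desktop search tool or a public web crawler can index it with no help-system-specific support at all, which is the whole point.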
The drawbacks are that you’re limited to what a browser and a search engine can do for you. The benefits are:
- You’re not limited to what a custom or Windows-based help system can do for you,
- Help becomes self-correcting: if you make mistakes in the help, users can update the body of knowledge organically through blogs, forum discussions, or any other form of web publishing, and
- Users will find your content through their usual mode of help search.
Be kind to your users. When it comes to help – put it all online.