
Tuesday, March 12, 2013

Could External Quality Assurance Hamper Your Chance of Data Migration Success?


I’m not sure of the exact failure rate for data migration projects. Along the way I’ve seen Gartner report that somewhere around 83% of migrations either fail or overrun their budgets and schedules, and if memory serves I believe I’ve read that Forrester put the success rate at around 16%. The exact number probably depends upon who is doing the reporting, whom they survey and how candid the responses are. Whatever the case, the number is big, scarily big.

To my way of thinking any area of a major project where the weight of historical evidence suggests that somewhere between 8 and 9 of every 10 attempts will be significantly challenged should be subject to two things:

  •  External quality assurance processes, to make sure that the chance of success isn’t derailed by things not being done as they should be. Adding another voice of experience, or another set of eyes if you will; and

  •  Some form of contribution to the wider data migration community of practice, to help understand where things go wrong and, over time (as a collective drawing on the positive and negative experiences of many projects), to evolve the methodologies used to undertake data migrations and lift the success rate.


Unfortunately, at least in my experience, the two items often work at cross-purposes. All too often I’ve seen the first endeavour block or even derail the second. Quality assurance efforts are often established as a risk mitigation exercise. That same aversion to risk often results in a lack of comfort and confidence in anything which can’t be shown to have been done many times before. An established methodology is preferred over anything which might be construed as cutting or bleeding edge. That’s all well and good but, chances are, if you are following an established approach then that approach has been followed by a fair number of those 83% of projects that failed (to some degree) before yours.

This resistance to any attempt to stray from the well-worn path hinders the adoption and evolution of new concepts and, in so doing, prevents them from gaining wider acceptance, development and enhancement over time by the wider crowd of data migration practitioners.

So we, as those practitioners, have two choices. We can accept that we can do little to change accepted practice, keep our heads down, collect our pay cheques and hope that luck or our best efforts place us in the lucky 17%, or we can look for ways to not only increase the chances of success for our own projects but also lift the longer term average success rate of data migration projects in general. If we do want change then we must also recognise that radical shifts in methodology just won’t be possible; governance and quality assurance processes simply won’t allow it. Instead I think we must look for chances to use new techniques to build upon more accepted methodologies, filling the gaps or shoring up the areas that pose the biggest problems in our particular current projects. This could take any number of forms, from using lead indicators alongside lag indicators to gradually build confidence across a project, to the gradual introduction of new and improved approaches to the techniques and timing of reconciliation.
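To make the reconciliation idea a little more concrete, here is a minimal sketch of the kind of incremental check I have in mind: comparing record counts and a checksum of business keys between a source extract and its migrated target, and surfacing the specific keys that went missing or appeared unexpectedly. The function name, field names and sample records are all hypothetical, chosen purely for illustration; a real migration would reconcile far richer attributes than a single key.

```python
import hashlib

def reconcile(source_rows, target_rows, key_field):
    """Compare counts and a checksum of business keys between two extracts.

    Returns a small report dict rather than a pass/fail flag, so the
    detail (which keys differ) is available as a lead indicator long
    before final sign-off.
    """
    src_keys = sorted(str(r[key_field]) for r in source_rows)
    tgt_keys = sorted(str(r[key_field]) for r in target_rows)

    def digest(keys):
        # A cheap fingerprint of the full key set for quick comparison.
        return hashlib.sha256("|".join(keys).encode()).hexdigest()

    return {
        "count_match": len(src_keys) == len(tgt_keys),
        "key_match": digest(src_keys) == digest(tgt_keys),
        "missing": sorted(set(src_keys) - set(tgt_keys)),
        "unexpected": sorted(set(tgt_keys) - set(src_keys)),
    }

# Illustrative extracts: record with id 2 was dropped during migration.
source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [{"id": 1}, {"id": 3}]
report = reconcile(source, target, "id")
print(report["count_match"], report["missing"])  # False ['2']
```

Run early and often against interim loads, a check like this turns reconciliation from a single lag indicator at cut-over into a trend you can watch across the life of the project.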

Whatever, and however, we may go about this, I hope that over time, as a community of practitioners, we can slowly build acceptance for new techniques, new methodologies and new measurement paradigms, and gradually shift what is deemed to be acceptable and common practice. Who knows, maybe sometime before the end of my career we may actually see a failure rate that doesn’t send cold shivers down the collective spines of project managers everywhere.

Thursday, January 12, 2012

We Don't All Need World's Best Practice


In anything but the smallest of IT departments, chances are that at some time you will need to rely on the efforts of others to design and deliver some project or other, be they internal staff, contract resources brought in-house or a full-blown systems integration firm. There’s also a reasonable chance that you may not have managerial authority over those undertaking the work, or perhaps at best a dotted-line report to you. When faced with this scenario most of us will want to exert some degree of control or influence over how the work gets done or, at the very least, we’ll have an approach we’d prefer those charged with performing the work to follow.

One common element I’ve noted amongst many of the people I’ve managed or mentored over my career is a hesitation, or even unwillingness, to commit to paper a set of rules or guidelines which will govern the way in which others conduct their design, development and implementation activities. Often these folks have claimed that this sort of guiding documentation isn’t required, citing reasons including direct supervisory authority over the project team, the ability to exert technical influence in a one-on-one scenario with key project team members, or that the project team is experienced and skilled enough that such governance is not required. To my way of thinking none of these reasons really holds water. Even if you are able to direct or influence the behaviour of the project team, to tackle this in an ad-hoc fashion is ineffective, unfair on the project team (it’s hard to work within given constraints when those constraints are only trickle-fed to you and often only arrive at a review stage), and often this direction falls by the wayside as project pressures heat up (if it ever actually occurs at all). Leaving things ungoverned is also a risky proposition – there are many ways to skin a cat – and there is no guarantee that the approach taken by the project team (no matter how good or effective it may be) will result in activities or outputs which mesh well with your existing team, processes, procedures or landscape. Failure to govern, guide and set some ground rules can, and usually will, result in project outcomes which are not as good as they could otherwise have been.

I have another theory as to why people are reluctant to commit such ground rules to paper – fear! It’s a natural reaction when you’re dealing with the unknown. Other than a paper CV and perhaps a few preliminary meetings you have little real idea of the depth of experience and skills of the “outsiders” you will be working with. Concern creeps in: “will they know so much more than me” and “will I look stupid alongside them or in their eyes” are just two of the things those little voices in your head might start to whisper to you. I can recall these feelings in the past even when the project lay squarely in an area in which I had deep expertise. Imagine how uncomfortable a person already a little uncertain of his or her skill and experience in an area might feel! Alongside the fear comes overwhelm: the feeling that there are just too many things to think about, that it would not be possible to get them all documented without missing at least one or two items. This thought process leads right back to feeding the fear, with concerns that omissions from the document will only make the author look even worse in the eyes of his or her management or the new people arriving for the project.

Let me put an idea out there. You don’t need to know more than the people coming into your company, you don’t even need to know the best way to tackle a certain technology problem or the ins and outs of the latest development techniques or what domain thought leaders are debating amongst themselves. There is no need to commit to documenting world’s best practice, nor to hold your project resources to that standard. Good practice will be fine for the vast majority of situations. But how do you even know what good practice is? And how do you make sure that you cover all of the big-ticket items? Knowing everything that is important to think about can be daunting.

I favour tackling this with something I call the Bad Outcomes approach. Rather than trying to think of everything that needs to be done in a certain way or toward a certain approach, simply make a list of the bad outcomes that could result from the upcoming project. Start with the really big ones: the things that might cost your company at the high end of the scale, whether financially or in other less direct ways such as reputation or brand damage, or, even worse, could cost you your job. Once you have those down, move on to the layer below, those bad outcomes which may not be as catastrophic but are still likely to cause a prolonged period of discomfort. Mull on these items for a few days; discuss them with colleagues, both from within the IT function and from the business, adding any new items that might come up. Revisit your list and drop any items which would only cause minimal impact or short-term pain, and with luck you’ll have a relatively short list of the outcomes that you need to govern against. As an example, the last time I went through this exercise I ended up with only eleven items for a project likely to be worth multiple tens of millions of dollars. Now you’ll have focus – you’ll know what to work on that’s really important. It won’t matter if you don’t use the World’s Best Practice approach to govern and guide each item; so long as you find a way which is likely to avoid the outcome, you’ll have what you need, and you’ll likely have saved your company a pretty penny in avoided costs and perhaps even your job along the way.
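The listing-and-pruning step above is simple enough that even a few lines of code capture it. The sketch below is purely illustrative: the outcome descriptions, severity labels and the cut-off are all hypothetical stand-ins for whatever your own project and risk appetite would produce.

```python
# Hypothetical bad-outcomes list: (description, severity) pairs gathered
# from brainstorming and colleague discussions.
outcomes = [
    ("Customer balances migrated incorrectly", "catastrophic"),
    ("Cut-over overruns into trading hours", "severe"),
    ("Historical transactions lose audit linkage", "severe"),
    ("Report layouts differ cosmetically", "minor"),
]

# Assumed severity scale; anything below "severe" gets pruned from the
# shortlist, mirroring the "drop minimal-impact items" step.
SEVERITY = {"catastrophic": 3, "severe": 2, "minor": 1}

shortlist = sorted(
    (o for o in outcomes if SEVERITY[o[1]] >= SEVERITY["severe"]),
    key=lambda o: -SEVERITY[o[1]],  # worst outcomes first
)

for description, level in shortlist:
    print(f"[{level}] {description}")
```

The point isn’t the code, of course; it’s that the end product is a short, ranked list of outcomes to govern against rather than an exhaustive best-practice manual.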

The next time you’re faced with the need to craft a strategy or pen a set of standards or guidelines, don’t worry about what you don’t know. Remember, no-one will know all there is to know about a subject, so accept that you won’t always know the best way to solve a problem or everything there is to think about. I’m pretty sure you, like me, have your scars and war stories, so you’ll know what you want to avoid. Start there. Good luck!