Roadmap for M.I.S.

The Mission

When I joined Mobile Information Systems (offering real-time solutions for the time-sensitive, same-day transportation industry), the company wanted to develop and release the new generation of their product suite, labelled „flagSHIP“, comprising Order Entry (pickup and delivery jobs), real-time driver and fleet management, billing and invoicing, and more.  At the same time they wanted to move from a text-based interface (xterm sessions connected to the Mobile Information Systems infrastructure, backed by the Informix database management system) to a full graphical user interface (Windows and Unix) on top of the Oracle RDBMS.

Their fear was that their ambitious goals (and promises to their customers) would not be achieved in a timely fashion, if at all.

My mission (that I chose to accept):
build and execute the Roadmap project plan.

The following challenges were raised:

  1. The engineering team not meeting schedules
  2. A complex order entry system in place that had become hard and painful to maintain
  3. The new enterprise suite, with complete functionality, to be in place within less than two years, ideally within 1½.

Initializing the Roadmap

As my first task I interviewed the officers of the company (Engineering, Sales & Marketing, Operations, and the CEO) to learn about their specific goals and expectations, their roles and responsibilities, their contributions and commitments.  They all agreed that, once the roadmap (the master project plan) was in place and approved, any modifications (e.g., additional tasks, priority changes, etc.) would only be implemented under these conditions:

  1. clear documentation of the new task or change by the requester
  2. detailed analysis of resource requirements and potential impact by the roadmap owner
  3. buy-in and sign-off from all stakeholders
  4. no circumventing or shortcutting of that process.

The consequence: Directly calling engineers to request that „quick, one-line change“ became an absolute „NO!“.  All such requests now had to be funneled through me and my team first.  We wanted to avoid a situation where an extra, well, „snuck-in“ task would jeopardize the project, causing undesirable side and ripple effects on the other, interdependent tasks lined up behind it.

Groundwork

Now I started to interview all team members, to learn about their understanding of the new project, their expertise and skill set, their expectations and their commitments.  Also (very important): where they saw problems or obstacles, where they needed support or could provide help, prerequisites not yet thought of, the working (or not working) environment and infrastructure, the communication flow (or lack thereof), and so on.

Those meetings were scheduled once — or several times, as needed and wanted — and were of course mindful of current workloads.

While interviewing the individual engineers I noticed almost continuous interruptions: calls from professional services / customer support folks and, quite often, from customers asking for help with a specific problem.  When asked, the engineers, well, „confessed“ that those calls occurred so often that they were not able to start — let alone complete — their assigned tasks!

So I defined and implemented my second set of tasks:

  1. I had all telephone extensions of the engineering team changed and made private, while their old extension numbers were routed to customer support
  2. Agreed with professional services and customer support on an escalation process: calls would go to my team first, who would then decide whether and to what extent to involve engineering
  3. Together with professional services, customer support, and technical publications, identified and scheduled any additional training needs so those teams became more self-sufficient.

Finally I was ready for my third task:
work on the Roadmap for „flagSHIP“, the new product suite for the same-day courier companies.

Building the Roadmap

First, outlining the main topics:

  0. Design of „flagSHIP“

  1. Layout and implementation of the Order Entry screens (forms and fields) and all the supervisor, the admin, and maintenance screens
  2. Stored procedures to access, read, add, and update database records; the database system as the heart and brain of the product
  3. (so-called) Gateways, interfaces to Mobile Data Terminals and GPS devices, (automated) communication to the driver, etc.
  4. Import + Export / backup & restore / setup & repair functions
  5. Technical Publication and Documentation for all audiences
  6. Quality Assurance environment with automated scripts following the 3-R principles (Reliable – Repeatable – Reproducible) 
  7. Professional Services + Customer Support training and test
  8. Release and Task management, concept and processes  
  9. GA and FCS

With each team member and/or group we identified all the required tasks, developed the work breakdown, any interdependencies, the order and sequence, and who would be doing what and when.  And, last but not least, how long each task would take.  (Here I preferred elapsed time, for various reasons.)

I distributed the first drafts — in their entirety and also filtered by the tasks relevant to each team — for review.

Then we had our Roadmap kick-off with all teams, where we discussed how to further split tasks into logical sub-tasks so that everyone could start working concurrently on the project without wait time; i.e., when would engineering have the initial release ready so that systems ops, database admin, quality testing, and tech pubs could do their magic — also taking into account that the test beds (infrastructure, system environment, database instances, etc.) needed to be prepared and set up first.

By the time QA had finished their testing, with reports and incident tasks (if any) filed, the next engineering release would be ready, and so on.

The idea of the QA team doing their tests directly on the engineering platform did not survive even the end of the sentence that proposed it.  The test beds had to be absolutely independent from any development environment; I drew the 100-mile-thick wall between those two worlds on the whiteboard (not to scale) and elaborated the principles of that concept.

As I was referring to test beds (plural): We wanted to have earlier releases available and online; say, to run new test scripts on older releases to help determine whether an incident had perhaps been introduced some time earlier but was only now detected.  Also to reproduce a problem noted in an earlier release and now confirmed fixed and closed in the given release.  (Just the usual conformance and regression testing.)

Getting the Tools Working

Oracle Developer

During the development of the new Order Entry and the other screens, engineering had some „fun“ with the Oracle Developer 1.0 tool: while painting the screens, defining the form flows, and assigning the attributes and validation of fields to the database columns and/or stored procedures, either the tool crashed, or the forms were displayed distorted, with fields not showing up in the right place, etc.

We then became a priority (and field test) site for the Oracle Developer team and received necessary fixes and support in very short turnaround time!

Test Scripts 

Test automation using scripts became a key requirement, not only to speed things up — say, filling out the same form on the screen hundreds of times within seconds instead of hours — but also to ensure the intended input was entered correctly every time, not creating spurious problems because of some overlooked typo.  Yes, I urged the QA team to try to break the system!  But, please, in a documented, reliable, repeatable, and reproducible (the 3-R) way, so that engineering got a chance to go step by step through the lines to find the culprit.  And later, for the QA team to run the conformance tests on the new release and close the incident as fixed.

As we also wanted to run benchmarks on the various proposed system configurations, load testing came to mind as well.

Some of the features I was looking for were:

  1. Easy script creation, generation, modification of batch processes
  2. Easy recording and modification of dialog session (e.g., order entry) scripts, allowing variables
  3. Verification of expected vs. actual output on individual runs and repeated runs, measuring times and system utilization
  4. Correct handling of Think-Times (e.g., the time a simulated order entry person would take to populate the fields on screen)
  5. Nice to have: taking snapshots of the database instance and allowing a reset to some previous state.

There are the usual problems, of course, where some data entries are a one-time thing; but that is usually handled with variable-controlled input.  For example, adding a new customer „ABC“ would only work the very first time; on the second run that customer („ABC“) would already exist and no longer be new.
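As a much simplified sketch of that variable-controlled input idea (our actual scripts were QARun scripts; the loop, names, and the commented-out driver call below are purely illustrative): each run derives a unique suffix, so „add new customer“ succeeds on every repetition instead of colliding with data left over from a previous run.

```shell
#!/bin/sh
# Illustrative sketch only: derive a run-unique suffix so one-time data
# entries (like adding a new customer) can be repeated across test runs.
RUN_ID=$(date +%Y%m%d%H%M%S)   # unique per run

i=1
while [ "$i" -le 3 ]; do
  CUSTOMER="ABC-${RUN_ID}-${i}"          # e.g. ABC-20240101120000-1
  echo "adding customer: $CUSTOMER"
  # order_entry add-customer "$CUSTOMER"  # hypothetical call into the system under test
  i=$((i + 1))
done
```

The same pattern covers order numbers, invoice IDs, and any other value that must be unique on insert.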

We then decided on the solutions offered by Compuware: mainly QARun, QALoad, and File-AID/CS.

Microsoft Project

The Roadmap, by now with well over 1500 tasks and sub-tasks, seemed to exceed the capabilities of Microsoft Project 98; resource leveling using different priorities on various tasks wasn’t quite its thing.  Following different priorities assigned to tasks linked together (Finish — Start) could produce inconsistent results.  Or take resource leveling, where a two-day, low-priority task would be stretched into one month, with a work assignment of 5 minutes per day.  So I decided to set the Start and Finish dates/times manually.

The beta version of Microsoft Project 2001 eventually addressed many of the problems and annoyances.

Executing the Roadmap

I fine-tuned the Roadmap, identified the critical path(s), added additional slack time, and allowed for extra work.  Meanwhile I had gotten a good feel for the scope of each task and would adjust the resource requirements, further splitting or combining sub-tasks to ensure efficient hand-offs between the various engineering teams and then to QA, tech pubs, and all the other teams (professional services, sys ops, DBA).

I split my master project plan into separate sub-projects (linked back to the master).   And that made sense, as professional services would create their own project plans for training and test plus the individual installations at customer sites (alpha and beta, and eventually full production).  The key availability dates for their planning were provided by the roadmap master project plan. 

In project meetings as well as in individual team meetings I updated the status of the project, added the % completion for the various tasks, and recorded updates when unforeseen problems or incidents occurred, along with their impact on the overall project.  A high-level summary of project status and risk assessment (with, well, a wee bit of crystal ball for judging the unknowns) was communicated to the stakeholders.

Releases

The QA team created their test plans and coordinated with engineering and with the release team to schedule the conformance tests of tasks to be closed and the regression testing to ensure nothing got broken in the process.

Talking about releases:  The engineering teams, tech pubs, and sys ops would hand off their input to dedicated, specifically named release folders — along with the list of incidents and features addressed and to be closed.  My team and I would then build a complete installable version of „flagSHIP“ with the documentation on CD-ROM.  That became the official release for full QA test and validation.  In the end, after QA passed it, the release would go to professional services and sys ops to install on the infrastructure of a given customer.

Meanwhile we were already defining and prioritising and working on the tasks (new and in progress) of the upcoming releases.

In the very beginning that was an iterative process, as the release, comprised of the various files and components, was sometimes not complete and thus not installable.  But nothing that a few checks added to the install (UNIX) script could not handle.
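The kind of check involved is simple enough to sketch (the component names below are hypothetical, and the mock release folder only exists to make the sketch self-contained; it deliberately demonstrates the „incomplete release“ path):

```shell
#!/bin/sh
# Illustrative sketch of a pre-install completeness check: verify every
# expected component is present in the release folder before installing.

# Mock release folder for the sketch; docs.tar is deliberately left out.
RELEASE=$(mktemp -d)
touch "$RELEASE/orderentry.tar" "$RELEASE/gateway.tar"

MISSING=0
for f in orderentry.tar gateway.tar docs.tar; do
  if [ ! -e "$RELEASE/$f" ]; then
    echo "missing release component: $f" >&2
    MISSING=1
  fi
done

if [ "$MISSING" -ne 0 ]; then
  echo "release incomplete; install aborted"
else
  echo "all components present; proceeding with install"
fi
rm -rf "$RELEASE"
```

Failing fast with a clear message, before anything is copied onto the target system, is what kept an incomplete hand-off from turning into a half-installed release.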

We had the main releases — with new features and functionality added, plus lower-priority bug fixes — monthly or every other month.  Then the support releases — bug fixes and very important features that could not wait until the next main release — weekly.  Those regular releases came on CD-ROM.

And for critical incidents we provided hotfixes — as needed, with a very short turnaround and basic testing.  Hotfixes would be installed into the infrastructure of the customer experiencing the incident and could even come in the form of an email attachment or a network link to a specific protected release folder with the affected component(s) and/or workaround instructions.

Mission Completed!

Well, thanks to the great teams and the dedication of everybody involved, the project was completed within 15 months, well ahead of the original schedule.  And it included very successful alpha and beta installations with customers „hungry“ for the new enterprise suite „flagSHIP“ = iorderEXPRESS!