
Friday, April 29, 2016

The Point of Process - Part 2


Once the point of process is understood, it is time to measure your current process. This step is important because the point of doing any software process is not "is it good Agile?" or "did we follow methodology X correctly?" The point is: does it work?

A contract of the past is what spurred on this article. We were working tightly with another company, completely ingrained in their process, and my boss would ask me, "How's it going?" My response was mixed, because on the one hand I knew what I was assigned to work on and was working on that. That makes a boss happy. But my boss, being a stakeholder in the work being done, was really asking those two questions from Part 1 so he could evaluate the current contract and plan for new ones. I'd tell him, "I'm doing work, but I have no idea where it's going and when we'll get there!" It was a software process that was missing the point.

The questions that the stakeholders are asking are the same questions to use to measure your software process. If you have an answer, the process is working; if it takes more than 2 minutes to generate or explain the answers, there is a problem.

Where is the software going?


Every member of the team should know the answer to this question. If one is a Team Lead or Product Manager, the answer needs to go both ways. It goes up to the stakeholders, like Business Dev, Marketing and the C-Suite. If they are not directly planning what the software must be capable of, they at least need to be made aware of what is coming next in terms of features and improvements.

This question must flow down to the devs as well. If they do not know where the software is going, motivation drops and future features are not planned for. Keeping an air-gap between business and dev stifles ideas from those who know the technology best, and reduces the meaning of the work they are doing to simply "doing what they're told."

When looking at your software process, each member on the team must be able to know, quickly and easily, where the software is going. 
  • Can anyone look at the list/board/spreadsheet/diagram and know what is coming next? 
  • Can they see where their work fits in right now? 
  • Is there a place to write down new ideas? 
  • Are the milestones front and center? 
  • Can the reason for the feature (or entire project) be summed up in two sentences that give meaning to the devs and make sense to the business side? 
  • What will come next, after this cycle of development?
Use your process to understand where your software is going.

When will it be ready? 


The schedule is vital to a business, so the development team cannot simply stall for time, or make excuses like, "it's ready when it's ready." Demos, marketing, trade shows, and everyone's paycheck are resting on these dates. Any slips need to be communicated. (Any early finishes should be celebrated!)

A memorable quote by Walt Disney, which I like to repeat, is "Everyone needs deadlines." Walt goes on, but that's the important part. The software schedule sets everyone's expectation of "done," and without it there is confusion, miscommunication and unmet expectations. Developers need to fit their work within the timeframe, and cannot go off the path just "because" ("because this new way looks cooler", "because I want to refactor now"). Estimation is an important skill for any team, and fitting what must be done within time bounds is part of the planning process, not the middle or end of the execution phase.

The schedule has to be clear, and your process must make it clear. Having the business side stay out of your hair until the software is supposed to be done, only for them to break down the door after a missed date, is terrible. Don't work like that! Use the process to quickly and easily inform the business side and guide the development side.

  • When is this cycle of software development going to be done?
  • How close is the team to being done? How much more time is needed? 
  • When is the code frozen (if not continuous)? When will QA look at it? When will the customer see it?
  • Is the team currently on track? How accurate were previous estimations? 
  • When is the next major milestone?

Use your process to understand when the software will be ready. 

If You're Happy and You Know It


What is the ultimate measure of your software process? Whether or not the stakeholders are happy! That's it. If they are unhappy, look into ways to improve the process to better answer the two questions. (Hint: this likely involves bringing them into your process, not pushing them away.) If they are happy, then great! Take a little time to see how to make your life easier.

Stakeholders looking at your software might not be very technical. Metrics and procedures mean very little to them. When they evaluate they are looking at two things:

  1. Did you do what you said you were going to do?
  2. Was it ready when you said it was going to be ready? 

Be able to answer "yes" to those two questions and you will have some happy stakeholders.

But you knew that was coming, right? Because your software process, quickly and easily, has been answering those two questions all along, so everyone knew exactly where the software was going and when it was going to be ready.

That's the whole point of all this process, and that's how you know it worked.

Thursday, April 14, 2016

The Point of Process - Part 1

A software development process was very real during my time in the defense industry, but barely taught at all during my time in college. Views were mixed across the open range of small businesses and development shops, depending on where one worked. There is plenty of material out there too! Manifestos, guidelines, workshops and so on, all geared to learning and following a particular software process. And all of this truly has made software development better, and allowed for better software.

If one did not have varied experiences, or there was no time to do extra reading, or one was just getting started in software, they might wonder, "what is the point of all this process?" Or if one was getting frustrated with a software process, because it was too much pointless work or the process never told them anything useful, they might wonder, "what is the point of all this process?" And some people just need to be reminded, because the point of a software process is not the process itself.

Software Process Answers Two Questions


Where is the software going?
When will it be ready?

That's it. Be it waterfall or agile or something else, it all boils down to these two questions.

Where is the software going? 


Software needs a goal. The goal could be planned months ahead or days ahead but there needs to be an end state. That could be a milestone with a defined set of features or requirements, or a rolling set of features as business needs change, and the process keeps everyone involved on track. Each person can state why they are doing the things they are doing (writing code, performing tests, etc) and, to a higher level, where those things fit into the entire software system.

When will it be ready?


Schedule is important. The software process tells everyone when something must be done. This helps the business guys plan sales demos and constrains the developers to stop tinkering. If the team cannot state when the software will be ready, chances are it will not be.

Who Asks the Questions

The process is for the software and the software is for the stakeholders. The developer is only minimally a stakeholder. Sometimes it could be the Tech Lead, Scrum Master, Program Manager or CTO. Indirectly it is the end-user. The stakeholder should be the business side of the house, and very often it's the person writing the check. The stakeholders are the ones asking the questions, and the answers to those two questions (from the dev team) are sometimes their only view into the software process.

Development is not the point. In the end, process is not the point either. Keep your stakeholders happy and meet their expectations, which can be done by answering two simple questions. You might use a software process to help answer those questions, too : )

Tuesday, April 5, 2016

Meaningful Software Tests

Software Testing is vital to any project, though not all software tests are meaningful. It is not a big deal if a few extra automated tests get written or run. It is debilitating if the tests cannot catch what they should, forcing errors to be found outside the dev team, or if they become a maintenance nightmare. So what tests should you write?

Testing Types


There are roughly three types of software tests:

  • Unit Test - the lowest level of test. Against a single "unit" of code (a class, a group of methods, an algorithm, etc). One unit is tested in isolation and any other units are mocked to keep functionality going and maintain control of the test.
  • Integration Test - Multiple units tested together, but still the same software system. Can have dependencies outside the software, like a database or even, gasp, an internet connection. Much less is mocked at this level. 
  • End-To-End Test - the whole deal. Testing from one end of the system (user interface) to the other (database, algorithms) and back. Runs real code in real environment with (custom) test runners. Nothing is mocked - it doesn't need to be!
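The distinction between the first two levels can be sketched in a few lines of Python. All the names here (`PriceCalculator`, `tax_for`) are hypothetical, invented just to show where mocking fits: the unit test swaps the collaborator for a `Mock`, whereas an integration test would wire in the real service.

```python
from unittest.mock import Mock

class PriceCalculator:
    """Hypothetical unit under test: depends on an external tax service."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, subtotal):
        # Delegates the tax lookup, so the dependency can be mocked.
        return subtotal + self.tax_service.tax_for(subtotal)

def test_total_adds_tax():
    # Unit test: the tax service is mocked, so the calculator is
    # exercised in complete isolation and under full control.
    tax_service = Mock()
    tax_service.tax_for.return_value = 8.0
    calc = PriceCalculator(tax_service)
    assert calc.total(100.0) == 108.0
    tax_service.tax_for.assert_called_once_with(100.0)

test_total_adds_tax()
```

An integration test of the same class would construct a real tax service (perhaps backed by a real database), mock little or nothing, and accept the slower, less deterministic run in exchange for exercising the pieces together.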

People might use different terms for these types of tests, and different approaches might blend somewhere in the middle of these. Want a long list to amaze your friends, or give you some good ideas? Here.

But honestly, the type of testing you do isn't the most important thing. That you have testing is important, but varying opinions and success stories will advocate for one type or another.

Is There Meaning In These Tests


The type of test is not the issue; rather, it is "are these tests meaningful?" Each test that is written should have some meaning behind it. Test out a particular function or flow so you know it works. Test out a particular business need or requirement so you know it's covered. Test a particular corner case so you know that bug will not come up again.
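That last case, pinning down a corner case so a bug cannot return, might look like the following sketch. The scenario is entirely hypothetical: suppose a reported bug was that an averaging function crashed on an empty list. The regression test names the bug and locks in the fixed behavior.

```python
def average_score(scores):
    """Average a list of scores. An empty list averages to 0.0."""
    # The guard below is the fix: without it, an empty list raises
    # ZeroDivisionError (the hypothetical reported bug).
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

def test_average_score_empty_list_regression():
    # Regression test for the (hypothetical) empty-list crash.
    # If someone removes the guard, this test fails immediately.
    assert average_score([]) == 0.0

def test_average_score_normal_case():
    assert average_score([80, 90, 100]) == 90.0

test_average_score_empty_list_regression()
test_average_score_normal_case()
```

A comment or test name that points back at the original bug report is what gives the test its meaning: anyone reading a failure later knows exactly which promise was broken.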

You need to be confident in your testing. These are little bits of automated software that will prevent errors down the line (where they are much more expensive to fix). When you run your tests, you need to know the functionality they tested and be confident that functionality was performed correctly.

Two quick examples from the past.

Back in the defense industry, documentation was king. Requirement number to test number to test results. Who wrote 'em, who performed 'em, who witnessed 'em, and did each test pass or fail. If you ever wondered how complex programs with hundreds or tens of thousands of requirements can get approved, it's because of the documentation backing them up. Each test was performed for a specific purpose, and that purpose was written down. Anyone looking at the resulting documents would know what was being proved when some test step was being performed.

A second example comes from a past contract. They had pure unit tests that were never to turn into integration tests. One test for one unit. Code coverage was measured and the number of tests was counted. If you were to ask why we were writing all those tests, it was to get code coverage up and have all the code tested! The bad part was that every piece of code was tested in a vacuum (no database, no outside services, minimal component communication), so when the software verification group looked at something (or worse, deployed it!), lots of things were found, because it was the first time those pieces were mingled outside of developer testing. There should not have been confidence that those unit tests were a predictor of safe software.


It Depends


Like many software answers, "It Depends." The type of testing you use and how much you test will depend on your goals and business needs. Make sure your code has tests, that those tests have meaning, and that your team understands why the tests are there.

Don't waste time writing tests just to fill metrics. If 100% code coverage is important to, or required in, the project, write those tests! But if you made up a number for coverage and then force yourself to reach it (or even better, just keep adjusting your target number!), you are wasting time. Don't test code just to check off some software engineering list - the number of tests you run is as meaningful as SLOC metrics. Tests need to be updated and maintained, forever, and are written with each new feature. If time is being wasted, find out sooner rather than later!

Meaningful tests will help the team too. Later work load will be reduced by catching real bugs early, avoiding those crazy weekend or late night sessions to "GET IT FIXED!!!1", not to mention freeing up your Verification or QA groups a bit. The dev team will have confidence and faster feedback that what they write will work, or not, boosting productivity. Continuous Integration is not possible without tests you can trust. Writing tests that have meaning makes test writing valuable, leading devs to avoid half-baked work (and insidious false negatives).

One final tip. Tests will have more meaning the closer you can get to real code on real hardware. This is also a more expensive environment to set up, and maybe cannot be automated, so it is another decision to be made. Where you are able, get as close as you can to real code on a real system.

Happy Testing.