Software Estimation

Estimating Software Development

This is actually a huge topic so I am just going to touch on some high level points. What prompted me to start writing on this topic is a recent conversation with a client and a cartoon I came across at Gizmodo about what happens when your Boss Estimates Software.

 

How long to write the software?


The question of how long it will take to write a particular piece of software is dependent on several factors:

 

  • what is your direct experience in writing that sort of software?
  • is there existing code you can leverage? – Software Reuse
  • build versus buy – can you buy a module?
  • what is the Operating System? – an RTOS will usually slow things down
  • is it a hard or easy problem?
  • is it well defined?
  • how will it be tested?
  • what quality standards does it have to comply with? – e.g. Medical Device Class C
  • how many people will be working on it?

 

I’m sure you get the idea.

 

And the coding is just part of the Software Development Process. That is the thing that gets forgotten more than anything else.

 

Software Development Process

In the Software Development Process, coding is preceded by:

 

  • user requirements analysis
  • product requirements analysis
  • technical analysis
  • solution selection
  • specification
  • test methodology
  • Software Design

 

Then there may be a Design Review.

 

Then we code (sometimes referred to as putting the bugs in).
Then we test and debug (getting the bugs out).

 

Then there may be a Code Review and Refactor followed by confirmation it still passes all the tests.

 

Then we complete the Software Documentation package and create a labelled revision so it can be properly released and tracked.

 

That is the small software team version of the process and for some projects some of those steps are trivial.

 

Larger companies have larger processes but can also do larger projects as a result.

 

Industry Metrics for Coding

Here are some really basic Coding Metrics.

 

High security, financial systems, mission critical code – as little as 10 lines of fully debugged and documented code per day averaged across the whole process.

 

Commercial and scientific software is usually created at a rate of between 100 and 1000 lines of code a day.

 

And better processes actually speed that up rather than slowing it down.

 

Estimating Software Development Time

A recent conversation with a client was on the topic of redoing someone else’s code. They had been working with another Software Development company and had decided that the code needed to be done again. They had spent two years without getting to a fully working version. My first thought was “commendable patience”. My second was merely “ouch”!

 

So we did some analysis. I was initially optimistic. We used a tool called RSM to do some code base analysis. We use quite a lot of science in our Software Development Process, including Static Analysis, Code Quality Analysis and complexity measurement. What we got from the initial analysis was 50K lines of code with an average Cyclomatic Complexity of 6.21. The normal rule of thumb is that anything above 5 should be redesigned. Not good. Then we looked at some specific files that had really high complexity scores, above 10. That was the clincher. No evidence of design, no consistency, lots of cut and paste, and everything in global variables.
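For context on that number, Cyclomatic Complexity counts the independent paths through a function, which works out to one plus the number of decision points. The function below is a made-up illustration, not code from this project, and sits right at a complexity of 5:

/* Hypothetical example: each decision point (if, else if, for, while,
   case, && or || in a condition) adds one to the Cyclomatic Complexity.
   This function has 4 decision points, so its complexity is 5. */
int classify_reading(int value, int limit)
{
    int result = 0;
    int i;

    if (value < 0)                      /* decision 1 */
    {
        result = -1;
    }
    else if (value > limit)             /* decision 2 */
    {
        result = 1;
    }

    for (i = 0; i < value; i++)         /* decision 3 */
    {
        if (i == limit)                 /* decision 4 */
        {
            result = 2;
        }
    }

    return result;
}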

 

The good news is that the real complexity of the required code will not need 50K lines of code when it is properly designed. The bad news is that our client was right. It did need to be done again from scratch. Some parts might be reusable, but it was unlikely.

 

Assuming we can do it with 20K lines of code, this will take between 20 and 200 person days to produce. In our case closer to 20 person days because the thing that makes the biggest difference to software delivery on time is your Software Development Process.

 

So that is a really quick look at a really big topic.

 

Successful Endeavours specialise in Electronics Design and Embedded Software Development. Ray Keefe has developed market leading electronics products in Australia for nearly 30 years. This post is Copyright © 2014 Successful Endeavours Pty Ltd.

Software Documentation

Software Documentation

This is an area of Software Development that has always been a challenge. The documentation falls into six general categories:

 

  • Software Design Documentation
  • Software Testing and Test Results
  • Software Implementation Details
  • Software Change Management
  • User Documentation
  • Coding Standards

 

Each of these has its own specific issues.

 

There are formal documentation standards including IEEE 829 for Software Test Documentation, IEEE 830 for Software Requirements Specification and IEEE 1016 for Software Design Description. Even if you are not using these standards, it is worth reading them and understanding the ideas and methods they teach. These are well thought out standards.

 

Software Design Documentation

We work for a wide range of clients and some have very strict criteria while others leave it entirely up to us. The strictest criteria we work with come from biomedical, automotive or transport clients. These require upfront definitions of everything including the Software Test Methodology, the specific tests to be done and the pass/fail criteria. And of course, every change request up-issues every affected document. So that has to be budgeted for.

 

Software Design balances the project priorities

Software Design

For clients without specific requirements, we use the following methods:

 

  • Define the requirements and how they will be tested
  • State all known constraints
  • Describe the operating system
  • Show the Functional Decomposition into modules
  • Show the module communication
  • Use diagrams to show all State Machines
  • Document tests and test results

 

For larger projects, we might choose to have separate requirements and test documents, but we always align the numbering so the test for a requirement has the same number as the requirement. For instance, if requirement 2.3.1 was that the system accommodate baud rates from 9600 to 115200, then test 2.3.1 would be to set and confirm the system operated at each of those baud rates.

 

This helps with both ensuring requirements are all tested and implemented, and that changes can easily identify which tests need review when a requirement is changed. Very large projects would use a Requirements Allocation Matrix where the requirement is cross-referenced with the modules that implement it.

 

Functional Decomposition

Functional Decomposition is the process of breaking the project up into specific modules and allocating requirements and functions to them. Since we do the whole gamut of Electronics Design and Embedded Software Development, this includes deciding how many processors we will use and how we break up requirements between them.

 

Functional Decomposition


The intent is to take the complex and break it down into simpler pieces until they are simple enough to implement as functions in their own right.

 

Software Testing

For Software Testing the process is conceptually simple:

 

  • What is tested – which requirement are we meeting
  • How is it tested – the test protocol
  • What results are recorded – the test records
  • What the pass criteria is – the test acceptance criteria

 

This is usually best handled with tables except for user interface interactions which might use recordings as well as written test results documentation.

 

Software Implementation Details

This usually consists of the source code documentation, since the Software Design has already been documented. Many modern toolsets include source browsing and database tools, but these require the end client to have the same toolsets. So we also add Source Code Documentation tags supported by Doxygen, which allows a toolset- and platform-independent set of documentation to be created. If you haven’t used it, what you get is a website with everything hyper-linked. You can also create hyper-linked PDFs but we usually stay with the HTML.
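If you haven’t seen the tags before, a minimal sketch looks like the fragment below. The function, parameters and units are invented for illustration; only the @file, @brief, @param and @return tags come from Doxygen:

/**
 * @file    battery.h
 * @brief   Battery charge estimation interface.
 */

#include <stdint.h>

/**
 * @brief   Estimate the remaining battery charge.
 *
 * @param   voltage_mv  Battery terminal voltage in millivolts.
 * @param   current_ma  Load current in milliamps, positive when discharging.
 *
 * @return  Estimated remaining charge as a percentage, 0 to 100.
 */
uint8_t battery_estimate_charge(uint16_t voltage_mv, int16_t current_ma);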

 

Doxygen Source Code Documentation

Doxygen

You can create diagrams, caller graphs, callee graphs and state machine descriptions using the Graphviz tool which is supported by Doxygen.
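Assuming Graphviz is installed, enabling those graphs is a matter of a few Doxyfile settings, for example:

# Doxyfile fragment to enable Graphviz generated diagrams
HAVE_DOT     = YES   # the Graphviz 'dot' tool is installed and on the path
CALL_GRAPH   = YES   # generate a call graph for each documented function
CALLER_GRAPH = YES   # generate a caller graph for each documented function

Consult the Doxygen manual for the full set of diagram options available in your version.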

 

Doxygen Call Graph


Our gratitude goes to Dimitri van Heesch for creating and maintaining Doxygen, which you can support by donation.

 

Software Change Management

So you have a software project that passes all its tests and it’s out there and working. Now you, or your client, want to change something!

 

The last thing we want to do is break something while trying to make it better. So when changes are needed, they have to be analysed. Things to consider are:

 

  • Which requirements are affected by the change?
  • How risky is the change?
  • Who needs to approve the change?
  • Are there any new requirements or new tests required?
  • Will field testing be necessary?
  • Are there regulatory approvals affected?

 

So trivial changes like updating the wording on a form are usually low risk and don’t require a lot of risk management or multiple levels of signoff.

 

A more significant change like altering the wheel pressure balancing algorithm in an ABS braking system gets the maximum level of attention.

 

User Documentation

If the product has a User Interface or supports user interaction, then this also needs to be documented. This documentation is written for the user and is intended to assist them in using the product.

 

The format of the documentation depends on the product. It could be online help, tool tips, a printed manual, a soft manual, on screen or printed on the product itself. One method of assessing User Documentation is to trial it with randomly selected people who could be potential users. This identifies concepts that are understood by the development team but are either not generally understood or not adequately explained in the documentation.

 

Coding Standards

This will be covered in more detail in another blog post. Because a range of programmers will work on any significant project, it is well worth defining how code is to be written. This consistency leads to code that is easier to read and maintain.

 

For instance, Embedded C bracketing used to follow the UNIX convention which goes like this:

 

for(length = MIN; length <= MAX; length++){
    do_some_stuff();}
next_lot_of_stuff_to_do();

 

This had advantages in the days when VDU screen space and Teletype or Lineprinter paper space were at a premium, but it makes sense to make the indenting easier to understand by doing it this way:

 

for(length = MIN; length <= MAX; length++)
{
    do_some_stuff();
}
next_lot_of_stuff_to_do();

 

Regardless of the type of project, easy to read and understand Software Documentation will reduce maintenance costs and is essential for larger teams to be able to deliver a working project.

 

Successful Endeavours specialise in Electronics Design and Embedded Software Development. Ray Keefe has developed market leading electronics products in Australia for nearly 30 years. This post is Copyright © 2012 Successful Endeavours Pty Ltd

Software Reuse: Software Design

Software Design

This continues on from my posts on Software Architecture and Operating Systems. The basis of Software Design for Embedded Systems is ensuring that you implement the required features within the available hardware. A lot of people forget that second point. I have found there are many more opinions on what you “Should Do” than there are helpful ways of assessing what you “Can Do” and how likely it is to be successful. Way too much of the conversation is like the picture below.

 

Software Programming

Software – You’re Doing It Completely Wrong!

Unlike a typical Computer Science project, there are hard and fast restrictions on the system resources in a Small Embedded System. Some of the constraints to consider are:

 

  • RAM
  • FLASH or program storage space
  • Clock Speed for both the peripherals and the main processor
  • Power Consumption
  • IO and peripherals
  • Latency requirements

 

If we have selected an Operating System then we also have constraints from that choice:

 

  • How do tasks or modules communicate?
  • How is data protected from simultaneous access by foreground and background tasks? (see the sketch after this list)
  • What is the worst case latency for a task or interrupt response?
  • Can I meet the peak execution demand with the processor, Software Architecture and Operating System?
  • What design methodology will I use?
  • How will I test?
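As a minimal sketch of the data protection point above, on a small bare-metal system the simplest protection for data shared between an interrupt (background) and the main loop (foreground) is a short critical section. The interrupt enable and disable function names below are placeholders for whatever your compiler or RTOS actually provides:

#include <stdint.h>

/* Shared between the UART receive interrupt (background) and the
   main loop (foreground). */
static volatile uint16_t rx_count;

/* Placeholder names - substitute the intrinsics or RTOS calls your
   target actually provides. */
extern void disable_interrupts(void);
extern void enable_interrupts(void);

/* Called from the interrupt context. */
void uart_rx_isr(void)
{
    rx_count++;
}

/* Called from the main loop. The critical section keeps the read
   and clear of rx_count atomic with respect to the interrupt. */
uint16_t uart_take_rx_count(void)
{
    uint16_t count;

    disable_interrupts();
    count = rx_count;
    rx_count = 0;
    enable_interrupts();

    return count;
}

Keeping the critical section this short also limits its effect on the worst case latency mentioned in the list.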

 

Software Design Methodology

This is also a pretty big area. For now we will focus on the primary methods that are used to manage more complex projects.

 

State Machine

The first and most important is the State Machine, also known as the Finite State Machine. It was an invention of Hewlett Packard and many were surprised that Intel beat them to the first microprocessor given that the State Machine was one of the breakthrough concepts that made that possible. At its core, a State Machine defines the states a system or sub-system can be in and the conditions under which it moves from one state to another. Below is a State Machine for estimating the charge left in a rechargeable battery. We design a lot of battery powered equipment so this is a common design element for us.

 

Finite State Machine

State Machine

One big advantage of a State Machine is that it can be easily designed to operate in a polled environment where very little processor time is required until a transition condition is achieved. This allows very complex systems to operate without needing a larger processor.
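A minimal sketch of that idea in Embedded C is shown below. The states, thresholds and sign convention (negative current meaning the battery is charging) are assumptions for illustration only, not the design in the diagram above:

#include <stdint.h>

/* Hypothetical battery states - names and thresholds are illustrative. */
typedef enum
{
    BATTERY_CHARGING,
    BATTERY_FULL,
    BATTERY_DISCHARGING,
    BATTERY_FLAT
} battery_state_t;

static battery_state_t battery_state = BATTERY_DISCHARGING;

/* Called periodically from the main polling loop. It does almost no
   work unless a transition condition is met. */
void battery_poll(uint16_t voltage_mv, int16_t current_ma)
{
    switch (battery_state)
    {
    case BATTERY_CHARGING:
        if (current_ma >= 0)                /* charger removed or charge terminated */
        {
            battery_state = (voltage_mv > 4100) ? BATTERY_FULL
                                                : BATTERY_DISCHARGING;
        }
        break;

    case BATTERY_FULL:
    case BATTERY_DISCHARGING:
        if (current_ma < 0)                 /* charger connected */
        {
            battery_state = BATTERY_CHARGING;
        }
        else if (voltage_mv < 3300)         /* below the flat threshold */
        {
            battery_state = BATTERY_FLAT;
        }
        break;

    case BATTERY_FLAT:
        if (current_ma < 0)
        {
            battery_state = BATTERY_CHARGING;
        }
        break;
    }
}

The function returns quickly in the common case, which is what lets a small processor run many such machines from one polling loop.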

 

Test Driven Development

Test Driven Development is the next important Software Design Methodology to consider. In this case the system is analysed and the test requirements identified. The tests are written and then the code is written and debugged until it passes the tests. If requirements change, then update the tests to match and debug until the code passes. Code is refactored once it passes all tests.
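Here is a deliberately tiny sketch of the cycle. The requirement, clamping a raw ADC reading to the range 0 to 1023, is made up for illustration, and plain assert stands in for a real unit test framework:

#include <assert.h>

/* Hypothetical requirement: clamp a raw ADC reading to the range 0..1023. */
int clamp_adc(int raw);

/* The test is written first and fails until clamp_adc() is implemented. */
static void test_clamp_adc(void)
{
    assert(clamp_adc(-5)   == 0);       /* below range clamps to 0 */
    assert(clamp_adc(512)  == 512);     /* in-range values pass through */
    assert(clamp_adc(2000) == 1023);    /* above range clamps to 1023 */
}

int clamp_adc(int raw)
{
    if (raw < 0)
    {
        return 0;
    }
    if (raw > 1023)
    {
        return 1023;
    }
    return raw;
}

int main(void)
{
    test_clamp_adc();
    return 0;   /* reaching here means all tests passed */
}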

 

The big advantage of Test Driven Development is that you think about testing up front and that generally leads to simpler designs that are easier to maintain. You also always have a full test suite to ensure changes made don’t have side effects that cause other features to misbehave. The tests pick this up automatically.

 

The big disadvantage is that you might write tests then decide to change direction and have to recreate them. There is also interaction between the system call structure and the test suite so you need to do more detailed design up front. But particularly in mission critical applications, always having an up to date test suite is a big advantage.

 

Rapid Application Development

The final consideration is whether you will use a visual coding or modelling system. For Windows Software we are big fans of the Embarcadero toolset, formerly under the Borland brand. These support Rapid Application Development, or RAD as it is known. The tools create forms and provide the software skeleton to go with them, automatically managing class members and access functions for you. This way you can focus on the application specific code. We find their C++ toolset one of the most productive to create applications with.

 

These systems are generally used on general purpose computing platforms and larger embedded systems rather than a Small Embedded System.

 

However the concepts behind them support rapid prototyping and minimal code writing to get to a working demonstration. Rapid Application Development is primarily about doing this. Get to a prototype fast then find out what the problems are. You can think of this as Risk Identification. Identifying and eliminating risks early is one of our core strategies for delivering projects on time and budget and is covered in more detail in our Project Management Methodology.

 

Modelling

Another common approach is the Unified Modelling Language or UML. This uses an open standard for a visual model to drive the code generation. You get the model right and the tool produces the Embedded C code for you. This is also important for reusability. The models are processor independent so in theory you can use them on any processor the toolset supports. They support state based design most easily but can be hard work for highly algorithmic processes or communications processing engines.

 

Regardless of the Software Design Methodology selected, the coding must still be done with care. But selecting the right Software Design approach makes that a much more likely process.

 

Successful Endeavours specialise in Electronics Design and Embedded Software Development. Ray Keefe has developed market leading electronics products in Australia for nearly 30 years. This post is Copyright © 2012 Successful Endeavours Pty Ltd

Software Design: Feature Bloat

Software Design

This continues on from my posts on Software Architecture and Operating Systems and is part of the Software Design series.

 

Feature Bloat

This is also known as Feature Creep but I prefer the term Feature Bloat because it better describes the effect it has on a single project. This is rather like eating too much and ending up feeling uncomfortable for an extended period of time. For this part of the post I will focus on a single product version release.

 

Feature Bloat


The worst part about Feature Bloat is that it adds features to the product which probably don’t improve the market success, but definitely delay the product release. This is always a profit reducer. And when you are working as independent developers as we do, it is also not obvious to the client which features are going to be easy to add and which are not. For instance, adding an ‘UNDO’ feature is often very expensive if it is not allowed for up front, as it often requires restructuring all the data handling methods to include history tracking.
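To see why, consider the purely illustrative sketch below: once UNDO is required, every operation that modifies the data has to record a history entry first, which touches every data handling path in the product.

#define HISTORY_DEPTH 16

/* Hypothetical document state - in a real product this is usually a
   much larger and more complex structure. */
typedef struct
{
    int  cursor;
    char text[128];
} document_t;

static document_t history[HISTORY_DEPTH];
static int        history_count;

/* Every operation that modifies the document must call this first,
   which is why UNDO is expensive to retrofit. */
void history_push(const document_t *doc)
{
    if (history_count < HISTORY_DEPTH)
    {
        history[history_count++] = *doc;
    }
}

/* Restores the most recent snapshot; returns 0 if there is nothing
   to undo. */
int history_undo(document_t *doc)
{
    if (history_count == 0)
    {
        return 0;
    }
    *doc = history[--history_count];
    return 1;
}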

 

So the first thing to do when looking at adding new features, is to ask the following questions:

 

  • Do I need it now or can I add it later?
  • Will it require substantial restructure?
  • Will not having it reduce sales?
  • Is it adding value or just clutter?
  • What degree of timeline risk does this feature add?
  • What degree of budget overrun risk does this feature add?

Perkin Elmer and Varian (now Agilent) competed in the market for the sales of spectrophotometers amongst other things. I was working for Varian in Australia in the 1980s. Perkin Elmer were the market leader with Varian usually having better instruments but getting beaten at the overall sales game. How did Perkin Elmer do that?

 

As a young Electronics Design Engineer I was mostly focused on the core Engineering Design associated with my role at Varian. I was part of a small team that achieved a few ‘World First’ outcomes at that time. Amongst these was a UV/VIS Spectrophotometer in the Cary range that had a dynamic range of 6 ABS or 6 decades. This is a million to 1 ratio. It also removed the need for a rear beam attenuator for many tests and allowed streamlining of many test processes. While I am proud of that achievement, the most striking thing I learnt at that time was that Perkin Elmer were number one and expected to stay in that position regardless of the quality of the engineering work I was doing. The reason was explained to me like this:

 

  • Perkin Elmer are designing a new instrument
  • They have some good ideas during the development process
  • So they take a note of all of them but probably don’t implement any of them
  • They release the instrument on time knowing it isn’t the best they could have done
  • Immediately they pick from the new ideas pile the features for a second version of the product
  • They release this in 6 to 12 months’ time as an incremental model
  • And they go around the cycle again, and again, and again

 

In the time Varian can release one technologically superior instrument, Perkin Elmer have released two models. While Varian build sales of their instrument, Perkin Elmer will release another two model updates. Every year at the annual sales convention Perkin Elmer have something new on the stand. Salesmen love new models and customers love having the most recent model. This strategy gives them market dominance. So that was a very interesting thing to learn. But it was not the primary lesson here.

 

Varian knew this was how Perkin Elmer achieved their market position but would not change their strategy, even while they knew it would not bring them the success they wanted to achieve. So I learnt that organisational culture can mean that successful strategies are not adopted, even when it is known they will improve success.

 

So what does that have to do with Software Design and Feature Bloat?

 

Software Feature Bloat


You do need to design with the end in mind, but unless it can be proven that a new idea will substantially improve the sales success of the product in the same time frame, then hold it in reserve for a future update. It is better to be selling the product earlier, getting those profits in and improving the product over time, than to wait until it is perfect.

 

Both Microsoft and Intel built enormously successful companies using this philosophy.

 

Software Bloat

This is different to Feature Bloat or Feature Creep in that it tends to occur over successive software releases. This also leads to software known as Bloatware. In this case, the product has features that are either not necessary or which in fact reduce the overall usefulness of the product for most users. As much as I love Microsoft Word as a product, the Office 2007 release broke a number of features that I found really useful and Office 2010 has made one of them very hard work indeed. In particular I am referring to outline numbering which has gone from incredibly useful to almost mystically unwieldy.

 

Software Bloat


But the real impact here is that it proves you don’t know your customers very well. Apple have done very well with their iPod, iPhone and iPad range because they have only implemented features that most of their users will want and have stringently avoided Software Bloat.

 

So lesson 2 is to know your customers and to keep making it easier rather than harder for them to use your products, especially new customers who don’t have the history with the product to understand why it is the way it is now.

 

Successful Endeavours specialise in Electronics Design and Embedded Software Development. Ray Keefe has developed market leading electronics products in Australia for nearly 30 years. This post is Copyright © 2012 Successful Endeavours Pty Ltd