2009 EDN Innovation Awards Finalist


This is a bit of a different post. I’m pretty stoked that we are finalists in the 2009 EDN Innovation Awards in two separate categories:

  • Best Application Of Analogue Design
  • Best Application Of Design Software

 

So I thought I might let you know a bit more about the project, and also give a public thanks to Pablo Varjabedian of Borgtech for allowing us to put the project forward. We design Electronics and Embedded Software products primarily for Australian Electronics Manufacturers. Our focus is outstanding Electronics Design that will propel them into a world class competitive position while delivering improved profit margins: Low Cost Electronics Manufacture, but with outstanding performance and reliability.

 

We routinely use non-disclosure agreements, NDAs, with our clients and so don’t usually get the chance to put our design work forward for awards because we will never disclose a client’s Intellectual Property, IP, without their express permission. In this case Borgtech gave us permission and so we were able to. As you can probably see, there is a real benefit to the client in allowing the award application because they also get recognition for the product.

 

This is also not an unusual project for us. We have done a lot of outstanding work over the 12 years we have been in operation. So it is good to have some of it recognised by the Industry we are so passionate about.

 

Electronics Design Details

This project was an example of our Project Priorities Perspective in action. In this case Performance was the primary concern with cost coming second and time coming last. We spent the time to get the performance up and the cost down. There was an earlier post on one aspect of this project where we looked at Analogue Electronics as a way to improve battery life in a Low Powered Electronics Data Logger.

 

The Electronics Design trade offs were:

  • OH&S, or Occupational Health and Safety – must protect users from hazardous voltages
  • Low Power Electronics – operates from 3 AA cells for up to 6 months
  • Convenience – Analogue front end completely Software Controlled
  • High Reading Accuracy – millivolt resolution over +/-10V range with 60dB Mains Rejection

There were many other Design Requirements but the list above covers the core Electronics Design Requirements addressed as part of the award nomination. Below I will look at each of these in turn.

Protection From Hazardous Voltages

Now let’s look at the hazardous voltage issue in a little more detail. The voltages in question were:

  • 5000V, or 5kV, for 2 seconds
  • 250VAC continuously

These come about due to the conduction of Lightning Strike Transients or Mains Leakage Voltages onto the Pipelines and Storage Tanks monitored for Corrosion Protection status. The Analogue Electronics front end had to provide protection against these cases while meeting all the other Design Requirements. And of course quickly settle so that only the readings during the disturbance were affected.

 

It also led to the use of an 802.15.4 RF Telemetry Link because this meant the monitoring PC could do Real Time Monitoring without hazard. Many other products in this industry use RS232, RS485 or even I2C connections for monitoring, configuration and upload of the Data Logger Records. In the case of the Borgtech CPL2 you can put it in place and then configure it and start the logging with no danger to the operator apart from the moment of electrical connection. And the initial part of the run can be monitored to ensure everything is correctly set up. Otherwise you could get a month’s worth of data that was useless.

 

And finally, because of the power budget and the possibility of the batteries going flat, the Analogue Electronics had to survive the above Abuse Voltages unpowered!

Low Power Electronics

The Borgtech CPL2 is a Battery Operated device. There are several reasons for this but the three most relevant are that it is:

  • IP68 sealed against water ingress – it is often installed in a pit that can flood
  • Must operate remotely from a convenient power source
  • Protects the operator and PC from Transient Voltages since there isn’t a direct electrical connection

But this is also part of the challenge. For convenience it uses off the shelf batteries you can buy at any service station. But to get 6 months of life required a strong Power Management approach, including powering down anything not in use, the Analogue front end included. If you are taking a reading every minute over six months then most of the device is off most of the time. In this mode the average current consumption is 37uA.
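To make the duty cycling concrete, here is a back-of-the-envelope sketch of how a sleep/wake duty cycle sets the average current. The figures used in the example are illustrative only, not the CPL2’s actual numbers:

```c
#include <assert.h>

/* Average current for a duty-cycled data logger. All figures passed in are
 * illustrative; they are not the CPL2's actual numbers.                   */
double average_current_ua(double sleep_ua, double active_ua,
                          double active_ms_per_reading,
                          double reading_period_ms)
{
    /* fraction of each reading period spent awake */
    double duty = active_ms_per_reading / reading_period_ms;
    return sleep_ua + duty * (active_ua - sleep_ua);
}
```

With a hypothetical 10uA sleep current, 5mA active current and 300ms awake per one minute reading, the average works out to about 35uA – most of the budget is set by the sleep current, which is why powering everything down matters so much.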

Analogue Electronics – Software Controlled

The Borgtech CPL2 handles both Current Shunt and voltage mode readings. The Analogue Electronics were designed to have a software selectable full scale range of +/-10VDC and +/-150mVDC so that it could do either mode of operation from the same input. The previous model required a different connection for each of these modes, and most other models on the market are the same.

 

And all of this while maintaining accuracy, abuse voltage protection and low power operation.

High Reading Accuracy

By the standards of an Agilent (I still want to call them Hewlett Packard) 6.5 digit laboratory multimeter, our millivolt, mV, resolution at +/-10VDC isn’t rocket science. But for a device with the Voltage Abuse Protection and Low Power Electronics requirements we had to meet, it is pretty good. Another small twist you might not recognise is that it is +/-10VDC. This means you can monitor with the polarity inverted and fix it up later on by inverting all the readings. The previous model was unipolar, so you couldn’t do this, which meant you could have just wasted a month. And then there is the live monitoring so you can see what the readings look like before leaving the unit to log away in the background.

 

EDN Innovation Awards

On 17 September 2009 we will know the final outcome, but either way I am pretty happy with the recognition this project has already received.

 

Ray Keefe has been developing high quality and market leading electronics products in Australia for nearly 30 years. For more information go to his LinkedIn profile. This post is Copyright © Successful Endeavours Pty Ltd.

The Future of Low Cost Electronics Manufacture

High End Electronics

In the early days Electronics was hand wired on a chassis. Some high end valve amplifiers still do this.

 

But of course this isn’t very compact. For those who didn’t know, I am a guitarist and use a Carvin MTS3212 Master Tube Series tube amplifier which I still enjoy very much. So when compact isn’t a priority and cost isn’t as important as the sound, then you go for this sort of amplifier. This is another example of the trade-offs we discussed in the Project Priorities Perspective where it’s about Performance and Cost is the lowest priority.

 

 

Low Cost Electronics Manufacture

For Low Cost Electronics Manufacture however, there are other factors that come into play. You want quality and you want it in a timely manner but the cost has to be low so that you have a decent profit margin. So hand wiring is out because that is expensive.

 

A very well designed Printed Circuit Board (PCB) can produce excellent results. With the move to Surface Mount Technology (SMT) and Surface Mount Devices (SMD), the Component Loading Cost is also reduced because components are put in place by machines and there are no leads or tails to trim after soldering. So this really helps with Electronics Manufacturing Cost and, for at least the next little while, will remain the way to go.

 

Another strategy for reducing cost is to use a modern Integrated Circuit (IC). You can fit more functions into a more complex device, and although that individual device sometimes costs more, you can reduce overall cost by removing other devices, reducing size and reducing loading and handling costs.

 

Reducing size reduces cost because you get more Printed Circuit Boards on a Panel and the cost of a panel in general is roughly the same regardless of how many PCBs there are on it.

 

 

Emerging Electronics Technologies

But the future is approaching and there are some very interesting developments under way. These involve Organic Semiconductors and Printable Electronic Circuits. Check out the following links:

 

printable electronics – a game changer

 

printable electronics on the rise

 

printable electronics to surpass $7 billion in 2010

 

Organic Semiconductors

 

I was particularly interested in the idea that the number one piece of equipment purchased by universities and Research and Development corporations conducting Electronics Research would be an inkjet printer! And did you notice the convergence between these two Low Cost Electronics Technologies?

 

We are in for interesting times indeed when you can design your circuit and then prototype it on your printer.

 

Ray Keefe has been developing high quality and market leading electronics products in Australia for nearly 30 years. For more information go to his LinkedIn profile. This post is Copyright © Successful Endeavours Pty Ltd.

Analogue Electronics – Improving Signal Integrity

Analogue Electronics

Sometimes you come across a post elsewhere that is absolutely on the ball. When it comes to Low Cost Electronics Manufacture, Analogue Electronics Design and Analogue signal integrity, the three are closely linked. Many a product has had expensive technical band-aids added to it to cover up poor underlying Analogue Electronics Design. So avoiding the poor Electronics Design will avoid the unnecessary expense. This is especially true when it comes to the two most misunderstood aspects of Electronics Design:

  • Analogue Electronics
  • Radio Frequency Electronics (RF Design)

For this post we will focus on Analogue Electronics and some simple strategies to avoid problems. A problem you don’t have is a problem you don’t have to fix. The key to success with Analogue Electronics is very simple:

  • Know what you are doing
  • Do it right the first time

 

Ray Keefe has been developing high quality and market leading electronics products in Australia for nearly 30 years. For more information go to his LinkedIn profile. This post is Copyright © Successful Endeavours Pty Ltd.

________________________________________________

 

Reducing Electronics and Embedded Software Product Development Costs

First some basic statistics that made me think about this issue a bit more:

  • Software typically consumes 80% of the development budget (Digital Avionics Handbook and Embedded.com)
  • 80% of software projects are unsuccessful (IBM)

 

So working from the Pareto Principle it is clear that product development success and cost can be most improved by addressing the Software Development component. In my recent post on Reducing Electronics Manufacturing Parts Cost I argued that increasing the software component can reduce the hardware costs. Which is a great idea as long as it doesn’t introduce an even more expensive problem.

 

I agree with Jack Ganssle in his article looking at tools, where he points out that software quality tools are often not budgeted for, yet will find many classes of defect quickly and at a significantly lower cost than the test and debugging effort required to find them after integration with the rest of the project. Or put another way, the cheapest way to get rid of bugs is not to introduce them in the first place – Lean Coding.

 

Since we mainly develop in C and C++, this is what we do to ensure we minimise software development cost and overruns:

 

Static analysis and code reviews

We use static analysis and code quality tools such as PC-Lint and RSM and integrate them into our editors and IDEs so we can run the tests as part of our build, or at the very least with a single click covering either the current file or the current project. These tools find flaws you are hard pressed to identify by visual inspection, and I believe they pay for themselves within a month of purchasing them. They can also enforce coding standards. Another great benefit is that when you do a code walk through and review, you are not looking for these classes of faults explicitly because you know the toolset will find them for you. So the first thing you do is run the tests and focus on anything found there.
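As an illustration of the fault classes these tools catch, consider a snippet like the one below. A lint-style tool flags an uninitialised local, or an `=` where `==` was meant, immediately and without a single test being run. The function itself is invented for the example:

```c
#include <assert.h>

/* An invented example of the fault classes static analysis flags before any
 * test runs: an uninitialised local, or `=` where `==` was intended.       */
int classify(int reading)
{
    int status = 0;          /* lint would flag this if left uninitialised */

    if (reading == 0)        /* lint also warns if this were `reading = 0` */
        status = 1;          /* suspicious zero reading                    */
    else if (reading < 0)
        status = -1;         /* negative reading                           */
    return status;
}
```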

 

Code reviews save money. Every issue identified in a code review is an issue you don’t have to debug later on. And another person is going to look at your code without the same assumptions you would make, so they will see the things you miss. It just makes sense to do it. Software debugging is more expensive than coding, so not introducing bugs in the first place is good budget management.

 

Unit testing

Next, we unit test. A huge benefit of this is that you have to think about testing, and it makes you think about error handling in the design phase. Many problems in implementing embedded systems come from not handling errors consistently. Sometimes they aren’t handled at all! Someone once suggested that software developers are the most optimistic people around – you can tell this is true by looking at how they handle exceptions! I’m not sure who said it, so if you know then post a comment and I’ll credit them, and provide a link too if you have one.
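A minimal sketch of what this looks like in C: a function that reports every failure mode through an explicit return code, written so a unit test can exercise the error paths as easily as the happy path. The names, ranges and scaling are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Error handling designed in from the start: every failure mode has an
 * explicit return code. Names, ranges and scaling are invented.        */
typedef enum { OK = 0, ERR_NULL = -1, ERR_RANGE = -2 } status_t;

status_t scale_reading(int raw, int *out_mv)
{
    if (out_mv == NULL)
        return ERR_NULL;
    if (raw < 0 || raw > 4095)               /* hypothetical 12-bit ADC */
        return ERR_RANGE;
    *out_mv = (raw * 10000) / 4095;          /* map to 0..10000 mV      */
    return OK;
}

/* convenience wrapper used by the unit tests */
int scaled_or(int raw, int fallback)
{
    int mv;
    return (scale_reading(raw, &mv) == OK) ? mv : fallback;
}
```

The unit test then asserts on both the good readings and every error return, which forces the error behaviour to be decided at design time rather than discovered in the field.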

 

Integration testing

Integration testing itself does not have to be overly complex. You want to know that things work, and it is often easier to write a cut down system to manage the test process. This way you are proving that each subsystem is present and correct before doing the full scale system test. This is an area that often gets overcomplicated. Don’t try to do more here than you have to.

 

Oh, and by the way, just because something builds doesn’t mean it passes the integration test. Some things to cover are:

  • software manifest – do I have the right version of each module?
  • data flow – do the higher level calls get at the right data lower down?
  • exceptions – do error returns get passed back?
  • exceptions again – if you raise exceptions, do they get acted on?
  • communications – does it communicate?
  • IO – are they mapped to the right pins and peripherals?
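The first of those checks, the software manifest, can be as simple as comparing the version string reported by each module. A sketch, with invented module names and versions:

```c
#include <assert.h>
#include <string.h>

/* Software manifest check: each module reports a version string and the
 * build is rejected if any module is stale. Module names are invented.  */
struct module { const char *name; const char *version; };

int manifest_ok(const struct module *mods, int count, const char *required)
{
    for (int i = 0; i < count; i++)
        if (strcmp(mods[i].version, required) != 0)
            return 0;                         /* stale module found */
    return 1;                                 /* all modules match  */
}

/* example data for the tests */
static const struct module good[] = { {"logger", "2.1"}, {"comms", "2.1"} };
static const struct module bad[]  = { {"logger", "2.1"}, {"comms", "2.0"} };
```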

Simulation

For some systems or subsystems we write fully fledged PC mocks around the code and ensure it handles all the parameter and error cases correctly and that all the functions are correctly implemented. This is a form of integration testing that proves the software component of the system is doing what it is meant to, but goes a lot further to fully exercise part of it. And since 80% of the problems come from software, this is a very effective way of reducing bugs and the difficult to track down system defects that are expensive in time and resources to cover in real time operating tests.

 

To do this, you have to abstract the interface so the code can run in the embedded version or the PC version without any changes. This is easy to do if you think about it in advance.
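One common way to do that abstraction in C is a table of function pointers for the hardware layer, so the same application code links against real drivers on the target and mocks on the PC. This is a generic sketch, not our actual framework:

```c
#include <assert.h>

/* The hardware layer behind a table of function pointers: the target build
 * fills it with real drivers, the PC build with mocks. A generic sketch.  */
typedef struct {
    int  (*adc_read)(int channel);
    void (*power_enable)(int on);
} hal_t;

/* PC mock implementations */
static int  mock_adc_read(int channel) { (void)channel; return 2048; }
static void mock_power(int on)         { (void)on; }

static const hal_t pc_mock = { mock_adc_read, mock_power };

/* Application code sees only the hal_t, never the hardware registers. */
int read_sensor(const hal_t *hal, int channel)
{
    hal->power_enable(1);                /* wake the analogue front end */
    int raw = hal->adc_read(channel);
    hal->power_enable(0);                /* back to sleep               */
    return raw;
}
```

The same `read_sensor` source compiles unchanged for both builds; only the table of function pointers differs.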

 

One word of caution: the PC has a lot more resources and clock speed available compared to a smaller embedded system, so this is not a substitute for testing on the real hardware to ensure execution latency is acceptable.

 

And for the purposes of this post, the PC could just as easily be a Linux or Mac system. The point is to use the higher level system to efficiently and fully test the embedded software module so you save time and money later on in the project. And let’s face it, who likes to be under unnecessary pressure at the back end of an embedded software project?

 

System testing

If you think in advance about how to most easily implement the system testing then you can save a lot here as well. We put effort into deciding how to do the test process at the architecture design phase so that we have the data flow required to actually do the test. This can be as simple as having some extra parameters or calls available to inspect the state of the system, and the communications facilities to get at this data. Where possible, 100% parameter range testing and 100% code coverage testing are very desirable. One thing this means is that you had better think about how you will create each error condition that must be handled!

 

Low Cost Software Development

Low Cost Electronics Manufacture relies on Low Cost Software Development. So make it a priority. The Pareto Principle says that it is the most important thing to get right.

 

Ray Keefe has been developing high quality and market leading electronics products in Australia for nearly 30 years. For more information go to his LinkedIn profile. This post is Copyright © Successful Endeavours Pty Ltd.

 

Reducing Electronics Manufacturing Parts Cost

This one is both easy and straightforward to understand.

 

Do as much as possible in software.

 

But it doesn’t stop there. Do as much as possible in software at every stage of the development. Here is how that pans out:

  • replace hardware with software that does the same function
  • verify operation using unit tests and system tests within a soft environment
  • do production test using on board software so the ATE is very simple
  • do field diagnostics with on board software to make the diagnostics as cheap as possible
  • do service and scheduled maintenance with on board software to minimise time and cost in these areas
  • where suitable, use a bootloader to allow in field upgrade of the software

If you don’t already know, ATE = Automated Test Equipment.

 

The best thing about making software the core part of each of these areas is that the manufacturing cost of software is effectively the jig and the time to program and test the parts. Automation can be expensive, but if the device contains its own automation, then the production process costs plummet. A simplified example:

 

You have a device with 8 inputs and 3 outputs. You want to test all the inputs and outputs to make sure they work. The traditional approach is to have a production ATE which applies known loads to test points and then measures against a series of scheduled tests, controlled by a system from one of the major production test equipment suppliers. It is not unusual to spend $50K on such a system even for a relatively simple device. If you don’t believe it, add up the software toolset costs, the man hours spent designing then building then coding then debugging then commissioning, the opportunity cost of those man hours and the materials costs. It really does all add up.

 

Electronics Manufacture – let’s look at the alternative

 

The test jig merely connects the outputs to the inputs with the appropriate loads in place. The device is programmed with its own ATE code that then goes through the test process, including requesting a serial number, and communicates the outcome back to the system, which merely records the time, date, serial number, product version and test results. It doesn’t matter if the inputs are analogue or digital, the same philosophy can apply. And if there is a big mismatch between the inputs and outputs, then put a simple multiplexer on the jig and let the unit manage its own test sequencing.
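In miniature, the on board ATE idea looks something like this: the jig loops each output back to an input, and the unit drives each output in turn and checks it reads back correctly. The loopback is simulated by a simple array here so the sketch is self-contained:

```c
#include <assert.h>

/* On board ATE in miniature: the jig loops each output to an input and the
 * unit checks each one reads back correctly. The loopback is simulated by
 * an array so the sketch runs anywhere.                                   */
#define N_CHANNELS 3
static int loopback[N_CHANNELS];

static void set_output(int ch, int level) { loopback[ch] = level; }
static int  read_input(int ch)            { return loopback[ch]; }

int self_test(void)
{
    for (int ch = 0; ch < N_CHANNELS; ch++) {
        set_output(ch, 1);
        if (read_input(ch) != 1) return ch + 1;   /* report failed channel */
        set_output(ch, 0);
        if (read_input(ch) != 0) return ch + 1;
    }
    return 0;                                     /* all channels pass */
}
```

On the real unit the `set_output`/`read_input` calls would drive actual pins, and the pass/fail result plus the failing channel number would be reported back over the communications link.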

 

Another bonus: when you update the product and the interconnections remain the same, a new test sequence would previously have required altering the ATE software. No need! The on board ATE sequencer handles it automatically and you don’t have to alter the production process at all. It even tells you it is the new product, and you didn’t have to touch a thing.

 

Of course there are classes of products that do need more than this. Processes like burn in and quality metrics based acceptance testing. But these are the 5% cases. The alternative approach outlined above covers the other 95% and at a cost which can be orders of magnitude lower. And you can always add extra features to the test jig if required and still let them be controlled by the unit under test.

 

Yet another bonus: self calibration! The unit can calibrate itself based on the test results. No need to support multiple different calibration techniques at the ATE. It just says “I read X” and the unit under test looks at this value and what it reads and uses the one calibration process that applies to it.
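The arithmetic behind that self calibration is just a two-point fit: the jig applies two known values, the unit compares them with what it actually read, and solves for gain and offset. A sketch, with invented reference values:

```c
#include <assert.h>

/* Two-point self calibration: given two applied references and the raw
 * values actually read, solve for gain and offset. Values are invented. */
typedef struct { double gain; double offset; } cal_t;

cal_t calibrate(double applied_lo, double read_lo,
                double applied_hi, double read_hi)
{
    cal_t c;
    c.gain   = (applied_hi - applied_lo) / (read_hi - read_lo);
    c.offset = applied_lo - c.gain * read_lo;
    return c;
}

double corrected(const cal_t *c, double raw)
{
    return c->gain * raw + c->offset;
}

/* helper for the tests: calibrate against 0V/10V references that read
 * slightly off, then correct a raw reading                             */
double demo_corrected(double raw)
{
    cal_t c = calibrate(0.0, 0.1, 10.0, 9.9);
    return corrected(&c, raw);
}
```

The unit stores the gain and offset it solved for and applies them to every subsequent reading, so no per-unit adjustment is needed at the ATE.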

 

And this features in one of our earlier posts on Strategies To Be More Profitable as it applies to Low Cost Electronics Manufacture in Australia.

 

Now I know this is simplifying it to its core essential elements, but that makes it easy to see the advantages and how much you can leverage them.

 

Less Electronics Hardware = Less Cost

 

The same applies to the other areas mentioned above. Removing hardware and doing the same work in software is pretty obvious. Fewer parts usually means lower cost. Above we looked at production line ATE. And the same concept can obviously be applied to field and service diagnostics.

 

Field and service diagnostics

 

So here is another scenario. Imagine you have a customer with a pump that isn’t pumping. What to check first? Easy, the simplest thing to swap out is the pump controller. So you send them a replacement pump controller. They pull the plugs, remove the device, put in a new one, and send the old one back under warranty. You send it to the manufacturer. They test it and there is nothing wrong and send it back to you. But it’s pretty grubby and not suitable for resale as brand new. Well maybe their test process isn’t up to scratch and it really wasn’t working in the field. Anyway, it was still the thing to try first since anything else is a much bigger job to swap out. But now you’ve got all the hassle, a potential dispute with the manufacturer and the pump might still not pump with the new controller. The score is basically NIL all round for this. Everyone loses.

 

Now imagine this: the customer rings you and you ask them to go and press the orange button on the side of the pump controller. It says via its LCD “Check Valve Reversed”. Aha. Not a pump controller problem at all. The customer calls the plumber and gets him to fix the installation. Done. You look good, the customer got timely service and you sure are going to recommend this pump controller to the next customer ahead of the ones that don’t do this.
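The on board diagnostics behind that button press can be as simple as a table mapping fault codes to plain English messages for the LCD. The codes and messages below are invented examples, not the actual pump controller’s:

```c
#include <assert.h>
#include <string.h>

/* Field diagnostics in miniature: a fault code maps to a plain English
 * message for the LCD. Codes and messages are invented examples.       */
const char *fault_message(int code)
{
    switch (code) {
    case 0:  return "System OK";
    case 1:  return "Check Valve Reversed";
    case 2:  return "Sensor Open Circuit";
    default: return "Unknown Fault - Call Service";
    }
}
```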

 

For each product category, the equivalent of the above 2 situations exists. So will your product look this good if the customer has an issue? It can if you think about it, and the cost might be trivial. It might even cost less at manufacture, but it will always cost less in the long run.

 

And of course, if the product can have its software updated in the field, that saves a lot compared to having to return it to the manufacturer. Orders of magnitude this time.

 

So that looks at parts cost, production process costs and support costs.

 

Reducing Development Cost

 

The second of the bullet points is looking at development cost: the up front cost to get a working product. We do a lot of work with small 8 bit and 16 bit microcontrollers, and the development environments often don’t give you a lot of facilities to find faults. It’s the combinations that get you. Stop when input A is on, output B is off and the variable C is exactly 122, so I can look at what’s going wrong with my code. Or you might have to pay a lot for an emulator with all those features. And of course you have to put the hardware into the exact state you want as well. How do you do that again? That’s right, either a sea of pots and switches or some clever and expensive hardware test equipment.

 

What we do a lot is build the project inside a software clone of the final system. In the software industry this is called a mock. Then we can use our standard PC coding and debugging tools to create scenarios and test against them. You can test your logic in an automated way and you can put every possible input combination in and make sure it responds correctly. Robert Bosch Australia Pty Ltd is one of our clients and we have worked on a number of projects for them. For those who don’t know, the volume of Australian Electronics Manufacture they do at their Clayton Facility in Melbourne is very impressive. They design, make and export millions of automotive electronic control units (ECUs) to Europe, Japan and the USA. And the body electronics supplied by Bosch to the rest of the world is designed and made there. Great stuff guys.
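A tiny example of why the mock approach is so powerful: once the logic is isolated from the hardware, you can sweep every input combination exhaustively on the PC, something that is impractical on the bench. The alarm rule here is invented for the example:

```c
#include <assert.h>

/* With the logic isolated from hardware, every input combination can be
 * swept exhaustively on the PC. The alarm rule below is invented: alarm
 * when the enable bit (bit 7) is set and any fault bit (bits 0-2) is set. */
int alarm_logic(unsigned inputs)
{
    return ((inputs & 0x80u) != 0) && ((inputs & 0x07u) != 0);
}

/* sweep all 256 combinations of 8 digital inputs against the expectation */
int sweep_all(void)
{
    int failures = 0;
    for (unsigned in = 0; in < 256; in++) {
        int expect = ((in & 0x80u) && (in & 0x07u)) ? 1 : 0;
        if (alarm_logic(in) != expect)
            failures++;
    }
    return failures;
}
```

In a real project the expected results come from the specification rather than a restatement of the code, but the sweep structure is the same.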

 

So a simple example of how we use this in our projects with them is a battery charging system we did which was all in software. You will find reference to it from one of our LinkedIn recommenders, Dale O’Brien, who saw the process in action. Basically, the full suite of tests took a week in real time, the primary test sequence required 54 hours, one test required sub-zero temperatures, and none of this was 100% coverage. Using a software mock of the system we were able to do all the testing in 15 seconds, including tests specifically to ensure 100% coverage. That’s roughly forty thousand times faster. Debugging at light speed! So we were able to address the logic and algorithm issues quickly and efficiently and have very high confidence in the system. Final verification in real time with final hardware and a normal test platform confirmed the operation, but it was 6 months later. So maybe we were really a million times faster.

 

Don’t get me wrong, I firmly believe in testing on the final hardware. After all, assumptions are one of the greatest dangers we face. But at least prove you did correctly implement your solution within the assumptions you did make first. Then when you learn something new you are only fixing one problem and not arguing about whether it was the assumption or the test that is wrong.

 

I feel a bit like I got on my hobby horse over that lot. But I really do believe this can make a huge difference.

 

OK, time to go and design some more products for low cost electronics manufacture in Australia 🙂

 

Ray Keefe has been developing high quality and market leading electronics products in Australia for nearly 30 years. For more information go to his LinkedIn profile.