At a recent networking event, the conversation at one point turned to a long-time manager describing two kinds of people he had seen working in technology. There is the person who is really good – expert-level or beyond – at one thing, and who is very comfortable doing that. So comfortable, in fact, that they don’t really want to expand their skill set. Then there is the person who is good – probably not quite expert – at something, maybe a few things, and who wants to keep learning new things to expand their skill set and grow stronger in the areas where they are already good.

I know that I am firmly in the latter camp, which I have talked a little about before. Both approaches have pros and cons, but to me, the latter has far more pros than cons and has been a key part of my career. If what you are really good at is a skill in high demand, you can be just fine doing only that. Most of us are not in that boat, however, so it is in our best interest to keep expanding our skill set.

It’s in that vein that I plan to write this week largely about instances in my career where I have had to expand my skill set. Some instances stand out above others, and they give me great material for reflecting on something I intend to keep doing.

When I was at Wind River, I was working for the unquestioned industry leader in embedded systems. They were on top of the world in that industry, and while there were certainly competitors, Wind River was the giant. VxWorks was a household name among embedded operating systems, as Linux had not really established itself in that space and would not for several years. I worked on real-time trace and event systems, and at one point we had a real problem.

For a long time, we had used FPGA buffer images to do the job of getting bus cycle data from the microprocessor’s system bus, then putting that information together in a format that enabled tracing program execution. As processor speeds increased, however, we ran into many problems. This was especially evident when we got to processors like the MPC8260 and MPC8240, where we experienced frequent buffer overflows. At one point, my manager and our senior hardware engineer spent a week at a key customer site dealing with this.

We realized we simply couldn’t keep going down the same path. We also came to understand that a logic analyzer had the hardware to handle the faster processor bus speeds, and that the connectors to get the signal data we needed were already built in, thanks to an industry standard. Ultimately, this meant we had to do in software what we had previously done in hardware: instead of an FPGA image that effectively filtered what showed up on the bus, I was tasked with writing a software component that did the same filtering on the data the logic analyzer captured.

It meant I had to understand the processors we supported on a whole new level.

Previously, I had understood the processor hardware in some detail, having dealt with registers, instructions, reads and writes, and more. But now I had to understand where the information consumed by the trace component I had largely worked on actually came from. I had to understand what constituted a bus cycle, including whether it was any old instruction fetch or a memory operation (read/write). I also had to understand this well enough to trace special events, such as all reads or writes, all memory operations (no instructions), reads and/or writes to a particular memory location, or a particular instruction (perhaps a developer isn’t sure a particular line of code is actually executing in their program), among other things.
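
To make that concrete, here is a minimal sketch of the kind of cycle classification and event matching involved. The types and names are hypothetical, written purely for illustration; none of them come from the actual code we shipped:

    /* Hypothetical sketch of classifying bus cycles and matching an event
     * like "reads and/or writes to a particular memory location". */
    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { CYCLE_IFETCH, CYCLE_READ, CYCLE_WRITE } cycle_kind;

    typedef struct {
        uint32_t   address; /* address driven on the bus for this cycle */
        cycle_kind kind;    /* instruction fetch, memory read, or write */
    } bus_cycle;

    /* Does this cycle match "trace reads and/or writes to watch_addr"? */
    static bool matches_event(const bus_cycle *c, uint32_t watch_addr,
                              bool want_reads, bool want_writes)
    {
        if (c->kind == CYCLE_IFETCH || c->address != watch_addr)
            return false;   /* memory operations at one location only */
        return (c->kind == CYCLE_READ) ? want_reads : want_writes;
    }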

So I got to work understanding the important signals, as well as how an address was put together. The latter part is important because some of these processors present a row address and a column address, and a chip select might come into play in how the full address is determined from those two. Some of this was captured in an initialization component used to set things up when a logic analyzer was being used instead of Wind River’s own event system. With the help of another hardware engineer, one who had much more experience with logic analyzers than I did, I set things up to collect trace data, filtering out samples that told us nothing (in the course of a bus cycle, there would often be many samples that carried no useful information beyond signals like transfer start, transfer acknowledge, or row or column address valid). I was ready to go.
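
As a rough illustration of the address assembly, here is a sketch of combining a row sample and a column sample into a full address. The field layout is invented; in reality the widths and shifts would come from the memory controller and chip-select configuration:

    /* Hypothetical sketch: rebuilding a full address from multiplexed
     * row/column address samples. Shifts and masks are placeholders. */
    #include <stdint.h>

    typedef struct {
        unsigned row_shift; /* how far the row bits sit above the column bits */
        uint32_t col_mask;  /* which sample bits carry the column address */
    } cs_config;            /* derived from the chip-select configuration */

    static uint32_t full_address(uint32_t row_sample, uint32_t col_sample,
                                 const cs_config *cs)
    {
        return (row_sample << cs->row_shift) | (col_sample & cs->col_mask);
    }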

Armed with this, and knowing what the FPGA image used to produce, I started running captures and looking at what was collected. I would go through the samples to find the sequence making up a bus cycle, then turn that into what the FPGA image would have sent to the trace memory. Because we typically had many samples, a classic trade-off came into play: how much data would I take in and evaluate at a time? Far more data was collected than the program could handle if I walked through the many thousands of samples the logic analyzer gathered all at once. I had to find a number that worked well and keep the program in a cycle for as long as necessary: get logic analyzer data, go through what was collected, and send the results to the trace information file. That way, we wouldn’t tie up too much memory on the PC executing the program.
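
That loop might be sketched like the following. The analyzer fetch and the decoder here are stand-ins, not a real vendor API, and the chunk size is the knob the trade-off is about:

    /* Sketch of the chunked collection cycle described above. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK_SIZE 4096 /* tuning knob: samples held in PC memory at once */

    /* Stand-ins for the analyzer fetch and the bus-cycle decoder. */
    size_t la_fetch_samples(uint32_t *buf, size_t max_samples);
    void   decode_and_emit(const uint32_t *buf, size_t n, FILE *trace_out);

    void collect_trace(FILE *trace_file)
    {
        uint32_t *samples = malloc(CHUNK_SIZE * sizeof *samples);
        if (samples == NULL)
            return;

        size_t n;
        /* Stay in the cycle as long as necessary: fetch a chunk, turn its
         * bus cycles into trace records, then reuse the same buffer so the
         * PC never holds more than one chunk of samples at a time. */
        while ((n = la_fetch_samples(samples, CHUNK_SIZE)) > 0)
            decode_and_emit(samples, n, trace_file);

        free(samples);
    }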

Once this was complete, and we could trace program execution just as when we used our own hardware, it was on to supporting breakpoints and events. This required using an application programming interface (API) that was proprietary to the logic analyzer; both Agilent and Tektronix had one, and they were significantly different in how a programmer would use them. I worked mostly with Agilent, with some time spent on Tektronix. Agilent engineers helped with forming the event support, but I still had to translate a user’s selection into a call down to the logic analyzer to set it up. When the user selected something like, “Trace only reads to memory location 1000”, we formed a command that then did the real work. Here, this meant sending a command that set the logic analyzer up instead of using our own software.
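
For illustration, that translation step might look something like this sketch. The command grammar here is entirely made up; the real Agilent and Tektronix APIs were proprietary and quite different from each other:

    /* Hypothetical sketch: turning a user selection such as
     * "Trace only reads to memory location 1000" into an analyzer
     * command. The command syntax is invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { EV_READ, EV_WRITE, EV_READ_WRITE } event_kind;

    static int format_trigger(char *buf, size_t len,
                              event_kind kind, uint32_t address)
    {
        const char *op = (kind == EV_READ)  ? "READ"
                       : (kind == EV_WRITE) ? "WRITE"
                       : "READWRITE";
        return snprintf(buf, len, "TRIGGER ON %s ADDR 0x%08X",
                        op, (unsigned)address);
    }

A call like format_trigger(cmd, sizeof cmd, EV_READ, 0x1000) would then produce the string handed down to the analyzer, which does the real work of watching the bus for matching cycles.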

Much testing was needed to cover these scenarios. Plenty of problems were uncovered along the way, but ultimately, we rolled this out and later supported other processors. The PowerQUICC series got the ball rolling, but we did several others. All of these had important differences; there was one family, largely used in automotive applications, that sent very different data onto the bus and required more work behind the scenes in the software component to produce the information used by the trace component.

When all was said and done, we had something with two big benefits. One was that we leveraged the logic analyzers, and partnerships with their vendors, to trace program execution on processors with higher external bus speeds, ones our FPGA image had a hard time handling (our hardware engineer was terrific and did all he could, but it became a battle against nature, so to speak). The other was that we had turned the logic analyzer, long a tool for hardware engineers, into something a software engineer should not shy away from, but rather something useful to them.

For me, the benefits came in understanding microprocessors much better, as well as logic analyzers. I also came to better understand the internals of our event system, and how using an API well can help developers get the most out of a tool and extend its capabilities. All of this built great intuition for the future.
