BOSTON – Three big conferences shared space in the Boston Convention and Exhibition Center, with the Embedded Systems Conference the one of primary interest to me. Only one other conference in the building (the Cloud Foundry Summit) was not co-located with it, so if you were a technology professional in town for a conference, chances are you were in the BCEC.

The Embedded Systems Conference in Boston isn’t what it used to be. Once upon a time, the exhibits would take up significant space on the Hynes Convention Center floor, and there would be enough conference sessions at any given time to fill rooms on multiple floors. The current edition is very much a slimmed-down one, although the exhibit floor looked bigger thanks to the other two co-located conferences. The BIOMEDevice Boston exhibitors took up most of the floor, and those who were part of the Design & Manufacturing New England Conference claimed a significant amount of space as well.

(Side note: I wonder if ESC out west has dropped off as much, if at all. I always sensed that it was a bigger deal out there, and it helps that a company I used to work for was headquartered out there and used to get a big booth that they would not get in Boston. Even LinuxWorld only lasted a couple of years here, after there was much excitement about it coming to Boston.)

The day began with part of a two-hour session conducted by Jean Labrosse, one of the foremost authorities on real-time operating systems. Labrosse has written a prominent book on the subject (one I have read some of), so his Introduction to RTOS is a good choice if you’re new, or almost new, to the material. He brought up the questions you should ask to ascertain whether you need an RTOS in the first place before going into aspects such as tasks, the stack and scheduling. Unfortunately, I only stayed for the first hour, because of bad timing: there was a session even more worth attending right next door.

Jacob Beningo presented Transitioning Embedded Software From C to C++, something that is even more interesting to me now after learning about modern C++. The language has changed immensely over the past decade or so, along with compilers, so C++ is now a more viable language for embedded systems. The 2017 Embedded Developer Survey conducted by AspenCore showed that C (56 percent) and C++ (23 percent) account for the lion’s share of all embedded software projects, numbers that are not surprising to me. Beningo first gave a series of questions to ask to determine if you need what C++ gives you, then gave a number of good ideas (you might even call them, dare I say, pointers!) for going from C to C++, and even recommended a book, Real-Time C++, whose third edition is coming out next month.
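As a small illustration of the kind of step involved in such a transition (my own sketch, not code from the talk; `GpioPin` and `kGpioMaxPin` are made-up names), a common early move is replacing a C macro-and-struct idiom with a typed constant and a small class that enforces its own invariants:

```cpp
#include <cstdint>

// C style might use: #define GPIO_MAX_PIN 31 plus a bare struct
// manipulated by free functions. The C++ version below gives the
// constant a type and lets the class guard its own state.
constexpr std::uint8_t kGpioMaxPin = 31;

class GpioPin {
public:
    // Clamp out-of-range pin numbers in the constructor, so every
    // GpioPin object is valid for its entire lifetime.
    explicit GpioPin(std::uint8_t pin)
        : pin_(pin > kGpioMaxPin ? kGpioMaxPin : pin) {}

    std::uint8_t number() const { return pin_; }
    std::uint32_t mask() const { return 1u << pin_; }  // bitmask for port registers

private:
    std::uint8_t pin_;
};
```

Nothing here requires heap allocation or exceptions, which is why constructs like this are viable even on small targets.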

As he went through his ideas, Beningo noted that he thinks more code will be provided in C++ as well as C as time goes on. Dan Saks has for years been beating the drum that C++ is very much a viable language for even some low-level embedded development, and I suspect he is happy to see that the tide appears to be turning.

Right after that, I stayed right where I was to learn about Arm TrustZone in Arm V8-M TrustZone Primer. TrustZone goes back at least to the ARM11 family, and in the newer architectures it is a set of security features that partition the processor into a Secure and a Non-Secure state. Secure memory includes a region called Non-Secure Callable (NSC), which holds function entry points. In some ways, NSC is analogous to how a user-space program in Linux can reach the kernel, but must do so via a system call instead of jumping in directly. If you attempt to call into a Secure region you cannot access, a fault is generated. Even interrupts and their service routines can be partitioned between the two states.

After a break to check out the exhibits and grab lunch, I caught most of the keynote address by Anatoly Gorshechnikov from Neurala, as he talked about artificial intelligence and some of its challenges. AI is not a subject I’m well-versed in, and I didn’t have the easiest time following the presentation, but I understood a good deal of the challenge involved – the same challenge we humans have in remembering and recognizing objects we have seen or heard. We take this for granted because we never have to figure out how it happens; it feels like magic to us, even though there is nothing magical about it.

The first post-lunch session was one that brought back some memories while being a learning experience. Having worked on real-time trace well over a decade ago, I found Arm Trace: Kills Bugs Fast by Shawn Prestridge of IAR Systems very much of interest. When I worked on those systems, the microprocessors supported were mostly Motorola (before the Freescale spin-off) and IBM when they took over the Power architecture, as well as some Intel XScale. Arm, on the other hand, is a whole other beast, and my exposure to what it offers has been minimal.

Arm processors support a few different kinds of trace. Most of them are at least a variation of what we used to call “back trace” on the Motorola 8xx processors, where you capture information at a branch and then trace back a number of instructions to present the user with the code that was executed. The value of trace – seeing how you arrived at your current place of execution and the time involved – is something I learned very quickly. It can be immensely helpful, especially with the profiling and code coverage capabilities available; the latter comes into play in regulated industries.

Another consideration is the availability of memory for this. The systems I worked on had a lot of trace memory as part of a larger ICE and event system, and at one time used logic analyzers as well; however, those are not always available. That is just one reason there are often few resources available to make this happen. Without specialized hardware that can reach all the processor signals that constitute a bus cycle, reconstructing code execution is a very hard task.

After that was a well-attended session, and deservedly so, as Jack Ganssle talked about Really Real-Time Systems. Jack is a legend in the industry, and as someone who has read a lot of his work over the years in the old Embedded Systems Programming/Embedded Systems Design magazine, and more recently online, I look forward to anything with his name on it. While there are some well-known people in this industry to those who have been in it a while, to me Jack stands above them all.

This presentation did not disappoint, covering a lot of gotchas and challenges that we face in this line of work. He started off with an example of a multiply and a divide on two processors – an 8051 and a 186. Those are older processors, to be sure, but the point he wanted to make was fundamental: know what your tools do. For different reasons, an instruction may take far longer or far less time to execute on one than on the other, and you might be surprised which. Trigonometric functions lend a little more insight into this: the C standard mandates double precision for them, and the 8051 only has single precision, which helps it implement them with much better efficiency than the overall more advanced 186.
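The precision point can be seen in the library interface itself (a sketch of my own, not an example from the talk): C's `sin()` takes and returns `double`, so even `float` arguments get promoted, while C99's `sinf()` (or the `float` overload of `std::sin` in C++) stays in single precision throughout.

```cpp
#include <cmath>

// sin() operates in double precision, as the C standard requires.
// On a part whose libraries only do single precision, the double
// path is where the cost (or the non-conformance) hides.
double sine_double(double x) { return std::sin(x); }

// The float overload (sinf() in C) stays in single precision,
// which is dramatically cheaper on small MCUs.
float sine_single(float x) { return std::sin(x); }
```

Comparing the two results for the same input shows agreement to only about seven significant digits, which is exactly the single- versus double-precision gap.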

He spent a good deal of time on the challenges of reentrant code, which include knowing what your compiler does and what may be interrupted, as well as hardware issues such as impedance mismatches and ringing that could basically blow up entire circuit boards. He also left a little time for interrupts, from non-maskable interrupts (NMIs), which will always break non-reentrant code, to using unused vectors for a debugging routine in case something sends execution there by mistake. Of course, he was quick to point out that NMIs should only be used for the most serious of errors.

Simply put, Jack delivered a gem in 45 minutes, and I have only touched on a few small pieces of it. A college student sat right near me, and I told him before it started that he had chosen a good one because he was about to watch a legend in action. He relayed to me afterwards that he saw what I meant. I am only dismayed that Jack no longer writes there regularly.

Finally, Jared Fry from LDRA talked about the MISRA C:2012 standard, one often used for safety and security. LDRA has several people on the MISRA C committee and similar bodies, illustrating the know-how the company has. A big point made is that the main language-related failures tend to come from using parts of the language whose behavior is undefined, implementation-defined or unspecified. Fry also stressed applying automated coding-standard checks as early in the process as possible; adopting them late or on legacy code means reworking a great deal of code en route to full compliance, which can itself introduce defects, so a cost/benefit analysis likely comes into play there, and not necessarily a trivial one.
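To make those three categories concrete, here is a sketch of my own (not from the talk): right-shifting a negative signed value is implementation-defined, and signed overflow is undefined, which is why MISRA-style rules push code toward unsigned shifts and explicit range checks.

```cpp
#include <climits>

// Implementation-defined: most two's-complement targets arithmetic-shift
// -8 >> 1 to -4, but the C standard leaves the result to the
// implementation, so a MISRA checker flags it.
int shift_signed(int v) { return v >> 1; }

// Well-defined alternative: shift an unsigned value instead.
unsigned shift_unsigned(unsigned v) { return v >> 1; }

// Undefined: signed overflow. A MISRA-friendly increment checks the
// range first so the overflow can never be reached.
bool safe_increment(int* v) {
    if (*v == INT_MAX) { return false; }  // would overflow: refuse
    ++*v;
    return true;
}
```

The value of a static-analysis tool is exactly that it finds the `shift_signed` pattern for you, long before it misbehaves on some future compiler or target.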

That wrapped up the day, with the exhibit floor closed for business by the start of the last sessions. In the past, the floor would host a celebratory couple of hours once the sessions ended for the day, but not this time. All in all, the conference is still worth attending even in its diminished state. The sessions are still excellent, with quality presenters, and there is still much to gain from an event like this.
