Monthly Archives: August 2007

Does "Eclipse Europa Needs Some Polishing"?

I ran across this review of Eclipse Europa on eWeek that came out yesterday. It confirms almost every concern and hope that I had for the Europa C/C++ package that people have been downloading in droves. And it was really interesting that the reviewer is a former Windows C++ developer, very familiar with Visual Studio, who is now doing mobile development: exactly the scenario we see a lot of, and the biggest growth area for Eclipse on the C/C++ side.

The first thing that hit me was his summary: “Eclipse Europa is a solid IDE, but it could use more refined packaging for the Windows platform”. If you’re a regular reader of my blog, what more can I say.

The sore points that he ran into really hit home for me. His biggest complaint was the install. He had expected the C/C++ IDE package to include the gcc compiler. There are still way too many steps to get this package to the point where it is useful for C++ developers. “Nothing leaves a more sour taste in a Windows user’s mouth than an application not working properly, or requiring additional manual configurations, after clicking finish on the installation wizard’s final panel.”

He also had trouble dealing with the Eclipse workspace paradigm. Visual Studio is much more flexible about what files are included/excluded from a project. This is an area we really need to deal with to make these guys comfortable.

He had some good things to say too, though, and he really showed why I think Eclipse will be attractive to Windows developers once we clean things up. The CVS integration is unbeatable. He loved the CDT editor and navigation features including CDT’s new Call Hierarchy view. It’s these features that really bring the CDT into the mainstream.

One thing to notice, though, is that the title of the article seems to address all of Europa; at least, that’s what readers will see first. That’s why we really need to be careful when we present Eclipse as an IDE. It isn’t an IDE across the board (not to open that debate again). But users see the word IDE and have pretty high expectations, and when it falls short, it reflects badly on everyone.

The Need for Diversity

I’m in shock. Amongst other emotions that I’m still trying to figure out.

I came into work this morning and checked my e-mail to find that Danny Smith from the MinGW project had sent an e-mail titled “Bye” to the mingw users and developers mailing list. Bye? What do you mean bye?! Just as I was getting excited about the future of MinGW with its spanking new modern compiler, the only guy working on it has quit. I don’t know what to think. Is it a bad joke? Had someone broken into his e-mail account and sent the message? The responses from the other MinGW developers lead me to believe not, as they politely wished him well in his future endeavors while expressing their fear for the future of the project.

And fear we should. I was always concerned about the lack of progress with the MinGW compilers. They seemed stuck on 3.4.2 as the official release for a long time (and now, probably even longer). Danny had come to the rescue and offered hope that the wait was over and we’d soon be able to enjoy all the great improvements to gcc in recent years. But now it appears someone else will need to take on this challenge. And it appears to be a big challenge, as there were a number of bug reports flowing in (one of which was mine) and I was getting worried that Danny would get overwhelmed.

The timing of this is interesting, especially after my blog entry yesterday. But I’ve also been in a number of discussions in Eclipse lately over the need for diversity for projects to succeed. If contributors to a project all come from one company, what happens to that project when the company needs those resources elsewhere? The CDT was able to survive such an incident because we had contributors from many organizations who stepped up to fill in the holes (and I still can’t thank them enough :). But there are projects at Eclipse that haven’t worked hard enough to diversify like this, and that is something to worry about if you rely on them.

And that’s the position I find myself in. I was relying on MinGW’s 4.2 compiler to make Wascana a super appealing environment for Windows development, even for commercial use. Now, I’m not sure what I’ll do. Maybe it’s time to apply some focus again to the Windows SDK compiler and debugger integrations. Although, unless by some miracle Microsoft lets me redistribute their SDK, that violates Wascana’s primary mission of being a simple-to-install, complete IDE. And I doubt I would have ample time to contribute to MinGW, and I don’t really have the expertise anyway. And I have QNX work piling up. And CDT stuff to prepare for. Like I said, I’m still trying to figure this whole thing out…

The True Meaning of Wascana

While the progress on Wascana has been slower than I may have liked, it is progressing. And I’ve been very pleased with the positive feedback I’ve received on it. Almost everyone I’ve heard from says it’s the right solution at the right time. A complete CDT IDE is hard for people to set up themselves, especially for noobs, and that is Wascana’s primary mission in life: to make this easier.

But there is another reason for Wascana, and one I use to justify spending some of my work time on it. I’ve often seen marketing staff from various vendors promote their Eclipse-based tools as, well, Eclipse-based tools. Now in the Java space, that definitely means something. But in the embedded world, it doesn’t have the same punch. It’s almost like customers are saying “yeah, so?”.

This has been the main driver behind my work on improving the CDT for the “grassroots” segment of our industry. These are the guys just getting into programming, or doing it as a hobby, or working at a start-up. People who don’t have a lot of money to spend on expensive tooling but who would benefit from a good free IDE. And while there are good free IDEs out there, there is so much more upside to Eclipse.

But I had reached a roadblock in my pursuit of supporting the grassroots. We had reached the point where their biggest hurdle was setting up the CDT with a good compiler, debugger, and set of run-time libraries. This is the stuff that Microsoft’s Visual C++ has always been good at. And if you look around, thanks mainly to the growth of Linux, there is getting to be a pretty good set of open source tools and libraries.

And I guess that’s why the time is right for Wascana. I think we can build a pretty good free open source IDE from all this, and the feedback I’ve received is that it will be very popular. And if that becomes true, then commercial products based on Eclipse will benefit from the extra visibility and the investment will have been worth it. So while I’ve had to pursue Wascana out on SourceForge due to licensing and IP requirements on Eclipse projects, I consider Wascana to be an important part of the CDT, both for the desktop developers who want a good open source IDE based on it, and for commercial vendors who want their CDT-based IDEs to be successful.

Too clever for me…

Brian Kernighan, of Kernighan and Ritchie fame, or K&R (if you didn’t know, Ritchie created the C language and Kernighan helped him write the book on it). At any rate, Brian has this famous quote whose original source I can’t seem to find, though I found it quoted many times:

“Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.”

And that perfectly explains my frustration in the last couple of weeks trying to understand two very clever code bases. Not only is debugging twice as hard, but being a new guy trying to understand the code is at least twice as hard. Hmm, maybe here’s a new quote for people:

“Learning code is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, the only one who’ll be able to understand it.”

Unless the person learning the code is twice as smart as you. Or something… At any rate, I’m not twice as smart as most people so I get to struggle trying to figure this stuff out.

BTW, as I was searching around for the source of Brian’s quote, I ran across this interesting interview with him that gives a glimpse of the human side of Unix and C as they were created at Bell Labs many years ago now.

From Old to New

I was hunting and pecking around looking to see what is happening in the industry, as I probably do too regularly (I really have to get some code done…). At any rate, I ran across some slide shows that ZDNet was showing on old computers. I still remember the buzz and excitement we young geeks had as computers hit our neighbourhood streets. I don’t think I’ll ever see something like it again.

Anyway, one of the pictures was of the first computer I ever typed a program into. It was an HP-85 (click here for the real site that ZDNet borrows the pictures from) that my best friend’s dad used at his work for the Fisheries Department at the Government of Manitoba office in town. That’s where it all started for me, and it was cool to see the picture. Yeah, the thing had a tiny screen and a proprietary CPU, but it did speak BASIC and I remember the excitement of trying to figure it out.

Of course that is in contrast to the latest computer, or at least processor, that caught my eye: Tilera’s TILE64, a monster 64-core machine organized as a System-on-Chip (SoC, peripheral interfaces included). It especially sparked my interest because of the market it’s trying to address, embedded systems for video and advanced networking. Intel can go on about their server and desktop monster multi-core machines, but there is a real need in the embedded space for this technology too. I can imagine some pretty wicked things that embedded devices could do with automation and robotics and such with this kind of horsepower.

But as with all monster multi-core machines coming out, I still think we need a better way to program them so that we don’t get lost in the complexity of getting our programs to do multiple things at exactly the same time. Hell, I spent a good part of the last couple of days solving a deadlock issue in the CDT, and that was just two threads colliding…
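For illustration (in C++ rather than the CDT’s Java, and using the threading primitives that C++0x is standardizing; the names below are made up), the classic two-lock collision and one standard cure look something like this:

```cpp
#include <mutex>

std::mutex resource_a;
std::mutex resource_b;

// The classic two-thread deadlock: thread 1 locks a then b, thread 2 locks
// b then a, each grabs its first mutex, and both wait forever. One fix is
// to acquire both locks in one shot: std::lock uses a deadlock-avoidance
// algorithm, and the guards then adopt the already-held mutexes.
int with_both_resources(int amount) {
    std::lock(resource_a, resource_b);
    std::lock_guard<std::mutex> hold_a(resource_a, std::adopt_lock);
    std::lock_guard<std::mutex> hold_b(resource_b, std::adopt_lock);
    return amount;  // both mutexes held here, released on return
}
```

The other common cure is simply agreeing on a global lock order so the inversion can never happen in the first place.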

A lesson in scalability

I just read the Skype blog where a Skypian describes (well, glosses over, but we get the gist) what happened with their two-day outage last week. I don’t use Skype very much but I know a few people who use it for their work and were at least inconvenienced by it. I read the report with somewhat the same reasoning that one watches NASCAR, to see the big wreck and find out how it happened. But reading these things helps you think about how you could avoid such wrecks in your day job, so it’s useful reading.

The story goes that 30 million computers around the world running Skype all downloaded a Windows Update and did a restart, all at the same time. I always wondered how Microsoft’s servers could keep up with that, but I guess they did very well. But when those 30 million Skype users all tried to log into Skype after their restart all at the same time, bad things started to happen and everyone got booted off the system.

Now, being a software professional, it’s not clear to me how this outage could last two days. Normally, you get a timeout if the server is busy, and after some amount of time you retry. You’d think there would be variability in the timeout so that everyone doesn’t retry all at once, but maybe that was the flaw they found.
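The usual cure for that kind of thundering herd is exponential backoff with random jitter: each client waits a different, growing amount of time before retrying. A rough sketch (the function name and constants here are my own invention, not anything from Skype’s protocol):

```cpp
#include <algorithm>
#include <cstdlib>

// Full-jitter exponential backoff: after the n-th failed attempt, wait a
// uniformly random delay in [0, base * 2^n], capped at max_ms, so millions
// of clients don't all hammer the login servers in lockstep.
int backoff_ms(int attempt, int base_ms = 100, int max_ms = 60000) {
    int ceiling = std::min(max_ms, base_ms * (1 << std::min(attempt, 20)));
    return std::rand() % (ceiling + 1);  // random point below the ceiling
}
```

With this, the retry storm spreads out over the backoff window instead of arriving as one synchronized spike.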

But the lesson of the day is to always consider the “impossible” since sooner or later, it may not be impossible. We run into that with the CDT. We find users who take the CDT, import any old project they may have, and expect the CDT’s parsers to find everything. In a lot of cases, we’re fine, but we definitely don’t take into consideration all possible scenarios. And I think that will be the next phase of CDT’s lifecycle, to reach that maturity where our feature set does work on more and more projects and we can have more and more happy users added to our community. Opening our minds to the impossible will help us get there.

Just when you needed a Boost

I’ve been aware of the Boost C++ library for quite a while, but in the context I had to deal with it, it was painful. The Boost library is a collection of C++ templates intended as a trial ground for additions to the Standard Template Library that is part of the C++ standard. Some of them, according to Bjarne, have made it into the next standard, C++0x. But they stretch C++ templates to the limits, and as such, stretched the CDT’s C++ parser to its limits and broke it. In the early days of the CDT, we eventually just skipped it.

But lately, Markus on the CDT team has been testing his indexer work with Boost, and I’ve had a number of requests from people to include it in Wascana. So I decided to take a fresh new look at it.

Now I was expecting some simple container templates and utilities and such. And there were things that are much needed like a threads package for multi-threading your app and a regular expression utility class.
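To give a flavour of that regular expression utility: here it is using std::regex, the standardized descendant that TR1/C++0x adopted from Boost (the Boost original spells it boost::regex but reads much the same; the function here is my own example):

```cpp
#include <regex>
#include <string>

// Match release strings like "3.4.2" or "4.2.1" with a compiled pattern.
// boost::regex offers essentially this same interface.
bool looks_like_version(const std::string& s) {
    static const std::regex pattern(R"(\d+\.\d+\.\d+)");
    return std::regex_match(s, pattern);
}
```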

But I was amazed at some of the big constructs they have there. The first thing I ran into is a complete lexer/parser subsystem including a preprocessor. With that, it wouldn’t take too long to build parsers in C++, and maybe even a C++ parser.
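To show the idea of building a parser in C++, without pretending to reproduce Boost’s template-expression grammar syntax (which looks nothing like this), here is a toy hand-rolled recursive-descent evaluator for `+` and `*` over integers:

```cpp
#include <cctype>
#include <string>

// A toy recursive-descent parser/evaluator. The Boost subsystem lets you
// declare grammars like these directly as C++ template expressions; this
// hand-rolled sketch only shows the shape of the problem being solved.
struct Parser {
    const std::string& src;
    std::size_t pos;
    explicit Parser(const std::string& s) : src(s), pos(0) {}

    int number() {                     // number := digit+
        int value = 0;
        while (pos < src.size() && std::isdigit(static_cast<unsigned char>(src[pos])))
            value = value * 10 + (src[pos++] - '0');
        return value;
    }
    int term() {                       // term := number ('*' number)*
        int value = number();
        while (pos < src.size() && src[pos] == '*') { ++pos; value *= number(); }
        return value;
    }
    int expr() {                       // expr := term ('+' term)*
        int value = term();
        while (pos < src.size() && src[pos] == '+') { ++pos; value += term(); }
        return value;
    }
};

int eval(const std::string& s) { return Parser(s).expr(); }
```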

As well, there is a Statechart engine. This is something I’ve dealt with a lot in my past, and it was cool to see a solution that involved templates and states as objects and some of the neat tricks it used to implement action code. Whether it scales to real-size state machines, I’d have to dig deeper to see.
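For anyone who hasn’t bumped into statecharts, the kernel of the idea is a transition function over states and events. This is only a table-driven toy of my own; the Boost engine models each state as a class and each transition as a template parameter, with entry/exit actions layered on top:

```cpp
// A minimal state machine: a build goes Idle -> Running -> Done.
// Unrecognized events leave the state unchanged.
enum class State { Idle, Running, Done };
enum class Event { Start, Finish };

State next_state(State current, Event e) {
    switch (current) {
        case State::Idle:    return e == Event::Start  ? State::Running : current;
        case State::Running: return e == Event::Finish ? State::Done    : current;
        default:             return current;
    }
}
```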

I’ve always been amazed at how powerful C++ templates can be, and at how the compilers can take all this template code and specializations and such and optimize it down to some pretty efficient code, close to what you probably would have written by hand. But with templates, you work at a higher level of abstraction, meaning higher productivity. Boost gives you some pretty powerful abstractions. We’ll see how easy they are to use in practice.
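That zero-overhead abstraction point deserves a tiny example (my own, not from Boost): a generic fold written once as a template, which the compiler specializes and inlines for each instantiation down to roughly the hand-written loop.

```cpp
#include <cstddef>

// A generic fold over a fixed-size array. For a plain int array the
// instantiated code is essentially the loop you'd have written by hand;
// the abstraction costs nothing at run time.
template <typename T, std::size_t N, typename Op>
T fold(const T (&values)[N], T init, Op op) {
    for (std::size_t i = 0; i < N; ++i)
        init = op(init, values[i]);
    return init;
}

int sum_demo() {
    int data[] = {1, 2, 3, 4};
    return fold(data, 0, [](int a, int b) { return a + b; });
}
```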

Debugging the Debugger

I’ve been trying out MinGW’s new 4.2.1 gcc compilers. As I mentioned previously, they’re experimental. But I’ve gotten really good feedback from people that moving to 4.2.1 is a great move and will help make MinGW a serious choice for developers.

They actually have two variants of gcc that they’re working on. One of them supports exception handling based on unwind information encoded using the DWARF standard. It’s apparently much more efficient than the default one based on setjmp/longjmp. I’m not sure what that all means, but my take is that the DWARF version is better.

At any rate, I had a problem using the DWARF version that I didn’t have using the default (sjlj) version. If I specified the path to a file using Windows’ traditional back slashes, e.g. ..\main.cpp, gdb got confused and I couldn’t set a breakpoint on a line. And, unfortunately, CDT’s builder builds files this way, so my breakpoints failed to get set.
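One possible workaround (purely hypothetical on my part, not something the CDT actually does) would be to normalize paths before handing them to gdb:

```cpp
#include <algorithm>
#include <string>

// Hypothetical workaround sketch: rewrite Windows back slashes as the
// forward slashes this gdb build handles, before issuing the break command.
std::string to_gdb_path(std::string path) {
    std::replace(path.begin(), path.end(), '\\', '/');
    return path;
}
```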

So, I downloaded the source to MinGW’s gdb, configured and built it, and set up a debug session, all within the CDT (this worked since configure generates forward slashes). I was able to set breakpoints, look at the DWARF symbol data that gdb was trying to use, and find where the line number info was missing. And with that information, I was able to generate a hopefully helpful bug report that the MinGW developers can take, or, if I find the time, I can try out different solutions. The only trouble I had was making sure which gdb was which :).

At any rate, this brought home again why I love using IDEs for development (which gave me a great intro for an article I’m writing). The productivity of using a debug environment that provides point and click visualization of debug information has to be at least ten-fold over using command line debuggers, and maybe a hundred-fold over using printfs. Once you start using it, you’ll never go back.

The Master Speaks, Bjarne’s vision of the future

I’m not sure whether you’d call him a Jedi master, or a Dark Lord. I guess that depends on your opinion of C++. To me, I’ve always affectionately called him Barney (which I’m sure he’d hate), and, of course, I treasure my copy of “The C++ Programming Language”, the “Barney Book”.

Bjarne Stroustrup, the inventor of C++, recently gave a rare public talk at the University of Waterloo, Canada’s top university for computer science. The topic of the talk was the new version of C++, currently called C++0x (he mentions that if it slides into 2010, they may just call it C++0xa, and yes, he has a pretty good sense of humour). But he also talked a lot about the past and present of C++. You can download the talk here, but be warned it’s huge and you may want to use the BitTorrent.

I’m a long-term fan of C++, having used it for my grad studies work back in 1989. It was a no-brainer to me that it became so popular. Bjarne was able to bring object-oriented constructs and generics to C programmers without compromising on performance. And C++0x has performance squarely in its sights as it works to clean up some of the complexities of the language and bring in new concepts that desperately need standardization, like threads.

He also had some great examples of why performance is critical, even in today’s world of fast computers with lots of memory. Embedded systems have always had performance as a high priority, and in the world of mobile, higher performance also means using less power, which makes power consumption a performance issue. Also, if your application uses less memory and is faster, that leaves more resources available to add more functionality, making the system even more useful.

So while the world still seems to be jumping on the Java/C# or Ruby/PHP/Python bandwagons, C++ still has, and always will have, its place. Three million C++ programmers can’t be all that wrong…