Monthly Archives: April 2007

Zero to breakpoint in 10 seconds

We’re getting close to our first release candidate, RC0, of CDT 4. One of the key objectives for CDT 4 was to simplify the new user experience. Thanks to some new features in the CDT like new project templates and in the Platform like contextual launching, we can now get you from a blank workspace to your first breakpoint in 10 seconds (faster if you type faster :).

Here’s how:

  1. File->New Project.
  2. In the New Project Wizard, select C++ -> C++ project, click Next.
  3. Select the Executable project type by clicking the +, select one of the Hello World templates, and if more than one toolchain is listed, pick one. Type in the Project name and click Next.
  4. Fill in the form with information the template needs to generate your Hello World app, click Finish (or next to play with the build configurations, but the defaults are fine).
  5. Click the Build button in the toolbar, wait for the build to finish.
  6. Click the Debug button (one click debugging!) and accept the switch to the Debug perspective.

And you’re done. The debugger hits the default breakpoint on main and you are set to go.

Given that we’re not done with CDT 4 yet, there is a caveat at the moment. The one-click debugging only works with the MinGW integration. To set that up, simply run the MinGW compiler and gdb installers, or have MinGW installed in C:\MinGW. The CDT will automatically pick up the install location. We’ll get the other ones (Cygwin, Linux, etc.) working by the time CDT 4 ships at the end of June.

You can’t do that with Java

O.K., it’s not that I hate Java. It’s more, I run into times where I need to do something that I can’t do in Java. Some of my favorite C++ features include operator overloading, crazy C++ templates like the ones you get with the Standard Template Library, and flexible memory management, not to mention the great job that C++ compilers do at optimizing all this magic into fast object code.

It’s really the flexible memory management that I miss the most. Allocating memory out of the heap is expensive. That’s pretty common knowledge. With Java, every Object gets allocated out of the heap. I remember the first time I ran into this early in my Java career. I had this little class that had a couple of fields that I used to store temporary information that got passed down to some other methods. I couldn’t believe that I had to allocate it out of the heap. With C++ it had become second nature to declare an object and have it automatically allocated on the stack. And when the function I declared it in finished, whether due to a return or an exception, the destructor for the object gets called so you can clean up any mess. And using C++ pass by reference, I was able to do all that with minimal typing (my other mantra – I hate typing, especially with my sore finger right now).

The other cool feature of C++ is the ability to override the operator new to do your own memory management. That way you can allocate all instances of a class in a special memory pool. Or pass parameters to operator new to do anything you want. I’ve run into this as I’ve started looking closer at ray tracing algorithms (my new hobby). One of the speed ups they mentioned was allocating all contents of one of the structures in a given memory region to help leverage CPU data caches in an effort to squeeze every ounce of performance out of the machine as they can (which is really needed to get any resemblance of real-time ray tracing on today’s machines). Now that’s something you can’t do in Java, at least not without some native code, which then isn’t really Java.

Java has its place and I love it for writing Eclipse plug-ins. But despite bold predictions by the IT industry, C/C++ will never go away as long as we continue to throw as much processing at these fancy new CPUs and GPUs as we are. For some reason, our appetite for speed continues to outstrip all that performance that the silicon vendors are working so hard to put in our hands.

Bug 160012 – The CDT Team at Work

There have been massive changes in the CDT’s build system and the CDT’s New Project wizard. This is all great work that the gang at Intel in Russia have been doing to clean up the long standing weirdness of making the user pick between the two competing build systems in the CDT: standard vs. managed. The user still picks, but it’s much more subtle. Along with this, other components of the CDT can start getting more information about what the build system is doing so that we can do things like pick the default debugger based on the active tool chain.

Another component that we’ve been eagerly waiting for was the new project template support proposed by the gang at Symbian in London. This allows us to gather some information in the New Project wizard and generate source files and build settings based on a template. Now this proposal actually occurred pretty much in parallel with Intel’s build system work, and given that, they didn’t really take each other into account.

Well, once the build system was in place with M6 at the beginning of April, it was time to mesh them together. I am thrilled with how this has worked out. It was not an easy task as we had to undo assumptions that had been made. Not to mention the time frame was short with feature freeze being this weekend. But it was great to see how well the two groups worked together along with the odd input from us over here in North America. To see for yourself, check out the bug report for 160012 where the discussions happened. At last count, there were 110 comments on it, some of them pretty lengthy.

I’ve done “around the world” development in a commercial setting but never at this level and never this successfully. Every morning, I wake up and sift through a pile of bug updates that my friends in Europe and India have sent out. We then get a few hours where we’re actually at work at the same time, so the bug traffic is pretty heavy in the morning but tails off towards the end of the day. You always have to think about what time it is elsewhere (even though someone may still be working late – go to bed Mikhail S! :).

I think that it’s a sign of a successful open source project when you have contributors from around the world with diverse needs but all fighting through the time differences to work together for the common good of the project. This is really the main reason I love working on the CDT. Helping create the world’s best and hopefully soon, most popular, C/C++ IDE is pretty good too…

Subclipse withdraws? Someone tell Bjorn…

I just read Eugene’s post about Subclipse withdrawing their project proposal from Eclipse. He points at a well written statement by Mark Phippard explaining why. I guess they have their reasons. But it would have been nice to see something posted on their project proposal newsgroup about this.

I got involved in this Subclipse versus Subversive debate when we were discussing moving to subversion on the cdt-dev mailing list. In the discussion of Eclipse clients, I mentioned that when I tried them I preferred Subversive. And actually, with some recent trials I did for work, I still prefer Subversive. Mark made a somewhat nasty reply to my comments. He made me feel bad for going against Subclipse. And in my searches for other people’s opinions, I more often than not saw him comment the same in support of Subclipse, and I’m sure he’ll comment here. I certainly commend him for standing up for his project and I sometimes do the same for the CDT, but I try to be more polite about it.

So, I guess that means Subversive wins at Eclipse. From where I sit (and others have a right to disagree, but I’m speaking only for myself), having one project is a good thing. In reality, I don’t care who wins, but I do care that we produce a good subversion client for Eclipse, and I don’t see how two competing projects helps anyone. They do, or are intended to do, the exact same thing. In fact, they almost look identical. I almost had to check the features list to make sure which one I had.

But I think we have a long way to go to get subversion client support up to the same capabilities as CVS. Having one project that we can all work on will help make that happen. My intention is to recommend moving the CDT to subversion over the summer, but only if the client meets our needs. That means we on the CDT have a vested interest in making that happen. And I know how to make patches and attach them to bugzillas, so I can’t wait to get some time on it. And I will spend time on Subversive because it is an Eclipse project. So will others in the Eclipse community, because of that sense of community that is Eclipse. That’s something I think the Subclipse guys forgot to take into consideration.

Ray tracing the future of Gaming

I love the Inq (the Inquirer). I’m not sure who their writers are but I usually seem to find out about new things there first. Last night I read this cool article about work going on to support real-time ray tracing to render 3D graphics. After a Google search, I found the real article that this guy was referring to here on PC Perspective. It’s based on research happening at Saarland University in Germany where they’ve developed an API called OpenRT, which of course is similar to OpenGL. They also have a prototype so that you can try it out.

I remember, back in university almost 20 years ago now, a couple of buddies of mine doing ray tracing for their graphics class. The images they produced were pretty cool and realistic for the time. But it took overnight to generate one frame. Mind you, that was on good old Sun 3s, but you certainly wouldn’t think of doing this in real time, even today.

The ray tracing demos you’ll find in the PC Perspective article and at the OpenRT site are amazing, though. From what I’ve read, doing shadows in current technologies like OpenGL or DirectX is very difficult and game developers almost always take short cuts, which leaves the scenes a bit unreal. But with ray tracing, it appears to be much easier and the scenes appear much more believable, which is the end goal for all 3D animation.

What’s changing is the march towards many multi-core CPUs by Intel and AMD. One of the big advantages of ray tracing is the scalability of the algorithms to parallel threads. Each pixel is determined independently of the other pixels. All you need to do is partition the screen to the cores and you get almost linear scalability in performance.

Now, mind you, the demos I saw, especially the one from the OpenRT site, used a lot of cores, mainly 32, and one was even at 48. But I imagine there are opportunities for improvement given this early stage, and even for some hardware acceleration for parts of the algorithm. But if you were wondering what you were going to do with that quad-core furnace of a chip, here’s one idea. And it’s pretty interesting to see that Intel caught on to this idea a couple of years ago.

Ubuntu 7.04, has Linux’s time come?

As I’ve ranted on my blog in the past, I’d love to use Linux, but I still find the user experience, especially the look and feel, to be a long way from the cleanliness and professionalism of Windows XP. And from what I’ve seen of Vista, it’s nowhere even close.

But I have a buddy, Rodney, at work who swears by Linux, especially Ubuntu. So much so, he has it installed on his laptop as his main work environment. Of course, I keep bugging him about how ugly I think it all looks, and he fires back with the cool 3D/alpha blending environment of the latest experimental extensions to X and Gnome. It’s all good fun, but at the end of the day I’m happy to walk back to my desk and sit at my Windows machine.

I’ve been playing with the beta release of VMware Workstation 6. I’m a big fan of vmware from way back and every new major release seems to bring something new that makes me like it even more. This release brings a new UI that makes running vmware full screen a lot easier to use and more Windows friendly. The performance seems to be a bit better too, but then lately it’s been pretty good anyway. I use vmware to run the x86 target of our Neutrino RTOS for testing with a target. And, of course, I use it to experiment and test with a Linux host.

So to get up and running on the vmware beta, I downloaded the latest Ubuntu 7.04 release. Rodney’s been raving about it so I had to give it a try. The Ubuntu install experience is the best I’ve seen with any Linux distro. You boot up into a full Linux/Gnome environment off the CD, and then double click the Install icon to launch the installer. Just coming up cleanly off the Install CD gives you confidence the real thing is going to work. After that, it’s just a matter of making sure all the packages you need are there. This is still a pretty harsh task that’s not intended for the weak. But the package manager helps install those things quickly (once you properly guess at the names of the packages you need, like sun-java6-bin?).

The look is still not up to Windows standards, but it seems to get better every time I try a new distro. Maybe I’m just getting more open to the idea of using Linux. Certainly if you’re an engineer who knows a lot about *nix already, like taking advantage of Linux’s features for embedded development such as mounting files as disks, then I think you’d be happy with the latest Ubuntu. But if you’re my Mom, sorry Mom, stay on Windows. At least for now…

Fun with JTAG

As I mentioned previously, I am working on adding an officially supported CDT integration with gdb that can be used with JTAG hardware debugging devices. As a quick primer, JTAG devices allow you to have full control over the CPU and memory on an embedded computing board using a special connector that is now pretty much standard on all such boards. With debugging support, that means you can read and write memory and any memory-mapped registers, read and write CPU registers, and set breakpoints. A lot of the JTAG vendors are starting to support integration of their devices with gdb as a front end to give developers a familiar interface, which, for us on the CDT, allows us to leverage almost all of our existing gdb integration to provide an Eclipse UI interface.

JTAG debugging does have limitations. It’s not overly fast, especially when compared to native debugging. Stepping through code takes around a second in the setup I have. And with most configurations, the JTAG debugger hardly works at all once virtual memory is turned on in the CPU, making process level debugging, as you normally do with OS’s like Windows, Linux, QNX, etc., impossible. The biggest value of JTAG in the past has been for the initialization code that sets up the board and starts the operating system kernel. But that is starting to change as JTAG debugger makers are figuring out how to do the virtual-to-physical translation and back, and adding OS awareness in the debugger itself, allowing for the full debug experience.

So I have the integration working now. With permission, I borrowed a lot of ideas from the Zylin Embedded CDT plug-in. Again, my hopes are to bring those guys and their customers on board to avoid the need for forking the CDT. It was pretty cool when I did my first debugger launch, and everything just worked. This is really the beauty of Eclipse and the CDT and the focus on extensibility, that makes adding new features a breeze.

Below is a picture of my set-up. I have a little TI OMAP board hooked up to an Abatron BDI2000 JTAG device hooked up to a network hub that eventually hooks up to my laptop. You can’t see the screen, but trust me :), the CDT has reset the board, loaded in an image, started it up, and hit the breakpoint I had set. And you get all the CDT goodness like the variables, registers, and disassembly view. Très cool!

My next step is to hook up qemu, the board emulator, with its built-in gdb remote stub, which works just like a JTAG device, to this whole thing so you can try it out without having to fork out money for real hardware…

F3, the CDT Wunder-Key

Unfortunately, I do most of my programming in Java (sorry Java-lovers, but I hate Java). But the JDT really makes it a breeze to write code for the CDT plug-ins. My favorite feature is F3, or Open Declaration. Whenever I’m investigating code, I like to go visit the implementation of some unknown method to see what it does. F3 lets me jump from class to class and get a quick overview of the system I’m trying to program against.

Well, I’m trying to do the same thing with the CDT. Before CDT 4, Open Declaration tended to be slow since it did a complete parse of the file you’re viewing and all the files that it includes. With CDT 4, we’re now only parsing the file and using the CDT index to get all the other declarations needed for that file.

As well, F3 tended to be hit and miss on whether it actually found anything. A lot of that had to do with the indexer’s need for build information that is often hard to provide. Also a lot had to do with information that we hadn’t collected yet, C++ template information for example.

With CDT 4, F3 promises to be a whole lot better. I’ll be spending a bunch of my time as we start to wrap up CDT 4 development on making sure it finds as many definitions as it can so that it can be as useful to CDT users as it is for JDT users. The one I just added that made me think of blogging about it is #include statements. Wonder what’s in that include file you’re #include’ing? Well, move the cursor to the statement and hit F3. Bingo, there’s the include file (at least as long as the index knows where it is). I look forward to adding more cool features like that.

Microsoft is making my Linux fonts ugly

I found this article mentioned on Slashdot. I’ve stated in the past that the main reason I don’t like using Linux as my main development environment is that I find the fonts hard to read. My eyes are horrible, especially after long stints writing code. Windows for some reason just looks so much better, especially on LCD screens.

Well, the reason Windows looks better is their ClearType technology. After reading the article I tried turning it off, and sure enough, Windows sucked too.

Apparently the fuss over ClearType and FreeType, Linux’s font technology, has to do with patents that Microsoft holds on the techniques behind ClearType. With all the anti-Novell/Microsoft clauses in the GPLv3 dealing with patent protection and the essential prohibition on it, I’ve lost all hope. Despite what Richard Stallman may wish, Microsoft will likely never extend patent protection on ClearType to all of the Linux community, which means they will have to pick the other route, i.e. to none of the community.

Which is really too bad. As much as I thank the FSF and GPL for giving us all those great GNU tools, I’m afraid that their conviction to ideals will also stunt the growth of open source, and especially Linux. The FSF may hate software patents, but they are a fact of life. And if the two worlds can’t mix, then the poor user pays a price, one way or the other.

Oh, I hate Cygwin

I usually try not to blog while I’m angry. But I’m in the middle of trying to get the Firefox source set up in the CDT on my laptop, which, of course, is running Windows (XP mind you, still not brave enough to try Vista). And it has been a struggle.

First of all, it was a bit tricky to set up the build environment. I’m using Cygwin since Firefox would really rather be built on Linux, or in a convoluted environment that involves using Cygwin but the Microsoft tool chain, which the CDT doesn’t really have support for yet. Luckily I found a web page that showed how to set it up to use the cygwin compilers. It’s a little out of date but a few tweaks and I was able to get Firefox built.

Now I’m trying to get the build output into the CDT so that our cool Scanner Discovery feature can parse it and set up the include paths and symbols for the indexer. That’s been tricky since the Firefox build wrapped all calls to gcc with a wrapper script which deals with converting paths, I guess. I’ve got that fixed, but now the source file paths used by Firefox use the cygwin paths, i.e. /cygdrive/…, which our build output parser doesn’t understand. So I’ll have to introduce a new parser that does the cygpath conversion.

We’ve had a lot of bug reports lately on cygwin. Most of them have to do with the cygwin developers deciding not to support Windows path names any more, you know, the good ol’ C:\blah. I guess I understand their reasoning. Cygwin is meant to be a Linux emulation environment on Windows. I don’t think that was their original intention since Cygwin actually predates all this Linux popularity, but that’s what it has turned into (and even their web site now says so).

The issue for the CDT is that it isn’t running under the cygwin environment. It can only deal with Windows paths. So whenever we see a cygwin path, we need to convert it. Not only that, when generating makefiles for cygwin make, we need to convert Windows paths to cygwin paths. Now, we can’t do that for everything on Windows since tools like MinGW gcc do use Windows paths. It’s pure evil (well, maybe not that evil…)

So what this really means is that supporting Cygwin with the CDT is becoming a lot of work. This is one of the reasons I want to start promoting MinGW, a much more Windows-friendly port of the gnu tool chain, as the gnu environment of choice on Windows. The problem, though, is that cygwin is very popular and easier to install and seems to have much more momentum than mingw. So we will need to continue to support both. But I’d sure like to see more of that momentum shift to MinGW. Which, in open source, means I need to do more to help them.