Monthly Archives: February 2007

Getting ready for EclipseCon

Well, I just got my “EclipseCon” haircut and I’m starting my final preparations for next week’s EclipseCon. If you’ve never been to one, it’s more than just a conference where people go to hook up with the Eclipse crowd; it’s a celebration of everything we’ve worked hard for over the year. It’s a lot of work to prepare for, but it’s always a week to remember.

This year, thanks to the growth of the CDT community, we’ve managed to get a higher profile in the program with a C/C++ Development track. Here’s what you can expect to see next week.

Short Tutorial, Extending CDT To Support Your Compiler (Chris Recoskie, IBM). Learn how to make your own Managed Build integration.

Short Talk, CDT 4.0: easy to use and integrate (Mikhail Voronin, Intel). Get up to date on the new project model changes coming in CDT 4.0.

Demo, What’s New in CDT 4.0 (Doug Schaefer, QNX and Markus Schorn, Wind River). Demo of all the cool new features that are coming in CDT 4.0, or at least the ones we have working right now.

Short Talk, Intelligent Command Line Processing for the CDT (Chris Mead, ARM). Shows an integration with the Apache CLI library for adding cool tool integration features.

Long Talk, Assembling your open C and C++ workbench (Phillipe Ombredanne and Francois Grenade, nexb/EasyEclipse). A great look at how to leverage a number of open source components to create a complete C/C++ IDE.

Long Talk, C/C++ Source Code Introspection using the CDT (Chris Recoskie and Beth Tibbits, IBM). Shows how you too can take advantage of the CDT’s parser and code models to do some cool analysis of your code.

Short Talk, Multi-platform development with the CDT (Graeme Johnson and Gabriel Castro, IBM). Shows how they use the CDT to build IBM’s J9 for over 50 different configurations.

BOF, CDT Project Meeting (everyone!). We’ll have our regular monthly meeting during BOF time. Everyone is invited to see what’s going on in the CDT project and to provide input for any issues we need to discuss.

Short Talk, Autotools Demo (Jeff Johnston, Red Hat). Shows the autotools integration with the CDT.

Poster (up all week – Poster Reception Wednesday evening)
Extending CDT Debugger to Support Device Software Development (Mikhail Khodjaiants, ARM). Shows how ARM extends the CDT to support their GNU toolchain, GDB, and JTAG.

Hello C#

Not to get anyone excited, especially a particular executive director of a popular foundation, but I’ve managed to enter, build, and run my first C# program using the CDT.

I had previously done a language extension for C# that provides syntax coloring for the C# keywords and associates the CEditor with .cs files. In the last hour (while trying to watch a poorly played curling game on TV – yes, they broadcast curling games on the national sports cable channel in Canada), I took everything I learned from getting my other build integrations working this week and thought I’d give a build integration for the Microsoft C# compiler a try. After a little tweaking to handle this special case, where there is only one tool that takes all the .cs files and produces a .exe, I hit the build button and an exe came out.

Luckily for us, Microsoft reused their PE executable format for .Net apps, so the CDT immediately recognized the resulting .exe as an executable. I created a launch config for it and hit the run button. “Hello World”, it said in the Console view. Très cool!

Now, there’s still a lot of functionality missing: there’s no debugger, and there’s no parser to fill the Outline view and the index. So there’s a long way to go. But it was cool to see.

Wild Week in CDT-land

Wow, it’s Friday. I’m glad. This week, once M5 was out the door, Intel committed their long-awaited rework of the CDT build model. The idea is to make many of the good things we’ve done with the Managed Build System’s build model available to all CDT builders, and also available to the rest of the CDT that could really use this information. The parsers really need to know what compilers and settings are being used in order to parse properly. I can also see the debug system using it to default to the best debugger given the currently active build configuration/toolchain.

As well, we get an upgrade to the New Project Wizard. I think the old approach of forcing users to select a language, C or C++, and then a build system, standard versus managed, was pretty intimidating to new users. How are new users supposed to know what a builder is anyway? With the new New Project Wizard, the selection of a builder is at least hidden. It’s starting to look more like my previous favorite IDE.

What made the week busy was that, to do this, a lot of the underlying architecture of both build systems changed. Of course, as with any major architectural change, a lot of things broke. Mikhail S from Intel is working hard on all the bugs that are being raised, and I was able to figure out how to get my MinGW and Windows SDK build integrations working the new way, so we’re getting there. All the other committers are also measuring the impact of the change.

I think the biggest challenge is to get vendors who depend on these systems to take a look at the changes as soon as they can. Mikhail has been talking about these changes for a long time and has been publishing patches for everyone to look at for over a month. And it was still a surprise to many how vast the changes were.

While we do want to grow the functionality and the architecture of the CDT, it is important that we don’t make too much work for those who are integrating. All I can do is encourage them to get in early and send their feedback to cdt-dev and their bugs to Bugzilla as early as possible. I’m sure there are surprises in store for them, and for us, on how they are integrating with the CDT in weird yet intriguing ways…

The Teraflop ‘CUDA

It’s interesting how the big vendors play off each other when some cool new idea gets close to becoming a product. The latest example was Intel with their 80-core research chip for highly parallel teraflop computing. Now we see NVidia has released their CUDA SDK and C-like compiler that does the same with their latest 8800 series video cards.

Apparently, you can get 520 gigaflops (billion floating point operations per second) from their top-end 8800 card, which features as many as 128 single precision floating point cores. Combine two such cards in an SLI configuration and you can, theoretically, hit the teraflop mark. Their compiler takes programs written in C with some CUDA-specific extensions and produces code that gets downloaded to the card, which runs as a co-processor to your main processor.
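To give a flavour of the programming model, here’s a minimal sketch in C with the CUDA extensions. The kernel, names, and launch parameters are my own illustration, not taken from NVidia’s SDK samples, so treat it as a sketch of the idea rather than tested code:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel: runs on the card, one thread per array element.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void) {
    const int n = 1024;
    float host[1024];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));      // allocate on the card
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<n / 256, 256>>>(dev, 2.0f, n);             // 4 blocks of 256 threads

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("%f\n", host[10]);
    return 0;
}
```

The host code looks like ordinary C; only the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax are CUDA extensions, which is what lets their compiler split the program between your CPU and the card.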

I can see the applications for such technology being pretty much limited to scientific simulation type things, not general purpose computing, at least not in the short term. The main issue is that not everyone will have these specific cards in their systems, and it really is NVidia-specific. But it looks like they’ll provide a huge amount of horsepower at a decent cost.

One of the reasons I blog about these things, other than that I think they’re cool, is to show that C/C++ development is still alive and well and growing. I remember seeing a study a couple of years back that showed it in decline, with everyone moving to Java or .Net. But there will always be applications that need to run as close to the hardware as possible to run as efficiently as possible, either because the computing resources are limited, such as with mobile devices, or because they are very specialized, such as the NVidia processors. Languages such as C and C++ that compile directly to executable object code are still the best way to achieve these efficiencies, which is why the CDT is as popular as it is.

CDT 4.0 M5 Now Available

I am pleased to announce the availability of the first “public” milestone of CDT 4.0, M5. CDT 4.0 is going to be a huge release for us, and Bugzilla is telling me that we have already addressed 265 bugs/enhancements (more than we did in all of CDT 3.1.0, BTW). With all the new committers working on it, it is important that we get these milestones out to the community for feedback and testing.

So feel free to give it a try. Instructions and links are available on the CDT website. Note that you need Eclipse 3.3 M5 to run it.

Unfortunately, we don’t have a New & Noteworthy yet. But things to look out for include new views like the Include Browser, the Call Hierarchy view, and the Type Hierarchy view, as well as improved performance for Open Declaration and Content Assist. The managed build internal builder works better by getting dependency info out of the index. There hasn’t been much change in debug and the standard builder.

New Face

For those who follow the Planet, I have a face finally. Thanks to Ian Bull for putting it together for me. For those that don’t follow Planet Eclipse, boy are you missing out!

Also, I am honoured to be named a finalist for the Top Ambassador award in this year’s Eclipse Community Awards. It’s funny, since I had a hard time deciding between the other two finalists when I voted: Alex for his great work on EclipseZone, and Chris zx for constantly hooking me up with cool things happening in the greater Eclipse community. Good luck to both of them!

Old news is good news

I have to admit, I am one of the worst committers when it comes to responding to newsgroups. I have a tough time just keeping up with cdt-dev and Bugzilla and writing code, so I never seem to have time to go to the CDT newsgroup to see what people are talking about. Don’t vote for me in the Top Committer category, that’s for sure.

But I went there today to announce CDT 3.1.2. I guess this is the first time I’ve done so since I got my new laptop a couple of months ago (see!), and it downloaded all the messages since the beginning of time. It was interesting. There are over 12,000 messages since the first one, in which John Duimovich launched the newsgroup on December 10, 2001. The talk then was about today’s CDT’s predecessor from IBM, which was very different (some of it lives on today in the Remote Systems Explorer from DSDP/TM).

The first message after John’s was from ‘dominic’, who started a thread of fans drooling over the idea of a C/C++ tool in Eclipse. There were a few posts from people with great feature ideas, like Ant support and UML modeling (which, funnily enough, is what got me into the CDT). People were concerned that the team wasn’t testing against the funky new GTK support on Linux, available in early access form with Eclipse 2.0. And, of course, people were asking: if you can do C/C++, how about supporting COBOL and Objective-C too?

It’s pretty good reading and gives a great historical background on the evolution of the CDT. And it’s a bit spooky, since I hadn’t even heard of the CDT while all this activity was happening. But the coolest thing is the diversity of the people who posted then and post today. It certainly is a great way to keep in touch with the community outside the cdt-dev walls. And I’m probably the last committer to see the value in participating, which is now my late new year’s resolution…

CDT 3.1.2 Now Available

It’s been a pretty busy week as we are getting two releases together pretty much at the same time. The first is our latest maintenance release, CDT 3.1.2. Bugzilla tells me there were 101 bugs fixed in this release, which is a pretty good number given that we have been very busy working on CDT 4.0 at the same time.

The biggest fix that I put in was to move away from using memory-mapped files for the Index (also known in some circles as the PDOM). Memory-mapped files worked well until we hit very large workspaces, where we started running into address space limits on Windows. I’ve switched to regular files that are read a chunk at a time, with a least recently used (LRU) cache to make sure we don’t take up too much memory. It did cost us some performance, but I’m still within the targets I had set out for CDT 3.1, i.e. indexing Firefox in under 20 minutes.
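The scheme is roughly this (a simplified C++ sketch of the idea, not the actual PDOM code; the 4 KB chunk size and the class and method names are my own):

```cpp
#include <cstddef>
#include <cstdio>
#include <list>
#include <map>
#include <vector>

// Simplified sketch of a chunked file cache with LRU eviction.
// Instead of mapping the whole index into memory, we read fixed-size
// chunks on demand and cap how many stay resident.
class ChunkCache {
public:
    static constexpr long kChunkSize = 4096;

    ChunkCache(std::FILE *file, std::size_t maxChunks)
        : file_(file), maxChunks_(maxChunks) {}

    // Return the chunk covering the given file offset, loading it if needed.
    const std::vector<char> &getChunk(long offset) {
        long index = offset / kChunkSize;
        auto it = chunks_.find(index);
        if (it != chunks_.end()) {
            // Cache hit: move this chunk to the front of the LRU list.
            lru_.remove(index);
            lru_.push_front(index);
            return it->second;
        }
        if (chunks_.size() >= maxChunks_) {
            // Cache full: evict the least recently used chunk.
            long victim = lru_.back();
            lru_.pop_back();
            chunks_.erase(victim);
        }
        // Miss: read the chunk from disk.
        std::vector<char> data(kChunkSize);
        std::fseek(file_, index * kChunkSize, SEEK_SET);
        std::size_t got = std::fread(data.data(), 1, kChunkSize, file_);
        (void)got; // a short read just leaves the tail zeroed
        lru_.push_front(index);
        return chunks_[index] = data;
    }

    std::size_t cachedChunks() const { return chunks_.size(); }

private:
    std::FILE *file_;
    std::size_t maxChunks_;
    std::map<long, std::vector<char>> chunks_; // chunk index -> data
    std::list<long> lru_;                      // most recent at front
};
```

The tradeoff is exactly what I described: each miss costs a seek and a read, but memory use is bounded by maxChunks × kChunkSize instead of the whole index file’s address space.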

More information can be found on the CDT Website.

Stay tuned for our first real milestone for 4.0 early next week…

OpenKODE: Cross Platform Mobile Gaming in C

I blogged a while back about an open source mobile gaming device, the GP2X. It’s a pretty simple and inexpensive unit with a dual core ARM configuration. I think a lot of people are using it for simple 2D games, and there are ports of the arcade machine emulator MAME to it. It’s not a very powerful machine, but I thought the idea was pretty powerful.

It got me thinking about whether the industry would be interested in a similar platform but with 3D gaming support. And if not, why not. Maybe these things cost too much to build. Maybe the business case isn’t there. I haven’t seen much interest from the big gaming houses in porting their titles to OpenGL ES, a standard 3D API for mobile. But then, there isn’t really a high volume platform out there that supports it at the scale of the Sony PSP or Nintendo DS. Unless you count phones, but you can’t do serious gaming on a 1″ screen…

But it looks like the gang at Khronos are trying to do something about that. They recently ratified a draft specification for a common platform API called OpenKODE. It’s a collection of C APIs, most of which they’ve already specified for video and audio. But it also has an OS abstraction layer so you can compile applications against any OS, or so goes the theory. The intent is to provide a platform for content providers that will allow them to hit as big a market as possible.

They are accepting comments from the industry and I see vendors starting to do press releases announcing support for it. My hope is that you’d start seeing mobile gaming platforms like you see with MP3 players, i.e., a lot of vendors making different types of devices but all supporting common standards. That would sure change the face of the industry a bit.