Monthly Archives: March 2007

Get your swim suit on

A colleague pointed this out to me and it really made my day. With the stress of getting CDT 4.0 M6 out the door while getting ready for a week at ESC, you need to have a laugh to help keep you going.

Here’s what you do. Go to Google Maps and click “Get Directions”. Start from somewhere in North America and end somewhere in Europe.

The directions will guide you to Long Wharf in Boston and then instruct you to “Swim across the Atlantic Ocean”, which is apparently 5,572 kilometers. You then get out of the water at Le Havre, France, and continue on your way.

I love the sense of humour software developers have. I can imagine the design discussions around putting this feature in. I’m sure they’re still laughing about it, just like we are now. Now, what can we do with the CDT, hmm…

GPLv3, freedom at what price?

I’ve got my fresh copy of the GPLv3 Third Discussion Draft Rationale and started to try to dissect it. It’s a pretty intense legal document as it tries to build up a GPL that has no holes. I’ll have to read it on the plane down to San Jose for next week’s ESC to help me fall asleep.

But one thing that has struck me as I dig into it and read others’ comments on it is that the view of freedom put forth by Richard Stallman and the FSF may actually prove detrimental to the community instead of helping it.

There is no doubt that open source is successful today because of the GPL. The freedom for users to download, possibly modify, build, and run the code is really what got a lot of developers involved in open source to begin with. That momentum has grown, and the quality of open source code has improved to the point where it is a serious force in the industry. But that all started in the days when most open source users were developers.

The issue I have is that the FSF does not differentiate between user and developer. But when it comes to critical systems, especially in the embedded space, I’m not sure users even want that freedom. Having GPL code in my TV is cool, and I’m glad the manufacturer was able to take advantage of it and, in theory, ended up with good code more cheaply than if they had licensed something. And hopefully the developers contributed back to those open source projects to make them even better. But the last thing I want to do as a user is change that code, even though I know I could.

So while the GPLv3 tries everything in its power to ensure that the user can modify the code, my fear is that it will handcuff the developers working on this code. It isn’t free for them to make sure the user gets everything they need to make those modifications. The manufacturers can’t charge for the code, but you can be sure device prices will go up if they need to ship an SDK and hardware with it.

GPL in embedded devices has always been a tricky subject, and GPLv3 seems to make it even trickier. I’m not sure the FSF is willing to listen to the concerns of embedded manufacturers, but I sure hope someone is making noise.

CDT 4, now 5 MB better

I was just checking last night’s CDT build to make sure our source feature got generated correctly (it did, thanks Andrew!). One thing I noticed was that our runtime downloads are now around 17 MB. CDT 3.1.2 was around 12 MB. That’s 5 MB more CDT! We haven’t added any new plug-ins, so it’s all enhanced feature content and improved internal data structures that are contributing to it.

Our M6 is coming along nicely and should be released late Friday or over the weekend. It’ll be part of Europa M6, which I think is still a week or so away. The biggest thing you’ll notice is a new New Project wizard that merges standard and managed make into a single experience. Standard make projects are now “Makefile” project types, something VS users will be familiar with, which allows us to associate tool chains with standard make projects and makes it easier to set them up for indexing.

For our Windows users, you’ll see a new MinGW tool chain which uses various tricks to find your MinGW installation, so there is no more need to add it to your PATH. It also uses the CDT’s internal builder to call the build tools directly, so there is no need to install MSYS or Cygwin to get make and the other command line tools. This will feed into the EasyEclipse MinGW distribution that I have promised to deliver.
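
To give you a flavour of the kind of probing involved, here’s a little sketch. This is not the CDT’s actual code, and the candidate locations are just my guesses, but checking the usual install spots before falling back to the PATH is the general idea:

```java
import java.io.File;

/** A rough sketch of PATH-less MinGW discovery; the real CDT logic differs.
    The candidate locations below are assumptions, not the actual list. */
public class MinGWLocator {
    private static final String[] CANDIDATES = {
        "C:\\MinGW",                    // default installer location
        "C:\\msys\\mingw",              // common MSYS layout
        System.getenv("MINGW_HOME")     // hypothetical user override, may be null
    };

    public static File findGcc() {
        for (String dir : CANDIDATES) {
            if (dir == null)
                continue;
            File gcc = new File(dir, "bin\\gcc.exe");
            if (gcc.isFile())
                return gcc;             // found a usable toolchain
        }
        return null;                    // fall back to a PATH lookup
    }
}
```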

Aside from that, the index is more complete and has more information in it to drive all our parser-based features. This also means better performance for content assist and open declaration, since we no longer do a full parse of the file and all its includes. It’s not often you get oohs and aahs over content assist, but I did at our demo at EclipseCon.

I’ll provide more details and a pointer to the New and Noteworthy on the CDT wiki once we wrap this thing up and get it out to you.

"Conflicts With"

So you know in Bugzilla you can mark a bug as a duplicate of another bug. I have no problem with people raising new bugs even if they end up being duplicates of another bug. That way we make sure we get them all.

But I’ve often run into cases where I’d like to mark that a bug “conflicts with” another bug. Then, if you mark one of them FIXED, the other automatically gets marked INVALID.

The tough part, though, is picking the one to fix…

Ever hear of EFS?

We’re at a critical juncture in the evolution of the CDT. Everything has been going so well lately, with tons of indexer and build improvements, that I guess we were due for a bit of a crisis. The issue we are grappling with is how to use the CDT with projects whose source files sit on a remote server, possibly somewhere across the Internet. It’s a tough environment that will definitely stretch Eclipse and the CDT if we are ever to make it work well.

The solution I’ve been hoping to push is to use EFS to access those files; in theory, the CDT should then just work. What’s EFS? Well, it’s the Eclipse File System, which provides a layer of abstraction at the same level as java.io.File but allows different implementations. Use the right IResource APIs, with URIs instead of IPaths, and everything should work. It seems designed to meet our needs.

I managed to build a FileSystem using FTPClient from Apache’s Commons Net. My test is to run it against the FTP server on the CDT build machine at Eclipse. Saving files is a bit slow, but I guess that’s to be expected, and we can probably introduce some caching and worker threads to at least get it off the UI thread. I imagine RSE does this already, and I will need to take a closer look at their EFS implementation to see if it would work better.
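
For the curious, here’s roughly what such a store looks like. To be clear, this is a stripped-down sketch of the approach, not the code I’m actually running: the class and plug-in id are made up, the client is assumed to be already connected and logged in, and the caching and error recovery I mentioned are left out.

```java
import java.io.InputStream;
import java.net.URI;

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.eclipse.core.filesystem.IFileInfo;
import org.eclipse.core.filesystem.IFileStore;
import org.eclipse.core.filesystem.provider.FileInfo;
import org.eclipse.core.filesystem.provider.FileStore;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;

/** A bare-bones EFS store backed by an FTP server. Connection pooling,
    caching, write support, and real error recovery are all omitted. */
public class FTPFileStore extends FileStore {

    private final FTPClient client; // assumed connected and logged in
    private final URI uri;          // e.g. ftp://host/path/to/file

    public FTPFileStore(FTPClient client, URI uri) {
        this.client = client;
        this.uri = uri;
    }

    private CoreException wrap(Exception e) {
        return new CoreException(new Status(IStatus.ERROR,
                "org.example.efs.ftp", 0, e.getMessage(), e));
    }

    public String[] childNames(int options, IProgressMonitor monitor)
            throws CoreException {
        try {
            FTPFile[] files = client.listFiles(uri.getPath());
            String[] names = new String[files.length];
            for (int i = 0; i < files.length; i++)
                names[i] = files[i].getName();
            return names;
        } catch (Exception e) {
            throw wrap(e);
        }
    }

    public IFileInfo fetchInfo(int options, IProgressMonitor monitor)
            throws CoreException {
        try {
            // One round trip per call; a real store would cache this.
            FTPFile[] files = client.listFiles(uri.getPath());
            FileInfo info = new FileInfo(getName());
            info.setExists(files.length > 0);
            if (files.length > 0) {
                info.setDirectory(files[0].isDirectory());
                info.setLength(files[0].getSize());
            }
            return info;
        } catch (Exception e) {
            throw wrap(e);
        }
    }

    public IFileStore getChild(String name) {
        // Naive URI concatenation; good enough for a sketch.
        return new FTPFileStore(client, URI.create(uri.toString() + "/" + name));
    }

    public String getName() {
        String path = uri.getPath();
        return path.substring(path.lastIndexOf('/') + 1);
    }

    public IFileStore getParent() {
        String path = uri.getPath();
        int slash = path.lastIndexOf('/');
        if (slash <= 0)
            return null; // at the root
        return new FTPFileStore(client, uri.resolve(path.substring(0, slash)));
    }

    public InputStream openInputStream(int options, IProgressMonitor monitor)
            throws CoreException {
        try {
            // Caller must close the stream; a real implementation also has
            // to call completePendingCommand() when the transfer finishes.
            return client.retrieveFileStream(uri.getPath());
        } catch (Exception e) {
            throw wrap(e);
        }
    }

    public URI toURI() {
        return uri;
    }
}
```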

The bigger issue is the number of places where plug-ins assume IPath and IResource.getLocation() work. I kind of knew going into it that we’d have some work to do in the CDT to solve this, and that proved to be true. So I tried a General project, and text files worked fine. Then I thought I’d try something more involved: how about an Ant build.xml file? Well, there again, massive NPEs, because the build.xml editor and the AntModel assume IPath works. Not only that, they use java.io.File to get at the file. Ouch.
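
The fix, for code that wants to play nicely with EFS, is to go through the URI-based APIs instead. Here’s a minimal illustration of the broken pattern next to the portable one (the method names are mine, just for the example):

```java
import java.io.InputStream;
import java.net.URI;

import org.eclipse.core.filesystem.EFS;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IPath;

/** The pattern that breaks on EFS-backed projects, and the one that works. */
public class LocationExample {

    // Broken: getLocation() returns null when the resource is not on the
    // local file system, so the java.io.File dodge fails outright.
    static InputStream readBroken(IFile file) throws Exception {
        IPath location = file.getLocation(); // null for remote projects!
        return new java.io.FileInputStream(location.toFile());
    }

    // EFS-friendly: go through the URI and let the file system
    // implementation figure out how to open the stream.
    static InputStream readPortable(IFile file) throws CoreException {
        URI uri = file.getLocationURI();
        return EFS.getStore(uri).openInputStream(EFS.NONE, null);
    }
}
```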

It is pretty clear that little work has been done to try out a real EFS implementation as a backing for IResources, or at least not enough. My feeling is that this remote project idea must transcend the CDT, and we’ve even thought about starting a new Eclipse project to focus on it. There is clearly a lot of work to do, and we need to start organizing if we really want to make EFS remote projects work.

Now that’s Embedded!

I ran across this really cool project this morning. A guy built a little device that recorded light patterns, i.e. bright/dark, and then replayed them on an LED. He used a tiny Atmel microcontroller and programmed it in C using, you guessed it, the CDT with the WinAVR cross-compiler toolchain.

It’s projects like this that make working on the CDT so cool. We have this little device with 1K of flash and 64 bytes (yes, bytes!) of RAM, and people can use the CDT to program it. Then we have the big iron supercomputers that the Parallel Tools Project gang work with, and you can use the CDT to program those as well. And, of course, everything in between: from mobile multimedia devices, to Linux server applications, to the PC desktop, to gaming consoles. But this little project really made my day!

Mozilla Desktop Environment?

Snooping around the Internet news sites, I ran across a discussion on the Mozilla planning list about the potential of a desktop environment built out of Mozilla and their GUI language, XUL (OK, I found it through trusty Slashdot; I’m sure you did too :).

I always thought the idea of building a desktop around a browser was intriguing, and to me this is the promise of mobile tablet-like devices. But using full Mozilla and its plug-in architecture might be overkill; I think the browser function with JavaScript alone would do the trick. Imagine: you fire up your tablet, the operating system boots and the window system comes up (it could be any operating and windowing system), and the only thing that gets launched is the browser. Small and fast. Perfect for mobile.

You’d probably have some dynamic HTML pages locally on the device to do desktop-y things like managing files, writing documents, playing media files, etc. Maybe that does require some help from Mozilla and its set of plug-ins. And add in Internet connectivity and AJAX and you can get a whole suite of applications that can leverage the power of the servers out on the Internet or at your office as well.

Once I get some time, I really want to dig deeper into the RAP (Rich AJAX Platform) project at Eclipse and see how their tools can be used to build such applications. And Mozilla probably is the platform of choice on the device since it is open source and readily available and works on many platforms. This all leads to some really interesting combinations of open source pieces to make something really different and fun for people to use. And Microsoft says there is no innovation in open source…

Ego-less Development

I remember early on in my CDT career running into an old boss and mentor. He asked how things were going and I told him great and that we were starting to build a C++ parser for the CDT. He was skeptical that we could do that and make it work well with the time and resources we had. But I was so excited to be building a C++ parser, I chose to put aside his wisdom. “It’s not that hard.”

Well, looking back now, I probably should have listened more closely to him. For years after, the CDT’s parser performance was abysmal. I really feel for the guys at QNX and others who were building products based on those early CDT releases. And, of course, you can guess what my first task at QNX was when I got there. Fix it!

Luckily I pulled a rabbit out of my, uh, hat. CDT 3.1’s indexer is dramatically faster than our previous attempts. I totally changed the approach we had taken and threw away the requirement of 100% accuracy that we had been preaching, in futility, since we started. Now each file gets parsed only once, and again only after it is saved. A lot of good work has gone into CDT 4.0 as well to add more information to the index so that we can use it for all parser-based features, making them dramatically faster too. It’s not 100% accurate, but for 99% of CDT users, that’s just fine.
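
To sketch the shape of the idea, here is roughly what parse-on-save looks like using the standard workspace listener API. This is an illustration, not the actual CDT indexer code:

```java
import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.IResourceChangeEvent;
import org.eclipse.core.resources.IResourceChangeListener;
import org.eclipse.core.resources.IResourceDelta;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.CoreException;

/** The shape of the parse-once idea: reindex a file only when it is saved. */
public class ReindexOnSave implements IResourceChangeListener {

    public void install() {
        ResourcesPlugin.getWorkspace().addResourceChangeListener(
                this, IResourceChangeEvent.POST_CHANGE);
    }

    public void resourceChanged(IResourceChangeEvent event) {
        if (event.getDelta() == null)
            return;
        try {
            event.getDelta().accept(delta -> {
                if (delta.getResource().getType() == IResource.FILE
                        && (delta.getFlags() & IResourceDelta.CONTENT) != 0) {
                    reindex(delta.getResource()); // only the saved file
                }
                return true; // keep walking the delta tree
            });
        } catch (CoreException e) {
            // log and move on; a stale index beats a blocked UI
        }
    }

    private void reindex(IResource file) {
        // parse just this file and update its index entries
    }
}
```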

But I still remember those years of pain as we fought hard to make the parser faster. Everything we tried resulted in only minor improvements, and every time we analyzed the issue, nothing really jumped out as the cause. Unfortunately, it was probably our egos that kept us from giving up and trying a whole new approach. If we had taken that approach earlier, maybe we could have saved so many in the community the grief of an indexer stealing all their CPU.

“Ego-less development” is a mantra of mine, and we should have followed it. Always question your design, and try to understand the big picture and the impact of your design on the entire system. And if someone points out a flaw, listen, and make sure you really are doing the right thing. I think this is especially true in open source, with so many eyes on your work and so many hands trying to pull it in different directions. It’s a huge challenge, but if you are successful, the rewards are great, for everyone.

OpenKODE at GDC

You know, all my dreams will have come true if I ever find myself at a conference center attending GDC, the Game Developers Conference. Interestingly enough, this year it was held the same week as EclipseCon, and in the same part of the world, so I was close. But it is just a dream, and we can probably save that one for another lifetime. At any rate, I am always interested to see whether any big news comes out of it, especially on the tools/SDK front, that might be of interest to CDT users.

One thing I did notice was that Khronos, the open media API people, gave a number of presentations there and have nicely posted them to their site. The one I was most interested in was on OpenKODE, a core API that allows you to build portable applications. What I found most interesting in that presentation was a “State of the Union” of handheld multimedia applications. It shows why the industry needs to be interested in standard APIs that allow applications for these devices to grow, and I firmly believe that and am glad Khronos is taking this on. They also confirmed what I believe about Java: while it really helps with cross-platform development, if you need performance, as most 3D applications do, you really need to code natively, and that means C.

In theory, with OpenKODE, you can build for one platform, say Windows, and with a simple recompile run your application on another, say a handheld device running some other operating system. It is similar in purpose to SDL, but SDL has a patchwork architecture, whereas OpenKODE seems much cleaner. Acrodea has a sample implementation of it for Windows, and I wouldn’t mind seeing someone do an open source licensed version so that we can pass it along in our upcoming EasyEclipse CDT distribution.

We can debate whether C code that requires a recompile to run on another platform is truly portable. However, if you look closely, to get any performance out of Java you need a JIT, which recompiles your Java anyway. With C, you just do it manually and ahead of time. As we build up a collection of portability libraries, I don’t see a big need to jump on the Java bandwagon.