Monthly Archives: May 2007

Now there’s a screen I can live with

I’ve just been reading a few notes on the web about Palm’s new Foleo “Mobile Companion”. If you haven’t seen it yet, it’s a mini-laptop type thing that’s intended to work with Palm’s Treo smartphone. But under the hood it’s really a mini-laptop with wifi and Bluetooth connectivity, a USB port, and an SD slot for expansion. And it runs Linux with an Opera browser. The price is also pretty reasonable at $600 US, and that’s with a 10″ screen and a full-size keyboard, although there are no function keys.

So, I’m not promoting the product. Yes, I know people from Palm who are contributing to Eclipse and the CDT, which I’m sure they’re using in conjunction with this product. But I think it could be the start of a trend. Everyone loves smartphones and getting their mail on BlackBerrys and such, but the size of the screen and keyboard on these devices really limits their usefulness beyond their mobility. People still need laptops to do their real work.

But I think there’s room in the mobility market for devices like this one. The embedded systems-on-a-chip are there now to do it. And I think you’ll even see games on these things with the 3D capabilities of these chips. With solid-state memory like SD cards getting bigger and cheaper, these could be really useful little machines. Palm was first, and it fits their niche, but I wonder if anyone else will take the plunge and make a more generally useful “mini-laptop.”

UML Action Semantics, Naturally Parallel

Earlier in my career, I had the honor of reviewing a part of the UML spec, which involved a very surreal phone meeting with one of my heroes in this industry, Jim Rumbaugh. The area was the UML’s Action Semantics. At the time it was a separate spec, but it is now intertwined in the Superstructure document as UML’s Action behavior.

The idea, according to Jim, was to provide a sort of assembly language that all software behavior could map to. But I thought it provided a more powerful concept: that of the Action itself. An Action is a unit of behavior that has inputs, does some processing, and produces outputs. The outputs of one action feed into the inputs of other actions. The “Ah-ha” is that all actions that have their inputs satisfied can theoretically run in parallel.

This concept isn’t new. Hardware designers have been thinking this way forever, and I believe Petri nets present a similar idea in mathematical terms. But what struck me was that this was a really powerful paradigm that could make it easier for programmers to write highly parallel programs. What was needed, though, was a good 2-dimensional programming language that allowed programmers to create actions and hook up the inputs and outputs quickly and, of course, with minimal typing. But something like that really wasn’t an objective for UML.
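The paradigm can be sketched in a few lines of C++. This is a minimal sketch of a dataflow-style scheduler of my own devising; the names (Action, Scheduler) are not from the UML spec, and a real runtime would dispatch ready actions across cores rather than run them one after another as this one does:

```cpp
#include <functional>
#include <queue>
#include <vector>

// An action fires once all of its inputs have been satisfied.
struct Action {
    int pending;                  // inputs still unsatisfied
    std::function<void()> run;    // body to execute when ready
    std::vector<int> successors;  // actions fed by our output
};

struct Scheduler {
    std::vector<Action> actions;

    void execute() {
        std::queue<int> ready;
        // Seed the ready queue with actions that need no inputs.
        for (int i = 0; i < (int)actions.size(); ++i)
            if (actions[i].pending == 0)
                ready.push(i);
        // Everything in the queue at any moment could, in principle,
        // run in parallel; here we drain it sequentially for clarity.
        while (!ready.empty()) {
            int id = ready.front();
            ready.pop();
            actions[id].run();
            // Our output satisfies one input of each successor.
            for (int s : actions[id].successors)
                if (--actions[s].pending == 0)
                    ready.push(s);
        }
    }
};
```

In a graph where two independent actions feed a third, both are ready from the start and are exactly the kind of work a parallel runtime could hand to separate cores; the third fires only after both of its inputs arrive.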

It’s probably one of the reasons I’m keenly watching Eclipse’s Modeling project. Aside from being a great framework for creating domain-specific languages, it has the capabilities that would be needed to build this “Action” language. And with a good back end that produced code for today’s multi-core clusters, I really think this could be a good way to help programmers meet Intel’s challenge that “Software has to double the amount of parallelism that it can support every two years” to catch up to what the hardware guys are doing.

How Different Are Linux Distros Anyway?

In the years since Linux became popular enough to warrant commercial software vendors porting their wares to it, one thing I’ve seen vendors struggle with is the massive number of different Linux flavours out there. Back when it was just Windows and Unix (commercial *nixes, if you will), life was so much easier. The operating system vendors ensured that releases were well defined, so we could easily put together a reasonable list of supported versions for our products.

With Linux, it really is next to impossible to do that. Novell and Red Hat do fill that role as commercial Linux vendors, providing a stamp of approval over their versions of all the packages that go into a Linux distribution. But, really, none of the developers I know who are using Linux are using any of those commercial Linuxes. They’re using Fedora, openSUSE, and more lately Ubuntu. It really is impossible to validate your products against all the possible combinations of Linux that your customers may want to use.

But then I ask the question: so what? How different are these distributions, anyway, that it’s so hard to support Linux? Yes, you may have version differences in the packages, and things like a change in the major version of GTK can break GUI applications like Eclipse. It’s also pretty confusing how many different ways there are to set up a user’s environment variables, but then applications shouldn’t be relying on those anyway. I really wonder if there’s much else that can affect most software products.

It bugs me every time someone tries to explain away a bug with “sorry, that version of Linux isn’t a reference platform, so we can’t look at your problem”, especially when the person is using a recent distro like Ubuntu. But it really does speak to the challenges that software vendors face with the fragmentation of the Linux market. I guess it’s part of the price we pay for “freedom”.

cdt-dev is my office

Today is RC2 day for CDT 4. As we get closer to milestone build dates, I send out friendly reminders to the cdt-dev mailing list on what bugs are still open against that milestone. It’s just a prod to get the developers to do something about them so that we have no open bugs on a milestone when we do the build. It’s worked every time: we get the friendly but odd “Zarro Boogs found” message from my Bugzilla query when we’re ready.

RC2 was no different. This morning we had two left. Ken from Austin, Texas gave an update on his, asking for feedback. I, from Ottawa, Canada, gave some feedback telling him to go ahead and fix it. Bala from London, England mentioned he had a patch ready for his, and Mikhail from Russia replied saying he was looking at it. I’m confident we’ll be ready in a couple of hours to fire off the build and get the RC out by the end of the day.

This happens regularly on the CDT, and once in a while I stand back and think about what just happened (and I think I’ve probably blogged about this before too). We have a very effective development team working on the CDT, and the cdt-dev mailing list is the backbone of that collaboration. A lot of groups use different technologies such as instant messenger or IRC channels, but for us the cdt-dev mailing list works great. Bugzilla comes in a close second. But then, we treat bugzillas as mini mailing lists anyway.

I think the biggest benefit of the cdt-dev list is that it’s open to anyone. If you want to see what’s happening with the CDT at a high level, that’s the place to go. If you want more detail, then you’ve really got to watch the bug reports, and signing up to receive notifications on the cdt-*-inbox accounts is the best way to catch the train.

From my experience on the CDT, the most important tool you have to build a community is open communication: mailing lists, forums, IRC. As your community grows, the only way to really talk to everyone is via open communication, so it forces you down that path and you end up doing it anyway. But in the early days it was a hard habit to get into, especially when QNX was working on the CDT by itself, or even when I was at Rational and we started working with the QNX gang, who were only a five-minute drive from the Rational office. But open communication has really paid off in the end for the CDT, and the reach of our cdt-dev mailing list impresses me time and time again.

Open Source Ripped?

Bjorn made me read this article by Howard Anderson. Well, he didn’t make me, but it is a topic I’m very interested in as well: How does open source make sense in a commercial world?

It’s actually a very interesting article. When I finished it, I had to remind myself of the title. In the end, I wasn’t sure if he was for or against open source. His general thesis seems to be that open source is a tool used by small companies to gain market share against big companies. Yes, he’s right. I’ve seen that. There are a lot of smaller companies shipping a world-class IDE with their products, making them more attractive. They leverage open source (i.e. Eclipse) to lower costs, since building a world-class IDE is prohibitively expensive for most. I think it’s a great business model. I guess he was just looking at it from the big proprietary company’s side.

There are a couple of areas where I have to disagree with Howard, though. He mentions open source is a “religion”. Well, in some circles, I guess open source participants do see it that way. Certainly from the outside it looks like Richard Stallman is playing the part of a religious leader, with the FSF as his church.

Howard also seems to believe that the people writing open source are doing so at night when they come home from their real jobs of working on proprietary software. But that’s not what I do as an open source developer. Open source is my day job. The company I work for is one of those companies that is reaping the benefits of the open source business model, and is willing to invest in open source to help build a community where we can share the work with each other. And there are lots of developers like me from many companies. Open source is not a religion to us, but a business means to a business end.

So while it’ll probably be impossible to shake the stigma of the open source “religion” from what we do, open source in the spirit of “co-opetition” (co-operating competitors) is a vital tool available to the commercial world. Some communities are set up for this to work well, like Eclipse, while others, not so much (and I won’t name them unless it’s over a beer :). But the ones that are seem to be the ones that the big proprietary companies fear the most. Which means we must be on to something…

Bye-bye 32-bit Windows

I just read that Microsoft was putting an end to 32-bit support in their operating systems. I guess that shouldn’t really be a surprise. It is a real struggle for device driver writers to support both, and I think 64-bit Windows has really been hurt by that. The same is probably true for Linux: we don’t see much demand for the 64-bit Linux version of the CDT either, at 2.5% of downloads versus 20% for 32-bit Linux.

Maybe this will finally trigger people to focus more on 64-bit and start writing programs with that in mind. The biggest change for C/C++ programmers is that the size of pointers changes. I’ve seen a lot of code that assumes you can merrily cast a pointer to an int and back and everything is happy. One example of this is Java native code, where we like to stow away pointers in Java fields for later calls back into native-land. Well, with 64-bit, while the size of pointers changes, the size of int does not. There are also different interpretations of sizeof(long): on some platforms it’s 64 bits, but on others it stays at 32. Then there’s long long (gcc) and __int64 (MSVC), which in the 32-bit world also mean 64 bits.
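Here’s roughly what the safe version of pointer stowing looks like in C++. Widget, stow, and unstow are hypothetical names for illustration; the point is that intptr_t (or a 64-bit jlong field on the JNI side) is guaranteed wide enough for a pointer, while int is not:

```cpp
#include <cstdint>

// On LP64 systems a pointer is 8 bytes while int stays at 4, so casting
// a pointer to int and back silently throws away the high bits.
// std::intptr_t is defined to be wide enough for any object pointer; in
// JNI the equivalent trick is to stow the pointer in a jlong, never a jint.
struct Widget { int value; };

std::intptr_t stow(Widget* w) {
    return reinterpret_cast<std::intptr_t>(w);  // safe: no truncation
}

Widget* unstow(std::intptr_t handle) {
    return reinterpret_cast<Widget*>(handle);   // round-trips exactly
}
```

The bad version, `(int)ptr` followed by `(Widget*)some_int`, compiles cleanly on 32-bit and only corrupts addresses once you cross into 64-bit territory, which is exactly why the habit is so hard to break.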

Suffice it to say that the 64-bit world gets a little messy. We’ll think back on the simple days of 32-bit with a sigh. But then, I think things will still be better than the now ancient 16-bit world (now who’s old enough to remember that?). They say this will start after 2008, but given the length of time it took to get Vista out, people with 32-bit-only machines shouldn’t worry too much. Those machines will be ready for the dumpster by then anyway. And do we really need another operating system version beyond Vista? Microsoft hopes so, and I’m sure they’ll use the 64-bit push as a marketing ploy to help you think so too.

Eclipse Wants You!

A few days ago I posted a blog entry worrying about the openness of the Eclipse Platform team. I thought it would generate a flood of comments saying I was wrong. And to a certain extent, I probably am wrong. But I think it’s a serious issue that people need to think about.

I guess the point I was really trying to make is that we as the Eclipse community outside of the Platform have depended too much on IBM/OTI’s great contributions, to the point where we expect them to fix all of our problems. My experience with open source projects is that it just doesn’t work that way.

Open source developers usually work on open source projects for a reason. They are trying to get something done for themselves, and really, as a side effect, they hope that others will find it useful as well and maybe come help out. Because open source software is free, I think people start to think it’s more like a charity, but it isn’t. And I think this is an even bigger factor with Eclipse, since the vast majority of developers are employed to work on Eclipse projects. They respond to the community as much as they can, but at the end of the day, if their employer asks them to work on something else, that has to take priority.

So if you have a bug that isn’t getting the attention you think it deserves, please think of the people at the other end. There’s a good chance it’s not that they think your problem isn’t important, but that they have probably been assigned work elsewhere and really just don’t have the time. Do as much legwork as you can. Create a really good bug report with a patch and a really good justification that shows you’ve thought about the fix as much as the committer would have. Make it as easy for the committer to fix your problem as you can.

And if you find you really depend on certain functionality that isn’t being provided or bugs that you really need fixed, and you do enough great patches, you can become a committer too. The more committers we get from different employers, the better off we’ll all be. That kind of redundancy is important in open source and is something we’ve really learned to appreciate on the CDT project.

I love bugs from Sony

I have no idea what this group is doing with the CDT. They’ve been on the outskirts of our community for quite a while, and this bug was just marked VERIFIED to show they are still around. In fact, one of my first experiences as a CDT “dude” was at the first EclipseCon, where someone from the Sony Playstation group stopped Sebastien, the first CDT project lead, and me, and asked us for information about CDT extensibility. We thought it was pretty cool then, and it’s still cool now.

Forget OO, C++ is a better C

I was working on bug 176353, trying to get interrupt signals sent to cygwin-based gdbs. Part of the magic for that in the CDT is something called the Spawner. Spawner subclasses Java’s Process and adds the ability to send Unix-style signals to processes. It also implements some fancy I/O, but that isn’t supported on Windows. This class is a little Java and a lot of JNI native code that interacts with the operating system to implement the signals, as well as the other Process-type things like starting up the process and waiting for it to complete.

So, I guess this code was originally written with Visual Studio 6 many moons ago. However, continuing my theme of using MinGW for Windows development, I’ve created makefiles to build the spawner DLL. What made this situation a little weird was that the original creators of the spawner didn’t really know C++, so they did it in C. I always find it weird doing C in VS, but that’s what these guys did. So when I wrote the makefile, I used make’s default of running gcc on these files. Makes sense.

However, I was having trouble when I added calls to a couple of Windows routines. I was getting undefined references at link time to the two calls I added. Weird; I didn’t get any compile errors. When that happens, it usually means I forgot to add the library. So I added it and still got the errors. What’s going on here? Is something broken in the MinGW port of these libraries? Was my code just getting tired and cranky?

Well, for some reason, maybe because I was getting tired and cranky, I wondered if it was because I was using C instead of C++. Part of my debugging technique is to start by assuming the least likely cause and testing it out. This was really damn unlikely, but I tried it anyway. I changed the compiler to g++ and ran it over the .c files. I thought it would treat them as C files and nothing would really be different.

But to my surprise, g++ compiled the .c files as C++. And I got a ton of errors. Almost all of them were for passing the wrong types to functions, especially since I was using UNICODE and not everything was really using wchar_t. Well, no wonder things weren’t working. Of course, among the errors were the two functions I was calling: I had forgotten to include the header that declared them, which specified the correct calling convention, WINAPI, and that’s why I was getting the link errors in C. C was happy to play along without seeing the declarations of the functions and made some bad assumptions. I wasted a good couple of hours trying to figure out what was wrong.
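The class of bug is easy to demonstrate. Here’s a minimal sketch in C++ of the kind of mismatch g++ caught for me; label_length is a hypothetical stand-in for a UNICODE Win32 routine, not actual spawner code:

```cpp
#include <cstddef>
#include <cwchar>

// With UNICODE defined, Win32 APIs take wchar_t* strings. Old-style C
// will happily compile a call to an undeclared function, assuming int
// arguments and an int return, so a char*/wchar_t* mix-up (or a missing
// WINAPI calling convention) only surfaces at link or run time. C++
// refuses to call anything it hasn't seen declared and checks every
// argument against the declaration.
std::size_t label_length(const wchar_t* label) {
    return std::wcslen(label);  // counts wide characters, not bytes
}

// In C++ the following would be rejected at compile time, which is
// exactly the early diagnosis I wanted:
//   label_length("narrow string");  // error: cannot convert char* to wchar_t*
```

In C, the same narrow-string call could sail through the compiler and corrupt data at run time, since the function would be reading two narrow characters per wide slot.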

So, forget object orientation, templates, namespaces, operator overloading, and all the other cool features: at its core, C++ is just a much better C. It has proper type checking that helps you find those errors before link time, or worse, run time. It all feeds into helping you build better software faster, which is what this whole tools industry is all about. And if you’ve programmed in C++ for years and have to go back to C, don’t forget to make that paradigm shift back to the 80’s…

A lesson in release management

One of the first things I remember learning about managing projects came well before I ever considered doing it myself. It came from the lore at the big telecom company I worked at. They had an old telecom switch that was doing quite well sales-wise, but they had started working on their latest and greatest architecture that would pave the way to the future (which it did, in the end). However, they were so excited that they started announcing it to their customers well before the release date.

Well, guess what happened. The customers got excited too. They didn’t want to buy the old switches anymore; they wanted the cool new one. Unfortunately, the dates ended up getting delayed, and that spelled trouble, since sales of the old switches were drying up. Lesson learned, though, and you’ll notice a lot of companies holding back release information for that very reason.

Well, I think the same thing is happening to the CDT. For the first three months of this year, we’ve been hovering around the 65,000 downloads mark. It’s not our biggest. That happened last October and November when we hit 85,000. But it was steady.

Well, I just did the numbers for April and found them at a disappointing 55,000. Maybe it’s just a glitch. Maybe people are happy with getting the CDT from other places, like Linux distributions.

But it makes me wonder if this is a side effect of CDT 4. We’ve been making a lot of noise about it, and we’re finding that a lot of people are using the CDT 4 milestone builds, especially starting at M6, which just happened to be at the beginning of April. I haven’t been counting the milestone builds in our figures.

We’ll see how May’s numbers are, but it would be interesting if we’re seeing the pre-announcement effect in open source projects too. And I guess, why not?