Monthly Archives: May 2008

A Pragmatic E4

So, yes, I was at the e4 summit last week. I’ve been meaning to blog about it, but it’s taken some time for me to work out what it was all about. I think I’m finally able to put it into words.

For those who don’t know the history, e4 kinda tripped into existence as a side effect of the creation of an Eclipse incubator project to allow people working on it to check in prototypes and stuff. It was pretty innocent but it did scare a lot of people with the appearance that a new Eclipse platform was being developed without guidance from the community.

Of course, the dust has settled and fears subsided and IBM hosted the e4 summit last week to give people the opportunity to offer their guidance and, more importantly, to offer their help. It was a good, yet standard summit in my view. Lots of good ideas, but few actionable items, especially beyond what has already been actioned.

And to be honest, that’s the way it probably should be. If anyone thinks that we can write a whole new platform and discard backwards compatibility, they’re kidding themselves. I think we’d all be fired if we came to our product teams with a plan like that. So I’m not worried in the least about that.

I think McQ has the right strategy and he tried hard to get the point across. You can rewrite the world with the best API and architecture, and write a facade over top to let old plug-ins continue to work with as little change as possible. You can have your cake and eat it too. And, yes, it’s a lot more work. But as I said, we’d be fired if we didn’t do it. That’s what will constrain the community from going hog wild on e4. And that’s a good thing.

So if e4 isn’t a great new platform, what is it? Lots of people are wondering that, and I’m sure we all have different answers. To me, what e4 is, is the opening up of the platform to new contributors. It’s a change in mindset for the platform team, who have been so maniacally focused on controlling change (justifiably so, in my view) that they have scared off or rejected many a contribution. e4 gives them a chance to loosen up and be more accepting. And it is really up to the rest of us to take advantage and get in there and make the tactical improvements we need while this door is open. It would be our own fault to miss this great opportunity.

Phoenix has landed! A woo-hoo moment

Reaching new heights in geekness, I watched the landing of the Phoenix spacecraft on Mars last night, live over the web via NASA TV. I don’t know, I find there’s lots of drama in space missions. It’s an incredible task. One of the mission managers compared it to hitting a hole in one in Australia from a tee in Ottawa (ok, he said Washington, but Ottawa is about the same distance :). Another manager closer to the action added, “with Australia moving”. The good news is that they pinned it, relatively speaking, missing by only 20 km, rimming it around the hole before dropping it in, if you will.

The highlight for me was watching the jubilation as the guy called out that the spacecraft had reported a touchdown detected event. The gratification of years of work wrapped up in a single (probably) 2 byte event report is well deserved. That little report required so much technology to be working, it’s mind boggling.

That feeling of jubilation is what I call a woo-hoo moment. Mind you, nothing I’ve done compares to the moment these guys had, but I think it’s an important aspect of all software development. It’s these little moments that help you realize that all the hard work you’ve put into the project actually works, and you get to do a little celebration (for me it’s usually throwing my hands into the air and yelling “yes” :). It helps get the adrenalin going and really gives you the energy to start working towards the next one.

In my career I’ve had a number of these moments, and I always try to schedule them into the projects I’m working on. And these moments I don’t soon forget: the first run of an externally code-generated state machine from ObjecTime Developer happened a long time ago and I still remember when it happened. My work on the CDT has had a few too: the first outline view from the CDT’s first parser, the first content assist (which was a surprise, since Niefer had just done a couple of tweaks to the binding resolution code and it just worked), and the first complete index of the Firefox source using the new Fast indexer, which beat my set goal of 20 minutes (it was around 13 minutes the last I looked). And more recently, there’s the first install of a Wind River product based on p2 (the DVD is hanging on my wall :).

I’m sure we all have moments like this throughout our lives. For software development, this is why I think iterative development is the only way to go. Not only does it give you a chance to show your customers progress and get their feedback, it lets you schedule in gratuitous woo-hoo moments.

Just call me a p2 fanboy

There’s been a lot of bashing of p2 lately in blogs on Planet Eclipse. It seems to revolve around the lack of support for extension locations. I’ve never used extension locations, and I hate the idea that, for whatever reason, you would need to manage installs yourself by hacking around in the file system.

p2, IMHO, is awesome. It manages installs as well as any install management system I’ve seen. It tracks versions, and it manages dependencies with capabilities, which is absolutely the right way to do it. It allows you to install things other than Eclipse plug-ins, thanks to the extensibility provided by touchpoints and repositories. When we’re done, you’ll be able to do everything your favorite install manager can do and more. From where I sit, p2 will change the install industry.
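For the curious, here’s roughly what “dependencies with capabilities” means. This is a toy sketch of my own in plain Java, not p2’s actual metadata API (the class names and the "java.package" namespace string are invented for illustration): a unit never requires another unit by name; it requires a capability, and any unit that provides a matching capability can satisfy it.

```java
import java.util.Collection;
import java.util.List;

// Toy model of capability-based dependency resolution, in the spirit of p2.
// All names here are illustrative, not p2's real API.
public class Capabilities {

    // A capability is a (namespace, name, version) triple that an installed unit provides.
    static class Capability {
        final String namespace, name;
        final int version;
        Capability(String namespace, String name, int version) {
            this.namespace = namespace; this.name = name; this.version = version;
        }
    }

    // A requirement names no unit directly; it matches any capability in the
    // same namespace with the same name and a version inside [min, max].
    static class Requirement {
        final String namespace, name;
        final int min, max;
        Requirement(String namespace, String name, int min, int max) {
            this.namespace = namespace; this.name = name; this.min = min; this.max = max;
        }
        boolean isSatisfiedBy(Capability c) {
            return namespace.equals(c.namespace) && name.equals(c.name)
                && c.version >= min && c.version <= max;
        }
    }

    // A requirement resolves if any provided capability satisfies it.
    static boolean resolves(Requirement req, Collection<Capability> provided) {
        return provided.stream().anyMatch(req::isSatisfiedBy);
    }

    public static void main(String[] args) {
        List<Capability> installed = List.of(
            new Capability("java.package", "org.example.util", 3));
        System.out.println(resolves(
            new Requirement("java.package", "org.example.util", 2, 4), installed)); // true
        System.out.println(resolves(
            new Requirement("java.package", "org.example.net", 1, 9), installed));  // false
    }
}
```

The payoff of the indirection is that providers are interchangeable: anything that exports a matching version of the capability satisfies the requirement, which is exactly why it beats requiring concrete artifacts by name.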

So yeah, extension locations aren’t supported any more. And I’m probably not the best person to speak on whether losing them matters. But someone needs to stand up for p2, because it is much needed. And I’m sure you can live without extension locations. I think the worst mistake was providing them to begin with.

CDT 5.0 looks good, now looking ahead

The CDT gang has put together a list of new features that are coming out with CDT 5.0 in a few weeks. Check it out here. There has been a lot of work further improving the indexer and we have a new refactoring framework with a few refactorings available. And there has been a little work on the build and debug side as well.

It was a good sign that this release had no major architectural changes and we got to focus on quality. There is a new scanner/preprocessor for the CDT’s parsers but, trust me, it was much needed, and Markus S did such a great job that we hardly noticed the change. Compared to the new indexer framework in 3.1 and the new build framework in 4.0, things went much more smoothly this time.

As I start to work on Wascana 1.0 based on this great work, I still notice a couple of areas that we need to work on for next year’s 5.1. First of all is tighter integration of the Debug Services Framework (DSF) being built by the Device Debugging project. This is a pretty cool framework that is highly asynchronous and extensible. I am working on integrating MinGW’s gdb with it as an exemplary integration, both to help me learn DSF and to show others how to use it.
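To give a flavour of what “highly asynchronous” means, here is a toy sketch in the DSF style, written against plain java.util.concurrent rather than DSF’s real API (the DataRequest interface and the getThreadCount service are invented for illustration): all service work runs on a single session executor, and results come back through callbacks instead of blocking calls.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of a DSF-style asynchronous service. Names are invented for
// illustration; DSF's real API (RequestMonitor, DsfExecutor) differs in detail.
public class AsyncSketch {

    // All service state is touched only from this one executor thread,
    // so requests are serialized and services need no locking.
    static final ExecutorService sessionExecutor =
        Executors.newSingleThreadExecutor();

    // A request completes later by invoking a callback with the result.
    interface DataRequest<T> {
        void done(T result);
    }

    // A hypothetical debug service: answers "how many threads does the
    // target have?" asynchronously instead of blocking the caller.
    static void getThreadCount(DataRequest<Integer> request) {
        sessionExecutor.execute(() -> {
            int count = 4; // stand-in for a real query to the debugger back end
            request.done(count);
        });
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> answer = new CompletableFuture<>();
        getThreadCount(answer::complete); // returns immediately, no blocking
        System.out.println(answer.get()); // prints 4 once the request completes
        sessionExecutor.shutdown();
    }
}
```

The point of the callback style is that a slow debugger back end never blocks the caller, and because everything funnels through the one executor thread, the services stay free of locks. That is the trade-off a framework like DSF makes in exchange for the extra ceremony.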

But to make this integration seamless, we really need to do something about launch configurations. Right now DSF provides its own set, meaning that if you have the CDT’s current debug framework and DSF installed at the same time, you get two sets. That’s going to be terribly confusing. And, from what I hear, every vendor that integrates its own debuggers with the CDT adds in its own set. I’d like to see if we can get a common launch framework in place to help solve this, assuming there’s support from the community for it.

The other big issue we’ve got to address is the CDT build system. We’ve tried to support two modes of build: using external build systems, and using a Visual Studio-like internal build. External is easy. But, unfortunately, my feeling is that we’ve made things too complicated on the “managed” build side. There has been some great work done up to now by the committers, but we need to make sure we’re meeting the needs of the community, and either address those needs directly or provide the extensibility that lets people do what they need while still presenting a common user experience.

My real objective is to provide a common user experience for all CDT users whether they’re using a commercial product or the standard open one. That means unifying the workflows for everyone. Maybe then it’ll make financial sense for someone or a group of someones to write a CDT book to serve all of the CDT community.

It’s all good, but not enough D’s

Well, I think I’ve finished my journey into Flash-land. It was an interesting experience. I’m glad I took a deeper look at it and gained an understanding of what the fuss was all about. The deeper look also left me wondering how free the Flash run-time components for devices will really be, and how they’ll actually deliver bits that will run on your favorite device without really opening the source, or at least making it public. But the technology is very interesting and I’d love to see animated UIs on devices become more commonplace.

But at the end of it, it left me wanting. The big new thing we’re seeing on the desktop is 3D-enabled UIs. 2D is good, but I think there are some really exciting things you can do when you add another D to the mix. You can see some of the potential in the UIs presented by console video games. I’ll never forget my first eye-opener with the old Rogue Squadron Star Wars game on our first video console, the Nintendo 64: entering your name with the 3D spinning wheel of letters. It was fun, and since you didn’t have a keyboard it made the arduous task a little more pleasant.

What I think is missing, though, is a commonly available widget set that makes it easy to program 3D UIs. There may be some out there, but I’d like to see them become more mainstream. And as more and more devices come with 3D hardware-accelerated graphics circuitry embedded in their processors, and with the OpenGL ES 3D graphics API for devices becoming more ubiquitous, I’m hoping the industry can take a more serious look at this. Devices are becoming less resource constrained as we go, but they’ll always be UI constrained. Maybe 3D can help.

Flash on the Brain

I hate it when that happens. A shiny object flies by and I can’t sleep until I catch it. So I was up until 3 a.m. last night trying to figure out what Adobe AIR/Flex/Flash/ActionScript was all about. It’s actually pretty interesting stuff technically. But as a number of people commented when I brought it up a couple of days ago, you do get the sense of vendor lock-in, at least for now. But the specs are all open now, so open source implementations of this stuff at least have a fighting chance.

So why do I care about Flash (other than my insatiable need to learn as much as I can about the software industry)? Well, it fits in with my interest in mobile devices, especially those based on embedded Linux. As these devices get more powerful and have bigger screens, the line between laptop and mobile device is going to blur. And I think the expectations users have for the UI on these devices are going to grow as well. Everyone oohs and aahs over the iPhone UI. It’s setting the bar.

But looking at a traditional embedded Linux box with a UI, does it make sense to run X Windows on it? X is horrible and antiquated. And it’s very hard to build flashy (sorry about the pun) UIs with it. It certainly wasn’t intended for resource-constrained devices. Mind you, the old X terminals were pretty much embedded devices, but then where are they now…

So what are the alternatives? DirectFB looks very promising and is growing in popularity in the embedded world. It gives you a nice API over the graphics hardware and input devices that lets you build your UI at as low a level as you need. But it does require you to build a UI from scratch.

So this is the architecture that piqued my interest: Adobe AIR (which includes Flash and the WebKit browser engine) running on DirectFB. Which then opens up other interesting architectures, like mobile devices turning into web appliances that let you work connected or disconnected (is there a Flash office suite app?). And with Flash’s animation, video, and audio capabilities, you could build a pretty lively UI. And, from what I hear, there are a lot of graphic artists who have learned Flash who could give us a hand.

Now, I have no ties to Adobe, and this only crossed my mind as they “opened” up the technology with the Open Screen project. But if this move helps them build momentum in the mobile space, it opens up a lot of opportunities for mobile software developers, and graphic artists for that matter…

Happy Day in Linux-land

Thanks to everyone for their great comments on yesterday’s entry on the frustrations I had getting ClearCase running on my new Linux machine. It gave me renewed hope that I could get this to work.

And I did. After reinstalling Fedora 8 as the host OS, I started getting a KVM virtual machine ready. I figured I’d try the new Virtual Machine Manager GUI to do it just like I do with VirtualBox. But when they say it’s not ready for prime time, believe them. They’re on the right track but to do anything serious, I think you still need to use the command line.

Another hurdle I ran into was running the 32-bit version of RHEL. I guess I should have looked harder at the KVM web page that said SMP was unstable in this configuration. It was. So jumping to the 64-bit version, I was good to go, 4 CPUs and a bridged network connection and all! I installed ClearCase and I’m in business. Now it’s time to get some real work done. It was fun and I did learn a lot and gained an appreciation for virtualization, so it was well worth it.

Frustrating Day in Linux-land

So I’m busy working with my team at Wind on some new installer work, and I need to set up ClearCase to get access to the bits that go into the install. I have this spanking new machine: quad-core Intel, 4GB RAM, 750GB drive. I really got it so I can run multiple virtual machines on it for testing. But if I could run ClearCase on it too, then I could use it for install builds as well.

My issue is that ClearCase is only supported on certain enterprise versions of Linux, but I wanted to try the latest KVM support in Ubuntu. So I first installed 64-bit Ubuntu and gave it a try. Ubuntu’s 32-bit support in 64-bit installs is horrible: you have to manually install the 32-bit libraries. That probably should be automatic, but then they are trying to fit on a single CD, so maybe it’s too much. Unfortunately, even with the 32-bit libraries, the perl engine ClearCase ships with crashes. So forget that.

So next up, I tried Fedora 8. It’s much closer to the supported Red Hat Enterprise and might have a better chance. And besides, there are some good Eclipse guys at Red Hat and I should be supporting them. The 32-bit libraries were automatically installed (but then it is a 3+ GB DVD). So I got a lot farther. After tricking the ClearCase install scripts into accepting Fedora as a “supported” kernel, I got as far as building the MVFS kernel module.

The module build failed with compile errors, and as I tried to fix them I started to feel like I was porting their module for them. And it was a lot of work. We were only going from version 2.6.18 of the Linux kernel to 2.6.24, but given how many APIs had changed, it felt like I was going to 3.0 or something. At any rate, it didn’t feel like something I should be investing my time in, so I gave up on that.

So I tried the supported RHEL 5. You know what? After installing it and rebooting: no network. RHEL 5 didn’t have a driver for the ethernet on the new machine. For crying out loud (again…). Unfortunately, it’s back to Windows for me, at least for now. Hopefully I can tweak ClearCase to make it fast enough to be usable.

MinGW gcc 4.3 lives!

This just in: Aaron LaFramboise has just released an alpha version of gcc 4.3 for MinGW. And, of course, they are looking for testers. I know I will be. You can give it a try by downloading it from mingw.org. I’ve been following the mingw-users mailing list and it’s been a great place to discuss issues. It’s not too busy, but it’s been busy enough to be useful.

gcc 4.3, in combination with the new gdb 6.8, really brings the MinGW port for native Windows up to snuff with the GNU toolchain enjoyed by Linux developers. And I think it has a chance to give Visual C++ a run for its money. Time will tell, of course, and I am wearing my open-source-colored glasses. But as with the CDT for Windows development, all we’re trying to be is a respected alternative and a valid path for multi-platform development.

Speaking of which, it’s getting to be time to start working on Wascana 1.0. It’ll be based on Eclipse Ganymede with the latest tools from MinGW, as well as a handful of libraries to help build platform-independent apps. And it will use the Eclipse p2 provisioning framework, so you can install and update the tools and libraries using the same UI you use for plug-ins. With 7000 downloads of the last Wascana prerelease, it’s worth the extra time I have to put in to make it happen.

Open Screen, Another Game Changer?

I just went through some blogs and the Adobe Open Screen web site to try to understand what’s going on. If you haven’t heard, Adobe is removing licensing restrictions on its SWF and FLV/F4V file formats, the ones that serve us Flash content and all those crazy videos on YouTube and such. In the past, the license on the specs restricted the reader from creating competing players, which resulted in some pretty weak open source players that relied on the developers reverse engineering and guessing at what the spec says.

Opening the specs makes that a non-issue. But the other announcement, that Adobe is going to make its player free for embedded devices as it does on the desktop, should really remove the need for other players (which appears to be the true objective of this project), except for the open source bigots who must have their apps served open sauce, I guess. Bringing a free Flash player to devices is huge in my books, and with their porting-layer APIs made public, it should be really easy for device developers to port the player to their devices. I think that’s pretty game changing, and you’ll start seeing more Flash-based user interfaces on devices over time.

So it seems like pretty exciting news and it’ll be interesting to see where it goes. But I do hate the fact that they’re using the term “Open”. This is one of my dogmas, as colleagues I’ve worked with in the past are painfully aware ;). “Open” is too tied to the word “source”. And especially when the project is called “Open Screen”, it’s too easy to jump to the conclusion that they are actually open sourcing their player technology. But from what I can understand from the brief FAQs they have on their site, I don’t think they are. Which raises the question: how do you get their player running on your device? Do they have pre-compiled binaries? Which libc? Which OSes? Which compiler? At any rate, it has left me confused and I’m sure others are too. I wish people wouldn’t use the word “Open” unless they really mean open source.