Monthly Archives: August 2008

Where’s Wascana 1.0?

For those who haven’t heard of Wascana, it’s a lake in the center of my birthplace, Regina, Saskatchewan, Canada. It’s a beautiful oasis in the middle of the bald Canadian prairies and my last trip there inspired me to name my Eclipse CDT distribution for Windows desktop programming after it.

Around this time last year I realized that school was about to start and I rushed out the first release, Wascana 0.9.3. To date it has had almost 9,000 downloads, showing me that there is interest and a need out there for such a package.

My plan was to get Wascana 1.0 ready for this school year. But my summer has been very busy and I haven’t had a chance to work on it. But hear me now and believe me later (I’m sure that was in an old Saturday Night Live sketch somewhere): it is still on my roadmap. For one thing, I really want to make it a showcase for the Eclipse p2 provisioning system, showing how you can build a feature-rich install and update environment for your whole IDE, not just the Eclipse bits.

Aside from that, I want to add the Boost C++ libraries to the package. Boost is a very full-featured C++ template library that gives you a lot of the library functionality that makes Java so good, and it’s often a showcase for new techniques that end up in the C++ standard anyway.
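For anyone who hasn’t tried Boost, here’s a tiny sketch of the flavor it adds to plain C++98, using two of the popular pieces, shared_ptr and lexical_cast. This assumes a Boost installation on your include path, and the names in main are mine, purely illustrative:

    #include <boost/shared_ptr.hpp>
    #include <boost/lexical_cast.hpp>
    #include <iostream>
    #include <string>

    int main() {
        // Reference-counted smart pointer: Java-style "never call delete".
        boost::shared_ptr<std::string> msg(new std::string("Wascana"));
        boost::shared_ptr<std::string> alias = msg; // shared ownership

        // lexical_cast: stream-based conversions, no sprintf/atoi juggling.
        int build = boost::lexical_cast<int>("103");
        std::string label = boost::lexical_cast<std::string>(build + 1);

        std::cout << *alias << " build " << label << std::endl;
        return 0;
    }   // both pointers go out of scope here; the string is freed exactly once

And shared_ptr has already been picked up for TR1, which is exactly the proving-ground effect I mean.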

I’m also waiting for an official release of gcc 4.3.1 for MinGW, to give us the latest and greatest compiler technology from GNU, with super good optimization and support for OpenMP for parallel programming. There’s also the newest gdb debugger, which gives us pending breakpoint support so we can get rid of a lot of the kludges we had to put in place to support this kind of thing in the CDT. Unfortunately, debugging support for MSVC-built binaries on Windows isn’t as complete as I’d hoped, but there has been progress as part of the Target Communication Framework (TCF) work at Eclipse, so we will get there sooner or later.
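To show why I care about the OpenMP support in particular, here’s a minimal sketch of what it buys you: one pragma and the compiler farms the loop out across your cores. This assumes a gcc built with OpenMP enabled, compiled with something like g++ -fopenmp:

    #include <omp.h>
    #include <vector>
    #include <cstdio>

    int main() {
        const int N = 10000000;
        std::vector<double> a(N);
        double sum = 0.0;

        // OpenMP splits the iterations across the available threads and
        // combines the per-thread partial sums when the loop finishes.
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = i * 0.5;
            sum += a[i];
        }

        printf("sum = %g, using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }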

And, of course, there’s Ganymede, including the latest CDT 5.0.1, which will be coming out with Ganymede SR1 in a couple of weeks. CDT got some really awesome improvements in the 5.0 stream, including new refactoring support.

So for those waiting, I’m glad you’re a patient bunch. The wait will be worth it for this critical piece of my continuing effort to get grassroots C++ programmers and hobbyists, many of whom are working on Windows, into the Eclipse world.

Open Source Handhelds

Quite a while ago now I posted about the open source gaming device from Korea known as the GP2X. At the end of the day, it ended up with a storied history, and while I love the concept of a handheld mobile device for which you can write your own applications, the company’s execution outside of Korea wasn’t that great, and only a distributor in the UK was able to make any kind of splash with it.

At any rate, I found on Slashdot that they have announced a new generation of the product called the Wiz. The links lead you to the UK site and a big JPEG of the brochure in English. The specs look pretty good: an ARM9 processor at 533 MHz, 3D-accelerated graphics, Linux of course, and support for audio and video, making it a pretty cool multimedia gaming machine for which you can write your own applications. And hopefully they’ll be a bit more successful at delivering it than they were with the last one.

But there are other choices for such open handheld devices. One of the commenters on Slashdot pointed to another one called OpenPandora. It has better specs, including the TI OMAP3, which is a monster ARM Cortex-A8 processor with full OpenGL ES 2.0 (i.e. with programmable shaders) graphics. It comes at what I believe will be a higher price point than the Wiz, but it is more powerful and has a QWERTY keyboard.

Looking at this in combination with the Linux mobile phone thrusts going on reminds me of the early days of the PC: lots of different platforms doing specialized things that beckon the hobbyist programmer to come play – VIC-20, Commodore 64, Trash-80, … The PC is relatively boring today, but maybe these devices can bring in a new generation of programmers who love to play like we did “back in the day”.

Android SDK goes beta

Well, if you follow the embedded industry even from a distance, the news that the Android SDK has gone beta is old news by now. I’ve been so busy p2-izing our upcoming Wind River products that I haven’t had time to write here. Time to get my priorities straight :).

Anyhoo, there’s a lot of competition all of a sudden for mind share in the mobile Linux game. Android has been pretty quiet lately, but they’ve clearly been busy beavers, and a real Android-based device seems imminent, so it’s time to take them seriously.

The only thing I’m really waiting for from them, however, is the open sourcing of their Dalvik VM technology (can’t call it Java since it’s not Java, but it’s Java…). For Java to truly take off in the embedded space, we need a good VM that works in resource-constrained environments and runs well on relatively slow processors. Oh, yeah, and it has to be freely available for the kids to play with. It’s there in the Android SDK, but it would be cool to have the source to see how hard it would be to port to other mobile Linux platforms.

Because there are other players in this game and they actually seem to be ahead at this point. OpenMoko has already shipped a new version of their Neo FreeRunner phone. LiMo has a healthy stable of partners with a number of them already shipping devices as well.

So is there room for all of them? No. There is not. As Donald Smith conjectured recently, mobile is becoming more about the software stack than the hardware. Luckily we have more hardware than stacks at the moment, but not by much. We need to see some consolidation here soon so that application developers can start building those killer apps without having to port them to umpteen different environments.

BTW, Equinox running on Dalvik would be very cool. I’m guessing it’s not easy. But is this in the works by anyone?

Eye Candy on the Linux Desktop

My crusade for a better Linux desktop continues. After reading a recent rant from someone at the Inq and recent predictions on what Linux will look like in 4 years’ time, I thought I’d give another shot at improving my Linux desktop. I’ve been using it heavily from the command line for manufacturing media for upcoming Wind River products using our Eclipse p2-based installer/generator and for mucking around with ClearCase. I can do all that from PuTTY on my Windows machine, but it would be easier if I could just do it and everything else I need to do right on the Linux desktop.

What I was really after was the 3D effects offered by the Compiz Fusion compositing window manager for my GNOME desktop. After a recent bad experience installing a security upgrade for Fedora 9 that totally killed my machine (no boot-up for you), I’m back on Ubuntu 8.04. Alas, I had to install the proprietary drivers for my ATI video card before I could turn on the effects, which is another can of worms I’ll leave for now. But once I did, I was up and flying with wobbly windows and spinning cubes and windows flying all over the place.

I haven’t used Mac OS X or Vista seriously, but I can’t imagine any eye candy they have that Compiz doesn’t. So in that sense, I get the feeling that Linux is making huge strides forward. I still haven’t figured out how to get fonts as crisp as I get on Windows, but I imagine it could be done. It does indeed appear we’re not that far away from a champion desktop for Linux (sorry, I’ve been watching too much of the Olympics and I’m finding too many parallels between the Linux desktop and the ability of Canadian athletes to win medals; 4th place is great, but…).

But I really liked what the guy said in the article about Linux in 2012. He predicts we will get there, but that it’s going to take “for pay” distributions of Linux to take us there. Free software isn’t going to do it. And there’s a good reason: Windows and Mac look so good because of the proprietary software that makes it happen. If we want that on Linux, we’re going to have to pay for it, just like we do for Windows and Mac OS X.

We’re going to have to pay for the licences to the software that legally plays MP3s and shows DVDs and cleans up our fonts, and for someone to make it all work on our laptops without our having to edit anything in /etc. That’s just the economic reality of it. And I for one have no problem with that, because I believe you get what you pay for. Free software is great for commodity software like kernels and windowing systems and IDEs written in Java 😉 where there are lots of people to help build them. But it takes rare skill to make a great desktop environment, and the guys with those skills hold the cards and probably want to profit from their good fortune. To get a great desktop on top of that great OS, it’s worth it.

How many threads? No way!

Well, I thought I was being sneaky when I googled for the Intel Larrabee paper that is being presented tomorrow at SIGGRAPH. I assumed you had to be an ACM member to get a copy, but then found it on the Intel site. Everyone else seems to have found it too. To save you the hassle, here it is. I found the link today on Wikipedia, so apparently it’s no longer a secret.

Anyway, I’m more curious about it than anything. And I guess the biggest question I had was how many cores the thing is supposed to have. This will help me gauge how pressing the need is to figure out how to program massively threaded applications without Joe Programmer’s head exploding.

Well, there’s no actual product announcement in the paper, which I guess is the right thing since it’s a conference paper aimed at getting graphics programmers interested in the architecture. But the performance charts give a hint. They show the number of cores needed to make some popular recent games hit 60 frames per second, and you generally need between 24 and 32 cores. They then show some charts demonstrating scalability up to 64 cores. Given that, I think it’s safe to say we’ll see 24-32 core Larrabees in the not-too-distant future and grow from there.

But wait, there’s more. Each core has 4 threads of execution to help ensure that the cores are kept busy while waiting for cache and memory and I/O and such. That gives you up to 256 concurrent threads in the 64-core monster and 128 in the regular 32-core machine. That’s a lot of threads. And, yes, it’ll be hard to write applications that can get the maximum performance from these.
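To make the arithmetic concrete (32 cores × 4 threads = 128 contexts), here’s a toy sketch, again using OpenMP, of the partitioning style that keeping that many threads fed demands. The thread count below is just the arithmetic above, not anything announced in the paper:

    #include <omp.h>
    #include <cstdio>

    int main() {
        // Pretend we're on the 32-core, 4-way threaded part: 128 hardware threads.
        const int THREADS = 128;
        const long long N = 100000000LL;
        omp_set_num_threads(THREADS);

        long long total = 0;
        #pragma omp parallel reduction(+:total)
        {
            // Each thread sums a strided slice of the range. Any shared
            // mutable state between slices would serialize us right back down.
            int id = omp_get_thread_num();
            int count = omp_get_num_threads();
            long long local = 0;
            for (long long i = id; i < N; i += count)
                local += i;
            total += local;
        }
        printf("total = %lld across %d threads\n", total, THREADS);
        return 0;
    }

On today’s dual-core boxes this just oversubscribes, but the shape of the code – independent slices, no sharing – is the part that has to become second nature.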

One other thing I found interesting is that their performance tests were done with the cores running at 1 GHz. That’s pretty slow by modern standards, and it explains why they are focusing on graphics co-processor applications, where programmers are already trying to figure out how to make things more parallel. If you don’t, your single-threaded application running at 1 GHz will be pretty slow (trust me, I have a 667 MHz Pentium III at home; brutal). So until multi-threaded programming becomes more mainstream, don’t expect to see one of these things running your Office suite or Eclipse – even though it could.

Carmack the Magnificent

(BTW, the title of this blog entry is a spin on Carnac the Magnificent, a recurring character of one of my favorite TV personalities from many years ago, Johnny Carson.)

I just finished reading an interview Tom’s Games did with John Carmack, the mastermind behind id Software. I love hearing from him. He’s very respected in the gaming industry, is a really smart guy, and is a total geek, but he keeps a good eye on the business. You learn a lot about where gaming is going, technology-wise and business-wise, by listening to him.

A couple of things he mentioned interested me. He did bring up multi-core computing and how game programmers are still trying to figure out how to take advantage of all those extra concurrent threads. As with everything, it’s going to really change the way programmers think. In what I’m sure was a Freudian slip by the writer, he used the word “paralyze” instead of “parallelize”. I think that’s exactly what’s happening: programmers are paralyzed as they try to figure out what to do. It goes against everything they’ve been trained to do.

On the good news front, though, I’ve been playing a little with the dataflow programming paradigm and I’m liking the way it makes me think. Essentially you have a huge number of processes that have data streams between them. Your program becomes a network of co-operating virtual machines. Makes sense to me, but it needs to be tried out on a real application. And I think gaming is that perfect application.
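To make that concrete, here’s a minimal sketch of the idea in C++ with plain pthreads (compile with -pthread): three little “processes” – a producer, a transformer, and the main thread as the sink – that communicate only through blocking streams. All of the names are mine, purely illustrative:

    #include <pthread.h>
    #include <queue>
    #include <cstdio>

    // A blocking stream connecting two nodes in the network.
    struct Stream {
        std::queue<int> q;
        pthread_mutex_t m;
        pthread_cond_t c;
        Stream() { pthread_mutex_init(&m, 0); pthread_cond_init(&c, 0); }
        void put(int v) {
            pthread_mutex_lock(&m);
            q.push(v);
            pthread_cond_signal(&c);
            pthread_mutex_unlock(&m);
        }
        int get() {
            pthread_mutex_lock(&m);
            while (q.empty())
                pthread_cond_wait(&c, &m);
            int v = q.front();
            q.pop();
            pthread_mutex_unlock(&m);
            return v;
        }
    };

    Stream rawValues, squaredValues;

    // Producer node: emits 0..9 followed by a -1 end-of-stream marker.
    void* produce(void*) {
        for (int i = 0; i < 10; i++)
            rawValues.put(i);
        rawValues.put(-1);
        return 0;
    }

    // Transform node: squares everything flowing through it.
    void* square(void*) {
        for (int v = rawValues.get(); v != -1; v = rawValues.get())
            squaredValues.put(v * v);
        squaredValues.put(-1);
        return 0;
    }

    int main() {
        pthread_t p, s;
        pthread_create(&p, 0, produce, 0);
        pthread_create(&s, 0, square, 0);
        // The main thread is the sink node of the network.
        for (int v = squaredValues.get(); v != -1; v = squaredValues.get())
            printf("%d\n", v);
        pthread_join(p, 0);
        pthread_join(s, 0);
        return 0;
    }

The nice property is that each node only knows about its streams; spreading the network across more cores doesn’t change the program’s structure at all.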

As a side note, John also talks about the iPhone and how it can be a great gaming platform. His wife says it’s a terrible phone, which I’m sure it is, but using these things as phones is becoming secondary to the other applications that can run on them. I’m pretty sure that’s where the future of mobile devices is headed. Forget the phone, they’re just great little network appliances.

How’s build working for you?

It’s been a crazy couple of months for me. The list of things I need to do has been badly encroaching on my time for Eclipse community work. But it’s all been good and we’re working on some really cool stuff with p2 internally here at Wind River which should turn into community work as well anyway.

But it’s getting time to start my work on e4 and the Eclipse resource model. We’ve talked a lot in the past about the need for flexibility there, and for supporting other resource models that don’t necessarily map to the underlying file system. Even in the last few days we’ve received inquiries on the cdt-dev list about supporting Visual Studio-style projects in this manner (from a guy with an nvidia.com e-mail address – quite interesting). So this is what we’ve meant by flexible resources and what we plan on addressing.

The question I’m starting to ask myself is whether we should be looking at the build side of the resource model as well. If you come from the Makefile-centric view of the world, the Eclipse build system is very bizarre indeed. I’m not sure of the history of how it got to be that way, but getting Eclipse to build CDT projects correctly was our first real interaction with the Eclipse Platform team. There’s still a lot of magic there that could probably be simplified if we made the build model more flexible as well.

Also, we have a hell of a time co-ordinating the loading of the CDT build model data from our magic .cproject file at the right time and in a scalable, non-deadlockable way. Sometimes I just wish that the .project file and the code that manages it were more flexible as well, so that our build data could be loaded at the same time as the rest of the project information. (Yes, you can do a little of that now, but not easily with the complex data we have for our build configurations, tool chains, option settings, etc.)

Anyway, I don’t know how many people are building IncrementalProjectBuilders out there (and, yes, the weirdness does start there), but I was wondering whether other plug-in developers are facing issues like this too.