Monthly Archives: December 2008

Predictions for 2009

I’m not usually one to make predictions. It’s hard for me to tell the difference between a prediction and wishful thinking. But this article over at the Inquirer (still the best place to get an honest take on the industry along with /.) got me thinking about a couple of things I think are going to be important in 2009. So here we go…

2009: The Year of the GPGPU

This is more a continuation of a trend, but the Inq article made some great points that I think will put a spotlight on general purpose programming with GPUs. The key one is the recent standardization of a cross-platform way of programming these things, OpenCL. ATI and nVidia have already signed up to provide OpenCL support for their chips, and look for Intel’s Larrabee platform to come with the same. I think there are still some software and hardware architectural things that need to be done to make GPGPU more efficient and easier to program. Look for LLVM (which needs an article of its own) to play a role, as it already does with OpenGL, and look for one of the chip vendors to put a GPU on the memory bus shared with the CPU and make these things sing.

2009: The year of WebKit

Ok, yes, I’m playing it safe with these predictions. WebKit is already the base for Apple Safari, Google Chrome, and a host of Linux-based browsers, so it already has a ton of momentum. The first reason I think WebKit is going to the next level is the top-of-class performance of its new JavaScript VM (and I can’t imagine why Google would continue with V8 in Chrome). But I’m also impressed with how easy it is to create your own WebKit-based browser, and how easy it is to create a Linux-based platform that uses WebKit as its front end (launch X, launch a simplified WebKit shell in fullscreen, done). I expect to see a lot more mobile internet devices built this way. At the very least, it gives embedded developers a reason to care about AJAX.
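That “launch X, launch a WebKit shell, done” recipe really is about that short. As a sketch, the whole front end of such a device could be a single .xinitrc (the `webkit-shell` binary name is hypothetical; it stands in for whatever embedded WebKit launcher you build):

```shell
# Hypothetical ~/.xinitrc for a WebKit-only appliance: no desktop environment,
# no window manager -- one fullscreen browser process is the entire UI.
# "webkit-shell" is a placeholder for your embedded WebKit launcher.
exec webkit-shell --fullscreen file:///opt/device-ui/index.html
```

Then `startx` boots straight into an AJAX-driven UI.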

C++0x won’t be C++09

I think that’s a foregone conclusion, but no one really wants to admit it yet. Still, look for the vote to finish this year at least. C++0x will be an exciting evolution of C++ into the next generation. No, it doesn’t have garbage collection, yet, but it does have smart pointers that do the job better if you use them right. C++0x makes it easier to do a lot of things, and the introduction of closures and lambda functions and expressions will breathe some life into this stalwart of the software engineering community.

Well, that’s it for now. If I think of more over the next couple of days I’ll post them. There are a lot of things I hope will happen, but I’m not sure they will. But one thing is for sure: open source is here to stay and is becoming a core business model that companies still need to understand and learn to use effectively, and I will continue my work with Eclipse and Wind River to help figure that out and spread the word.

Have a safe and happy New Year! See you on the other side.

A look at WebKit

A few days ago, I was playing with Google’s V8 JavaScript VM library and got it compiling with MinGW in Wascana. I submitted the patch to make it work but I haven’t heard back. I guess it could be the Christmas break.

But one thing that struck me as odd recently was an announcement that the next rev of Android would include WebKit’s SquirrelFish JavaScript VM. I guess that shouldn’t be too surprising since SquirrelFish comes with WebKit. But then why is there ARM support (the CPU for Android) in V8? And if they are using SquirrelFish for Android, why don’t they use the souped-up SquirrelFish Extreme for Chrome? Especially since there are benchmarks showing it beating V8. I’m confused and can only chalk it up to Google being a big company; maybe the Android people don’t hang out with the Chrome people.

Anyway, that got me looking into this whole WebKit business. I downloaded the latest nightly source build to my Debian Linux VM and, after installing a boatload of prerequisite packages, built it. I had heard the JavaScriptCore library, which implements the VM, was embeddable in C++ apps. The header files are there, but it looks like you actually have to embed the whole WebKit library to get at the VM.

That got me thinking back to an earlier idea I had: use HTML with JavaScript as your main GUI framework. With WebKit, you can embed the whole browser into your application, and you can hook up new JavaScript classes to your C++ classes to provide scripting and to give the UI access to them. It would be interesting to see how that works in action.

I think I’m starting to figure out this whole JavaScript and C++ thing, with thanks partly to something a commenter said on a previous entry. Use scripting for quick turnaround, when you want to whip up a prototype or allow for easy extension of functionality. But use C++ for areas where you need to engineer functionality. Part of your architecture design is deciding what that means. And something like WebKit might be the right platform to get you off the ground.

VirtualBox 2.1 and assorted Christmas Fun

Just some random thoughts on this Saturday after Christmas. My family and I had a good Christmas, despite a little “Fun with Autism” moment with my Autistic son, but it’s all better now (patience is a key survival technique in our household). Yesterday was Boxing Day in Canada, which is a holiday here despite all the stores being open for your shopping pleasure. If you don’t feel like going out, you are free to sit around, well, like boxes, which we did for the most part.

I’m spending a little time today, while everyone is playing on the PS3 and various PCs around the house, getting ready for my EclipseCon tutorial. I’m really looking forward to it. By the end of the tutorial, you’ll walk away with Wascana, which you’ll use to build qemu, a little Debian Linux image running in that qemu, and a cross-compile toolchain and CDT integration (also built by you) for creating apps for Debian from Windows (and maybe Linux). Lots of hands-on, and hopefully an appreciation of why the CDT is the first-class cross-platform C/C++ development environment.

Before I get back into playing with qemu: it was cool to see a new version of the VirtualBox emulator come out, 2.1. It’s a minor version increase, but there are two significant features added. One is 64-bit guest support on 32-bit platforms. This is critical for me and my installer work at Wind River, where I need to test and debug on 32-bit and 64-bit platforms. I don’t trust 64-bit Linux enough yet to make it my main Linux environment, not to mention my downright fear of 64-bit Windows.

The other cool thing is more on my personal interest front. They have an initial release of OpenGL support. If you read this blog regularly, you’ll know I have a dream of an open Linux-based game console/multimedia set top box. I’d like to try some ideas out on a Linux platform with 3D hardware without actually buying any and this is the first emulator to have OpenGL support.

Unfortunately, they only have Windows guest drivers at the moment but have promised Linux/X drivers soon. I can’t wait, but it does lead me to drop my plans for working on OpenGL support for qemu. Instead, I really need to spend what little hobby time I have learning how to write an X window manager, using a cross-compile environment with the CDT, of course 😉

I could have had a V8, oh wait, I do

I’ve always been intrigued by programming languages and what makes them tick, and what is the best one for what situation. That’s why Dave Thomas’s keynote at ESE still has me thinking about the mix of JavaScript and C++. So much so that I spent a few hours this weekend while waiting out the snow storm to get Google’s V8 JavaScript VM building under MinGW for Wascana. I think it would be an intriguing addition to have the VM DLL available for developers using Wascana. With a few changes, I have it building and passing the unit tests and I have a patch into the V8 project. I’ll make V8 available in the Wascana 1.0 alpha in the next couple of days.

Now that I have it, I have to ask myself – what the heck do you do with it? I’ve thought about building wrappers for the wxWidgets library to let you build thick client apps in JavaScript. wxWidgets also comes with Wascana, and thick client apps are kinda what Wascana is all about (aside from dreams of using it for game development, which could also benefit from a fast JavaScript engine).

But it’s not clear where one would draw the line between JavaScript and C++. Given a C++ library like wxWidgets, or SDL, or what have you, is it enough to wrap it with JavaScript and have the developer do everything in JavaScript? Or should JavaScript just be this thing on the side that allows for extensibility of some larger application written in C++?

It makes me wonder if I’m following some crazy idea that some madman sold me in a bar in Germany. Or maybe this is challenging me to give it deeper thought, to think about how scripting and native languages are supposed to mix. Where in all this is the sweet spot of architectural balance? Or is there one? Either way, it’ll be on my mind over the Christmas holiday season.

Fun with FEEDJIT

I’m not sure if you noticed, or are reading this blog from one of the syndication sites it gets copied to (like Planet Eclipse, or the Wind River Blog Network). But if you check back at the original site and scroll down a bit, you’ll see a new panel called the FEEDJIT Live Traffic Feed. I know people express concerns about web things following them, and if I get enough negative response I’ll pull it off. But in the meantime, I’m spellbound by this feature.

I’m learning quite a lot about the audience for this blog. The traffic feed gives me the city where each visitor was, spread throughout the world, as well as a hint at how they got to my site. A few people come directly, I guess from an RSS reader where they’ve subscribed one way or another (thank you!). More often, though, people end up here from Google searches, and I get the snippet they were searching for! Creepy, but very useful.

So what are people searching for that pulls up my site? Well, a lot of it has been the topics I’m most interested in lately, and that’s CDT for Windows development, including Windows cross to Linux. It’s good to see the interest from the community on that, and I am continuing work on Wascana 1.0 as I write this (SDL is building in the background). I also often get a few queries on the Subversion Eclipse plug-in wars (I hate both right now, go git!). And you get the odd one looking for help, like today’s “eclipse CDT autocomplete crap” (yeah, it has issues if your environment isn’t set up).

Anyway, it’s pretty interesting to watch, and it humbles me immensely to see people from around the world reading what I write, especially when the google search reveals they searched for me by name. But I love to write and share my thoughts and I really appreciate it when people leave comments. Whether I agree with them or not, I always learn something from what they put there. It’s a lot of fun and I encourage everyone to do the same. There will always be someone out there interested in what you have to say.

Fun with my little VIA console

At the Embedded Systems Conference in San Jose this year, they handed out little VIA embedded EPIA systems to the attendees. I’m not sure everyone got one, but I was thrilled. It has an embedded VIA processor with a chipset that includes Unichrome 3D graphics, and also includes a hard drive, ethernet, VGA, four USB ports, and audio in and out. It’s a cool little unit.

I haven’t done too much with it, but thinking about this Open Console concept (set top box with 3D graphics running Linux), I thought I’d try setting it up with some of the things I had in mind. I started by putting the Debian lenny installer onto a USB stick and installing from it. That was a little tricky until I reformatted my USB stick and put syslinux on it properly. I installed enough packages to get X running with the openchrome driver for 3D graphics. glxgears ran pretty smoothly, which gave me some hope I could actually use this thing to run games.

So I got adventurous and installed Nexuiz, an open source first-person shooter. To my surprise, this and other open source 3D games are available from the Debian package repository. So a quick little ‘apt-get’ brought down around 450MB of game, and I was off and running. Well, off anyway. I got about 20 seconds per frame, which made it a little hard to even notice the thing was running.

Anyway, I tried a few other, simpler games and they actually worked. I had to force myself to go to bed while hooked on billards-gl. It was fun. But I’ve slowly begun to realize that games built for the desktop aren’t really ready to be played with only a joystick, which is likely all you’d have in a set top box scenario. So there would be work to be done.

I also started to understand first hand the commercial opportunity behind Linux, embedded Linux especially. Sure, you can install a Linux distro and get a desktop environment up without too much effort. But try to do anything off that beaten path and you’re in for a lot of work. If you can share in that work, fine. If you can pay someone to do it for you for cheaper than you could do it yourself, even better.

I also gave up on using this little VIA box for my play-totyping (hmm, new word). I need to start getting ready for my EclipseCon tutorial, which will help me get back into the guts of qemu. Maybe I can do a little work there to bring GLX emulation to it, play time permitting, of course. Or maybe I’ll shell out the $500 to build a real system. Though playing in qemu would be funner…

Time for Distributed Source Control is Now

Imagine this scenario. You’re part of a small team that’s been following the CDT closely and has adopted it as the IDE for your commercial platform. You grab the CDT source at times convenient to your product delivery schedule and work on a local copy, fixing bugs you find as you go through product testing. You’re not a committer, but you do submit patches from time to time and hope that the CDT team picks them up. But they’re often busy with their own delivery schedules, and the patches often grow stale and fall off everyone’s radar.

So you live with your CDT fork and struggle every time you have to update to a new CDT version, so you don’t do that very often. And since you’re busy struggling in that environment, you really don’t end up with time to get more involved with the CDT. You are a small team and you only have so much time in the day. You run into Doug once in a while at the Eclipse conferences, talk about what you do, and promise you’ll figure out some way to get more involved, but he knows your story too well and doesn’t put much faith in it, despite his appreciation for your intentions.

Sounds like I have experience with this, doesn’t it? This scenario is all too real and, I’d bet, very common across open source projects. Relying on CVS and Subversion at Eclipse, with access controls limited to the select few committers, makes it very difficult for those on the fringes to get more involved. It truly is a have/have-not environment. The committers have it easy, checking in their changes whenever they want, while those who aren’t struggle to keep up, or simply fork and go their own direction.

I’ve learned that the new Symbian Foundation has selected Mercurial as their source control system. Along with Linus’s git, it’s one of the new breed of distributed source control systems. These systems allow for multiple repositories and provide mechanisms to pull and push changes between them. The introduction chapter of the Mercurial on-line book provides a great description of why this architecture works well for large, globally distributed projects.

I invite everyone to read it, especially the Eclipse community, because I think we need this kind of capability now. CDT needs an infusion of new blood, and I know there are a lot of people who work with the CDT code base but have only limited time to contribute back. If we had the infrastructure to better support them, making it easier to pull their changes into the CDT main line and easier for them to keep up with everyone else’s changes, it could be the formula we need to grow.

x86, the ultimate applet engine?

I need to watch out or people will start calling me a Google fan boy or something (well, too late). It seems everything they come up with lately grabs my attention. And I guess it makes sense, because they seem to be heading in a different direction than a lot of people, and more in a direction that appeals to me. First Android (open mobile handset), then Google Chrome (Webkit-based browser), then the V8 C++ friendly JavaScript VM, and now, Native Client.

If you haven’t heard of it, it appears to be a Google research project into running secured native x86 code in a browser. Yes, we have tried that before with ActiveX and it was a security disaster. But the underlying need for high performance interactive web pages is pretty intriguing. If you could write browser applets in C++, why wouldn’t you? I suppose…

I had to try it myself. The install instructions are for Firefox, but I dumped Firefox for Chrome a while ago. It’s good that Chrome has some Firefox in it, because all I had to do was copy the plugins for Firefox into my Chrome Plugins directory (it’s hidden in Local Settings, Application Data, Google, Chrome, Application, Plugins).

I was then able to go through their little demos and tests. They’re cute, and the Mandelbrot demo shows some of the power. There’s also a demo of the open source SDL version of id’s Quake. It’s pretty complicated to build and I couldn’t get it working on my Windows box (mainly because I’m Cygwin-free and it seems to need it). But it’s an interesting idea, taking an SDL-based application and converting it to run in a browser (Native Client uses SDL to do audio and video). Maybe they’ll even expose OpenGL through SDL to the native code as well. That would be more interesting.

One thing that burst my bubble with this whole experience, though, was the results of the performance tests they have. The C++ versions of the tests were only marginally better than the JavaScript ones. I think that’s thanks to the great job they’ve done with the V8 VM. If that’s the case, I really wonder whether this stuff actually makes sense, other than for porting old software-rendered games to your browser, I guess. I need to stew on that one a little before buying into this idea.

A busy day for Khronos

My News feed filled up all of a sudden today. Looks like they’ve been busy and had a couple of announcements to make.

They released a new version of the 2D OpenVG spec. They added some APIs for text glyphing to make it easier to draw good-looking text. I’m not sure anyone really uses OpenVG, especially when you are most likely to be drawing 2D in a web browser with Adobe Flash or SVG (and even then, most likely Flash). From the news release, this is probably most interesting to the mobile crowd.

The more interesting announcement for me was the release of the first OpenCL spec. OpenCL is a standard for running general algorithms on the newer GPUs in video cards. It’ll also be ported to other multi-core systems like Cell and DSPs, but most likely you’ll be using it with a video card. Of course AMD and nVidia were quick to announce their support for this spec, which gives it some immediate momentum.

OpenCL specifies a C-based language for parallel processing, as well as APIs to drive it. Up until now, nVidia and AMD had proprietary solutions that didn’t work cross platform. OpenCL opens the door to making parallel programming available to more and more programmers, and I’m dying to see what they’ll do with it…

Wascana 1.0 in Alpha Testing

Well, that didn’t take very long. I’ve spent a few hours building my special p2 artifact repository that manages installed files, including extracting them from an archive and deleting them at uninstall time, along with its associated p2 touchpoint that hooks it all up. It’s not a lot of code and you can see it in CDT’s CVS space (repo: /cvsroot/tools, module: org.eclipse.cdt/p2).

I’ve also created a generator that creates p2 repositories that use that touchpoint to install remote artifacts from various locations, mostly on SourceForge. Currently I only have support for the MinGW toolchain and the MSYS shell environment. I’ll add libraries as I build them with the 4.2.3 compiler I’m using here. I’ll start with SDL and also do wxWidgets and boost. We can always add more later.

It’s working very well. Managed build picks up the MinGW toolchain and uses it when you select it. MSYS doesn’t work yet for Makefile projects, but managed build is usable now. And here’s how:

  1. Unzip the Eclipse IDE for C/C++ Developers anywhere you’d like on your machine. You can also start with any other Eclipse install as long as you have the CDT installed.
  2. In Software Updates, expand out the tools/cdt/releases/ganymede site into CDT Optional Features and install the Eclipse CDT p2 Toolchain Installer feature. Allow Eclipse to restart to make sure things are initialized (I’m not sure if you really have to do this, I’m just paranoid).
  3. Go back to Software Updates, add the Wascana repo site, and install everything under the MinGW Toolchain category. This time you don’t need to restart. You don’t even need to apply changes.

Once you’re done, you can go to the directory containing eclipse.exe and you’ll see the mingw and msys directories there, ready to go. Well, at least the mingw dir is; I still need to set up msys correctly to find the mingw compilers, but it is only an alpha :).

Feel free to give it a try and let me know what you think. I’m pretty excited with how this is going. While creating this, a new version of the win32 API component came out and I added it to the repo and the Update… feature found and installed it. Very cool!

It’s very interesting where this path is going. The ability to incrementally add libraries and update components to new versions will be a great showcase of how p2 can manage more than just bundles. Not to mention help me build one heck of a Windows development environment based on the CDT and open source tools and libraries.