Monthly Archives: July 2008

Going with the data-flow

I’ve just been reading articles in Wikipedia on dataflow programming. This programming paradigm captures what I think is the greatest need we face in building the multi-threaded applications of the future. It also explains where a lot of the concepts in UML Actions and Activities come from.

From what I understand most of the dataflow languages are visual languages. The SynthMaker tool that came with my Fruity Loops is like that. The page also lists the hardware description languages, Verilog and VHDL, in this category. I think they’ve left out SystemC since it fits into that mould as well.

But even if dataflow programming is the big new paradigm, I firmly believe that any new paradigm will only be mainstream if it’s familiar to developers. There have been some great programming languages over the years that you would swear are much better than C (Ada comes to mind, and Pascal was really good too), but if you look at the most popular languages in use today, C-like languages, with C++, Java, and C# in particular, win by a landslide.

So it comes back to what I was mentioning earlier. SystemC is a great example of a C++ library and run-time that implement a different programming paradigm but let you reuse all the skills you’ve learned with other C++ applications. And it’s an example of a language that supports dataflow programming, which we need for massively multiprocessing applications. It’s definitely a source of inspiration.
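To make the idea concrete, here is a toy sketch of what dataflow-style programming could look like in plain C++. The names are entirely made up (they don’t come from SystemC or any other library); the point is just that data flows through a graph of stages, and each stage only knows about its own inputs and outputs.

#include <cstdio>

// A stage in the dataflow graph: it accepts values from upstream and
// emits values downstream, without knowing who is on either end.
struct Stage {
    Stage* next;                       // where our output flows to
    Stage() : next(0) {}
    virtual ~Stage() {}
    virtual void accept(int value) = 0;
    void emit(int value) { if (next) next->accept(value); }
};

struct Doubler : Stage {               // transform: output = 2 * input
    void accept(int value) { emit(2 * value); }
};

struct Printer : Stage {               // sink: print whatever arrives
    void accept(int value) { std::printf("%d\n", value); }
};

int main() {
    Doubler doubler;
    Printer printer;
    doubler.next = &printer;           // wire up the graph: doubler -> printer

    for (int i = 0; i < 5; ++i)        // push data through it
        doubler.accept(i);
    return 0;
}

This version runs everything on one thread, but in a real dataflow runtime each stage could sit on its own core with queues in between, which is exactly why the paradigm is interesting for multi-threaded applications.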

Who’s leading anyway?

The LinuxHater linked off to Christopher Blizzard’s (of OLPC fame and now at Mozilla) blog on the current state of affairs with the GNOME project. He gives some very eye-opening insight into what’s happening there and the potential future directions for GNOME, GTK, and friends. It’s not pretty, literally.

GNOME is getting big in the mobile space, or at least contributors from that space are starting to dominate the GNOME project. And as we all know, in the open source world the contributors are the leaders and get to make the decisions. What this likely means, and what Blizzard is afraid of, is that the GNOME desktop is not going to get the attention it needs to keep up with the modern interfaces it competes against. The commercial interest just isn’t there to make it happen like it is with GNOME mobile.

He explicitly spells out Qt and Apple as leaders in making good user experiences and developer friendly APIs. My favorite quote of his: “If in a platform-driven market and a platform-driven world you’re not the #1 or #2 player it’s going to be very difficult to make a dent in the market. (This is especially true if Nokia decides to fix the Qt licensing.)” I agree on both fronts. I can’t see how GNOME is going to grow without serious innovation. And I hope that Nokia fixes the Qt licensing (wink, wink, nudge, nudge).

Being CDTDoug, focused on embedded and mobile at Wind River and on the success of CDT for Windows development with Wascana, why do I care so much about the Linux desktop? Well, I think it’s the missing piece in the open source success story. We have Linux as an overwhelming favorite in the server market, and it’s making great strides in the embedded space. But without success on the desktop, my Mom isn’t going to care. Which means Microsoft and Apple and closed source technologies are still seen as the right path to innovation by the general public. And until “I am a Mac” dukes it out with “I am a Linux PC”, there will always be doubts about whether open source can compete with the big boys.

LinuxHater, a touch of tough love

From now on, I defer all my opinions on the quality of the Linux desktop and the open source projects that work on it to this guy, the LinuxHater. I started reading this blog after I ran across this article on the ‘Z’ via the ‘dot’, written by a guy from Google. What both of them have to say really hits home.

The hater shares some really honest opinions using some very colorful language (warning – if you’re sensitive to that kind of thing) on everything from how hard it is for his grandmother to get into Linux, to how all the forking and duplication going on in the FOSS community is doing serious harm to our ability to build up the Linux desktop to compete with Mac and Windows. It’s a really funny read. And I have to agree with the Google guy. Given how much the hater knows about what he’s writing about, he’s really a Linux lover who desperately wants Linux to succeed but is losing his cool in frustration.

And it’s hard to argue with what the guy says. Open source is about freedom, the freedom of the developer to build whatever he wants, however he wants it. And if he doesn’t like working on a project, he can start his own, and even fork the code. What he can’t do, however, is fork the developers. And that’s what’s killing the Linux desktop. Too much duplication is watering everything down. Everyone’s so focused on building the best framework that they’re forgetting about the average end user, who doesn’t care, or doesn’t have the capacity to care, and just wants something that’s easy to use and works.

With Eclipse, we’re making conscious efforts to avoid this problem. At almost every project creation review someone asks whether the project is duplicating some other project and, if so, we work hard to get everyone to work together to resolve it. I think it helps that Eclipse is very much commercially driven. We understand the economics of open source development. We have very limited resources to invest, and it’s so critical to work as a team with other companies, even if we compete in the marketplace. If we can get over that, why can’t Linux desktop projects, who don’t even have a financial vested interest in succeeding, do the same?

But there’s a lot of politics in open source, especially with projects close to the Free Software Foundation. I’m not sure how we get out of it. Hopefully, those involved can see through the sarcasm and listen to the message. Linux rocks as an operating system, it really needs a desktop to match, and the community needs to unite to provide sufficient resources to build it.

Important safety tip

So when you go to redefine the key sequence in Eclipse to do ‘Build All’, make sure you’re hitting the Ctrl key and not the Shift key. That explained why I couldn’t define the CXXFLAGS macro in my Makefile. Instead of Ctrl-X Ctrl-M for build all, I had accidentally defined it as Shift-X Shift-M. Weird (that it let me), cool (that I could do it), but it took me a little while to figure out that’s what it was, and not something wrong with my keyboard, when Shift-X didn’t do anything 🙂

BTW, thanks go out to the guys who are contributing to the Emacs key bindings. I’m loving it! All I need to add is this Build key sequence, which Emacs doesn’t define either anyway.

Now where’s that include file?

Yeah, C++ refactoring is our biggest achievement for CDT 5.0. But here’s a feature I’ve found probably more useful in my day-to-day use of the CDT (which is getting more and more frequent lately, which is awesome).

I have a burning need to learn GTK development. I have a little dialog based app that needs to run on Windows, Linux, and Solaris. I have the Windows version done using MinGW’s support for win32 programming (now there is an experience for you). Now I need to implement the same thing in GTK for Linux and Solaris.

So I’m going through the GTK 2.0 tutorial on the GTK web site and the first thing it gets me to type in is:


#include <gtk/gtk.h>
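For context, the rest of the tutorial’s opening program is roughly this (sketched from memory, so the details may differ from the actual tutorial text):

int main(int argc, char *argv[])
{
    GtkWidget *window;

    gtk_init(&argc, &argv);                        /* initialize GTK and parse standard options */
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);  /* create a top-level window */
    gtk_widget_show(window);

    gtk_main();                                    /* hand control over to the GTK main loop */
    return 0;
}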

The first thing the CDT does is throw up a warning marker on the line complaining that the CDT indexer couldn’t find that include file. Being suspicious, I did a build and sure enough the header file wasn’t found. With all the great work the indexer team is doing I really should trust what it’s telling me.

So I fired up the Ubuntu package manager and found out that the GTK devel package is indeed installed. What’s up?

Then I remembered one of the new CDT features I stumbled across in my 5.0 testing that we really should tell people about more. I went back to the #include statement and after

A tribute to ObjecTime

As I mentioned recently, I’ve been thinking of how you could program UML-like actions using C++ in a manner similar to SystemC. As I worked through the workflows of how actions could receive data and signals on input pins and send stuff on output pins, and how they all hook up together, I started to get a deja-vu feeling.

It brought back a lot of good memories from my years at ObjecTime. You know, we had a lot of this stuff back in the 90’s. Mind you, it was almost totally focused on state machines that communicated with each other using messages, but it had a lot of the multi-threaded, action-oriented development that we need for multi-core systems.

It was a fun time back then. We were a company of 150 or so working on something we were all very passionate about. It was one hell of a team. And we had some big customers but not many of them. When Rational bought us we saw it as a good thing that would lead our work to greater exposure and a bigger sales force. It didn’t pan out that way, but the team lived on and a lot of them are still working on modeling tools for Rational which is now a division of IBM.

But one area where I think we failed was in adoptability. We had some passionate early adopters as customers, but there are only so many of those, and you certainly don’t want to build a business on only that. And it was hard to get the code-centric guy to trust the modeling and code generation tools. Let’s face it, when it comes to crunch time, you’d rather be in the code with age-old and trusted tools, and the modeling tools easily fall by the wayside.

Anyway, that’s why I work on the CDT now. Code rules, at least for now. But as we try to introduce complex programming paradigms to facilitate multi-threaded development, I’ve got to wonder if there isn’t another ObjecTime out there. We were years ahead of the industry and we knew it. I had feared that its time would never come, but maybe that’s not true after all.

The word with Mark Shuttleworth

I ran across (with the help of slashdot 🙂 this interview with Mr. Ubuntu, Mark Shuttleworth, and found it very interesting. It’s a good insight into how a commercial entity is successfully, or hopefully successfully, working with the open source community to make things better. I’ve complained a lot here about the Linux desktop experience, and Mark feels the pain and is trying to do something about it.

He brings up a couple of interesting points. One is the GNOME/GTK versus KDE/Qt battle that’s been going on for years, and for years too long IMHO. And he mentions the point that I think really underlies the issue, and that’s licensing. GTK is popular because it’s LGPL, which allows software using it to pick its own license. Qt is technically and aesthetically better, but sorry, unless it’s commercially friendly in a free form, it’s going to lose the battle. And apparently it is losing, from what the article says.

And as long as the battle continues and the Linux community spends its limited resources on two desktops, the Linux desktop user community is going to pay the price. Mark discusses why he sees Mac OS X as the biggest winner lately in the desktop wars: it’s because of Apple’s dedication to providing an innovative user experience. That’s going to be hard to achieve with Linux without the community rallying behind fixing it, or a major vendor stepping up and investing in it. It sounds like that’s what Mark is going to do with Canonical, but they aren’t really a major vendor with deep pockets, at least not at this point.

Anyway, an insightful read. A lot of the discussion should be familiar to the Eclipse contributor community. Working in and influencing open source is a difficult task and requires some specialized talents. And apparently that bodes well for those who figure it out.

A lesson on SystemC

Here’s a quick look at an example of SystemC code, your traditional NAND gate:


#include <systemc.h>

SC_MODULE(my_nand) {
    sc_in<bool> a, b;
    sc_out<bool> f;

    void run() {
        f = !(a && b);
    }

    SC_CTOR(my_nand) {
        SC_METHOD(run);
        sensitive << a << b;
    }
};

It looks like some of the hardware description languages I’ve seen, such as Verilog. It lets you model inputs and outputs, and a process, here named run, that takes the inputs a and b and NANDs them to produce the output f. And it’s all continuous: the module will change the output value as the input values change.

The crazy thing is that this is C++ code. SystemC is a collection of header files that define the templated classes, such as sc_in, and some macros, such as SC_MODULE, as well as a runtime library that models the continuous nature of electronic signals and calls the process methods, such as run in our example, to execute the behaviors at the right time. Very cool use of C++ IMHO.

Now, UML Action Semantics isn’t that much different from the behavior and structure modeled here. You have actions that have input pins and output pins and a behavior that runs when all the input pins are ready. All actions run in parallel. It’s a discrete event system as opposed to a continuous one, as software tends to be compared with hardware. But I wonder whether we can use C++ in a similar way to program action semantics.

Picture a runtime that uses the underlying OS threading support on multi-core systems to run the actions in parallel as much as possible, combined with the familiarity of C++ and existing C++ tools (like the CDT :), but used to program a paradigm very different from traditional sequential C. It has me intrigued…
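Just to daydream a little, here’s a purely hypothetical sketch of how that might read. None of these names exist in SystemC or anywhere else; it simply mirrors the SC_MODULE style above, with pins instead of signals:

// Hypothetical pin templates, in the spirit of sc_in and sc_out.
template <typename T>
struct in_pin {
    T value;
    bool ready;
    in_pin() : ready(false) {}
};

template <typename T>
struct out_pin {
    in_pin<T>* wired_to;                     // the downstream input pin
    out_pin() : wired_to(0) {}
    void put(const T& v) {
        if (wired_to) { wired_to->value = v; wired_to->ready = true; }
    }
};

// A UML-style action: its behavior fires once all input pins hold a value.
struct multiply_action {
    in_pin<int>  a, b;
    out_pin<int> product;

    bool ready() const { return a.ready && b.ready; }

    void run() {                             // the action's behavior
        product.put(a.value * b.value);
        a.ready = b.ready = false;           // the input tokens are consumed
    }
};

A runtime library would then watch for actions whose pins are all ready and dispatch their run() methods onto a pool of worker threads, much the way the SystemC kernel schedules its sensitive processes.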

All eyes on Larrabee

There’s been a bit of talk on the web-o-sphere about a report out of the German tech magazine Heise.de claiming that the Larrabee multi-core processor Intel is working on would contain 32 original (well, second generation, but still 20 years old) Pentium cores. In the end, it appears to be just speculation, and Intel was quick to quash the rumors. But the logic behind the speculation seems plausible.

The old Pentiums were 3 million transistors, and with the new GPUs coming out at over a billion, you could pack a lot of those cores onto one chip. And I like the concept. Something old is new again. Simplify and multiply. There are a lot of transistors in modern CPUs just to handle out-of-order execution and to try to do as many things at once as possible at the instruction level. All of that is pretty complicated, but it made things simple for the programmer. We’ve gotten pretty good at doing the simple things, so why not take a step back and use what we know? But, of course, you still need software that can take advantage of it.

Reading the discussions really opened my eyes a bit more. This 32-core thing really is possible and will happen within the next year or so. Are we ready to build software applications that can do 32 things at once in an organized fashion? Thinking about it a bit over my holidays here, there is some existing technology we can use, and it’ll be pretty familiar. I’ll blog more about it in a couple of days or so, but think C++, generics, SystemC, UML action semantics. Mix them all in a pot and I think we can come up with some “soup for the multicore programmer’s soul”…

Are you ready for 1000 cores?

Massively parallel computing is something I’ve been interested in for a while and have blogged about a few times in the past. This blog entry by an Intel researcher made me think about it again. He continues to proclaim that the future isn’t that far away and that we had better start designing our software so that it can run on machines with thousands of cores. He worries that we aren’t ready yet and need to start getting ready. And he’s right.

Being a tools guy, I think this is the next big paradigm that the tooling industry needs to address. Object-oriented programming and design was a godsend when machines started scaling up in the size of memory and storage and our programs began filling that with data. We built a lot of tools to help with that. Programming languages and compilers are obvious examples. But so are the JDT and CDT, with their code analysis to show type hierarchies and help you easily find classes. Not to mention all the object modeling tools for drawing pictures of your classes.

Coming up with the languages and compilers and other tools necessary to deal with thousands of concurrently running threads is our next great challenge. This is why I keep one eye on the Parallel Tools Project at Eclipse. They’re already in this world, dealing with the thousands of processors that run the supercomputers they work with. This effort is a research project in itself (quite literally, if you notice who participates in the project :).

But as the Intel researcher warns, this stuff is going to hit the mainstream soon. We’re starting to see that with OpenMP parallel language extensions supported in almost all recent compiler releases, including gcc. And I’m convinced it’s an area where modeling can help since you really need to think of your program in multiple dimensions, which is something modeling is good at.
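For anyone who hasn’t tried those OpenMP extensions yet, here’s roughly what the on-ramp looks like: a single pragma asks the compiler to split the loop iterations across however many cores are available (with gcc you compile with the -fopenmp flag). And if the compiler doesn’t support it, the pragma is simply ignored and the loop runs serially, which makes it an easy way to dip a toe in.

#include <cstdio>
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<double> a(n), b(n), c(n);

    for (int i = 0; i < n; ++i) {   // set up some input data
        a[i] = i;
        b[i] = 2.0 * i;
    }

    // This one pragma is the whole parallel programming model here:
    // the compiler carves the iterations up among a team of threads,
    // ideally one per core.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    std::printf("c[42] = %f\n", c[42]);
    return 0;
}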

I think it’s a matter of time before we’re at the head of a new paradigm. I remember the fun we had when object-oriented programming hit the mainstream. I think this one will be just as fun.