I’ve been lucky enough to be involved with the CDT since the day QNX proposed it to the world back in 2002. It’s been a very interesting journey. In the early days, the CDT was almost a side project at Eclipse, where a few vendors had a dream of building a great C/C++ IDE and tried desperately, with the few resources we had, to reach the bar that the JDT guys continuously raised and continue to raise on us. But in those days the people working on the CDT didn’t have a whole lot to do with the other projects at Eclipse.
Callisto has changed that in a lot of ways. First of all, just delivering at the same time as the other 9 projects opens up opportunities for working with them to bring their features to the C/C++ world. I’ve had discussions with TPTP about their static analysis features built on top of the CDT. It’s still small, but it’s a start. And others will arise in the future, I’m sure. But the biggest benefit was our tighter schedule with the platform, where we became early adopters and were able to get bugs fixed instead of having to wait for a maintenance release. And the platform team was very eager to help us out.
For the CDT, even the fact that we knew about 8 months in advance when our delivery date was going to be was a huge benefit. Until then, the release dates for the CDT were at the whim of the vendors providing committers to the CDT, as we tried to match vendor release plans with CDT release plans. It made feature planning very difficult (we even had a 4 month cycle once!). And we look forward to the next release, in a year’s time, which will give us the opportunity to put forward a great program and make the major version jump to CDT 4.0.
For me personally, though, it was just the opportunity to work together with the 9 other project leads and Bjorn, Ward and Ian from the EMO. These are great people and it was a pleasure to work with them towards this great common goal that even Mike said wasn’t possible. We proved them all wrong and have started a new era at Eclipse. And I hope you all enjoy the fruits of our labour, Callisto!
I’ve been pretty quiet lately with the blogging. The main reason is that I’ve been working certain parts of my body off as I try to implement a new indexing architecture for the CDT. There is a lot of good news and a little bad news with this project. The good news is that I can now index Mozilla in 14 minutes on my laptop! In CDT 3.0, that took around 50 minutes, an improvement of over 70%. As well, as you change files, you hardly notice the indexer running, whereas it could take up to 12 seconds to deal with a change in 3.0. I almost fell over when I got the first timing at 14 minutes. Followed shortly by a dance of joy.
How did I do it? Well, I took a hint from the precompiled header feature that most compilers are starting to support. As I’m indexing (and potentially during other parse activities as well), I skip over header files that I have already parsed and get their symbol information from the index. This required building a more structured database for the index, as opposed to the string-based flat table in 3.0. It turns out to be much faster, since parsing C, and especially C++, is a lot slower than the database lookup. This is why incremental times are so fast. I just didn’t realize the whole reindex operation would be so fast as well (my target was 20 minutes for Mozilla).
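The core idea above can be sketched in a few lines. This is a hypothetical toy, not actual CDT code; the names (`Index`, `Symbols`, `parseHeader`, `resolve`) are mine, and the real indexer persists its database to disk rather than keeping a map in memory. The point is simply that a header is parsed at most once, and every later inclusion is answered from the stored symbol information:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical symbol info recorded for one header.
struct Symbols { std::vector<std::string> names; };

class Index {
    // Stand-in for the structured on-disk database: header path -> symbols.
    std::unordered_map<std::string, Symbols> byHeader;
public:
    int parseCount = 0;  // counts real parses, for illustration only

    // Stand-in for the (expensive) C/C++ parser.
    Symbols parseHeader(const std::string& path) {
        ++parseCount;
        return Symbols{{path + "::someSymbol"}};
    }

    // Parse the header only if we have never seen it; otherwise the
    // cheap database lookup replaces the expensive parse.
    const Symbols& resolve(const std::string& header) {
        auto it = byHeader.find(header);
        if (it != byHeader.end())
            return it->second;  // skip the parse entirely
        auto res = byHeader.emplace(header, parseHeader(header));
        return res.first->second;
    }
};
```

Since a large C++ project includes the same headers thousands of times over, almost all of the work collapses into lookups, which is where the 50-minutes-to-14 drop comes from.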
The bad news is that, while it is incredibly faster, it does suffer from being young. There is less captured in the index than there was in 3.0: for Mozilla, about 20% fewer symbols. So searching for certain things isn’t going to get you everything you were looking for. But I have been able to capture the high runners. More bad news is that we are getting spurious StackOverflowErrors, because not all information is in the index and some of the algorithms we have for symbol resolution weren’t prepared for that. So as a result, the new index is only used for Search actions, where we can recover gracefully, and not for content assist and open declaration.
But back to the good news: as we work more on improving the contents of the index, I’ll be able to direct all parser operations to it and make the CDT much more responsive across the board (including my baby, content assist). And even as it is today, there is enough information there for the majority of workflows. Even the field engineers at QNX are extremely happy with it, and these are the front-line guys who need to make sure their customers are happy. More good news is that I’m getting more help with the indexer, both testing and coding. It’s tough to do this as a one-man show, and I appreciate all the help I’m getting from the community.
With the new indexing framework in place in CDT 3.1, the opportunities for exciting new features are wide open. And one of the major objections to using the CDT on large, complex projects has been eased greatly. It’s time to get the message out, now that I can lift my head away from the code!
Curt Schacker, apparently a veteran of the embedded software industry (well, his resume looks good anyway), has an interesting article on LinuxDevices.com on how he sees the state of the embedded software industry. His contention is that we’ve been trying to shove a giant square peg into a giant round hole (his words, not mine), and that the embedded software industry is really a service industry that isn’t well served by off-the-shelf software.
Now mind you, Curt is a co-founder of, you guessed it, an embedded services company. But I have definitely seen the trend, especially in the tools area. It is really hard to sell software development tools in a box. Every customer seems to have different processes, different configuration management systems, build systems, coding standards, you name it. It is very difficult to build a suite of tools that satisfies them all.
The biggest success stories I’ve been a part of in this industry are when we sell the customer a box, but then follow it up with intensive support or custom development to make the software in the box work best for them. There’s nothing worse, for me anyway, than having a customer who bought my box but then let it sit on the shelf because it didn’t really meet his needs. It’s not so good for the reputation, or for future sales.
This is where programs like Eclipse really play into the business needs of software vendors. First, by sharing development costs with other companies, our boxes are cheaper to produce. Second, with Eclipse’s extensibility and customizability, it is easier to take those products and tailor them to individual customers’ needs. Selling services may be more difficult and, as Curt mentions, doesn’t provide the multiples that products do, but it might be the approach that customers have always wanted and the best road to profitability for software vendors.
One of my “too many” interests in the computing industry is how best to serve up web content from embedded devices. The main use I see for such a capability is to give maintenance personnel a convenient and standard way of getting at state and configuration information from the devices under their care. You see it very commonly used for configuring home routers such as my Linksys.
If you were at the CDT BOF at EclipseCon 2005, you would have seen a demo I gave of using gSOAP to do this kind of thing. Since then, I’ve come to the conclusion that SOAP and related protocols over-solve the problem. You can do what I was trying to do with simple HTTP GETs. And with the emergence of AJAX, which makes web pages more interactive using those same simple HTTP requests, this really starts to look like the right architecture.
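To show how little machinery a plain GET needs compared to a SOAP stack, here is a minimal sketch of a request handler that reports device state. Everything here is hypothetical (the `/status` path, the `handleGet` function, the key/value state map are my inventions for illustration); a real device would sit this behind a socket loop or an embedded httpd, and an AJAX page could simply poll the URL:

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical handler: answer "GET /status" with the device's state
// as a plain-text HTTP/1.0 response. No SOAP envelope, no XML parsing;
// any browser or XMLHttpRequest can read it.
std::string handleGet(const std::string& path,
                      const std::map<std::string, std::string>& state) {
    if (path != "/status")
        return "HTTP/1.0 404 Not Found\r\n\r\n";

    // Render the state as simple key=value lines.
    std::ostringstream body;
    for (const auto& kv : state)
        body << kv.first << "=" << kv.second << "\n";

    std::ostringstream resp;
    resp << "HTTP/1.0 200 OK\r\n"
         << "Content-Type: text/plain\r\n"
         << "Content-Length: " << body.str().size() << "\r\n\r\n"
         << body.str();
    return resp.str();
}
```

The whole "protocol" is the URL and a text format you choose yourself, which is exactly why it fits on a small device where a full SOAP toolkit is overkill.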
The problem I had was how do you integrate an HTTP server with your embedded application. There are a few httpd library packages around, but none of them appear to have enough momentum behind them to take the industry by storm. I had considered writing my own, but going through the HTTP spec I quickly came to the conclusion that it would take more work than I wanted to sink into it at this point.
Then I ran across Nokia’s Raccoon project, where they’ve ported Apache to the Symbian OS that they use in their cell phones. My head almost fell off. I thought Apache was this big, monolithic web server driving the bulk of the web servers on the Internet, the big-iron types. Could Apache be made small enough to fit into embedded devices? Nokia seems to have been able to do it. And looking at Apache’s modular architecture, it looks like you could write some cool modules that interact with the software on the device without having to resort to the slow and clunky CGI interface. Very cool, and something I need to look into more.