LabelProvider improvements

Greetings,

Ever since I got involved in the DDT project I have wanted to improve the label provider. Bruno did not merge much of my changes into the main line, but they seem to have inspired some changes since then. One of the painfully missing things was a visual indication of an element's protection level. The reason the merge did not happen before is that it was supposed to be shown as an overlay.

I opened a feature branch to improve the label provider, and my first step was to show the protection level icons. Since I don't think it is wise to introduce a completely new set of icons, and I'm no talented graphic designer either, I took JDT's protection level icons and applied them as an overlay on all elements except functions and variables. Variables/fields and methods/functions are treated as in JDT: they get a completely different icon. If this is still unacceptable, I'm willing to add an option with a normal mode that keeps the original method/field icons plus the overlay, and a "JDT-style" mode that works as described above.
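
Conceptually the decoration is just image composition. Here is a minimal sketch using JFace's DecorationOverlayIcon, with a made-up class name and icon path; the actual branch works through the modified ScriptElementImageDescriptor_Fix class mentioned below rather than this simplified form:

    import org.eclipse.jface.resource.ImageDescriptor;
    import org.eclipse.jface.viewers.DecorationOverlayIcon;
    import org.eclipse.jface.viewers.IDecoration;
    import org.eclipse.swt.graphics.Image;

    public class ProtectionOverlayExample {

        // Hypothetical descriptor pointing at a JDT-style "protected" overlay icon.
        private static final ImageDescriptor PROTECTED_OVERLAY = ImageDescriptor
                .createFromFile(ProtectionOverlayExample.class, "icons/ovr16/protected_co.gif");

        /** Composes the element's base image with a protection level overlay in a corner. */
        public static Image decorateWithProtection(Image baseImage) {
            DecorationOverlayIcon decorated = new DecorationOverlayIcon(
                    baseImage, PROTECTED_OVERLAY, IDecoration.BOTTOM_RIGHT);
            return decorated.createImage();
        }
    }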

The only issue with this solution is that I had to modify the ScriptElementImageDescriptor_Fix class to make it more flexible. I don't think it is a bad choice, but I recognize that it could pose a problem for the integration with DLTK 4.0. If it comes to that, I will fix it anyway.

It would also be nice to render the return type description in a different color, but that's going to be another story.

You can find the branch here. Also, here's a screenshot of how it looks:


Time for tidying up

Now that Bruno has posted the contribution guidelines, it is time to clean up the mess I made and separate the different features from the master branch and from each other. I'm afraid it won't be easy, given that my public master branch is already littered with these feature fragments.

First I have to identify the unrelated commits in the history in order to determine the new feature branches. I was working on several problems, and most of them aren't ready to be merged into the main development line.

  • ANTLR parser. This feature involves the AST classes, removing the old ANTLR leftovers, and adding the new ANTLR grammar. A potential issue is the constructor code that was added in the type-inference branch, which I integrated into master to get access to those modifications.
  • The D element label provider. This feature aims to bring a more JDT-like icon set to the necessary places, such as the Outline view, the Script View and the completion proposal list. I found that this is basically not much more than one commit. There was also a hack to get the module name into a module definition label, but I'm quite uncertain about that change. It is also a complication, since I submitted it at a random point in time, so I can't use a simple range of commits to separate it out. Perhaps, if it is easy to do, I should simply get rid of it. This feature adds new icon files and changes the DeeModelElementLabelProvider class.
  • Static library support. Now this is a worthy feature, and the one closest to completion. However, the issue mentioned above makes it complicated to separate into its own branch. After the separation I should ask Bruno to have a look at this feature. The affected code is the builder and the project preferences, and it spreads to places like DLTKModuleResolver in the core.parser package. The change sets range from here to there.
  • Bracket insertion. This should be quite straightforward, as it is only one commit.
  • Type inference. The code submitted for this feature does nothing really interesting at the moment. However, it contains some key refactorings that will probably be needed later, for several reasons. One of them is to have a visitor structure that can work with every AST class, and the other is to replace the getMemberScope() method (and later probably other AST member methods) with visitor-based processing code; see the sketch after this list. The actual type inference code is only exploratory, trying to integrate DLTK's type inference basics as an entry point.
  • Renewing the completion proposal collecting code. The visitor refactoring above could potentially be useful for collecting completion proposals, and would be better than the existing code, which is quite obscure. This completion proposal system should be aware of priority ordering, keywords, templates, and all resolvable nodes, not just references; the latter means completing members of expressions (such as a cast of an object). It should also contain the changes for function definitions, but that part is not working at the moment (the feature where the argument list is pre-filled with the parameter names and behaves like the template suggestions).
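
To make the visitor point above a bit more concrete, here is a minimal sketch of the intended shape. The node and visitor names are hypothetical, not the actual classes in the branch, which are more involved:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical, simplified node and visitor types, only to illustrate the idea.
    interface IAstVisitor {
        boolean visit(AstNode node);   // return false to stop descending
        void endVisit(AstNode node);
    }

    abstract class AstNode {
        AstNode[] children = new AstNode[0];

        /** Dispatches the visitor over this node and all of its children. */
        public void accept(IAstVisitor visitor) {
            if (visitor.visit(this)) {
                for (AstNode child : children) {
                    child.accept(visitor);
                }
            }
            visitor.endVisit(this);
        }
    }

    /** Collects member nodes by traversal instead of asking a node for getMemberScope(). */
    class MemberCollector implements IAstVisitor {
        final List<AstNode> members = new ArrayList<AstNode>();

        public boolean visit(AstNode node) {
            members.add(node);
            return true;
        }

        public void endVisit(AstNode node) {
            // nothing to do
        }
    }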

At the end of the day, I had to explore the weird world of rebasing, which I had never really bothered to learn before. That's because every time I encountered it, there was a note that it could mess things up pretty badly, and some even say that rewriting history is like lying.

Rebasing in git means taking a range of commits and "replaying" them on top of a branch or a specific revision. This is the perfect tool for the job I am about to perform. The workflow is like this: take the point where my master branch deviated, pick the commits that are relevant to one piece of functionality, and apply them on top of that deviation point itself. Sounds almost too simple.
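
In command form it is roughly this; the revision is a placeholder and the branch name is just one of the features as an example:

    # start the feature branch at the tip of my messy master
    git branch feature-labelprovider-improvement master

    # replay everything since the deviation point onto bruno's master, using the
    # interactive editor to drop the commits that don't belong to this feature
    git rebase -i --onto bruno/master <deviation-point> feature-labelprovider-improvement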

However, there are problems I need to be aware of. One is rewriting history: I can't rebase a tracking branch and expect to push it easily to its remote. Once I have pushed a branch somewhere, rebasing it is not really an option anymore. So if that happens and I screw up (as happened with my first attempt on the feature-static-library branch, where I included commits that made the branch virtually impossible to merge into bruno's master), there's no way back. The only thing I can do, provided nobody depends on my repository, is wipe out the whole thing and push the necessary branches from scratch. At the end of this exercise that is likely to happen anyway: the only person I share my repository with is bruno, and he doesn't depend on any of my current branches.

That pesky file name capitalization issue is really annoying. If I want to switch to a branch that still has the previous file name, I have to delete the file to get rid of the problem. Not only that, I ran into it while performing the rebase, which is even more annoying, as I had to fix the name with an additional commit.
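
One general workaround for fixing capitalization in a single extra commit is the usual two-step rename; the file name here is made up for illustration:

    # rename in two steps so the case-insensitive file system doesn't get confused
    git mv DeeCodeScanner.java DeeCodeScanner.java.tmp
    git mv DeeCodeScanner.java.tmp Deecodescanner.java
    git commit -m "fix file name capitalization"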

Finally I got the hang of rebasing, so now there's a new repository I'm working with. There are three feature branches so far: feature-static-libraries, feature-labelprovider-improvement and feature-antlr-parser. These branches grow from Bruno's latest master, so there should be no problem merging them into the main line.

LL(k) -> ASTNeoNode

I have recently been working on an ANTLR-based parser for the DDT project. As phase 1, I'm trying to get the current AST hierarchy working under this new parser without having to rely on the Descent parser. It isn't finished yet, but it has reached the point where it is perhaps worth a look from others. To see what is missing, here's my sketchy list:

  • Missing import expression node in the AST
    importExpression : 'import' '(' assignExpression ')' ;
  • Clarify the function literals
  • Missing static if expression.
  • Missing static assert expression.
  • How to deal with conditional statements (version, debug, and such)?
  • Struct initializer must be implemented.
  • What is ExpIftype for? Is that something to do with the template stuff? Or static if?
  • Is IsExpression in the AST somewhere? (It's a pretty tough expression, btw.)
    isExpression
    : is ( Type )
    | is ( Type : TypeSpecialization )
    | is ( Type == TypeSpecialization )
    | is ( Type Identifier )
    | is ( Type Identifier : TypeSpecialization )
    | is ( Type Identifier == TypeSpecialization )
    | is ( Type Identifier : TypeSpecialization , TemplateParameterList )
    | is ( Type Identifier == TypeSpecialization , TemplateParameterList )
  • Template declarations are missing in the parser rules.
  • Template instances are missing in the parser rules.
  • Proper attribute specifier implementation. (That is, accumulate all the attribute specifiers onto the corresponding definition.)
  • Error handling and error recovery resembling the Descent parser's.

The current state is on my clone's master branch here:
http://code.google.com/a/eclipselabs.org/r/gyulagubacsi-ddt-root/sour…

As I'm trying to replace the current parser without changing much of the existing code (I made only a few of the most obvious modifications, such as constructors for creating AST nodes without the conversion process, and in a very few places I added extra fields to existing nodes), I won't add any new features at this point or mess with the AST hierarchy.
This is my first attempt at creating a parser with a parser generator, and I admit there are many places where the current state needs improvement. Later on, we should change the ASTNeoNode hierarchy to work with ANTLR's CommonTree object, which would eliminate the need to create every object individually in action code. (I'm not completely sure how, but I think it is possible to create heterogeneous trees with ANTLR using a factory pattern, which in turn would require the ASTNeoNode classes and interfaces to be more consistent than they are today. As an example of the inconsistencies, at the moment some nodes use ArrayView, others use plain arrays of objects, and so on.)
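
For what it's worth, here is a rough sketch of the factory idea, assuming ANTLR 3's TreeAdaptor mechanism; the node and parser names are made up, and this is not code from the branch:

    import org.antlr.runtime.Token;
    import org.antlr.runtime.tree.CommonTree;
    import org.antlr.runtime.tree.CommonTreeAdaptor;

    // Hypothetical node type bridging ANTLR trees and our AST.
    class DeeTreeNode extends CommonTree {
        public DeeTreeNode(Token token) {
            super(token);
        }
    }

    // Factory that makes the parser build our node type instead of plain CommonTree.
    class DeeTreeAdaptor extends CommonTreeAdaptor {
        @Override
        public Object create(Token token) {
            // A real implementation could switch on the token type to pick
            // different node classes (declarations, expressions, ...).
            return new DeeTreeNode(token);
        }
    }

    // Usage (the parser class name is an assumption):
    //   DeeParser parser = new DeeParser(tokens);
    //   parser.setTreeAdaptor(new DeeTreeAdaptor());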

Under The Bonnet: Parser, Lexer… ANTLR

Sometimes it goes like this: you start to make your well-defined contribution (type inference), and as you assess the job to be done, you realize that some parts are bigger than you thought. So you look into them from time to time, and only after a while do you realize that you've stepped further down into the heart of the project. This time I fell into the parser, but for a reason: at work I had been writing a toy parser for a server the good ol' hand-crafted way, and I found it fun. Originally I had the idea to use bison, ANTLR or something similar to generate a parser, but I rejected it because I was afraid I couldn't learn enough about compiler generators to finish my task in reasonable time.

In recent weeks I have tried to get my head around the core of DDT. I found the current visitor accessibility unfortunately narrow for semantic analysis, type deduction or type inference, and ran into some artefacts in the AST design. Well, this is always good news, as someone has to deal with these issues. Soon enough I found that the lack of a parser producing our AST directly is a source of unnecessary complexity, not to mention possible overhead. I say possible, because I couldn't bring myself to carry out detailed measurements. OK, OK, I'm lazy! Anyway, the obvious choice was to look into the question of an ANTLR parser.

Some initial experiments taught me that I shouldn't follow Walter's BNF-ish description, because 1. it is heavy on left-recursive rule definitions, which need serious refactoring, and 2. there are too many differences between our current AST hierarchy and the way the D documentation explains the language.
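
To give an idea of what that refactoring means, here is a small made-up example (not a rule from the D grammar): the spec-style left-recursive rule on top cannot be handled by an LL(k) generator like ANTLR, so it has to be rewritten into the iterative form below.

    // left-recursive, as a spec would typically write it
    addExpression
        : addExpression '+' mulExpression
        | mulExpression
        ;

    // the equivalent ANTLR-friendly form
    addExpression
        : mulExpression ('+' mulExpression)*
        ;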