I’m currently working on Azure and web packaging (MSDeploy) support for NPanday. I want to do that on a separate SVN branch, which I’ll then reintegrate later on.

The current trunk version is 1.4.1-incubating-SNAPSHOT, and since we are about to release that upcoming version 1.4.1-incubating soon, I don’t want to pollute it with half-baked changes, while I’ll still need to develop on the trunk in parallel.

I’ll also need to be able to install both the current trunk and my experimental branch in my local Maven repository at the same time, hence I need a new temporary version for my branch. All this can be achieved using the Maven Release Plugin, in particular the branch goal. Maven Release supports 14 SCMs through the same interface; in this case we use SVN, though.

What I want

How-to

The command I need to run

mvn release:branch 
   -DbranchName=1.5.0-azuresupport
   -DautoVersionSubmodules=true
   -DsuppressCommitBeforeBranch=true 
   -DremoteTagging=false 
   -DupdateBranchVersions=true 
   -DupdateWorkingCopyVersions=false 

Let’s go through the settings line by line:

mvn release:branch

Loads and executes the branch goal of the Maven Release Plugin.

-DbranchName=1.5.0-azuresupport

The name of the branch to be created. When on trunk, Maven figures out that it should use the default SVN layout for branches and tags. You can optionally define the branch base using the parameter branchBase, like this: -DbranchBase=https://svn.apache.org/repos/asf/incubator/npanday/branches/

-DautoVersionSubmodules=true

When run, Maven will prompt for the version to be used in the branch. I provided 1.5.0-azuresupport-SNAPSHOT. Since autoVersionSubmodules is set to true, Maven Release will automatically use this version for all submodules and hence also update all inter-module dependencies to that version.

The next four settings go hand-in-hand.

-DsuppressCommitBeforeBranch=true

By default, Maven Release creates intermediate commits against the current working copy. I’m not sure of the reason, but I think it is because some VCSs do not support branching/tagging of modified working copies. This parameter ensures that no intermediate commits are made to the working copy.

-DremoteTagging=false

With SVN, tags are created remotely by default. If you want to omit intermediate commits, this must be set to false.

-DupdateBranchVersions=true

-DupdateWorkingCopyVersions=false

When branching, you can define new versions for the current working copy, for the new branch, or for both. As configured here, the working copy will be left alone, and the plugin will ask for a new version for the branch.

Now I can switch back and forth between the trunk and the new branch, and still build and deploy artifacts side by side.

Defaults in POM

You may also provide the fixed values in the POM. If you want to avoid interfering with other Maven Release actions, you might want to put them in a profile.

<profile>
  <id>branch</id>
  <activation>
    <property>
      <name>branchName</name>
    </property>
  </activation>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-release-plugin</artifactId>
        <version>2.2.1</version>
        <configuration>
          <branchBase>https://svn.apache.org/repos/asf/incubator/npanday/branches</branchBase>
          <autoVersionSubmodules>true</autoVersionSubmodules>
          <suppressCommitBeforeBranch>true</suppressCommitBeforeBranch>
          <remoteTagging>false</remoteTagging>
          <updateBranchVersions>true</updateBranchVersions>
          <updateWorkingCopyVersions>false</updateWorkingCopyVersions>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

Now it is enough to run mvn release:branch -DbranchName=1.5.0-azuresupport

As I’m doing some work for the NPanday Visual Studio Add-in, it bugs me even more that my Visual Studio 2010 currently needs about 40 seconds to start.

Actually, I’m not surprised at all, as I have installed every single extension I ever found interesting. But should I now disable them all, or rather find out which ones take the most time?

I did the latter.

After reading “Did you know… There’s a way to have Visual Studio log its activity for troubleshooting? – #366” (found via the Stack Overflow question “VS2010 loads slowly. Can I profile extensions’ respective startup time?”), I learned that if you start VS using devenv /Log, it will log its activity to %AppData%\Roaming\Microsoft\VisualStudio\10.0\ActivityLog.xml (for VS 2010). It even comes with an XSL stylesheet that provides some readable output:


New XSL with Profiling Capabilities

So I tweaked the XSL to be a little bit more “profiling-friendly”. It will now:

  • Tell me how long it took to load each package
  • Give me a visual indicator every second (configurable)
  • Mark each “End package load” line red if it exceeds a configurable threshold (default 500 ms)
  • Mark each normal line red if it exceeds the configured threshold


Hotspots


Download and use with GIT

  1. Open a command window in %AppData%\Roaming\Microsoft\VisualStudio\10.0
  2. Run git clone https://github.com/lcorneliussen/ActivityLogProfiler
  3. Start Visual Studio with ‘/Log’ switch
  4. Run deploy.cmd (will overwrite default ActivityLog.xsl in parent folder; Visual Studio will replace it after restart!)
  5. Open ActivityLog.xml in Internet Explorer

You can also download it manually (from here) and replace %AppData%\Roaming\Microsoft\VisualStudio\10.0\ActivityLog.xsl by hand after each Visual Studio run.

But with GIT you can easily get updates; and it makes it easier to submit patches, which I’ll be happy to apply.

Attention: you only have to repeat steps 3 and 4 to produce new logs, as Visual Studio recreates both ActivityLog.xml and ActivityLog.xsl each time it is started with ‘/Log’.

After I had searched all too long for this solution, I thought I’d share it:

1. Download NSSM – the Non-Sucking Service Manager

Unzip the package you can download from their home page (http://nssm.cc/download/nssm-2.10.zip). You’ll find nssm.exe in the win32 and win64 folders; pick the one appropriate for your platform.

2. Install Fitnesse as Service

Run this from command line (adjust paths and port):

nssm install FitnesseService java -jar <path-to-jar>\fitnesse.jar -d <installation-path> -p <your port> <further-options>

If you need to retry, stop the service, then run: nssm remove FitnesseService confirm

3. Start Fitnesse Service

Run net start FitnesseService or start it from the service manager.

In the past few years I have been doing some open source contribution, and lately I have also initiated projects together with fellow programmers. Here is what I noticed.

First Stage: Contributing

You find something useful and start to play with it. Then you find a show-stopping bug, or you’d like to have a feature. You get in contact with the team, then submit issues to a bug tracker, if you find one. They might like your ideas and ask for a patch. If it is important – and in many cases easier to fix than to rewrite on your own – you start fiddling with the code, make changes, write tests, and submit patches.

If the team or individual applies the patches, you are happy. Usually you have to maintain your own codebase until the feature is released – or you just compile from trunk :-)

You might also just contribute documentation or tests.

Some personal experiences in this stage

Over the last couple of years I contributed to the following projects:

For these I also submitted contributions, but they are still “in progress”, as they have never been accepted :(

Since these patch acceptance processes take time, but I needed the fixes, I run a personal fork of maven-versions-plugin on GitHub. A custom-built artifact is even in use by one of my former customers, because I couldn’t get the patch accepted in time.

Second Stage: Initiating

You create something and publish your code as a zip, on GitHub, CodePlex, or somewhere else. Then people start liking it and begin submitting patches or reporting issues. Some people create forks and start maintaining their own code based on yours. Or they (or you) ask to join the project.

Some projects I founded or work on at this stage:

Third Stage: Team OSS

At this point things change a lot. Who is now in charge of the code base? Who builds and releases it? Who answers submitted issues? Well, you have to figure that out and find a process that fits. This usually works quite well with people you personally know, co-workers or even friends. When interacting with others in the community, you’ll find out that everybody has a slightly different agenda – which is good for the project! But it makes decisions harder.

We are currently at this stage with these projects (as of mid-2011):

Fourth Stage: Joining an umbrella organization

Now the need for a good process and for some infrastructure arises. You need hosting, build servers, an issue tracker: this is what umbrella organizations like the Eclipse Foundation, the Apache Software Foundation, the Outercurve Foundation or Codehaus help you with.

Apache, for example, is very vote-driven. If you want somebody to become a committer, you ask for a vote on the project’s private mailing list. If you want to release a version, you create it, deploy it to a staging area, and then ask for votes. We once released an Apache NPanday version without waiting the full 72 hours the vote needs to be open; we then had to remove (physically!!) the release from the download servers and Maven repositories – as if it had never happened.

Apache also offers some infrastructure: a CMS for documentation, Jira for issue tracking, Hudson and TeamCity for continuous builds, SVN+Git for version control, mailing lists (users, dev, private), a committer directory, and so on.

Every team has to report to the Apache Board of Directors.

Read more on ASF here: http://www.apache.org/foundation/how-it-works.html

Eclipse works quite similarly, but has an even tighter organization. In the bigger projects you have fixed release dates and milestones. And there is something like a Project Lead, who is elected – but then authorized to make decisions alone – as I understand it.

In bigger projects usually a couple of companies donate code and/or full-time developers.

Around August 2010, after having submitted a couple of patches, I was voted in as a committer on Apache NPanday Incubator (Maven for .NET, declarative build and release management), which moved from CodePlex to the ASF around October last year.

Under the ASF it is still in incubation, but we hope to graduate to a top-level project soon. This means we will have our own PMC and an Apache top-level domain (npanday.apache.org).

Supplementary: Commercial Support for OSS-Projects

Some companies are afraid of relying on a community for support. Therefore, companies that are very committed to a certain project often offer commercial support options for open source projects.

For example, my employer, itemis AG, offers commercial support for Eclipse Modeling Xtext: Professional Services – Xtext

Happy coding! Happy committing!

Sometimes, given a concrete type, you need to find out whether it implements a specific generic type or interface and, even more interesting, which concrete type arguments it uses in its implementation.

Since this information can be “hidden” somewhere in the type hierarchy, I created some helper extension methods that dig it out, and released them as a Minimod. What is a Minimod?

It can answer these simple questions:

Is List<Int32> an IEnumerable<T>? yes
If it is, what are the type parameters? Int32

Is Dictionary<Int32,String> an IDictionary<TKey,TValue>? yes
If it is, what are the type parameters? Int32,String

This is the test/example code generating them:

[Test]
public void PrintExamples()
{
    printExample(typeof(List<int>), typeof(IEnumerable<>));
    printExample(typeof(Dictionary<int, string>), typeof(IDictionary<,>));
}

private static void printExample(Type type, Type genTypeDef)
{
    bool isOfGenericType = type.IsOfGenericType(genTypeDef);

    Console.WriteLine("Is {0} an {1}? {2}",
                      type.GetPrettyName(),
                      genTypeDef.GetPrettyName(),
                      isOfGenericType ? "yes." : "no.");

    if (isOfGenericType)
    {
        Console.WriteLine("If it is, what are the type parameters? "
                          + type.GetGenericArgumentsFor(genTypeDef)
                                .Select(_ => _.GetPrettyName())
                                .JoinStringsWith(", "));
    }
}

Summary

The Minimod adds to System.Type the following extension methods:

  • bool IsOfGenericType(this Type type, Type genericTypeDefinition)
  • Type[] GetGenericTypesFor(this Type type, Type genericTypeDefinition)
  • Type GetGenericTypeFor(this Type type, Type genericTypeDefinition)
  • Type[] GetGenericArgumentsFor(this Type type, Type genericTypeDefinition)
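For illustration, the core idea behind these helpers – walking the type hierarchy to find a parameterization of a given generic type definition – can be sketched in Java reflection. This is a hypothetical, simplified analogue, not the Minimod’s actual implementation: it only inspects directly declared generic supertypes, whereas the Minimod resolves the full hierarchy.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;

public class GenericArgs {
    // Example type to inspect: a concrete subclass binding T = String.
    static class StringList extends ArrayList<String> {}

    // Returns the actual type arguments `type` uses for the raw generic
    // supertype `rawTarget`, or null if no direct parameterization is found.
    static Type[] getGenericArgumentsFor(Class<?> type, Class<?> rawTarget) {
        List<Type> candidates = new ArrayList<>();
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            candidates.add(c.getGenericSuperclass());          // may be null
            for (Type i : c.getGenericInterfaces()) candidates.add(i);
        }
        for (Type t : candidates) {
            if (t instanceof ParameterizedType) {
                ParameterizedType p = (ParameterizedType) t;
                if (p.getRawType() == rawTarget) return p.getActualTypeArguments();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Is StringList an ArrayList<T>? If so, what is T?
        Type[] arguments = getGenericArgumentsFor(StringList.class, ArrayList.class);
        System.out.println(arguments != null);  // true
        System.out.println(arguments[0]);       // class java.lang.String
    }
}
```

Unlike C#, Java erases generics at runtime, so this only works for type arguments fixed in a class declaration – which is exactly where they hide in the hierarchy.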

Get it!

It is published to Nuget as a Minimod: NuGet gallery

Other minimods are also available there: Packages - NuGet gallery

Original source code is found at: minimod/minimods - GitHub

On Jul 28th we released a first version of our Xtext unit testing utilities. It makes unit-testing of Xtext-based grammars a no-brainer!

Integration-style testing of files


An integration-style test covers many of the aspects you want to test when writing new grammars (test-first) or changing existing ones (regression).

A simple test like this one:

@Test
public void person_no_attributes(){
    testFile("person_no_attributes.dmodel");
}

…will cover the following functionality:

  • Model file parsed into EMF-model with no errors
  • All cross-references resolved
  • Validation passed; no warnings, no errors.
  • Serializer runs without errors
  • Formatter runs without errors
  • Serialize+Format output exactly matches the input file’s content

Further capabilities (need documentation!)

  • Fluent API for validation assertions. Example:

// error line 1: person.name-feature is missing display name
assertConstraints(
  issues.errorsOnly()
        .inLine(1)
        .under(Modul.class, "person")
        .named("name")
        .oneOfThemContains("missing display name")
);

  • Switches for turning serialization and the formatter on/off

Unit-testing of Parser- or Lexer-Rules

I’ll just show an example here:

@Test
public void id(){
    testTerminal("bar", /* token stream: */ "ID");
    testTerminal("bar3", /* token stream: */ "ID");
    
    testNotTerminal("3bar", /* unexpected */ "ID");
    
    // token streams with multiple token
    testTerminal("foo.bar", "ID", "'.'", "ID");
}

@Test
public void qualifiedNameWithWildcard(){
    testParserRule("foo.*", "QualifiedNameWithWildCard");
}

Download / Install

The Eclipse plugin is available on this P2 update site: http://xtext-utils.eclipselabs.org.codespot.com/git.distribution/releases/unittesting-0.9.x

The project is hosted on Google Code:

Source: http://code.google.com/a/eclipselabs.org/p/xtext-utils/source/checkout?repo=unittesting

Unit-Testing Documentation: Unit_Testing - xtext-utils - Unit testing - A collection of useful utilities and samples for Xtext based projects - Google Project Hosting

I once wrote this, but then never posted it. I thought it could be a good article, but never polished it enough. So here it is, untouched from somewhere in 2009 – hence possibly outdated – but still too good to keep to myself…

Code is available here: lcorneliussen/Discovering-Entity-Framework

LINQ to Entities is great for demos

Ever since the ADO.NET Entity Framework 1.0 was released as part of .NET 3.5 SP1 in August 2008, it has gotten a lot of bad press. This article is not about bad press. It’s about real-world project experiences, where we hit walls almost everywhere we tried to go with Entity Framework. Our scenario is maybe not the most common one, but that’s exactly where EF proves it’s not mature enough for production.

You know those scenarios where you have to implement real-world requirements? Those where telling your customer that you can’t implement a certain requirement because your framework doesn’t support it would barely help?

Well, then we are on the same page!

Scenario

Let me introduce you to our scenario. We have a set of plug-ins organized by categories. Every plug-in manages its own tables, but the framework requires certain tables and columns per category. The data is managed by some legacy DAOs.

Let’s say, just to prevent being fired, we have a plug-in category “People” and two plug-ins named “Singles” and “Spouses”, managing some people together with their cars.

And another category

Now, as in every real-world product, we need to easily query this data without having to care about all the different tables. So we decided to dynamically create views on top of each of the different plug-in tables that fake a consistent relational model per plug-in category:


This, we thought, could easily be mapped to an object structure, letting an O/R mapper do all the complex joining and sub-querying over the admittedly much larger production model.

Choosing a Product

My customer, as many customers are, was afraid of using something that does not come from Redmond. Since this scenario is so simple, I decided to give both EF and LINQ to SQL a chance.

Relationships (Many-to-Many) were easier to model in EF, so I went for it.

After overcoming some initial problems, such as:

  • While generating the entities for my views, it turned out that DB-view support in EF is quite poor. It just marks all non-nullable fields as a compound entity key. After changing this to something more natural, you can’t update your model from the database anymore, which makes changes quite expensive in this case.
  • Mapping one-to-many relationships in the designer feels quite unnatural.

… I finally got it working. And it demos very well!! But that is basically all it does. It demos well. Nothing more.


Almost every real task I had to solve using EF was a nightmare! First of all, let me explain the difference between LINQ, LINQ to SQL, EF, LINQ to Entities and EF Object Query. If you already know all of these, just skip ahead.

LINQ

LINQ is a set of interfaces and extension methods, a C#/VB.NET language feature, plus a runtime that enables Language-INtegrated Querying against any structured data source. See here for a list of LINQ implementations, and here. If you want to know more about the background, read this.

## LINQ example (on IQueryable and IEnumerable) ##

LINQ to SQL

LINQ to SQL is way too often referred to simply as LINQ. That is wrong. LINQ is neither the same as LINQ to SQL, nor is it an O/R mapper at all.

Instead, when Microsoft released the language feature along with C# 3.0 and the .NET Framework 3.5, they also delivered several implementations of LINQ: LINQ to XML, LINQ to Objects, LINQ to SQL and LINQ to DataSet.

EF (Entity Framework)

In addition, Microsoft released another O/R mapper called the ADO.NET Entity Framework. Using EF, you can choose between querying via the embedded LINQ support, called LINQ to Entities, or using Object Queries.

LINQ to Entities

LINQ to Entities is simply an implementation for querying the Entity Framework using LINQ syntax. This gives you nice features such as type safety, selecting subsets using anonymous types, and so on.

var query = from p in model.AllPeople
            where p.Age > 30
            select p.Name;

foreach (var name in query)
{
    Console.WriteLine(name);
}

/* Results
*  Michael Jendryschik (A)
*  Michael Kloss (B)
*/

EF Object Query

Object Queries are similar to SQL, but operate on the conceptual object model. They do not offer type safety, but enable much more sophisticated queries.

Within Object Queries you can either use the Query Builder Methods:

var query = model.AllPeople
    .Where("it.Age > @age", new ObjectParameter("age", 30))
    .SelectValue<string>("it.Name");

… or go for manual queries using Entity SQL:

var query = model.CreateQuery<string>(
    "SELECT VALUE p.Name FROM AllPeople AS p WHERE p.Age > @age", 
    new ObjectParameter("age", 30));

All these variants result in the same SQL query:

SELECT 
[Extent1].[Name] AS [Name]
FROM (SELECT 
      [AllPeople].[Id] AS [Id], 
      [AllPeople].[Name] AS [Name], 
      [AllPeople].[Age] AS [Age]
      FROM [dbo].[AllPeople] AS [AllPeople]) AS [Extent1]
WHERE [Extent1].[Age] > 30

So what to choose? Choice necessary?

Problems with LINQ to Entities and/or Object Query

The problem is that LINQ to Entities and ObjectQuery operate on the same objects but still have separate architectures. Sometimes they mix; sometimes they don’t.


ObjectQuery<T> and IQueryable<T>

Since ObjectQuery<T> implements IQueryable<T> (the basis for LINQ queries), you get all Query Builder Methods plus all the default LINQ extension methods for IQueryable<T>.

As soon as you execute one of the many nice extension methods, you’re locked into LINQ to Entities, which won’t let you apply any ObjectQuery-specific expressions on top anymore. But you can start with an Object Query and then run LINQ queries against it.


So what? Then let’s just use LINQ to Entities, right?

Let’s sum up the good news about LINQ to Entities first:

  • Nice syntax for those who are used to it
  • Type safety !!!
  • Very easy to join relationships (OK in Object Query)
  • Simple sub-queries using Any(…)
  • Very easy to use with anonymous types
    (which are not really well supported in ObjectQuery)

Well, the problem is that you can get very, very far with LINQ to Entities, and then suddenly you hit the wall. It has some pretty major limitations that in my opinion make it unsuitable for production use:

Back to some queries

Let me demonstrate those problems using some simple queries.

“List all people with their cars, ordered by the model name”

from p in model.AllPeople
from c in p.Cars
orderby c.Model
select new { p.Name, c.Model };

/* Results
* { Name = Michael Kloss (B), Model = Mercedes E350 CDI Coupé }
* { Name = Lars Corneliussen (B), Model = Skoda Oktavia }
* { Name = Lars Corneliussen (B), Model = VW Fox }
* { Name = Michael Jendryschik (A), Model = VW Golf 5 }
* { Name = Michael Kloss (B), Model = Vw Golf 6 }
*/

Very, very nice! And very simple!

LINQ to Entities automatically joins each Person p with its cars via c in p.Cars. In further restrictions, the ordering, or the select statement, you then have access to a joined pair of a Person p and a Car c.

This results in the following SQL select statement:

SELECT 
[Project1].[C1] AS [C1], 
[Project1].[Name] AS [Name], 
[Project1].[Model] AS [Model]
FROM ( SELECT 
    [Extent1].[Name] AS [Name], 
    [Extent2].[Model] AS [Model], 
    1 AS [C1]
    FROM  (SELECT 
      [AllPeople].[Id] AS [Id], 
      [AllPeople].[Name] AS [Name], 
      [AllPeople].[Age] AS [Age]
      FROM [dbo].[AllPeople] AS [AllPeople]) AS [Extent1]
    INNER JOIN (SELECT 
      [AllPeopleCars].[PersonId] AS [PersonId], 
      [AllPeopleCars].[Model] AS [Model], 
      [AllPeopleCars].[Id] AS [Id]
      FROM [dbo].[AllPeopleCars] AS [AllPeopleCars]) AS [Extent2] ON [Extent1].[Id] = [Extent2].[PersonId]
)  AS [Project1]
ORDER BY [Project1].[Model] ASC

Don’t ask me what the nested select statement and C1 are for, but otherwise this is quite straightforward.

But, as you can also see, the two from statements result in an INNER JOIN. But what about ecologically minded people without cars, such as my colleague Hergen?

“Also list people without cars”

On the SQL side we just have to use a LEFT OUTER JOIN instead of the current INNER JOIN. But how to do that with LINQ?

LINQ does natively support joins, but doesn’t offer a direct keyword that would let you specify how the join should be mapped to a relational structure.

But there is a simple trick that semantically should do the same thing:

from p in model.AllPeople
from c in p.Cars.DefaultIfEmpty()
orderby c.Model
select new { p.Name, Model = c != null ? c.Model : null };

DefaultIfEmpty() should return the collection itself or, if the base collection p.Cars is empty, a collection containing a single default value of the expected type – default(Car) == null in this case.

It depends on the LINQ implementation whether Model on c (which can be null) is accessible directly (as in SQL) or has to be null-checked first. Since this code is never actually compiled and run, but instead converted to an expression tree and passed to the specific LINQ implementation for interpretation, the implementation could gracefully do the null-checking internally – which some do.
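The left-outer-join semantics of DefaultIfEmpty() can be sketched over plain in-memory collections – here in Java rather than C#, with made-up data as a hypothetical stand-in for the EF model:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LeftJoinDemo {
    // Mirrors LINQ's DefaultIfEmpty(): an empty sequence becomes a
    // one-element sequence holding the default value (null here).
    static <T> List<T> defaultIfEmpty(List<T> seq) {
        return seq.isEmpty() ? Collections.singletonList(null) : seq;
    }

    public static void main(String[] args) {
        // person name -> car models (hypothetical in-memory data)
        Map<String, List<String>> people = new LinkedHashMap<>();
        people.put("Hergen", Collections.emptyList());
        people.put("Lars", List.of("Skoda Oktavia", "VW Fox"));

        // "from p in people, from c in p.Cars.DefaultIfEmpty()":
        // car-less people survive the join with a null model,
        // just like rows produced by a LEFT OUTER JOIN.
        for (Map.Entry<String, List<String>> p : people.entrySet())
            for (String model : defaultIfEmpty(p.getValue()))
                System.out.println(p.getKey() + " | " + model);
        // Hergen | null
        // Lars | Skoda Oktavia
        // Lars | VW Fox
    }
}
```

This is exactly the behavior the LINQ query above asks for; the open question in the post is only whether a given LINQ provider can translate it.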

So LINQ would let you express it this way. But the LINQ implementation for Entity Framework (LINQ to Entities) does not support DefaultIfEmpty(). There is a lot of information out there on left outer joins in LINQ to Entities, but nothing that really works. Do you know a solution? Please tell me!

So here we have to switch to Entity SQL (Object Query). We could also use the Query Builder Methods, but let’s save that for later.

var query = model.CreateQuery<IDataRecord>(
    "SELECT p.Name, c.Model FROM AllPeople as "
  + "p OUTER APPLY p.Cars as c ORDER BY c.Model");

var casted = from item in query.ToArray() /* executes here */
  select new {
    Name = item.GetString(0),
    Model = (item.IsDBNull(1) ? null : item.GetString(1))
  };

/* Results
* { Name = Hergen Oltmann (A), Model = }
* { Name = Michael Kloss (B), Model = Mercedes E350 CDI Coupé }
* { Name = Lars Corneliussen (B), Model = Skoda Oktavia }
* { Name = Lars Corneliussen (B), Model = VW Fox }
* { Name = Michael Jendryschik (A), Model = VW Golf 5 }
* { Name = Michael Kloss (B), Model = Vw Golf 6 }
*/

Now we got what we wanted. The SQL query sent to the server simply has a LEFT OUTER JOIN instead of the INNER JOIN.

But we had to refactor the whole query and lost all the convenience LINQ to Entities offers for this simple requirement. Since Object Query has no support for anonymous types, we also have to access the result set in a generic manner, which throws us back to, say, SqlDataReader.

The story goes on. Let’s forget people without cars for now and try some other real-world restrictions.

“Let’s list all people driving a Volkswagen”

Back to our favorite LINQ-syntax:

from p in model.AllPeople
where p.Cars.Any(c => c.Model.Contains("VW"))
select p;

/* Results
* { Name = Michael Jendryschik (A), Age = 42 }
* { Name = Lars Corneliussen (B), Age = 26 }
* { Name = Michael Kloss (B), Age = 33 }
*/

The results are what we expected, and this is the SQL query being executed:

SELECT 
[Extent1].[Id] AS [Id], 
[Extent1].[Name] AS [Name], 
[Extent1].[Age] AS [Age]
FROM (SELECT 
      [AllPeople].[Id] AS [Id], 
      [AllPeople].[Name] AS [Name], 
      [AllPeople].[Age] AS [Age]
      FROM [dbo].[AllPeople] AS [AllPeople]) AS [Extent1]
WHERE  EXISTS (SELECT 
    cast(1 as bit) AS [C1]
    FROM (SELECT 
      [AllPeopleCars].[PersonId] AS [PersonId], 
      [AllPeopleCars].[Model] AS [Model], 
      [AllPeopleCars].[Id] AS [Id]
      FROM [dbo].[AllPeopleCars] AS [AllPeopleCars]) AS [Extent2]
    WHERE ([Extent1].[Id] = [Extent2].[PersonId]) AND ((CAST(CHARINDEX(N'VW', [Extent2].[Model]) AS int)) > 0)
)

Instead of doing a sub-query we could also easily use a join in combination with distinct. That doesn’t really make a difference.

But do you see this ugly part there? ((CAST(CHARINDEX(N'VW', [Extent2].[Model]) AS int)) > 0) Well, it works for a demo, but what we usually really want Contains to map to is a LIKE, right?

Back to the real customer: do you know those DataGrids with filter fields beneath each column title? Let’s pretend someone enters “M” to filter the resulting rows by name.

“Give me only those people driving Volkswagen, whose names start with ‘M’“

Since LINQ to Entities supports neither StartsWith nor Like, we have to switch to Object Query again.

This time, we’ll use the query builder methods. At the same time we also replace the CHARINDEX for “VW” with a proper LIKE.

var query = model.AllPeople
  .Where("EXISTS(SELECT c FROM it.Cars as c "
       + "WHERE c.Model LIKE '%VW%')")
  .Where("it.Name LIKE @pattern",
          new ObjectParameter("pattern", "m%"));

/* Results
* { Name = Michael Jendryschik (A), Age = 42 }
* { Name = Michael Kloss (B), Age = 33 }
*/

This results in the following SQL statement:

SELECT 
[Extent1].[Id] AS [Id], 
[Extent1].[Name] AS [Name], 
[Extent1].[Age] AS [Age]
FROM (SELECT 
      [AllPeople].[Id] AS [Id], 
      [AllPeople].[Name] AS [Name], 
      [AllPeople].[Age] AS [Age]
      FROM [dbo].[AllPeople] AS [AllPeople]) AS [Extent1]
WHERE ( EXISTS (SELECT 
    cast(1 as bit) AS [C1]
    FROM (SELECT 
      [AllPeopleCars].[PersonId] AS [PersonId], 
      [AllPeopleCars].[Model] AS [Model], 
      [AllPeopleCars].[Id] AS [Id]
      FROM [dbo].[AllPeopleCars] AS [AllPeopleCars]) AS [Extent2]
    WHERE ([Extent1].[Id] = [Extent2].[PersonId]) AND ([Extent2].[Model] LIKE '%vw%')
)) AND ([Extent1].[Name] LIKE 'm%')

I hope my example queries were able to give you an overview of the state of LINQ to Entities and its current limitations.

Resources on EF

http://www.amazon.com/Programming-Entity-Framework-Julia-Lerman/dp/059652028X

Inheritance in EF

Entity Data Model Inheritance (Application Scenarios)

ADO.NET team blog : Inheritance in the Entity Framework

My Advice

If you have to stick with Entity Framework, use ObjectQuery and Entity SQL, and avoid LINQ to Entities unless you know what you are doing. Maybe you should also look into some community tools around EF.

If you haven’t chosen yet, or if you still have the chance to get out of the tight spot, consider other O/R mappers such as NHibernate, which we also switched to later in the project.

I’ll show how we solved our problems using NHibernate in a follow-up post soon.
