Oslo, Quadrant and the Repository are dead, long live “M”?

What the community (me included) had both hoped for and feared, and what many foresaw, has happened: Microsoft discontinued its investment in SQL Server Modeling Services (aka the “Oslo” Repository) as well as “Quadrant”, as Don Box states on the Model Citizen blog.

One important aspect of that focus is that we will not bring “Oslo” repository to market.

The winners on the data side of things are OData and the Entity Framework, including EDM.

Given the increasing adoption of both OData and EDM, we have decided to focus our investments in those technologies to make our modeling vision a reality.

But Don also states that Microsoft will continue to invest in the modeling language codenamed “M”, which was also part of “Oslo” but clearly never fit the “SQL” rebranding in the first place.

While we used “M” to build the “Oslo” repository and “Quadrant,” there has been significant interest both inside and outside of Microsoft in using “M” for other applications. We are continuing our investment in this technology and will share our plans for productization once they are concrete.

I think, all in all, this is good news. Both the Repository and Quadrant were very far from what I consider useful.

Hopefully the “M” team moves over to DevDiv into the ScottGu area. Hopefully it then returns to collaborative language design, now that the internal goals are gone. Hopefully Microsoft now also listens to the modeling community when designing the final language. And hopefully they bring great Visual Studio support for textual DSLs.

Hey Microsoft, what about open sourcing “M”? This would be a huge step forward, after this very huge step backwards!

Lillesand #1: Introducing the Work Breakdown Structure for Task Estimation

In a dream world a developer would code all day, right? Well, a very common task besides coding is estimating tasks of all kinds. Estimating, though, sucks. For badly educated product managers and for many of our customers an estimate equals a personal commitment. So you’d better get it right the first time!

In many projects at itemis we use something we call a WBS, which stands for work breakdown structure. The term itself is not new; it is standard PMI vocabulary. But combined with three-point estimates and a bit of statistics it becomes a very powerful tool.

Work Breakdown Structure

A WBS is nothing more than breaking your work down into small units that can easily be estimated. It’s just an estimation technique and can be applied in any traditional or agile process.

Three-point Estimates

Three-point estimates help you incorporate risk into your estimate. Let’s say I discovered a simple task by breaking down the big project of writing this particular blog entry about WBS.

Example Task: Graphically illustrate the WBS domain model

Now think about how you would potentially solve this. The simplest solution would be to create a diagram of the classes I already have in Visual Studio. That would be almost no work. But those diagrams look too cheap and they contain the wrong details. So I’ll do it manually, probably using OmniGraffle, a Mac-only diagramming tool.

Now, ask yourself three questions. How many hours would it take to finish this task, if…

  • … I’m highly motivated, in the middle of a diagramming rush, no tools crash, no one calls me, nobody asks for help on some other task, …? You see where this goes. Since there is a great stencil available that I’ve used before, I’d just drag the classes onto the canvas and the auto-layout will probably be good enough. Let’s say, in the best case I’ll need about 1 hour to illustrate the domain model.
  • … I’ve done this before, and I know visualizing is always tricky. Based on my experience I’d expect a little bit of trouble with the layout. It will probably also be necessary to adjust the stencil a little. I’d also ask some of my colleagues for their opinion and then make some adjustments. In the average case I should be done in 4 hours.
  • … I’ve had a busy week, it’s Monday morning 9 am and I am absolutely not in the mood to work at all, let alone do anything creative. OmniGraffle crashes all the time, and the stencil turns out not to do the job. The telephone rings constantly, and my roommate happens to have a loud discussion with his PM because he just spent 3 days on a task he estimated at 2 hours. Also, I have to start from scratch once because I forgot to save, and another time because I just messed up the layout. This is not really likely to happen all at once, but in the worst case I’d be on the task for 12 hours. I couldn’t imagine it taking longer.

Now I’ve got three numbers: in the best case 1 hour, in the average case 4, and in the worst case 12 hours.

Statistics

Now, you can tell neither your customer nor your PM that you need anywhere between 1 and 12 hours to finish the task. For a single task this might even be correct, but when you have a set of many tasks it’s not very likely that the worst case applies to every one of them.

I’m not really into the details of the statistics, but I’ll try to explain them from what I’ve understood.

There are two interesting values that can be calculated per task. Let’s call them the pert and the variance:

  • The pert is an educated guess. It considers the average case (A) four times more reliable than the best (B) and worst (W) case.
    The pert formula: (B + 4A + W) / 6
  • The variance is a value that indicates how far apart the best and worst case are. There is a magic number involved, based on the assumption that in 50% of the cases the actual hours will be less than or equal to the pert. This follows a suggestion from the book Software Estimation by Steve McConnell, page 121 ff. (A small calculation sketch follows this list.)
    The variance formula: ((W − B) / 1.4)^2
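
To make these two formulas concrete, here is a minimal Python sketch (my own illustration, not part of the original spreadsheet; the function names are made up) that computes both values for the example task:

def pert(best, average, worst):
    """Weighted estimate: the average case counts four times as much."""
    return (best + 4 * average + worst) / 6

def variance(best, worst):
    """Spread of the estimate, using McConnell's magic divisor of 1.4."""
    return ((worst - best) / 1.4) ** 2

# The "illustrate the WBS domain model" task from above:
print(pert(1, 4, 12))    # -> ~4.83 hours
print(variance(1, 12))   # -> ~61.73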

Applying these formulas to our task results in the following table. I also added another simple task.

Task                                               Best   Average   Worst   Pert   Variance
Graphically illustrate the WBS domain model        1      4         12      4.83   61.73
Embed the domain model illustration in the post*   0.08   0.25      0.50    0.26   0.09
Sums                                               1.08   4.25      12.50   5.10   61.82
Standard Deviation**                                                               7.86

* A small task with a small variance has little impact, as you would expect. But if the variance grows, even small tasks have a big impact, just like in reality.

** The standard deviation is calculated by raising the sum of all variances to the power of 0.5, i.e. taking its square root.
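
Reusing the pert and variance helpers from the sketch above, the roll-up can be expressed like this (again only an illustrative sketch, not the original Excel logic):

tasks = [
    (1.0, 4.0, 12.0),    # graphically illustrate the WBS domain model
    (0.08, 0.25, 0.50),  # embed the domain model illustration in the post
]

total_pert = sum(pert(b, a, w) for b, a, w in tasks)       # ~5.10
total_variance = sum(variance(b, w) for b, _, w in tasks)  # ~61.82
standard_deviation = total_variance ** 0.5                 # ~7.86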

The first interesting result is that with a certainty of 50% I’d manage to finish the two tasks within 5.1 hours.

Based on the standard deviation combined with the normal distribution (Gaussian curve) we can create a table indicating with which certainty (in %) we are done within X hours.

Since the standard deviation in our case is higher than the expected value, some of the numbers won’t make sense (they turn out negative).

Each percentage has a factor that tells by how many standard deviations the result deviates from the pert value.

Formula: Pert + (Standard Deviation * Factor)

By a certainty of    Factor    Done within (hours)
10%                  -1.28     (negative)
16%                  -1.00     (negative)
20%                  -0.84     (negative)
25%                  -0.67     (negative)
30%                  -0.52     1.01
40%                  -0.25     3.13
50%                   0.00     5.10
60%                   0.25     7.06
70%                   0.52     9.19
75%                   0.67     10.37
80%                   0.84     11.70
84%                   1.00     12.96
90%                   1.28     15.16
98%                   2.00     20.82
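
For illustration, here is a small sketch of how such a table can be generated. The factors are simply quantiles (z-scores) of the standard normal distribution, available in Python's standard library via statistics.NormalDist; the exact values differ slightly from the rounded factors above.

from statistics import NormalDist

total_pert = 5.10
standard_deviation = 7.86

for certainty in (0.10, 0.25, 0.50, 0.75, 0.84, 0.90, 0.98):
    factor = NormalDist().inv_cdf(certainty)   # e.g. ~1.28 for 90%
    done_within = total_pert + factor * standard_deviation
    print(f"{certainty:4.0%}  {factor:+.2f}  {done_within:7.2f} hours")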

Result

Now I can assert, with a certainty of 98%, that the tasks will be done within about 21 hours.

This number seems just too high, doesn’t it? But has it never happened to you that a task took 4 times longer than expected? Not so uncommon, I’d say.

Maybe I wouldn’t put this number in the customer offer, but that is not what this is about either.

What it is about is knowing your risk and making educated decisions.

But we’re not done yet.

Team estimates

All those calculations still rely on one person doing expert estimates. So what about having multiple people estimate the tasks and then combining the results?

The concept is simple.

  1. Most likely two people do the breakdown together. Either one does a first draft and the other reviews it, or, what I prefer, they pair on this step.
  2. The two (or more) then do the estimates separately.

Why separately? Well, it is not about blaming people but about gathering quality data. A high variance between two people’s estimates on a certain task indicates a high level of uncertainty about what this task means in practice. The earlier you identify those spots, the better.

Just get the two to talk and eliminate the uncertainty. They can then either agree on an estimate or, what should happen at the very least, redo the estimates with a lower variance. Either outcome raises the quality of your data.

There are different ways of combining multiple estimates. The one that makes the most sense to me is taking the lowest best case, the average of the average cases and the highest worst case (see the small sketch below). This gives you a quite reliable result.
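
A minimal sketch of that combination rule (my own illustration, with made-up names):

def combine(estimates):
    """estimates: a list of (best, average, worst) tuples, one per person."""
    bests, averages, worsts = zip(*estimates)
    return min(bests), sum(averages) / len(averages), max(worsts)

# Two people estimated the same task:
print(combine([(1, 4, 12), (2, 5, 10)]))   # -> (1, 4.5, 12)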

Checklists

Especially to improve the quality of the worst-case estimates, the estimating person is guided by a checklist of typical things to consider when estimating software projects.

The checklists we currently use at itemis include around 50 entries in four categories: preconditions, things typically forgotten, non-functional requirements and other crosscutting concerns.

The Experiment codenamed “Lillesand”

“Lillesand” is a beautiful city in the south of Norway. I just spent my Christmas holidays there. But for this project it’s just a codename.

Currently we use an Excel sheet that helps us create the WBS. This works fine, but it’s limited: it lacks support for gathering multiple WBSs in one project, and the calculations for team estimates are still done manually.

Since this is mostly about data and functions, I thought I’d try to build it using the technologies provided by SQL Server Modeling, formerly called Microsoft codename “Oslo”.

I already posted on the migration to the latest CTP. I’ll publish the sources soon.

I don’t really know where this will go, but I think it will be interesting to follow.

What is left of Microsoft “Oslo”? What now with SQL Server Modeling? (Early 2010)

Back in October 2008 Microsoft announced “Oslo” as a new and world-changing modeling platform. While it always consisted of a language, an editor and a repository, the community seemed to be mostly interested in the language “M” and its grammar features for creating new domain-specific languages.

The feedback could be summarized to:

Great! We love model-driven. We love DSLs. But why the heck SQL?

After that, Microsoft engaged with that community. Chris hosted the DSL Dev Con. Doug released “M” under the Open Specification Promise and founded the “M” Specification Community. Mathew Wilson implemented M in JavaScript. Hugo Brunelier elaborated on the possibilities of bridging it with Eclipse Modeling.

Many wrote, spoke and blogged about it. The shared hope was that Microsoft could bring model-driven development to a broader audience and thereby increase both productivity and quality in software development.

For that community the renaming of “Oslo” to SQL Server Modeling was a slap in the face. Many stated that they were disappointed by Microsoft’s decisions.

Let’s look into the reasoning and the real consequences of that rename.

Even though I think it was a wise decision business-wise, both the timing and the messaging were really bad.

The Repository and “Quadrant” were closely tied to SQL right from the beginning of “Oslo”. The schema, constraints and functions part of “M” had, and mostly still has, only one implementation: SQL as well.

But the languages part of “M”, also referred to as “MGrammar”, had nothing to do with SQL at all, and it still doesn’t have much to do with it today.

“M” should have been split out from “Oslo” before the rename and everybody would have been happy.

So where is this going now?

“M” and “Quadrant” are still code names today. The only thing that has a final name is the repository part of “Oslo”, which is now named SQL Server Modeling Services. And I think we can agree that that is an OK name.

Most probably “M” will not become SQL Server Modeling M and neither will “Quadrant” be named SQL Server Modeling Quadrant.

I think the final “M” will be as much SQL as the Entity Framework and OData are SQL today. All of those are developed within the same group, and I expect them to merge anyway.

“Quadrant” will probably replace the part of SQL Server Management Studio that enables browsing and editing of SQL data, which is horrible today anyway. It also has the potential to replace Access as a platform for information workers.

And maybe some day we’ll see ideas from “Quadrant” in Excel.

Future of DSL Development with Microsoft Tools

Visual: There was hope that “Quadrant” could be a better DSL Toolkit. Those teams didn’t talk to each other, but now they do. I think MS gave up on visual DSLs with “Quadrant”. Instead we’ll see the DSL Toolkit getting more attention. In some demos at PDC we saw diagrams in Visual Studio working together with SQL Server Modeling. Those were already DSL Tools operating on “M” models.

Textual: “M” is at most a third of the “Oslo” vision, and the languages part in turn is only a subset of “M”. Maybe that helps to put the priorities into the bigger context.

But the commitment is still alive. The November CTP has some great new features, for example more powerful “M” projections for grammar rules and basic debugging support. Probably some day this will merge with the DSL Tools and we’ll have support for both graphical and textual DSLs right within Visual Studio. I think and hope that the connection to SQL will not be very strong in the future.

Model-driven Development

As so often when Microsoft starts (mis)using existing terms, they, or at least many of their employees, just don’t get it. At least not at first.

An application that uses “M” to define the M in MVC is not automatically model-driven. It’s just a normal application that operates on data as most applications do. If you want to argue like that, then every C# file and all SQL table rows are models.

In my perception, model-driven is when models become part of the implementation. The Workflow Foundation is a good example of what model-driven means: you model something that is the source of truth for your application, instead of writing C# code.

If tools help build things like the Workflow Foundation based on either visual or textual DSLs, using interpreters or code generation, then they truly support the vision of model-driven software development.

“M” Samples Review: One-to-Many vs. Many-to-One

Isn’t this title funny? I like it 🙂

As I browse through the samples for the SQL Server Modeling CTP I find a lot of odd “M” code. This is sad, because Microsoft should know how to write “M” code; they are inventing it. Sure, I’m expressing my personal opinion, but I think I’d have no hard time finding people who agree with me.

Why do I do this in public? Because I care! I like M and its concepts. For me, M has the potential to become a great replacement for a variety of technologies, most of all XSD, and maybe parts of SQL. That it comes with a SQL code generator is nice, though not too exciting. I still hope that the team adjusts their current direction. But more on that in a longer post 🙂

I’ll just start with one of the first samples I found:

Relationships

It comes with four M files: OneToMany.m, ManyToOne.m, OneToOne.m and ManyToMany.m.

So, what again is the difference between one-to-many and many-to-one?

OneToMany.m and ManyToOne.m

Well, this is OneToMany.m:

// original OneToMany.m
module OneToMany {
    type A {
        Id : Integer32 => AutoNumber();
    } where identity Id;
    
    type B {
         A : A; 
    } where value.A in As;
    
    As : {A*};
    Bs : {B*};  
}

And this is ManyToOne.m:

// original ManyToOne.m
module ManyToOne {
    type A {
         B : B;  
    } where value.B in Bs;
    
    type B {
        Id : Integer32 => AutoNumber();
    } where identity Id;
    
    As : {A*};  
    Bs : {B*};
}

Do you spot the difference? I do! It would be easier to see if we changed the ordering of the second one. Now compare this to the first one.

// reordered ManyToOne.m
module ManyToOne {
    type B {
        Id : Integer32 => AutoNumber();
    } where identity Id;
    
    type A {
         B : B;  
    } where value.B in Bs;
    
    Bs : {B*};
    As : {A*};
}

Do you see it now? Exactly. The XPath function translate(‘AB’, ‘BA’) would have done the job! There is exactly NO difference between them except for A and B being switched around!

Usually the relationship the sample tries to illustrate is called one-to-many, even though in relational databases the reference goes from the many side to the one side.

The funny part is over. Let’s look at one-to-one and many-to-many.

OneToOne.m

This one is OK, although I do not like how coupled the code is. If I used the same style to express the same intent in C#, people would blame me for violating every single principle we have learned over the past couple of years. I’ll write about what bothers me here in another post sometime.

// original OneToOne.m
module OneToOne {
    type A {
        Id : Integer32 => AutoNumber();
    } where identity Id;

    type B {
        A : A where value in As;
    } where unique A;
    
    As : {A*};
    Bs : {B*};
}

ManyToMany.m

// original ManyToMany.m
module ManyToMany {
    type A {
        Id : Integer32 => AutoNumber();
    } where identity Id;
    
    type B {
        Id : Integer32 => AutoNumber();
    } where identity Id;
    
    As : {A*};
    Bs : {B*};  

    type AB
    {
        ReferencedA : A;
        ReferencedB : B;
    } where value.ReferencedA in As && value.ReferencedB in Bs;
     
    ABs : {AB*};    
}

What is that, natural modeling? What you really want to say is that A has an n-to-m relationship with B, right? Now tell me how this M code is any better than SQL! It certainly does not raise the level of abstraction. IMHO this is not a solution for something that claims to be a modeling language; it’s a hack.

In “M”, when you model something that would naturally be called a hierarchy or containment, the SQL compiler projects it as n-to-m anyway.

Relationship vs. Reference

Actually, M doesn’t really support relationships (or associations) at all today. It just knows about references. What’s the difference?

I’m not too sure if I get it right, but I’ll try.

A relationship is always coherent and has integrity. It is something both sides agree on.

[image: two hands holding each other]

A reference, though, is just something holding on something else.

[image: a baby holding a man’s finger. Photo: @Pitopia, Detlef]

In relational databases relationships are modeled using references plus constraints.

So, for example, by telling the man in the picture that he isn’t allowed to move as long as the baby holds his finger, you would enforce something that could be called a relationship.

Summary

I think samples of this quality chase people away rather than win them over to the language. The language team should review all the documentation and samples, discuss them, and give good guidance.

What I’ve seen so far is rather bad guidance.

Update: Also check out the MSDN documentation on Relationships. It’s at least better than the samples. (I found it after I wrote this post.)

Migrating WBS from “Oslo” to SQL Server Modeling

I finally found some time to migrate two of my pet projects from the “Oslo” May CTP to the new SQL Server Modeling November 2009 CTP. I haven’t published the sources so far, but I will soon.

Work Breakdown Structure

The first application that I want to migrate is a so-called work breakdown sheet (WBS). Originally it was an Excel sheet containing tasks and task estimates. Some smart calculations apply a set of statistics to give a forecast that is as close as possible to the real effort required.

The technologies I wanted to try out with that were mostly “Quadrant” and the “Repository”, now SQL Server Modeling Services. Also I wanted to test the team on their ambitious goal to make something that gets close to the experience of Access in ease of use.

The project consists of a set of “M”-based models and functions as well as some basic “Quadrant” customization. No plain old C# code needed, so far.

The basic domain model expressed in M (May CTP):

type Project {
    Name : Text;
};

type Story {
    Name : Text;
    Project : Project;
};

type TaskGroup {
    Name : Text;
    Story : Story;
};

type Task {
    Description : Text;
    Comment : Text?;
    Group : TaskGroup;
};

type Estimate {
    BestCase : Double;
    AverageCase : Double;
    WorstCase : Double;
} where value.BestCase < value.AverageCase
     && value.AverageCase < value.WorstCase; 

I’ll write an introduction to the project soon. For now, I’ll just log the changes I had to make to get it running on the new CTP.

The Project File

In May, both Visual Studio and Intellipad used a *.mproj file to collect multiple models in a project. In November this is embedded in a *.csproj file. The easiest approach was to create a new “Oslo Library” in VS 2010 and just add the files that were previously linked in Wbs.mproj.

Now I got the project both in Visual Studio 2010 and in Intellipad:

[screenshots: the WBS project opened in Visual Studio 2010 and in Intellipad]

Now let’s build.

Syntax Changes

Let’s walk through the errors I got and how the syntax has to be changed to fix them.

  • First surprise: Intellipad builds the project without complaining. Still, this message leaves me uncomfortable: Skipping target "MCompileCore" because all output files are up-to-date with respect to the input files.

So let’s switch over to VS. The messages I get are:

  • Error: Expected a ‘;’ to finish the extent declaration.
  • Error: Expected a ‘;’ to finish the computed value declaration or a ‘{‘ to start its body.
  • Warning: The ‘item’ expression is being deprecated. Please replace with an appropriate ‘value’ expression.

Those were easy to fix, although the error messages sound more helpful than they actually are.

Actually, the first two errors had to do with the new collection syntax. This applies to extent declarations as well as functions. This, for example:

TaskEstimates() :  PersonalTaskEstimate* {
    from e in RawTaskEstimates
    select MixEstimates(CalculateEstimate(e), e)
}

Needs to be changed to:

TaskEstimates() :  {PersonalTaskEstimate*} {
    from e in RawTaskEstimates
    select MixEstimates(CalculateEstimate(e), e)
}

The warning about the deprecated item keyword in:

RawStories : (Story & HasFolderAndAutoId)*
        where item.Project in RawProjects;

is solved by using value and making sure that the constraint goes inside the collection:

RawStories : {((Story & HasFolderAndAutoId) 
    where value.Project in RawProjects)*};

After fixing those, I get the next set of errors:

  • The left-hand side ‘value.BestCase’ of ‘in’ must  be compatible with the collection element type ‘LifelikeEstimateNumber’.

This seems to be an implicit change enforced by the new type checker in the current CTP. This is the now-“invalid” code:

type LifelikeEstimateNumber : Double where value < 24
    || value in SomeFibonacciNumbers;
    
type SomeFibonacciNumbers : {55, 89, 144, 233, 377, 610};

The problem here is that the compiler doesn’t infer that SomeFibonacciNumbers is actually a collection of Doubles. So we have to tell it.

There are different ways to do so. We can either ascribe the first value in the collection to a Double by writing:

{55: Double, 89, 144, 233, 377, 610}

or we could ascribe the whole collection to a collection of Doubles

{55, 89, 144, 233, 377, 610} : {Double*}

or we can mix it with a collection of Doubles:

{55, 89, 144, 233, 377, 610} & {Double*}

I don’t really know what the differences will be. I’ll just go for the last one, because it looks so nice 🙂

Project References

The next problem is that my references to models from the “Repository” and “Quadrant” can’t be found. Sure, I didn’t copy the references from the mproj file either. As described here, you now have to reference dll files instead of the mx files that were needed in May.

In my case, I needed a reference to “C:\Program Files\Microsoft Oslo\1.0\bin\Repository.dll” in order to support the SQL Server Modeling Services type HasFolderAndAutoId.

Now all the “M” code seems to be OK. The only thing is that some concepts do not compile to SQL.

For this code:

type LifelikeEstimateNumber : Double where value < 24
    || value in SomeFibonacciNumbers;
    
type SomeFibonacciNumbers : {55, 89, 144, 233, 377, 610} & {Double*};

type TaskEstimate : (Estimate & {
    Task : Task;
}) where value.BestCase in LifelikeEstimateNumber
    && value.AverageCase in LifelikeEstimateNumber
    && value.WorstCase in LifelikeEstimateNumber;

I get the following error for each occurrence in the last three constraints:

  • Not yet implemented: There is no SQL expression generator to handle the expression ‘TypeRef: Reference, resolve to get the target’.

It seems that the constraint expressions can’t fully handle type references. I’ll just refactor that into an inline expression plus an extent for the allowed high estimates:

AllowedHighTaskEstimates : {Double*} {
    55, 89, 144, 233, 377, 610};

type TaskEstimate : (Estimate & {
    Task : Task;
}) where 
    (value.BestCase < 24 
        && value.BestCase in AllowedHighTaskEstimates)
    && (value.AverageCase < 24 
        && value.AverageCase in AllowedHighTaskEstimates)
    && (value.WorstCase < 24 
        && value.WorstCase in AllowedHighTaskEstimates);

Build succeeded!

Well, I cheated a little bit, because I removed the models that drive the “Quadrant” customization. But since those have changed completely anyway, I’ll just rebuild them from the requirements.

Deployment

For the May CTP I had a bunch of batch files to manage the build and deploy process. This was partly because I had to install an external SQL function before deploying my module. Let’s see how this works with the deployment integration in Visual Studio 2010.

I configured the connection string in the M Deploy settings to point to a local database called ‘WbsTestRepository’. But trying to deploy the solution fails with a couple of errors. It seems that the repository is not deployed automatically, although I added a project reference.

Repository Issues

In Wbs I want to use the Repository (now SQL Server Modeling Services) folders as well as the catalog, which stores data about my models.

You still need to install the “Repository” on your database using the command line. This should be necessary only once, though.

The commands I ran were (described here):

rem create a clean db
mx create /d:WbsTestRepository /s:.\SQLExpress /force
rem install the repository
mx install Repository.mx /database:WbsTestRepository /server:.\SQLExpress

But now redeploying in VS yields another expected error:

  • error M6040: Sql Exception: Cannot find either column "itemis.Wbs" or the user-defined function or aggregate "itemis.Wbs.Power", or the name is ambiguous.

Wbs needs the Power function to compute some values. Since M doesn’t know about it and can’t express it natively, I had to model a so-called extern:

extern Power(Expression : Double, Power : Integer32) : Double;

This just makes a concept available to M that has to be implemented in SQL. Since we want to ensure this during installation, we have to compile it along with the M files. This is done by adding a SQL file containing the “create function [itemis.Wbs].[Power] …” script and setting its compile action to “MPreSql”.

As I almost expected, redeploying the model doesn’t work. I’ll post a workaround for that soon.

  • error M6040: Sql Exception: The module ‘itemis.Wbs’ is already present in the database.

But for now we just put the two lines in a batch file to recreate a fresh repository before each deploy.

The next problem I run into is:

  • Sql Exception: Target folder must be specified.

There are two ways to add a target folder to your initial values. The documented one suggests adding your folder to the FoldersTable and then specifying that value in every single instance. The much cleaner and simpler way is to set the target folder globally for the whole project.

Since there is no Visual Studio support for that, you have to add this property to the csproj file manually:

<MTargetFolder>Repository/Wbs</MTargetFolder>

You will also have to create the folder along with the repository every time you deploy, using the following command:

mx createFolder "Repository/Wbs" /database:WbsTestRepository /server:.\SQLExpress

The next thing is some complaints about my constraints on TaskEstimate. Since I have no time left, and those constraints should be weak constraints instead of hard CHECK constraints anyway, I’ll just comment them “away” for now.

Deployed successfully.

Wonderful. Now let’s go to “Quadrant” and see how we can make use of the model and its sample data.

[screenshot: the deployed WBS model in “Quadrant”]

Summary

It wasn’t really the syntax changes that caused trouble, but rather the integration with Visual Studio. I had a couple of simple customizations for “Quadrant”, but I’ll rewrite those soon.

The only comment I have to make so far is that it is totally unacceptable to have a development cycle that takes more than 5 seconds from editing M through compile and deploy to seeing the changes in “Quadrant”. Right now it takes more than 30 seconds.

Future Plans

I’ll probably elaborate more on WBS next week. I’ll also migrate my DSL pet project called “Web Layout DSL” and integrate it with an MVC client application on top of WBS.

So stay tuned.

Tired of hearing “M” is to “T-SQL” what X is to Y…

I have heard a couple of variations of these analogies. I do not like them. I think they are simply absurd.

At the last PDC, “M” was to SQL what C is to assembler. This year it was what VB is to C. And now I even read this:

The code name “M” language is like a more manageable (though more limited) form of Transact-SQL, the language normally used to describe data schema and values.

Kraig Brockschmidt, Microsoft Data Development Technologies: Past, Present, and Future

“M” has some overlaps with T-SQL, OK. But far from every concept in “M” can be translated into T-SQL. What about structural subtyping? Types without identities? Polymorphic references and function arguments? Languages/DSLs? Ordered collections? Lots more.

And only a very small, although useful, subset of T-SQL maps to “M”. Also, most of the translation to SQL is opinionated rather than natural.

What the schema and values part of M compares much better to is XML and XSD.

Would you even try to compare XML to T-SQL?