
Archive for the ‘Just Drops’ Category

The last Architecture Open Spaces were great. Really great! (2009, 2010)

Now it’s time again. Trust me, you can’t spend your days better (at least in the context of continuous improvement!).

Have a look at Architecture.Openspace 2012 (German)…

Topics I’m interested in discussing (and partially have solutions for):

  • Distributed Systems Design
  • Transactions? Really?
  • Consistency?
  • Search
  • Messaging / NServiceBus
  • Alternative Data Storage / Raven DB
  • Developer Setup, Build, Versioning, Deployment, Hosting
  • Composite Distributed Web Applications

It would be great to see you there. Speaking German is not really a requirement; I guess English should be fine for all software architects.


We have Jenkins installed and want it to pull from Bitbucket and GitHub; authorization should happen through OpenSSH (public keys).

Jenkins runs as Local System.

The problem

How do we find and then place id_rsa into ~/.ssh? And how do we get entries added to ~/.ssh/known_hosts?

The solution

As always: fake it until you make it!

  1. Run this command in an elevated command prompt on the server, in order to start a command prompt as Local System user:

    sc create testsvc binpath= "cmd /K start" type= own type= interact && sc start testsvc & sc delete testsvc

    The Interactive Services Detection will now bring up a dialog (probably in the background) where it asks you to “View the message” in order to display the service session where the command window will run.

  2. Run echo %userprofile% to see where your profile directory is. In my case it is "C:\Windows\system32\config\systemprofile".

    Odd, but true: when I try to put the id_rsa file into that directory from my normal user session, it somehow doesn’t end up in the Local System account’s profile.
  3. From here you can open the git bash by running C:\Program Files (x86)\Git\bin\sh --login -i
  4. Then run cd ~ to switch to your home directory.
  5. Then copy your id_rsa file here with a simple
    cp <id_rsa-location> .
  6. Now run ssh git@bitbucket.org in order to try to authenticate and accept the host as known host.
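Steps 4 to 6 can be condensed into a small helper script (just a sketch; install_id_rsa is a made-up name, and the exact paths depend on your machine):

```shell
# Sketch: copy an existing id_rsa into a target home directory's ~/.ssh.
# install_id_rsa is a made-up helper name; adjust paths to your machine.
# The chmod calls reflect the permissions OpenSSH generally expects; on
# msysgit/Windows they may effectively be a no-op.
install_id_rsa() {
  src="$1"
  dest_home="$2"
  mkdir -p "$dest_home/.ssh"
  cp "$src" "$dest_home/.ssh/id_rsa"
  chmod 700 "$dest_home/.ssh"
  chmod 600 "$dest_home/.ssh/id_rsa"
}

# e.g. from the git bash running in the Local System session:
# install_id_rsa /c/Users/me/.ssh/id_rsa ~
```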

BTW: also make sure you run git.cmd, not git.exe!!



For my current work on Azure integration for NPanday I’m investigating what the Azure Tools do with the Service Configuration (*.cscfg) on publish, since the file in Visual Studio isn’t the same as the one deployed along with the Cloud Service Package (*.cspkg).

The build & package part for Azure Cloud Services can be found in %ProgramFiles(x86)%\MSBuild\Microsoft\VisualStudio\v10.0\Windows Azure Tools\1.6\Microsoft.WindowsAzure.targets

Find and copy

First, the build figures out which configuration to use as input by checking for ServiceConfiguration.$(TargetProfile).cscfg and then ServiceConfiguration.cscfg, where $(TargetProfile) is “Cloud” by default.
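That lookup amounts to the following selection logic (a sketch only; pick_service_config is a made-up name, not part of the Azure Tools):

```shell
# Sketch of the config-selection logic from Microsoft.WindowsAzure.targets:
# prefer the profile-specific file, fall back to the plain one.
# pick_service_config is a made-up helper name.
pick_service_config() {
  profile="${1:-Cloud}"  # $(TargetProfile) defaults to "Cloud"
  if [ -f "ServiceConfiguration.$profile.cscfg" ]; then
    echo "ServiceConfiguration.$profile.cscfg"
  else
    echo "ServiceConfiguration.cscfg"
  fi
}
```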

As a part of the build, after being copied, the configuration file is augmented with more settings.

Add “file generated” comment

This is why I noticed that the files are different. The comment in the target file makes it look as if the file were generated from scratch, but it is really just a copy with a few changes. By default, the comment is the only change :)

<AddGeneratedXmlComment
  GeneratedFile="@(TargetServiceConfiguration)"
  SourceFile="@(SourceServiceConfiguration)" />

Cloud Tools version

If IntelliTrace or profiling is enabled, this change lets Azure know which versions of the tools are in use.

<AddSettingToServiceConfiguration
   ServiceConfigurationFile="@(TargetServiceConfiguration)"
   Setting ="$(CloudToolsVersionSettingName)"
   Value="$(CloudToolsVersion)"
   Roles="@(DiagnosticAgentRoles)"
   Condition="'$(EnableProfiling)'=='true' or '$(EnableIntelliTrace)'=='true'" />

IntelliTrace

If IntelliTrace is enabled, it will add a connection string to the configuration:

<AddIntelliTraceToServiceConfiguration
  ServiceConfigurationFile="@(TargetServiceConfiguration)"
  IntelliTraceConnectionString="$(IntelliTraceConnectionString)"
  Roles="@(DiagnosticAgentRoles)"/>

Profiling

If profiling is enabled, it will add a connection string that tells Azure where to store the profiling data.

<AddSettingToServiceConfiguration
  ServiceConfigurationFile="@(TargetServiceConfiguration)"
  Setting ="Profiling.ProfilingConnectionString"
  Value="$(ProfilingConnectionString)"
  Roles="@(DiagnosticAgentRoles)" />

Remote Desktop

If remote desktop is set to be enabled, the build configures this switch in the cloud service configuration, too:

<ConfigureRemoteDesktop
      ServiceConfigurationFile="@(TargetServiceConfiguration)"
      ServiceDefinitionFile="@(TargetServiceDefinition)"
      Roles="@(RoleReferences)" 
      RemoteDesktopIsEnabled="$(EnableRemoteDesktop)"
      />  

Web Deploy

If WebDeploy is enabled for any of your web roles, it will add an endpoint to the definition and set the instance count to zero for all web roles in the service configuration.

<EnableWebDeploy
  ServiceConfigurationFile="@(TargetServiceConfiguration)"
  ServiceDefinitionFile="@(TargetServiceDefinition)"
  RolesAndPorts="$(WebDeployPorts)" />

Connection String Override

If ‘ShouldUpdateDiagnosticsConnectionStringOnPublish’ is set to true, the diagnostics connection string is overridden for all roles in order to prevent the default setting “UseDevelopmentStorage=true” from being published to the cloud.

This is one of the typical “Microsoft demo-ready” features. Most certainly you’ll have multiple role-spanning connection strings or settings that you’d like to change on publish, but this is the only one needed to get demos to run, right?

<SetServiceConfigurationSetting 
  Roles="$(DiagnosticsConnectionStringRoles)"
  ServiceConfigurationFile="@(TargetServiceConfiguration)"
  Setting="$(DiagnosticsConnectionStringName)"
  Value="$(DiagnosticsConnectionStringValue)" />

Corresponding parameters in NPanday

The most complex part of the build is the setup for profiling and IntelliTrace; currently we have no plans to support these in NPanday. When we need profiling or IntelliTrace, we will rather just deploy from Visual Studio.

I still have to look at how RDP and MSDeploy can be added to the configured service configuration; for a first release of NPanday that may have to be done manually.


I’m currently working on Azure and web packaging (MSDeploy) support for NPanday. I want to do that on a separate SVN branch, which I’ll then reintegrate later on.

The current trunk version is 1.4.1-incubating-SNAPSHOT, and since we are about to release that upcoming version 1.4.1-incubating soon, I don’t want to pollute it with half-baked changes, while I’ll still need to develop on the trunk in parallel.

I’ll also need to be able to install both the current trunk and my experimental branch in my local Maven repository at the same time, hence I need a new temporary version for my branch. All this can be achieved using the Maven Release Plugin, in particular the branch goal. Maven Release supports 14 SCMs through the same interface; in this case we use SVN, though.

How-to

The command I need to run

mvn release:branch 
   -DbranchName=1.5.0-azuresupport
   -DautoVersionSubmodules=true
   -DsuppressCommitBeforeBranch=true 
   -DremoteTagging=false 
   -DupdateBranchVersions=true 
   -DupdateWorkingCopyVersions=false 

Let’s go through the settings line by line:

mvn release:branch

Loads and executes the branch-goal from the Maven Release Plugin.

-DbranchName=1.5.0-azuresupport

The name of the branch to be created. When on trunk, Maven figures out to use the default SVN layout for branches and tags. You can optionally override the branch base using the parameter branchBase, like this: -DbranchBase=https://svn.apache.org/repos/asf/incubator/npanday/branches/

-DautoVersionSubmodules=true

When run, Maven will prompt for the version to be used in the branch; I provided 1.5.0-azuresupport-SNAPSHOT. Since autoVersionSubmodules is set to true, Maven Release will automatically use this version for all submodules and hence also update all inner-project dependencies to that version.

The next four settings go hand-in-hand.

-DsuppressCommitBeforeBranch=true

By default, Maven Release creates intermediate commits in the current working copy. I’m not sure of the reason, but I think it was because some VCSs do not support branching/tagging of modified working copies. This parameter makes sure no intermediate commits are made to the working copy.

-DremoteTagging=false

With SVN, by default, tags are created remotely. If you want to suppress the intermediate commits, this must be set to false.

-DupdateBranchVersions=true

-DupdateWorkingCopyVersions=false

When branching, you can either define new versions for the current working copy, or the new branch, or both. As set here, the working copy will be left alone, and the plugin will ask for a new version for the branch.

Now I can switch back and forth between the trunk and the new branch, and still build and deploy artifacts side by side.

Defaults in POM

You may also provide the fixed values in the POM. If you want to avoid interfering with other Maven Release actions, you can put them in a profile.

<profile>
  <id>branch</id>
  <activation>
    <property>
      <name>branchName</name>
    </property>
  </activation>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-release-plugin</artifactId>
        <version>2.2.1</version>
        <configuration>
          <branchBase>https://svn.apache.org/repos/asf/incubator/npanday/branches</branchBase>
          <autoVersionSubmodules>true</autoVersionSubmodules>
          <suppressCommitBeforeBranch>true</suppressCommitBeforeBranch>
          <remoteTagging>false</remoteTagging>
          <updateBranchVersions>true</updateBranchVersions>
          <updateWorkingCopyVersions>false</updateWorkingCopyVersions>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

Now it is enough to run mvn release:branch -DbranchName=1.5.0-azuresupport


After I had to search for this solution all too long, I thought I’d share it:

1. Download NSSM – the Non-Sucking Service Manager

Unzip the zip package you can download from their home page (http://nssm.cc/download/nssm-2.10.zip). You’ll find nssm.exe in win32 and win64; pick the one appropriate to your platform.

2. Install Fitnesse as Service

Run this from command line (adjust paths and port):

nssm install FitnesseService java -jar <path-to-jar>\fitnesse.jar -d <installation-path> -p <your port> <further-options>

If you need to retry, stop the service, then run: nssm remove FitnesseService confirm

3. Start Fitnesse Service

Run net start FitnesseService or start it from the service manager.


In the past few years I have been doing some open source contribution, and lately also initiated projects together with fellow programmers. Here I write about what I noticed.

First Stage: Contributing

You find something useful and start to play with it. Then you find a show-stopping bug, or you’d like to have a feature. You get in contact with the team and submit issues to a bug tracker, if you find one. They might like your ideas and ask for a patch. If it is important, and in many cases easier to fix than to rewrite on your own, you start fiddling with the code, make changes, write tests, and submit patches.

If the team/individual applies the patches you are happy. Usually you have to maintain your own codebase until the feature is released – or you just compile from trunk :-)

You might also just contribute documentation or tests.

Some personal experiences in this stage

Over the last couple of years I contributed to the following projects:

For these I also submitted contributions, but they are still “in-progress”, as they have never been accepted :(

Since these patch acceptance processes take time, but I needed the fixes, I run a personal fork of maven-versions-plugin on GitHub. A custom-built artifact is even in use by one of my former customers, because I couldn’t get the patch accepted in time.

Second Stage: Initiating

You create something and publish your code as a zip, on GitHub, CodePlex, or somewhere else. Then people start liking it and begin submitting patches or reporting issues. Some people create forks and start maintaining their own code based on yours. Or they (or you) ask to join the project.

Some projects I founded or work on at this stage:

Third Stage: Team OSS

At this point things change a lot. Who is now in charge of the code base? Who builds and releases it? Who answers submitted issues? Well, you have to find out and try to establish a process that fits. This usually works quite well with people you personally know, co-workers, or even friends. When interacting with others in the community, you’ll find out that everybody has a slightly different agenda, which is good for the project! But it makes it hard to make decisions.

We are currently at this stage with these projects (as of mid-2011):

Fourth Stage: Joining an umbrella organization

Now the need for a good process and for some infrastructure arises. You need hosting, build servers, and an issue tracker. This is what umbrella organizations like the Eclipse Foundation, the Apache Software Foundation, the Outercurve Foundation, or Codehaus help you with.

Apache, for example, is very vote-driven. If you want somebody to become a committer, you ask for a vote on the project’s private mailing list. If you want to release a version, you create it, deploy it to a staging area, and then ask for votes. We once released an Apache NPanday version without waiting the full 72 hours a vote needs to be open; we then had to remove (physically!) the release from the download servers and Maven repositories, as if it had never happened.

Apache also offers some infrastructure: a CMS for documentation, Jira for issue-tracking, Hudson and Teamcity for Continuous Builds, SVN+GIT for Version Control, Mailing Lists (users, dev, private), committer directory, and so on.

Every team has to report to the Apache Board of Directors.

Read more on ASF here: http://www.apache.org/foundation/how-it-works.html

Eclipse works quite similarly, but has an even tighter organization. The bigger projects have fixed release dates and milestones. And there is something like a Project Lead, who is elected but then authorized to make decisions alone, as I understand it.

In bigger projects usually a couple of companies donate code and/or full-time developers.

Around August 2010, after having submitted a couple of patches, I was voted in as a committer on Apache NPanday Incubator (Maven for .NET, declarative build and release management), which moved from CodePlex to the ASF around October last year.

Under the ASF it is still in incubation, but we hope to graduate to a top-level project soon. This means we will have our own PMC and an Apache top-level domain (npanday.apache.org).

Supplementary: Commercial Support for OSS-Projects

Some companies are afraid of relying on a community for support. Therefore, companies that are strongly committed to a certain project often offer commercial support options for open source projects.

For example, my employer, itemis AG, offers commercial support for Eclipse Modeling Xtext: Professional Services – Xtext

Happy coding! Happy committing!


Xtext is great (textual DSLs)

In my last project I worked on an external textual DSL for defining products in telecommunications.

At this time, the products and their parts (a deep tree of “features” composed into products) are all maintained in Excel. This is great for the “inventor” and power user, but it’s very hard to enforce structural correctness and business rules. Working in a team and versioning are also almost impossible.

A textual DSL was chosen over a custom application because it gives much more “power” to the user: he is editing the raw data in a very workable way.

Sad that Microsoft dropped their investment in DSLs. But, so what. We’ve got Xtext. There is only one drawback: it’s Java + Eclipse, not C# + Visual Studio.

First I had plans (and started) to write about a particular scoping problem I had to solve in this DSL, but then I moved to a different project. So instead I’ll just post the introductory part where I listed the features of Xtext, just in case this is new to you :-)

  • Xtext is a Language Workbench for external textual DSLs.
  • Xtext builds on Java+Eclipse.
  • Xtext provides a rich grammar language.
  • Xtext builds on EMF (Eclipse Modeling Framework, a rich meta-meta model).
  • Xtext derives an EMF-based meta-model from your grammar definition (the semantic model).
  • Xtext generates an Antlr parser from your grammar definition and automatically converts the AST into your semantic model.
  • Xtext generates an Eclipse-based text editor including syntax highlighting, cross-reference resolution, validation, formatting, an outline view, code completion, rename refactoring, go-to-definition navigation + much more.
  • Xtext offers a rich API for validating your semantic model and shows warnings and errors in the editor (squiggles).
  • Xtext offers a rich API for configuring your DSL’s formatting (spaces, line breaks, etc.).
  • Xtext lets you define custom scoping, including generic support for packages/namespaces + imports.
  • Xtext comes with a base language (Xbase) for expressions (including compilation of expressions to Java).
  • Xtext comes with a template language (Xtend) for generating artifacts from your semantic model (model-to-model or model-to-text transformations).



The Try/Can-Pattern

I think this is a pattern (it is repeatedly implemented), although I have not seen it described anywhere yet.

Tell, don’t ask states that you should trust the code you call. Instead of first asking if “he” can do it, or wants to do it, you just tell “him” to. In turn, the callee should throw an exception if it can’t satisfy the expectations of the caller.

Examples of this are Stream.Write(…) and int.Parse(…). If you call the method, you expect it to work; if it can’t, these methods throw exceptions.

But what if you need to check whether a Stream is writable? Or what if you expect the string to be invalid, and you want to respond to that without catching an exception?

This is where Try/Can is often implemented. Examples of this are Stream.CanWrite and int.TryParse.

The Try-Part

If it is often the case that a specific command won’t work, you can offer an alternative method that follows this convention:

bool TryMethodName([method_parameters,] out return_type val)

The behavior is specified as follows:

  • TryMethodName should never throw the expected exceptions of MethodName.
  • The original return value should be exposed as an out-parameter.
  • TryMethodName should return a bool indicating whether the operation succeeded.

This is used, for example, by the Parse method in the .NET BCL:

class Int32 {
  ...
  static int Parse(string text);
  static bool TryParse(string text, out int val);
  ...
}
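The same convention can be sketched in shell terms, where the exit status plays the role of the bool (the names here are made up for illustration):

```shell
# parse_int fails loudly, like Parse; try_parse_int signals success via
# its exit status and writes the value to stdout, like TryParse.
# Both names are made up for this sketch.
parse_int() {
  case "$1" in
    ''|*[!0-9]*) echo "not a number: $1" >&2; return 1 ;;
    *) echo "$1" ;;
  esac
}

try_parse_int() {
  parse_int "$1" 2>/dev/null
}

# usage: if v=$(try_parse_int "42"); then echo "got $v"; fi
```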

OrDefault as a Variation of Try

As a variation of TryMethodName, it can be useful to offer MethodNameOrDefault. Here, instead of using a bool and an output parameter, you return the CLR default value of MethodName’s return type if the expectation stated by MethodName could not be satisfied.

return_type MethodNameOrDefault([method_parameters])

OrDefault is heavily used by LINQ:

class Linq.Enumerable {
  ...
  TSource ElementAt<TSource>(
        this IEnumerable<TSource> source, 
        int index);

  TSource ElementAtOrDefault<TSource>(
        this IEnumerable<TSource> source, 
        int index);
  ...
}

The Can-Part

Sometimes you want to know whether something might work; for example, you may only want to show a button in the UI if the action can potentially be executed.


bool CanMethodName([method_parameters])

// or 

bool CanMethodName { get; }

I’d favor a method over a property, although a method implies that it could be expensive to find out whether MethodName can be executed or not.

The behavior is specified as follows:

  • The Can-property or -method should never throw any of the expected exceptions of MethodName.
  • It should determine and return whether the method could be executed. Note that a positive return usually can’t guarantee that actually performing the action won’t fail.

Can is for example used by System.IO.Stream:

class Stream {
   ...
  int Read(byte[] buffer, int offset, int count);
  bool CanRead { get; }
  ...
}
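The Can part translates to a cheap pre-check whose positive answer is still no guarantee (again a sketch with made-up names):

```shell
# can_read_file answers cheaply whether read_file is likely to succeed;
# the actual read can still fail afterwards (file deleted in between, etc.).
# Both names are made up for this sketch.
can_read_file() { [ -r "$1" ]; }
read_file() { cat "$1"; }

# usage: if can_read_file notes.txt; then read_file notes.txt; fi
```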

Summary

Tell, don’t ask! is a very useful principle. It makes code more readable and helps with complying to the SRP (Single Responsibility Principle). But sometimes you need “weaker” interactions; the Try/Can pattern helps accomplish that while maintaining encapsulation.

I’ll also post some example code on how to implement the pattern in C# soon.


Hello dear architect community,

we (itemis) have managed to get Udi Dahan to offer his Distributed Systems Design course in Germany as well. Since his talk at the last prio.conference was very well received, we believe there are many who would be interested in this course. Now, of course, we need to reach those many people; hence this blog post.

I myself have wanted to take this course for quite some time. All the better that it now takes place in Germany and that I will get to know German-speaking participants there! The only pity is that I now have to find another reason to go to London.

In 2008 I took part in a SOA mini-workshop with Udi in Seattle, which he held before the ALT.NET Open Space. I’m really curious about the contents. Conversations with Udi, and testimonials about the course from people I know, have made it clear to me that Udi is a good bit ahead of his time.

The course runs over 5 days, March 21-25:

Advanced Distributed Systems Design using SOA & DDD
(21.03.2011 – 25.03.2011)

A course for everyone in charge of building a large-scale distributed system and enjoying deep architectural discussion.

  • Workshop with exercises
  • Language: English
  • Length: 5 days
  • Price per person: 2500 EUR + VAT (early bird rebate*: 10%)
  • Location: Cologne, Holiday Inn Köln-Bonn Airport

Details, registration

If you have questions about the contents or the format, feel free to contact me or leave a comment.


Together with SOPTIM AG and Die Software-Architekten, itemis is hosting the Architecture Open Space 2010 in Dortmund on December 10 and 11, 2010. The event is limited to 40 participants and addresses software architects of all technology stacks. Welcome!


The previous Architecture Open Space 2009 was already acknowledged by participants with statements like "It was my best conference experience ever!"

This year, too, we are looking forward to interesting conversations about the challenges of designing architectures for enterprise applications.

There is no separation of roles into speakers and audience. All participants are asked to put the topics on the table that interest them. More about the concept of Open Spaces can be read on Wikipedia. (http://de.wikipedia.org/wiki/Open_Space)

The event takes place on a Friday and a Saturday; this ensures that motivation and commitment are right. :-)

Info and registration

Info is available here: http://archnet.mixxt.de

And you can register here: http://archnet.wufoo.com/forms/architectureopenspace-2010/

Participation only costs a nominal fee (http://archnet.mixxt.de/networks/wiki/index.Schutzgebuehr) of 10€. Free cancellation is possible until November 15.

Sponsors

Many thanks to the sponsors who make this event possible!

SOPTIM AG
itemis AG - Software, Consulting, Coaching
Die Software Architekten

