
The Problem:
System Center Orchestrator 2012 exposes a data service that lets you query and execute runbooks. But working directly with the data service is like calling a WCF service by manually composing the SOAP messages: there are no type-safe parameters, no IntelliSense, and you need to type the exact path of the runbook or, even worse, specify GUIDs. Developing and testing code that executes runbooks quickly becomes a cumbersome and tedious task.

The Solution:
The solution is a Visual Studio item template. When you add it to an existing project, it asks for the Orchestrator server’s details and generates a hierarchy of proxy classes that matches the folder hierarchy on the Orchestrator server. Within every class there are methods that match the runbooks in that folder and accept the runbook’s parameters. In addition, the runbook’s description is appended to the method’s XML summary, which makes Visual Studio IntelliSense more helpful. Every class that contains runbooks also implements an interface named “I{ClassName}” that includes these methods, for easier testing. After adding this item to your project you will be able to execute a runbook as in the following code:

 OrchestratorReference orchestratorReference = new OrchestratorReference();
 Guid jobId = orchestratorReference.Development.Utils.WriteErrorLog(message, activity, runbook);

The OrchestratorReference object can be initialized with the credentials for accessing the Orchestrator web service. For example:

 OrchestratorReference orchestratorReference = new OrchestratorReference();
 NetworkCredential cred = new NetworkCredential(userName, password, domain);
 orchestratorReference.OrchestratorServiceCredentials = cred;

If the runbook path’s prefix depends on the environment, you can use the “AddReplaceFolderPrefix” method to replace the path prefix dynamically. For example:

 OrchestratorReference orchestratorReference = new OrchestratorReference();
 orchestratorReference.AddReplaceFolderPrefix(@"\Development\", @"\Production\");

All the runbook methods return the ID of the job that was created on the Orchestrator server. Runbook execution is asynchronous; to wait for the runbook to complete and optionally collect its return values, use the job ID with the following methods:
Sync:

 Guid jobId = orchestratorReference.Development.VMManager.VM.GetNextVMName("Test##");
 Dictionary<string, string> result = orchestratorReference.GetJobResult(jobId);
 string nextVMName = result["nextVMName"];

Async:

 public async Task<string> GetNextVMName(string template)
 {
   OrchestratorReference orchestratorReference = new OrchestratorReference();
   Guid jobId =
         orchestratorReference.Development.VMManager.VM.GetNextVMName(template);
   Dictionary<string, string> result =
                     await orchestratorReference.GetJobResultAsync(jobId);
   return result["nextVMName"];
 }

The T4 template responsible for generating the code removes any duplicate classes/methods, names classes and methods in Pascal case, and names method parameters in camel case. It also strips any non-letter prefix characters from class/method names, so if a folder name includes an index-number prefix, the index is truncated and remains visible only in the class/method’s XML summary.
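These naming rules can be illustrated with a small sketch (a hypothetical helper, not the actual T4 code):

```csharp
using System;
using System.Linq;

class NameNormalizer
{
    // Strip any non-letter prefix, split on common separators, and emit
    // Pascal case for classes/methods (or camel case for parameters).
    public static string ToIdentifier(string name, bool camelCase = false)
    {
        // "01. write error log" -> "write error log"
        string trimmed = new string(name.SkipWhile(c => !char.IsLetter(c)).ToArray());
        string[] words = trimmed.Split(new[] { ' ', '-', '_', '.' },
                                       StringSplitOptions.RemoveEmptyEntries);
        string pascal = string.Concat(
            words.Select(w => char.ToUpperInvariant(w[0]) + w.Substring(1)));
        return camelCase && pascal.Length > 0
            ? char.ToLowerInvariant(pascal[0]) + pascal.Substring(1)
            : pascal;
    }

    static void Main()
    {
        Console.WriteLine(ToIdentifier("01. write error log"));   // WriteErrorLog
        Console.WriteLine(ToIdentifier("next VM name", true));    // nextVMName
    }
}
```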

Deployment:

  1. Extract the zip file available for download at the bottom of this post.
  2. Execute the “DeployItemTemplate.cmd” file.

Usage:

  1. Open Visual Studio 2010.
  2. In the project from which you need to execute a runbook, click Add -> New Item…
  3. Choose the “Orchestrator Reference” template from the template list, type in a file name and click OK.
    Orchestrator Reference - add item template
  4. Type the Orchestrator server’s name, the port number the Orchestrator service is listening on (default 81), and whether SSL is required.
  5. Click the “Load” button. The wizard will load the folder structure from the Orchestrator service and let you specify which folders to include in the newly generated class.
    Orchestrator Reference
  6. Click the “Add” button.
  7. Optional – expand the template file to inspect the generated .cs file.
  8. Happy developing!

Source & Binary

You can probably find several blog posts out there about remotely executing simple commands and scripts against Exchange servers, but trying to apply these examples to a “real” functional script can introduce some annoying problems that nobody seems to mention. This was the case when I tried to develop a web page that was supposed to assist with managing a multi-tenant Exchange 2010 SP2 environment. I got some starter scripts from Jacob Dixon’s blog, and after some minor modifications I was able to execute them locally on the Exchange server without any problems. It was only when I tried to call these scripts from my web page that the problems started:

  • Problem 1: Following a TechNet library post (http://technet.microsoft.com/en-us/library/dd335083), I first tried to open a remote PowerShell session from a C# application to the Microsoft.Exchange configuration on a CAS server. I connected successfully but found that the execution context was very limited and restricted: it didn’t even allow assigning PowerShell variables (I got an “Assignment statements are not allowed in restricted language mode or a Data section” exception), meaning you can hardly do any scripting with it.
  • Problem 2: So I tried connecting to the default PowerShell WSMan endpoint (http://{ServerName}:5985/wsman) and executing the script, only to find that even though I had full permissions, when trying to execute an Exchange command it asked me to provide the server argument, as it didn’t recognize that it was executing on an Exchange server. I didn’t want to change my scripts, so I didn’t continue with this approach.
  • Problem 3: So I tried to open a local PowerShell session and, from this session, used the Import-PSSession cmdlet to connect to the remote Microsoft.Exchange configuration. I connected successfully and had full permissions with the right execution context. However, while retrying to execute the script, I found that the New-Mailbox cmdlet didn’t exist.
    Quick solution: I executed the “Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010” command and, magically, the command became available.
  • Problem 4: Trying again to execute the script, I got a new error: “New-Mailbox : Load balancing failed to find a valid mailbox database.”. OK – so I tried to specify the database explicitly with the –Database argument and got this error: “New-Mailbox : Couldn’t find database “{DB Name}”. Make sure you have typed it correctly.”. I googled it and found some forums suggesting to make sure the database exists and is mounted; of course my mailbox database did exist and was mounted, as the script worked from the local Exchange server. So I tried a different approach: I opened a remote PowerShell session to the default PowerShell WSMan endpoint, and from that remote session I imported a new session to the Microsoft.Exchange configuration. This approach eliminated the error message, and the mailbox was successfully created.
  • Problem 5: The PowerShell script also included some Active Directory commands, for creating a new AD account with the mailbox, which required loading the ActiveDirectory module. When I tried to import the module I got this warning:
    • Error initializing default drive: ‘Unable to contact the server. This may be because this server does not exist, it is currently down, or it does not have the Active Directory Web Services running.‘.

    Not a really helpful message, but googling it I understood that I had run into a double-hop problem, as AD is installed on a different machine. So I decided to open the first remote session with the CredSSP authentication method (which enables double hops). This wasn’t a trivial task, but luckily Drew has an excellent blog post about this topic (http://werepoint.blogspot.ca/2012/03/setting-up-ps-remoting-for-all-that.html) that helped me enable it.

  • Problem 6: After successfully creating a remote PowerShell session with CredSSP authentication, I tried again to import the ActiveDirectory module and got an “Out of Memory” exception. To solve this one, I executed this command on the remote server:
winrm set winrm/config/winrs @{MaxMemoryPerShellMB="1024"}
  • Problem 7: Now my script was executing successfully, but with all the parameters still hardcoded in the script. When I tried to make it more generic and pass the remote script’s parameter values from the C# application, I found that I could not use the following code:
    powershell.Runspace.SessionStateProxy.SetVariable("{Param name}", "{Param value}");
    

    This is because the SessionStateProxy object is not initialized for remote PowerShell sessions. To overcome this, I changed my code to iterate over the parameters and, for each one, add a “Set-Variable” command with the relevant values:

foreach (var parameter in parameters)
{
   Command command = new Command("Set-Variable");
   command.Parameters.Add("Name", parameter.Key);
   command.Parameters.Add("Value", parameter.Value);
   this.powerShell.Commands.AddCommand(command);
}

this.powerShell.Invoke();
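For reference, building the first remote session with CredSSP from C# looks roughly like this (a sketch; the server URI and credentials are placeholders, and actually opening the runspace requires a reachable, CredSSP-enabled server):

```csharp
using System;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
using System.Security;

class RemoteSessionDemo
{
    public static WSManConnectionInfo BuildCredSspConnection()
    {
        // Placeholder credentials; in a real application these come from configuration.
        var password = new SecureString();
        foreach (char c in "P@ssw0rd") password.AppendChar(c);
        var credential = new PSCredential(@"DOMAIN\user", password);

        // Connect to the default PowerShell endpoint and request CredSSP,
        // which allows the second hop to the AD server.
        return new WSManConnectionInfo(
            new Uri("http://exchangeServer:5985/wsman"),
            "http://schemas.microsoft.com/powershell/Microsoft.PowerShell",
            credential)
        {
            AuthenticationMechanism = AuthenticationMechanism.Credssp
        };
    }

    static void Main()
    {
        using (Runspace runspace = RunspaceFactory.CreateRunspace(BuildCredSspConnection()))
        {
            // runspace.Open(); // uncomment against a real server
            Console.WriteLine("connection info prepared");
        }
    }
}
```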

So, after a challenging day, I ended up with a basic but complete demo for remotely running a script on an Exchange 2010 SP2 server that creates a new tenant’s mailbox. Hopefully this post will save you some time implementing similar tasks. I have attached the code for this demo web page.

The problem:
As a global company with customers and partners all around the world, and especially one that prides itself on community and sharing, cross-language communication and understanding of foreign content soon became a barrier. The company’s community site enables users to post and comment in any language, and it is very common to see content in foreign languages, so we needed to enable communication between staff and customers, partners, and suppliers from around the globe regardless of their preferred language.
There are quick and free solutions, such as Google Translate, that only require adding a few lines of code to your web pages; they display a language combo that lets the end user translate all page content to their preferred language. The problems with this approach are:

  1. You have no control over which parts of your site get translated.
  2. You have no control over the combo’s design or the list of supported languages.
  3. The provider takes up space at the top of the page and may include ads.
  4. You can’t default to, or automatically translate to, the language the user selected as preferred in their community account.
  5. You rely on the provider’s availability. If the provider is not responding or is busy, your site will suffer.
  6. You are limited to one provider, although different providers have different strengths for different languages.

The other option is to use a translation provider API; there are a few providers, such as Google, Microsoft Bing, and WorldLingo, that let you send text over the web and get the translated text in return. This gives you full control over the interface and lets you take the user’s preferred language into consideration. The downside is that these providers cost money and/or limit the amount of text you can translate.
We needed a solution that minimizes the amount of text sent to these providers, enables monitoring each provider’s usage, and can dynamically alternate providers or the languages each provider handles, all in a fast and reliable manner that requires minimal changes to the existing site.

The solution:
Using Telligent 6.0’s new extensibility capabilities, I developed a plug-in and two widgets that tackle all the above issues and give the site admin a lot of options for deploying and using the translator. Although these were developed specifically for the Telligent community platform, they are a good example of how to use the Google and Bing APIs in production scenarios and can be applied, with some minor changes, to any web site.

The Translator widget: the widget works with the Translator plug-in to translate the content of any HTML element with a specific CSS class. The content of the selected element is sent off so its original language can be detected and translated. If the original language differs from the required language, the content is replaced with the translated version, and the user can switch between the versions by pressing a keyboard shortcut. The widget configuration also lets you show/hide a language combo and/or a quick-translate button that toggles between the original text and the last translated text (initialized to the user’s preferred language). Configuration also determines whether translation happens automatically or on demand, and whether to display a notification when translated text is detected and/or add a tooltip to the translated section.
All the widget activity happens after the page has completely loaded, and asynchronously, so the page’s loading time is not affected by placing the widget on the page.

Step-by-step configuration:

Translator - Widget Configuration

Translate all elements returned from this JQuery selector. A jQuery selector string is used to specify which HTML elements should be translated. The default value “.post-name, .post-content” will translate all elements that have the CSS class “post-name” or “post-content”.
Show languages combo. Show/hide the languages combo. The combo contains all supported languages from all translator providers, as defined in the Translator plug-in.
Show quick translate button. Show/hide the quick-translate button. This button enables quick switching between the original and translated text (a keyboard shortcut, Alt-Shift-T, is always available regardless of whether this button is displayed).
Get user language preference from: The user’s preferred language affects the behavior of the “On page load” options. The available values are:

  • User’s profile – the default language will be the language the user set as preferred in their community account profile (only if this language is enabled for one of the translator providers in the Translator plug-in’s configuration).
  • Browser accepted language (auto detect) – the default language will be the language configured in the web browser (using the Accept-Language header).
On page load: Configures what happens when the page is loaded. The options are:

  • None – best for minimizing use of the translator providers. The page will be translated only if the user requests a translated version.
  • Preload user preferred language – use this if you prefer to always show the original text but keep the translated version, in the user’s preferred language, ready in the background. When the user clicks “quick translate” or presses the keyboard shortcut, the translated content is already available on the client and the switch is instant.
  • Translate to user preferred language – the selected elements’ content will be translated automatically to the user’s preferred language.
Show notification when content was translated. Shows a message at the widget’s location indicating that some of the page’s content was translated. The message is shown only if at least one of the HTML elements that is supposed to be translated contains text in a language different from the user’s preferred language. The message text and style can be set via the widget resources and widget CSS, respectively.
Show notification when translation service was not available. Shows a message if the translation failed (in case no translation was found in the cache and the translator provider is not available). The message text and style can be set via the widget resources and widget CSS, respectively.

All the widget’s visible messages have translated versions in the widget’s resources for the following languages: English, Danish, German, Spanish, French, Italian, Dutch, and Swedish. The widget’s CSS style can be configured in the translator.css file that is included with the widget (using the widget studio).

The “SyncProfileWithBrowserPreferredLanguage” widget: this widget makes sure that the user’s profile is synchronized with the preferred language reported by the browser. On the first visit to a page that contains this widget, the widget checks whether there is a difference between the profile’s current preferred language and the browser’s preferred language. The action to take when a difference is found can be configured as follows:

SyncProfileWithBrowserPreferredLanguage- Widget Configuration

  • Update the user profile quietly – the user’s profile will be updated to prefer the same language as the browser, without any notification.
  • Update the user profile and show alert message – the user’s profile will be updated, and an alert message will notify the user about the change.
  • Show confirmation message and update profile if user confirms – a confirmation message will be displayed and, only if the user confirms, the profile will be updated.

The Translator plug-in: manages the translation process. There are some configurations you will need to set before you can start using the translation widget. The plug-in dynamically detects any implementation of the translator provider interface under the site’s bin folder and enables its configuration. The plug-in will try to send the entire element’s content in one big request, but on failure it will split the content into smaller pieces and try again, until it either succeeds or the content can no longer be split.
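The split-and-retry behavior can be sketched like this (TranslateWithSplit and the translateChunk delegate are hypothetical names; the plug-in’s actual code may differ):

```csharp
using System;

class SplitTranslator
{
    // Try to translate the whole content in one request; on failure, split at a
    // word boundary near the middle and recurse, until success or the content
    // cannot be split any further.
    public static string TranslateWithSplit(string content, Func<string, string> translateChunk)
    {
        try
        {
            return translateChunk(content);
        }
        catch (Exception)
        {
            int mid = content.LastIndexOf(' ', content.Length / 2);
            if (mid <= 0) mid = content.IndexOf(' ', content.Length / 2);
            if (mid <= 0) throw; // not splittable: give up
            return TranslateWithSplit(content.Substring(0, mid), translateChunk)
                 + TranslateWithSplit(content.Substring(mid), translateChunk);
        }
    }

    static void Main()
    {
        // Fake provider that rejects requests longer than 6 characters.
        Func<string, string> provider = s =>
        {
            if (s.Length > 6) throw new Exception("too long");
            return s.ToUpperInvariant();
        };
        Console.WriteLine(TranslateWithSplit("hello world foo", provider)); // HELLO WORLD FOO
    }
}
```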
General configuration:

Translator Plug-in Configuration

Keep in memory sliding expiration cache. The number of minutes to keep translations in the in-memory cache. The cache works in a sliding manner: as long as a specific translation has been accessed within the last x minutes, the cached record stays in memory.
Enable Database Cache. This check box is not enabled by default, but it is recommended to activate it. When first enabled, the plug-in creates a new schema in the database called “translator”, including a table and some stored procedures. The plug-in supports SQL Server 2008 or later. Make sure the SQL user configured in the connectionString.config file has db_owner permission when you first save this configuration; after the schema has been created you can remove the db_owner permission.
Enable synchronization check between the browser’s language preference and user’s profile. Defines whether synchronization between the browser’s preferred language and the user’s profile is enabled. Used by the SyncProfileWithBrowserPreferredLanguage widget. When first enabled, the plug-in creates a new Boolean profile field called “NeedToCheckSyncPreferredLanguage” that records whether the synchronization has been performed for a specific user.
Show statistics for. This read-only field shows usage statistics for each provider over different periods: last day, last month, and last year. This information can be used to monitor how much each provider actually translated, as some providers charge based on the amount of activity.
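A sliding-expiration cache like the one described can be sketched with System.Runtime.Caching (an illustration of the concept, not the plug-in’s actual code; the key format is hypothetical):

```csharp
using System;
using System.Runtime.Caching;

class TranslationCacheDemo
{
    static void Main()
    {
        ObjectCache cache = MemoryCache.Default;

        // Sliding expiration: each read resets the 30-minute window, so
        // frequently used translations stay in memory.
        var policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(30) };

        string key = "en|fr|Hello world"; // hypothetical key: source|target|text
        cache.Set(key, "Bonjour le monde", policy);

        Console.WriteLine((string)cache.Get(key)); // Bonjour le monde
    }
}
```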

Each provider can have a specific configuration, but it usually includes an authentication ID and a list of supported languages. The plug-in lets you configure which provider translates which language. If the same language is marked on more than one provider, the provider with the highest ID is responsible for that language’s translation. Specifying an authentication ID for at least one of the providers is mandatory.
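The “highest ID wins” rule amounts to something like this (hypothetical provider data, for illustration only):

```csharp
using System;
using System.Linq;

class ProviderSelectionDemo
{
    public class Provider
    {
        public int Id;
        public string Name;
        public string[] Languages;
    }

    // Per language, the provider with the highest ID handles the translation.
    public static string PickProvider(Provider[] providers, string language) =>
        providers.Where(p => p.Languages.Contains(language))
                 .OrderByDescending(p => p.Id)
                 .Select(p => p.Name)
                 .FirstOrDefault();

    static void Main()
    {
        var providers = new[]
        {
            new Provider { Id = 1, Name = "Google", Languages = new[] { "fr", "de" } },
            new Provider { Id = 2, Name = "Bing",   Languages = new[] { "de", "es" } },
        };
        Console.WriteLine(PickProvider(providers, "de")); // Bing (highest ID wins)
        Console.WriteLine(PickProvider(providers, "fr")); // Google
    }
}
```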

Google Provider - Configuration

Deployment steps:

  1. Download the binary and code from the link at the bottom of this post.
  2. Copy Telligent.Extensions.Translator.dll and one or both provider assemblies (Telligent.Extensions.Translator.Bing.dll, Telligent.Extensions.Translator.Google.dll) to the Telligent website’s Bin folder.
  3. Obtain the providers’ application IDs:
    1. Google Translator from here.
    2. Bing Translator from here.
  4. Go to the “Manage Plug-in” page in the control panel (http://yourSite/controlpanel/Settings/ManagePlugins.aspx). You should see the “Translator Service Plug-in” in the plugins list.
  5. Click “configure” beside the translator plug-in and set up the application IDs you got in step 3.
  6. Enable the database cache – optional.
  7. Go to the “Manage Widgets” page in the control panel (http://yoursite/controlpanel/tools/ManageWidgets/ListWidgets.aspx). Click “Import Widgets” and select TranslatorWidgets.xml as the widget file.
  8. Include the Translator widget on the page you would like to translate. Configure the widget, and make sure to choose the required elements to translate (by configuring “Translate all elements returned from this JQuery selector” on the Translator widget).
  9. Browse the page.

The source code and binaries for this article are available here

Alex Crome recently published a good article about creating a factory default widgets plugin. After trying to follow his process to develop some widgets for our community site, I found a couple of issues with the proposed method:

  • Many repetitive tasks. For each widget you need to:
    • Copy and paste the XML provided in the article.
    • Generate a new GUID and set it in the XML file.
    • Create a new folder, using said GUID as its name, under the widgets folder.

    Though all of Telligent’s out-of-the-box widgets follow the same standards, implementing them requires some additional steps:

    • Creating configuration elements for the widget title and for the widget configuration, and using these configurations in the widget XML file.
    • Adding a ui.js file to the widget and using the registerEndOfPageHtml method to include this file with the widget.
    • Defining a property with the project’s namespace in the ui.js file and extending the jQuery object.
    • Defining a register function in the ui.js file and calling this function from the widget XML file, providing the context object.
  • After creating a few widgets in one project, it becomes difficult to work with these GUID folders; updating a widget requires you to open the widget XML, check the widget’s GUID, and look for a matching folder.
  • Deploying any change to the widget files requires opening a browser on the site’s control panel and, from the developer tools, clicking the “Clear Cache” button.

Personally, I feel this method requires a lot of work before you even begin developing your widget…

So, being my lazy old self, I started thinking of a way to reduce all the manual work described so far while still enforcing best practices, and I ended up with two Visual Studio item templates that do the work for me.

After deploying the item templates, the process of developing a widget in Visual Studio looks like this:

  1. Open a new Class Library project.
  2. Right-click the project name and open the project properties. Under the Application tab, set the assembly name and the default namespace.
  3. Save the solution.
  4. Add references to Telligent.Evolution.Components.dll and Telligent.Evolution.ScriptedContentFragments.dll.
  5. Right-click the project name and select Add > New Item. Choose “WidgetFactoryDefaultProvider” from the installed templates.
    WidgetFactoryDefaultProvider
  6. Give the default provider a name, like [CompanyName]FactoryDefaultProvider.cs.
  7. When you click the Add button, the following window will open:
    factory dialog
  8. In the Deployment site URL box, enter the URL of the site you would like to automatically deploy the widgets to (do not include any page name, like default.aspx, as a suffix).
  9. In the Deployment site root folder box, enter the root folder of the above site (the folder that contains the bin folder).
  10. If you would like the deployment script to automatically delete all the content of the widget provider’s folder every time you deploy an update, leave the checkbox unchecked. Otherwise, existing files will remain in the folder and only files that exist in the project will be overwritten.
  11. Click the Create button. The item template will perform the following tasks automatically:
    1. Create a DefaultWidget folder under the project root.
    2. Create a factory default widgets plug-in with a new GUID as the provider identifier.
    3. Copy an assembly that enables clearing the cache to the site’s bin folder.
    4. Copy a generic handler file to the utility folder for accessing the above assembly.
    5. Create the file WidgetDeploy.ps1, a PowerShell script, based on the answers you provided, that deploys the files to the Telligent site and clears the cache.
  12. Compile the project and copy the resulting assembly to the deployment site’s Bin folder, or just add the new class library as a reference to the deployment site project.
  13. Enable the new provider from the “Manage plugins” page.
  14. To simplify executing the PowerShell script, you can add an external tool in Visual Studio:
    1. In the Tools menu, choose External Tools.
    2. In the External Tools dialog box, choose Add, and enter a name for the menu option in the Title box, like “Update widget’s files”.
    3. In the Command box, enter powershell.exe.
    4. In the Arguments box, enter "& '$(ProjectDir)WidgetDeploy.ps1'"
    5. Click the OK button. After you have added the tool to the Tools menu, you can easily launch it from there.

To add new widgets to the project:

  1. Under the DefaultWidget folder, add a new item and choose the Widget template.
  2. In the Name box, enter the name of the widget and click Add.
  3. This will perform the following tasks automatically:
    1. Create a folder with the widget’s name.
    2. Create the widget XML file and the ui.js file under this folder.
    3. The XML file and the ui.js file contain all the necessary content described above (a new GUID, default configuration, namespace registration, …).

The development process should follow these steps:

  1. Add/Edit a widget definition or its supplementary files.
  2. From the Tools menu, click “Update widget’s files”.
  3. Review the change on the Telligent Evolution site.
  4. Repeat.

To deploy the item templates:

  1. Download this file and extract the RAR file.
  2. Execute “DeployTemplates.cmd”.
  3. Make sure that PowerShell is the right version and is configured to allow executing unsigned scripts:
    1. Execute “StartPowerShell.cmd”.
    2. Type Get-Host and make sure the version is 2.
    3. Type Get-ExecutionPolicy.
    4. If the response is Restricted, type the following command:
      Set-ExecutionPolicy RemoteSigned

The item templates source is also included in the RAR file.

In the first two parts of this series I showed how to develop a Silverlight application that displays information from a database in an occasionally connected environment. In this last part I will show you how to manage updates (including inserting and deleting records) and synchronize them back to the database.

The first step is to make sure we are working with the relevant DomainContext object and to apply all updates to the offline/online DomainContext, depending on the application’s status. To accomplish this, I created a factory method that checks the application’s status and returns the relevant DomainContext. This method should be used whenever we need a DomainContext object in the Silverlight projects.

public static NorthwindDomainContext GenerateNewNorthwindDomainContext()
{
    NorthwindDomainContext northwindDomainContext;
    if (OfflineHelper.IsApplicationOnline)
    {
        northwindDomainContext = new NorthwindDomainContext();
    }
    else
    {
        northwindDomainContext = 
          (NorthwindDomainContext)DomainContextExtension.
                    GetOfflineDomainContext(typeof(NorthwindDomainContext));
    }

    return northwindDomainContext;
}

The second step is submitting the changes. To accomplish this, we need to check the application’s status: if it is online we use the out-of-the-box SubmitChanges method, and if it is offline we just call the OCSaveAsOfflineDomainContext method that we implemented in the previous parts. Since the Entity objects are tracking all the changes, we do not need to do anything except serialize the updated DomainContext to the offline storage. To simplify the operation I created this extension method:


public static Task OCSubmitChangesAsync(this DomainContext source)
{
    if (OfflineHelper.IsApplicationOnline)
    {
        return source.SubmitChangesAsync();
    }
    else
    {
        return TaskEx.Run(() =>
        {
            source.OCSaveAsOfflineDomainContext();
        });
    }
}

The third and last step is synchronizing with the database when the application comes back online. All we need to do is call SubmitChanges on the offline DomainContext. Although the MSDN sample that enables serializing RIA DomainContext objects fits the job, it still required fixing some bugs to make it work properly (I attached the final code at the bottom of the post). Again, I wrapped it up in an extension method:

public static Task OCSyncWithOnlineServiceAsync(this DomainContext source)
{
    Task result = null;
    if (OfflineHelper.IsApplicationOnline)
    {
        DomainContext offlineDomainContext = 
             GetOfflineDomainContext(source.GetType());
        try
        {
            result = offlineDomainContext.SubmitChangesAsync();
        }
        catch (Exception ex)
        {
            HandleSyncErrors(ex);
        }
    }

    return result;
}

An example of using the above methods to perform offline updates would look something like this:

NorthwindDomainContext domainContext = 
          DomainContextFactory.GenerateNewNorthwindDomainContext();

await domainContext.LoadAsync(domainContext.GetCustomersQuery());
domainContext.OCSaveAsOfflineDomainContext();

OfflineHelper.IsApplicationOnline = false;

domainContext = DomainContextFactory.GenerateNewNorthwindDomainContext();
// Do some updates on the domainContext objects...

OfflineHelper.IsApplicationOnline = true;
await domainContext.OCSyncWithOnlineServiceAsync();

In addition, I have attached a simple application that demonstrates the use of this framework. Notice that you can take the application offline, make changes, close and reopen the application, and the changes will still be there. The application can also run out of the browser, so you will be able to run it even when the web server and the SQL server are unavailable.

Summary: To convert your application to support offline scenarios, follow these steps:

  1. Download and install the Microsoft Visual Studio Asynchronous Framework.
  2. Add a reference to the SilverlightOccasionallyConnected DLL (attached to this post).
  3. Create a factory method, as demonstrated, to create the DomainContext, and use only this method throughout the Silverlight projects.
  4. Use OCLoadAsync and OCSubmitChangesAsync to query and submit your changes, respectively.
  5. Make sure that the filter and order-by logic is located in the Silverlight projects and not in the server-side project.
  6. Add the following functionality to your application:
    1. An offline/online indication.
    2. An option to generate an offline copy of the database using the OCSaveAsOfflineDomainContext function. Load all the data your application will need access to in offline mode before calling this method.
    3. An option to synchronize using the OCSyncWithOnlineServiceAsync function.

Source Code for this article

In the previous part I showed how to easily write a unified code that automatically checks the application connectivity status and executes a query against the database, or an offline storage, whether the application is online/offline respectively. In this part I will describe in more detail the implementation of the DomainContext extension methods that enables this.
The DomainContextExtension class includes the OCLoadAsync method, which receives as a parameter an RIA Services query that can be customized on the fly, and returns a Task of IEnumerable of the requested entity type.

The method performs three main steps:

  1. If the application is online, it simply loads the data from the RIA service.
  2. Otherwise, it creates an IQueryable of the requested entity from the offline DomainContext.
  3. It then transfers the query from the current DomainContext to the offline DomainContext.
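A call to OCLoadAsync might look like the following sketch. The Customer entity and GetCustomersQuery method are hypothetical placeholders for your own generated types, and, per step 5 of the summary above, the Where and OrderBy clauses live in the Silverlight project:

```csharp
var context = DomainContextFactory.GenerateNewNorthwindDomainContext();

// Customize the EntityQuery on the fly; the same query is executed
// against the RIA service when online, or against the offline
// DomainContext when offline.
EntityQuery<Customer> query = context.GetCustomersQuery()
                                     .Where(c => c.Country == "UK")
                                     .OrderBy(c => c.CompanyName);

IEnumerable<Customer> customers = await context.OCLoadAsync(query);
```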

I will explain each step in detail:

  1. The class SilverlightOccasionallyConnected.OfflineHelper has a static Boolean property, IsApplicationOnline. The property is initialized with the built-in NetworkInterface.GetIsNetworkAvailable() value. The idea is to set the status when the application loads and to let the application control this property afterward. If the application is online, the method uses the Async framework to load the query from the server, as I described in detail in this article.
  2. If the application is offline, we need to execute the query on the offline DomainContext that is stored in the application's Isolated Storage. I used the example from the MSDN Code Gallery to load the DomainContext from Isolated Storage. Next, we need to create an IQueryable for the requested entity. If the Entity Framework model used in the domain service does not have any inheritance relations, we can just write:
    OfflineDomainContext.EntityContainer.
                           GetEntitySet(requestEntityType)
    

    If, however, we do have an inheritance relation, we need an extra step: find the base type of the entity (the parent entity that inherits directly from the System.ServiceModel.DomainServices.Client.Entity object) and then use the following syntax:

    OfflineDomainContext.EntityContainer.
                           GetEntitySet(baseEntityType).OfType<T>()
    

    where T is the requested entity type. We also need to convert the IEnumerable to an IQueryable so that the next step, changing the query source, is possible.

  3. This step is the trickiest. We need to transfer the query from the current DomainContext to the offline DomainContext. The queries on the DomainContext are of type EntityQuery<T>. This type has a Query property of type IQueryable that contains the specific requested query (for example, where-clauses that narrow the result).
    The IQueryable exposes an Expression property containing the expression tree that builds the requested query. By debugging, I realized that the expression tree consists of nested MethodCallExpression nodes, where the first argument is either the next MethodCallExpression or the ConstantExpression that represents the original IQueryable object (the object we'd like to replace), and the second argument is a UnaryExpression that represents the lambda expression. Given this tree structure, I wrote a recursive method that walks the expression tree and creates new expressions with a different source for the query (the offline DomainContext IQueryable that we created in step 2).
private static Expression ChangeQueryableExpressionSource<T>(
                                             Expression expression,
                                             IQueryable<T> newSource)
{
    // Each node is a query-operator call (Where, OrderBy, ...):
    // the first argument is the query source, the second the lambda.
    MethodCallExpression methodCallExpression =
                                        (MethodCallExpression)expression;
    Expression expressionLeftArgument =
                                        methodCallExpression.Arguments[0];
    Expression expressionRightArgument =
                                        methodCallExpression.Arguments[1];

    if (expressionLeftArgument is ConstantExpression)
    {
        // Reached the root of the chain: swap the original IQueryable
        // for the new (offline) source.
        return Expression.Call(
                               methodCallExpression.Method,
                               Expression.Constant(newSource),
                               expressionRightArgument);
    }
    else
    {
        // Recurse into the inner call, then rebuild this call on top
        // of the rewritten inner expression.
        return Expression.Call(
                               methodCallExpression.Method,
                               ChangeQueryableExpressionSource(
                                       expressionLeftArgument, newSource),
                               expressionRightArgument);
    }
}
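The rewriting technique itself can be demonstrated without any Silverlight dependency, using plain LINQ to Objects. The following self-contained sketch (which includes a copy of the recursive method so it compiles on its own) builds a query against one source and then retargets its expression tree at another:

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

static class QuerySourceDemo
{
    // Copy of the article's recursive rewriter, inlined for a runnable demo.
    static Expression ChangeQueryableExpressionSource<T>(
        Expression expression, IQueryable<T> newSource)
    {
        var call  = (MethodCallExpression)expression;
        var left  = call.Arguments[0];
        var right = call.Arguments[1];

        return left is ConstantExpression
            ? Expression.Call(call.Method, Expression.Constant(newSource), right)
            : Expression.Call(call.Method,
                              ChangeQueryableExpressionSource(left, newSource),
                              right);
    }

    static void Main()
    {
        IQueryable<int> onlineSource  = new[] { 1, 2, 3, 4 }.AsQueryable();
        IQueryable<int> offlineSource = new[] { 10, 20, 30, 40 }.AsQueryable();

        // Build a query against the "online" source...
        IQueryable<int> query = onlineSource.Where(n => n > 2).OrderBy(n => n);

        // ...then retarget its expression tree at the "offline" source.
        Expression rewritten =
            ChangeQueryableExpressionSource(query.Expression, offlineSource);
        var results =
            offlineSource.Provider.CreateQuery<int>(rewritten).ToList();

        Console.WriteLine(string.Join(", ", results)); // 10, 20, 30, 40
    }
}
```

Note that the Where clause (n > 2) now runs against the offline values, so all four of them pass the filter.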

Around the above method I added some basic functionality that loads and saves the DomainContext to a fixed-name file in Isolated Storage.
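That persistence code is not shown here, but its shape is roughly the following sketch. The file name is hypothetical, and the byte-array serialization of the DomainContext's entities is assumed to come from the MSDN Code Gallery sample mentioned above; only the Isolated Storage plumbing is shown:

```csharp
private const string OfflineFileName = "OfflineDomainContext.bin";

// Save a serialized snapshot of the DomainContext to Isolated Storage.
public static void SaveOffline(byte[] serializedEntities)
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    using (var stream = store.CreateFile(OfflineFileName))
    {
        stream.Write(serializedEntities, 0, serializedEntities.Length);
    }
}

// Load the snapshot back, or return null when no offline copy exists.
public static byte[] LoadOffline()
{
    using (var store = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (!store.FileExists(OfflineFileName))
            return null;

        using (var stream = store.OpenFile(
                   OfflineFileName, FileMode.Open, FileAccess.Read))
        {
            var buffer = new byte[stream.Length];
            stream.Read(buffer, 0, buffer.Length);
            return buffer;
        }
    }
}
```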

In the next part I will add submit-update functionality that will enable making updates in an occasionally connected application.

Feel free to use this code in your applications and let me know what you think about it.

Source Code for this article