Channel: Stefan Stanev's SharePoint blog

Updating read-only fields with the Lists web service


This is a known limitation (from a user's perspective, that is; obviously Microsoft had a good reason to design it this way) – you cannot edit fields like Created, Modified, Created By, Modified By, etc. using the standard SharePoint Lists web service and its UpdateListItems method (Updated 2011-1-20 – the Created By (Author) and Modified By (Editor) fields can be updated in this manner, but only for non-library SharePoint lists). On the other hand, using the object model with SPListItem.UpdateOverwriteVersion or SPWeb.ProcessBatchData you can easily modify the values of read-only fields. And back to the Lists web service – I recently found a small work-around for this issue. The idea is quite simple – if the problem is with read-only fields, why not just change the ReadOnly attribute in the field's schema and check whether that works. I tested the schema change and it turned out that the UpdateListItems method was then able to change the values of the modified fields. The next question is whether it is possible to make this field schema change with a web service – the answer is again yes – the Lists web service has the UpdateList method, which allows, among other things, modifying the schema of the fields in the list. So, combining the two methods, it is possible to first call UpdateList and change the fields whose values should be modified to non-read-only, then call UpdateListItems to update the values in the list items, and finally call UpdateList again to set the ReadOnly attribute back to its original value (check the sample code snippet below). Note that setting the ReadOnly schema attribute back to TRUE is necessary – otherwise you will see side effects like the fields appearing in the new and edit forms of the list, the Modified field no longer reflecting the last modified date, etc.
Another important note – if the field is sealed (its "Sealed" attribute is set to TRUE), Lists.UpdateList cannot update its schema, so the whole approach is unusable in this case (unless you set the "Sealed" attribute to FALSE with the object model or a tool, which further complicates things).

And now several words about the disadvantages of this work-around (a little attempt to discourage you from using it). Let me start with the schema changes – it is obviously not natural to change the schema of a list (or its fields) just to get an update of its list items right. Another thing is that these schema changes and reverts will become a real mess if you try to run several updates simultaneously. So my advice is that this approach should be used only for one-time batch updates and/or when there is really no alternative to using the standard SharePoint web services (no option to use the object model or to create and deploy a custom web service).

And here is a small sample code snippet that demonstrates this work-around:

        public void Test()

        {

            string webUrl = "http://myserver";

            string listName = "docs";

            Lists.ListsSoapClient listsClient = this.GetListsClient(webUrl);

 

            // 1st a call to Lists.GetList - we need the list's version - it is returned in the Version attribute

            XElement listData = XElement.Parse(listsClient.GetList(listName).OuterXml);

            string listID = listData.Attribute("ID").Value;

            string version = listData.Attribute("Version").Value;

            // in the updateFields parameter of Lists.UpdateList the full schema of the fields should be provided

            string updateFields = @"<Fields>

   <Method ID='1'>

      <Field ID='{28cf69c5-fa48-462a-b5cd-27b6f9d2bd5f}' ColName='tp_Modified' RowOrdinal='0' ReadOnly='FALSE' Type='DateTime' Name='Modified' DisplayName='Modified' StorageTZ='TRUE' SourceID='http://schemas.microsoft.com/sharepoint/v3' StaticName='Modified' FromBaseType='TRUE' Version='4' ShowInNewForm='FALSE' ShowInEditForm='FALSE' />

   </Method>

   <Method ID='2'>

      <Field ID='{8c06beca-0777-48f7-91c7-6da68bc07b69}' ColName='tp_Created' RowOrdinal='0' ReadOnly='FALSE' Type='DateTime' Name='Created' DisplayName='Created' StorageTZ='TRUE' SourceID='http://schemas.microsoft.com/sharepoint/v3' StaticName='Created' FromBaseType='TRUE' Version='4' ShowInNewForm='FALSE' ShowInEditForm='FALSE' />

   </Method>

</Fields>";

            XmlDocument doc = new XmlDocument();

            doc.LoadXml(updateFields);

            // Lists.UpdateList: set fields to not read-only

            XElement result = XElement.Parse(listsClient.UpdateList(listID, null, null, doc.DocumentElement, null, version).OuterXml);

            // get updated version from the result XML - for the second call of Lists.UpdateList

            version = result.Elements().Where(el => el.Name.LocalName == "ListProperties").First().Attribute("Version").Value;

 

            // prepare the XML for the list item update

            string updateDates = @"<Batch OnError='Continue'>

  <Method ID='M0' Cmd='Update'>

    <Field Name='ID'>1</Field>

    <Field Name='FileRef'>/docs/zt.txt</Field>

    <Field Name='Modified'>2010-04-04T22:17:00Z</Field>

    <Field Name='Created'>2010-01-01T00:05:00Z</Field>

  </Method>

</Batch>";

 

            doc.LoadXml(updateDates);

            // Lists.UpdateListItems: update Created & Modified

            result = XElement.Parse(listsClient.UpdateListItems(listID, doc.DocumentElement).OuterXml);

 

            // revert the fields' schema

            updateFields = updateFields.Replace("ReadOnly='FALSE'", "ReadOnly='TRUE'");

            doc.LoadXml(updateFields);

            // Lists.UpdateList: set fields back to read-only

            result = XElement.Parse(listsClient.UpdateList(listID, null, null, doc.DocumentElement, null, version).OuterXml);

        }

The code demonstrates how to update the Created and Modified fields of a document library item (check the comments inside the code for more details). Note that this is sample-quality code and you shouldn't use it directly without serious improvements – for example, using a try-finally block with the second call to Lists.UpdateList placed in the "finally" block; adding an extra call to Lists.GetList to get a fresh value of the list's "Version" attribute just before the second list update; etc.
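On the revert step specifically – the sample flips the ReadOnly attribute back with a plain string Replace, which silently fails if the attribute quoting or casing ever differs. A minimal sketch of a more robust toggle using plain .NET XML APIs (no SharePoint dependency; the helper name is mine):

```csharp
using System.Xml.Linq;

public static class FieldSchemaHelper
{
    // Sets the ReadOnly attribute on every Field element inside an
    // <Fields><Method><Field ... /></Method></Fields> update batch.
    public static string SetReadOnly(string fieldsXml, bool readOnly)
    {
        XElement fields = XElement.Parse(fieldsXml);
        foreach (XElement field in fields.Descendants("Field"))
        {
            field.SetAttributeValue("ReadOnly", readOnly ? "TRUE" : "FALSE");
        }
        return fields.ToString(SaveOptions.DisableFormatting);
    }
}
```

The revert then becomes `updateFields = FieldSchemaHelper.SetReadOnly(updateFields, true);` regardless of how the attribute was originally written.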


How to display all versions in a SharePoint list view page – part 2


Several months ago I wrote a small posting describing how you can trick the standard list view web part into displaying all versions of the list items in a SharePoint list – link. One problem with this was that even if the list view columns display the values of the relevant item versions, the "Name" column in document libraries always links to the current version of the document (the same holds for the "Title" column in non-library lists and also for the commands in the item's context menu). In one of the comments on the original posting there was a request to demonstrate a solution or work-around for that, which is what I am going to elaborate on in this posting. The solution is applicable to document libraries only and uses a custom computed field that renders as a link which, when clicked, opens the correct version of the selected document. The link actually calls a small custom application page (that should be deployed to the "12/TEMPLATE/LAYOUTS" folder of the SharePoint installation) which, with several lines of code using the SharePoint object model, redirects either to the current document version or to one of its previous ones (using the _VTI_HISTORY URL pattern).

And the implementation itself, the schema XML of the computed field:

<Field ID="{ccccdddd-3710-4374-b433-61cb4a686c12}"

       ReadOnly="TRUE"

       Type="Computed"

       Name="VersionLinkFilename"

       DisplayName="Version link"

       DisplayNameSrcField="FileLeafRef"

       Filterable="FALSE"

       AuthoringInfo="(linked to version)"

       SourceID="http://schemas.microsoft.com/sharepoint/v3"

       StaticName="VersionLinkFilename" FromBaseType="TRUE">

  <FieldRefs>

    <FieldRef Name="FileLeafRef" />

    <FieldRef Name="FileRef" />

    <FieldRef Name="_UIVersion" />

  </FieldRefs>

  <DisplayPattern>

    <HTML><![CDATA[<a href=']]></HTML>

    <HttpVDir CurrentWeb="TRUE"></HttpVDir><HTML><![CDATA[/_layouts/versionredir.aspx?FileRef=%2f]]></HTML><LookupColumn Name="FileRef" URLEncode="TRUE" /><HTML><![CDATA[&Version=]]></HTML><Column Name="_UIVersion" />

    <HTML><![CDATA['>]]></HTML>

    <LookupColumn Name="FileLeafRef" />

    <HTML><![CDATA[</a>]]></HTML>

  </DisplayPattern>

</Field>

The easiest way to add a computed field to your document library is using code (or a custom tool) – basically you can use the SPFieldCollection.AddFieldAsXml method (calling it on the Fields property of your SPList instance), providing the schema of the field in the first parameter.
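A minimal sketch of that call (server object model; the site URL and library name are illustrative, and the field schema is assumed to be the XML shown above, saved to a file):

```csharp
using System.IO;
using Microsoft.SharePoint;

class AddComputedFieldSample
{
    static void Main()
    {
        // The computed field schema XML from above, stored in a file.
        string fieldSchema = File.ReadAllText("VersionLinkFilename.xml");

        using (SPSite site = new SPSite("http://myserver"))      // illustrative URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Shared Documents"];         // illustrative library
            // AddFieldAsXml takes the full field schema as its first parameter.
            list.Fields.AddFieldAsXml(fieldSchema);
            list.Update();
        }
    }
}
```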

And the code of the custom application page:

<%@ Page Language="C#" Debug="true" %>

<%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

<%@ Register Tagprefix="Utilities" Namespace="Microsoft.SharePoint.Utilities" Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

<%@ Import Namespace="Microsoft.SharePoint" %>

<script runat="server">

    protected override void OnLoad(EventArgs e)

    {

        base.OnLoad(e);

 

        string fileRef = this.Request.QueryString["FileRef"];

        int version = int.Parse(this.Request.QueryString["Version"]);

 

        SPWeb web = SPContext.Current.Web;

        SPFile file = web.GetFile(fileRef);

        SPListItem item = file.Item;

 

        string redirUrl = null;

        if ((int)item["_UIVersion"] == version) redirUrl = web.Url.TrimEnd('/') + "/" + file.Url;

        else

        {

            foreach (SPListItemVersion v in item.Versions)

            {

                if ((int)v["_UIVersion"] == version)

                {

                    redirUrl = web.Url.TrimEnd('/') + "/_vti_history/" + version + "/" + file.Url;

                    break;

                }

            }

        }

        if (redirUrl != null)

        {

            Response.Redirect(redirUrl);

        }

    }

</script>

The page should be saved as “VersionRedir.aspx” in the LAYOUTS folder of your SharePoint hive.

SharePoint 2010 explorer (using the Client object model)


This is a small WinForms tool, again using the TreeView-PropertyGrid combo very much like the popular SharePoint Manager 2010 tool (which became popular with its 2007 version), but instead using the client object model introduced in SharePoint 2010. This is actually the second tool in a row using the TreeView-PropertyGrid setup that I have released – the first being the WebPart manager tool that I demonstrated several weeks ago – check here (both tools actually reuse the same core visual components).

The tool can be downloaded from here (sources).

[screenshot: spx1]

So as you can see from the screen-shot, the tool shows the client object model hierarchy in its TreeView control in the left pane, and when an object is selected there, its properties are displayed in the PropertyGrid control on the right. Since the client object model is used, only the hierarchy below the site collection level is displayed (as opposed to the "server" object model, where you have everything starting from the farm level). On the other hand, the client object model, which internally utilizes the SharePoint web services, allows you to connect remotely to multiple SharePoint farms as long as you have the appropriate access rights.

The tool relies extensively on two things. First and foremost, reflection – the object model hierarchy in the TreeView is actually a one-to-one mapping of the real object model hierarchy, since the child nodes are the real member properties of the parent object's class; likewise, the available commands/actions for every object in the hierarchy are the class methods of that object, which are retrieved and eventually executed with reflection. The second is the PropertyGrid control itself – it is used not only to visualize the properties of the currently selected object but also in the execute-action dialog, where it gets populated with the parameters of the underlying client object class method and allows setting them.

System requirements and installation notes: you need .NET 3.5 and SharePoint 2010 on the machine that you use the tool on. Note that the tool uses only two SharePoint assemblies – Microsoft.SharePoint.Client and Microsoft.SharePoint.Client.Runtime – so you will be able to run the tool on a non-SharePoint machine if you have these two copied on that machine – I am not sure though whether Microsoft allow these to be distributed in this way. The tool consists of 4 files: SPx.exe, SPx.Core.dll, SPx.SharePoint.Client.dll and spxclient.xml – there’re no special installation steps, you can just copy the files to the target machine and start the executable.

General usability notes: here is a small list of some useful notes and tips for using the tool:

  • the tool’s PropertyGrid is an extended version of the original PropertyGrid control. One very useful addition to it is the ability to edit multi-line text in string properties – you can use that when you right-click a grid item containing a string property of the selected object – then a small dialog box will open in which the text value of the property will appear in a multi-line text box. This is handy especially in cases when you want to edit the Field.SchemaXml or the View.ViewQuery properties.

    [screenshot: spx6]
  • when you select a node in the TreeView the tool calls the ClientContext.Load and ClientContext.ExecuteQuery methods to load the corresponding object and its properties. Various errors can occur during loading and these are displayed in a small strip just below the PropertyGrid control: 

    [screenshot: spx2]
    Most notifications reflect normal error conditions in the client object model – for example, the Web.AssociatedMemberGroup property may be null in the "server" object model, so when you try to load it with the client object model, the latter will raise an exception to that effect. Another common notification is the "Info: Not all properties are available." one – you will see it for all Folder objects which represent non-list folders (e.g. Web.RootFolder) – in this case the Folder.ContentTypeOrder and Folder.UniqueContentTypeOrder properties are invalid in the "server" object model and consequently cannot be retrieved using the client model.
  • when you right click a node in the TreeView control a context menu with all public instance methods of the object’s class will appear (the same menu for the currently selected object is available in the toolbar below the “Actions” button):

    [screenshot: spx3]
    when you click on one of the “reflected” method commands a dialog box will pop up – in the dialog a PropertyGrid control will be populated with the selected method’s parameters which you can use to provide appropriate values to execute the method:

    [screenshot: spx4]
  • a BIG NOTE here – though all public instance methods of the selected object are available in the context menu, in many cases you won’t be able to execute the selected client object method. There are two main reasons for that. First – the limitations of the UI of the tool – you may simply not be able to provide values in the PropertyGrid control for the method’s parameters – note that you can insert values in the grid items only for primitive types, not for class types. Secondly – there are cases when you need to call several client object methods consecutively (passing the objects returned from the preceding calls to the next ones) before committing them with ClientContext.ExecuteQuery – the tool normally calls ClientContext.ExecuteQuery immediately after you click the “OK” button in the parameters dialog, so generally you won’t be able to handle more complex scenarios with several method invocations (the tool though has a switch that can be used to temporarily disable the automatic call of ClientContext.ExecuteQuery – see below).
  • despite the above mentioned limitations you can use the tool in at least these cases: create, update and delete operations for Web, List, Field, View and Folder objects; upload, delete, check in/out and publish files; add and delete content types to lists; etc.
  • the tool provides two enhancements for calling methods with parameters of types inheriting Microsoft.SharePoint.Client.ClientObject and Microsoft.SharePoint.Client.ClientValueObject. In the first case you have the option to click the grid item’s ellipsis button, which will display a small popup with the same object hierarchy tree that you have in the tool’s left pane, from which you will be able to select an existing ClientObject from the hierarchy to be used as the method’s parameter:

    [screenshot: spx5]
    as you see in the screen-shot this is very useful when you call the List.ContentTypes.AddExistingContentType method and provide a reference to a site content type so that you can add that content type to the list.
    In the second case the tool “flattens” the parameter and instead of a single grid item for the object it displays separate grid items for every property of the ClientValueObject inheriting class. A small example for that – when you call the WebCollection.Add method to create a new sub-web, in the method’s parameters PropertyGrid instead of a single grid item containing a null WebCreationInformation object, you will see six grid items containing the properties of the WebCreationInformation class – Url, Title, Description, Language, WebTemplate and UseSamePermissionsAsParentSite.
  • the “Add return object to tree” option in the method invocation dialog – this is available for all non-void methods and allows you to dynamically add the object returned from the method call to the hierarchy tree in the left pane – the object will appear as a child node of the currently selected node (all “dynamic” nodes will disappear if you refresh the parent or any of the ancestor nodes). In most cases you won’t need to use this option, since most “get” methods will return an object that you already have in the client collection object, and most “add” methods will return an object which will automatically appear in the client collection (the tool will internally refresh the collection object). There are several cases, however, in which this is a very handy option and the only way to get certain objects – for instance List.GetItems, List.GetItemById, File.GetLimitedWebPartManager, etc.

Commands overview:

The tool provides a pretty simple command user interface – a toolbar at the top of the main window with three menu buttons:

  • Actions menu – this is context-dependent – it displays the available commands/actions for the currently selected object. You have the “Refresh” command at the top, available for all nodes, and below it commands for the available public instance methods of the object’s class (see above in the “General usability notes” section)
  • Connections menu – you have two commands here: “Connect …” and “Disconnect”. The former displays a dialog in which you should provide the URL to the local or remote site collection that should be opened in the tool. You can optionally provide user credentials and authentication mode here. You can open many different site collections from different SharePoint farms. The disconnect command removes the site collection of the currently selected node from the TreeView.
  • Settings menu – you have three commands here that operate as switches – you can check and uncheck them:
    • “Add method return object to tree” – when this is checked the check box switch with the same label in the method call dialog will be checked by default. Also handy for methods without parameters for which the method invocation dialog doesn’t get launched.
    • “Suppress display object in property grid” – this will effectively disable the PropertyGrid in the right pane; the selected object won’t be displayed there. This can be handy in some advanced situations when the very querying of the object’s properties may affect its state.
    • “Suppress ExecuteQuery” – also handy for some advanced situations. Note that the tool calls ClientContext.ExecuteQuery after each node selection in the TreeView and after each method call selected from the context menu. If you use this option to disable temporarily the ExecuteQuery calls you will be able to execute several client object methods consecutively and only after that commit them with ClientContext.ExecuteQuery. Note that the objects that you use for these calls should have been loaded in advance otherwise you will receive various “not initialized” errors for collections, objects and properties. Basically you should have a very good understanding of the client object model to be able to play with this option and overall there are not that many scenarios in which it can be helpful.
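To illustrate what the last switch enables – with ExecuteQuery suppressed, the tool can queue several client object calls and commit them in one round-trip, which is exactly how you would write it in code. A hedged sketch (the site URL and list title are illustrative):

```csharp
using Microsoft.SharePoint.Client;

class BatchSample
{
    static void Main()
    {
        ClientContext ctx = new ClientContext("http://myserver/sites/test"); // illustrative URL

        // Several operations are only queued on the client at this point...
        List list = ctx.Web.Lists.GetByTitle("Tasks");                       // illustrative list
        ListItem item = list.AddItem(new ListItemCreationInformation());
        item["Title"] = "Created in a batch";
        item.Update();
        ctx.Load(list, l => l.ItemCount);

        // ...and sent to the server in a single round-trip here.
        ctx.ExecuteQuery();
    }
}
```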

SharePoint 2010 – BinarySerializedWebPart element and XsltListViewWebPart provisioning


The BinarySerializedWebPart is a new feature element introduced in SharePoint 2010 and … its purpose is to be used mainly internally by the SharePoint save-site-as-template functionality. The element can appear below a View element in a Module feature that provisions web part pages with XLV web parts.

A known issue in SharePoint 2007 was that when you use the View element in an ONET.XML file or in a Module feature element to add a List View web part (LVWP) to a page, you can only specify one of the existing views defined in the list definition feature (its schema.xml) of the target list instance – this is achieved with the BaseViewID attribute of the View element. So if you need a slightly modified view for an LVWP, you have to either create a custom list definition (or, if you already have one, extend it) or use code to make the necessary list view schema changes – in the case of the LVWP and XLV web parts there is always a hidden SPView associated with the web part (you can check a previous posting of mine treating this topic in detail - ListViewWebPart & SPView – two sides of the same coin). The problem with both approaches is that they require some extra effort for this relatively simple provisioning task.
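The second approach – changing the hidden SPView from code – can be sketched roughly like this (server object model; the page URL, list and view field are illustrative, and error handling is omitted):

```csharp
using System;
using System.Web.UI.WebControls.WebParts;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebPartPages;

class HiddenViewSample
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://myserver"))   // illustrative URL
        using (SPWeb web = site.OpenWeb())
        {
            SPLimitedWebPartManager wpm =
                web.GetLimitedWebPartManager("default.aspx", PersonalizationScope.Shared);
            foreach (System.Web.UI.WebControls.WebParts.WebPart wp in wpm.WebParts)
            {
                ListViewWebPart lvwp = wp as ListViewWebPart;
                if (lvwp == null) continue;
                // Every LVWP has an associated hidden SPView, identified by ViewGuid.
                SPList list = web.Lists[new Guid(lvwp.ListName)];
                SPView hiddenView = list.Views[new Guid(lvwp.ViewGuid)];
                hiddenView.ViewFields.Add("Modified");        // illustrative schema change
                hiddenView.Update();
            }
        }
    }
}
```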

And what about the new SharePoint 2010? Well, the View element works pretty much the same, in the sense that if you use the View element alone you again have only the BaseViewID attribute to specify a view definition (limited to the predefined ones in the list template’s schema.xml). And here comes the new BinarySerializedWebPart element – you can see it working when you, say, take a normal team site, add an XLV web part to its home page, and then modify the view of the web part, adding for instance several more view fields and changing the filter fields. Then you can use the “save site as template” command from the “site settings” page of the site, and after you have the site template you can create a new site based on it, where you will see the exact same XLV web part with the exact same customizations in place (this didn’t work in the SharePoint 2010 beta but is fixed in the RTM version). The last exercise may sound like nonsense, since we had this functionality in SharePoint 2007 too, but the important difference here is that site templates in SharePoint 2007 were saved as “stp” files, while in SharePoint 2010 site templates are saved as standard “wsp” solution packages comprising standard SharePoint features. And this means that in SharePoint 2010 it is possible to provision XLV web parts with arbitrary view customizations totally declaratively, with standard feature element syntax. You can check that when you open the “wsp” file and extract the feature files from it using a standard archiving program that can handle CAB files (“wsp” packages are actually CAB files). In the Modules feature of the “wsp” package (there is one per site template package) you will see a View definition that will look something like this:

<View List="Shared Documents" DisplayName="" Url="" DefaultView="FALSE" BaseViewID="1" Type="HTML" WebPartOrder="0" WebPartZoneID="Left" ContentTypeID="0x" ID="g_ba709f71_6af5_4e3c_a8b1_01be2d3f95e8" Hidden="TRUE">

  <BinarySerializedWebPart>

    <GUIDMap>

      <GUID Id="5ba54c0a_6d0a_4667_8889_27fcfc904193" ListUrl="Lists/Announcements" />

      <GUID Id="2d9a46bd_f3e1_4cc6_9630_5763fc589317" ListUrl="Lists/Links" />

      <GUID Id="76f78fed_a0f9_498b_b2e4_8aa9ff2b4a2f" ListUrl="Shared Documents" />

      <GUID Id="ea419abd_36c7_4a42_94e3_b661ef52af44" ListUrl="Lists/Calendar" />

    </GUIDMap>

    <WebPart ID="{ba709f71-6af5-4e3c-a8b1-01be2d3f95e8}" WebPartIdProperty="" List="{$ListId:Shared Documents;}" Type="1" Flags="8388621" DisplayName="" Version="1" Url="/sites/1/default.aspx" WebPartOrder="0" WebPartZoneID="Left" IsIncluded="True" FrameState="0" WPTypeId="{874f5460-71f9-fecc-e894-e7e858d9713e}" SolutionId="{00000000-0000-0000-0000-000000000000}" Assembly="" Class="" Src="" AllUsers="B6Dt/i4AAAABAAAAAAAAAAEAAAAvX2xheW91dHMvaW1hZ2VzL2l0ZGwucG5nAP8BFCsAEAICAgMCAgEEAAIBAgkBAAACCAKCAQUZL19sYXlvdXRzL2ltYWdlcy9pdGRsLnBuZwKVAQUmezc2Rjc4RkVELUEwRjktNDk4Qi1CMkU0LThBQTlGRjJCNEEyRn0FBkxpc3RJZCgpWFN5c3RlbS5HdWlkLCBtc2NvcmxpYiwgVmVyc2lvbj0yLjAuMC4wLCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPWI3N2E1YzU2MTkzNGUwODkkNzZmNzhmZWQtYTBmOS00OThiLWIyZTQtOGFhOWZmMmI0YTJmBRVJbml0aWFsQXN5bmNEYXRhRmV0Y2ho" View="qKkwMQwAAAByAQAAeJxtkNsKwjAMhl+l9AUmeNsNPA0GHuf0vtp0BNtFYkX29tZ5QNiuQvL9P/kTdUR45AjO3DLV1RKsWGsPqZzTuThTI5MeWWJzydFBE7sBvCKDFsEMoIXBQDwAZgw6vC3Jf6bdHbjN1IYN8LTt2Yp55/jh5KOf1DVDrQNScxNH7e5Ru7H2JS7psUSPQWx1DSaVVXlYyGw8UsmXZGpFp3ieeJcigO/mqRzLz2yP/urgF7T/korInTSLqr3G1TkDWGIfyRMdinz/" />

  </BinarySerializedWebPart>

</View>

OK … so this is the fully declarative provisioning of the customized XLV web part, but as you can see, what you get is actually the binary-serialized property data of the web part, which cannot in any way be manually modified in case you need some further adjustments. If you want to make some changes, you need to have the feature deployed, change the provisioned XLV view, and then export the whole containing site as a template again. Obviously this approach is good for the “save as site template and site replication based on a site template” functionality, but not that good for creating and maintaining manual view provisioning features. So maybe the two previously mentioned approaches that were used in SharePoint 2007 to get XLV views configured correctly can still be considered good alternatives.

And lastly one small detail – the BinarySerializedWebPart element will appear in the “save as site template” generated “wsp” solution package only for XLV web parts whose views were modified and are different from the standard predefined list template base views. This also holds for changes made to the XLV web part’s XSLT that you can do using the SharePoint 2010 designer. For XLV web parts that were not modified after they were added to a page, the “save as site template” command will generate a “standard” View element that will contain a normal CDATA section containing a WebPart element with the XML of all properties of the web part – i.e. the standard web part feature provisioning format that we use in web part features.

LINQ to SharePoint cross list query


In SharePoint 2010 we already have the LINQ to SharePoint support that is a great improvement over the traditional SPListItem + CAML approach for exposing and modifying SharePoint list data. The built-in LINQ to SharePoint implementation is a complete ORM framework which provides full support for CRUD operations.

But in SharePoint there is one other method for retrieving list data – the cross list query support that is available through the SPWeb.GetSiteData method. The cross list query provides a means to fetch list data from many lists in the entire site collection simultaneously and can be useful in many scenarios; it is also used internally by the standard Content Query web part (CQWP). Still, there is no out-of-the-box support for LINQ over cross list queries in SharePoint 2010, so I decided to create a small framework with a light-weight and simple implementation of a LINQ query provider for cross list query CAML. Note here that this solution is focused mainly on the lambda-expression-to-CAML translation and doesn’t use entities that are mapped to SharePoint lists or content types – in this sense it is not an ORM system (you will see that from the demonstration below). Another important note – since the object model support for the cross list query hasn’t changed much in SharePoint 2010, this solution works in both SharePoint 2007 and SharePoint 2010. You can download the code of the custom LINQ framework from here.

And now, let’s jump directly to a small code sample:

using (SPSite site = new SPSite("http://testserver/sites/testsite"))

{

    SPWeb web = site.OpenWeb();

    IQueryable<DataRow> result = web.GetSiteData()

           .Webs(WebScope.SiteCollection)

           .Lists(BaseListType.GenericList)

           .ViewFields(Fields.ID, Fields.Text1.Nullable, Fields.Date1.Nullable, Fields.Bool1.Nullable, Fields.Look1.Nullable, Fields.Numb1.Nullable)

           .CamlWhere(r =>

                Fields.Text1.Value.Contains("test")

                && (Fields.Look1.LookupId == 1 || Fields.Look1.Value == "test")

                && (Fields.Bool1.Value != true || Fields.Date1.Value > DateTime.Today.AddDays(-2))

            )

           .CamlOrderBy(Fields.Date1.Descending, Fields.Text1)

           .RowLimit(10);

 

    foreach (DataRow row in result)

    {

        Console.WriteLine(Fields.ID[row]);

        Console.WriteLine(Fields.Text1[row]);

        Console.WriteLine(Fields.Numb1[row]);

        Console.WriteLine(Fields.Bool1[row]);

        if (Fields.Look1[row] != null)

            Console.WriteLine(Fields.Look1[row].LookupValue);

    }

}

As you can see the code snippet starts with opening an SPWeb instance and then there is a sequence of extension method calls the end result of which is an IQueryable of DataRow. So there are several important things to note here before going into detail:

  • unlike the standard LINQ to SharePoint implementation there are no entity classes here, so instead of IQueryable<SomeEntity> the custom query provider always returns IQueryable<DataRow> (keeping things much closer to the SPWeb.GetSiteData method, which returns a DataTable)
  • the standard IQueryable extension methods Select, Where, First, FirstOrDefault, etc. are not used and are actually not supported. Instead there is a small set of custom extension methods which map directly to the properties of the SharePoint SPSiteDataQuery class (I’ll explain how these can be used below).
  • instead of entity classes that you would normally generate with the SPMetal utility, a static class is used here (class Fields) which exposes a number of static members that represent SharePoint site and list fields. A small wrapper class (whose name is, you guessed it, Field) is used for these static members, and it is used in all cases where in CAML you would use the “FieldRef” element. So this is actually the greatest difference between the standard LINQ to SharePoint and this custom implementation – there are no wrapping classes on the entity level (content type or list level in SharePoint terms) but instead on the column/field level. This has the drawback of not having your data neatly packed into dedicated classes, but on the other hand it can give you more flexibility in terms of the ease with which you can modify the meta-data of your queries (the fields that you want to retrieve, filter or sort on) without having to re-generate entity classes every time.
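For comparison, the raw SPSiteDataQuery that these extension methods wrap would look roughly like this (the site URL and the CAML are illustrative, using the same field names as the sample above):

```csharp
using System;
using System.Data;
using Microsoft.SharePoint;

class SiteDataQuerySample
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://testserver/sites/testsite"))
        using (SPWeb web = site.OpenWeb())
        {
            SPSiteDataQuery query = new SPSiteDataQuery();
            query.Webs = "<Webs Scope='SiteCollection' />";
            query.Lists = "<Lists ServerTemplate='100' />";   // generic lists
            query.ViewFields = "<FieldRef Name='ID' /><FieldRef Name='Text1' Nullable='TRUE' />";
            query.Query =
                "<Where><Contains><FieldRef Name='Text1' /><Value Type='Text'>test</Value></Contains></Where>" +
                "<OrderBy><FieldRef Name='Date1' Ascending='FALSE' /></OrderBy>";
            query.RowLimit = 10;

            DataTable result = web.GetSiteData(query);
            foreach (DataRow row in result.Rows)
            {
                Console.WriteLine(row["Text1"]);
            }
        }
    }
}
```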

And here is the definition of the sample “Fields” class that is used in the first code snippet:

public static class Fields
{
    public static readonly Field<int, int> ID =
        new Field<int, int>("ID", "Integer", SimpleCamlValueConvertor.Instance);

    public static readonly Field<string, string> Text1 =
        new Field<string, string>("Text1", "TEXT", SimpleCamlValueConvertor.Instance);

    public static readonly DateField<DateTime, DateTime> Date1 =
        new DateField<DateTime, DateTime>("Date1", "DateTime", SimpleCamlValueConvertor.Instance);

    public static readonly Field<double, double> Numb1 =
        new Field<double, double>("Numb1", "Number", SimpleCamlValueConvertor.Instance);

    public static readonly Field<bool, bool> Bool1 =
        new Field<bool, bool>("Bool1", "Boolean", SimpleCamlValueConvertor.Instance);

    public static readonly LookupField<string, SPFieldLookupValue> Look1 =
        new LookupField<string, SPFieldLookupValue>("Look1", "Lookup", SimpleCamlValueConvertor.Instance);
}

So, this is basically a sort of container or placeholder class – it can have any suitable name; the important part is the set of static member variables (these can be defined as member properties as well) that represent the SharePoint fields that you want to use in your site data queries. The type of the member variables is the generic Field<TValue, TParsedValue> class, which as I mentioned is used as a small wrapping class for SharePoint list and/or site fields. You can also see in the code sample that two other generic types are used – DateField and LookupField – they inherit the Field class (these two are the only inheritors) and provide specific support for the SharePoint date and lookup field types. At this point I haven’t implemented a utility that automatically generates the container class with the mapped “Field” member variables, but as you can see it is pretty easy to create one very quickly by hand – it is basically one line of code per field.

And this is the declaration of the generic Field class (this is only the declaration of the class as it appears in the meta-data code view in Visual Studio):

public class Field<TValue, TParsedValue> : Field
{
    public Field(string name, string type, ICamlValueConvertor convertor);

    public TValue Value { get; }

    public virtual TParsedValue this[DataRow item] { get; }
}

as you see, it inherits a non-generic class also named “Field” (which is also an abstract class):

public abstract class Field
{
    public Field(string name, string type, ICamlValueConvertor convertor);

    public Field Descending { get; }

    public Field Nullable { get; }
}

The two classes are very simple, with just a couple of properties, a constructor and a custom indexer. As for the separation into two classes – a base one and an inheritor (which is also a generic class) – you will see the logic behind this design decision in a while.

So, let me now give you a brief explanation about the members of the “Field” classes:

  • first – the constructor – it takes two string parameters in which you provide the name (the internal name, actually) and the CAML type (as it appears in the Type attribute of the CAML “Value” element) of the SharePoint field, and a third parameter of type ICamlValueConvertor – this is a custom interface type defined in this custom LINQ framework. You don’t need to know the details of this interface, since there is already a class that implements it which you can use directly (I will briefly describe the interface below). And here is a sample usage:

    new Field<string, string>("Text1", "TEXT", SimpleCamlValueConvertor.Instance);

    the field in this case is a single line text field - “Text1” is its internal name and “TEXT” is the CAML type that should be used for it. For the third parameter you can see that the SimpleCamlValueConvertor class is used (this is a singleton class, available through its “Instance” static property).
  • the generic “Value” property – this property has a single purpose: it should be used only in the custom CamlWhere extension method, inside its lambda expression parameter. It is the expression tree of that lambda that actually gets translated to the CAML “Where” element. If you have, for example, this comparison expression with the Value property:


    Fields.Bool1.Value != true

    it will get translated to this CAML:

    <Neq>
      <FieldRef Name="Bool1" />
      <Value Type="Boolean">1</Value>
    </Neq>

    The exact .NET type of the “Value” property is determined by the first generic type parameter of the Field<TValue, TParsedValue> class. Basically only five .NET types can be used for it – string, bool, int, double and DateTime, depending on the underlying SharePoint field type. For all SharePoint field types whose field value type is not a primitive .NET type, the System.String type should be used – including the lookup, multi lookup, multi column, multi choice, etc. field types – this is because of the limitations of the CAML syntax which doesn’t allow more complex comparison operations.
  • the generic indexer – it can be used to extract typed values from the DataRow instances returned when the IQueryable gets enumerated (see the first code snippet above). The exact .NET type of the return value is determined by the second generic type parameter of the Field<TValue, TParsedValue> class. You can provide the corresponding field value type here, which depending on the field type of the underlying SharePoint field can be SPFieldLookupValue, SPFieldLookupValueCollection, SPFieldMultiColumnValue, etc, as well as a simple .NET value type like int, double or bool. The conversion of the DataRow column value to the exact .NET type is normally handled by the SimpleCamlValueConvertor class, which you provide as a parameter to the Field class constructor. You can also use a custom class here that implements the ICamlValueConvertor interface if you need different conversion logic. Note that the SimpleCamlValueConvertor class doesn’t support “nullable” types like Nullable<int>, Nullable<double>, etc (or with the short-hand notation: int?, double?, etc); for fields whose value type is a .NET value type, when the underlying list item contains a null value in the corresponding column, the indexer will return the default value for that .NET value type (DateTime.MinValue for System.DateTime, false for System.Boolean, 0 for System.Int32 and 0.0 for System.Double).
  • “Descending” property – this property whose type is the abstract Field class should be used in the custom CamlOrderBy extension method only:

    CamlOrderBy(Fields.Date1.Descending, Fields.Text1)


    the idea is that when you provide just a Field member variable (like Fields.Text1) the sorting on that column will be ascending, and when you use the “Descending” property of the Field member variable (Fields.Date1.Descending) the sorting will be descending. The above method call will be translated into this CAML:

    <OrderBy>
      <FieldRef Name="Date1" Ascending="FALSE" />
      <FieldRef Name="Text1" Ascending="TRUE" />
    </OrderBy>

  • “Nullable” property – this is similar to the “Descending” property, its type is again the abstract Field class and it is intended for use only in the “ViewFields” custom extension method:

    ViewFields(Fields.ID, Fields.Text1.Nullable);


    and this method call will be translated to this CAML:

    <ViewFields>
      <FieldRef Name="ID" Nullable="FALSE" />
      <FieldRef Name="Text1" Nullable="TRUE" />
    </ViewFields>

And a quick look at the LookupField and DateField classes which inherit the Field<TValue, TParsedValue> class and each of which adds one extra member property:

  • the “LookupId” property of the LookupField class – this is an integer property that can be used only in the lambda expression parameter of the custom CamlWhere extension method (like the “Value” member property of the Field class):

    .CamlWhere(r =>

       Fields.Look1.LookupId == 1 || Fields.Look1.Value == "test")

    which will translate to:

    <Where>
      <Or>
        <Eq>
          <FieldRef Name="Look1" LookupId="TRUE" />
          <Value Type="Lookup">1</Value>
        </Eq>
        <Eq>
          <FieldRef Name="Look1" />
          <Value Type="Lookup">test</Value>
        </Eq>
      </Or>
    </Where>

    as you can see, you can use both the “Value” and the “LookupId” properties of the LookupField class inside the lambda expression but they have different semantics – when you use the “LookupId” property the generated CAML contains the extra “LookupId” attribute in the “FieldRef” element. This means that when you use the “Value” property the comparison will be on the value of the lookup field (show field) in the lookup list and when you use the “LookupId” property the comparison will be on the ID of the lookup list item (the referenced item in the lookup list).
  • the “DateValue” property of the DateField class – this is again a property that can be used only in the lambda expression parameter of the custom CamlWhere extension method. The difference between the “Value” and “DateValue” properties of the DateField class is that when you use the “Value” property the comparison will be on the full date-time value, while for the “DateValue” property the comparison will be on the date part only of the DateTime argument (the CAML will contain the extra “IncludeTimeValue” attribute when you use the “Value” property).
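To illustrate the last point, a predicate like Fields.Date1.Value > d would presumably produce CAML along these lines, while Fields.Date1.DateValue > d would drop the “IncludeTimeValue” attribute (a sketch based on the description above; the date literal is arbitrary):

```xml
<!-- Fields.Date1.Value > d : compares the full date-time value -->
<Gt>
  <FieldRef Name="Date1" />
  <Value Type="DateTime" IncludeTimeValue="TRUE">2011-01-20T10:30:00Z</Value>
</Gt>

<!-- Fields.Date1.DateValue > d : compares the date part only -->
<Gt>
  <FieldRef Name="Date1" />
  <Value Type="DateTime">2011-01-20T00:00:00Z</Value>
</Gt>
```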

And here are the declarations of the custom extension methods:

public static class SPWebExtensions
{
    public static IQueryable<DataRow> GetSiteData(this Microsoft.SharePoint.SPWeb web);
}

public static class SiteDataExtensions
{
    public static IQueryable<DataRow> CamlOrderBy(this IQueryable<DataRow> items, params Field[] fields);
    public static IQueryable<DataRow> CamlWhere(this IQueryable<DataRow> source, Expression<Func<DataRow, bool>> predicate);
    public static IQueryable<DataRow> Lists(this IQueryable<DataRow> items, BaseListType listType);
    public static IQueryable<DataRow> Lists(this IQueryable<DataRow> items, params Guid[] listIDs);
    public static IQueryable<DataRow> Lists(this IQueryable<DataRow> items, string serverTemplate);
    public static IQueryable<DataRow> Lists(this IQueryable<DataRow> items, BaseListType listType, bool hidden, int maxListLimit);
    public static IQueryable<DataRow> Lists(this IQueryable<DataRow> items, bool hidden, int maxListLimit, params Guid[] listIDs);
    public static IQueryable<DataRow> Lists(this IQueryable<DataRow> items, string serverTemplate, bool hidden, int maxListLimit);
    public static IQueryable<DataRow> RowLimit(this IQueryable<DataRow> items, uint rowLimit);
    public static IQueryable<DataRow> ViewFields(this IQueryable<DataRow> items, params Field[] viewFields);
    public static IQueryable<DataRow> Webs(this IQueryable<DataRow> items, WebScope webScope);
}

  • SPWebExtensions.GetSiteData – this is the method that you will use first to get a reference to an IQueryable<DataRow> that you can then use with the extension methods defined in the SiteDataExtensions class. With the standard SharePoint 2010 LINQ support we have the “context” class which exposes public properties that return IQueryable for the different entities that you have generated. With this custom LINQ implementation it is much simpler – just one extension method that you use on an SPWeb instance. This method actually has the same name as the standard SPWeb.GetSiteData method (which accepts an SPSiteDataQuery parameter), but this one is parameterless. After you have the IQueryable<DataRow> instance you can then call the extension methods defined in SiteDataExtensions, which map directly to the properties of the standard SPSiteDataQuery class but are much easier to use, because you don’t have to deal with the cumbersome CAML syntax:
  • SiteDataExtensions.CamlOrderBy– this method generates the “OrderBy” element of the CAML that is provided to the SPSiteDataQuery.Query property. It accepts a “params” array parameter of the abstract “Field” class which you can provide as “Field” static member variables of your “container” class (class Fields in the sample above) or their “Descending” properties if you want to use descending sorting (for sample code see the paragraph about the Field.Descending property).
  • SiteDataExtensions.CamlWhere – this method generates the “Where” element of the CAML that is provided to the SPSiteDataQuery.Query property. Note that it has the same signature as the standard Queryable.Where extension method (I could just as well have used the latter, but since all the other extension methods are custom, I decided to make this one custom too). It is actually the expression tree that the .NET framework generates from the lambda expression parameter that gets translated to the “Where” CAML. Note that since CAML supports only very limited comparison expressions, the custom query provider of this implementation supports very few .NET methods and operators:
    • the binary operators “&&” (AndAlso) and “||” (OrElse) (that map to CAML “And” and “Or”)
    • the binary comparison operators “==” (Equal), “!=” (NotEqual), “>” (GreaterThan), “>=” (GreaterThanOrEqual), “<” (LessThan), “<=” (LessThanOrEqual) (these map to CAML “Eq”, “Neq”, “Gt”, “Geq”, “Lt”, “Leq” respectively, for equality/inequality to null: “== null” and “!= null” the corresponding CAML elements are “IsNull” and “IsNotNull”)
    • .NET methods – only two methods are supported – String.StartsWith and String.Contains (these map to CAML “BeginsWith” and “Contains”):

      Fields.Text1.Value.Contains("test")

    • No other operators and methods are supported (the unary negation (unary “!”) is not supported either)
    Note also the following restrictions for the expression tree from the lambda expression of the CamlWhere method:
    • one of the operands of the binary comparison operators (no matter whether left or right operand) should always be a “Field” static member variable/property of your container class (class “Fields” from the sample) using its “Value”, “LookupId” or “DateValue” properties (see the sample code above)
    • the other operand of the binary comparison operators should be a constant expression or an expression that doesn’t reference “Field” member variables/properties and that evaluates to the same .NET type as the “Value” property of the “Field” instance from the other operand (note that it should evaluate to the exactly same .NET type otherwise you have to explicitly use type casting for that operand). Note also that you can use null equality/inequality comparison even for .NET value types (e.g. int, double, DateTime) – these will be translated to CAML “IsNull” and “IsNotNull”:

      Fields.Date1.Value == null

      The operand can contain any type of expressions and methods – the restrictions mentioned above for the allowed operators/methods don’t apply here, for instance you can have some more complex calculations and/or call other custom methods:

      Fields.Date1.Value > DateTime.Today.AddDays(-2)

    • For “Field” members with Boolean “Value” property you should explicitly use the equality/inequality operator (even though it may seem more economical to omit it):

      Fields.Bool1.Value == true

    It is important to note here that if you violate one of the above mentioned restrictions you will get various exceptions (of the custom ParseLinqException type) during run-time. Note also that the delegate type of the lambda expression parameter accepts a DataRow parameter but this one is not used (and shouldn’t be used) anywhere in the lambda expression’s body.
  • SiteDataExtensions.Lists – this method generates the CAML that is provided to the SPSiteDataQuery.Lists property. The method comes with six overloads that provide handy support for the different list modes of the cross list query – either using the base list type, or the list server template or providing an array of SharePoint list IDs.
  • SiteDataExtensions.RowLimit – this method maps to the RowLimit property of the SPSiteDataQuery – it accepts an unsigned integer parameter with which you specify how many list items the query should return.
  • SiteDataExtensions.ViewFields– this method generates the CAML that is provided to the SPSiteDataQuery.ViewFields property. It accepts a “params” array parameter of the abstract “Field” class which you can provide as “Field” static member variables of your “container” class (class Fields in the sample above) or their “Nullable” properties if you want the generated “FieldRef” elements to have the “Nullable” attribute added (see the first code sample).
  • SiteDataExtensions.Webs – this method generates the CAML that is provided to the SPSiteDataQuery.Webs property.
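Putting the pieces together, a query combining these extension methods might look roughly like this (a sketch reusing the sample Fields class from above; the site URL is illustrative, and BaseListType.GenericList and WebScope.Recursive are assumed enum values of the framework's BaseListType and WebScope types):

```csharp
using (SPSite site = new SPSite("http://myserver"))   // illustrative URL
using (SPWeb web = site.OpenWeb())
{
    // each call maps to a property of SPSiteDataQuery:
    // Lists -> Lists, Webs -> Webs, CamlWhere -> Query/Where,
    // CamlOrderBy -> Query/OrderBy, ViewFields -> ViewFields, RowLimit -> RowLimit
    IQueryable<DataRow> items = web.GetSiteData()
        .Lists(BaseListType.GenericList)
        .Webs(WebScope.Recursive)
        .CamlWhere(r => Fields.Date1.Value > DateTime.Today.AddDays(-7)
                     && Fields.Bool1.Value == true)
        .CamlOrderBy(Fields.Date1.Descending)
        .ViewFields(Fields.ID, Fields.Text1.Nullable, Fields.Date1, Fields.Bool1)
        .RowLimit(100);

    foreach (DataRow row in items)
    {
        // the Field indexer converts the raw string column value to a typed value
        Console.WriteLine("{0}: {1}", Fields.ID[row], Fields.Text1[row]);
    }
}
```

Note that the query is not executed until the foreach loop starts enumerating the IQueryable.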

Quick start with the custom LINQ framework:

And lastly, a few words on how to use the custom LINQ framework inside your SharePoint projects.
The solution comes in a single assembly - Stefan.SiteData.Linq (you can use it as a separate assembly or incorporate the code into a custom assembly of yours). The steps to use it are as follows:

  1. If you use it as a separate assembly you need to have an assembly reference of it in your SharePoint project.
  2. You then need to create the static “container” class and create a static member variable (or property) for every SharePoint field that you will use in your cross list queries (“FieldRef” elements inside “ViewFields”, “Where” and “OrderBy” CAML elements) – these should be of the Field<TValue, TParsedValue>, LookupField<TValue, TParsedValue> or DateField<TValue, TParsedValue> classes type (check the definition of the sample “Fields” class above).
  3. You can proceed with your list data retrieving code – after you have an SPWeb instance, you can start with calling the extension SPWebExtensions.GetSiteData extension method to create the IQueryable<DataRow> and then start calling the extension methods of the SiteDataExtensions class so that you specify custom filtering, sorting and view fields that should be included in the result-set (check the first code snippet at the top).
  4. Lastly, when you start iterating the IQueryable<DataRow> you can make use of the “Field” class indexer to get typed values from your SharePoint fields (you can also access the values directly from the DataRow instances and use them as simple strings). As I mentioned above, the conversion logic from the DataRow column string value to the specific SharePoint field value .NET types is in the custom SimpleCamlValueConvertor class (you can check its code, it is really simple). You can provide your own conversion implementation by creating a class that implements the ICamlValueConvertor interface:

        public interface ICamlValueConvertor
        {
            string ConvertToString<T>(T value);

            T ConvertFromString<T>(string value);
        }

    It is the ConvertFromString generic method that is used for the conversion and its generic parameter determines the return type of the method (this is also the .NET type of the indexer of the “Field” class). You can also inherit the SimpleCamlValueConvertor class and override its ConvertFromString method (it is defined as a virtual method) – then you won’t have to implement the ICamlValueConvertor.ConvertToString method.
  5. When you debug your code you will notice that when you hover over the IQueryable<DataRow> variable or add a debug watch for it you will be able to see the full query CAML generated by the custom IQueryable provider which can be very handy for issue troubleshooting.
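As an example of the customization mentioned in step 4, a convertor subclass might look like this (a sketch; the Nullable<int> handling is an illustrative addition of mine, since the base class is documented above as not supporting nullable types):

```csharp
public class NullableCamlValueConvertor : SimpleCamlValueConvertor
{
    // illustrative: adds basic support for int? on top of the base conversions;
    // overriding ConvertFromString is enough, as described in the text above
    public override T ConvertFromString<T>(string value)
    {
        if (typeof(T) == typeof(int?))
        {
            int parsed;
            object result = int.TryParse(value, out parsed) ? (int?)parsed : null;
            return (T)result;   // unboxes to int? (null when the column was empty)
        }
        return base.ConvertFromString<T>(value);
    }
}
```

You would then pass an instance of this class as the third constructor parameter of your Field member variables instead of SimpleCamlValueConvertor.Instance.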

Provision publishing pages in a sandbox solution


This one turned out to be an interesting finding – at first I thought that I can just copy a regular “Module” feature from a farm solution and that it will work smoothly within a sandbox solution. For instance:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="MyPages" Path="MyPages" Url="Pages">
    <File Url="mypage.aspx" Path="mypage.aspx" Type="GhostableInLibrary">
      <Property Name="Title" Value="Home" />
      <Property Name="ContentType" Value="Welcome Page" />
      <Property Name="PublishingPageLayout" Value="~SiteCollection/_catalogs/masterpage/MyWelcome.aspx, My Welcome Page" />
      <AllUsersWebPart WebPartZoneID="BottomLeftZone" WebPartOrder="1">
        <![CDATA[
<WebPart xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/WebPart/v2">
  <Title>Search Box</Title>
  <FrameType>None</FrameType>
  <Description>Displays a search box that allows users to search for information.</Description>
  <MissingAssembly>Cannot import this Web Part.</MissingAssembly>
  <Assembly>Microsoft.Office.Server.Search, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Assembly>
  <TypeName>Microsoft.SharePoint.Portal.WebControls.SearchBoxEx</TypeName>
</WebPart>]]>
      </AllUsersWebPart>
    </File>
  </Module>
</Elements>

As you see, this is a fairly simple “Module” elements file with a single “File” element that provisions a publishing page containing one web part to the standard “Pages” library. The “mypage.aspx” file is the standard template redirection one-liner page that you can take from any of the standard SharePoint publishing site definitions:

<%@ Page Inherits="Microsoft.SharePoint.Publishing.TemplateRedirectionPage,Microsoft.SharePoint.Publishing,Version=14.0.0.0,Culture=neutral,PublicKeyToken=71e9bce111e9429c" %><%@ Reference VirtualPath="~TemplatePageUrl" %><%@ Reference VirtualPath="~masterurl/custom.master" %>

So, when I added this feature to a sample sandbox solution of mine, deployed the latter and activated the feature I immediately noticed two quite unpleasant issues:

  • the first one was that the file was provisioned in a draft state (not checked in)
  • the second was even worse – the web part was missing on the page

I started wondering what might be causing these issues and thought that at least the first one was not that serious – after all, with a little bit of code in a feature receiver it should be possible to publish the provisioned page or pages. But then, thinking about the second issue, I realized that it would take much more than several lines of code to get that fixed with custom programming (and why should I get into that much trouble, since this is supposed to be available out-of-the-box). The fact was that I already had code that does the whole trick of parsing the “Module” elements file, parsing a selected “File” element and adding the web parts to a web part page using the SharePoint object model – I had used it in my “web part manager” tool, though I wasn’t sure whether the code (and it was several hundred lines of code) would work normally in a sandbox.
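For reference, the “little bit of code” for the first issue could look roughly like this feature receiver (a sketch of mine, not the author's actual code; the page URL is illustrative and error handling is omitted):

```csharp
public class PublishPagesReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWeb web = properties.Feature.Parent as SPWeb;  // web-scoped feature assumed
        if (web == null) return;

        // illustrative URL of the page provisioned by the Module feature
        SPFile file = web.GetFile("Pages/mypage.aspx");
        if (file.Level == SPFileLevel.Checkout)
            file.CheckIn("Checked in on feature activation");
        if (file.Level == SPFileLevel.Draft)
            file.Publish("Published on feature activation");
        // if the Pages library requires content approval, file.Approve may be needed too
    }
}
```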

Reluctant to use this cumbersome “fix”, it occurred to me that I can try using the built-in SharePoint feature of saving a site as a template which in SharePoint 2010 creates a “wsp” package (which is a sandbox solution) that contains a “Module” feature provisioning all files (including publishing pages) in the site. My plan was simple – after I create the “wsp” package of my test site with publishing pages I will create a sub-site based on it and will check whether the provisioning of the publishing pages will fail in the same miserable way and if it doesn’t I will open the “wsp” file and inspect the XML of the “Module” element to see what does the trick.

So, as I suspected, the first part went smoothly which meant that the “save as template” feature worked perfectly with publishing pages although the “save site as template” command is missing in the “site settings” page of sites based on publishing site templates and you have to navigate to it manually - _layouts/savetmpl.aspx (it’s actually interesting to know why it is hidden in the first place, I can only suspect that there is a serious reason for that).

Then came the interesting part of checking the “Module” and “File” elements in the “Module” feature containing the publishing pages (actually I was testing with just one page) – first I reproduced the normal working of the feature by copying the elements without any changes and then started to remove the parts that I thought weren’t necessary, so that I could find out which part was the critical one. First I checked the attributes of the “Module” and “File” elements but there was nothing extra or peculiar there (the “Level” attribute of the “File” element was not present), nor was there a “Property” element for the “_ModerationStatus” system column. Then I saw a big chunk worth deleting – the “Property” element of the “MetaInfo” system column – it basically contains the concatenated values of all other fields and file property bag items of the provisioned file – and after I deleted it the feature started to behave in exactly the same way as my first feature. I had a closer look at the “MetaInfo” element and after some testing found out that it is this part that does the trick:

vti_setuppath:SR|SiteTemplates\\SPS\\Default.aspx

The meaning of this one was almost clear to me – instead of using the “aspx” file provided in the feature the publishing page should use a file already available in the subfolders of the “14” hive. Another thing that immediately occurred to me was that the “Module” element also accepts a “SetupPath” attribute, so I quickly rewrote my first feature so that it now looked like:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="MyPages" SetupPath="SiteTemplates\SPS" Url="Pages">
    <File Url="mypage.aspx" Path="default.aspx" Type="GhostableInLibrary">
      <Property Name="Title" Value="Home" />
      <Property Name="ContentType" Value="Welcome Page" />
      <Property Name="PublishingPageLayout" Value="~SiteCollection/_catalogs/masterpage/MyWelcome.aspx, My Welcome Page" />
      <AllUsersWebPart WebPartZoneID="BottomLeftZone" WebPartOrder="1">
        <![CDATA[
<WebPart xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/WebPart/v2">
  <Title>Search Box</Title>
  <FrameType>None</FrameType>
  <Description>Displays a search box that allows users to search for information.</Description>
  <MissingAssembly>Cannot import this Web Part.</MissingAssembly>
  <Assembly>Microsoft.Office.Server.Search, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c</Assembly>
  <TypeName>Microsoft.SharePoint.Portal.WebControls.SearchBoxEx</TypeName>
</WebPart>]]>
      </AllUsersWebPart>
    </File>
  </Module>
</Elements>

So, as you see, there are only two small changes in the attributes of the “Module” and “File” elements:

  • the first one is the adding of the “SetupPath” attribute to the “Module” element and removing the existing “Path” attribute
  • the second one is the changing of the value of the “Path” attribute of the “File” element to “default.aspx”

The idea of these two attributes is that under the path “14\Template\SiteTemplates\SPS\default.aspx” there is a standard SharePoint file which is exactly a template redirection page, the same as the one I showed above. This means that instead of the “aspx” file in the feature (which can actually be removed from the feature) an existing standard SharePoint file will be used for provisioning the publishing page. The other positive thing is that the cumbersome “MetaInfo” property is no longer necessary, since the “SetupPath” attribute of the “Module” element achieves the same result much more elegantly.

And several words as a small conclusion – though the syntax of the standard SharePoint feature elements seems the same, there are certain differences in the way SharePoint artifacts get provisioned in farm and sandbox solutions. So, I guess, we can expect similar smaller or bigger surprises with the new SharePoint 2010 and especially with the new sandbox solutions.

Site Template Configurator utility


This is the newest Windows Forms tool that uses the famous TreeView-PropertyGrid combo (a long-time favorite of mine). I called it the “Site Template Configurator” and, as its name suggests, it is designed to provide an easy-to-use UI for creating, updating and maintaining SharePoint site templates (with a focus on portal site templates). You can download the source code of the tool from here. Here is a small teaser screenshot:

stcfg1

Let me start with a few words about site templates in SharePoint. We have the normal (ONET.XML) site template, which configures the functionality of a single SharePoint site. And we also have the so called portal site templates, which specify not the configuration of a single site but rather a site hierarchy – this includes which ONET site template should be used for the root site of the hierarchy and then for the sub-sites in the site hierarchy. In SharePoint 2010 there are already two ways of provisioning ONET site templates – the old one available in SharePoint 2007, with folders under the TEMPLATE\SiteTemplates SharePoint system folder and a WEBTEMP*.XML file under the TEMPLATE\[LCID]\XML folder; and the new one – with “WebTemplate” features (which works for both farm and sandbox solutions, as opposed to the “SiteTemplates” approach). The portal site templates are provisioned with “WebManifest” XML files such as the one available in the TEMPLATE\SiteTemplates\WebManifest SharePoint folder for the standard “Collaboration Portal” portal site definition (plus an entry in a WEBTEMP*.XML file, much like the “SiteTemplates” type ONET site definitions).
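For reference, a “WebTemplate” feature element looks roughly like this (a sketch with illustrative names; it defines a template based on the standard Team Site definition):

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <WebTemplate
    Name="MyTemplate"
    Title="My Template"
    BaseTemplateName="STS"
    BaseTemplateID="1"
    BaseConfigurationID="0"
    Description="Illustrative web template based on the Team Site definition" />
</Elements>
```

The feature's folder then contains an ONET.XML for the template alongside this elements file.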

The obvious purpose of site templates is to automate the process of creating SharePoint sites with the same functionality (or site hierarchies with the same structure). Although the ONET.XML provides many different configuration options it can basically come down to specifying only which site scoped and web scoped features should be activated in the site – I think that ONET elements like the “Lists” and “Modules” ones should be deprecated in favor of the corresponding “ListInstance” and “Module” feature elements. So if we assume that SharePoint features are the smallest blocks of functionality then site templates can be viewed as simply sets of features that should deploy these various functionality items to a SharePoint site. This is a neat conceptual frame but when it comes to real life implementation it may turn really difficult to maintain site definitions especially if they grow rapidly in number. And here comes the tool I will present in this posting that will hopefully help address at least some of the issues of creating and maintaining site templates.

The tool is developed in Visual Studio 2010 and targets SharePoint 2010. It uses only a small set of classes from the SharePoint object model, so it can easily be recompiled for SharePoint 2007, meaning that it can be used with the older version of SharePoint as well. To start with an important note – the tool doesn’t modify existing SharePoint site templates and doesn’t change any standard configuration files under the SharePoint 14\TEMPLATE folder. It allows you to create new site templates and to select and configure site and web features for them, and all configurations are saved in a so called project file, which is an XML file with a custom format. The project file can be opened in the tool later for further updating and maintaining of your site templates. From the project file the tool can generate SharePoint site template definitions that can be deployed to a SharePoint installation where they can be used. I think that keeping all configurations in a single file is definitely an advantage, at least in terms of maintainability (unfortunately in SharePoint we have to deal with many more configuration files).

And here is a quick overview of the tool’s commands starting with the tool’s toolbar:

  • “Actions” menu – this menu is context-dependent – it displays commands that apply to the currently selected node in the TreeView control
  • “Project” menu:
    - “New Project …”
    - “Open Project”
    - “Save Project”
    - “Save Project as …”
    - “Close Project”
    These are all self-explanatory – they handle the creating, opening, updating and closing of the project file.
    - “Generate artifacts …” – this command generates the SharePoint site definitions that are configured in the project file – I will explain this in more detail below.

The tool displays several entities in a hierarchical view in the left-side pane: “Project”, “Template”, “Web”, “Feature” and “Feature property”. Each of these contains a number of configuration properties that are displayed in the right-side property grid pane (they can also be modified in the property grid), and each entity comes with a number of “actions” that are available as menu items under the “Actions” toolbar menu or in the context menu which appears when you right-click the tree node.

  • “Project” entity – it contains some general settings for the project and your templates. It has the following properties:
    - “ID” – automatically generated GUID – internal unique identifier of the project, used when the SharePoint artifacts get generated
    - “Name” – internal name of the project
    - “DisplayName” – display name of the project (for reference purposes only, not used internally)
    - “Description” – description of the project (for reference purposes only, not used internally)
    - “DefaultDisplayCategory” – this specifies the display name of the group under which your site templates will appear in the SharePoint “create site” page
    - “Locale” – set to English US by default (1033) – this specifies the target locale of your site templates
    - “StartTemplateID” – this one is important – this number will be used for the “ID” attributes of your site templates in the generated WEBTEMP*.XML file – note that these should be unique throughout all WEBTEMP*.XML files otherwise you will get an error when you create a site with your templates.
    The commands for the “Project” entity:
    - “Add template” – adds a new site template to the project
    - “Paste template” – after you use the “copy” command of an existing template you can paste it as a new template using the Windows clipboard.
  • “Template” entity – it contains the settings for a site definition with the following properties:
    - “BaseTemplateName” – you can specify an existing ONET or portal site definition here and also one of the other templates in the project. Note that currently you cannot use “WebTemplate” feature site definitions (you won’t see these in the “site templates” drop-down control in the “Add template” dialog form). Note also that you can use portal site templates as well – nesting these is not a problem. The tool will also generate portal site templates for your site definitions.
    - “BaseTemplateConfiguration” – this is used in conjunction with the “BaseTemplateName” property to specify the configuration ID of the base site definition within its ONET.XML.
    - “Name” – internal name of the template (as it is used to create a site with code)
    - “DisplayName” – display name of the template (as it appears on the standard “create site” page)
    - “Description” – description of the template (as it appears on the standard “create site” page)
    - “DisplayCategory” – this specifies the display group of the template as it appears on the standard “create site” page, if empty the “DefaultDisplayCategory” property of the “Project” entity will be used instead during artifact generation.
    - “Hidden” – specifies whether the template is hidden
    - “RootWebOnly” – analogous to the “RootWebOnly” attribute in the WEBTEMP*.XML
    - “SubWebOnly” – analogous to the “SubWebOnly” attribute in the WEBTEMP*.XML
    - “IncludeLists” – specifies whether the Configuration/Lists element from the ONET.XML of the base site definition (specified in the “BaseTemplateName” property) should be copied to the new definition
    - “IncludeModules” – specifies whether the Configuration/Modules element from the ONET.XML of the base site definition (specified in the “BaseTemplateName” property) should be copied to the new definition
    The “Template” entity has the following commands:
    - “Add web” – allows you to add a sub-web to the template’s site hierarchy
    - “Delete template” – deletes the template
    - “Copy template” – copies the definition of the template to the clipboard (see “Paste template” above)
    - “Copy template (shallow)” – the same as “Copy template” but copies the template’s definition without its sub-webs
    - “Paste web” – pastes a “Web” definition from the clipboard as a new sub-web
  • “Web” entity – you can have “Web” elements under your “Template” elements and also nest “Web” elements under other “Web” elements. This way you can specify a site hierarchy for your template (since it is a portal site template). The “Web” entity contains a subset of the properties of the “Template” entity. There is one difference here – the usage of the “Name” property – here it specifies the URL of the sub-web relative to the template’s root or its parent “Web”. The “Web” entity has the following commands:
    - “Add web” – adds a sub-web under the current web
    - “Delete web” – deletes the current web
    - “Copy web” - copies the definition of the web to the clipboard (see “Paste web” above)
    - “Copy web (shallow)” – the same as “Copy web” but copies the web’s definition without its sub-webs
    - “Paste subweb” – pastes a “Web” definition from the clipboard as a new sub-web
  • “Feature” entity – it represents a SharePoint feature that should be activated for the site based on the template or some of the “Web” elements available in the template’s portal hierarchy. The “Feature” elements appear under the “SiteFeatures” and “WebFeatures” sub-nodes of “Template” and “Web” elements. When you add a new “Template” or “Web” element all available site and web features from the “Configuration” element of the base template’s (“BaseTemplateName” property) ONET.XML are copied and will appear in your definition. Note that when the base template is a portal site definition (this includes the templates in the current project) the feature nodes won’t be available – portal site definitions are treated as “sealed” by the tool. The “Feature” entity has the following commands:
    - “Add feature property” – adds a new “Feature property” element for the current “Feature”
    - “Delete feature” – deletes the current “Feature”
    - “Copy feature” – copies the current “Feature” definition to the clipboard
    - “Paste feature property” – creates a new “Feature property” from a “Feature property” definition copied to the clipboard. If a “Feature property” with the same “Key” property exists it will be overwritten.
    The following “Feature” commands are available for the “SiteFeatures” and “WebFeatures” nodes:
    - “Add/remove features” – this command opens a dialog form with a multi-select control that allows you to quickly add and remove many features at a time and also allows you to change the ordering of the features.
    - “Paste feature” – pastes a “Feature” definition from the clipboard. If the feature already exists it will be overwritten (more precisely, its feature properties, if available, will be overwritten).
  • “Feature property” entity – it represents a property of a SharePoint feature. It has the following properties:
    - “Key” – represents the “Key” attribute of the ONET.XML “Property” element
    - “Value” - represents the “Value” attribute of the ONET.XML “Property” element
    The “Feature property” entity has the following commands:
    - “Delete feature property” – deletes the selected “Feature property”
    - “Copy feature property” – copies the definition of the “Feature property” to the clipboard.


Generation of the SharePoint site definitions

And a few words about the generation of the real SharePoint definitions from the tool’s custom template definition XML format: initially I was considering several approaches to get the site templates functionality available (site templates in the most general sense as I mentioned above – getting a set of features applied to a site) including these scenarios:

  • doing this programmatically using the SharePoint object model (either with a custom feature receiver or using the new SharePoint 2010 “web provisioned” event). A problem with this was that the object model provides methods to activate a feature but you can’t specify the feature properties that should be applied with the activation (there is an internal method for that but I didn’t want to resort to reflection).
  • using the “WebTemplate” feature approach – it seems a little bit more economical – you can pack many site definitions into a single feature and you don’t need the WEBTEMP*.XML with an extra entry there so that the definition becomes available. A small but annoying problem with “WebTemplate” features is that you can’t specify for example that the site definition is hidden – the moment you install the “WebTemplate” feature in the farm all site definitions in it become globally available.

So in the end I decided to use the good old “SiteTemplates” type of site definitions coupled with “WebManifest” files for the portal site definitions. And here is what you will get when you click the “Generate artifacts …” command – first you will need to select a target folder (it’s better to use a temporary one rather than the standard 14\TEMPLATE folder), then the tool will output all necessary SharePoint artifact files in two main folders below the target folder: “[LCID]\XML” and “SiteTemplates”.

The first one will contain a single WEBTEMP*.XML file – the asterisk will be replaced by a GUID-like sequence of characters (starting with “G” and followed by thirty-two alphanumeric characters – this is actually the auto-generated “ID” property of the “Project” element). The WEBTEMP*.XML file will contain entries for all portal site templates generated from the “Template” elements in the project file and also hidden “Template” entries for all base site templates (actually copies of these – see below) that were used in the template project.
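For reference, a single entry in the generated WEBTEMP*.XML could look roughly like the sketch below (the template name, title and category are made up; the “ID” attribute is the one driven by the project’s “StartTemplateID” property and must be unique across all WEBTEMP*.XML files):

```xml
<Templates xmlns:ows="Microsoft SharePoint">
  <Template Name="G0123456789abcdef0123456789abcdef_STS" ID="10001">
    <Configuration ID="0" Title="My Team Site" Hidden="FALSE"
                   Description="A sample generated site template"
                   DisplayCategory="My Templates" />
  </Template>
</Templates>
```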

The “SiteTemplates” folder will contain one folder named G[32 characters project GUID]_WebManifest and several folders named G[32 characters project GUID]_[base template name]. The former will contain “WebManifest” XML files for all templates in the project. As for the latter, there will be one folder per base template used (created only for non-portal base templates) – each an exact copy of the original folder of the base template containing all files and sub-folders. The only difference will be the ONET.XML file – its “Configurations” element will be completely replaced and will contain one “Configuration” element for every “Template” and “Web” element in the project that uses this base template. The individual “Configuration” elements will contain the selected site and web features for the parent element in your project file plus, optionally, the “Lists” and “Modules” elements from the base configuration.

The tool just outputs the SharePoint artifact files without creating a WSP file (one direction for improving the tool would be built-in generation of CAB files – a WSP is a CAB – although this is a somewhat tedious job in Windows). Simply copying the files to the “14\TEMPLATE” folder is actually sufficient to make the generated site definitions available (followed by a recycling of the application pool/pools – including the central administration site application pool if you want to create sites from central administration), though it is a bit cumbersome since it has to be done on every machine in the farm.

XsltListViewWebPart – several XSLT tips


The XSL customizations of the XsltListViewWebPart in SharePoint 2010 are probably not a trivial thing. If you want to do something more complex with that the best starting point for reference materials is of course the SharePoint 2010 SDK - http://msdn.microsoft.com/en-us/library/ff604021.aspx. It provides extensive coverage on the topic, so I won’t repeat any of that in this posting. Here, I will concentrate on two things: the first one is a practical issue, which is probably the first one that you will encounter when customizing the XsltListViewWebPart’s XSL – how to hook your custom XSL to the XsltListViewWebPart. The thing is that you have not just one but four options for that – the XsltListViewWebPart itself exposes two properties for setting the custom XSL – XsltListViewWebPart.Xsl and XsltListViewWebPart.XslLink and also the SPView class has two properties with the very same names and obviously the same purpose. Well, that’s plenty and to quickly answer the two questions that may already have arisen – first – what has the SPView class to do with the XsltListViewWebPart – the answer is simple: the two classes are actually two representations of the same internal SharePoint entity (check a previous posting of mine on the subject - http://stefan-stanev-sharepoint-blog.blogspot.com/2010/02/listviewwebpart-spview-two-sides-of.html). And the second question – if the two classes are indeed representations of one and the same thing do these two sets of properties map to the same internal properties too – the answer here is no, the “Xsl” and “XslLink” properties of the XsltListViewWebPart are actually inherited from a base class (DataFormWebPart) that has no direct relation to list views and indeed the four properties are really independent from one another (and there are some differences in their usage despite the matching names as you will see below). 
One important thing here is that there is a specific precedence for their usage by the XsltListViewWebPart – the exact evaluation order is this:

  1. XsltListViewWebPart.XslLink
  2. XsltListViewWebPart.Xsl
  3. SPView.Xsl
  4. SPView.XslLink

Before starting with the exact specifics of using these properties I want to mention one other property of the XsltListViewWebPart class – CacheXslTimeOut. This is an integer property that specifies the cache time in seconds for the XslTransform object used by the XsltListViewWebPart. The caching of the XslTransform object is very useful because, of course, it boosts performance. Whether the XslTransform cache is used at all, however, depends on which of the “Xsl” or “XslLink” properties you use, which I think is important to know beforehand. One other thing here – when I tested the “caching” behavior of these properties I either changed the value of the properties in the case of the “Xsl” ones or changed the underlying XSL file in the case of the “XslLink” properties. The immediate reflection of the change in the rendering of the web part doesn’t necessarily mean that the cache is not used, because it may simply mean that the change of the property invalidates the XslTransform cache. So in the short descriptions of the properties’ usage below I won’t state whether the XslTransform cache is applied, but simply whether you see the change applied immediately in the rendering of the XsltListViewWebPart. Of course for testing purposes you can always set the value of the CacheXslTimeOut property to 1 second so that you can quickly change the XSL and see the result immediately.

And now the details and specifics about the usage of the “XSL” properties:

  • XsltListViewWebPart.XslLink – you can change this with either the SharePoint UI (it appears in the XsltListViewWebPart’s toolpart) or programmatically with the SharePoint object model. Note here that unlike the XslLink property of the SPView class you can’t specify simply the name of a custom XSL file that resides in the system TEMPLATE\LAYOUTS\XSL folder (e.g. the standard main.xsl or a custom “custom.xsl”) – this won’t work (it may be a bit surprising). You have two options for specifying the path to your custom XSL file here – the first one is to specify a file under the TEMPLATE\LAYOUTS folder (it can be directly in that folder or any sub-folder below it, not just the “XSL” one) – and the path (rather URL in this case) should mandatorily start with “/_layouts/”. The other option is to reference an XSL file that resides on your site, for instance in a document library – then you can use either the site relative URL of the file or the server relative one. For example, if you have a “custom.xsl” in a library whose URL is “documents”, then the site relative URL will be “documents/custom.xsl” (no starting slash) and the server relative URL will be something like “/sites/mysite/documents/custom.xsl” (note the starting slash – the starting part of the URL depends on the server relative URL of your site). And about the caching behavior – interestingly enough it is different depending on whether you use an XSL file from the LAYOUTS folder or in a document library in the site – in the first case the changes to the referenced XSL file won’t be visible immediately, and in the second – they will be.
  • XsltListViewWebPart.Xsl – you can change this property with the object model, but the easier way to do this is with the SharePoint Designer. With it, it should be easy to change the Xsl property even without a deep understanding of XSL – you can use the enhanced UI of the SharePoint Designer to modify the styling and rendering of individual list columns or to apply conditional rendering on whole rows in the XsltListViewWebPart. The SharePoint Designer automatically populates the Xsl property with an XSL snippet containing one or several XSL templates depending on your exact customizations (this XSL also references the standard “main.xsl” and the templates in it are rather “overrides” of the standard row and column rendering XSL templates). The changes to the “Xsl” property are applied immediately regardless of the value of the CacheXslTimeOut property. If you think that this may pose a performance issue for you, you can always save the XSL contained in the “Xsl” property to an external XSL file which you can then reference with the XsltListViewWebPart.XslLink or SPView.XslLink properties.
  • SPView.Xsl – you can again change this property programmatically – for that you will need to get the hidden SPView instance associated with your XsltListViewWebPart (see the link to my posting on the subject above) – or again the easier way to achieve that is to set the property in the view schema in the “schema.xml” file of your custom list template (custom “schema.xml” files can also be specified in ListInstance feature elements via the “CustomSchema” attribute). Note that below the “View” element in the “schema.xml” file you can have both “Xsl” and “XslLink” elements. One important note – you should provide the XSL in the “Xsl” element in the “schema.xml” and also in the SPView.Xsl property in an XML CDATA section. The changes to the SPView.Xsl property (those applied with code) are immediately visible in the XsltListViewWebPart regardless of the value of the CacheXslTimeOut property.
  • SPView.XslLink – you can change this property programmatically and also set its value in the XslLink element in the view schema in the “schema.xml” file of your SharePoint list. Normally you provide only a file name in this property and the referenced XSL file with that name should exist in the system TEMPLATE\LAYOUTS\XSL folder. You can also reference files in sub-folders of the LAYOUTS\XSL folder and even files in the LAYOUTS folder itself – in the first case you set the “XslLink” property to “sub\custom.xsl” and in the second case to “..\custom.xsl”. The XslTransform caching is fully applied for the SPView.XslLink property depending on the value of the CacheXslTimeOut property. There is one extra thing here compared to the XsltListViewWebPart.XslLink property – if you set the CacheXslTimeOut property to 1 second you will additionally need to recycle the application pool (which forces the invalidation of the cache) and only after that will you see the changes to the underlying XSL file immediately in the rendering of the XsltListViewWebPart. This is also true if you have the CacheXslTimeOut set to 1 second and you see the immediate changes but then set it to some higher value and then again reset it to 1 second – you will no longer see the immediate changes until you recycle the application pool again. This means that changing the CacheXslTimeOut property itself doesn’t invalidate the caching of the XslTransform in the case of the SPView.XslLink property.
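To make the last two options more concrete, here is a rough sketch of how the “Xsl” and “XslLink” elements could sit under the “View” element in a list definition’s “schema.xml” (attribute values and file names are illustrative; normally you would use only one of the two elements, since the inline “Xsl” takes precedence):

```xml
<View BaseViewID="1" Type="HTML" DisplayName="All Items" Url="AllItems.aspx">
  <!-- option 1: reference a file in TEMPLATE\LAYOUTS\XSL by name -->
  <XslLink>custom.xsl</XslLink>
  <!-- option 2: inline XSL, mandatorily wrapped in a CDATA section -->
  <Xsl><![CDATA[
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- custom row/field rendering templates go here -->
    </xsl:stylesheet>
  ]]></Xsl>
</View>
```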

So, this was the first topic that I wanted to talk about and about the second one I will demonstrate a short XSL snippet:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"
    xmlns:asp="http://schemas.microsoft.com/ASPNET/20" xmlns:mycontrols="http://mycontrols">
  <xsl:output method="html" indent="no"/>
  <xsl:decimal-format NaN=""/>
  <xsl:param name="XmlDefinition" select="."/>

  <xsl:template match="/">
    <xsl:value-of disable-output-escaping="yes" select="'&lt;%@ Register Tagprefix=&quot;asp&quot; Namespace=&quot;mycontrols.WebControls&quot; Assembly=&quot;mycontrols, Version=1.0.0.0, Culture=neutral, PublicKeyToken=d03859869fe4a098&quot; %&gt;'"/>
    <mycontrols:MyControl runat="server" ListUrl="docs" />

    <table cellpadding="3">
      <tr>
        <xsl:for-each select="$XmlDefinition/ViewFields/FieldRef">
          <th><xsl:value-of select="@DisplayName"/></th>
        </xsl:for-each>
      </tr>
      <xsl:for-each select="/dsQueryResponse/Rows/Row">
        <tr>
          <xsl:variable name="row" select="." />
          <xsl:for-each select="$XmlDefinition/ViewFields/FieldRef">
            <xsl:variable name="fieldName" select="@Name" />
            <td>
              <xsl:choose>
                <xsl:when test="$fieldName='LinkFilenameNoMenu'">
                  <xsl:value-of select="$row/@FileLeafRef"/>
                </xsl:when>
                <xsl:otherwise>
                  <xsl:value-of select="$row/@*[name() = $fieldName]" disable-output-escaping="yes"/>
                </xsl:otherwise>
              </xsl:choose>
            </td>
          </xsl:for-each>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>

You can see that the snippet does its own custom rendering of the underlying SharePoint list data (not a very good one at that) and doesn’t even include the standard SharePoint “vwstyles.xsl” and “fldtypes.xsl” (it’s only for demonstration purposes and should be as short as possible). The important thing in it is at the very beginning of the body of its single XSL template definition – you can see there a somewhat concealed asp.net page “Register” directive and immediately after it an asp.net markup declaration of a server control. Note also that the “mycontrols” namespace of the server control element is declared in the root element of the XSL file. So, basically this means that you can place server controls inside the XSL and these will be instantiated by the XsltListViewWebPart and their logic will be executed server-side. As you see in the snippet I placed only one server control before the actual rendering of the list data, but you can place controls in every row or even cell that you render, and you can provide some list item data to the properties of the server control or controls (there may of course be performance considerations because of the number of controls that can be created in this manner).

And to answer the question – how does the XsltListViewWebPart instantiate the server controls that you may place in the XSL? It is actually simple – the XsltListViewWebPart uses the XSL transformation to produce HTML markup from the source XML, and then checks whether there are occurrences of the “runat=server” token inside the HTML markup: if there are no occurrences, the HTML markup is simply rendered; otherwise the web part instantiates a server control using the asp.net TemplateControl.ParseControl method, providing the raw HTML markup to its single parameter. The TemplateControl.ParseControl method, as its name suggests, parses asp.net markup and creates a server control tree from it, much like what happens when a normal “aspx” or “ascx” file gets compiled. So, with this small “trick” the XsltListViewWebPart allows you to place server controls and implement more complex server-side logic inside your custom XSL. And one limitation to this approach – you can’t use user controls (ascx files in the CONTROLTEMPLATES or LAYOUTS system folders) inside the XSL (server controls that use user controls internally with the TemplateControl.LoadControl method will also fail to instantiate their user controls).


XsltListViewWebPart & ContentByQueryWebPart – ParameterBindings, localization and embedded server controls


In SharePoint 2010 the XsltListViewWebPart and the ContentByQueryWebPart web parts play a pivotal role in the presentation of list data. It turns out that both web parts share a lot of common functionality because they both inherit from one and the same XSL-transforming base web part – the DataFormWebPart (the CQWP via the CmsDataFormWebPart and the XLV via the BaseXsltListWebPart classes respectively). One of these shared pieces of functionality is the so called “parameter bindings” available through the DataFormWebPart.ParameterBindings property. The idea of the “parameter bindings” is that using specific XML data that you assign to the ParameterBindings property you can fetch various properties or variables from the asp.net, page, web part, etc. environments, which then become available as XSL parameters (“xsl:param” XSL items) in your custom XSL used in either the XsltListViewWebPart or the ContentByQueryWebPart web parts. With these additional custom XSL parameters you can apply different presentation logic and even make the presentation of the two web parts react dynamically to changes of the variables coming from these external (from the web part’s perspective) environments/contexts. So, let’s jump directly to a sample “parameter bindings” XML:

<ParameterBindings>
  <ParameterBinding Name="CustomBinding" DefaultValue="my custom value" />
  <ParameterBinding Name="dvt_firstrow" Location="Postback" DefaultValue="1" />
  <ParameterBinding Name="ResourceBinding" Location="Resource(wss,noitemsinview_doclibrary)" />
  <ParameterBinding Name="Today" Location="CAMLVariable" />
  <ParameterBinding Name="UserID" Location="CAMLVariable" />
  <ParameterBinding Name="WPVariableBinding" Location="WPVariable(_WPID_ | _WPQ_ | _WPR_ | _WPSRR_ | _LogonUser_ | _WebLocaleId_)" />
  <ParameterBinding Name="QuerySringBinding" Location="QueryString(myqueryparam)" DefaultValue="empty" />
  <ParameterBinding Name="FormBinding" Location="Form(myhidden)" DefaultValue="empty" />
  <ParameterBinding Name="ServerVariableBinding" Location="ServerVariable(REMOTE_ADDR)" />
  <ParameterBinding Name="WPPropertyBinding" Location="WPProperty(Title)" />
  <ParameterBinding Name="ControlBinding1" Location="Control(myDDL)" />
  <ParameterBinding Name="ControlBinding2" Location="Control(myDDL,SelectedIndex)" />
  <ParameterBinding Name="CombinedBinding" Location="Form(myhidden);QueryString(myqueryparam)" DefaultValue="empty" />
</ParameterBindings>

Yes, this was a somewhat extensive sample, but it actually covers most of the options that are available for the “parameter bindings” and I will give more details for each option shortly. As you see the XML’s format is pretty simple – it contains a root “ParameterBindings” element with “ParameterBinding” child elements. The “ParameterBinding” element may have three attributes: Name, Location and DefaultValue. The “Name” attribute is mandatory and it specifies the name of the XSL parameter that will be initialized by this “parameter binding”. In most cases you are free to specify any suitable and meaningful for your scenario name for the parameter binding, but for certain values of the “Location” attribute you can use only a predefined set of possible names (see below). The “Location” attribute contains the logic for specifying the context from which you will fetch a certain variable/value that will become available as an XSL parameter – you can choose from several predefined options here. The “DefaultValue” attribute is self explanatory – when the value retrieved from the specified context is not available (is an empty string), the XSL parameter will be initialized with the value of the “DefaultValue” attribute.

So, after you know the XML format of the ParameterBindings property, the next logical step would be to assign this XML to a real CQWP or XLV web part that you have somewhere in your SharePoint farm. The standard SharePoint UI doesn’t allow you to edit this property, so you will have to choose from several alternatives: SharePoint Designer 2010 (pretty handy actually and requiring no custom coding), custom code (e.g. in a feature receiver), a PowerShell script, or a specialized tool (a good place to advertise my “web part manager” utility).

In order that you can actually use the custom parameters in your XSL you will also need to define “xsl:param” elements with corresponding names in the main (root) XSL file of your CQWP or XLV web part (your custom ContentQueryMain.xsl for the CQWP or the Main.xsl for the XLV web part). If the “xsl:param” is defined in the main XSL file it will be available (visible) in all included XSL files as well. Basically you can define the “xsl:param” elements for your “parameter bindings” in one of the included XSL files (e.g. the ItemStyle.xsl or the fldtypes.xsl) – then the parameter will be available only in this XSL file (there won’t be a problem if the parameter is defined in both the main and the included XSL file either). The definitions of the XSL parameters for the parameter bindings from the sample above can look something like:

  <xsl:param name="CustomBinding" />
  <xsl:param name="ResourceBinding" />
  <xsl:param name="dvt_firstrow" select="1" />
  <xsl:param name="Today" />
  <xsl:param name="UserID" />
  <xsl:param name="WPVariableBinding" />
  <xsl:param name="QuerySringBinding" />
  <xsl:param name="FormBinding" />
  <xsl:param name="ServerVariableBinding" />
  <xsl:param name="WPPropertyBinding" select="'no name'" />
  <xsl:param name="ControlBinding1" />
  <xsl:param name="ControlBinding2" />
  <xsl:param name="CombinedBinding" />

Note that by using the “select” attribute of the “xsl:param” element you can again specify a default value for the parameter in case the initializing parameter binding has no “DefaultValue” attribute and returns an empty value. This default value will also be used if there is no parameter binding with that name specified in the XML of the ParameterBindings property.
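Once a parameter is declared, using it in the XSL is straightforward – for example, a template body could vary its output based on the “QuerySringBinding” parameter from the sample above (the markup produced here is purely illustrative):

```xml
<xsl:choose>
  <!-- the binding falls back to "empty" via its DefaultValue attribute -->
  <xsl:when test="$QuerySringBinding = 'empty'">
    <span>No value was supplied in the query string.</span>
  </xsl:when>
  <xsl:otherwise>
    <span>myqueryparam = <xsl:value-of select="$QuerySringBinding" /></span>
  </xsl:otherwise>
</xsl:choose>
```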

And now let’s have a look at the available options that you have for the “Location” attribute of the “ParameterBinding” element:

  • missing Location attribute – this is also possible – in this case you need to provide the “DefaultValue” attribute – basically you provide the web part with a fixed/constant parameter value. This can be very handy if you have for example several CQWP parts using several different but very similar item styles (the item styles can differ in some minor elements like the target of the links or the presence of some additional small elements per item). In this scenario you can use only one item style which can check one or several parameters provided by the parameter bindings and adjust its presentation accordingly (the different web parts will provide different values in their parameter bindings or not provide some of the parameter bindings at all). It is obviously far easier to maintain only one item style instead of several, so the parameter bindings can help a lot in this scenario.
  • Location “Postback” – in this case you cannot choose arbitrary names, there is a predefined set of available names and they all have the “dvt_” prefix: dvt_sortdir, dvt_sortfield, dvt_filterfields, dvt_firstrow, dvt_nextpagedata, dvt_prevpagedata, dvt_partguid. From these only the last one is available in the CQWP (the rest will always have empty values in the CQWP). The “dvt_partguid” parameter will contain a value that seems like the “ClientID” of the web part (a sample value will look something like ctl00$ctl23$g_59a84b77_6a94_4c65_81cd_618b4c21c374) but actually you won’t find a matching HTML element with exactly this ID on the page. The other “dvt_” parameters contain specific XLV “postback” data related to the XLV’s sorting, filtering and paging. In fact these are already defined as “xsl:param” elements in the standard Main.xsl of the XLV web part and get initialized by the web part even without it being necessary to add them as parameter bindings. Depending on the current sorting, filtering and paging of your XLV you can see values like these in the “dvt_” parameters:

    dvt_firstrow: 31
    dvt_sortdir: descending
    dvt_sortfield: LinkTitle
    dvt_filterfields: ;LinkTitle;
    dvt_nextpagedata: Paged=TRUE&p_Title=item%2045&p_ID=45
    dvt_prevpagedata: Paged=TRUE&PagedPrev=TRUE&p_Title=item%2071&p_ID=71
  • Location Resource([Resource_File],[Resource_Name]) – this one is very useful since it allows you to make values from resource files available in your XLV or CQWP web parts. The resource files are located under the App_GlobalResources subfolder of the physical directory of your web application and have the “resx” extension. If you have for example this value in the “Location” attribute – Resource(wss,noitemsinview_doclibrary) – it will fetch a localized string from the “wss.resx” resource file (actually it can be a resx file for a specific culture like wss.en-US.resx) with the name “noitemsinview_doclibrary”. If you check the wss.resx file you will see that the value of this resource string is “There are no items to show in this view of the "<ListProperty Select="Title" HTMLEncode="TRUE" />" document library” – you see a fancy “ListProperty” XML element inside the resource string, but in the XLV this will get replaced with the title of the actual list that the XLV web part displays. Apart from the “parameter bindings” localization support there are other alternatives for localization in both the XLV and CQWP web parts – check the paragraph titled “Localization” below.
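For instance, a parameter binding fetching the resource string mentioned above, together with its matching parameter declaration, could look like this sketch (the binding name “NoItemsText” is an arbitrary choice for the illustration):

```xml
<!-- in the ParameterBindings property of the web part: -->
<ParameterBinding Name="NoItemsText" Location="Resource(wss,noitemsinview_doclibrary)" />

<!-- and in the main XSL file: -->
<!-- <xsl:param name="NoItemsText" /> -->
```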
  • Location CAMLVariable – in this case you can use only two predefined values for the “Name” attribute of the parameter binding: Today and UserID. Sample values returned by these two parameters are:

    Today: 2010-09-17T16:04:48Z
    UserID: Stefan Stanev

    Note that the UserID parameter actually returns the display name, not the account name of the current user.
  • Location WPVariable([Tokens]) – the value inside the brackets can contain one or a combination of several predefined tokens: _WPID_, _WPQ_, _WPR_, _WPSRR_, _LogonUser_, _WebLocaleId_. If you use a combination of these you can separate them with spaces or other arbitrary characters/strings. The names of these tokens are not that descriptive but you can get some idea of their meaning by these sample values that correspond to the “WPVariable” parameter binding from the sample at the top:

    g_3242c920_b37c_4cfa_a356_955bb398d47f | WPQ2 | http://myserver/sites/2/_wpresources/Microsoft.SharePoint.Publishing/14.0.0.0__71e9bce111e9429c | /sites/2/_wpresources/Microsoft.SharePoint.Publishing/14.0.0.0__71e9bce111e9429c | myserver\stefanstanev | 1033
  • Location QueryString([QueryParam]) – this allows you to get values from query parameters in the current URL in your custom XSL – it actually maps to the HttpContext.Current.Request.QueryString collection. So if you have this query parameter in the URL of the current page: default.aspx?myqueryparam=somevalue – the value “somevalue” will become available in the xsl:param corresponding to the “QueryString” parameter binding which specifies the “myqueryparam” parameter name in its Location attribute.
  • Location Form([FormParam]) – this maps to the HttpContext.Current.Request.Form collection. If you have this input element on your page:

    <input type="hidden" id="myhidden" name="myhidden" value="myhiddenvalue"/>

    this parameter binding

    <ParameterBinding Name="FormBinding" Location="Form(myhidden)" DefaultValue="empty" />

    will provide the “myhiddenvalue” in the <xsl:param name="FormBinding" /> parameter (only on page postbacks, otherwise the parameter value will be the one specified in the “DefaultValue” attribute: “empty”)
  • Location ServerVariable([ServerVariable]) – this maps to the HttpContext.Current.Request.ServerVariables collection. If you have for example Location=”ServerVariable(REMOTE_ADDR)” it will initialize the corresponding XSL parameter with something like: fe80::a589:57ce:e0ec:1f1c%13
  • Location WPProperty([PropertyName]) – in the brackets you can specify the name of a public instance property of the XLV or CQWP classes – this can be the “Title” or “ID” or any other property of the web parts whose value you want to use in your XSL.
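As a quick illustration, a binding exposing the web part’s Title property with a fallback value could look like this sketch (the same shape as the “WPPropertyBinding” entry from the sample parameter declarations above):

```xml
<ParameterBinding Name="WPPropertyBinding" Location="WPProperty(Title)" DefaultValue="no name" />
```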
  • Location Control([ControlID]) and Control([ControlID],[PropertyName]) – here you need to provide the ID of a server control on the page containing the web part and optionally the name of a public instance property of the control’s class. If you skip the property name option the parameter will be initialized with the value of the standard Control.Text property. So if you have this drop-down list control on your page:

    <asp:DropDownList runat="server" ID="myDDL" AutoPostBack="true">

      <asp:ListItem>Item 1</asp:ListItem>

      <asp:ListItem>Item 2</asp:ListItem>

      <asp:ListItem>Item 3</asp:ListItem>

      <asp:ListItem>Item 4</asp:ListItem>

    </asp:DropDownList>


    which has its third option selected, the parameter binding with Location="Control(myDDL)" will initialize its parameter with the value of “Item 3” (the “Text” property of the DropDownList control will return the same value as its “SelectedValue” property) and the one with Location="Control(myDDL,SelectedIndex)" will initialize its parameter with the value of “2” (the third item in the zero-based “SelectedIndex” property). As you see this type of parameter binding allows you to make the presentation of your CQWP and XLV web parts interact with the server controls that you may have in the containing page.
  • Location that is a combination of several options – in this case the Location options should be separated by a semi-colon (from the sample at the top – Location="Form(myhidden);QueryString(myqueryparam)"). The ordering here is important – the preceding Location options will be evaluated first – from the example – if you have both the “myhidden” form parameter and the “myqueryparam” query parameter available in the current page, the parameter binding will return the value of the “myhidden” form parameter, because the “Form” option is placed before the “QueryString” option. The idea of this combining is that if one or more of the preceding options yield no value, then the next option which returns a value will be taken by the parameter binding and passed to its corresponding parameter.

Localization

Localization can be quite a serious issue with the availability of so many multi-lingual sites around. Fortunately as you saw the “parameter bindings” support provides a way to use localized resources in the CQWP and XLV web parts. Besides the “parameter bindings” there is one other much easier alternative to get resource strings in the XLV web part:

<xsl:value-of select="/dsQueryResponse/Rows/@resource.wss.fld_yes" />

Yes, it is as simple as that – you just specify with XPath an attribute of the “/dsQueryResponse/Rows” element of the source XML. And you don’t have to additionally set some web part properties or define extra XSL parameters. As you see the name of the attribute follows a specific convention – it consists of three parts separated by dots – the first one is the constant “resource” prefix, the second one specifies the name of the targeted resource file (wss.resx in this case) and the third part is the name of the resource in the resource file. In the “vwstyles.xsl” and “fldtypes.xsl” XSL files of the XLV web part there are XSL variables like “$Rows” and “$thisNode” which map to the “/dsQueryResponse/Rows” and “/dsQueryResponse/Rows/Row” XML elements respectively. So, these alternative variants of the special “resource” attribute’s XPath are possible (from locations where you have these XSL variables available):

<xsl:value-of select="$thisNode/../@resource.wss.Thunmbnail"/>

<xsl:value-of select="$Rows/@resource.wss.ManualRefreshText"/>

And … the interesting thing – how do these resource strings come to be attributes of the “/dsQueryResponse/Rows” element, after all you can have lots of resource files with hundreds of resource strings inside them. The answer is that the “resource” attributes are actually something like pseudo-attributes of the source XML. And actually there is no source XML, at least in the form of an XML DOM container class like XmlDocument or XDocument. The trick is possible because the XLV uses a custom XPathNavigator (overriding the DataFormWebPart.GetXPathNavigator virtual method). This custom XPathNavigator queries directly the SPListItem data that the XLV web part retrieves from SharePoint and as an extra provides the “resource” pseudo-attribute support. The main reason to use a custom XPathNavigator is simple – performance (this way you skip the step of transforming the list item data to a DOM XML container like XmlDocument so that you can then use its default XPathNavigator, which additionally will not perform as well as the custom one).

And how about the CQWP – I kept mentioning just the XLV web part in the previous paragraph but not the CQWP. Unfortunately the “resource” pseudo-attribute “shortcut” is not available in the CQWP (the CQWP also uses a custom XPathNavigator but it is not the same as the one used by the XLV web part and doesn’t implement the “resource” attribute logic). Still, there is yet another alternative available for the CQWP web part – the idea is that the .NET XSL transformations implementation allows you to define a .NET class whose methods (or rather, static methods) can become available and be used from inside the XSL (check the XsltArgumentList.AddExtensionObject method in MSDN for more details). Such an “extension” class can also be used in the CQWP web part but you will need to create a new class inheriting the CQWP class. The CQWP’s ModifyXsltArgumentList virtual method (which you will need to override) is the place where you can hook the custom extension object – check this nice article which describes the whole procedure in detail. And this article describes how this technique can be used to fetch resource strings into the web part’s XSL.
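To give a rough idea of the approach described in the referenced articles, a sub-classed CQWP could register an extension object along these lines. This is a sketch only – the class names and the namespace URI are made up here, the LCID is hard-coded, and the exact ArgumentClassWrapper API should be verified against the articles:

```csharp
using Microsoft.SharePoint.Publishing.WebControls;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.WebPartPages;

// Sketch: "ResourceAwareCQWP", "XslHelper" and the namespace URI are made-up names.
public class ResourceAwareCQWP : ContentByQueryWebPart
{
    protected override void ModifyXsltArgumentList(ArgumentClassWrapper argList)
    {
        base.ModifyXsltArgumentList(argList);
        // makes the public methods of XslHelper callable from inside the XSL
        argList.AddExtensionObject("urn:my-xsl-helper", new XslHelper());
    }
}

public class XslHelper
{
    // callable from XSL as my:GetResource('wss', 'fld_yes'),
    // assuming xmlns:my="urn:my-xsl-helper" is declared in the stylesheet
    public string GetResource(string file, string name)
    {
        return SPUtility.GetLocalizedString(
            "$Resources:" + file + "," + name + ";", file, 1033 /* LCID hard-coded for brevity */);
    }
}
```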

Embedded server controls

In my previous posting I demonstrated how you can embed server controls inside the XSL of the XLV web part. This functionality is provided by the base DataFormWebPart class, so it would be normal to expect that this will also work in the CQWP web part. But … it doesn’t (at least in most of the cases that I tried). It turns out that in the DataFormWebPart class there is a public virtual property named CanHaveServerControls, whose obvious purpose is to enable or disable the ability of the web part to parse and instantiate embedded server controls. This property is not overridden in the CQWP but its base implementation checks several other virtual properties which in the case of the CQWP always (or in most cases) return false. So the only possible work-around in this case (if you need embedded server controls in the CQWP that badly) is to simply subclass the CQWP. The inheriting class will just need to override the CanHaveServerControls property and will look something like:

    public class MyCQWP : ContentByQueryWebPart
    {
        public override bool CanHaveServerControls
        {
            get
            {
                return true;
            }
            set
            {
                base.CanHaveServerControls = value;
            }
        }
    }

WebProvisioned event receiver – a practical example


The WebProvisioned web event receiver is a new receiver type introduced in SharePoint 2010. It is one very useful addition to the already extensive suite of event receiver types in SharePoint, as it allows you to add some additional logic when a sub-site in your site collection gets created. In this sense it resembles to some extent the feature stapling functionality, but it differs from it because its scope is the site collection level whereas feature stapling affects the whole farm (I personally dislike farm-wide impact like this since you generally/potentially break all other solutions that may have been installed in the same farm). [updated 2010-12-29]

Creating and deploying a WebProvisioned event receiver is also a relatively simple task as you have a dedicated project item in Visual Studio 2010 – it basically supports all available receiver types, including the WebProvisioned one. With a couple of clicks you will be able to see an elements file for your WebProvisioned receiver that will look something like this:

<?xml version="1.0" encoding="utf-8"?>

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">

  <Receivers>

    <Receiver>

      <Name>EventReceiver1WebProvisioned</Name>

      <Type>WebProvisioned</Type>

      <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>

      <Class>Stefan.SharePoint.EventReceiver1.EventReceiver1</Class>

      <SequenceNumber>10000</SequenceNumber>

    </Receiver>

  </Receivers>

</Elements>

The event receiver element definition will come with a new feature definition if you don’t already have one in your SharePoint project. Basically this is enough to have your new event receiver provisioned, but you may further consider two more important options: the first one is the Synchronization of the event receiver – you can specify either “synchronous” or “asynchronous” here (“asynchronous” is the default one):

<Synchronization>Synchronous</Synchronization>

The Synchronization element should be placed inside the “Receiver” element. Normally I would opt for the synchronous type – this is justified especially in cases where you create your sites with the SharePoint UI and expect to see the results of the receiver immediately when the home page of the site loads. Another thing is that with asynchronous receivers you risk save conflicts and concurrency issues, especially when you create many sites simultaneously, which is the case when you use a portal site definition.
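For reference, the receiver class behind the elements file above just overrides the WebProvisioned method of SPWebEventReceiver – a minimal sketch (the master page copying shown in the body is only an illustration, not the actual logic of the Visual Studio generated class):

```csharp
using Microsoft.SharePoint;

public class EventReceiver1 : SPWebEventReceiver
{
    // invoked after the new sub-site has been created
    // (synchronously, if the receiver is registered as synchronous)
    public override void WebProvisioned(SPWebEventProperties properties)
    {
        using (SPWeb web = properties.Web)
        {
            // illustration only: copy the master page setting from the parent web
            if (web.ParentWeb != null)
            {
                web.MasterUrl = web.ParentWeb.MasterUrl;
                web.Update();
            }
        }
    }
}
```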

The second option is the Scope option – you can specify it with the “Scope” attribute of the “Receivers” element – you can set it to either “Site” or “Web”:

<Receivers Scope="Site">

The scope determines whether the event receiver will be added to the SPSite.EventReceivers or SPWeb.EventReceivers collections. And there is a substantial difference in the behavior and application of the receiver depending on whether it is added to the SPSite or SPWeb level: in the first case it will be called for every sub-site created in the site collection, while in the second case, the receiver will be called only for the sub-sites that are immediate children of the site in whose EventReceivers collection the receiver is added. One other interesting “feature” (a rather peculiar one) is that when I tested my sample “Site” scoped WebProvisioned receiver it got called twice for one and the same site (no idea yet if this is an issue with my dev environment only or it is something by design). This is not the case for “Web” scoped WebProvisioned receivers.

Another important thing with the receiver’s scope is that the “Site” scope will be applied only if the activating feature also has “Site” scope (site collection scoped). If the feature has “Web” scope, the “Scope” attribute of the “Receivers” element will be ignored and the receiver will be added to the SPWeb.EventReceivers collection.

General purpose WebProvisioned event receiver

And now let’s have a look at one practical example – the general purpose WebProvisioned event receiver that I created (download source code from here). So, first let me say several words about the real life issues that I thought could be addressed with a WebProvisioned receiver – one of these is the ability of sub-sites to inherit certain settings from their parent site – for example the site default master page or the alternate CSS URL. This functionality is available for sub-sites based on the standard publishing site templates but that’s not the case for the non-publishing site definitions (like team site, blog site, wiki site, etc). This issue can be addressed with custom feature/features but it is not handy to create new site definitions that extend the standard ones only to add this extra functionality. Feature stapling is also an alternative in this case but it affects the whole farm which is definitely not a thing that I want to have in my SP installation [updated 2010-12-29]. And apart from the web settings we also have the settings managing the site navigation that are also suitable for inheriting – beyond the option to inherit the top navigation provided in the standard SharePoint “create site” page (and you can create your sites or site hierarchies with custom tools or scripting where this option is obviously not available).

One other important thing that I wanted to have in my WebProvisioned event receiver was that it should be fully configurable. And that this configurability is achieved via feature properties – because the event receiver will be naturally provisioned by a feature which will support a set of feature properties that will determine the behavior of the event receiver.

So, let me directly start with an example of the custom configuration feature properties so that you can get an idea of the type of functionality that this custom WebProvisioned receiver provides:

<Properties>

  <Property Key="ReceiverScope" Value="Site" />

  <Property Key="Web.MasterUrl" Value="~SiteCollection/_catalogs/masterpage/v4.master" />

  <Property Key="Web.AlternateCssUrl" Value="/_layouts/styles/mydefault.css" />

  <Property Key="CustomProperty.Test" Value="Some value" />

  <Property Key="PublishingWeb.IncludeInCurrentNavigation" Value="true" />

  <Property Key="Navigation.GlobalIncludePages" Value="true" />

  <Property Key="Navigation.GlobalIncludeSubSites" Value="true" />

</Properties>

The first feature property – “ReceiverScope” – as its name suggests, determines whether the event receiver should be added to the SPSite or SPWeb EventReceivers collection respectively (you can use either “Site” or “Web” in its value, exactly the same as in the “Scope” attribute from the event receiver sample element above). And as you can probably figure out already – the configurability of the receiver’s scope with a feature property means that you can’t use declarative CAML in a “Receiver” feature element but instead have to use code in a feature receiver that creates the WebProvisioned event receiver. You will see that in the sample code I simply commented out the CAML in the event receiver’s element file and added a feature receiver to the original receiver’s feature (as both were created by Visual Studio 2010) that creates everything with code. The code of the feature receiver works for both “Site” and “Web” scoped features, so you don’t have to couple the “Site” web event receiver’s scope with a site collection scoped feature as is the case with the “Receiver” feature element.
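The relevant part of such a feature receiver can be sketched roughly like this (simplified – the real sample reads the scope from the “ReceiverScope” feature property, while here the scope handling and the receiver class name are hard-coded for brevity):

```csharp
using Microsoft.SharePoint;

public class WebProvisionedReceiverFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // works for both Site and Web scoped features
        SPEventReceiverDefinitionCollection receivers;
        SPSite site = properties.Feature.Parent as SPSite;
        if (site != null)
            receivers = site.EventReceivers;                              // "Site" receiver scope
        else
            receivers = ((SPWeb)properties.Feature.Parent).EventReceivers; // "Web" receiver scope

        SPEventReceiverDefinition def = receivers.Add();
        def.Type = SPEventReceiverType.WebProvisioned;
        def.Synchronization = SPEventReceiverSynchronization.Synchronous;
        def.Assembly = typeof(WebProvisionedReceiverFeatureReceiver).Assembly.FullName;
        def.Class = "Stefan.SharePoint.EventReceiver1.EventReceiver1";    // the receiver class
        def.SequenceNumber = 10000;
        def.Update();
    }
}
```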

The rest of the feature properties determine the concrete behavior of the WebProvisioned event receiver – as you see, their keys follow a specific naming convention – they consist of two parts separated by a dot. The first part can contain the following predefined values: Web, CustomProperty, PublishingWeb and Navigation. They correspond to the target instance that the receiver will modify – SPWeb, SPWeb.AllProperties, PublishingWeb and PublishingWeb.Navigation respectively (the SPWeb instance in this case is the SPWeb of the newly created (provisioned) web that the WebProvisioned receiver is invoked for and the PublishingWeb is the object retrieved from this SPWeb instance using the PublishingWeb.GetPublishingWeb static method). The second part of the key specifies the name of a public instance property of the target class in the case of the SPWeb, PublishingWeb and PortalNavigation (PublishingWeb.Navigation) target objects, and a key in the Hashtable instance in the case of the SPWeb.AllProperties target object. The properties of the SPWeb, PublishingWeb and PortalNavigation classes can only be of the following .NET types – System.String, primitive .NET types (boolean, integer, double, etc) or .NET enumerations; properties of other .NET types are not supported (meaning that you can modify only properties of the above mentioned types in the target instances with the general purpose WebProvisioned event receiver).
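Setting a property by its string name on the target instance boils down to plain reflection – a minimal sketch of the idea (not the actual code from the sample solution):

```csharp
using System;
using System.Reflection;

public static class PropertySetter
{
    // e.g. target = an SPWeb instance, propertyName = "AlternateCssUrl",
    // rawValue = "/_layouts/styles/mydefault.css"
    public static void SetFromString(object target, string propertyName, string rawValue)
    {
        PropertyInfo prop = target.GetType().GetProperty(propertyName,
            BindingFlags.Public | BindingFlags.Instance);
        if (prop == null || !prop.CanWrite) return;

        object converted;
        if (prop.PropertyType.IsEnum)
            converted = Enum.Parse(prop.PropertyType, rawValue, true);
        else
            converted = Convert.ChangeType(rawValue, prop.PropertyType); // strings and primitives

        prop.SetValue(target, converted, null);
    }
}
```

This covers exactly the supported type set mentioned above: strings, primitives via Convert.ChangeType and enumerations via Enum.Parse.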

So, the feature properties from this first sample set fixed values to some of the properties of the new web that the web event receiver was invoked for, but what about inheriting these properties from the parent web. Have a look at this second sample:

  <Property Key="Web.MasterUrl" Value="${~Parent}" />

  <Property Key="Web.CustomMasterUrl" Value="${~ParentNoRoot}" />

  <Property Key="Navigation.GlobalIncludePages" Value="${~SiteCollection}" />

As you see, you can specify special tokens in the “Value” attribute of the feature property elements too, these three options are available:

  • ${~Parent} – with this token you specify that the property of the “Key” attribute will be copied from the parent web of the current web
  • ${~ParentNoRoot} – this is almost the same as the first option, the only difference being that if the parent site is the root site of the current site collection the property won’t be copied to the current web (meaning that the property will get copied only if the current web is not a child of the root web).
  • ${~SiteCollection} – this token specifies that the property will be copied from the root web of the current site collection (no matter whether it is the parent of the current web or not)

In the case of the ${~ParentNoRoot} token you saw that there will be cases when the specified web property won’t get copied to the current web (for first level children). In this case you will need to specify two feature property elements with the same “Key”:

  <Property Key="Web0.CustomMasterUrl" Value="~SiteCollection/_catalogs/masterpage/v4.master" />

  <Property Key="Web1.CustomMasterUrl" Value="${~ParentNoRoot}" />

… or almost the same “Key” – you see that there is an extra digit before the dot separator in the “Key” attribute – this is a special trick that the WebProvisioned event receiver supports because SharePoint doesn’t allow you to have feature property elements with the same “Key” attribute. Another important thing here is the order of evaluation of the feature property elements that specify one and the same web property – in this case the properties appearing later in the properties’ definition will have precedence – this means that the static “v4.master” value will be applied only for first level child webs and all other sub-webs will have their “CustomMasterUrl” property copied from their respective parent webs.

So much about web properties’ inheritance, but what about the much rarer case when you may want the opposite - to disallow the web properties’ inheritance (which is the default behavior for the SPWeb’s MasterUrl, CustomMasterUrl and AlternateCssUrl properties for publishing sites). This sample would do the trick:

  <Property Key="Web.MasterUrl" Value="~SiteCollection/_catalogs/masterpage/v4.master" />

  <Property Key="Web.CustomMasterUrl" Value="~SiteCollection/_catalogs/masterpage/v4.master" />

  <Property Key="Web.AlternateCssUrl" Value="" />

  <Property Key="CustomProperty.__InheritsMasterUrl" Value="false" />

  <Property Key="CustomProperty.__InheritsCustomMasterUrl" Value="false" />

  <Property Key="CustomProperty.__InheritsAlternateCssUrl" Value="false" />

As you see, when you create publishing sub-sites it is not enough to just set the SPWeb’s MasterUrl, CustomMasterUrl and AlternateCssUrl properties. To “break” the inheritance of the master page settings you will also need to set several custom properties in the SPWeb.AllProperties collection as well.

And one last option that the general purpose WebProvisioned event receiver supports:

<Property Key="[SPS;STS#1]Navigation.GlobalIncludeSubSites" Value="true" />

You can optionally specify the names of the target site definitions for which the property setting should be applied. You can specify one or more site definition names separated by semi-colons at the beginning of the “Key” attribute, enclosed in square brackets. You can use names of site definitions with or without a configuration number added (after the ‘#’ character) – when you use a site definition name without a configuration number the property will be applied for all available configurations in this site definition (actually for webs based on all configurations).

And lastly, several words about the sample solution with the general purpose WebProvisioned receiver: it contains a feature named “WebProvisionedReceiver” – this is the feature which actually installs the custom WebProvisioned receiver. To use it you will need to provide it with the feature properties appropriate for your scenario (starting with the “ReceiverScope” one) – this you can do in the onet.xml file of a site definition of yours in which you add a reference to the “WebProvisionedReceiver” feature. Optionally (handy for testing purposes) you can add the feature properties directly to the feature’s definition (the feature.xml template file).

Besides the “WebProvisionedReceiver” feature the sample solution contains one more feature which is called “WebSettings”. It can be configured with exactly the same properties as the “WebProvisionedReceiver” feature (except the “ReceiverScope” one). This feature uses the same internal logic as the general purpose WebProvisioned receiver but instead applies the specified web properties directly to the web in which it is activated.

ListViewWebPart – set the toolbar with code without reflection


[update (Aug 25, 2011): this solution works only for standard SharePoint lists, whose schema definitions (in the schema.xml file) already contain view definitions which do not have the toolbar (usually view definitions with BaseViewID=0) – see details below]

After several postings about the XLV web part in SharePoint 2010, let’s take a step back and have a look at its predecessor – the ListViewWebPart from SharePoint 2007. And more precisely about the problem with the list view toolbar when you add the web part with code. Basically the problem is that the web part gets added with the toolbar containing the “New”, “Actions” and “Settings” buttons which you normally see in the default list view pages like the “AllItems.aspx” one, and which in probably more than 90% of the cases you don’t want to see in the custom page, to which you are adding the web part.

There is already a solution to this problem (i.e. to remove the toolbar after you add the web part), which you can find in at least a dozen SharePoint blogs on the internet, but the thing is that the solution in question uses reflection. Which is not a good thing at all. And actually the original solution stopped working after the Infrastructure Update for MOSS 2007, so a small extra bit of reflection had to be added to the original code so that it works after the update.

And let’s see now, how it is possible to add a ListViewWebPart to a page without the commands toolbar. Actually there are two slightly different ways to add a ListViewWebPart to a page with code, let’s see the first one:

        static void AddListView(SPFile file, SPList list, string zoneID)
        {
            SPLimitedWebPartManager mngr = file.GetLimitedWebPartManager(System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);
            ListViewWebPart lvWebPart = new ListViewWebPart();

            lvWebPart.ListName = list.ID.ToString("B").ToUpper();
            // this is either the default view, or some of the other views that you get from the SPList.Views collection by name
            lvWebPart.ViewGuid = list.DefaultView.ID.ToString("B").ToUpper();

            mngr.AddWebPart(lvWebPart, zoneID, 1);
        }

Yes, this shouldn’t be something new for you, because you probably use something similar to this code if you have the problem with the commands toolbar.

And here is the second approach:

        static void AddListView2(SPFile file, SPList list, string zoneID)
        {
            SPLimitedWebPartManager mngr = file.GetLimitedWebPartManager(System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);
            ListViewWebPart lvWebPart = new ListViewWebPart();

            lvWebPart.ListName = list.ID.ToString("B").ToUpper();
            string schemaXml = list.GetUncustomizedViewByBaseViewId(0).HtmlSchemaXml;
            // you can modify the schema XML of the view before assigning it to the ListViewWebPart
            // e.g. you can change the ViewFields, the Query element, etc
            lvWebPart.ListViewXml = schemaXml;

            mngr.AddWebPart(lvWebPart, zoneID, 1);
        }

As you see, it is very similar to the first approach and the only difference is that instead of setting the ListViewWebPart.ViewGuid property you set the ListViewWebPart.ListViewXml property. And the outcome of this code snippet is that the list view web part gets added to the target page without the annoying toolbar.

So far so good, but let’s have a closer look at the two code snippets, so that you can understand what happens behind the curtains. In the first code example, the ListViewWebPart.ViewGuid property gets set – but this doesn’t imply that the LV web part gets bound somehow to the SPView instance whose ID was provided to the web part’s property. In reality what happens is that a new SPView gets created when the web part is added to the page – this SPView is hidden but you can still find it in the SPList.Views collection of the parent list. What’s more – this SPView is always associated with this LV web part and actually they are more or less two representations of one and the same internal entity (check this older posting of mine for more details). Another detail here is that when this hidden SPView is created its schema XML (containing the settings for the view fields, query, toolbar etc) is copied from the source SPView, whose ID you provided to the ViewGuid property of the web part. And this is why you end up with the commands toolbar in your LV web part – normally the SPList.Views collection contains the views from the standard list view pages of the SharePoint list (like the “All Items” one, etc), all of which do have a commands toolbar.
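Since the hidden SPView and the web part are two faces of the same entity, the view can also be retrieved and modified after the web part has been added, through the web part’s ViewGuid property – a short sketch (the RowLimit change is only an illustration):

```csharp
// retrieve the hidden SPView that backs an already added ListViewWebPart
SPView hiddenView = list.Views[new Guid(lvWebPart.ViewGuid)];

// illustration only: tweak the view and persist the change
hiddenView.RowLimit = 10;
hiddenView.Update();
```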

And now, let’s see what happens in the second code sample – you see that instead of providing a SPView’s ID to the web part, a full view schema XML is retrieved and then provided to the web part’s ListViewXml property – as you see in this case you don’t need to set the ViewGuid property at all. Let’s first see how the view schema XML gets retrieved – you can see the SPList.GetUncustomizedViewByBaseViewId method – this method returns a SPView object but this SPView instance is not a normal view that you can get from the SPList.Views collection. This is a sort of artificial SPView instance whose view schema XML is populated directly from the list’s “schema.xml” file. In the schema.xml of the list’s template definition you have a “Views” element which contains several “View” elements, each of which defines one of the available view schemas for that list. Each “View” element has a unique identifier – its “BaseViewID” attribute – and the integer parameter that you specify in the GetUncustomizedViewByBaseViewId method references the “BaseViewID” attribute. There is one other detail here – some of the “View” elements in the “schema.xml” file of the list definition do provision list views for the list instances created from this list definition (for example the “AllItems.aspx”), but there are also “View” elements that actually don’t provision list view pages:

<View BaseViewID="1" Type="HTML" WebPartZoneID="Main" DisplayName="$Resources:core,objectiv_schema_mwsidcamlidC24;" DefaultView="TRUE" SetupPath="pages\viewpage.aspx" ImageUrl="/_layouts/images/generic.png" Url="AllItems.aspx">

This is the “View” element with BaseViewID=1 of the schema.xml of the standard CustomList – as you see from its attributes it provisions the “AllItems.aspx” view page for the lists based on that list definition.

<View BaseViewID="0" Type="HTML">

And this is the “View” element (it has many child elements but I omitted them because the XML is huge) with BaseViewID=0. As you can see, it contains far fewer attributes and doesn’t provision any of the list’s view pages. But this doesn’t mean that the view schema XML that it defines is not used – on the contrary: for example, when you use the standard SharePoint UI to add LV web parts to a page, it is the view schema with BaseViewID=0 that gets copied to the hidden SPView of the web part. And also in the code sample above I specified the view schema with BaseViewID=0 in the GetUncustomizedViewByBaseViewId method for the same reason. If you have a closer look at the “View” elements in the schema.xml you will see the difference in their “Toolbar” element definitions and will see why the “View” with BaseViewID=0 doesn’t add a commands toolbar while the “View” elements with other BaseViewID values normally do.

[update (Aug 25, 2011): the difference between setting the LVP’s ViewGuid and ListViewXml properties is that in the first case, the whole schema of the provided SPView will be copied and used in the newly created LVP, which means that in most cases you will end up with the standard toolbar, because it is already present in most of the non-hidden views (normal view pages) of the list. In the second case, although you provide a full view schema, the SharePoint object model will take into account only its BaseViewID property and will otherwise use the original view definition with the same BaseViewID from the list’s schema.xml file. The view definition with BaseViewID=0 for most of the standard SharePoint list templates just happens to come without a toolbar, so this will help you solve the issue with the LVP’s toolbar. Apart from the toolbar, if you want to use a modified view definition when creating a new LVP – this won’t be possible at the moment of the provisioning itself (the only options you have are to choose between the existing view definitions or the already existing views, as I explained above). But once you have the LVP created, it is possible to modify the hidden SPView instance associated with the LVP (check the line of code at the bottom of the posting as to how to retrieve this hidden SPView from the web part). You can modify the hidden SPView using its properties, which expose most of the view’s schema. If you are interested in changing the whole schema, you can see this sample code, which unfortunately resorts to reflection, contrary to the posting’s title]

One other advantage of the second approach for adding a LV web part is that after you retrieve the view schema XML you can modify it before assigning it to the ListViewWebPart.ListViewXml property. For example you can change the ViewFields XML node or the Query node and even the Toolbar XML element if you want to display some custom toolbar of yours.
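To make this concrete, here is a small sketch (in Python, purely to illustrate the XML manipulation; the actual assignment to ListViewWebPart.ListViewXml happens in your server-side SharePoint code) of how the retrieved view schema string could be tweaked before it is handed to the web part. The sample schema below is a minimal stand-in for a real view schema, not what GetUncustomizedViewByBaseViewId would literally return:

```python
# Tweak a view schema XML string before assigning it to ListViewXml:
# change the Toolbar type and add an extra view field.
import xml.etree.ElementTree as ET

# minimal stand-in for a real view schema XML
view_schema = (
    '<View BaseViewID="0" Type="HTML">'
    '<Toolbar Type="Standard"/>'
    '<ViewFields><FieldRef Name="Title"/></ViewFields>'
    '<Query/>'
    '</View>'
)

root = ET.fromstring(view_schema)
# switch the commands toolbar off before handing the schema to the web part
root.find('Toolbar').set('Type', 'None')
# add an extra column to the view fields
ET.SubElement(root.find('ViewFields'), 'FieldRef', {'Name': 'Modified'})

modified_schema = ET.tostring(root, encoding='unicode')
print(modified_schema)
```

The same kind of string surgery works for the Query element – the XLV only cares that the result is a well-formed View schema.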

One last thing that I want to mention – it is about the case when you already have your LV web part added to the page and want to change its toolbar (the solution above was intended for the scenario when you add a new LV web part, which is actually the more common case). In this case I would simply delete the existing web part and add a brand new LV web part to the page. The trick here will be that instead of getting the view schema XML with the SPList.GetUncustomizedViewByBaseViewId method, I will use the schema (SPView.HtmlSchemaXml) from the existing web part (actually from its hidden SPView) before I delete it (then the view schema should be modified accordingly). Here is a small code snippet of how you can get the LV web part’s hidden SPView:

SPView hiddenView = list.Views[new Guid(lvWebPart.ViewGuid)];

XsltListViewWebPart – how to hide the ribbon


A reader of my blog asked me this in a comment to one of the previous postings and I had also wondered about it myself on several occasions. The thing is that sometimes you may not want the ribbon to appear when you select the XLV web part (clicking on the web part basically selects it) or select some of the items that the web part displays. The solution to this one turned out to be pretty simple and straight-forward – it involves creating a small custom XSLT file which does the trick with several lines of javascript (the selection of the web part and the displaying/hiding of the ribbon are all implemented with javascript in standard SharePoint 2010, so it is pretty logical to counter this also with javascript). Replacing the standard “main.xsl” in the XslLink property of the web part and setting one additional property of its hidden SPView are then the only things left that you need to do to fulfill the task (I’ll come to that in a moment).

So let me first shortly explain which are the javascript “culprits” that cause the page to display the “list” context specific ribbon when you click on the web part (not on a particular item – that’s a different case) or check the web part’s selection checkbox at the top right corner. If you closely inspect the HTML of the page containing the XLV web part you will be able to find these two HTML elements:

<td id="MSOZoneCell_WebPartWPQ2" valign="top" class="s4-wpcell" onkeyup="WpKeyUp(event)" onmouseup="WpClick(event)">

 

<input type="checkbox" id="SelectionCbxWebPartWPQ2" class="ms-WPHeaderCbxHidden" title="Select or deselect test Web Part" onblur="this.className='ms-WPHeaderCbxHidden'" onfocus="this.className='ms-WPHeaderCbxVisible'" onkeyup="WpCbxKeyHandler(event);" onmouseup="WpCbxSelect(event); return false;" onclick="TrapMenuClick(event); return false;" />

The first one is one of the top container elements of the XLV web part – have a look at its “onmouseup” attribute – it is this javascript bit that triggers the selection of the web part and the appearing of the parent list’s contextual ribbon. The second element is the selection checkbox control itself. Notice the “id” attributes of these two HTML elements – they both have one and the same suffix – “WPQ2” – this is actually something like a web part index which, in case you have more than one web part on the page, can be used to identify these two elements for a specific web part.

And let’s go directly to the custom XSLT that hides the ribbon when used in the XLV web part:

<xsl:stylesheet xmlns:x="http://www.w3.org/2001/XMLSchema" xmlns:d="http://schemas.microsoft.com/sharepoint/dsp" version="1.0" exclude-result-prefixes="xsl msxsl ddwrt" xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime" xmlns:asp="http://schemas.microsoft.com/ASPNET/20" xmlns:__designer="http://schemas.microsoft.com/WebParts/v2/DataView/designer" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:SharePoint="Microsoft.SharePoint.WebControls" xmlns:ddwrt2="urn:frontpage:internal">

  <!-- import the standard main.xsl so that we have all standard stuff -->
  <xsl:import href="/_layouts/xsl/main.xsl"/>

  <!-- this template was copied from the standard vwstyles.xsl -->
  <xsl:template match="/">
    <!-- only this javascript block was added -->
    <script>
      try
      {
        // remove the click handler of the containing element
        $get('MSOZoneCell_WebPart<xsl:value-of select="$WPQ"/>').onmouseup = function (){};
        // remove the TD element containing the wp selection checkbox in the wp's header
        $get('SelectionCbxWebPart<xsl:value-of select="$WPQ"/>').parentNode.parentNode.style.display = 'none';
      }
      catch (ex) {}
    </script>
    <xsl:choose>
      <xsl:when test="$RenderCTXOnly='True'">
        <xsl:call-template name="CTXGeneration"/>
      </xsl:when>
      <xsl:when test="($ManualRefresh = 'True')">
        <xsl:call-template name="AjaxWrapper" />
      </xsl:when>
      <xsl:otherwise>
        <xsl:apply-templates mode="RootTemplate" select="$XmlDefinition"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

</xsl:stylesheet>

As you see, it is pretty concise and doesn’t need much explanation as to what it does and how it does it – I also put several comments inside it, so that you can get a better clue of its workings. Notice that the “WPQ” index that I mentioned above is actually available in an XSL parameter (with the same name) that the XLV web part has initialized for you, so the localizing of the two HTML elements in the javascript becomes a really easy task.

This snippet should be saved as a file in the TEMPLATE\LAYOUTS\XSL folder (I saved it as “main_noribbon.xsl”). After you have the XSL file you will need to set the XLV web part to use it – you have two options here – to set the XslLink property of the web part (the value in this case should be: /_layouts/xsl/main_noribbon.xsl) – you can do that easily using the SharePoint UI. The other option is to set the XslLink property of the associated hidden SPView of the web part (the value in this case is simply the name of the file: main_noribbon.xsl) – the standard UI can’t be used in this case, so you will need some lines of code here (see below).

And … you need one more thing to have it all working. Apart from the ability to select the whole web part, the XLV allows you to select individual item rows which also results in displaying the ribbon. To override this you won’t need extra XSL because the XLV web part provides this as an option out of the box – it is about the so called “view styles” that you can use for list views and hence XLV web parts. The view style can be changed easily from the standard UI – you click the “modify view” button in the ribbon (you have already disabled the ribbon? – no worries – you can do the same from the edit tool-pane of the XLV – it is the “Edit the current view” link there). In the standard “Edit View” page that will open you can scroll down and locate the “Style” section, expand it and then in the “View Styles” list box select the topmost option – “Basic Table”. This view style is pretty much the same as the “Default” one save the ability to select the item rows.

As you saw, you can apply the changes (once you have the XSL file in place) using the standard SharePoint UI alone. And here is how you can do it with code:

private static void XLVHideRibbon(SPWeb web, XsltListViewWebPart xlv)
{
    // if the page is in a library that requires check out, the file containing
    // the XLV should be checked out before calling this method

    // get the hidden view of the XLV web part
    SPView hiddenView = web.Lists[new Guid(xlv.ListName)].Views[new Guid(xlv.ViewGuid)];
    // set its XslLink property to reference the custom "no ribbon" XSLT file
    hiddenView.XslLink = "main_noribbon.xsl";

    // web.ViewStyles[0] - Basic Table
    hiddenView.ApplyStyle(web.ViewStyles[0]);
    // update the hidden view
    hiddenView.Update();
}

Note that instead of setting the XslLink property of the XLV web part, I set the XslLink property of its hidden SPView, but this basically achieves the same effect.

XsltListViewWebPart – how to display columns based on permissions


The standard SharePoint security trimming in the XsltListViewWebPart works on the list item level – the items in the displayed SharePoint list may inherit the permissions from their parent list, but may also have different permissions, and based on these the viewing user, depending on his rights, may see different result sets. In most cases this “horizontal” security trimming will be sufficient for the end users, but there may be cases when a “vertical” type of security trimming is also requested – meaning that certain columns in the list view are displayed only to users with particular rights for the target SharePoint list.

So, I created a small “proof-of-concept” type solution that demonstrates how this can be achieved. The implementation is fairly simple and involves only changing of the XSL used by the web part and modifying its “PropertyBindings” property. The modifications of the rendering XSL are obviously the core part of the implementation but you may wonder what the purpose of using the “PropertyBindings” property of the XsltListViewWebPart is (if you are not familiar with the “PropertyBindings” property and how to use it, you can check out my extensive posting on the subject here). The idea is simple – in the “PropertyBindings” property you can keep configuration information that also becomes available in the XSL of the web part. And the configuration data that is needed in our case is the permissions’ data for the list columns that the web part displays – it is about what rights exactly the user should have for the target list, so that he can see one or another column in the list view. The question here is why put this configuration in the web part’s property bindings and not in the fields’ schema itself for example (the field schema can easily be extended with custom attributes – e.g. xmlns:PermissionMask="0x7FFFFFFFFFFFFFFF"). The main reason for this is that if you use custom attributes in the field schema XML, you cannot then use them in the rendering XSL of the XLV web part, custom attributes simply don’t appear in the view schema XML that is available through the “$XmlDefinition” XSL parameter (check this MSDN article for a sample view schema XML in the XLV’s XSL – the field schema data is in the View/ViewFields/FieldRef elements). Another point here is that even if it were possible to store the field permission data in the field’s schema, this would impact all list views that use the customized XSL (and display the particular column), and with the setting of the “PropertyBindings” property only the current XLV web part will be affected. 
It is hard to judge whether this is an advantage or a disadvantage – it probably depends on the specific case that you may have.

And let me first show you the “PropertyBinding” XML that should be added to the “PropertyBindings” property (I said “added”, because the property normally already contains the XML of other property bindings):

<ParameterBinding Name="ColumnPermissions" DefaultValue="Author|9223372036854775807|Editor|756052856929" />

The “Name” attribute specifies the name of the xsl:param element that will be initialized with the value of the property binding in the web part’s XSL. Its value is in the “DefaultValue” attribute – it is a long string containing values delimited by the ‘|’ character. At odd positions you have field names (internal names actually) and at even positions you see big numbers, which are actually the permission masks that should be applied for the list columns that precede the corresponding number. The permission mask is determined by the standard SharePoint SPBasePermissions enumeration: 9223372036854775807 (hex 7FFFFFFFFFFFFFFF) corresponds to the “Full Control” permission level and 756052856929 (hex B008431061) corresponds to the “Contribute” permission level. This means that the user will see the “Author” (“Created by”) column only if he has “Full Control” rights and the “Editor” (“Modified by”) column only if he has “Contribute” rights for the SharePoint list that is displayed. Note that all fields that are not specified in the property binding will always be visible to all users.
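To illustrate the arithmetic behind these masks – the numbers are plain SPBasePermissions bit masks, so a rights check boils down to a bitwise AND. The sketch below (in Python, just for illustration; `has_rights` is a hypothetical helper mirroring the kind of check the server-side ddwrt:IfHasRights call performs against the user’s effective permissions for the list) parses the sample configuration string from above:

```python
# SPBasePermissions masks quoted in the text, written in hex for clarity
FULL_CONTROL = 0x7FFFFFFFFFFFFFFF   # 9223372036854775807
CONTRIBUTE   = 0xB008431061         # 756052856929

# the "field|mask|field|mask" string from the ParameterBinding
binding = "Author|9223372036854775807|Editor|756052856929"
tokens = binding.split("|")
# odd positions: internal field names; even positions: permission masks
masks = {tokens[i]: int(tokens[i + 1]) for i in range(0, len(tokens), 2)}

def has_rights(user_mask, required_mask):
    # hypothetical stand-in for ddwrt:IfHasRights - the column is shown
    # only if all required permission bits are present in the user's mask
    return (user_mask & required_mask) == required_mask

# a user with Contribute rights sees "Editor" but not "Author"
print(has_rights(CONTRIBUTE, masks["Editor"]))   # True
print(has_rights(CONTRIBUTE, masks["Author"]))   # False
```

The XSL version of this parsing is exactly what the Tokenize and GetValueFromKey templates further down do with XSLT 1.0 means.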

Let’s now move to the custom XSL that should handle the rendering and display only the columns that the current user has rights to see. Before I show you the complete source of the custom rendering XSL, I want to draw your attention to one important thing – the trick that is used in the XSL to check whether the permission masks for the fields have a match with the effective permissions of the current user for the source SharePoint list. It is very simple actually and uses … a standard SharePoint “ddwrt” XSLT extension method:

<xsl:if test="ddwrt:IfHasRights($checkResult)">

The “IfHasRights” extension method receives an integer parameter for the permission mask and returns true or false depending on whether the current user has those rights for the SharePoint list of the web part. Note that the check is made for the SharePoint list, not the items of the list and not for its parent SharePoint site.

And here is the complete source of the custom XSL (check the extensive comments inside it for more details):

<xsl:stylesheet xmlns:x="http://www.w3.org/2001/XMLSchema" xmlns:d="http://schemas.microsoft.com/sharepoint/dsp" version="1.0" exclude-result-prefixes="xsl msxsl ddwrt" xmlns:ddwrt="http://schemas.microsoft.com/WebParts/v2/DataView/runtime" xmlns:asp="http://schemas.microsoft.com/ASPNET/20" xmlns:__designer="http://schemas.microsoft.com/WebParts/v2/DataView/designer" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" xmlns:SharePoint="Microsoft.SharePoint.WebControls" xmlns:ddwrt2="urn:frontpage:internal">

  <!-- import the standard main.xsl, so we have all standard stuff -->
  <xsl:import href="/_layouts/xsl/main.xsl"/>
  <xsl:output method="html" indent="no"/>

  <!-- we get here the field permissions configuration from the PropertyBinding with the same name -->
  <xsl:param name="ColumnPermissions" />
  <!-- this is the standard XmlDefinition parameter - the XLV initializes this one with the view schema -->
  <xsl:param name="XmlDefinition" />

  <!-- this variable contains the parsed configuration data like <token>Author</token><token>9223372036854775807</token> etc -->
  <xsl:variable name="tokens">
    <xsl:call-template name="Tokenize">
      <xsl:with-param name="string" select="$ColumnPermissions" />
      <xsl:with-param name="delimiter" select="'|'" />
    </xsl:call-template>
  </xsl:variable>

  <!-- here we create a copy of the original XmlDefinition removing all View/ViewFields/FieldRef elements for which the user doesn't have rights -->
  <xsl:variable name="XmlDefinition2Raw">
    <xsl:apply-templates mode="transform-schema" select="$XmlDefinition">
      <xsl:with-param name="tokenSet" select="msxsl:node-set($tokens)" />
    </xsl:apply-templates>
  </xsl:variable>

  <!-- the one above is a sequence of tags; in order that it can be used exactly like the standard $XmlDefinition it should be converted to a node set -->
  <xsl:variable name="XmlDefinition2" select="msxsl:node-set($XmlDefinition2Raw)" />

  <!-- this one is simply a copy of the template with the same match from the standard vwstyles.xsl (thus we override it); the only difference is that it uses our trimmed $XmlDefinition2 instead of the standard $XmlDefinition -->
  <xsl:template match="/">
    <xsl:choose>
      <xsl:when test="$RenderCTXOnly='True'">
        <xsl:call-template name="CTXGeneration"/>
      </xsl:when>
      <xsl:when test="($ManualRefresh = 'True')">
        <xsl:call-template name="AjaxWrapper" />
      </xsl:when>
      <xsl:otherwise>
        <xsl:apply-templates mode="RootTemplate" select="$XmlDefinition2"/>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

  <!-- the same as the template above -->
  <xsl:template name="AjaxWrapper" ddwrt:ghost="always">
    <table width="100%" border="0" cellpadding="0" cellspacing="0">
      <tr>
        <td valign="top">
          <xsl:apply-templates mode="RootTemplate" select="$XmlDefinition2"/>
        </td>
        <td width="1%" class="ms-vb" valign="top">
          <xsl:variable name="onclick">
            javascript: <xsl:call-template name="GenFireServerEvent">
              <xsl:with-param name="param" select="'cancel'"/>
            </xsl:call-template>
          </xsl:variable>
          <xsl:variable name="alt">
            <xsl:value-of select="$Rows/@resource.wss.ManualRefreshText"/>
          </xsl:variable>
          <a href="javascript:" onclick="{$onclick};return false;">
            <img src="/_layouts/images/staticrefresh.gif" id="ManualRefresh" border="0" alt="{$alt}"/>
          </a>
        </td>
      </tr>
    </table>
  </xsl:template>

  <!-- this template creates the copy of the standard $XmlDefinition trimming the View/ViewFields/FieldRef elements for which the user doesn't have rights -->
  <xsl:template mode="transform-schema" match="View">
    <xsl:param name="tokenSet" />
    <!-- copy the root View element -->
    <xsl:copy>
      <!-- copy the root View element's attributes -->
      <xsl:copy-of select="@*"/>
      <!-- copy the child elements of the root View element -->
      <xsl:for-each select="child::*">
        <xsl:choose>
          <xsl:when test="name() = 'ViewFields'">
            <!-- special handling of the ViewFields element -->
            <ViewFields>
              <!-- iterate the ViewFields/FieldRef elements here -->
              <xsl:for-each select="child::*">

                <!-- get the permission mask for the FieldRef element, by the Name attribute -->
                <xsl:variable name="checkResult">
                  <xsl:call-template name="GetValueFromKey">
                    <xsl:with-param name="tokenSet" select="$tokenSet" />
                    <xsl:with-param name="key" select="./@Name" />
                  </xsl:call-template>
                </xsl:variable>

                <xsl:choose>
                  <!-- if the permission mask is not empty and ddwrt:IfHasRights returns true, copy the field -->
                  <xsl:when test="$checkResult != ''">
                    <!-- this is how we check whether the user has sufficient rights for the field (checking the permission mask of the field against the user's permissions for the source list) -->
                    <xsl:if test="ddwrt:IfHasRights($checkResult)">
                      <xsl:copy-of select="."/>
                    </xsl:if>
                  </xsl:when>
                  <xsl:otherwise>
                    <!-- if we don't have the field in the configuration simply copy the FieldRef element -->
                    <xsl:copy-of select="."/>
                  </xsl:otherwise>
                </xsl:choose>

              </xsl:for-each>
            </ViewFields>
          </xsl:when>
          <xsl:otherwise>
            <xsl:copy-of select="."/>
          </xsl:otherwise>
        </xsl:choose>
      </xsl:for-each>
    </xsl:copy>
  </xsl:template>

  <!-- several helper templates that parse the configuration string and return the permission mask for a field, given the field's internal name -->
  <xsl:template name="GetValueFromKey">
    <xsl:param name="tokenSet" />
    <xsl:param name="key" />
    <xsl:apply-templates select="$tokenSet/token[text() = $key]" />
  </xsl:template>

  <xsl:template name="NextNode" match="token">
    <xsl:value-of select="following-sibling::*"/>
  </xsl:template>

  <xsl:template name="Tokenize">
    <xsl:param name="string" />
    <xsl:param name="delimiter" select="' '" />
    <xsl:choose>
      <xsl:when test="$delimiter and contains($string, $delimiter)">
        <token>
          <xsl:value-of select="substring-before($string, $delimiter)" />
        </token>
        <xsl:call-template name="Tokenize">
          <xsl:with-param name="string" select="substring-after($string, $delimiter)" />
          <xsl:with-param name="delimiter" select="$delimiter" />
        </xsl:call-template>
      </xsl:when>
      <xsl:otherwise>
        <token>
          <xsl:value-of select="$string" />
        </token>
        <xsl:text> </xsl:text>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>

</xsl:stylesheet>

One note about the XSL code – if you have a look at it, you will notice that it replaces (overrides) two of the core XSL templates used in the standard vwstyles.xsl file. It then provides a modified copy of the standard view schema XSL parameter ($XmlDefinition) in these templates. On the other hand, if you check the vwstyles.xsl file, you will notice that there are still other XSL templates in it that also use the standard $XmlDefinition parameter (and thus not the modified copy) – these are the templates that handle aggregations and groupings, which means that the customized XSL above won’t be able to handle these cases properly.

And finally a few words on how to use this sample: the first thing is to save the XSL to an XSL file in the TEMPLATE\LAYOUTS or TEMPLATE\LAYOUTS\XSL folder (a subfolder of these two is also possible). Then you need to select your XLV web part (it may be an XLV from a standard list view page or an XLV that you placed on a content page) and change its PropertyBindings and XslLink properties. You can do that with code or using my web part manager utility which provides an easy to use UI for that (you can download it from here). For the “PropertyBindings” property, you should append the XML of the field permissions configuration, which should look like the sample above. In the “XslLink” property you should specify the link to the XSL file in which you have saved the custom XSLT above – provided you’ve saved the XSL file as TEMPLATE\LAYOUTS\columns_main.xsl, the value that you should put in the “XslLink” property should be: /_layouts/columns_main.xsl.

SharePoint 2007 – associated lookup columns


No, this is not a mistake – we can also have associated lookup columns in SharePoint 2007 [update (Dec 13, 2010): check for two drawbacks in SharePoint 2007 below], though not with the extended UI creation capabilities available in SharePoint 2010. Let’s first have a look at this new functionality in SharePoint 2010 (and define more accurately the terminology – I don’t think that there is a specific term for this lookup field extension, but in this article I will call it as I did in the title of the posting):

[screenshot: the SharePoint 2010 lookup column creation UI with the additional show columns check-box list]

So, as you can see from the screenshot (and probably tried that many times already yourself), when you create a new lookup column in a list, you have the option to select more than one column from the target lookup list (in SharePoint 2007 we had the option to select just one “show column”). Basically what SharePoint does when you create a lookup field from the UI is to create the “normal” lookup column as before and then create another read-only “associated” lookup column for every additional show field from the check-box list that you have selected. And this functions as follows – when you create a new item, or edit an existing one in the edit form you see only the “normal” lookup column there, but when you change it to point to another item in the lookup list, all “associated” lookup columns change their values so that they correspond to the related columns in the newly selected lookup item. The associated lookup columns are read-only but you can add them to the views of the list, so that you display all additional columns that you need from the lookup item. And since the main lookup and the auxiliary lookup fields are automatically synchronized (SharePoint does that for us), you don’t need any additional code in item event receivers for instance.

Let’s now have a look at the field schema of the “normal” and “associated” lookup columns:

<Field Type="Lookup" DisplayName="Department" Required="FALSE" EnforceUniqueValues="FALSE" List="{ceaf935e-b9c6-48a0-8c23-bcec58a24c91}" ShowField="Title" UnlimitedLengthInDocumentLibrary="FALSE" RelationshipDeleteBehavior="None" ID="{98948dfd-cea5-4d6c-ac47-25bafa5218de}" SourceID="{bca1cd44-3822-49b6-b68c-2ff28ced1726}" StaticName="Department" Name="Department" ColName="int1" RowOrdinal="0" Group="" />

This is the schema of the “normal” or main lookup column – as you see, there is nothing specific in its schema – it is just a regular lookup column. And this is the schema of one of the associated lookup columns that I created for the lookup column above:

<Field Type="Lookup" DisplayName="Department:Code" List="{ceaf935e-b9c6-48a0-8c23-bcec58a24c91}" WebId="4ee36ddf-5b1b-470b-9f9a-fbd970edf5aa" ShowField="Code" FieldRef="98948dfd-cea5-4d6c-ac47-25bafa5218de" ReadOnly="TRUE" UnlimitedLengthInDocumentLibrary="FALSE" ID="{cd2b77b9-6238-4e2e-99ee-826826dc09f2}" SourceID="{bca1cd44-3822-49b6-b68c-2ff28ced1726}" StaticName="Department_x003a_Code" Name="Department_x003a_Code" Version="1" />

So, as you see we have all the attributes of a normal lookup column, but one extra attribute as well – “FieldRef”. The value of this attribute is a Guid, and it is exactly the ID attribute of the main lookup column. So, obviously the “FieldRef” attribute is the one that defines the association between the main and the associated lookup columns. Notice also the “ReadOnly” attribute in the associated lookup field’s schema which is set to TRUE – this guarantees that the associated lookup column doesn’t appear in the new and edit forms of the SharePoint list (you don’t need it there anyway, since it is synchronized automatically).

This is indeed a very useful new functionality in SharePoint 2010, but you may already ask yourself what SharePoint 2007 has to do in the whole matter – we don’t have this functionality there. Well, it is true that SharePoint 2007 lacks the user interface for creating “associated” lookup columns, but this doesn’t mean that we cannot create columns with code, specifying the values for all attributes in their schema as we decide. And it turns out that when we create a lookup column in a list and set its “FieldRef” attribute to “point” to another lookup column in the same list (both columns should target one and the same lookup list) – the “associated” lookup functionality actually works – just like in SharePoint 2010 (note here – I tested this on SharePoint 2007 service pack 2). Another detail here is that you shouldn’t forget to set the “ReadOnly” attribute of the field to TRUE – otherwise the “associated” lookup column will appear in the edit form of the list and when you try to save the list item, the page will crash with a nasty error.

As to the question of how you can create a lookup column (an associated lookup column) with code (and set its FieldRef attribute) – probably the easiest way is to use the SPFieldCollection.AddFieldAsXml method, which accepts a string parameter containing the schema XML of the new field. Note that you will have to properly format the schema XML of the associated lookup field providing the correct values to its attributes (the final XML schema should look similar to the one above):

  • in the ID attribute – a unique Guid value
  • in the Name and StaticName attributes – unique (within the SharePoint list) internal name of the field
  • in the List attribute – the ID of the lookup list
  • in the WebId attribute – the ID of the web containing the lookup list (in most cases this is the web of the current list)
  • in the FieldRef attribute – the ID of the main lookup field to which we want to associate the new lookup column
  • in the ShowField attribute – the internal name of the column in the lookup list whose value should be displayed in the new lookup column
  • in the ReadOnly attribute – this should be set to TRUE
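As an illustration of the attribute list above, here is a sketch (in Python, simply to assemble and sanity-check the schema string; in your SharePoint code you would pass the resulting string to SPFieldCollection.AddFieldAsXml) that builds the schema XML for the associated lookup column. The List, WebId and FieldRef values below reuse the sample values from the schema shown earlier and are placeholders for your own lists and fields:

```python
# Assemble the associated lookup field schema XML for AddFieldAsXml.
import uuid
import xml.etree.ElementTree as ET

field_id = uuid.uuid4()                                      # unique ID for the new field
lookup_list_id = "{ceaf935e-b9c6-48a0-8c23-bcec58a24c91}"    # ID of the lookup list (placeholder)
web_id = "4ee36ddf-5b1b-470b-9f9a-fbd970edf5aa"              # ID of the web with the lookup list (placeholder)
main_lookup_id = "98948dfd-cea5-4d6c-ac47-25bafa5218de"      # ID of the main lookup field (placeholder)

schema = (
    f'<Field Type="Lookup" DisplayName="Department:Code" List="{lookup_list_id}" '
    f'WebId="{web_id}" ShowField="Code" FieldRef="{main_lookup_id}" ReadOnly="TRUE" '
    f'ID="{{{field_id}}}" Name="Department_x003a_Code" StaticName="Department_x003a_Code" />'
)

# quick sanity check that the schema is well-formed XML with the key attributes set
field = ET.fromstring(schema)
print(field.get("FieldRef"), field.get("ReadOnly"))
```

The same string could of course be built directly in C# with string.Format before the AddFieldAsXml call.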

And the net result will look something like this:

[screenshot: a list view with the “Department” main lookup column and the associated “Department country” column]

In the sample screenshot, the “Department” column is the main lookup column (displaying the “Title” column from the lookup list) and the “Department country” column is the associated lookup column (displaying the custom “Country” text column from the lookup list). Note also that the value displayed in the list view in the column of the associated lookup field is not rendered as a link, unlike the value in the column of the main lookup – actually this is exactly the same behavior as we have in SharePoint 2010. And most importantly again the same as in SharePoint 2010, the values of the associated lookup column or columns get automatically synchronized with the changes of the selected lookup item in the main lookup field.

[update Dec 13, 2010]

Shortly after the publishing of this article there was a comment (by Szymon) that once you create an associated lookup field you cannot delete it – something that I had overlooked. The error that you receive is “One or more field types are not installed properly. Go to the list settings page to delete these fields.” The reason for that is some kind of restriction in SharePoint 2007 (in SharePoint 2010 it works perfectly well) and I have seen the same error when I tried to delete “computed” (not to be confused with “calculated”) fields. After some research, the one common thing between the associated lookup and the computed fields is that they both lack the “ColName” attribute, which means that there is no associated database column in the SharePoint “all user data” SQL table. This also means that these types of columns don’t take up space in the content database as one might expect. The bottom line is that you have a field that doesn’t take up space in the SharePoint content database, but you still can’t get rid of its schema. As a workaround you can also set its “Hidden” attribute so that the field doesn’t appear in the available view fields of the list in the standard SharePoint UI. Still, it is important that you know that once created, you cannot delete the associated lookup fields (at least not using the standard SharePoint object model).

The other drawback that you have with associated lookup columns in SharePoint 2007 is that if you use them in list views or as view fields in SPQuery objects you need to always specify the main lookup column as well. If you miss it, you will receive the ugly SharePoint error “Cannot complete this action”.

Feature stapling in sandbox solutions


In a recent posting of mine about the WebProvisioned event receiver in SharePoint 2010, I made a comparison between the latter and the feature stapling functionality which has been available since SharePoint 2007. In that posting I had the wrong assumption that feature stapling only works at farm level, but after a prompting comment I had to correct this. After another check of the SharePoint SDK, I saw that feature stapling can actually be scoped at three levels: farm, web application and site collection – which provides not only extra flexibility but also better isolation when necessary. When I understood that one can create a site collection scoped “feature stapling” feature, I asked myself whether it’s possible to provision such a feature in a sandbox solution (that’s the highest possible feature scope for a sandbox solution). I created a small POC project in a couple of minutes, deployed it and it turned out that feature stapling does work in sandbox solutions too.

The setup in my POC project was very simple – it contained two features – one site collection scoped (Scope="Site") and one site scoped (Scope="Web"). The latter was a dummy feature, containing no feature elements – its purpose was to be “stapled” by the stapling feature. The stapling feature was the site collection scoped one (naturally) – it contained one feature element file with two “FeatureSiteTemplateAssociation” elements – both were targeting the standard “blank site” site definition (STS#1), the first one specifying the dummy site scoped feature in the same sandbox solution and the second one – the standard “WikiPageHomePage” feature (this is the standard feature that creates the wiki page library in the standard “team site” and sets its default page to be the “home” page in the wiki page library). I used two feature template associations, because I wanted to make sure that both sandbox and farm solution features get stapled by a “feature stapling” feature in a sandbox solution – which turned out to be exactly so.
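The feature element file of the stapling feature in my POC looked similar to this sketch (the first association’s GUID is a placeholder for the dummy Web scoped feature in the sandbox solution; the second GUID is, to the best of my knowledge, the ID of the standard “WikiPageHomePage” feature – verify it against your own farm):

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- staple the dummy Web scoped feature from the same sandbox solution (placeholder GUID) -->
  <FeatureSiteTemplateAssociation Id="00000000-0000-0000-0000-000000000001" TemplateName="STS#1" />
  <!-- staple the standard farm "WikiPageHomePage" feature -->
  <FeatureSiteTemplateAssociation Id="00BFEA71-D8FE-4FEC-8DAD-01C19A6E4053" TemplateName="STS#1" />
</Elements>
```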

And let’s see what the possible benefits from the feature stapling (site collection scoped) in a sandbox solution may be compared to the “normal” feature stapling in farm solutions – I can see these two positive/advantageous points:

  • the first one stems from a general feature of the sandbox solutions – sandbox solutions can be installed and activated by site collection administrators (no need for farm administrator’s rights) and you don’t “pollute” the 14 hive with extra features.
  • the second one is better isolation and manageability – if you have the feature stapling features in a farm solution with the “Hidden” attribute set to FALSE, so that site collection administrators can see and manage them (activate/deactivate) in the standard site settings pages, the stapling features will appear in the settings pages of all site collections in the SharePoint farm. If, on the other hand, you decide to have the stapling features with the “Hidden” attribute set to TRUE, then the site collection administrators won’t be able to manage them using the standard SharePoint UI. With a stapling feature in a sandbox solution, by contrast, you can safely leave the feature visible, since this visibility will be limited to the site collection in which you have the sandbox solution activated. And even if the stapling feature is hidden, the site collection administrator will still be able to deactivate the containing sandbox solution and thus deactivate the stapling feature itself.

And now let me make another comparison of the feature stapling (site collection scoped), this time with the WebProvisioned event receiver (which was introduced in SharePoint 2010 – check this posting of mine for more details about the WebProvisioned event receiver) – here are some of the differences between the two:

  • With feature stapling you basically activate extra features to sites based on standard and custom site definitions and you configure the whole thing with XML in a feature element file, while with the WebProvisioned event receiver you write custom code which gets executed after the new site is created. In the WebProvisioned event receiver you can also activate features but you have to do that with code using the SharePoint object model.
  • When you use feature stapling you can use feature properties for the features that you staple. If you want to activate features in the WebProvisioned event receiver with code, you cannot provide feature properties – there is no public method in the SharePoint object model that allows that.
  • The lowest possible scope for feature stapling is the site collection level – meaning that all sub-sites in the site collection with the targeted site definitions will be affected. For the WebProvisioned event receiver you have the site collection scope but also the site scope. With the latter you target only the immediate children of the site to which you attach the WebProvisioned event receiver (this site may be the root site of the site collection but may also be a sub-site).
  • The time of the activation of the stapled features and the time of the execution of the code of the WebProvisioned event receiver in the timeline of the site provisioning differs and this may seriously impact your solution depending on the specifics of your case. Here is the order of activation of the standard elements in the ONET.XML of a site definition/web template:

    1. site collection scoped features from the onet.xml
    2. site collection scoped stapled features
    3. site scoped features from the onet.xml
    4. site scoped stapled features
    5. “List” elements from the onet.xml
    6. “Module” elements from the onet.xml

    In the case of site collection scoped “feature stapling” features we can obviously staple only site scoped (Scope="Web") features, but as you see, the site scoped stapled features are activated before the “List” and “Module” elements, which means that when your stapled feature gets activated your site is not yet fully provisioned and you may miss some of the SharePoint lists and some of the files/pages in the site, depending on the site definition that is used. On the other hand, the custom code of the WebProvisioned event receiver is executed after the target site is fully provisioned, so all artifacts in the site will already be created and you can further modify/adjust them with the code of the receiver.
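To illustrate the comparison above, here is a minimal sketch (the feature GUID is a placeholder) of activating a feature with code in a WebProvisioned event receiver – note that none of the public SPFeatureCollection.Add overloads accepts feature properties, which is exactly the limitation mentioned above:

```csharp
public class ProvisioningWebEventReceiver : SPWebEventReceiver
{
    public override void WebProvisioned(SPWebEventProperties properties)
    {
        // at this point the new site is fully provisioned,
        // including its "List" and "Module" elements
        SPWeb web = properties.Web;

        // activate a feature with code (placeholder GUID);
        // there is no Add overload that accepts feature properties
        web.Features.Add(new Guid("00000000-0000-0000-0000-000000000001"));
    }
}
```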

Automatically publishing files provisioned with Sandboxed Solutions


A couple of days ago I saw this posting on Waldek Mastykarz’s blog (a great blog by the way) and now you are reading my posting with the very same name, which also treats the very same issue but provides a different solution to it. So, what exactly is the issue with file provisioning in sandbox solutions? In short – the problem occurs when you use “Module” elements in a feature to provision files to SharePoint document libraries. And not to just any document library, but to a library which is configured with one of the following:

  1. Check out for files is required
  2. Content approval is enabled
  3. Minor versions are enabled

The files will be of course provisioned to the target library but they will be either checked out and/or in a draft or pending state depending on the enabled configurations and the actual combination of the three that is enabled on the library. And this will mean that the files will be present but they won’t be accessible for the regular users of the site. This behavior is obviously intended and by design – I suppose that the main reason for it is the ability for the site and site collection administrators to review and approve or reject the content provisioned by the sandbox solution.

Having said that, I should mention that I had known about this issue for quite some time myself and had even tackled one isolated case of it in a previous posting of mine – it is about the provisioning of publishing pages with sandbox solutions. The problem with publishing pages was even more serious – not only do the pages end up in a draft state and checked out, but the web parts defined in the “Module” manifest file never get provisioned (check the posting itself for more details). The solution for publishing pages was pretty neat and simple (it doesn’t need a feature receiver and code with it) but unfortunately it is applicable only to publishing pages.

As for the solution itself – it is known that the SPFile.CheckIn, SPFile.Publish and SPFile.Approve methods are available in sandbox solutions too, so with a feature receiver and several lines of code the solution would be pretty trivial. But obviously in this case the best solution would be a reusable feature receiver that you write just once and then can use everywhere without changing or adjusting its code. The universal feature receiver will save you a lot of time and effort, since the usage of “Module” features is pretty frequent in SharePoint. The main technical challenge for this universal feature receiver is that its code should be “aware” of which files exactly its feature provisions to the target library (libraries). This type of “awareness” is provided OOB by the built-in SPFeatureDefinition.GetElementDefinitions method, but unfortunately this method is not available in the sandbox subset of the object model. So we need a different solution and I will start with briefly explaining the solution from Waldek’s blog – it is pretty simple and requires very little additional effort – you just add an extra “Property” element to every “File” element in your “Module” manifest file like this:

<File Path="Images\someimage.png" Url="someimage.png" Type="GhostableInLibrary">

  <Property Name="FeatureId" Value="$SharePoint.Feature.Id$" Type="string"/>

</File>

The idea is as follows – since you won’t have a column with the name “FeatureId” in your target library, the value of this property element will be saved in the property bag of the underlying SPFile instance of the provisioned file. And the value of the property element is none other than the Guid of the parent feature (note the smart usage of the Visual Studio token syntax). So basically, by adding this extra “Property” element to all “File” elements in the manifest file you literally “mark” all files that your feature provisions. The logic of the feature receiver will then be to iterate over all files in the target library and, after finding all marked files, to check them in, approve or publish them as appropriate. Additionally, you will have to somehow provide the name/names of the target library/libraries to the feature receiver, which is possible by using feature properties in the feature.xml file of your feature.

And now to the reasons why I started this article and thought of a different solution – the solution above is basically pretty neat and straight-forward and involves very little extra implementation effort, but what I didn’t like about it was the iteration over all files in the target library (or libraries), which may happen to contain thousands of files (which in fact is very improbable, but this thought always gives me the creeps when I think of some aspects of SharePoint performance). Of course, there are several ways to optimize the iteration – for instance, to iterate the SPList.Items collection instead of the SPFolder/SPFile hierarchy (the former is generally faster) and use the contents of the internal system MetaInfo list column, which contains the serialized property bag of the underlying SPFile instance (unfortunately you cannot use CAML filtering on this column). Further, a very simple but effective optimization would be to use CAML filtering on the “Created”, “_Level” and “_ModerationStatus” fields so that you fetch only items created recently (say, with a margin of 5 or 10 minutes in the past; note that even if the file already exists in the library, when the feature provisions it again its “Created” date also gets updated) that are either checked out or in a draft or pending state.
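The CAML optimization just mentioned could be sketched roughly like this (“targetLibrary” is a hypothetical SPList reference; the “_Level” integer values correspond to SPFileLevel.Checkout and SPFileLevel.Draft):

```csharp
// Rough sketch of the CAML optimization; "targetLibrary" is a hypothetical SPList reference
SPQuery query = new SPQuery();
query.Query = string.Format(
    "<Where><And>" +
    "<Geq><FieldRef Name='Created'/><Value Type='DateTime' IncludeTimeValue='TRUE'>{0}</Value></Geq>" +
    "<Or>" +
    "<Eq><FieldRef Name='_Level'/><Value Type='Integer'>255</Value></Eq>" +  // SPFileLevel.Checkout
    "<Eq><FieldRef Name='_Level'/><Value Type='Integer'>2</Value></Eq>" +    // SPFileLevel.Draft
    "</Or></And></Where>",
    // a margin of 10 minutes in the past
    SPUtility.CreateISO8601DateTimeFromSystemDateTime(DateTime.UtcNow.AddMinutes(-10)));
SPListItemCollection recentItems = targetLibrary.GetItems(query);
```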

So far so good, but I still wanted to find a solution as close as possible to the “self awareness” approach of the SPFeatureDefinition.GetElementDefinitions method, so that the feature receiver knows exactly which files its feature provisions without having to traverse the target library or libraries and search for them. And what occurred to me was that if we need the contents of the manifest file in the feature receiver, why not simply provision it to the target site (it is available in the feature definition as an element file, but there is nothing wrong with also referencing it in a “File” element in the manifest file itself), then read its contents from the feature receiver, and finally, once the files are published, have the feature receiver safely delete it. Let me show you a sample “Module” manifest file so that you can get a better idea of the trick that I just explained:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">

  <Module Name="TestModule" Url="Style Library">

    <File Path="TestModule\Sample.txt" Url="Sample.txt" Type="GhostableInLibrary" />

    <File Path="TestModule\Sample2.txt" Url="Sample2.txt" Type="GhostableInLibrary" />

    <File Path="TestModule\Sample3.txt" Url="test/Sample3.txt" />

  </Module>

  <Module Name="TestModule2">

    <File Path="TestModule\Elements.xml" Url="Elements_$SharePoint.Feature.Id$.xml" Type="Ghostable" />

  </Module>

</Elements>

You can see that the manifest file (whose name is the commonplace “elements.xml”) contains two “Module” elements – the first one provisions several files to the standard “Style Library” library (a pretty recurring task in SharePoint development) and check carefully the second one – this is the only extra bit in the manifest file that you need (the other bit is the reusable feature receiver) – this is a “Module” element that provisions the manifest file itself. Note two things – the manifest file gets provisioned to the root folder of the target site – we don’t want to put the file in a document library because this way it will be directly visible to the site users. And secondly – check the URL of the file – it contains the Guid of the feature and the feature receiver will use exactly this so that it can locate the file when it gets to be executed. The pattern of the URL should be like this: [any number of characters that are unique among the “Module” manifest files in the parent feature]_$SharePoint.Feature.Id$.xml (the part from the underscore character on should be always the same) – basically we want to get a unique target name for every “Module” manifest file in this feature. And here is the code of the feature receiver itself:

using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;
using System.Xml.Linq;
using Microsoft.SharePoint;

public class TestFeatureEventReceiver : SPFeatureReceiver
{
    // The SharePoint elements file namespace
    private static readonly XNamespace WS = "http://schemas.microsoft.com/sharepoint/";

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // make it work for both 'Site' and 'Web' scoped features
        SPWeb web = properties.Feature.Parent as SPWeb;
        if (web == null && properties.Feature.Parent is SPSite) web = ((SPSite)properties.Feature.Parent).RootWeb;
        if (web != null) CheckinFiles(web, properties.Feature.DefinitionId);
    }

    private void CheckinFiles(SPWeb web, Guid featureID)
    {
        // create a regular expression pattern for the manifest files (note the escaped dot)
        string pattern = string.Format(@"^.+_{0}\.xml$", featureID);
        Regex fileNameRE = new Regex(pattern, RegexOptions.Compiled | RegexOptions.IgnoreCase);

        // get the manifest files from the root folder of the site
        SPFile[] manifestFiles = web.RootFolder.Files.Cast<SPFile>().Where(f => fileNameRE.IsMatch(f.Name)).ToArray();
        try
        {
            // iterate the manifest files
            foreach (SPFile manifestFile in manifestFiles)
            {
                // load the contents of the manifest file in an XDocument
                MemoryStream mStream = new MemoryStream(manifestFile.OpenBinary());
                StreamReader reader = new StreamReader(mStream, true);
                XDocument manifestDoc = XDocument.Load(reader, LoadOptions.None);

                // iterate over the 'Module' and 'File' elements in the XDocument, concatenating
                // their Url attributes so that we get the site relative file Url-s
                string[] fileUrls = manifestDoc.Root.Elements(WS + "Module")
                    .SelectMany(me => me.Elements(WS + "File"),
                        (me, fe) => string.Join("/",
                            new XAttribute[] { me.Attribute("Url"), fe.Attribute("Url") }
                                .Select(attr => attr != null ? attr.Value : null)
                                .Where(val => !string.IsNullOrEmpty(val))
                                .ToArray()))
                    .ToArray();

                // iterate the file url-s
                foreach (string fileUrl in fileUrls)
                {
                    // get the file
                    SPFile file = web.GetFile(fileUrl);
                    // depending on the settings of the parent document library we may need
                    // to check in and/or publish or approve the file
                    if (file.Level == SPFileLevel.Checkout) file.CheckIn("", SPCheckinType.MajorCheckIn);
                    if (file.Level == SPFileLevel.Draft)
                    {
                        if (file.DocumentLibrary.EnableModeration) file.Approve("");
                        else file.Publish("");
                    }
                }
            }
        }
        finally
        {
            // finally delete the manifest files from the site root folder
            foreach (SPFile manifestFile in manifestFiles) manifestFile.Delete();
        }
    }
}

As you can see, the receiver’s code is quite small and straightforward (check also the comments inside the code). In short, what it does is as follows: it first iterates the files in the site root folder, matching their names against the above-mentioned name pattern; then it loads the manifest files one by one into an XDocument object and extracts the URL-s of the files in every manifest file. After that, the provisioned files are checked in, published or approved if necessary (depending on their state and the settings of the containing document library). The final step is to delete the manifest file or files in the site root folder, since they are no longer needed.
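For completeness, the receiver also has to be wired to the feature in its feature.xml – a sketch of that wiring may look like this (the feature GUID, assembly name and public key token are placeholders for your own sandbox solution assembly):

```xml
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="00000000-0000-0000-0000-000000000001"
         Title="Module files provisioning"
         Scope="Web"
         ReceiverAssembly="MySandboxSolution, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000"
         ReceiverClass="MySandboxSolution.TestFeatureEventReceiver">
  <ElementManifests>
    <ElementManifest Location="TestModule\Elements.xml" />
  </ElementManifests>
</Feature>
```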

New version of the Site Template Configurator utility


This posting is a brief announcement of the new and enhanced version of my Site Template Configurator utility, which I first introduced several months ago in another posting of mine. You can download the source code of the utility from here.

[screenshot: the Site Template Configurator utility]

Here is a short list of the changes and enhancements in this new version:

  • The tool is now capable of generating “wsp” packages and also has commands for “wsp” installing and uninstalling, both in two flavors – local (for a single machine farm) and farm (launching timer jobs on all machines in the farm). The “wsp” generation and handling functionality neatly packs the custom site templates that you create and makes them easy to deploy to other SharePoint farms (see the notes about the “wsp” generation below).

[screenshot: the new “wsp” package commands]

  • The “Generate artifacts…” and the “wsp” export and install/uninstall commands are now available in the context menu of the “project/root” node of the left hand pane hierarchy. The “project” menu in the program’s toolbar now contains only the file handling commands for the “template project” file.
  • The “project/root” node now contains an extra property – “SiteTemplatePrefix”. This allows you to specify a common prefix for the newly generated site templates’ folders in the “TEMPLATE/SiteTemplates” system folder. If you don’t specify one, the tool will use long and ugly Guid-containing names as in the previous version.
  • The source code comes with two “sln” files – one for Visual Studio 2008 and one for Visual Studio 2010. The tool also compiles for both SharePoint 2007 and SharePoint 2010.
  • The tool now displays a “close warning” prompt if you have modified your “template project” and want to exit without saving the changes.

And several handy notes about the tool:

  • The tool uses the “makecab” Windows utility to create “wsp” packages, so you need to have it installed on the machine where you will use the utility. Alternatively, on machines with SharePoint 2010, the tool may be compiled to use the SharePoint 2010 built-in CAB file support (the functionality is not publicly accessible, but the tool’s code uses a couple of reflection tricks to make use of it). In order for the tool to be compiled this way, you need to specify the “SHAREPOINT2010” compilation symbol in the project settings for the projects in the Visual Studio solution.
  • Specify appropriate values in the “SiteTemplatePrefix” and “StartTemplateID” properties of the “project/root” node. The latter specifies the starting ID value of the “TEMPLATE” elements in the generated WEBTEMP*.xml file – make sure, that you don’t have several projects that use one and the same starting or overlapping ID values (these should be unique in the SharePoint farm). Use the “Generate artifacts …” command of the “project/root” node and then carefully inspect the configuration files (and the subfolders that they are placed in), that the tool generates – make sure that when these get copied to the TEMPLATE/SiteTemplates and TEMPLATE/1033 system folders, they won’t overwrite existing files there.
  • Check the “IncludeLists” and “IncludeModules” properties of the “template” nodes (the nodes with blue icons). These have the value “false” by default, which means that the “List” and “Module” elements from the ONET.XML of the base template won’t get executed for your custom template. If you don’t have the “IncludeModules” property enabled, and you don’t have a custom feature that provisions another default/welcome page for your site template, the sites based on this custom template will end up with no default/welcome page.

Pluggable Content Query web part


Sub-classing the standard SharePoint ContentByQueryWebPart is a common approach for extending the default set of functionality that this web part offers. There are many examples for sub-classing the CQWP on the internet – for instance this sample by Andrew Connell that handles URL query parameters for adding dynamic filtering in the web part, or the paging enabled CQWP version by Waldek Mastykarz, just to name a few.

In most cases the need to sub-class the CQWP comes with some extra requirements for filtering or modifying the result set (SharePoint list items or documents) that the CQWP is supposed to present. The standard implementation of the CQWP of course has many filtering options itself but if you want some additional filtering based on dynamic conditions (as it is the case with the URL query parameters) you will have to consider the extending of the standard implementation by sub-classing the ContentByQueryWebPart class. As for the URL query parameters example, the SharePoint 2010 version of the CQWP has some limited built-in support for handling query parameters, but this still may be far from adequate in many cases.

I myself have sub-classed the CQWP in a handful of SharePoint projects, so by now I have at least ten different implementations covering cases where I had to handle similar or not-so-similar requirements, all based on the premise of having to modify the original CQWP result set in some way. Some time ago I brainstormed the whole issue a bit and came up with the idea that the problem can be handled in a more generalized way – if you, for instance, separate the web part implementation from the result set filtering/modifying logic in a sort of pluggable architecture. Basically, the idea is pretty simple and is further facilitated by the fact that you can have as many custom persistable properties in a web part as you need. The “plugin” can be as simple as a .NET class implementing a predefined .NET interface, and the web part can be configured with a dedicated string property containing the fully qualified type name of this class, which allows the web part to create instances of it using reflection. The web part is then implemented to call the methods of the plugin class, which implements the “contract” specified in the predefined interface. The whole point is that on the side of the extended CQWP there will be only a single implementation instead of many different ones for all possible cases, and the whole effort goes into the filtering/modification “plugin” class (or classes), where it is possible to concentrate on the retrieval/filtering specifics instead of having to pay attention to web part and presentation details.
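A minimal sketch of this pluggable design may look like this (the interface name, the property name and the plugin contract are my own invention for the illustration, not an existing API):

```csharp
// The plugin contract - a single method that receives the CQWP result set
// and returns the (possibly new) DataTable to be presented
public interface IResultSetProcessor
{
    DataTable ProcessResults(DataTable results);
}

public class PluggableContentQueryWebPart : ContentByQueryWebPart
{
    // custom persistable property holding the fully qualified type name of the plugin class
    [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
    public string ProcessorTypeName { get; set; }

    // creates the plugin instance with reflection
    protected IResultSetProcessor CreateProcessor()
    {
        if (string.IsNullOrEmpty(this.ProcessorTypeName)) return null;
        Type processorType = Type.GetType(this.ProcessorTypeName, true);
        return (IResultSetProcessor)Activator.CreateInstance(processorType);
    }
}
```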

And now to the more interesting part – how is it actually possible to hook onto the standard CQWP with your inheriting class so that you can modify the result set that the CQWP normally produces? Basically, there are a few options available here – it can be as simple as setting or modifying the values of the built-in filtering properties of the CQWP – for instance: FilterField1, FilterValue1, FilterType1, FilterOperator1, Filter1ChainingOperator, ListsOverride, QueryOverride, etc. Another approach is to use the handy ProcessDataDelegate property, to which you can provide a delegate referencing a custom method that receives a DataTable parameter and returns a DataTable as well. The idea of this delegate property is that the custom method you provide will be called with a DataTable parameter containing the freshly retrieved result set that the CQWP is about to display, and you will have the chance to modify this DataTable instance (and its contents) or even create a new DataTable instance and reuse, modify or filter the items from the original one. You then return the modified or newly created DataTable from the custom method to the CQWP so that it goes to the presentation handling logic (check Waldek’s posting mentioned at the top for more details).
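A quick sketch of the ProcessDataDelegate approach (the filtering condition is made up – the point is only the wiring of the delegate):

```csharp
public class FilteringContentQueryWebPart : ContentByQueryWebPart
{
    protected override void OnInit(EventArgs e)
    {
        // the delegate gets called with the freshly retrieved result set
        this.ProcessDataDelegate = this.FilterResults;
        base.OnInit(e);
    }

    private DataTable FilterResults(DataTable data)
    {
        // a made-up filtering condition - keep only the first five rows
        DataTable filtered = data.Clone();
        foreach (DataRow row in data.Rows.Cast<DataRow>().Take(5))
            filtered.ImportRow(row);
        return filtered;
    }
}
```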

Having briefly mentioned these two approaches for implementing extra filtering in the CQWP, and before I continue with describing the actual approach in my implementation, this is a good moment to give you some more details on the internal workings of the CQWP – more specifically on its data retrieving logic – so that you can better understand the whole hooking procedure in the inheriting “pluggable” CQWP. The first thing, which is probably familiar to most of you, is that the CQWP internally uses the CrossListQueryCache and CrossListQueryInfo classes combo for retrieving aggregated cross list data. These two classes are actually a thin wrapper on top of the SPWeb.GetSiteData method and the SPSiteDataQuery class, in which you provide the CAML query details for the cross list data call. What the CrossListQueryCache and CrossListQueryInfo classes add on top of the base implementation is caching support and support for audience filtering and grouping. Using the cross list classes or the SPWeb.GetSiteData method is pretty common even in many cases not related to the CQWP at all. Apart from this, there are several important facts giving further details about the exact implementation of the cross list query invocation in the CQWP:

  • The call of CrossListQueryCache.GetSiteData method that retrieves the list data for the CQWP takes place in the GetXPathNavigator virtual method (actually in a private method called by the latter) of the web part. Note that the GetXPathNavigator is a virtual method which means that you can override it thus it is an ideal candidate for hooking onto the base implementation of the CQWP, when it comes to modifying its data retrieval logic.
  • The DataTable instance returned by the CrossListQueryCache.GetSiteData method is assigned to the Data public property of the CQWP. This property has a public getter and setter and even more importantly – the CQWP issues the CrossListQueryCache.GetSiteData call only if the Data property is not initialized and has the default null value.
  • The CrossListQueryCache class requires an instance of the CrossListQueryInfo class in its constructor. The CrossListQueryInfo has properties like Query, Lists, Webs, RowLimit, etc. in which you specify CAML fragments that determine what list data you want to retrieve from the target site collection. The CQWP creates and initializes a CrossListQueryInfo instance based on its filtering properties, some of which I mentioned above: FilterField1, FilterValue1, FilterType1, FilterOperator1, Filter1ChainingOperator, ListsOverride, QueryOverride, etc. The good news here is that the CQWP provides a public method which you can use to get a CrossListQueryInfo instance with exactly the same configuration as the one that the web part uses for its internal cross list query call. This method is BuildCbqQueryVersionInfo – it actually returns an instance of the CbqQueryVersionInfo class, whose CbqQueryVersionInfo.VersionCrossListQueryInfo property contains the CrossListQueryInfo instance that we need.
  • And lastly – the call to the delegate ProcessDataDelegate property that I mentioned above also takes place in the CQWP’s GetXPathNavigator virtual method. Actually this happens at the same moment when the web part gets the list data in the DataTable instance from the CrossListQueryCache.GetSiteData call.

With all these facts it is easy now to devise a strategy for implementing the hooking logic in the inheriting class. It is obvious that the whole logic can be placed in the overridden version of the GetXPathNavigator method, just before you call the base CQWP implementation (base.GetXPathNavigator();) in the method body. For the plugin logic I thought of three alternative types of overriding the standard CQWP data retrieving and filtering logic:

  • The first type of override is the most radical one – you directly circumvent the standard logic for retrieving the list data in the CQWP and produce your own DataTable instance in some custom way. The only thing that you will need to do after producing the DataTable instance is to assign it to the CQWP's "Data" property. This way you will ensure that when you call the base GetXPathNavigator method, the CQWP won't issue its standard cross list query call, so there won't be any performance penalty from a double data retrieval. As for how you are going to get your DataTable instance – you have many possible choices here depending on your particular case. I can think of, for instance, using the SharePoint search functionality, using data in an external SQL database, or even using the CrossListQueryCache.GetSiteData method again but applying some custom caching logic instead of relying on the built-in one if it doesn't fit your requirements.
  • The second type of override is to manipulate just the CrossListQueryInfo instance that will be used for the CrossListQueryCache.GetSiteData call. This instance will contain the initial configuration of the web part and you can add some additional filtering or scoping logic to it based on your requirements. Then the inheriting web part will issue the call to the CrossListQueryCache.GetSiteData method and will again set the CQWP’s Data property, so that the base implementation doesn’t make a second call. Note also that in this type of overriding the CQWP’s filtering you have to deal only with a CrossListQueryInfo instance and manipulate the CAML fragments in its properties (merge some extra filtering CAML, etc.) instead of having to modify directly the CQWP’s properties like the FilterField1, FilterField2, FilterField3, etc. which the end user may have already set values to using the SharePoint UI.
  • The third type of override is to simply use the standard delegate ProcessDataDelegate property override mechanism – the CQWP will make the cross list query call itself and we will only manipulate the DataTable instance (or create a new instance using the rows from the latter) that it has already retrieved.

And now, let’s move on to some code snippets which will better illustrate the implementation of the pluggable CQWP. You can download the project containing the pluggable CQWP from here. Let me first show you the code of the .NET interface that should be implemented by the custom plugin classes for the pluggable CQWP:

    public interface IContentQueryPlugin
    {
        // should return true if the plugin can generate the data for the web part - the GetDataResults method should produce a DataTable instance
        bool CanGetDataResults(PluggableContentByQuery part);
        // should return true if the plugin needs to modify the CrossListQueryInfo instance that the CQWP uses for its internal cross list query - the ProcessQueryInfo method should be implemented in this case
        bool CanProcessQueryInfo(PluggableContentByQuery part);
        // should return true if the plugin needs to modify the DataTable returned by the CQWP cross list query - the ProcessData method should be implemented
        bool CanProcessData(PluggableContentByQuery part);

        // implement this if you want to retrieve the data without using the CQWP built-in retrieval mechanism
        DataTable GetDataResults(PluggableContentByQuery part, CrossListQueryInfo queryInfo);
        // implement this if you want to modify the CrossListQueryInfo instance before the CQWP issues its cross list query
        CrossListQueryInfo ProcessQueryInfo(PluggableContentByQuery part, CrossListQueryInfo queryInfo);
        // implement this if you want to modify the DataTable already retrieved by the CQWP with the results of the cross list query
        DataTable ProcessData(PluggableContentByQuery part, DataTable data);
    }

As you see the interface contains six methods, but the three types of overrides that I mentioned above are actually implemented by only three of them: GetDataResults, ProcessQueryInfo and ProcessData. The other three are auxiliary methods in the sense that each of them corresponds to one of the implementation methods – the auxiliary methods all start with the "Can" prefix. Their purpose is pretty transparent – since the types of overrides that you can implement are optional (and even mutually exclusive – the first one versus the second two), with the auxiliary methods you simply specify whether you are implementing a particular override. For instance, if you want to take care of retrieving the data results yourself, you implement the GetDataResults method and also the CanGetDataResults auxiliary method by simply returning "true" in its implementation (for the other "Can" methods you will have to return "false").

Let’s now have a closer look at the CQWP overrides implementing methods:

  • the GetDataResults method – as I mentioned you go for this method if you want to have your custom data retrieving logic and circumvent the CQWP’s normal data retrieving routine altogether. As you see, the method’s return type is a DataTable, meaning that you will have to gather your result items in some custom way and then produce a DataTable instance that you will pass back to the CQWP. The method accepts two parameters – a reference to the “parent” CQWP and a CrossListQueryInfo object. The latter will contain all CAML fragments for the cross list query that the web part would have issued – you can use that for the filtering against your custom data source if you use one. Further you can use the reference to the web part to inspect the properties of the latter and make use of their values in some way or another.
  • the ProcessQueryInfo method – you can use this method if you want to only add some additional CAML markup or make some other modifications to the CrossListQueryInfo object that the CQWP constructs for its cross list query call. This means that you are opting only for some additional filtering but the data retrieval method remains the standard one for the CQWP. The method accepts CrossListQueryInfo parameter and also returns a CrossListQueryInfo object. You can either make your modifications to the input CrossListQueryInfo instance and then return the same instance, or you can optionally create a brand new CrossListQueryInfo instance, copy some or all of the properties of the source instance and then return the new one.
  • the ProcessData method – if you opt for this override it means that you don't want to change the CAML filtering and only want to modify the DataTable instance that the CQWP produces with its standard cross list query call. You can perform all sorts of changes to the DataTable instance, like adding new columns or adding and removing data rows. Removing data rows is effectively equivalent to applying some extra filtering, which in many cases is better implemented in CAML, meaning that using ProcessQueryInfo to inject the filtering into the CrossListQueryInfo's CAML is probably the better solution in such cases.
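To make the ProcessData option a bit more tangible, here is a minimal, SharePoint-free sketch of the kind of post-processing a plugin could apply to the results DataTable (the column names are made up for the illustration):

```csharp
using System;
using System.Data;

// build a small stand-in for the results table the CQWP would pass in
DataTable data = new DataTable();
data.Columns.Add("Title", typeof(string));
data.Columns.Add("Expires", typeof(DateTime));

data.Rows.Add("Old item", new DateTime(2009, 1, 1));
data.Rows.Add("Current item", DateTime.Now.AddDays(10));

// the kind of work a ProcessData implementation could do:
// 1) drop rows that fail an extra (non-CAML) condition
for (int i = data.Rows.Count - 1; i >= 0; i--)
{
    if ((DateTime)data.Rows[i]["Expires"] < DateTime.Now)
    {
        data.Rows.RemoveAt(i);
    }
}

// 2) add a computed column that the item XSL templates could then render
data.Columns.Add("DaysLeft", typeof(int));
foreach (DataRow row in data.Rows)
{
    row["DaysLeft"] = (int)((DateTime)row["Expires"] - DateTime.Now).TotalDays;
}

Console.WriteLine(data.Rows.Count); // expected: 1
```

Keep in mind that row removal done here happens after the full result set has already been fetched, which is exactly why CAML-level filtering via ProcessQueryInfo is usually preferable.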

Let me now show you how the IContentQueryPlugin interface fits in the overriding procedure of the CQWP. As I mentioned above, the only hooking point that we need to change the data retrieving logic of the CQWP is the virtual GetXPathNavigator method, and this is its overridden version in the pluggable CQWP:

        protected override XPathNavigator GetXPathNavigator(string viewPath)
        {
            // This method is the single point where we need to hook to get the whole plugin thing working.
            // The standard CQWP issues the cross list query in its implementation of the GetXPathNavigator method
            // setting the CQWP's Data property with a DataTable instance holding the results of the query.
            // The CQWP issues its query only if the Data property is null, so we can even set it with
            // a custom-built DataTable if we want before calling base.GetXPathNavigator(viewPath).

            // first get the plugin instance
            IContentQueryPlugin plugin = this.GetQueryPlugin();
            if (plugin != null)
            {
                // check if the plugin wants to generate the data results first
                if (plugin.CanGetDataResults(this))
                {
                    // if so, provide it with a CrossListQueryInfo instance and set the Data property of the web part
                    CrossListQueryInfo queryInfo = this.BuildCbqQueryVersionInfo().VersionCrossListQueryInfo;
                    this.Data = plugin.GetDataResults(this, queryInfo);
                }

                // then check if the plugin wants to modify the original CrossListQueryInfo instance (check this only if the Data property is still null)
#if SHAREPOINT2010
                if (this.Results == null && plugin.CanProcessQueryInfo(this))
#else
                if (this.Data == null && plugin.CanProcessQueryInfo(this))
#endif
                {
                    // get the CrossListQueryInfo instance and pass it to the plugin
                    CrossListQueryInfo queryInfo = this.BuildCbqQueryVersionInfo().VersionCrossListQueryInfo;
                    queryInfo = plugin.ProcessQueryInfo(this, queryInfo);
                    // after this run the cross list query to populate the Data property
                    IssueQuery(queryInfo);
                }

                // set the CQWP ProcessDataDelegate property if the plugin needs to post-process the results DataTable
                if (plugin.CanProcessData(this))
                {
                    this.ProcessDataDelegate = (dt) =>
                    {
                        return plugin.ProcessData(this, dt);
                    };
                }
            }

            // only after we finish the plugin stuff call the base CQWP GetXPathNavigator implementation
            return base.GetXPathNavigator(viewPath);
        }

You can see that the implementation is actually pretty concise and straight-forward. Basically the implementation of the overriding GetXPathNavigator method simply dispatches the calls for modifying the data retrieving logic of the CQWP to the plugin class that implements the IContentQueryPlugin interface. Creating the plugin class instance is also a pretty trivial thing – the pluggable CQWP has a custom string property called PluginClassName, which as its name suggests should be set to the fully qualified type name (plus the full assembly name if the plugin class is not in the same assembly as the pluggable CQWP itself) of the plugin class. The pluggable CQWP creates the plugin class instance using reflection (there's also the option to cache the plugin class instance – check the source code for details). Note also that you can compile the web part for SharePoint 2010 too, but you will need to define an extra conditional compilation symbol in the settings of the Visual Studio project of the web part (check the conditional compiling statements in the source code).
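The reflection part is nothing exotic – the PluginClassName handling presumably boils down to something like the following sketch (a BCL type stands in for the plugin class here, since the real IContentQueryPlugin interface lives in the web part's assembly):

```csharp
using System;
using System.Text;

// the kind of value the PluginClassName property holds:
// a fully qualified type name, optionally with the full assembly name
string pluginClassName = typeof(StringBuilder).AssemblyQualifiedName;

// resolve the type from its name and instantiate it via reflection;
// in the real web part this would be followed by a cast to IContentQueryPlugin
Type pluginType = Type.GetType(pluginClassName, throwOnError: false);
object instance = pluginType != null ? Activator.CreateInstance(pluginType) : null;

Console.WriteLine(instance is StringBuilder); // expected: True
```

A null check on the resolved type (rather than throwOnError: true) lets the web part degrade gracefully to the standard CQWP behavior when the property is misconfigured.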

The Visual Studio project with the pluggable CQWP (download from here) also contains a sample plugin class implementing the IContentQueryPlugin interface. It implements only the IContentQueryPlugin.ProcessQueryInfo meaning that it only modifies the original CAML of the CQWP’s cross list query call. This is the source code of its implementation of the ProcessQueryInfo method:

        public CrossListQueryInfo ProcessQueryInfo(PluggableContentByQuery part, CrossListQueryInfo queryInfo)
        {
            // check the FilterFieldN and FilterValueN query parameters 0 through 9
            string[][] args = (new int[10]).Select((i, idx) => new string[] { HttpContext.Current.Request.QueryString["FilterField" + idx], HttpContext.Current.Request.QueryString["FilterValue" + idx] })
                .Where(arr => !string.IsNullOrEmpty(arr[0]) && !string.IsNullOrEmpty(arr[1]))
                .ToArray();
            // if none found or their values are empty just exit
            if (args.Length == 0) return queryInfo;

            // get the CAML query in an XElement
            XElement queryElement = XElement.Parse("<Query>" + queryInfo.Query + "</Query>");
            // retrieve the Where clause or create a new one if missing
            XElement whereElement = queryElement.Element("Where");
            if (whereElement == null)
            {
                whereElement = new XElement("Where");
                queryElement.Add(whereElement);
            }

            // get the existing where clause if present and append all comparison clauses from the query parameters
            string[] clauses = whereElement.Elements().Select(el => el.ToString())
                .Union(args.Select(arg => FormatComparisonQuery(arg[0], arg[1], "Eq")))
                .ToArray();

            // change the contents of the Where XElement performing a CAML 'And' on the clauses array
            whereElement.RemoveAll();
            whereElement.Add(XElement.Parse(FormatLogicalOpQuery("And", clauses)));

            // set the Query property of the CrossListQueryInfo instance
            queryInfo.Query = string.Concat(queryElement.Elements().Select(el => el.ToString()).ToArray());
            // return the modified CrossListQueryInfo
            return queryInfo;
        }

Some bits of the full implementation are missing (there are also several helper private methods in the plugin class – check the source code in the project for details) but you can see that the method simply "injects" some extra CAML into the Query property of the source CrossListQueryInfo parameter (the Query property may already contain some CAML that comes from the particular query configuration of the CQWP). The additional filtering that the sample plugin class applies is dynamically determined by the presence of certain URL query parameters on the page containing the web part. Actually these are none other than "FilterField1" and "FilterValue1", the same ones that the standard LV and XLV web parts use and also the same ones that Andrew Connell uses in his sample of sub-classing the CQWP that I mentioned in the beginning.
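The FormatComparisonQuery and FormatLogicalOpQuery helpers are not shown above, so here is a self-contained sketch of the same Where-clause merging with simplified stand-ins for the two helpers (my own shapes for the illustration, not necessarily the exact ones from the project):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// hypothetical helper - builds a comparison clause like the ones the plugin injects
// (a real implementation would also need to XML-escape the value)
static string FormatComparisonQuery(string field, string value, string op)
{
    return string.Format("<{0}><FieldRef Name='{1}'/><Value Type='Text'>{2}</Value></{0}>", op, field, value);
}

// hypothetical helper - folds an array of clauses into nested binary logical elements
static string FormatLogicalOpQuery(string op, string[] clauses)
{
    return clauses.Aggregate((left, right) => string.Format("<{0}>{1}{2}</{0}>", op, left, right));
}

// an existing CQWP query with one filter already configured through the UI
string query = "<Where><Eq><FieldRef Name='ContentType'/><Value Type='Text'>News</Value></Eq></Where>";

XElement queryElement = XElement.Parse("<Query>" + query + "</Query>");
XElement whereElement = queryElement.Element("Where");

// merge an extra clause coming from a FilterField1/FilterValue1 query string pair
string[] clauses = whereElement.Elements().Select(el => el.ToString())
    .Union(new[] { FormatComparisonQuery("Department", "Sales", "Eq") })
    .ToArray();

whereElement.RemoveAll();
whereElement.Add(XElement.Parse(FormatLogicalOpQuery("And", clauses)));

// the Where clause now reads <Where><And><Eq>...</Eq><Eq>...</Eq></And></Where>
Console.WriteLine(whereElement);
```

Note that CAML's And/Or elements are binary, which is why the helper folds more than two clauses into nested And elements instead of putting them all under one.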

To see the plugin class working with the pluggable CQWP you will need to set its fully qualified type name in the PluginClassName property of the web part. You can easily do that from the standard SharePoint UI – for the sample plugin class you will have to put in this value: “Stefan.SharePoint.WebParts.QueryStringPlugin, Stefan.SharePoint.WebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=f9bc508a219e6843”.

SharePoint Search 2007 – hacking the SiteData web service – Part I


When I started preparing this posting I realized that it would be too long, so I decided to split it into two parts – the first one being more introductory and explaining some aspects of the inner workings of the SharePoint search engine, and the second one concentrating on the actual implementation of the “hack”. Then when I started the first part, which you are now reading, I felt that the posting’s title itself already raises several questions, so it would be a good idea to start with a brief Q & A which will help you get into the discussed matter. This is a short list of questions that you may have also asked yourself two sentences into the posting:

  1. What is the relation between the SharePoint search service and the SharePoint SiteData web service in the first place?
  2. Why would I need to change the working of the SharePoint search, what can be the reasons and motives for that?
  3. Is it a wise idea and would you recommend using this hack?

And the answers come promptly:

  1. To answer this one we need to have a closer look at the internal workings of the SharePoint search index engine. If you are not familiar with some core concepts and basic terminology like index engine, content sources, filter daemon, protocol handlers, IFilters, I would recommend that you first check these two MSDN articles – here (for a high level architecture overview) and here (for a high level overview of the protocol handlers). Let me start with several words about the protocol handlers – these are basically responsible for crawling the different types of content sources. They are implemented as COM components written in unmanaged code (C or C++). If you are familiar with COM Interop you will know that it is also possible to create COM components using .NET and managed code and in fact there is a sample .NET protocol handler project in CodePlex. I am not sure though how wise it is to create your own protocol handler with managed code (apart from the fact that it is quite complex to start with) knowing that all existing protocol handlers by Microsoft and third party vendors are written in unmanaged code.
    You can check the available index engine protocols and their matching protocols handlers for your SharePoint installation in the Windows registry:

    (image: search-ph – the registry entries listing the available search protocols and their protocol handlers)

    You can see that there are different protocol handlers for the different types of content sources – SharePoint sites, external web sites, file shares, BDC, etc. The name of the protocol handler (the “Data” column in the image above) is actually the ProgID (in COM terms) of the COM component that implements the handler.
    In this posting we are only interested in just one of the protocol handlers – this is the one for the Sts3 protocol, which is responsible for crawling the content from SharePoint sites. The same handler is also used for the Sts3s protocol (see the image) which is again for SharePoint sites but which use the HTTPS (SSL) scheme. And now the interesting part – how does the Sts3 protocol handler traverse the content from SharePoint. The answer is actually also the answer of the first question in the list above – it calls the standard SharePoint SiteData web service (/_vti_bin/SiteData.asmx). If you wonder why for instance it doesn’t use the SharePoint object model directly – the main reason I think is for greater scalability (not to mention that it would be at best challenging to call managed from unmanaged code). The better scalability comes from the fact that the handler can be configured to call the SiteData web service from all available web front servers in the SharePoint farm, which can distribute better the workload and utilize better the resources of the farm. Later in the posting I will give you more details about how you can check and monitor the calls to the SiteData web service from the crawl engine and also some additional information about the exact methods of the SiteData service that are used for traversing the content of the SharePoint sites.
  2. As I already mentioned in the answer for the first question, this posting deals specifically with the search functionality that targets SharePoint content. So, the motives to come to this hack are directly related to the using and querying of SharePoint data. The reasons and motives for these changes can be separated in two groups – the first one is more general - why use SharePoint search and not some other available alternative method. The second one is more specific – what is not available or well implemented in the SharePoint search query engine that needs to be changed or improved.
    Let me start with the first group – out of the available methods to query and aggregate SharePoint content in the form of SharePoint list item and document metadata – SharePoint search doesn’t even come as the first or preferred option. Normally you would use the SharePoint object model with the SPList, SPListItem and SPQuery classes (for a single SharePoint list) or the SPSiteDataQuery class with the SPWeb.GetSiteData method (or alternatively the CrossListQueryInfo and CrossListQueryCache classes if you use the publishing infrastructure) for querying and retrieving data from many lists in one site collection. The cross list query functionality is actually directly used in the standard SharePoint content by query web part (CQWP), so even without using custom code you may have experienced certain issues with it. Probably the biggest one is performance – maybe you’ve never seen it or you are well aware of it. It becomes a real issue only if the size of your site collection in terms of the number of sub-sites gets very big and you use queries that aggregate data from most of the available sub-sites. You can add to these two conditions the number of list items in the individual SharePoint lists, which further degrades the performance. So, when does this become a visible issue – you can have various combinations of the conditions above, but if you query more than one hundred sub-sites and/or you have more than several thousand items in every list (or many of the lists) you may see page loading times ranging from several seconds to well above a minute in certain extreme cases. And … this is an issue even with the built-in caching capabilities of the cross list query classes. As to why the caching doesn’t always solve the performance issue – there are several reasons (and cases) for that: first, there are specific CAML queries for which the caching is not used at all (e.g. queries that contain the <UserID /> element); secondly, even if the caching works well, the first load that populates the cache will still be slow, etc.
    Let me now briefly explain why the cross list query has such performance issues (only in the above mentioned cases). The main reason is the fact that the content database contains all list data (all list items in the whole site collection – it may actually contain more than one site collection) in a highly denormalized table called AllUserData. This design solution was totally deliberate because it allows all the flexibility that we know with SharePoint lists in terms of the ability to add, modify and customize fields, which unfortunately comes with a price in some rare cases like this one. Let’s see how the cross list query works from a database perspective with a real example – let’s say that we have a site collection with one hundred sub-sites each containing an “announcements” list with two custom fields “expiration date” and “publication date”. On the home page of the root site we want to place a CQWP that displays the latest five announcements (aggregated from all sub-sites) ordered by publication date and for which the expiration date is in the future. Knowing that all list item data is contained in a single database table you may think that it would be possible to retrieve the aggregated data in a single SQL query but, alas, this is not the case. If you have a closer look at the AllUserData table you will find out that it contains columns whose names go: nvarchar1, nvarchar2, …, int1, int2, …, datetime1, datetime2, … – these are the underlying storage placeholders for the various types of SharePoint fields in your lists. Obviously the “publication date” and “expiration date” will be stored in two of the “datetimeN” SQL columns, but the important thing is that for the different lists the mappings may be totally different, e.g. for list 1 “publication date” and “expiration date” may map to datetime1 and datetime2 respectively, whereas for list 2 they can map to datetime3 and datetime4 respectively. This heterogeneous storage pattern makes the retrieval much more complex and costly in time – the object model first needs to extract the metadata for all target lists in these one hundred sites (which contains the mappings for the fields) and after that retrieve the items from all one hundred lists one by one, making a SQL union with the correct field to SQL column mappings and applying the filtering and sorting after that. If you are interested in checking that yourself you can use the SQL Profiler tool that comes with the MS SQL Management Studio.
    Having seen the performance issues that may arise with the usage of the cross list query built-in functionality of SharePoint, it is quite natural to check what SharePoint Search can offer as an alternative. Obviously it performs much faster in these cases and allows data retrieval and metadata filtering but the results and functionality it has are not exactly identical to the ones of the cross list query. And here we come to the second group of motives for implementing this kind of hack that I mentioned in the beginning of this paragraph. So let’s see some of the things that we’re missing in SharePoint search – from a data retrieval perspective – the text fields, especially the ones that contain HTML are returned by the search query with the mark-up stripped out (this is especially embarrassing for the publishing Image field type, whose values are stored as mark-up and get retrieved virtually empty by the search query); the “content type id” field is never crawled and cannot be used as a crawled and managed property; for the “lookup” field type (and derivative field types as the “user” type) – these are retrieved as plain text, with the lookup item ID contained in the field value stripped out; etc. From filtering and sorting perspective, you have pretty much everything needed – you can perform comparison operations on the basic value types – text, date, integer and float and perform the correct sorting based on the respective field type. What is missing is for instance the filtering on “lookup” (including “user”) fields based not on the textual value but on the integer (lookup ID) value – this is because this part of the lookup field value is simply ignored by the search crawler (we’ll come to that in the next part of the posting). For the same reason you cannot filter on the “content type id” field.
    The next question is of course is it possible to achieve these things with the SharePoint search – the answer is yes, and the hack that is a subject of this posting does exactly that.
  3. And lastly the third and most serious one – most of the time I am overly critical towards my own code and solutions, so I would normally not recommend using this hack (I will publish the source code in the second part of the posting), at least not in production environments. I would only suggest that you use it very limitedly in development/testing or small intranet environments if at all. I suppose that the material in the posting about some of the inner workings of the indexing engine and the SiteData web service would be interesting and useful by itself.
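The field-to-column mapping problem described in the second answer can be illustrated with a toy model – two lists whose identically named fields land in different datetimeN columns, forcing the object model to build a separate per-list query (the list names and mappings below are invented for the example):

```csharp
using System;
using System.Collections.Generic;

// toy model of the AllUserData storage pattern described above:
// each list maps its fields to the generic datetimeN columns differently
var fieldMappings = new Dictionary<string, Dictionary<string, string>>
{
    // list name -> (field name -> physical column)
    ["Announcements-1"] = new Dictionary<string, string>
    {
        ["PublicationDate"] = "datetime1",
        ["ExpirationDate"] = "datetime2"
    },
    ["Announcements-2"] = new Dictionary<string, string>
    {
        ["PublicationDate"] = "datetime3",
        ["ExpirationDate"] = "datetime4"
    }
};

// a cross list query must first read each list's mapping and only then
// build the per-list SELECT against the right physical column, e.g.:
foreach (var list in fieldMappings)
{
    string column = list.Value["ExpirationDate"];
    Console.WriteLine("SELECT ... FROM AllUserData WHERE {0} > GETDATE() -- list {1}", column, list.Key);
}
```

With one hundred lists that means one hundred metadata lookups plus a hundred-way union, which is exactly where the page loading times quoted above come from.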

So, let’s now see how the index engine, or more precisely the Sts3 protocol handler, calls the SiteData web service. Basically you can track the SiteData.asmx invocations by simply checking the IIS logs of your web front server or servers (you have to have IIS logging enabled beforehand). If you first run a full crawl on one of your “SharePoint Site” content sources from the SSP admin site and after it completes open the latest IIS log file, you will see that there are many requests to _vti_bin/SiteData.asmx and also to all pages and documents available in the SharePoint sites that were listed in the selected content source. It is logical to conclude that the protocol handler calls the SiteData web service to traverse the existing SharePoint hierarchy and also to fetch the available metadata for the SharePoint list items and documents, and then it also opens every page and document and scans/indexes their contents so that they are later available for the full text search queries.
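If you want to quickly quantify the crawler's traffic, a few lines of code over the IIS log are enough – this sketch counts the SiteData.asmx hits in a small inlined W3C-format excerpt (the field order depends on your IIS logging configuration, so the sample here is illustrative):

```csharp
using System;
using System.Linq;

// a minimal W3C-format excerpt of the kind you would see after a full crawl
string log = @"#Fields: date time cs-method cs-uri-stem sc-status
2011-01-20 10:00:01 POST /_vti_bin/SiteData.asmx 200
2011-01-20 10:00:02 GET /Pages/Default.aspx 200
2011-01-20 10:00:03 POST /_vti_bin/SiteData.asmx 200";

// skip the W3C header lines (they start with '#') and count the web service hits
int siteDataCalls = log
    .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
    .Where(line => !line.StartsWith("#"))
    .Count(line => line.Contains("/_vti_bin/SiteData.asmx"));

Console.WriteLine(siteDataCalls); // expected: 2
```

Pointing the same logic at the real log files in the IIS log directory would give you the per-crawl call counts without any extra instrumentation.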

The checking of the IIS logs was in fact the first thing that I tried when I began investigating the SiteData-SharePoint Search relation but I was also curious to find out what method or methods exactly of the SiteData web service get called when the crawler runs. If you have a look at the documentation of the SiteData web service you will see that some of its methods like GetSite, GetWeb, GetListCollection, GetList, GetListItems, etc. look like ideal candidates for traversing the SharePoint site hierarchy starting from the site collection level down to the list item level. The IIS logs couldn’t help me here because they don’t track the POST body of the HTTP requests, which is exactly the place where the XML of the SOAP request is put. So I needed a little bit more verbose tracking here and I quickly came up with a bit ugly but working solution – I simply modified the global.asax of my test SharePoint web application like this:

<%@ Assembly Name="Microsoft.SharePoint" %>
<%@ Application Language="C#" Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>
<%@ Import Namespace="System.IO" %>

<script RunAt="server">

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        TraceUri();
    }

    protected void TraceUri()
    {
        const string path = @"c:\temp\wssiis.log";
        try
        {
            HttpRequest request = HttpContext.Current.Request;
            DateTime date = DateTime.Now;
            string httpMethod = request.HttpMethod;
            string url = request.Url.ToString();
            string soapAction = request.Headers["SoapAction"] ?? string.Empty;
            string inputStream = string.Empty;

            if (string.Compare(httpMethod, "post", true) == 0)
            {
                request.InputStream.Position = 0;
                StreamReader sr = new StreamReader(request.InputStream);
                inputStream = sr.ReadToEnd();
                request.InputStream.Position = 0;
            }

            string msg = string.Format("{0}, {1}, {2}, {3}, {4}\r\n", date, httpMethod, url, soapAction, inputStream);

            File.AppendAllText(path, msg);
        }
        catch { }
    }

</script>

The code is pretty simple – it hooks onto the BeginRequest event of the HttpApplication class which effectively enables it to track several pieces of useful information for every HTTP request made against the target SharePoint web application. So, apart from the date and time of the request, the requested URL and the HTTP method (GET, POST or some other) I also track the “SoapAction” HTTP header which contains the name of the SOAP method for a web service call and also the POST content of the HTTP request which contains the XML of the SOAP request (in the case of a web service call). The SOAP request body contains all parameters that are passed to the web service method call, so by tracking this I could have everything I wanted – the exact web service method being called and the exact values of the parameters that were being passed to it. Just to quickly make an important note about this code – don’t use it for anything serious, I created it only for testing and quick tracking purposes.
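One small detail that makes the trace file easier to read: the SOAP method name is simply the last segment of the "SoapAction" value, so it can be extracted with a couple of string operations (the sample value below assumes the usual SharePoint SOAP namespace):

```csharp
using System;

// the SoapAction header carries the SOAP method as the last segment of a URL-like
// value (typically wrapped in double quotes in the raw HTTP header)
string soapAction = "\"http://schemas.microsoft.com/sharepoint/soap/GetContent\"";

string method = soapAction.Trim('"').TrimEnd('/');
method = method.Substring(method.LastIndexOf('/') + 1);

Console.WriteLine(method); // expected: GetContent
```

This is exactly how the fourth column of the trace file maps to an actual SiteData web service method.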

With this small custom tracking of mine enabled I ran a full crawl of my test web application again and after the crawl completed I opened the log file (the tracking code writes to a plain text file in a hard-coded disc location) and to my surprise I saw that only two methods of the SiteData web service were called – GetContent and GetURLSegments. Actually the real job was obviously done by the GetContent method – there were about 30-35 calls to it, and only one call to GetURLSegments. You can see the actual trace file that I had after running the full crawl here. My test web application was very small, containing only one site collection with a single site, so the trace file is very small and easy to follow. The fourth column contains something that looks like a URL address but this is in fact the value of the “SoapAction” HTTP header – the last part of this “URL” is the actual method that was called in the SiteData web service. The fifth column contains the XML of the SOAP request that was used for the web service calls – you can see the parameters that were passed to the SiteData.GetContent method inside. If you check the MSDN documentation about the SiteData.GetContent method you will see that its first parameter is of type “ObjectType” which is an enumeration. The possible values of this enumeration are: VirtualServer, ContentDatabase, SiteCollection, Site, Folder, List, ListItem, ListItemAttachments. As one can deduce from this enumeration, the GetContent method is designed and obviously used for hierarchy traversing and metadata retrieval (the MSDN article explicitly mentions that in the yellow note box at the bottom). If you check the sample trace file from my test site again you will see that the calls made by the crawler indeed start with a call using ObjectType.VirtualServer and continue down the hierarchy with ObjectType.ContentDatabase, ObjectType.SiteCollection, etc.
You may notice something interesting – after the calls with ObjectType.List there are no calls with ObjectType.ListItem. Actually in the trace file there is only one call to GetContent using ObjectType.ListItem and it is invoked for the corresponding list item of the home (welcome) page of the site, which in my case was a publishing page. The other method of the SiteData web service – GetURLSegments – is also called for the home page only; it basically returns the containing site and list of the corresponding list item given the URL of the page. And if you wonder which option is used for retrieving list items – it is neither ObjectType.List nor ObjectType.ListItem. The former returns an XML fragment containing mostly the list metadata and the latter the metadata of a single list item. The option that actually returns the metadata of multiple list items is ObjectType.Folder. Even though the name is a bit misleading, this option can be used in two cases – to retrieve the files from a folder that is not in a SharePoint list or library (e.g. the root folder of a SharePoint site) or to retrieve the list items from a SharePoint list/library. If you check the sample trace file you will see that the GetContent method is not called with ObjectType.Folder for all lists – this is because the crawler is smart enough not to call it for empty lists (and most of the lists in my site were empty). And the crawler knows that a particular list is empty from the preceding GetContent (ObjectType.List) call which returns the ItemCount property of the list.
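The "skip empty lists" decision can be reproduced from the GetContent (List) result alone: read the ItemCount attribute of the Metadata element and only issue the Folder-level call when it is non-zero. A minimal sketch with System.Xml – the abbreviated XML shape follows the sample output shown further below; this illustrates the idea, not the crawler's actual code:

```csharp
using System.Xml;

static class ItemCountCheck
{
    // Extracts the ItemCount attribute from a GetContent(ObjectType=List)
    // result; a crawler-style client would skip the subsequent
    // GetContent(ObjectType=Folder) call when this returns 0.
    public static int GetItemCount(string listXml)
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(listXml);
        XmlElement metadata = (XmlElement)doc.SelectSingleNode("/List/Metadata");
        return int.Parse(metadata.GetAttribute("ItemCount"));
    }
}
```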
There is one other interesting thing about how the crawler uses GetContent with ObjectType.Folder – if the list contains a big number of items, the crawler doesn't retrieve all of them with one call to GetContent but instead reads them in chunks of two thousand items each (the logic in SharePoint 2010 is even better – it determines the number of items in a batch depending on the number of fields that the items in the particular list have). And … about the return value of the GetContent method – it is in all cases an XML document that contains the metadata for the requested object or objects. It is interesting to note that the XML also contains the permissions data associated with the object, which is obviously used by the indexing engine to maintain ACL-s for the various items in its index, which in turn allows the query engine to apply appropriate security trimming based on the permissions of the user that issues the search query. For the purposes of this posting we are mostly interested in the result XML for the ObjectType.List and ObjectType.Folder GetContent invocations – here're two sample XML fragments from GetContent (List) and GetContent (Folder) calls. Well, indeed they seem quite … SharePoint-ish. Except for the permissions parts, GetContent (Folder) yields pretty much the same XML as the standard Lists.GetListItems web service method. Have a look at the attributes containing the field values in the list items – these start with the well-known "ows_" prefix, which is the very same prefix that we see in the crawled properties associated with SharePoint content. Another small detail to note is that the GetContent (Folder) XML is not exactly well-formed – for example it contains improperly escaped new line characters inside attribute values (not that this prevents it from rendering normally in IE) – I will come back to this point in the second part of this posting.
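The chunked retrieval can be sketched as a loop around GetContent (ObjectType.Folder) driven by its ref lastItemIdOnPage "book-mark" parameter. The service call is modeled by a delegate here so the paging logic itself runs standalone; with a real SiteData proxy (an assumption – any wsdl.exe-generated client would do) the delegate body would be the GetContent call:

```csharp
using System.Collections.Generic;

static class PagedCrawl
{
    // Mirrors GetContent's ref lastItemIdOnPage book-mark parameter.
    public delegate string GetPage(ref string lastItemIdOnPage);

    public static List<string> FetchAllPages(GetPage getContentFolder)
    {
        List<string> pages = new List<string>();
        string bookmark = null;                   // null/empty = start from the beginning
        do
        {
            pages.Add(getContentFolder(ref bookmark));
        }
        while (!string.IsNullOrEmpty(bookmark));  // non-empty = more chunks to read
        return pages;
    }
}
```

With the real service each returned page would be one `<Folder>` XML document covering up to the chunk size (two thousand items in 2007, as described above).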

So far so good, but the results above are from a full crawl. And what happens when we run an incremental crawl? Have a look at the sample trace file that I got when I ran an incremental crawl on my test web application after I had changed several list items and had created a new sub-site and several lists in it. You can see that it contains again several calls to SiteData.GetContent, one call to SiteData.GetURLSegments and this time one call to SiteData.GetChanges. If you wonder why there is only one call to SiteData.GetChanges – a quick look at the result XML of this method will explain most of it. If you open the sample XML file you will see that the XML is something like a merged document from the results of the GetContent method for all levels from "ContentDatabase" down to "ListItem" … but containing only the parts of the SharePoint hierarchy whose leaf descendants (that is, list items) got changed since the time of the last crawl. So basically, with one call the crawler can get all the changes in the entire content database … well, almost. Unless there are too many changes – in that case the method is called several times, each time retrieving a certain number of changes and then continuing from the reached book-mark. If you check the documentation of the GetChanges method in MSDN you will see that its first parameter is again of type ObjectType. Unlike the GetContent method however, you can use it here only with the "ContentDatabase" and "SiteCollection" values (the rest of the possible values of the enumeration are ignored and the returned XML if you use them is the same as with the "ContentDatabase" option). And one last thing in the case of the incremental crawl – the calls to the GetContent method are only for new site collections, sites and lists (which is to be expected). The metadata for new, updated and deleted list items in existing lists is retrieved with the call to the GetChanges method.
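The "retrieve a batch, continue from the book-mark" behavior maps directly onto GetChanges' ref/out parameters: the change id advances with every call and moreChanges says whether another call is needed. As in the previous sketch, the service is represented by a delegate so the loop is runnable without SharePoint; in the real case the delegate body would invoke GetChanges on a SiteData proxy:

```csharp
using System.Collections.Generic;

static class ChangeLogReader
{
    // Mirrors the ref change-id book-mark and out moreChanges flag
    // of SiteData.GetChanges.
    public delegate string GetChangesCall(ref string lastChangeId, out bool moreChanges);

    public static List<string> ReadAllChanges(GetChangesCall getChanges)
    {
        List<string> batches = new List<string>();
        string lastChangeId = "";   // empty = start from the crawler's stored change id
        bool moreChanges;
        do
        {
            batches.Add(getChanges(ref lastChangeId, out moreChanges));
        }
        while (moreChanges);        // keep going until the change log is drained
        return batches;
    }
}
```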

So, this was in short the mechanism of the interaction between the SharePoint Search 2007 indexing engine (the Sts3 protocol handler) and the SharePoint SiteData web service. In the second part of this posting I will continue with explaining how I got to hack the SiteData web service and what the results of this hack were for the standard SharePoint search functionality.

SharePoint Search 2007 – hacking the SiteData web service – Part II


In the first part of this posting I outlined the physical architecture of the SharePoint search engine, mostly concerning the part of the Sts3 protocol handler and the role that the "SiteData" web service plays in the crawling process. The very fact that the search engine uses a web service to access and retrieve SharePoint list and site data gave me the first clue as to how I can "hack" the search crawling process. My idea was simple – since the web service is hosted in IIS, I can use some basic URL rewriting techniques so that the call to the real web service is "covertly" redirected to a custom web service which will either implement the original logic of the standard service adding some additional functionality or will simply serve as a proxy to the real web service but will do some modifications to either the input or the output of the latter. Out of these two options the first one seemed overly complex and the second one was actually quite sufficient for the goals that I had with the implementation of the "hack". The thing is that the output XML of the SiteData.GetContent method contains all relevant SharePoint list item and schema data – the "List" option for the ObjectType parameter returns the schema of the SharePoint list and the "Folder" option – the list item data (see the sample XML outputs of the web service in the first part of the posting). The problem is that the Sts3 protocol handler "interprets" the data from the output XML in its own specific way, which results in the well-known limitations of the crawled properties and the data retrieved for them in the search index for SharePoint content. So what I decided to do was to create a small custom web service which implements the SiteData.GetContent and SiteData.GetChanges methods (with the exact same signatures). Since I wanted to use it as a proxy to the real SiteData web service I needed somehow to pass the call to it.
The simplest option here would have been to simply issue a second web service call from my web service, but the better solution was to just instantiate an instance of the SiteData web service class (Microsoft.SharePoint.SoapServer.SiteData from the STSSOAP assembly which is in the _app_bin subfolder of the SharePoint web application) and call its corresponding method. The last trick of the "hack" was to get the XML output from the SiteData.GetContent and SiteData.GetChanges methods and modify it (actually add some additional stuff to it) so that I could get the extra crawled properties in the search index that I needed.

So, before going into details about the implementation of the "hack", I want to point out several arguments as to why you should think twice before starting to use it (it's become a habit of mine trying to dissuade people from using my own solutions) and why I would rather not recommend using it in bigger production environments:

  • It tampers with the XML output of the standard SiteData web service – this may lead to unpredictable behavior of the index engine and result in it being unable to crawl your site(s). The standard XML output of the SiteData service is itself not quite well-formed XML, so before I found the right way to modify it without losing its original formatting I kept receiving crawler errors which I could find in the crawl log of my SSP admin site.
  • There will be a serious performance penalty compared to using just the standard SiteData service. The increased processing time comes from the added parsing of the output XML and the extra modifications and additions added to it.
  • The general argument that this is indeed a hack which gets inside the standard implementation of the SharePoint search indexing – which won't sound good to managers and Microsoft guys alike.

Having said that (and if you are still reading) let me give you the details of the implementation itself. The solution of the “hack” can be downloaded from here (check the installation notes below).

The first thing that I will start with is the URL rewriting logic that allows the custom web service to be invoked instead of the standard SiteData web service. In IIS 7 there is built-in support for URL rewriting, but because I was testing on a Windows 2003 server with IIS 6 and because I was a bit lazy to implement a full proxy for the SiteData web service I went with the other approach … which is to use a custom .NET HTTP module (the better solution) or to simply modify the global.asax of the target SharePoint web application (the worse but easier to implement solution) – which is the one that I actually used. The advantage of using custom URL rewriting logic as opposed to the built-in URL rewriting functionality in IIS 7 is that in the former you can additionally inspect the HTTP request data and apply the URL rewriting only for certain methods of the web service. So in the modified version of the global.asax I do an explicit check for the web service method being called and redirect to the custom web service only if I detect the GetContent or GetChanges methods (all other methods will hit the standard SiteData service directly and no URL rewriting will take place). You can see the source code of the global.asax file that I used below:

<%@ Application Language="C#" Inherits="Microsoft.SharePoint.ApplicationRuntime.SPHttpApplication" %>

<script language="C#" runat="server">

protected void Application_BeginRequest(Object sender, EventArgs e)
{
    CheckRewriteSiteData();
}

protected void CheckRewriteSiteData()
{
    if (IsGetListItemsCall())
    {
        string newUrl = this.Request.Url.AbsolutePath.ToLower().Replace("/sitedata.asmx", "/stefansitedata.asmx");
        HttpContext.Current.RewritePath(newUrl);
    }
}

protected bool IsGetListItemsCall()
{
    if (string.Compare(this.Request.ServerVariables["REQUEST_METHOD"], "post", true) != 0) return false;
    if (!this.Request.Url.AbsolutePath.EndsWith("/_vti_bin/SiteData.asmx", StringComparison.InvariantCultureIgnoreCase)) return false;

    if (string.IsNullOrEmpty(this.Request.Headers["SOAPAction"])) return false;

    string soapAction = this.Request.Headers["SOAPAction"].Trim('"').ToLower();
    if (!soapAction.EndsWith("getcontent") && !soapAction.EndsWith("getchanges")) return false;
    if (string.Compare(ConfigurationManager.AppSettings["UseSiteDataRewrite"], "true", true) != 0) return false;

    return true;
}

</script>

Note also that in the code I check a custom "appSettings" key in the web.config file that determines whether or not to apply the URL rewriting logic. This way you can easily turn the "hack" on or off with a simple tweak in the configuration file of the SharePoint web application.
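The corresponding web.config fragment would look like this (only the relevant appSettings entry is shown – a SharePoint web.config already contains an appSettings section, so only the "add" element needs to be inserted):

```xml
<configuration>
  <appSettings>
    <!-- set to "false" (or remove) to bypass the custom SiteData service -->
    <add key="UseSiteDataRewrite" value="true" />
  </appSettings>
</configuration>
```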

And this is the code of the custom “SiteData” web service:

[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1), WebService(Namespace = "http://schemas.microsoft.com/sharepoint/soap/")]
public class SiteData
{
    [WebMethod]
    public string GetContent(ObjectType objectType, string objectId, string folderUrl, string itemId, bool retrieveChildItems, bool securityOnly, ref string lastItemIdOnPage)
    {
        try
        {
            SiteDataHelper siteDataHelper = new SiteDataHelper();
            return siteDataHelper.GetContent(objectType, objectId, folderUrl, itemId, retrieveChildItems, securityOnly, ref lastItemIdOnPage);
        }
        catch (ThreadAbortException) { throw; }
        catch (Exception exception) { throw SoapServerException.HandleException(exception); }
    }

    [WebMethod]
    public string GetChanges(ObjectType objectType, string contentDatabaseId, ref string LastChangeId, ref string CurrentChangeId, int Timeout, out bool moreChanges)
    {
        try
        {
            SiteDataHelper siteDataHelper = new SiteDataHelper();
            return siteDataHelper.GetChanges(objectType, contentDatabaseId, ref LastChangeId, ref CurrentChangeId, Timeout, out moreChanges);
        }
        catch (ThreadAbortException) { throw; }
        catch (Exception exception) { throw SoapServerException.HandleException(exception); }
    }
}

As you see, the custom "SiteData" web service implements only the GetContent and GetChanges methods. We don't need to implement the other methods of the standard SiteData web service because the URL rewriting will redirect to the custom web service only when these two methods are invoked. The two methods in the custom service have the exact same signatures as the ones in the standard SiteData web service. The implementation of the methods is a simple delegation to the corresponding methods with the same names in a helper class: SiteDataHelper. Here is the source code of the SiteDataHelper class:

using SP = Microsoft.SharePoint.SoapServer;

namespace Stefan.SharePoint.SiteData
{
    public class SiteDataHelper
    {
        public string GetChanges(ObjectType objectType, string contentDatabaseId, ref string startChangeId, ref string endChangeId, int Timeout, out bool moreChanges)
        {
            SP.SiteData siteData = new SP.SiteData();
            string res = siteData.GetChanges(objectType, contentDatabaseId, ref startChangeId, ref endChangeId, Timeout, out moreChanges);
            try
            {
                ListItemXmlModifier modifier = new ListItemXmlModifier(new EnvData(), res);
                res = modifier.ModifyChangesXml();
            }
            catch (Exception ex) { Logging.LogError(ex); }
            return res;
        }

        public string GetContent(ObjectType objectType, string objectId, string folderUrl, string itemId, bool retrieveChildItems, bool securityOnly, ref string lastItemIdOnPage)
        {
            SPWeb web = SPContext.Current.Web;
            SP.SiteData siteData = new SP.SiteData();
            string res = siteData.GetContent(objectType, objectId, folderUrl, itemId, retrieveChildItems, securityOnly, ref lastItemIdOnPage);
            try
            {
                EnvData envData = new EnvData() { SiteId = web.Site.ID, WebId = web.ID, ListId = objectId.TrimStart('{').TrimEnd('}') };
                if ((objectType == ObjectType.ListItem || objectType == ObjectType.Folder) && !securityOnly)
                {
                    ListItemXmlModifier modifier = new ListItemXmlModifier(envData, res);
                    res = modifier.ModifyListItemXml();
                }
                else if (objectType == ObjectType.List)
                {
                    ListItemXmlModifier modifier = new ListItemXmlModifier(envData, res);
                    res = modifier.ModifyListXml();
                }
            }
            catch (Exception ex) { Logging.LogError(ex); }
            return res;
        }
    }
}

The thing to note here is that the two methods in the SiteDataHelper class create an instance of the SiteData web service class directly (note that this is not a generated proxy class, but the actual web service class implemented in the standard STSSOAP.DLL). The GetContent and GetChanges methods are called on this instance respectively and the string result of the calls is stored in a local variable. The string value that these methods return actually contains the XML with the list schema or list item data (depending on the "ObjectType" parameter being "List" or "Folder"). This XML data is then provided to an instance of the custom ListItemXmlModifier class which handles all XML modifications for both the GetContent and GetChanges methods. Note that for the GetContent method, the XML results are passed for modification only if the "ObjectType" parameter has the "ListItem", "Folder" or "List" values. I am not going to show the source code of the ListItemXmlModifier class directly in the posting (it is over 700 lines of code) but instead I will briefly explain what changes this class makes to the XML returned by the GetContent and GetChanges methods. The modifications to the XML are actually pretty simple and semantically there are only two types of changes – these correspond to the result XML-s of the GetContent (ObjectType=List) and GetContent (ObjectType=Folder) methods (the result XML of the GetChanges method has a more complex structure but contains the above two fragments, in one or more occurrences, where list and list item changes are available).
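Since the Folder-level output is not well-formed XML, a DOM load/save round-trip would re-serialize it and alter the original formatting (which, as noted earlier, produced crawler errors in my early attempts). The core idea of the modification can therefore be illustrated as plain string surgery – find each "z:row" tag and splice the extra attributes in, leaving every other byte untouched. The class below is a simplified sketch of that idea, not the actual 700-line implementation (which derives the added values from the existing attributes instead of taking a fixed string):

```csharp
using System.Text.RegularExpressions;

static class RowAttributeInjector
{
    // Splices extra attributes into every <z:row ...> tag, right after the
    // tag name and before the first original attribute, without touching
    // any other part of the (possibly not well-formed) payload.
    public static string AddAttributes(string folderXml, string extraAttributes)
    {
        return Regex.Replace(folderXml, @"<z:row\s",
            "<z:row " + extraAttributes + " ");
    }
}
```

For example, `AddAttributes(xml, "ows_Env.IsListItem='1'")` would prepend that attribute to every list item row in a GetContent (Folder) response.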

Let’s start with a sample XML from the standard SiteData.GetContent(ObjectType=List) method (I’ve trimmed some of the elements for brevity):

<List>
  <Metadata ID="{1d53a556-ae9d-4fbf-8917-46c7d97ebfa5}" LastModified="2011-01-17 13:24:18Z" Title="Pages" DefaultTitle="False" Description="This system library was created by the Publishing feature to store pages that are created in this site." BaseType="DocumentLibrary" BaseTemplate="850" DefaultViewUrl="/Pages/Forms/AllItems.aspx" DefaultViewItemUrl="/Pages/Forms/DispForm.aspx" RootFolder="Pages" Author="System Account" ItemCount="4" ReadSecurity="1" AllowAnonymousAccess="False" AnonymousViewListItems="False" AnonymousPermMask="0" CRC="699748088" NoIndex="False" ScopeID="{a1372e10-8ffb-4e21-b627-bed44a5130cd}" />
  <ACL>
    <permissions>
      <permission memberid='3' mask='9223372036854775807' />
      ....
    </permissions>
  </ACL>
  <Views>
    <View URL="Pages/Forms/AllItems.aspx" ID="{771a1809-e7f3-4c52-b346-971d77ff215a}" Title="All Documents" />
    ....
  </Views>
  <Schema>
    <Field Name="FileLeafRef" Title="Name" Type="File" />
    <Field Name="Title" Title="Title" Type="Text" />
    <Field Name="Comments" Title="Description" Type="Note" />
    <Field Name="PublishingContact" Title="Contact" Type="User" />
    <Field Name="PublishingContactEmail" Title="Contact E-Mail Address" Type="Text" />
    <Field Name="PublishingContactName" Title="Contact Name" Type="Text" />
    <Field Name="PublishingContactPicture" Title="Contact Picture" Type="URL" />
    <Field Name="PublishingPageLayout" Title="Page Layout" Type="URL" />
    <Field Name="PublishingRollupImage" Title="Rollup Image" Type="Note" TypeAsString="Image" />
    <Field Name="Audience" Title="Target Audiences" Type="Note" TypeAsString="TargetTo" />
    <Field Name="ContentType" Title="Content Type" Type="Choice" />
    <Field Name="MyLookup" Title="MyLookup" Type="Lookup" />
    ....
  </Schema>
</List>

The XML contains the metadata properties of the queried SharePoint list, the most important part of which is contained in the Schema/Field elements – the simple definitions of the fields in this list. It is easy to deduce that the fields that the index engine encounters in this part of the XML will be recognized and appear as crawled properties in the search index. So what if we start adding fields of our own – this won’t solve the thing by itself because we will further need list items with values for these “added” fields (we’ll see that in the second XML sample) but it is the first required bit of the “hack”. The custom service implementation will actually add several extra “Field” elements like these:

    <Field Name='ContentTypeId.Text' Title='ContentTypeId' Type='Note' />
    <Field Name='Author.Text' Title='Created By' Type='Note' />
    <Field Name='Author.ID' Title='Created By' Type='Integer' />
    <Field Name='MyLookup.Text' Title='MyLookup' Type='Note' />
    <Field Name='MyLookup.ID' Title='MyLookup' Type='Integer' />
    <Field Name='PublishingRollupImage.Html' Title='Rollup Image' Type='Note' />
    <Field Name='PublishingPageImage.Html' Title='Page Image' Type='Note' />
    <Field Name='PublishingPageContent.Html' Title='Page Content' Type='Note' />
    <Field Name='Env.SiteId' Title='Env.SiteId' Type='Text' />
    <Field Name='Env.WebId' Title='Env.WebId' Type='Text' />
    <Field Name='Env.ListId' Title='Env.ListId' Type='Text' />
    <Field Name='Env.IsListItem' Title='Env.IsListItem' Type='Integer' />

You can immediately notice that these "new" fields are actually related to already existing fields of the SharePoint list in the schema XML that's being modified. You can see that I used a specific naming convention for the "Name" attribute – with a dot and a short suffix. The crawled properties that the index engine will generate will also contain the dot and the suffix, so it will be easy for you to locate them in the "crawled properties" page in the SSP admin site. From the "Name" attribute you can immediately see which original field each new field relates to. In short, the rules for creating these new fields are:

  • For every original lookup field (both single and multiple lookup columns and all derived lookup columns, e.g. user fields) two additional fields are added – with the suffixes “.ID” and “.Text” and field “Type” attribute “Integer” and “Note” respectively.
  • For every original publishing “HTML” and “Image” field one extra field with the “.Html” suffix is added.
  • For all lists the "ContentTypeId.Text" extra field is added with the "Type" attribute set to "Note".
  • For all lists the additional fields “Env.SiteId”, “Env.WebId”, “Env.ListId”, “Env.IsListItem” are added.
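The four rules above can be sketched as a small schema post-processor: given the Name/Type pairs of the original Field elements it yields the derived field definitions appended to the Schema element. This is an illustration of the rules only – the real implementation works on the schema XML itself and distinguishes field types via the Type/TypeAsString attributes, so the simple string matching below is an assumption made for brevity:

```csharp
using System.Collections.Generic;

static class ExtraFieldGenerator
{
    public static List<string> DerivedFields(IEnumerable<KeyValuePair<string, string>> fields)
    {
        List<string> extra = new List<string>();
        foreach (KeyValuePair<string, string> f in fields)
        {
            // Rule 1: lookup-like fields (lookups and derived types, e.g. user fields)
            if (f.Value == "Lookup" || f.Value == "User")
            {
                extra.Add("<Field Name='" + f.Key + ".ID' Type='Integer' />");
                extra.Add("<Field Name='" + f.Key + ".Text' Type='Note' />");
            }
            // Rule 2: publishing HTML/Image fields
            else if (f.Value == "HTML" || f.Value == "Image")
            {
                extra.Add("<Field Name='" + f.Key + ".Html' Type='Note' />");
            }
        }
        // Rules 3 and 4: added for every list
        extra.Add("<Field Name='ContentTypeId.Text' Type='Note' />");
        extra.Add("<Field Name='Env.SiteId' Type='Text' />");
        extra.Add("<Field Name='Env.WebId' Type='Text' />");
        extra.Add("<Field Name='Env.ListId' Type='Text' />");
        extra.Add("<Field Name='Env.IsListItem' Type='Integer' />");
        return extra;
    }
}
```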

So, we have already extra fields in the list schema, the next step is to have them in the list item data populated with the relevant values. Let me first show you a sample of the standard unmodified XML output of the GetContent(ObjectType=Folder) method (I trimmed some of the elements and reduced the values of some of the attributes for brevity):

<Folder>
  <Metadata>
    <scope id='{5dd2834e-902d-4db0-8db2-4a1da762a620}'>
      <permissions>
        <permission memberid='1' mask='206292717568' />
        ....
      </permissions>
    </scope>
  </Metadata>
  <xml xmlns:s='uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882' xmlns:dt='uuid:C2F41010-65B3-11d1-A29F-00AA00C14882' xmlns:rs='urn:schemas-microsoft-com:rowset' xmlns:z='#RowsetSchema'>
    <s:Schema id='RowsetSchema'>
      <s:ElementType name='row' content='eltOnly' rs:CommandTimeout='30'>
        <s:AttributeType name='ows_ContentTypeId' rs:name='Content Type ID' rs:number='1'>
          <s:datatype dt:type='int' dt:maxLength='512' />
        </s:AttributeType>
        <s:AttributeType name='ows__ModerationComments' rs:name='Approver Comments' rs:number='2'>
          <s:datatype dt:type='string' dt:maxLength='1073741823' />
        </s:AttributeType>
        <s:AttributeType name='ows_FileLeafRef' rs:name='Name' rs:number='3'>
          <s:datatype dt:type='string' dt:lookup='true' dt:maxLength='512' />
        </s:AttributeType>
        ....
      </s:ElementType>
    </s:Schema>
    <scopes>
    </scopes>
    <rs:data ItemCount='2'>
      <z:row ows_ContentTypeId='0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF390064DEA0F50FC8C147B0B6EA0636C4A7D400E595F4AC9968CC4FAD1928288BC9885A' ows_FileLeafRef='1;#Default.aspx' ows_Modified_x0020_By='myserver\sstanev' ows_File_x0020_Type='aspx' ows_Title='Home' ows_PublishingPageLayout='http://searchtest/_catalogs/masterpage/defaultlayout.aspx, Welcome page with Web Part zones' ows_ContentType='Welcome Page' ows_PublishingPageImage='' ows_PublishingPageContent='some content' ows_ID='1' ows_Created='2010-12-20T18:53:18Z' ows_Author='1;#Stefan Stanev' ows_Modified='2010-12-26T12:45:31Z' ows_Editor='1;#Stefan Stanev' ows__ModerationStatus='0' ows_FileRef='1;#Pages/Default.aspx' ows_FileDirRef='1;#Pages' ows_Last_x0020_Modified='1;#2010-12-26T12:45:32Z' ows_Created_x0020_Date='1;#2010-12-20T18:53:19Z' ows_File_x0020_Size='1;#6000' ows_FSObjType='1;#0' ows_PermMask='0x7fffffffffffffff' ows_CheckedOutUserId='1;#' ows_IsCheckedoutToLocal='1;#0' ows_UniqueId='1;#{923EEE29-44AB-4D1B-B65B-E3ECEAE1353E}' ows_ProgId='1;#' ows_ScopeId='1;#{5DD2834E-902D-4DB0-8DB2-4A1DA762A620}' ows_VirusStatus='1;#6000' ows_CheckedOutTitle='1;#' ows__CheckinComment='1;#' ows__EditMenuTableStart='Default.aspx' ows__EditMenuTableEnd='1' ows_LinkFilenameNoMenu='Default.aspx' ows_LinkFilename='Default.aspx' ows_DocIcon='aspx' ows_ServerUrl='/Pages/Default.aspx' ows_EncodedAbsUrl='http://searchtest/Pages/Default.aspx' ows_BaseName='Default' ows_FileSizeDisplay='6000' ows_MetaInfo='...' ows__Level='1' ows__IsCurrentVersion='1' ows_SelectTitle='1' ows_SelectFilename='1' ows_Edit='0' ows_owshiddenversion='26' ows__UIVersion='6656' ows__UIVersionString='13.0' ows_Order='100.000000000000' ows_GUID='{2C80A53D-4F38-4494-855D-5B52ED1D095B}' ows_WorkflowVersion='1' ows_ParentVersionString='1;#' ows_ParentLeafName='1;#' ows_Combine='0' ows_RepairDocument='0' ows_ServerRedirected='0' />
      ....
    </rs:data>
  </xml>
</Folder>

The list item data is contained below the "rs:data" element – there is one "z:row" element for every list item. The attributes of the "z:row" element contain the field values of the corresponding list item. You can see here that the attributes already have the "ows_" prefix, as do all crawled properties in the "SharePoint" category. You can notice that the attributes for lookup fields contain the unmodified item field data but the publishing "HTML" and "Image" columns are already modified – all HTML markup has been removed from them (for "Image" type columns this means that they become empty, since all the data they contain is in markup).
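The lookup wire format visible above (e.g. ows_FileLeafRef='1;#Default.aspx') packs the item id and the text value into one string, separated by ";#". The ".ID"/".Text" derived fields described further below rest on exactly this structure – the ".ID" field takes the leading integer while ".Text" keeps the raw value. A minimal sketch of the split:

```csharp
static class LookupValue
{
    // Returns the integer part of a "<id>;#<text>" lookup value.
    public static int IdPart(string raw)
    {
        return int.Parse(raw.Substring(0, raw.IndexOf(";#")));
    }

    // Returns the text part after the ";#" separator.
    public static string TextPart(string raw)
    {
        return raw.Substring(raw.IndexOf(";#") + 2);
    }
}
```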

And let’s see the additional attributes that the custom web service adds to the “z:row” elements of the list item data XML:

ows_ContentTypeId.Text='0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF3900242457EFB8B24247815D688C526CD44D0005E464F1BD83D14983E49C578030FBF6'

ows_PublishingPageImage.Html='&lt;img border="0" src="/PublishingImages/newsarticleimage.jpg" vspace="0" style="margin-top:8px" alt=""&gt;'

ows_PublishingPageContent.Html='&lt;b&gt;some content&lt;/b&gt;'

ows_Author.Text='1;#Stefan Stanev'

ows_Author.ID='1'

ows_Editor.Text='1;#Stefan Stanev'

ows_Editor.ID='1'

ows_Env.SiteId='ff96067a-accf-4763-8ec1-194f20fbf0f5'

ows_Env.WebId='b2099353-41d6-43a7-9b0d-ab6ad87fb180'

ows_Env.ListId='Pages'

ows_Env.IsListItem='1'

Here is how these fields are populated/formatted (as I mentioned above all the data is retrieved from the XML itself or from the context of the service request):

  • the lookup derived fields with the ".ID" and ".Text" suffixes – both get their values from the "parent" lookup column – the former is populated with the starting integer value of the lookup field, the latter is set with the unmodified value of the original lookup column. When the search index generates the corresponding crawled properties the ".ID" field can be used as a true integer property and the ".Text" one, although containing the original lookup value, will be treated as a simple text property by the index engine (remember that in the list schema XML the type of this extra field was set to "Note"). So what will be the difference between the ".Text" field and the original lookup column in the search index? The difference is that the value of the original lookup column will be trimmed in the search index and will contain only the text value without the integer part preceding it. And if you issue an SQL search query against a managed property mapped to the crawled property of a lookup field you will be able to retrieve only the textual part of the lookup value (this also holds for the filtering and sorting operations for this field type). Whereas with the ".Text" derived field you will have access to the original unmodified value of the lookup field.
  • the fields derived from the “HTML” and “Image” publishing field type with the “.Html” suffix – they are populated with the original values of the corresponding fields with the original markup intact. Since the values of the original fields in the list item data XML are already trimmed the original values are retrieved with a simple trick. The “z:row” element for every list item contains the “ows_MetaInfo” attribute which contains a serialized property bag with all properties of the underlying SPFile object for the current list item. This property bag happens to contain all list item field values which are non-empty. So what I do in this case is to parse the value of the ows_MetaInfo attribute and retrieve the unmodified values for all “Html” and “Image” fields that I need. An important note here – the ows_MetaInfo attribute (and its corresponding system list field – MetaInfo) is available only for document libraries and is not present in non-library lists, which means that this trick is possible only for library-type lists.
  • the ows_ContentTypeId.Text field gets populated from the value of the original ows_ContentTypeId field/attribute. The difference between the two is that the derived one is defined in the schema as a “Note” field so its value is treated by the search index as a text property.
  • the ows_Env.* fields get populated from service contextual data (see the implementation of the SiteDataHelper class). For the implementation of the XML modifications for the SiteData.GetChanges method these values are retrieved from the result XML itself. The value of ows_Env.IsListItem is always set to 1 (its purpose is to be used as a flag defining a superset of the standard "isdocument" managed property).
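The ows_MetaInfo trick from the second bullet can be sketched as a small parser. A loud caveat: the serialized property-bag format assumed below – one "name:typecode|value" entry per line – is what the payloads I have seen look like, not a documented contract, so treat this parser as an illustration of the idea rather than the actual implementation:

```csharp
using System;
using System.Collections.Generic;

static class MetaInfoParser
{
    // Parses an ows_MetaInfo-style serialized property bag into name/value
    // pairs, assuming "name:typecode|value" entries separated by CRLF.
    public static Dictionary<string, string> Parse(string metaInfo)
    {
        Dictionary<string, string> props = new Dictionary<string, string>();
        foreach (string line in metaInfo.Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries))
        {
            int colon = line.IndexOf(':');
            int pipe = colon < 0 ? -1 : line.IndexOf('|', colon + 1);
            if (colon < 0 || pipe < 0) continue;  // skip continuation/garbled lines
            props[line.Substring(0, colon)] = line.Substring(pipe + 1);
        }
        return props;
    }
}
```

With such a parser the unmodified value of a publishing "Html" or "Image" field (markup intact) can be looked up by its internal name even though the corresponding z:row attribute has been stripped of markup.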

Installation steps for the “hack” solution

  1. Download and extract the contents of the zip archive.
  2. Build the project in Visual Studio (it is a VS 2008 project).
  3. The assembly file (Stefan.SharePoint.SiteData.dll) should be deployed to the GAC
  4. The StefanSiteData.asmx file should be copied to {your 12 hive root}\ISAPI folder
  5. The global.asax file should be copied to the root folder of your SharePoint web application. Note that you will have to backup the original global.asax file before you overwrite it with this one.
  6. Open the web.config file in the target SharePoint web application and add an “appSettings” “add” element with key “UseSiteDataRewrite” and value “true”.
  7. Note that if you have more than one front-end server in your farm you should repeat steps 3-6 on all machines.
  8. After the installation is complete you can start the search crawler (full crawl) from the SSP admin site. It is a good idea to have a content source dedicated to the web application for which the custom SiteData service is enabled, so that you can see the results of the custom service immediately.
  9. After the crawling is complete you should check the crawl log for errors – check whether there’re unusual errors which were not occurring before the installation of the custom SiteData service.
  10. If there’re no errors in the crawl log you can check the “crawled properties” page in the SSP admin site – the new “dotted” crawled properties should be there and you can now create new managed properties that can be mapped to them.
  11. Note that the newly created managed properties are not ready for use before you run a second full crawl for the target content source.