Tuesday, December 14, 2010

Dissecting the SharePoint PowerShell Assignment Collections

With SP2010, Microsoft introduced PowerShell cmdlets as a replacement for / alternative to stsadm. Along with this, they brought SPAssignmentCollection, to minimize the impact of forgetting to dispose many of the SharePoint objects.

Its usage is simple enough: Start-SPAssignment and Stop-SPAssignment.
Both can be scoped as Global, SemiGlobal, or Local.
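Globally scoped assignments are even simpler - every disposable object opened between the two calls gets tracked (a minimal sketch):

Start-SPAssignment -Global
$web = Get-SPWeb http://someaddress
# do something with $web
Stop-SPAssignment -Global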

Locally scoped assignment collections pan out like this:
$gc = start-spassignment
$web = ($gc | Get-SPWeb http://someaddress)
# do something with $web
$gc | stop-spassignment
I won't spend more time going into details about usage - other blogs have done that already. And if you're curious, "get-help start-spassignment -full" goes a long way.

What's interesting, though, is what happens behind the scenes. Take SPWeb instantiation as an example: do we even need to pass an assignment collection, for trivial usage? The answer is "not necessarily".

The reason for this is the way the PowerShell bindings deal with the SharePoint objects. Whenever you open an SPWeb, such as
Get-SPWeb http://address
.. what happens is that an SPSite object will be instantiated, and OpenWeb will be called on it with the supplied url. SPWeb's constructor will pass itself back down to the SPSite, where it will be stored in a list of "owned" web instances. When SPSite.Dispose is called, all these owned instances will be disposed / closed as well.

Back in the PowerShell bindings, the SPSite has been constructed, and an SPWeb retrieved. This is the object returned to the caller. But before returning, the Get-SPWeb cmdlet will check for an active assignment collection - and upon not finding one: close the SPSite. Effectively, Get-SPWeb will return a disposed object, and no memory will leak.
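To illustrate, the lifecycle just described corresponds roughly to the following PowerShell - a simplified sketch of what the bindings do, not their actual code:

$site = new-object microsoft.sharepoint.spsite("http://address")
$web = $site.openweb()   # registers $web in $site's list of owned webs
# no active assignment collection is found, so the bindings call:
$site.dispose()          # closes $site and every owned web, $web included
# $web is then returned to the caller in this closed state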

This all holds true for opening a web, and reading certain already-initialized properties (such as url and id).

As soon as you start reading or writing other properties - even reading something as trivial as Title - the SPWeb will be re-opened, and kept open. The same goes for any nested instantiation / opening, such as enumerating child webs through SPWeb.Webs. You can verify this yourself, by checking the hidden "m_closed" field on the SPWeb instance, such as:
function is-closed($value) {
    if ($value -is [microsoft.sharepoint.spsite]) {
        $value.gettype().getfield("m_Request", [reflection.bindingflags]::nonpublic -bor [reflection.bindingflags]::instance).getvalue($value) -eq $null
    }
    elseif ($value -is [microsoft.sharepoint.spweb]) {
        [boolean]::parse($value.gettype().getfield("m_closed", [reflection.bindingflags]::nonpublic -bor [reflection.bindingflags]::instance).getvalue($value))
    }
    else {
        throw "invalid type"
    }
}

$w = get-spweb http://address
$w.id
is-closed $w # outputs true

$w = get-spweb http://address
$w.title
is-closed $w # outputs false
Similarly, for child enumeration - and given the "is-closed" function above - the following will yield a count equal to the number of child webs in the web it's targeting. In other words, we'll quickly have a bunch of instances leaking memory.
$w = get-spweb http://address
$a = @()
$w.webs | %{ $a += $_ }
($a | ?{ (is-closed $_) -eq $false } | measure-object).count
So this is where we'll really see the benefit of using assignment collections. They'll not only properly dispose of any opened instance returned by e.g. Get-SPWeb; they also take care of nested instantiation and retrieval.

This means that once we wrap the above block with calls to Start-SPAssignment and Stop-SPAssignment, the resulting "still open SPWebs" count should be 0:
spgc {
    $w = get-spweb http://address
    $a = @()
    $w.webs | %{ $a += $_ }
    set-variable -scope global -name a -value $a
}
($a | ?{ (is-closed $_) -eq $false } | measure-object).count
The trick here is the "spgc" function, which takes a scriptblock and wraps it with a global assignment collection:
function spgc([ScriptBlock]$block) {
    Start-SPAssignment -Global
    try {
        & $block
    }
    finally {
        # dispose the global assignment collection exactly once,
        # whether or not the block threw
        Stop-SPAssignment -Global
    }
}
Stop-SPAssignment manages to dispose the enumerated child webs not by taking note of each of them individually, but by disposing the SPSite they were all retrieved from. This is taken care of by the SharePoint PowerShell bindings, which add all disposable objects returned by a cmdlet to a global assignment collection, if one is defined.

While this collection of returned disposable objects is what makes Stop-SPAssignment do its job, it is also its Achilles' heel: what happens when the object returned by a PowerShell cmdlet isn't disposable itself, but is used to retrieve something that is?
spgc {
    $wa = get-spwebapplication http://address
    $s = $wa.sites[0] # open the first site in the web application
    set-variable -scope global -name site -value $s
    is-closed $s
    $s.allwebs # causes the site to be opened
    is-closed $s
}
is-closed $site
SPWebApplication instances aren't disposable, but you could easily find yourself using one - in PowerShell - to enumerate all sites in a certain web application. If you do, and then call anything on one of the retrieved sites that brings it to an open state, that site will not be disposed by Stop-SPAssignment. As such, the final is-closed call above will show that the SPSite object retrieved remains open after leaving the spgc block.
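Until that changes, the workaround is to dispose such instances yourself - a sketch along these lines, disposing each enumerated SPSite in a finally block:

$wa = get-spwebapplication http://address
foreach ($s in $wa.sites) {
    try {
        # do something with $s
        $s.rootweb.title
    }
    finally {
        $s.dispose()
    }
}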

So, in summary - do use Start-SPAssignment and Stop-SPAssignment in long-running PowerShell sessions / scripts, but pay close attention to their limitations as well as their powers.

Monday, December 13, 2010

How logic flaws in SharePoint's Element activation process can break Lookup fields

I came across this issue while I was - with as little markup as possible - deploying a Lookup field with a relative List reference. The list this field referenced was deployed with another feature in the wsp, upon which the field's feature depended. All in all a very simple setup, with dependencies that make sense.

With Lookup and LookupMulti fields, you have two options when binding them to a second list. You can either specify the guid, or a relative url. Here's an excerpt from MSDN, describing the "List" attribute:
Optional Text. Used to identify the list that is the target of a lookup field (Type="Lookup").

If the target list already exists, the value of the List attribute should be the string representation of the GUID (including braces) that identifies the target list. If the target is the same list as the one that the field belongs to, you can specify "Self".

If the target list does not yet exist, the value of the List attribute can be a web-relative URL such as "Lists/My List" but only if the target list is created in the same feature as the one that creates the lookup field. In this case, the value of the List attribute on the Field element must be identical to the value of the Url attribute on the ListInstance element that creates the target list.
In my case, seeing as I have no idea what the target list's id is as the field's feature is activating, I'd use the relative url. But wait a minute - the quote above states "[...] can be a web-relative URL such as 'Lists/My List' but only if the target list is created in the same feature as the one that creates the lookup field" - What?

That would be bad for two reasons.
  1. You'd be stuck putting a whole lot of functionality in one feature - something that'd generally make dependencies difficult to express, and impossible to get right.
  2. For lists already deployed, you'd have no way of getting their guids into your element markup, unless you either wrote supporting code to update the xml dynamically, or created and deployed the field entirely in code .. both of which defeat the purpose of the xml markup in the first place.
So we can all agree that this limitation would have been awful - had what's noted in the MSDN text actually been true. Because it most certainly isn't.

After deploying a plethora of lookup fields - some exported from SharePoint, others written by hand - with varying luck referencing lists relatively, I got annoyed, and turned to Reflector.

Element activation process in detail

Long story short, SPFieldElement - which represents a <Field /> block in an elements.xml - has a method called PerformFixUpIfLookUpField. It is responsible for looking up a referenced web (through the WebId attribute) and list (through the List attribute), based on either guids, relative references or - in the case of webid - the special "~sitecollection". If the list reference is a relative one, and the method manages to find the list in question, the list reference will be updated with an actual guid.

So that's all fine and dandy. If PerformFixUpIfLookUpField is called for a Field element, relative references will be dealt with. If it's called. And this is where the catch is.

PerformFixUpIfLookUpField is called from two places, and both are in the same method: SPFieldElement's ElementActivated. From that method, PerformFixUpIfLookUpField is called if, and only if, either of these statements holds true:
  1. .. A field with the deployed field's id already exists on the site and the field's Overwrite attribute is "true" and the existing field isn't sealed or readonly.
  2. .. The deployed field is in a sandboxed solution or its Overwrite attribute is "true".
This means that unless the Lookup field is deployed in a sandboxed solution, relative list references will never be resolved if the Overwrite attribute is anything but "true". That makes no sense. At all.
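Expressed as code, the gate boils down to something like this - a paraphrase of the two conditions above with made-up variable names, not the literal reflected source:

// When SPFieldElement.ElementActivated decides to call PerformFixUpIfLookUpField.
bool fixUp =
    (existingField != null && overwrite && !existingField.Sealed && !existingField.ReadOnly)
    || inSandboxedSolution
    || overwrite;
// Outside sandboxed solutions, this reduces to just 'overwrite'.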

This makes me wonder, though - and do correct me if I'm wrong here. The Feature activation process strikes me as a central piece of the SharePoint code base, yet judging by this logic flaw, not a single unit test was written to ensure its correctness.

In Summary

So long as you specify Overwrite="true" for Field elements of type Lookup, LookupMulti, TaxonomyFieldType, etc., relative references will work. And you can completely disregard the "created in the same feature" statement in the MSDN reference.
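For reference, a lookup field element along these lines illustrates the point - a minimal sketch, where the id, names and list url are placeholders:

<Field
    ID="{11111111-2222-3333-4444-555555555555}"
    Type="Lookup"
    Name="MyLookup"
    StaticName="MyLookup"
    DisplayName="My Lookup"
    List="Lists/MyList"
    ShowField="Title"
    Overwrite="true" />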

Saturday, December 11, 2010

Building a Silverlight & jQuery powered drag'n'drop-from-desktop uploader for SharePoint 2010

This is a cross-post from NothingButSharePoint.com. Head over to the developer section for more SharePoint articles.

Introduction

The scope of this article is to demonstrate how to build a custom SharePoint file uploader, which will essentially let the user drag files straight into the web browser, and drop them onto either a document library, or a document library web part.

Contrary to the default uploaders in SharePoint, it will require no clicks or ribbon navigation to upload the files - and it quite efficiently deals with multiple files.



Learning Points

The solution demonstrates use of:
  • Using delegate controls to add user controls (.ascx) to the various SharePoint pages
  • Loading jQuery and other script dependencies through use of a custom script loader
  • Using the jQuery Templates plugin in a SharePoint 2010 setting
  • Dynamically loading and communicating with Silverlight from JavaScript
  • Handling Desktop Drag events in the browser and Silverlight
  • Uploading files to SharePoint, using the standard REST services
  • Microsoft's Reactive Extensions (Rx) framework, for asynchronous operations

Requirements

Going deep into each step of this article would be way beyond any practical scope, so I'm assuming that the reader has at least basic knowledge of SharePoint 2010 as a development platform, and some experience with JavaScript and Silverlight.

Additionally, there are some software dependencies, each of which is referenced where it's used further down:
  • Microsoft's Reactive Extensions (Rx) for Silverlight 4
  • CKS: Development Tools Edition (CKSDev)
  • My open source Grep.SharePoint.CoreJS script loader solution

Step 1 - Creating the Visual Studio solution

Open Visual Studio, and start a new "Empty SharePoint Project". Name it DesktopUploader, or pick another name that suits you better (do note that selecting another name will require changes to various strings further down in this article).



Next, select to deploy it as a farm solution, and enter the URL of your SharePoint development portal.

Now that the Visual Studio Solution is started, and we've got the SharePoint project in place, we need to add the Silverlight project.

Go to File -> Add -> New Project, and select the "Silverlight Application" template.



In the next frame, make sure you deselect "Host the Silverlight application in a new Web site", and that Silverlight Version is set to 4.



Now that the Silverlight app is created, we'll add a reference to Microsoft's Reactive Extensions. If you downloaded and installed them from the link provided above, you should be able to right click the Silverlight project's node in the solution Explorer, and select "Add Reference".

Navigate to "C:\Program Files (x86)\Microsoft Cloud Programmability\Reactive Extensions\v1.0.2787.0\SL4", and select the top four dlls, then click ok.



What we'll want to do next, is make sure that the Silverlight project copies its output to one of the folders the SharePoint project deploys.

Note that this is something you could do with a Project Output Reference added to a Module project item (details here). If you wish to deploy the Silverlight application to a gallery within the site -- such as an assets library, or the master page gallery -- a Project Output Reference is certainly the way to go. For deployments to Layouts, or any of the other file system locations on the frontend servers, Project Output References are of no help.

For this article we'll stick to Layouts folder deployment of the Silverlight application. This gives us the opportunity to use CKS:DEV to quick deploy the Silverlight app while we're developing it - rather than pushing files to the database for each rebuild. Once stable and ready for deployment, you may want to switch to the Module (and Project Output Reference) approach.

Right click the Silverlight project's node in the Solution Explorer and click "Properties" at the bottom. Once the property panes are open, go to the "Build Events" tab, and update the "Post-build event command line":

copy /Y "$(TargetDir)$(TargetName).xap" "$(SolutionDir)DesktopUploader\Layouts\DesktopUploader\"

If you named the SharePoint project something other than DesktopUploader, you'll have to update the string above in two places.



Close the project properties, and open the Project menu on the top menu bar. From there, select "Project Dependencies", and pick the "DesktopUploader" project from the dropdown. Next, make sure the checkbox next to the Silverlight project's name is checked.



This ensures that the Silverlight project is built before the SharePoint project, and that we'll thus get an updated Silverlight .xap file to deploy.

The final piece of the build-centric configuration is to map the Layouts folder into the SharePoint project. This ties in with the build event above: we'll be deploying the Silverlight .xap to a folder within the SharePoint Layouts folder.

Right click the SharePoint project node in the solution explorer, open the Add menu and click "SharePoint 'Layouts' Mapped Folder". Now build the solution once, and right click the DesktopUploader folder within the mapped Layouts folder in the solution explorer. Select Add -> Existing Item.



Navigate to the DesktopUploader project folder, and find Layouts within that, and finally DesktopUploader. Inside that folder, pick the .xap file and click Add.



Step 2 - Building the Silverlight uploader

Silverlight is the workhorse in this solution, inasmuch as it both handles the actual file drop events and uploads the files to the server.

The Silverlight project will consist of:
  • A view that receives drag and drop events, triggers file sends, and communicates with the JavaScript.
  • A class which takes care of uploading a file to the SharePoint REST file service.

We'll start with the XAML markup. In the MainPage.xaml file, replace the Grid's code with the following:

<Grid x:Name="LayoutRoot">
    <Border BorderBrush="DarkSlateBlue" BorderThickness="3">
        <TextBlock VerticalAlignment="Center" HorizontalAlignment="Center" FontSize="14" Foreground="DarkSlateBlue">
            Drop files here to upload
        </TextBlock>
    </Border>
    <Rectangle
        AllowDrop="True"
        Drop="OnDrop"
        DragLeave="OnDragLeave"
        DragEnter="OnDragEnter"
        Fill="Transparent"></Rectangle>
</Grid>

This markup will draw two overlapping structures: a transparent overlay (the Rectangle), and a Border with some text in the center. The event handlers have to be defined on a transparent overlay, rather than on LayoutRoot, to get consistent events for the whole Silverlight app. Had there not been an overlay, and had LayoutRoot received the events, DragLeave and DragEnter would fire as the mouse dragged past the other elements (such as the TextBlock). For reasons that will become obvious later, we need DragEnter to fire once as the mouse drags onto the Silverlight app, and DragLeave to fire once as we drag off it.

The codebehind for this page will mainly process and act upon the drag events. We'll deal with the rather simple DragEnter and DragLeave first.

private void OnDragEnter(object sender, DragEventArgs e)
{
    HtmlPage.Window.Eval("window.SLFileDrop.onDragEntersDropZone()");
}

private void OnDragLeave(object sender, DragEventArgs e)
{
    HtmlPage.Window.Eval("window.SLFileDrop.onDragLeavesDropZone()");
}

Both of these events are offloaded for processing by the JavaScript. HtmlPage.Window.Eval allows Silverlight to evaluate and execute JavaScript within the browser, and as such retrieve / send information to the scripts present on the page. The objects and methods referenced in the snippet above, e.g. SLFileDrop and onDragEntersDropZone, haven't been defined yet - we'll get back to those when we type up the JavaScript.
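Note that Eval also returns the value of the evaluated expression, which is how we'll pull information from the page shortly. A trivial round-trip looks like this (the expression here is illustrative only):

// Evaluate a JavaScript expression in the hosting page and read the result
// back into Silverlight.
string title = HtmlPage.Window.Eval("document.title") as string;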

Moving on to the OnDrop handler.

private void OnDrop(object sender, DragEventArgs e)
{
    try
    {
        string rootUrl = HtmlPage.Window.Eval("window.SLFileDrop.getRootUrl()") as string;
        if (rootUrl == null) return;
        HtmlPage.Window.Eval("window.SLFileDrop.onDrop()");
        var files = e.Data.GetData(DataFormats.FileDrop) as FileInfo[];
        IEnumerable<FileInfo> filesToUpload = files.Where(file => file.Exists);
        UploadFilesAsync(filesToUpload, rootUrl + (rootUrl.EndsWith("/") ? "" : "/"));
    }
    catch (Exception ex)
    {
        DisplayErrorMessage(ex);
    }
}

This demonstrates information fetching from the JavaScript, as getRootUrl is called - a method which returns the url of the document library folder we are to send the file to.

The main piece of this method is the retrieval of the files dropped onto the application, which are returned by the call to GetData. We then go on to pick the files whose Exists flag is set to true, and pass them on to another method of ours - the asynchronous uploader.

And that's where it all seems to get complex.

private void UploadFilesAsync(IEnumerable<FileInfo> filesToUpload, string folderUrl)
{
    IDisposable uploadTaskDisposable = null;
    uploadTaskDisposable =
        (from file in filesToUpload
         select BeginUploadFile(file, folderUrl))
        .Merge().Skip(filesToUpload.Count() - 1)
        .ObserveOnDispatcher()
        .Subscribe(
            o =>
            {
                UploadComplete();
                uploadTaskDisposable.Dispose();
            },
            exception => DisplayErrorMessage(exception));
}

The goal of Microsoft's Reactive Extensions framework is to simplify event driven programming. Silverlight applications, like most other GUI applications, for the most part react to some UI event - and either produce a response immediately (synchronously), or at some later point in time (asynchronously). In the case of our application, we react to drop events, and start long running uploads which will themselves raise events when they are done. The glue code in our Silverlight application can thus be thought of as sitting in the middle, _observing_ events from sources around it. Rx turns events and synchronous calls into IObservable sequences.
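At its smallest, the Rx pattern used throughout this article looks like the following - a condensed version of the CatchProgress method further down, using the same v1.0-era Rx signatures, given an HttpFileUploader instance named uploader (the class defined later in this article):

// Adapt a plain .NET event (declared as EventHandler<TEventArgs>) into an
// IObservable sequence; disposing the subscription detaches the handler again.
IObservable<IEvent<HttpFileUploader.UploadEventArgs>> progress =
    Observable.FromEvent<HttpFileUploader.UploadEventArgs>(
        h => uploader.UploadProgressChanged += h,
        h => uploader.UploadProgressChanged -= h);
IDisposable subscription =
    progress.Subscribe(e => Debug.WriteLine(e.EventArgs.BytesSent));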

To get a better hold on what's going on in the code above, and what we'll be doing in the following methods, let us start by getting a hold of what we're actually trying to express.

For each of the files dropped on the Silverlight application, we want to start a new upload task. For each such new upload task, we want to have periodic updates on progress, as well as be notified when the upload finishes. When we've received completion events for all files, we want to notify the JavaScript that we're all done.

Hence: for each file, we call BeginUploadFile, and select the return. This will be an IObservable - and thus the result of the Linq block will be a sequence of observables. The next call merges all of these observable event producers into a single stream of events, from which we skip all but the last one. The Subscribe call at the end will be raised for the final upload-finished event, and will go on to notify the JavaScript about this.

private IObservable<IEvent<HttpFileUploader.UploadEventArgs>> BeginUploadFile(FileInfo file, string folderUrl)
{
    var uploader = new HttpFileUploader(Dispatcher, file, "PUT", new Uri(folderUrl + file.Name, UriKind.Relative));
    var progressDisposable = CatchProgress(file, uploader);
    var finish = CatchUploadFinished(file, uploader);
    finish.Subscribe(s => progressDisposable.Dispose());
    HtmlPage.Window.Eval(string.Format("window.SLFileDrop.onUploadStart('{0}')", file.Name));
    uploader.StartUpload();
    return finish;
}

BeginUploadFile will start by creating a new HttpFileUploader - a class I'll soon supply you with - and instruct it to HTTP PUT a file to the url indicated by folderUrl (which we requested from the JavaScript earlier). It then goes on to catch progress events from the uploader, as well as upload finish events. The call to Subscribe on the finish event will ensure that the progressDisposable is released properly once we're no longer interested in progress updates from this single file.

Finally it will send a notification to the JavaScript, that we've started uploading a named file, and return the finish event observable.

private static IDisposable CatchProgress(FileInfo fi, HttpFileUploader uploader)
{
    return Observable.FromEvent<HttpFileUploader.UploadEventArgs>(
            o => uploader.UploadProgressChanged += o,
            o => uploader.UploadProgressChanged -= o)
        .Throttle(TimeSpan.FromMilliseconds(250))
        .ObserveOnDispatcher()
        .Subscribe(s => HtmlPage.Window.Eval(
            string.Format("window.SLFileDrop.onUploadProgress('{0}', {1}, {2}, {3}, {4})",
                fi.Name, s.EventArgs.BytesSent,
                s.EventArgs.BytesTotal, 0, 0)));
}

Hooking onto progress events employs another Rx trick. It creates an observable sequence from a plain .NET event object (UploadProgressChanged on HttpFileUploader). This observable is then filtered by calling Throttle with a TimeSpan of 250 ms - meaning that we don't want progress updates more often than once every quarter of a second.

For each raised event (that is, at most one per quarter second), we call back to the JavaScript, notifying it of progress for the named file.

private static IObservable<IEvent<HttpFileUploader.UploadEventArgs>> CatchUploadFinished(FileInfo fi, HttpFileUploader uploader)
{
    IObservable<IEvent<HttpFileUploader.UploadEventArgs>> obs =
        Observable.FromEvent<HttpFileUploader.UploadEventArgs>(o => uploader.UploadFinished += o,
            o => uploader.UploadFinished -= o);
    obs.Subscribe(s => HtmlPage.Window.Eval(string.Format("window.SLFileDrop.onUploadComplete('{0}')", fi.Name)));
    return obs;
}

Catching file upload finished events isn't too different from catching progress events. For each of these events, we notify the JavaScript. Looking back at BeginUploadFile, we'll notice that we don't return a disposable from this method - unlike the CatchProgress method. This is because the finish observable will be part of the sequence bound in UploadFilesAsync - and will thus be disposed as part of the cleanup going down when all files have finished uploading.

The missing pieces of the MainPage.xaml.cs file are as follows. There isn't much to them, so I'll leave you to figure out their purpose.

private static void UploadComplete()
{
    HtmlPage.Window.Eval("window.SLFileDrop.onAllUploadsComplete()");
}

private static MessageBoxResult DisplayErrorMessage(Exception exception)
{
    return MessageBox.Show(exception.ToString(), "Error", MessageBoxButton.OK);
}

The following is the complete source listing for the HttpFileUploader class.

public class HttpFileUploader
{
    private readonly Dispatcher _dispatcher;
    private readonly FileInfo _file;
    private readonly string _method;
    private readonly Uri _uploadUrl;
    private long _bytesTotal;
    private long _bytesUploaded;
    private FileStream _fileStream;
    private bool _isActive;

    public HttpFileUploader(Dispatcher dispatcher, FileInfo file, string method, Uri url)
    {
        _dispatcher = dispatcher;
        _file = file;
        _method = method;
        _uploadUrl = url;
        _isActive = false;
    }

    public long BytesUploaded
    {
        get { return _bytesUploaded; }
    }

    public event EventHandler<UploadEventArgs> UploadError;
    public event EventHandler<UploadEventArgs> UploadFinished;
    public event EventHandler<UploadEventArgs> UploadProgressChanged;

    public bool StartUpload()
    {
        if (_isActive)
        {
            throw new InvalidOperationException("Uploader is already active");
        }
        _isActive = true;
        _fileStream = _file.OpenRead();
        var webRequest = (HttpWebRequest)WebRequestCreator.ClientHttp.Create(_uploadUrl);
        webRequest.Method = _method;
        webRequest.ContentType = "multipart/form-data; charset=utf-8";
        webRequest.BeginGetRequestStream(WriteToStreamCallback, webRequest);
        _bytesUploaded = 0;
        _bytesTotal = _fileStream.Length;
        return true;
    }

    private void InvokeUploadError(string error)
    {
        var args = new UploadEventArgs
        {
            BytesSent = _bytesUploaded,
            BytesTotal = _bytesTotal,
            ErrorMessage = error,
            IsDone = false,
            IsError = true
        };
        EventHandler<UploadEventArgs> handler = UploadError;
        if (handler != null) handler(this, args);
    }

    private void InvokeUploadFinished(HttpStatusCode statusCode)
    {
        var args = new UploadEventArgs
        {
            IsDone = true,
            IsError = false,
            BytesSent = _bytesUploaded,
            BytesTotal = _bytesUploaded,
            StatusCode = statusCode
        };
        EventHandler<UploadEventArgs> handler = UploadFinished;
        if (handler != null) handler(this, args);
    }

    private void InvokeUploadProgressChanged()
    {
        var args = new UploadEventArgs
        {
            IsDone = false,
            IsError = false,
            BytesSent = _bytesUploaded,
            BytesTotal = _bytesTotal
        };
        EventHandler<UploadEventArgs> handler = UploadProgressChanged;
        if (handler != null) handler(this, args);
    }

    private void ReadHttpResponseCallback(IAsyncResult asynchronousResult)
    {
        _isActive = false;
        try
        {
            var webRequest = (HttpWebRequest)asynchronousResult.AsyncState;
            var webResponse = (HttpWebResponse)webRequest.EndGetResponse(asynchronousResult);
            _fileStream.Close();
            _fileStream.Dispose();
            _dispatcher.BeginInvoke(
                () => InvokeUploadFinished(webResponse.StatusCode));
        }
        catch (Exception e)
        {
            if (_fileStream != null)
            {
                _fileStream.Close();
                _fileStream.Dispose();
                _fileStream = null;
            }
            _isActive = false;
            InvokeUploadError(e.Message);
        }
    }

    private void WriteToStreamCallback(IAsyncResult asynchronousResult)
    {
        try
        {
            var webRequest = (HttpWebRequest)asynchronousResult.AsyncState;
            Stream requestStream = webRequest.EndGetRequestStream(asynchronousResult);
            var buffer = new Byte[131072];
            int bytesRead = 0;
            while ((bytesRead = _fileStream.Read(buffer, 0, buffer.Length)) != 0)
            {
                requestStream.Write(buffer, 0, bytesRead);
                requestStream.Flush();
                _bytesUploaded += bytesRead;
                _dispatcher.BeginInvoke(InvokeUploadProgressChanged);
            }
            requestStream.Close();
            webRequest.BeginGetResponse(ReadHttpResponseCallback, webRequest);
        }
        catch (Exception e)
        {
            if (_fileStream != null)
            {
                _fileStream.Close();
                _fileStream.Dispose();
                _fileStream = null;
            }
            _isActive = false;
            InvokeUploadError(e.Message);
        }
    }

    #region Nested type: UploadEventArgs

    public class UploadEventArgs : EventArgs
    {
        public long BytesSent { get; set; }
        public long BytesTotal { get; set; }
        public string ErrorMessage { get; set; }
        public bool IsDone { get; set; }
        public bool IsError { get; set; }
        public HttpStatusCode StatusCode { get; set; }
    }

    #endregion
}

Describing this in depth is outside the scope of this article, but a short sum-up is nevertheless in order. What the class essentially does is create a new HTTP PUT connection to the SharePoint site, using the ClientHttp stack in Silverlight. The client stack was introduced in Silverlight 3, and separates itself from the browser stack mainly as follows:
  1. It allows REST HTTP verbs, such as PUT.
  2. It doesn't share cookies with the browser, so sites with Basic Authentication will yield a password popup.
For scenarios where this uploader is to be used with Basic Auth sites, the HttpFileUploader will have to be rewritten to not use the REST services (e.g. by deploying a custom WCF upload service capable of POST on the server side).

The HTTP connection will be established asynchronously, so in terms of our BeginUploadFile described earlier, the call to StartUpload will return immediately.

This pretty much concludes the Silverlight part of the solution.

Next up is the SharePoint project.

Step 3 - Setting up the SharePoint project, adding a delegate and user control

What we'll have to do next is add a delegate control to the project, which does the actual script injection. Since we can't know which pages contain document library web parts up front, this will be injected into all the pages of the portal. The pages without a document library view won't be affected. Pages with one or more doclibs on them will have the Silverlight app added, and scripts loaded.

Right click the SharePoint project's node in the Solution Explorer, and go to Add -> New Item. Go to the SharePoint 2010 templates, and pick "User Control".



Name the control e.g. "DesktopUploader.ascx" and click Add.

To have this user control injected into the various SharePoint pages, we'll have to reference it from a delegate control. Right click the SharePoint project's node in the Solution Explorer again, and pick "Delegate Control (CKSDev)" from the SharePoint 2010 items.



Name the control e.g. "DesktopUploaderDelegate" and click Add.

In the wizard that follows, enter the following values:

"Specify the ID of the control": AdditionalPageHead
"Sequence Number": 20
"Specify the relative URL": ~/_ControlTemplates/DesktopUploader/DesktopUploader.ascx



In page two of the wizard, simply click Finish.

You will now have a new Feature, as well as a delegate control. You can go ahead and rename the feature if you'd like.

At this point your project should look something like the following.



We're now ready for the JavaScript bits to go into the user control.

Step 4 - Implementing the JavaScript

The JavaScript will go inside the "DesktopUploader.ascx" user control.

At the top of the user control, directly below the assembly and control directives (prefixed with "<%@"), goes a style sheet. The styles are mostly for the progress indicators we show as the files are being uploaded.

<style>
    #SLFileDropStatus
    {
        position: absolute;
        z-index: 300;
    }
    .SLFileDropEntry
    {
        white-space: nowrap;
        overflow: hidden;
    }
    .SLFileDropLabel
    {
        font-size: 8pt;
        color: Gray;
        font-family: Verdana, Tahoma, helvetica;
        display: inline-block;
    }
    .SLFileDropProgressBar
    {
        display: block;
        height: 6px;
        width: 100px;
        border: 1px solid gray;
    }
    .SLFileDropProgress
    {
        width: 0px;
        background: gray;
        margin: 1px;
        height: 4px;
    }
    .SLFileDropActive
    {
        border: 2px solid #44aff6;
    }
</style>

Following this, we'll add a server side script tag, which provides us with the server relative url of the current SharePoint site.

<script runat="server" language="C#">
    private static string ServerRelativeUrl
    {
        get
        {
            string url = SPContext.Current.Web.ServerRelativeUrl;
            return url.EndsWith("/") ? url.Substring(0, url.Length - 1) : url;
        }
    }
</script>

Given this server side property, we can be sure to refer to the various resources we need with a site relative url.

The final pieces to add, before getting to the actual script, are a progress indicator placeholder and two jQuery templates.

<div id="SLFileDropStatus"></div>
<script id="SLTemplate" type="text/x-jquery-tmpl">
    <div id="${id}" style="${style}">
        <object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%">
            <param name="source" value="${source}" />
            <param name="onError" value="onSilverlightError" />
            <param name="background" value="white" />
            <param name="minRuntimeVersion" value="4.0.50401.0" />
            <param name="autoUpgrade" value="true" />
            <a href="http://go.microsoft.com/fwlink/?LinkID=149156&v=3.0.40624.0" style="text-decoration:none">
                <img src="http://go.microsoft.com/fwlink/?LinkId=108181" alt="Get Microsoft Silverlight" style="border-style:none"/>
            </a>
        </object>
    </div>
</script>
<script id="SLFileUploadEntryTemplate" type="text/x-jquery-tmpl">
    <div SLFileName="${name}" class="SLFileDropEntry">
        <table cellpadding='0' cellspacing='0'>
            <tr>
                <td>
                    <div class='SLFileDropLabel'>${name}</div>
                </td>
                <td width='5px'/>
                <td>
                    <div class='SLFileDropProgressBar'>
                        <div class='SLFileDropProgress'/>
                    </div>
                </td>
            </tr>
        </table>
    </div>
</script>

If you aren't familiar with jQuery templates, you can think of them as html code structures, which will have data placeholders in them - such as ${name} and ${source} above - that will be instantiated and filled by jQuery. Given the Silverlight template above, we can easily provision Silverlight apps given an id, a style string and a source url for the xap file, without having to type out the whole html block within our JavaScript.
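As a quick illustration, instantiating the Silverlight template above boils down to the following (the values passed here are made up for the example):

// Fill the SLTemplate placeholders and append the resulting html to the page.
$("#SLTemplate")
    .tmpl({
        id: "MyUploader",
        style: "width: 100px; height: 100px;",
        source: "/_layouts/DesktopUploader/SLFileDrop.xap"
    })
    .appendTo("body");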

Heading to the JavaScript, what it'll be doing is essentially:
  1. Wait for SP.js to be loaded - indicating that most of the page is ready for use, and we can access urls for the document libraries on the page.
  2. Use my open source script loader to load jQuery if it's not present, the jQuery template plugin if that's not there, and finally my open source tool library.
  3. Upon all scripts being loaded, define a SLFileDrop class / object, which will hold all the JavaScript drag and drop logic.
  4. Upon jQuery being available: check if there are any document library views on the page, and if so initialize the javascript class.

We'll start with laying out the script block for point 1 and 2 above.

<script type="text/javascript">
    grep.core.executeAfterSPLoad(function () {
        grep.scriptloader.load(typeof $, grep.core.getCoreJSUrl() + "jquery.min.js");
        grep.scriptloader.load(function () { return typeof $ === "undefined" || typeof $.tmpl === "undefined"; },
            grep.core.getCoreJSUrl() + "jquery.tmpl.min.js");
        grep.scriptloader.load(typeof grep.tools, grep.core.getCoreJSUrl() + "grep.tools.min.js");
        grep.scriptloader.exec(function () {
            // the SLFileDrop definition and the init check from the
            // following snippets go here
        });
    });
</script>

Provided that my open source Grep.SharePoint.CoreJS solution (linked at the top of this article) has been activated for the site collection, grep.core and grep.scriptloader will be available for use in the script.

grep.core.executeAfterSPLoad(function) will essentially register a function to be called once SP.js is fully loaded.

grep.core.getCoreJSUrl() will return the server relative url of the folder where the scripts deployed by my core javascript solution reside.

grep.scriptloader.load(test, scriptUrl) will evaluate the parameter passed as "test". It will load the script passed as scriptUrl if:
  • test is a string (which is the case for e.g. typeof $) and is equal to "undefined"
  • test is a boolean or function, and evaluates to true

In the case of the above code, the jQuery templates plugin will be loaded if either jQuery or the plugin itself isn't defined.

grep.scriptloader.exec(function) will execute the function passed to it, as soon as all the script loads defined prior to the call to exec have completed. In our case, the function will execute when jQuery, jQuery templates and grep.tools are loaded.

To finish off, we'll add the script block that waits for jQuery to be loaded, verifies that we've got a document library view on the page, and finally initializes the main javascript.

// Execute init when jQuery's loaded
$(function () {
    if ($("table[id^=onetidDoclibViewTbl]").size() == 0) return;
    window.SLFileDrop.init();
});

And finally, here's the main class. It's defined on the window (global) object.

window.SLFileDrop = {
    activeDocLib: null,
    dropZone: null,
    statusBox: null,
    leaveTimeoutId: -1,
    init: function () {
        this.statusBox = $("div#SLFileDropStatus");
        this.dropZone = ($("#SLTemplate")
            .tmpl({
                id: "SLFileDropHelper",
                source: "<%= ServerRelativeUrl %>/_layouts/Grep.SharePoint.DesktopDrag/SLFileDrop.xap?r=<%= new Random().Next() %>",
                style: "position: absolute; left: -1px; top: -1px; width: 1px; height: 1px; display: none; z-index: 100000;"
            }))
            .appendTo("body");
        var filedrop = this;
        $("table[id^=onetidDoclibViewTbl]").closest("div[id^=WebPartWP]")
            .bind("dragenter", function (e) {
                // Avoid duplicate events
                if (filedrop.isMouseWithinActiveDoclib(e.clientX, e.clientY)) {
                    filedrop.abortDropZoneHideTimer();
                    return;
                }
                if (filedrop.activeDocLib != null) {
                    // Entering another doclib than the currently active
                    filedrop.hideDropZone(true);
                }
                filedrop.activeDocLib = $(this);
                filedrop.activeDocLib.closest(".s4-wpTopTable").addClass("SLFileDropActive");
                filedrop.showDropZone(e);
                return false;
            });
        this.dropZone.show();
    },
    abortDropZoneHideTimer: function () {
        if (this.leaveTimeoutId != -1) {
            window.clearTimeout(this.leaveTimeoutId);
            this.leaveTimeoutId = -1;
        }
    },
    getFileEntry: function (name) {
        return this.statusBox.find("div[SLFileName=" + name + "]");
    },
    getRootUrl: function () {
        if (this.activeDocLib == null) return null;
        var num = this.activeDocLib.find("div[name=LinkFilename]:first").attr("CTXNum");
        var theCtx = null;
        if (typeof (num) != "undefined") theCtx = g_ctxDict['ctx' + num];
        var docLibSrc = this.activeDocLib.find("iframe[id^=FilterIframe]").attr("FilterLink");
        var folderUrl = "";
        if (typeof (docLibSrc) != "undefined") folderUrl = unescape(grep.tools.queryString("RootFolder", docLibSrc));
        if (folderUrl == "") {
            if (theCtx == null) throw "Document library context node not found";
            folderUrl = theCtx.listUrlDir;
        }
        return folderUrl;
    },
    hideDropZone: function () {
        if (this.activeDocLib != null) {
            this.activeDocLib.closest(".s4-wpTopTable").removeClass("SLFileDropActive");
            this.activeDocLib = null;
        }
        this.dropZone.css({ left: -1, top: -1, width: 1, height: 1 });
    },
    isMouseWithinActiveDoclib: function (mouseX, mouseY) {
        if (this.activeDocLib == null) return false;
        var offset = this.activeDocLib.offset();
        return mouseX >= offset.left && mouseX < offset.left + this.activeDocLib.outerWidth() &&
            mouseY >= offset.top && mouseY < offset.top + this.activeDocLib.outerHeight();
    },
    makeFileEntry: function (name) {
        var entry = this.getFileEntry(name);
        if (entry.size() != 0) return entry;
        $("#SLFileUploadEntryTemplate")
            .tmpl({ name: name })
            .appendTo(this.statusBox);
        return entry;
    },
    removeFileEntry: function (name) {
        this.getFileEntry(name).remove();
    },
    onAllUploadsComplete: function () {
        this.statusBox.hide();
        $(document.body).unbind("mousemove.SLFileDrop");
        _SubmitFormPost(_CorrectUrlForRefreshPageSubmitForm(), false, true);
    },
    onDragEntersDropZone: function () {
        this.abortDropZoneHideTimer();
    },
    onDragLeavesDropZone: function () {
        var filedrop = this;
        this.leaveTimeoutId = window.setTimeout(function () {
            filedrop.hideDropZone();
        }, 200);
    },
    onDrop: function () {
        var filedrop = this;
        $(document.body).bind("mousemove.SLFileDrop", function (e) {
            filedrop.statusBox.css({ left: e.clientX + 10, top: e.clientY + 10 });
        });
        this.statusBox.show();
        this.hideDropZone();
    },
    onUploadComplete: function (name) {
        this.removeFileEntry(name);
    },
    onUploadProgress: function (name, sentBytes, totalBytes, fileNum, totalFiles) {
        try {
            this.setFileEntryProgress(name, sentBytes / totalBytes * 100);
        }
        catch (e) {
        }
    },
    onUploadStart: function (name) {
        this.makeFileEntry(name);
    },
    setFileEntryProgress: function (name, pct) {
        this.getFileEntry(name).find(".SLFileDropProgress").css("width", pct + "%");
    },
    showDropZone: function (e) {
        var docLibOffset = this.activeDocLib.offset();
        this.dropZone.show();
        var filedrop = this;
        setTimeout(function () {
            filedrop.dropZone.css({
                left: docLibOffset.left,
                top: docLibOffset.top,
                width: filedrop.activeDocLib.outerWidth(),
                height: filedrop.activeDocLib.outerHeight()
            });
        }, 5);
    }
}

The init method is the first method called. It sets up a reference to the progress indicator placeholder, and provisions a Silverlight box (using jQuery templates) at the end of the body tag. Finally, it hooks onto dragenter events for all document library views.

The dragenter hook for doclibs is the most important piece here. Upon having files dragged over a document library, it resizes the Silverlight file catcher / uploader to the size of the document library view, and places it on top of that view. This allows us to receive drop events meant for any of the document library views present on the same page.

The next point of interest is the getRootUrl method. This method is called from the Silverlight app, and requests the server relative url of the current document library folder. It works by finding the context object number of the currently hovered document library view (captured by the dragenter event described above), and then looking up FilterLink for the document library view's FilterIframe. If present, this FilterLink will contain a RootFolder in the query string. If no such RootFolder is found, we take the listUrlDir from the document library view's context object instead.

The last piece worth noting in this JavaScript is the progress indicators. When a new upload is started, indicated by onDrop being called, we bind to the body's mousemove event and show the progress indicator placeholder. This placeholder moves along with the mouse while there are files still uploading. The onUploadStart method is called once for each file to be uploaded, and it adds a new entry (using jQuery templates) to the placeholder. Each file entry in the progress placeholder gets an attribute set to the name of the file, so when progress and completion notifications arrive, we can call our getFileEntry method with the filename and retrieve an html element - one we can either update the progress on, or remove in case of a finish event.

Step 5 - Deploying the solution

When you've got the JavaScript in place, and you've activated the Grep.SharePoint.CoreJS feature on your development site collection, you can go ahead and build and deploy the Visual Studio solution.

Once on a document library page, or a page with document library views, you should now be able to drag files from the desktop onto the document library. As the mouse moves onto a document library view, the Silverlight application should become visible. Upon dropping the files there, the progress indicators should appear and follow the mouse as you move it around. When all files are uploaded, the page should refresh.

Points of improvement

There's plenty of room for improvement here, especially visually. Progress indicators could be moved into the Silverlight application, or rendered differently by the JavaScript. Additionally, the user should be redirected to a page for document annotation once the upload has completed, if the upload was made against a document library that requires certain meta data fields for new documents.

Other than that, the solution should work pretty well, and at least serve as a general introduction to mixing various often mentioned technologies (SharePoint, jQuery, jQuery Templates, Silverlight, Rx, CKS Dev and so forth) into one merry solution.

Resources

Example Project Source Code

Monday, December 6, 2010

SPDataSource is what's wrong with this industry

The title for this rant of a post could just as easily have been: if you stick to XML for configuration, you'd better parse it as XML.

I spent quite a bit of time today, trying to figure out why an SPDataSource of mine wouldn't return any rows for a SiteCollection scoped query against specific lists. It all worked (seemingly) perfectly when I left out the list id specification:

dataSource.SelectCommand = "<Webs Scope='SiteCollection'/><View><ViewFields><FieldRef Name='Title' Type='Text' /></ViewFields></View>";

But as soon as the list id specifications were in place, it all failed miserably, yielding nothing but "There are no items to show in this view" in the SPGridView I bound the data source to.

dataSource.SelectCommand = "<Webs Scope='SiteCollection'/><Lists><List ID='97B278EC-6AC1-4357-A1D1-3422D314AF11'/></Lists><View><ViewFields><FieldRef Name='Title' Type='Text' /></ViewFields></View>";

Digging high and low for a schema for the SelectCommand property - which seems to set itself slightly apart from the familiar SPSiteDataQuery - I found nothing. 'Nothing' is also what I found when searching for others with similar problems. Next to nothing, at any rate: it seems that many have observed unpredictable results, depending on which tags they entered, and what numbers (if any) were passed as rowlimit. None of those reports, however, were related to the list id specification. The closest I came was one commenter who noted that SPDataSource came up blank, so he turned to SPSiteDataQuery and ObjectDataSource instead.

At this point, I turned to my only reliable source of information in dealing with SharePoint: Reflector.

Reading through the reflected source of SPDataSourceView's GetQueryAndFieldStringFromSelectCommand method, the problem quickly became apparent: SelectCommand isn't actually parsed as XML!

Long story short, the bits and pieces from SelectCommand that are eventually passed to SPSiteDataQuery are actually extracted with string searches for "<TagName>" and "</TagName>". The consequences:
  • None of the section marker tags (ViewFields, Webs, Lists) can be defined using self-closing syntax: <Lists />. Doing so will break the string searches.
  • <View>, as often specified in SelectCommand, can be dropped entirely. It's never used by SPDataSource.
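Given those two findings, the failing SelectCommand from earlier can be rewritten to something like this - a sketch based purely on the observations above, with the section markers expanded and <View> dropped:

dataSource.SelectCommand =
    "<Webs Scope='SiteCollection'></Webs>" +
    "<Lists><List ID='97B278EC-6AC1-4357-A1D1-3422D314AF11'/></Lists>" +
    "<ViewFields><FieldRef Name='Title' Type='Text' /></ViewFields>";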
So this is my friendly reminder to the SharePoint team: if you decide to stick with XML (even if you decide to call your dialect 'CAML') as a driving markup language in your application, be sure to process it as such.

Finally: I can't for the life of me figure out why SPDataSource uses a single SelectCommand property, rather than sticking with the different properties from SPSiteDataQuery. It brings nothing but overhead. And trust me: there's enough overhead as it is.