About Me

A career professional with 19 years of experience in application development, solution architecture, management, and strategy. Has architected, planned, and executed many programs, projects, and solutions, delivering valued results to business clients and customers. Able to provide creative solutions to problems and to articulate the value proposition to upper management and the technical design to IT staff. Collaborates across teams and other facets of IT (i.e. operations, infrastructure, security) to validate solutions for completeness. Interfaces with clients to gather feedback, advise, and craft solutions in the language of the business.

Monday, December 19, 2016

Going All Cloud Solution - Know the Formula of Serverless



Currency Conversion Sample Solution
{All Cloud Solution = SPO + Flow + AzFn}

Note: this post is still in draft, but it has been published so others can get the information they need.

Solution - SharePoint Online, Microsoft Flow, and Azure Functions

Recently I wrote an article on LinkedIn about building an All Cloud Solution.
The solution uses SharePoint Online, Microsoft Flow, and Azure Functions. Using these services together can greatly increase your solution capabilities without having to deploy on-premises technologies.

Please see the details of the article here:

https://www.linkedin.com/pulse/going-all-cloud-solution-sharepoint-flow-azure-functions-cooper?trk=mp-author-card

Putting the solution together:

SharePoint Online (SPO) - Data Entry Form

Simple data capture in a SharePoint list is lightweight and easy to get started with.

1. Create a custom SharePoint list called Sales with the following fields:
  • Title – renamed to Short Sales Description
  • Product – Single line of text
  • ItemSku – Multiple choice [0001, 0002, 0005]
  • SalesAmt – Number
  • LegalTender – Multiple choice [Data goes here]
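
If you would rather script the list than click through the UI, here is a rough CSOM sketch of the provisioning. The tenant URL, account, and field XML are my own illustrative assumptions and are not part of the original walkthrough:

using System;
using System.Security;
using Microsoft.SharePoint.Client;

class ProvisionSalesList
{
    static void Main()
    {
        // Placeholder credentials; replace with your own tenant values
        var password = new SecureString();
        foreach (char c in "MyPassword") password.AppendChar(c);

        using (var ctx = new ClientContext("https://yourtenant.sharepoint.com/sites/sales"))
        {
            ctx.Credentials = new SharePointOnlineCredentials("user@yourtenant.onmicrosoft.com", password);

            // Create the custom Sales list
            var creation = new ListCreationInformation
            {
                Title = "Sales",
                TemplateType = (int)ListTemplateType.GenericList
            };
            List salesList = ctx.Web.Lists.Add(creation);

            // Rename the Title field per the table above
            Field titleField = salesList.Fields.GetByInternalNameOrTitle("Title");
            titleField.Title = "Short Sales Description";
            titleField.Update();

            // Add the remaining columns from the table above
            salesList.Fields.AddFieldAsXml("<Field Type='Text' DisplayName='Product' Name='Product' />", true, AddFieldOptions.DefaultValue);
            salesList.Fields.AddFieldAsXml("<Field Type='MultiChoice' DisplayName='ItemSku' Name='ItemSku'><CHOICES><CHOICE>0001</CHOICE><CHOICE>0002</CHOICE><CHOICE>0005</CHOICE></CHOICES></Field>", true, AddFieldOptions.DefaultValue);
            salesList.Fields.AddFieldAsXml("<Field Type='Number' DisplayName='SalesAmt' Name='SalesAmt' />", true, AddFieldOptions.DefaultValue);
            salesList.Fields.AddFieldAsXml("<Field Type='Choice' DisplayName='LegalTender' Name='LegalTender' />", true, AddFieldOptions.DefaultValue);

            ctx.ExecuteQuery();
        }
    }
}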


Microsoft Flow - Workflow Engine "The Glue"

Sign up for a Flow account (https://flow.microsoft.com/en-us/)


Create a new Flow


New step - Add Action > SharePoint - When a new item is created
 Fill in the SharePoint site URL and select the list name

Add a New step - Add an action


Type in HTTP and select the HTTP action (note: this is a Custom API feature that gives you a lot more power).

Then select the SharePoint action to update the list item.
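
Putting those two steps together (the field values below are made-up examples, not from the original post): the HTTP action POSTs the values from the newly created Sales item to the Azure Function's URL with a JSON body along these lines,

{
  "Amount": 100,
  "LegalTender": "USD"
}

and the SharePoint update action writes the conversion value the function returns back onto the list item.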
Azure Function
Register for the Azure Functions service. Azure Functions are WebJobs underneath, and they create an App Service in your Azure portal.

Provision the Azure Function App Service. If you already have an Azure subscription, the second image shows how to get to Azure Functions via the Azure portal.








Rewrite the code
To make sure that everything is functioning, we will start with a simple version and augment the code with enhanced functionality as we go.

--- Note: #r is a way of pulling external libraries into your Azure Function
#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    if (data.Amount == null || data.LegalTender == null) {
        return req.CreateResponse(HttpStatusCode.BadRequest, new {
            error = "Please pass Amount/LegalTender properties in the input object"
        });
    }

    // Hardcoded conversion factor for the sample
    double factor = 2.0;
    return req.CreateResponse(HttpStatusCode.OK, new {
        conversion = data.Amount * factor
    });

}
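
As an example of the kind of enhancement mentioned above, here is a sketch of how the hardcoded factor could be replaced with a lookup keyed off LegalTender. This is my own illustrative variant, not the published sample, and the currency codes and rates are placeholders:

#r "Newtonsoft.Json"

using System;
using System.Net;
using System.Collections.Generic;
using Newtonsoft.Json;

// Sample rates only - a real solution would pull these from a rate service or configuration.
static readonly Dictionary<string, double> ConversionFactors = new Dictionary<string, double>
{
    { "USD", 1.0 },
    { "EUR", 0.95 },
    { "GBP", 0.80 }
};

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Webhook was triggered!");

    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    if (data.Amount == null || data.LegalTender == null) {
        return req.CreateResponse(HttpStatusCode.BadRequest, new {
            error = "Please pass Amount/LegalTender properties in the input object"
        });
    }

    // Look up the conversion factor for the requested currency
    double factor;
    if (!ConversionFactors.TryGetValue((string)data.LegalTender, out factor)) {
        return req.CreateResponse(HttpStatusCode.BadRequest, new {
            error = "Unknown LegalTender value"
        });
    }

    return req.CreateResponse(HttpStatusCode.OK, new {
        conversion = data.Amount * factor
    });
}

A production version would pull the rates from a currency service or configuration rather than a static dictionary.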









Monday, November 7, 2016

Azure Function and IBM BlueMix Auto Image Tagger

Lately I have been looking a lot into Azure Functions. I just completed a proof of concept (POC) for an idea that I had. Anyone who has ever dealt with trying to get users to tag documents with metadata knows that it's a challenge. We have all just thrown documents into the SharePoint ether with no metadata.

Metadata on documents has huge advantages: for starters, search works much better and you can implement more advanced functionality.

Disclaimer - The code is provided as is and is in a POC state.
The main thing that I wanted to achieve was testing out the architectural principles.
Some of the code needs to be refactored to be dynamic enough for use in a production setting.
I'll do a second cut of this blog with more details about all the steps that I went through to get everything up and running.


Usage: Have a timer/cron process wake up periodically and auto-tag picture files in a designated picture library. For the timer process an Azure Function was used; it was the perfect fit for this use case. For the auto-tagging I used IBM's BlueMix platform Visual Recognition service. You send the service an image and it uses AI to do image recognition and classify the image. After the results are returned, the supplied classifications are written back to the SharePoint image library.


IBM BlueMix Resources
  • Visual Recognition
    • http://www.ibm.com/watson/developercloud/visual-recognition.html
  • Watson API Explorer
    • http://www.ibm.com/watson/developercloud/visual-recognition/api/v3/#introduction
Azure Resources
  • Azure Functions Intro Site
    • https://azure.microsoft.com/en-us/services/functions/
  • Azure Functions Site
    • https://functions.azure.com/try?correlationId=a4f806de-be86-4732-88c2-a01525e1cc4e

Logical Architecture


Sequence Diagram

Code from Azure Function

#r "System.Xml.Linq"
#r "Newtonsoft.Json"

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Collections.Specialized;
using System.Collections.Generic;
using System.Xml;
using Newtonsoft.Json;
using SP=Microsoft.SharePoint.Client;


public class ImageClassification
{
    public class Images
    {
        public IList<classifierList> classifiers { get; set; }
        public string image { get; set; }
        public class classifierList
        {
            public class classListItem
            {
                [JsonProperty("class")]
                public string classItem { get; set; }
                public double score { get; set; }
            }
            public string classifier_id { get; set; }
            public string name { get; set; }
            public IList<classListItem> classes { get; set; }
        }

    }
    public int custom_classes { get; set; }
    public IList<Images> images { get; set; }
    public int images_processed { get; set; }
}

public class FileItem
{
    public int ID { get; set; }
    public string FileName { get; set; }
    public string FileGuid { get; set; }
    public string RelativeUrl { get; set; }
    public string Keywords { get; set; }
    public FileItem(int Id, string fileName, string fileGuid, string relativeUrl, string keywords)
    {
        ID = Id;
        FileName = fileName;
        FileGuid = fileGuid;
        RelativeUrl = relativeUrl;
        Keywords = keywords;
    }
}

public class SPContext
{
    private static Microsoft.SharePoint.Client.ClientContext clientCxt = null;
    public static string SiteUrl { get; set; }
    public static NetworkCredential SiteCredentials { get; set; }
    public static void SetInstance(string siteUrl,NetworkCredential siteCredentials)
    {
        SiteCredentials = siteCredentials;
        SiteUrl = siteUrl;
        if (clientCxt != null)
        {
            clientCxt.Dispose();
            clientCxt = null;
        }
        
        clientCxt = new Microsoft.SharePoint.Client.ClientContext(SiteUrl);
        clientCxt.Credentials = SiteCredentials;
    }

    private SPContext() { }

    public static Microsoft.SharePoint.Client.ClientContext GetInstance()
    {
        if (clientCxt == null)
        {
            clientCxt = new Microsoft.SharePoint.Client.ClientContext(SiteUrl);
            clientCxt.Credentials = SiteCredentials;
        }
        return clientCxt;
    }
    internal static string GetAbsoluteFileUrl(string spLibraryName, string fileName)
    {
        StringBuilder siteUrl = new StringBuilder( SiteUrl);
        if (!SiteUrl.EndsWith("/"))
        {
            siteUrl.Append("/");
        }
        siteUrl.Append("/" + spLibraryName + "/");
        siteUrl.Append(fileName);
        return siteUrl.ToString();
    }
}

public class ConfigData
{
    public string SPLibrary { get; set; }
    public TraceWriter Logger { get; set; }
    public string ClassificationServiceUrl { get; set; }
    private static ConfigData configData = null;

    private ConfigData() { }
    public static void Load(string library,string classServiceUrl)
    {
        if (configData == null)
        {
            configData = new ConfigData();
        }
        configData.ClassificationServiceUrl = classServiceUrl;
        configData.SPLibrary = library;
    }
    public static ConfigData GetInstance()
    {
        if (configData == null)
        {
            configData = new ConfigData();
        }
        return configData;
    }
}


private static void TagFile(FileItem fileItem,ImageClassification imgClassification)
{
    ConfigData configData = ConfigData.GetInstance();
    TraceWriter log = configData.Logger;

    log.Info("TagFile-Start:");
    log.Info("TagFile-File:"+fileItem.ID + " / "+ fileItem.FileName);
    log.Info("TagFile-Classification:"+imgClassification.images[0].classifiers[0].classes[0].classItem);
    SP.ClientContext clientContext = SPContext.GetInstance();
    SP.List oList = clientContext.Web.Lists.GetByTitle(configData.SPLibrary);
    SP.ListItem oListItem = oList.GetItemById(fileItem.ID);

    //should add logic to check all of the classifications and scores
    //if the score is below a certain threshold then we should throw out the classification
    oListItem["Keywords"] = imgClassification.images[0].classifiers[0].classes[0].classItem;

    oListItem.Update();
    clientContext.ExecuteQuery();
    log.Info("TagFile-End: ");
}

private static ImageClassification PostDataReturnClassifier(string webUrl, MemoryStream memBuffer, FileItem fileItem, NameValueCollection formFields = null)
{
    ConfigData config = ConfigData.GetInstance();
    TraceWriter log = config.Logger;
    log.Info("PostDataReturnClassifier-Start: ");
    /* Thanks to all the coders on Stack Overflow */
    /* http://stackoverflow.com/questions/566462/upload-files-with-httpwebrequest-multipart-form-data */
    /* http://stackoverflow.com/questions/1688855/httpwebrequest-c-sharp-uploading-a-file */
    string boundary = "----------------------------" + DateTime.Now.Ticks.ToString("x");
    
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webUrl);
    request.ContentType = "multipart/form-data; boundary=" + boundary;
    request.Method = "POST";
    //request.KeepAlive = true;
    request.ServicePoint.Expect100Continue = false;
    Stream memStream = new MemoryStream();

    var boundarybytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "\r\n");
    var endBoundaryBytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "--");

/*
    #region Not in use right now
    string formdataTemplate = "\r\n--" + boundary + "\r\nContent-Disposition: form-data; name=\"{0}\";\r\n\r\n{1}";
    if (formFields != null)
    {
        foreach (string key in formFields.Keys)
        {
            string formitem = string.Format(formdataTemplate, key, formFields[key]);
            byte[] formitembytes = System.Text.Encoding.UTF8.GetBytes(formitem);
            memStream.Write(formitembytes, 0, formitembytes.Length);
        }
    }
    #endregion
*/
    string headerTemplate = "Content-Disposition: form-data; name=\"{0}\"; filename=\"{1}\"\r\n" + "Content-Type: image/jpeg\r\n\r\n";

    memStream.Write(boundarybytes, 2, boundarybytes.Length-2); //start at index two to skip the leading CRLF bytes of the first boundary
    var header = string.Format(headerTemplate, "uplTheFile", fileItem.FileName);
    var headerbytes = System.Text.Encoding.UTF8.GetBytes(header);

    memStream.Write(headerbytes, 0, headerbytes.Length);

    Byte[] aryBytes = memBuffer.ToArray();
    memStream.Write(aryBytes, 0, aryBytes.Length);
    memStream.Write(endBoundaryBytes, 0, endBoundaryBytes.Length);
    request.ContentLength = memStream.Length;

    using (Stream requestStream = request.GetRequestStream())
    {
        memStream.Flush();
        memStream.Position = 0;
        byte[] tempBuffer = new byte[memStream.Length];
        memStream.Read(tempBuffer, 0, tempBuffer.Length);
        memStream.Close();
        requestStream.Write(tempBuffer, 0, tempBuffer.Length);
    }

    try
    {
  log.Info("PostDataReturnClassifier-Posting to Service: ");
        using (var response = request.GetResponse())
        {
            using (Stream streamRes = response.GetResponseStream())
            {
                using (StreamReader readResult = new StreamReader(streamRes))
                {
                    //string jsonResult = readResult.ReadToEnd();
                    JsonSerializer serializer = new JsonSerializer();
                    ImageClassification imgClass = (ImageClassification)serializer.Deserialize(readResult, typeof(ImageClassification));
                    log.Info("PostDataReturnClassifier-Result: "+imgClass.images[0].classifiers[0].classes[0].classItem);
                    return imgClass;
                }
            }
        }
    }
    catch(Exception e)
    {
        log.Info("PostDataReturnClassifier-Error: "+e.Message);
        return null;
    }
}
private static MemoryStream DownloadItem(FileItem workItem)
{
    ConfigData config = ConfigData.GetInstance();
    TraceWriter log = config.Logger;
    log.Info("DownloadItem-Start: ");

    string webFileUrl = SPContext.GetAbsoluteFileUrl(config.SPLibrary, workItem.FileName);
    log.Info("DownloadItem-File: "+webFileUrl);
    WebRequest request = WebRequest.Create(webFileUrl);
    request.Credentials = SPContext.SiteCredentials;
    //request.AllowWriteStreamBuffering = true;
    request.Timeout = 30000; //this should come from a config setting
    MemoryStream memStream = new MemoryStream();
    log.Info("DownloadItem-Starting Download ");
    using (WebResponse response = request.GetResponse())
    {
        // Display the status.
        //Console.WriteLine(((HttpWebResponse)response).StatusDescription);
        // Get the stream containing content returned by the server.
        log.Info("DownloadItem-Getting Download ");
        using (Stream dataStream = response.GetResponseStream())
        {
            byte[] buffer = new byte[1024];
            int received = 0;

            int size = dataStream.Read(buffer, 0, buffer.Length);
            log.Info($"Got data: {DateTime.Now} bytes in buffer:" + size);

            while (size > 0)
            {
                memStream.Write(buffer, 0, size);
                received += size;
                size = dataStream.Read(buffer, 0, buffer.Length);
            }
            log.Info("DownloadItem-Downloaded bytes:"+received);
        }
    }

    memStream.Flush();
    memStream.Position = 0; //reposition the memory pointer
    return memStream;
}

private static List<FileItem> GetWorkItems()
{
    ConfigData config = ConfigData.GetInstance();
    TraceWriter log = config.Logger;
    SP.ClientContext clientContext = SPContext.GetInstance();

    SP.List oList = clientContext.Web.Lists.GetByTitle(config.SPLibrary);
    SP.CamlQuery camlQuery = new SP.CamlQuery();
    camlQuery.ViewXml = "<View><RowLimit>100</RowLimit></View>";
    SP.ListItemCollection collListItem = oList.GetItems(camlQuery);
    clientContext.Load(collListItem);
    clientContext.ExecuteQuery();
    List<FileItem> workList = new List<FileItem>();
    string keyWords;
    foreach (SP.ListItem oListItem in collListItem)
    {
        log.Info("ID: "+ oListItem.Id +" \nFile Name: "+oListItem.FieldValues["FileLeafRef"]+" \nGUID: "+ oListItem.FieldValues["GUID"]);
        keyWords = string.Empty;

        //Process items that don't have the keywords set
        if (oListItem.FieldValues["Keywords"] == null)
        {
            workList.Add(new FileItem(oListItem.Id, oListItem.FieldValues["FileLeafRef"].ToString(), oListItem.FieldValues["GUID"].ToString(), oListItem.FieldValues["FileDirRef"].ToString(), keyWords));
        }
    }
    return workList;
}

public static void ProcessEngine()
{
    TraceWriter log = ConfigData.GetInstance().Logger;
    log.Info($"ProcessEngine-Enter: {DateTime.Now} ");
//Need to register with IBM BlueMix to get URL and apikey for visual-recognition service
    string watsonImgRecUrl = "https://gateway-EX.watsonplatform.net/visual-recognition/api/v3/classify?api_key=123456789&version=2016-05-20";
    log.Info($"ProcessEngine-: Loading Config data ");

    ConfigData.Load("pics", watsonImgRecUrl);
    ConfigData configData = ConfigData.GetInstance();

    log.Info($"ProcessEngine-: Setting SPContext Instance ");
    //this is for SharePoint on-premises that uses IWA for authentication (will add other authentication schemes later)
    SPContext.SetInstance("http://SomeSharePointOnPremise.sample.com/site/SiteEx", new NetworkCredential("MyUserName", "MyPassword", "MyDomain"));

    log.Info($"ProcessEngine-: Getting Work Items ");
    foreach(FileItem fileItem in GetWorkItems())
    {
        log.Info($"ProcessEngine-: Processing Work Item ... " + fileItem.FileName + " / "+ fileItem.FileGuid);

        //log what you're processing
        log.Info($"ProcessEngine-: Downloading... ");
        //Download
        MemoryStream memStream = DownloadItem(fileItem);
        //Upload & classify 
        log.Info($"ProcessEngine-: Classifying... ");

        ImageClassification imageClassification = PostDataReturnClassifier(configData.ClassificationServiceUrl, memStream, fileItem, null);
        //Update list entry
        log.Info($"ProcessEngine-: Updating Metadata... ");
        TagFile(fileItem,imageClassification);
    }
    log.Info($"ProcessEngine-End: {DateTime.Now} ");
}


public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"Run-Start: {DateTime.Now} ");
    ConfigData configData = ConfigData.GetInstance();
    configData.Logger = log;
    ProcessEngine();   
    log.Info($"Run-End: {DateTime.Now} ");
}
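
For reference, a timer-triggered C# script function like this is wired up through a function.json file that sits alongside the run.csx. A minimal sketch, assuming a 30-minute schedule (the CRON expression is an example, not necessarily what the POC used):

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */30 * * * *"
    }
  ],
  "disabled": false
}

The binding name has to match the TimerInfo parameter name (myTimer) in the Run method above.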


Sunday, November 6, 2016

Okta Universal Directory - Consolidating AD samAccountName


Okta Universal Directory


Extending Okta Profile


In the Admin console go to the People tab

Click Profile Editor

Select Okta from the Profiles and click on the user link









Adding a Custom Attribute


Click the Add Attribute button





Adding the custom attribute to the Okta profile


This attribute is used to create a single consolidated instance of samAccountName from the different AD domains




 Populate the Custom Attribute



Click on Directories list

Click on a directory

Click on Map Attributes


Select the tab that specifies pushing data from the "directory" to Okta


 

Scroll down to the custom attribute


Click the Add Mapping drop down and select samAccountName


Click Save Mappings


When you're asked to apply the mappings to all users with this profile, select Apply update now


Do this for all directory instances.

Moving “shadow” service workloads to Serverless Functions

How many times have you had a job/process that needed to wake up, perform a unit of work, and then shut down? The IT landscape is littered with these services, batch jobs, and timer jobs throughout data centers, standalone servers, and personal PCs. These services become relegated to the back alleys of IT infrastructure, long forgotten until an event happens. An even worse offense is when these services are allocated dedicated resources, i.e. VMs or physical servers, and those resources sit under-utilized until the service executes.
The reason for labeling these services as “shadow” is that they become forgotten and hidden in the shadows. Their locations and knowledge of their functionality become lost as subject matter experts (SMEs) move on. This becomes quite apparent when servers need to be decommissioned or migrated and there are no preparations for these services. I have seen it numerous times: after servers are decommissioned or migrated, people scramble to track down the servers, get the service back, and transfer it off the machine, only for this pattern to reemerge after the service has been recovered and migrated. Some of these “shadow” services are the perfect workloads to shift to Serverless functions.
Serverless functions can be invoked in different ways: event-based, scheduled, and on demand. Shifting your workloads to Serverless functions allows services to be aligned with the following principles:
  • Discoverable
  • Reusable
  • Faster time to delivery
  • Minimum resource consumption
  • Elasticity
  • Monitored
Depending on the Serverless platform (Amazon Lambda, IBM OpenWhisk, or Microsoft Azure Functions), the functions are deployed to a portal. With a single point of deployment, the services become discoverable, allowing them to potentially be leveraged by other resources. Time to market is greatly improved because developers don't need to wait for infrastructure teams to procure, build, and configure resources.
Capacity planning is part art, part science. When rolling out new functionality it is hard to get capacity planning right, and changes in the business and applications can also change the parameters of your capacity planning. This is where the elastic nature of Serverless functions shines. Serverless functions can easily be scaled to meet demand; once the demand subsides, the resources are scaled down and reallocated to other processes. As for cost, you only pay for what is utilized during the execution time of the function. Services that have dedicated resources can potentially go from CapEx + OpEx spend to just OpEx spend at a cheaper run rate. Because the functions are monitored, gathering usage metrics is an easier task, and these metrics can be used for budgetary planning and forecasts.
To start, select a small proof of concept (POC) that can be done to build experience and gain confidence in using Serverless functions. Develop a best practice that works for your business. Building an application portfolio can help drive this effort and prioritize key systems. The portfolio can be leveraged to build system blueprints and system architecture diagrams. The system blueprints show the interactions between systems at a logical level, while the system architecture diagrams provide a detailed architecture of the application interactions. Gradually working through the system architecture will highlight key areas where Serverless functions can replace particular components or processes.

Rise of Serverless Functions






Recently I have been to various meetups and conferences around Serverless architecture. Lately this architecture has been gaining ground within the IT industry, as IT sees a rapid ascension up the abstraction curve. Barriers that have existed in the past are dissolving before our eyes: from virtualization, to cloud compute, to containers, and now Serverless. Serverless allows developers to deploy functional components of code to a cloud platform for execution. Once the function is complete, the resources for that function are reallocated to the next process. The developer's function now runs in a totally on-demand execution model.

If leveraged in the right way, this becomes an excellent value proposition for particular components of applications. All the developer needs to do is deploy their code; there is no need to worry about the infrastructure below, scaling, hosting, or any of the other traditional activities needed to deploy code. The value proposition is that you're only charged for the execution time of your function. Batch services, event-based activities, and scheduled jobs are the prime sweet spots for this technology.

Architects and developers have to start taking a hard look at their applications to see which pieces can leverage this technology. Areas of an application that would potentially see radical spikes from an influx of demand can leverage these functions to offload processing to a background event.

There are a growing number of players offering services in this space: Amazon AWS/Lambda, IBM Bluemix/OpenWhisk, and Microsoft Azure/Azure Functions. OpenWhisk can also be downloaded and run within your own environment. It seems that the greatest value is achieved when IT doesn't have to be responsible for the underlying resources for particular workloads.

Wednesday, December 1, 2010

Using Search as an Application




By

Kenneth Cooper

Enterprise SharePoint Architect

And

Viji Anbumani

SAI Innovations Inc

Business Case

A department came to IT with a request for a document management system that would allow them to meta-tag documents (fig 1.1) and discover the content through advanced search. Our users had four document types that they wanted to start with. Each document type had more than twelve pieces of associated metadata. The metadata within the document types covered just about all the basic column types (single line, check box, single select, multi-select, numbers, and dates). The security requirements varied between the document types.



After hearing the initial requirements we knew that SharePoint was a natural fit. So the first thing we did with the users was a discovery phase to hear more of what they wanted. This phase was also interactive; we informed our users about what SharePoint can do. The first thing that we discovered when showing our users SharePoint was that they didn't like the out-of-the-box advanced search. They really wanted an advanced criteria screen that reflected their metadata in look and functionality. They wanted fixed fields that they could populate, and the fields had to have the same functionality as the metadata entry screen; if they had a multi-select field in the metadata, they wanted the same thing within the search criteria screen. They wanted the search results to come back in a grid, with two views: compact and detailed.



The timeline and budget for this project were both short. They wanted to be up and running no later than the first of January. We started the discovery phase mid-September, but they wanted to start uploading content by the first of November. Even though the majority of the project was out of the box, we also had to account for rework time.



The Challenge

The first challenge we needed to resolve was the metadata search on the different document types. After that we needed to do something about the advanced search. The easy solution was to have a custom webpart written to handle the search. Given all the other criteria in their requirements, we didn't have time to push SharePoint aside and do a custom web application. Ninety percent of the functionality was already there in SharePoint, which is why we decided to just build custom webparts.





The Plan

The department handed us excellent requirements for the document types. From past experience with other projects, as a project draws closer to delivery and the users start to see the product, the rework sometimes increases. This is an important factor in SharePoint because there might be certain things that the users don't like, and it takes too much effort to rewrite or rework them. First, a project plan was created with two tracks: structure and portal functionality. The structure track was the work that we were doing for the site definitions, content types, and deployment packages.

The structure had to precede the portal work anyway, so this track was devised to get the users uploading content by November. The custom work, like branding and the search web parts, was part of the second track. This plan let the users know what they were getting and when.


Side note

Don't assume that your users won't change their minds about their fields or field types. To help keep our production SharePoint environment clean, we go through a promotion path (development to QA to production). For moving items between these environments there are various methods and tools. We chose to build site definition and content type features for our document libraries. When using features we can easily move from environment to environment. We also used the import and export site commands from stsadm. The only issue with this is that if you have an error or bad data, you have to take care of it once it is deployed; cleaning up a site in production can prove to be very messy and can cause issues.





The Solution

The Search Components we used

Content Source – A source of content that you want SharePoint to crawl and index.
Managed Properties – Metadata fields that are defined in SharePoint, indexed, and put into a separate database. These fields can be used in actual search queries.
Scopes – Filtered views of the content in an index based on rules defined within the SSP.

Custom Search Webparts – A custom webpart was built to render the search criteria screen for the users. Another webpart was built to display the results in grid format.



Document Types & Content Types

For each document type we created a content type and a document library (see fig 1.0). The content types were applied to the appropriate document libraries. We leveraged the power of the content type field as a managed property to create our view of the department's content. A managed property called 'MPContentType' was created using the content type field. Next, a scope was created and rules were added that specified the managed property 'MPContentType' as the filter. A rule was added for each content type that we wanted to add to the scope. With the scope in place, queries can be issued that only touch the specified content types (i.e. Software) within the scope. Now we can isolate the content that we want to search on, giving us the baseline of our solution (see fig 1 'The Big Picture').
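
To make that concrete, here is a rough sketch of the kind of full-text SQL query that can be issued against the scope and the MPContentType managed property. The site URL, scope name, and property values are illustrative placeholders, and this is not the actual code of the webpart we commissioned:

using System;
using System.Data;
using Microsoft.SharePoint;
using Microsoft.Office.Server.Search.Query;

class ScopedSearchSample
{
    static void Main()
    {
        // Runs on a MOSS server with references to Microsoft.SharePoint and Microsoft.Office.Server.Search
        using (SPSite site = new SPSite("http://portal/sites/department"))
        {
            // Query only the 'Software' content type inside the department scope
            FullTextSqlQuery query = new FullTextSqlQuery(site);
            query.QueryText =
                "SELECT Title, Path, MPContentType " +
                "FROM SCOPE() " +
                "WHERE \"scope\" = 'DepartmentDocs' " +
                "AND MPContentType = 'Software'";
            query.ResultTypes = ResultType.RelevantResults;
            query.RowLimit = 50;
            query.TrimDuplicates = true;

            ResultTableCollection results = query.Execute();
            ResultTable relevant = results[ResultType.RelevantResults];

            DataTable table = new DataTable();
            table.Load(relevant, LoadOption.OverwriteChanges);

            foreach (DataRow row in table.Rows)
            {
                Console.WriteLine(row["Title"] + " - " + row["Path"]);
            }
        }
    }
}

Conceptually, the custom webpart builds a query along these lines from the criteria the user fills in on the search screen.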



More Managed Properties

The next piece of the solution was the additional managed properties that were created. We created a managed property for each piece of metadata that was associated with one of the four content types. This allowed us to do pinpoint-accurate searches on the content. The last piece of the puzzle was the advanced search webpart and the results page. Our users told us that they were only starting with four document types, but a couple of months down the road they wanted to add more. This meant the design for the search criteria screen had to grow with user demand.

One of IT's requirements was that the webpart work with XML so it would be extensible in the future. The webpart takes XML as a parameter and dynamically generates the search criteria screen. That makes this a completely generic solution that can be applied over and over again without having to go back to development. For the webpart we commissioned SAI Innovations, a consulting company with a strong background in SharePoint development and solutions.

Through the XML, various features can be controlled and the user interface is reflective of the data entry screens. Fields with drop-downs and SharePoint lists as data sources can be presented in the same way on the search criteria screen. This gives the end users a lot of flexibility and ease of use, so they won't have to understand AND/OR and other query operators. There's also a drop-down for the document type. As the document type is changed, the fields on the search screen change to match the search criteria fields for that document type.

Earlier I mentioned that the document types vary in security requirements. Since the SharePoint search engine is security trimmed, this did not present a problem for us. We just created SharePoint groups and modified the document libraries' security to use these groups and configurations.

The search results screen returned the results within a grid, per our users' request. For the grid they wanted two modes: a summary and a detailed view. The detailed view expands the grid and shows all the metadata fields for that type. The results pane also incorporated sorting and paging.

One issue that we ran into is that a document has to be checked in before it will be indexed.

Fig 1.0



Fig 1.1



Fig 1.2


Advanced Search Result Set

Fig 1.3



Search Quick Tutorial



The Search for the Search Configuration



When a SharePoint (MOSS) farm is set up, a default Shared Services Provider (SSP for short) is created within Central Administration (this might vary with WSS). It is within the SSP that the search configuration can be modified. If this is still foreign, try to find a good SharePoint admin.

As items are put into SharePoint, there's a process that opens certain files, reads the content and its metadata, and writes out what it saw to disk and a database (OK, I did skip a few steps, but that's another paper for another time). This is called the indexing process (see fig 2). The search functionality can be configured within the Shared Services Provider (SSP).

How does SharePoint know what to index? Within the search configuration you'll see the content sources. A content source is content that you want SharePoint to index so that it can be searchable from SharePoint; these content sources are the crawl targets. Content sources can be created for other web sites, file shares, Lotus Notes, public folders, and SharePoint sites (see fig 3). Once a content source is created you can tell SharePoint to go crawl it. Crawl schedules can also be set up so new content gets indexed periodically. Since our users wanted new content to show up very quickly, we created a content source for them with an incremental schedule that indexes more frequently. A default content source is already there; once we added our site to the new content source, the old entry needed to be removed from the default one.

The indexing process also crawls the documents' metadata within SharePoint. This is a huge piece of the solution puzzle. These metadata fields are stored in an actual database, and once they have been indexed this information can be used in queries (SQL-like queries). How do you expose a field to search? By making it a managed property. There is still some work at the site level, but after a few configuration changes you can make these fields show up in advanced search (the out-of-the-box one).

Another awesome thing that you can do within the search configuration is create scopes. Think of a scope as a filtered view of the data that is in the index file. You already see this in most sites (fig 4). If you have a drop-down list in front of the search text box, you've got scopes! One scope is 'All Sites' and usually the other is 'This Site'. The 'All Sites' scope searches all of the SharePoint content, while the 'This Site' scope only searches content pertaining to that site. This feature helps narrow down the target area that we want to search.


Indexing Engine
Fig 2


Fig 3


Index and Scope

Fig 4

Monday, August 9, 2010

Application Pools pertaining to SharePoint

Application Pool

Definition
Application pools are a security and process boundary for a set of IIS web applications. Multiple web applications can run in a single worker process. Application pools should be created if processes need to be isolated for stability or security, especially if the web application is going to be used with custom code or has request- and process-intensive sites.


When should an application pool be created?
An application pool should be created if an application has custom code or has multiple memory- and process-intensive sites. Also, if the application is deemed mission critical and uptime is of the utmost importance, then this might warrant its own app pool.



More background Information:

Http.sys is responsible for receiving HTTP requests and sending responses back to users.

Http.sys -----> Listen --> Queues -->(Isolation mode send to worker process) --> Responses


Worker Process -> A process that runs in user mode to handle a request from the Http.sys queue. The worker process runs in its own process space, so it won't interfere with other HTTP workloads running in a different process. After the worker process handles the request, it uses Http.sys to send the response back.