About Me

A career professional with 19 years of experience in application development, solution architecture, management, and strategy. Has architected, planned, and executed many programs, projects, and solutions, delivering valued results to business clients and customers. Able to provide creative solutions to problems and articulate the value proposition to upper management and the technical design to IT staff. Collaborates across teams and other facets of IT (i.e. operations, infrastructure, security) to validate solutions for completeness. Interfaces with clients to gather feedback, advise, and craft solutions in the language of the business.

Monday, December 19, 2016

Going All Cloud Solution - Know the Formula of Serverless


Currency Conversion Sample Solution
{All Cloud Solution = SPO + Flow + AzFn}

STILL in DRAFT mode, but it has been published so others can get the info they need.

Solution - SharePoint Online, Microsoft Flow, and Azure Functions

Recently I wrote an article on LinkedIn about building an All Cloud Solution. The solution uses SharePoint Online, Microsoft Flow, and Azure Functions. Using these services together can greatly increase your solution capabilities without having to deploy on-premises technologies.

Please see the details of the article here:

https://www.linkedin.com/pulse/going-all-cloud-solution-sharepoint-flow-azure-functions-cooper?trk=mp-author-card

Putting the solution together:

SharePoint Online (SPO) - Data Entry Form

Doing simple data capture this way is lightweight and easy to get started with.

1. Create a custom SharePoint List called Sales with the following fields:

   Field Name     Type
   Title          Renamed to Short Sales Description
   Product        Single Line
   ItemSku        Multiple Choice [0001, 0002, 0005]
   SalesAmt       Number
   LegalTender    Multiple Choice [Data goes here]
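The list can be created through the SharePoint UI as described above, but if you would rather script it, something along these lines should work with CSOM (this is my illustration, not part of the original walkthrough; run it inside a small console app, and the site URL, account, and password are placeholders):

using System.Security;
using Microsoft.SharePoint.Client;

//Sketch: provision the Sales list and its fields with CSOM against SharePoint Online.
//The site URL, user name, and password below are placeholders - use your own values.
using (var ctx = new ClientContext("https://yourtenant.sharepoint.com/sites/SiteEx"))
{
    var password = new SecureString();
    foreach (char c in "MyPassword") password.AppendChar(c);
    ctx.Credentials = new SharePointOnlineCredentials("user@yourtenant.onmicrosoft.com", password);

    var list = ctx.Web.Lists.Add(new ListCreationInformation
    {
        Title = "Sales",
        TemplateType = (int)ListTemplateType.GenericList
    });

    //Rename the Title field and add the custom columns
    var titleField = list.Fields.GetByInternalNameOrTitle("Title");
    titleField.Title = "Short Sales Description";
    titleField.Update();

    list.Fields.AddFieldAsXml("<Field Type='Text' DisplayName='Product' Name='Product'/>", true, AddFieldOptions.DefaultValue);
    list.Fields.AddFieldAsXml("<Field Type='Choice' DisplayName='ItemSku' Name='ItemSku'>" +
        "<CHOICES><CHOICE>0001</CHOICE><CHOICE>0002</CHOICE><CHOICE>0005</CHOICE></CHOICES></Field>", true, AddFieldOptions.DefaultValue);
    list.Fields.AddFieldAsXml("<Field Type='Number' DisplayName='SalesAmt' Name='SalesAmt'/>", true, AddFieldOptions.DefaultValue);
    list.Fields.AddFieldAsXml("<Field Type='Choice' DisplayName='LegalTender' Name='LegalTender'/>", true, AddFieldOptions.DefaultValue); //add your LegalTender choices

    ctx.ExecuteQuery();
}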


Microsoft Flow - Workflow Engine "The Glue"

Sign up for a Flow account (https://flow.microsoft.com/en-us/)


Create a new Flow


New step - Add Action > SharePoint - When a new item is created
 Fill in the SharePoint site and select the list name

Add a New step - Add an action


Type in HTTP (*Note: this is a Custom API feature that gives you a lot more power).

Select the SharePoint action to update the list item.
Azure Function
Sign up for the Azure Functions service. Azure Functions are WebJobs underneath, and provisioning one will create an App Service in your Azure portal.

Provision the Azure Function App Service. If you already have an Azure subscription, the second image shows you how to get to Azure Functions via the Azure portal.

Rewrite the code
To make sure that everything is functioning, we will augment the code with enhanced functionality as I go on.

--- Note: the #r directive is a way of pulling external libraries into your Azure Function
#r "Newtonsoft.Json"

using System;
using System.Net;
using Newtonsoft.Json;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");

    string jsonContent = await req.Content.ReadAsStringAsync();
    dynamic data = JsonConvert.DeserializeObject(jsonContent);

    if (data.Amount == null || data.LegalTender == null) {
        return req.CreateResponse(HttpStatusCode.BadRequest, new {
            error = "Please pass Amount/LegalTender properties in the input object"
        });
    }

   double factor = 2.0;
    return req.CreateResponse(HttpStatusCode.OK, new {
        conversion = data.Amount * factor
    });

}
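The factor of 2.0 above is just a stand-in so the end-to-end flow can be tested; the Flow HTTP action would post a body such as {"Amount": 100, "LegalTender": "EUR"}. One possible way to augment the function (a sketch of mine, not the author's final code) is a per-currency lookup keyed off the LegalTender value:

//Sketch only (my addition, not the author's final code): replace the hard-coded
//factor with a per-currency lookup. The rates below are made-up sample values.
//(Add "using System.Collections.Generic;" at the top if it is not already available.)
static readonly Dictionary<string, double> rates = new Dictionary<string, double>
{
    { "USD", 1.00 },
    { "EUR", 0.95 },
    { "GBP", 0.80 }
};

//Inside Run, after the null checks, the body could become:
string legalTender = (string)data.LegalTender;
double factor;
if (!rates.TryGetValue(legalTender, out factor))
{
    return req.CreateResponse(HttpStatusCode.BadRequest, new {
        error = "Unsupported LegalTender value: " + legalTender
    });
}

return req.CreateResponse(HttpStatusCode.OK, new {
    conversion = (double)data.Amount * factor
});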

Monday, November 7, 2016

Azure Function and IBM BlueMix Auto Image Tagger

Lately I have been looking a lot into Azure Functions. I just completed my proof of concept (POC) for an idea that I had. Anyone who has ever dealt with trying to get users to tag documents with metadata knows that it's a challenge. We have all just thrown documents into the SharePoint ether with no metadata.

Metadata for documents has huge advantages: for starters, search works much better, and you can implement more advanced functionality.

Disclaimer - The code is provided as is and is in a POC state.
The main thing that I wanted to achieve is testing out the architectural principles.
Some of the code needs to be refactored to be dynamic for use in a production setting.
I'll do a second cut of this blog with more details about all the steps that I went through to get everything up and running.


Usage: Have a timer/cron process wake up periodically and auto-tag picture files in a designated picture library. For the timer process an Azure Function was used; it was perfect for this use case. For the auto tagging I used IBM's BlueMix platform visual recognition service. You send the service an image, and it uses AI to do image recognition and classify the image. After the results are returned, the classifications supplied are written back to the SharePoint image library.


IBM BlueMix Resources
  • Visual Recognition
    • http://www.ibm.com/watson/developercloud/visual-recognition.html
  • Watson API Explorer
    • http://www.ibm.com/watson/developercloud/visual-recognition/api/v3/#introduction
Azure Resources
  • Azure Functions Intro Site
    • https://azure.microsoft.com/en-us/services/functions/
  • Azure Functions Site
    • https://functions.azure.com/try?correlationId=a4f806de-be86-4732-88c2-a01525e1cc4e

Logical Architecture


Sequence Diagram

Code from Azure Function

#r "System.Xml.Linq"
#r "Newtonsoft.Json"

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Collections.Specialized;
using System.Collections.Generic;
using System.Xml;
using Newtonsoft.Json;
using SP=Microsoft.SharePoint.Client;


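//Maps the JSON document returned by the Watson Visual Recognition v3 classify call (images -> classifiers -> classes with scores)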
public class ImageClassification
{
    public class Images
    {
        public IList<classifierList> classifiers { get; set; }
        public string image { get; set; }
        public class classifierList
        {
            public class classListItem
            {
                [JsonProperty("class")]
                public string classItem { get; set; }
                public double score { get; set; }
            }
            public string classifier_id { get; set; }
            public string name { get; set; }
            public IList<classListItem> classes { get; set; }
        }

    }
    public int custom_classes { get; set; }
    public IList<Images> images { get; set; }
    public int images_processed { get; set; }
}

public class FileItem
{
    public int ID { get; set; }
    public string FileName { get; set; }
    public string FileGuid { get; set; }
    public string RelativeUrl { get; set; }
    public string Keywords { get; set; }
    public FileItem(int Id, string fileName, string fileGuid, string relativeUrl, string keywords)
    {
        ID = Id;
        FileName = fileName;
        FileGuid = fileGuid;
        RelativeUrl = relativeUrl;
        Keywords = keywords;
    }
}

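//Singleton-style holder for the SharePoint CSOM ClientContext and the credentials used to connect to the site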
public class SPContext
{
    private static Microsoft.SharePoint.Client.ClientContext clientCxt = null;
    public static string SiteUrl { get; set; }
    public static NetworkCredential SiteCredentials { get; set; }
    public static void SetInstance(string siteUrl,NetworkCredential siteCredentials)
    {
        SiteCredentials = siteCredentials;
        SiteUrl = siteUrl;
        if (clientCxt != null)
        {
            clientCxt.Dispose();
            clientCxt = null;
        }
        
        clientCxt = new Microsoft.SharePoint.Client.ClientContext(SiteUrl);
        clientCxt.Credentials = SiteCredentials;
    }

    private SPContext() { }

    public static Microsoft.SharePoint.Client.ClientContext GetInstance()
    {
        if (clientCxt == null)
        {
            clientCxt = new Microsoft.SharePoint.Client.ClientContext(SiteUrl);
            clientCxt.Credentials = SiteCredentials;
        }
        return clientCxt;
    }
    internal static string GetAbsoluteFileUrl(string spLibraryName, string fileName)
    {
        StringBuilder siteUrl = new StringBuilder( SiteUrl);
        if (!SiteUrl.EndsWith("/"))
        {
            siteUrl.Append("/");
        }
        siteUrl.Append("/" + spLibraryName + "/");
        siteUrl.Append(fileName);
        return siteUrl.ToString();
    }
}

public class ConfigData
{
    public string SPLibrary { get; set; }
    public TraceWriter Logger { get; set; }
    public string ClassificationServiceUrl { get; set; }
    private static ConfigData configData = null;

    private ConfigData() { }
    public static void Load(string library,string classServiceUrl)
    {
        if (configData == null)
        {
            configData = new ConfigData();
        }
        configData.ClassificationServiceUrl = classServiceUrl;
        configData.SPLibrary = library;
    }
    public static ConfigData GetInstance()
    {
        if (configData == null)
        {
            configData = new ConfigData();
        }
        return configData;
    }
}


private static void TagFile(FileItem fileItem,ImageClassification imgClassification)
{
    ConfigData configData = ConfigData.GetInstance();
    TraceWriter log = configData.Logger;

    log.Info("TagFile-Start:");
    log.Info("TagFile-File:"+fileItem.ID + " / "+ fileItem.FileName);
    log.Info("TagFile-Classification:"+imgClassification.images[0].classifiers[0].classes[0].classItem);
    SP.ClientContext clientContext = SPContext.GetInstance();
    SP.List oList = clientContext.Web.Lists.GetByTitle(configData.SPLibrary);
    SP.ListItem oListItem = oList.GetItemById(fileItem.ID);

    //Should add logic to check all of the classifications and scores;
    //if a score is below a certain threshold we should throw out that classification
    //(see the threshold sketch after this method).
    oListItem["Keywords"] = imgClassification.images[0].classifiers[0].classes[0].classItem;

    oListItem.Update();
    clientContext.ExecuteQuery();
    log.Info("TagFile-End: ");
}
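//--- Sketch (my addition, not in the original post): one way to apply the score threshold
//--- that the comment in TagFile suggests. The 0.5 cutoff mentioned below is an arbitrary example.
private static string GetKeywordsAboveThreshold(ImageClassification imgClassification, double threshold)
{
    List<string> keywords = new List<string>();
    foreach (ImageClassification.Images image in imgClassification.images)
    {
        foreach (ImageClassification.Images.classifierList classifier in image.classifiers)
        {
            foreach (ImageClassification.Images.classifierList.classListItem classItem in classifier.classes)
            {
                //keep only classifications whose score clears the threshold
                if (classItem.score >= threshold)
                {
                    keywords.Add(classItem.classItem);
                }
            }
        }
    }
    return string.Join(";", keywords);
}
//In TagFile the single-class assignment could then become:
//oListItem["Keywords"] = GetKeywordsAboveThreshold(imgClassification, 0.5);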

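//Posts the image bytes to the visual recognition classify endpoint as multipart/form-data and deserializes the JSON result into ImageClassification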
private static ImageClassification PostDataReturnClassifier(string webUrl, MemoryStream memBuffer, FileItem fileItem, NameValueCollection formFields = null)
{
    ConfigData config = ConfigData.GetInstance();
    TraceWriter log = config.Logger;
    log.Info("PostDataReturnClassifier-Start: ");
    /* Thanks for all the coders from Stack Overflow */
/* http://stackoverflow.com/questions/566462/upload-files-with-httpwebrequest-multipart-form-data */
/* http://stackoverflow.com/questions/1688855/httpwebrequest-c-sharp-uploading-a-file */
    string boundary = "----------------------------" + DateTime.Now.Ticks.ToString("x");
    
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webUrl);
    request.ContentType = "multipart/form-data; boundary=" + boundary;
    request.Method = "POST";
    //request.KeepAlive = true;
    request.ServicePoint.Expect100Continue = false;
    Stream memStream = new MemoryStream();

    var boundarybytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "\r\n");
    var endBoundaryBytes = System.Text.Encoding.ASCII.GetBytes("\r\n--" + boundary + "--");

/*
    #region Not in use right now
    string formdataTemplate = "\r\n--" + boundary + "\r\nContent-Disposition: form-data; name=\"{0}\";\r\n\r\n{1}";
    if (formFields != null)
    {
        foreach (string key in formFields.Keys)
        {
            string formitem = string.Format(formdataTemplate, key, formFields[key]);
            byte[] formitembytes = System.Text.Encoding.UTF8.GetBytes(formitem);
            memStream.Write(formitembytes, 0, formitembytes.Length);
        }
    }
    #endregion
*/
    string headerTemplate = "Content-Disposition: form-data; name=\"{0}\"; filename=\"{1}\"\r\n" + "Content-Type: image/jpeg\r\n\r\n";

    memStream.Write(boundarybytes, 2, boundarybytes.Length-2); //start at index two to skip the leading CRLF bytes - otherwise too many bytes!
    var header = string.Format(headerTemplate, "uplTheFile", fileItem.FileName);
    var headerbytes = System.Text.Encoding.UTF8.GetBytes(header);

    memStream.Write(headerbytes, 0, headerbytes.Length);

    Byte[] aryBytes = memBuffer.ToArray();
    memStream.Write(aryBytes, 0, aryBytes.Length);
    memStream.Write(endBoundaryBytes, 0, endBoundaryBytes.Length);
    request.ContentLength = memStream.Length;

    using (Stream requestStream = request.GetRequestStream())
    {
        memStream.Flush();
        memStream.Position = 0;
        byte[] tempBuffer = new byte[memStream.Length];
        memStream.Read(tempBuffer, 0, tempBuffer.Length);
        memStream.Close();
        requestStream.Write(tempBuffer, 0, tempBuffer.Length);
    }

    try
    {
  log.Info("PostDataReturnClassifier-Posting to Service: ");
        using (var response = request.GetResponse())
        {
            using (Stream streamRes = response.GetResponseStream())
            {
                using (StreamReader readResult = new StreamReader(streamRes))
                {
                    //string jsonResult = readResult.ReadToEnd();
                    JsonSerializer serializer = new JsonSerializer();
                    ImageClassification imgClass = (ImageClassification)serializer.Deserialize(readResult, typeof(ImageClassification));
                    log.Info("PostDataReturnClassifier-Result: "+imgClass.images[0].classifiers[0].classes[0].classItem);
                    return imgClass;
                }
            }
        }
    }
    catch(Exception e)
    {
        log.Info("PostDataReturnClassifier-Error: "+e.Message);
        return null;
    }
}
//Downloads the image file from the SharePoint library into a MemoryStream
private static MemoryStream DownloadItem(FileItem workItem)
{
    ConfigData config = ConfigData.GetInstance();
    TraceWriter log = config.Logger;
    log.Info("DownloadItem-Start: ");

    string webFileUrl = SPContext.GetAbsoluteFileUrl(config.SPLibrary, workItem.FileName);
    log.Info("DownloadItem-File: "+webFileUrl);
    WebRequest request = WebRequest.Create(webFileUrl);
    request.Credentials = SPContext.SiteCredentials;
    //request.AllowWriteStreamBuffering = true;
    request.Timeout = 30000; //this should come from a config setting
    MemoryStream memStream = new MemoryStream();
    log.Info("DownloadItem-Starting Download ");
    using (WebResponse response = request.GetResponse())
    {
        // Display the status.
        //Console.WriteLine(((HttpWebResponse)response).StatusDescription);
        // Get the stream containing content returned by the server.
        log.Info("DownloadItem-Getting Download ");
        using (Stream dataStream = response.GetResponseStream())
        {
            byte[] buffer = new byte[1024];
            int received = 0;

            int size = dataStream.Read(buffer, 0, buffer.Length);
            log.Info($"Got data: {DateTime.Now} bytes in buffer:" + size);

            while (size > 0)
            {
                memStream.Write(buffer, 0, size);
                received += size;
                size = dataStream.Read(buffer, 0, buffer.Length);
            }
            log.Info("DownloadItem-Downloaded bytes:"+received);
        }
    }

    memStream.Flush();
    memStream.Position = 0; //reposition the memory pointer
    return memStream;
}

private static List<FileItem> GetWorkItems()
{
    ConfigData config = ConfigData.GetInstance();
    TraceWriter log = config.Logger;
    SP.ClientContext clientContext = SPContext.GetInstance();

    SP.List oList = clientContext.Web.Lists.GetByTitle(config.SPLibrary);
    SP.CamlQuery camlQuery = new SP.CamlQuery();
    camlQuery.ViewXml = "<View><RowLimit>100</RowLimit></View>"; //limit the query to the first 100 items
    SP.ListItemCollection collListItem = oList.GetItems(camlQuery);
    clientContext.Load(collListItem);
    clientContext.ExecuteQuery();
    List<FileItem> workList = new List<FileItem>();
    string keyWords;
    foreach (SP.ListItem oListItem in collListItem)
    {
        log.Info("ID: "+ oListItem.Id +" \nFile Name: "+oListItem.FieldValues["FileLeafRef"]+" \nGUID: "+ oListItem.FieldValues["GUID"]);
        keyWords = string.Empty;

        //Process items that don't have the keywords set
        if (oListItem.FieldValues["Keywords"] == null)
        {
            workList.Add(new FileItem(oListItem.Id, oListItem.FieldValues["FileLeafRef"].ToString(), oListItem.FieldValues["GUID"].ToString(), oListItem.FieldValues["FileDirRef"].ToString(), keyWords));
        }
    }
    return workList;
}
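//--- Sketch (my suggestion, not in the original code): the CAML query above could filter
//--- untagged items on the server instead of checking Keywords inside the loop, e.g.:
//camlQuery.ViewXml =
//    "<View><Query><Where><IsNull><FieldRef Name='Keywords'/></IsNull></Where></Query>" +
//    "<RowLimit>100</RowLimit></View>";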

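//Main driver: loads config, connects to SharePoint, then downloads, classifies, and tags each image that has no Keywords yet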
public static void ProcessEngine()
{
    TraceWriter log = ConfigData.GetInstance().Logger;
    log.Info($"ProcessEngine-Enter: {DateTime.Now} ");
//Need to register with IBM BlueMix to get URL and apikey for visual-recognition service
    string watsonImgRecUrl = "https://gateway-EX.watsonplatform.net/visual-recognition/api/v3/classify?api_key=123456789&version=2016-05-20";
    log.Info($"ProcessEngine-: Loading Config data ");

    ConfigData.Load("pics", watsonImgRecUrl);
    ConfigData configData = ConfigData.GetInstance();

    log.Info($"ProcessEngine-: Setting SPContext Instance ");
 //this is for SharePoint on-premises, which uses IWA for authentication (will add other authentication schemes later)
    SPContext.SetInstance("http://SomeSharePointOnPremise.sample.com/site/SiteEx", new NetworkCredential("MyUserName", "MyPassword", "MyDomain"));
    
   log.Info($"ProcessEngine-: Getting Work Items "); 
    foreach(FileItem fileItem in GetWorkItems())
    {
        log.Info($"ProcessEngine-: Processing Work Item ... " + fileItem.FileName + " / "+ fileItem.FileGuid);

        //log what your processing
        log.Info($"ProcessEngine-: Downloading... ");
        //Download
        MemoryStream memStream = DownloadItem(fileItem);
        //Upload & classify 
        log.Info($"ProcessEngine-: Classifying... ");

        ImageClassification imageClassification = PostDataReturnClassifier(configData.ClassificationServiceUrl, memStream, fileItem, null);
        //Update list entry
        log.Info($"ProcessEngine-: Updating Metadata... ");
        TagFile(fileItem,imageClassification);
    }
    log.Info($"ProcessEngine-End: {DateTime.Now} ");
}


public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"Run-Start: {DateTime.Now} ");
    ConfigData configData = ConfigData.GetInstance();
    configData.Logger = log;
    ProcessEngine();   
    log.Info($"Run-End: {DateTime.Now} ");
}


Sunday, November 6, 2016

Okta Universal Directory - Consolidating AD samAccountName


Okta Universal Directory


Extending Okta Profile


In the Admin console go to the People tab

Click Profile Editor

Select Okta from the Profiles and click on the user link

Adding a Custom Attribute


Click the Add Attribute button





Adding the custom attribute to the Okta profile


This attribute is used to hold a single instance of samAccountName consolidated from the different AD domains.




 Populate the Custom Attribute



Click on Directories list

Click on a directory

Click on Map Attributes


Select the tab that specifies pushing data from the "directory" to Okta


 

Scroll down to the custom attribute


Click the Add Mapping drop down and select samAccountName


Click Save Mappings


When you're asked to apply the mappings to all users with this profile, select Apply update now


Do this for all directory instances.

Moving “shadow” service workloads to Serverless Functions

How many times have you had a job or process that needed to wake up, perform a unit of work, and then shut down? The IT landscape is littered with these services, batch jobs, and timer jobs throughout data centers, standalone servers, and personal PCs. These services become relegated to the back alleys of IT infrastructure, long forgotten until an event happens. An even worse offense is when these services are allocated dedicated resources, i.e. VMs or physical servers, and those resources sit under-utilized until the service executes.
The reason for labeling these services “shadow” is that they become forgotten and hidden in the shadows. Their locations and knowledge of their functionality become lost as subject matter experts (SMEs) move on. This becomes quite apparent when servers need to be decommissioned or migrated and no preparations have been made for these services. I have seen it numerous times: after servers are decommissioned or migrated, people scramble to track down where a service lived, get it back, and transfer it off the machine, only to have the same pattern reemerge later. Some of these “shadow” services are the perfect workloads to shift to Serverless functions.
Serverless functions can be invoked in different ways: event based, scheduled, or directly on demand (see the short trigger sketch after the list below). Shifting your workloads to Serverless functions allows services to align with the following principles:
  • Discoverable
  • Reusable
  • Faster time to delivery
  • Minimum resource consumption
  • Elasticity
  • Monitored
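To make the invocation styles concrete, here is a minimal, illustrative sketch using Azure Functions C# script signatures (the same style as the code in the posts above). Each function would live in its own folder with its trigger declared in function.json; the names and messages are placeholders:

using System.Net;

//Invoked on demand: an HTTP-triggered function, called by a user or another service.
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("Invoked over HTTP");
    return req.CreateResponse(HttpStatusCode.OK);
}

//Scheduled: a timer-triggered function that wakes up on a CRON-style schedule.
public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"Timer fired at {DateTime.Now}");
}

//Event based: a queue-triggered function that fires when a message arrives on a storage queue.
public static void Run(string myQueueItem, TraceWriter log)
{
    log.Info("Queue message: " + myQueueItem);
}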
Depending on the Serverless platform (Amazon Lambda, IBM OpenWhisk, or Microsoft Azure Functions), the functions are deployed to a portal. With a single point of deployment, the services become discoverable, allowing them to potentially be leveraged by other resources. Time to delivery also improves because developers don't need to wait for infrastructure teams to procure, build, and configure resources.
Capacity planning is part art, part science. When rolling out new functionality it is hard to get capacity planning right, and changes in the business and applications can also change the parameters of your capacity plan. This is where the elastic nature of Serverless functions shines: they can easily be scaled to meet demand, and once the demand passes the resources are scaled down and reallocated to other processes. As for cost, you only pay for what is utilized during the execution time of the function. Services that have dedicated resources can potentially go from CapEx + OpEx spend to just OpEx spend at a cheaper run rate. And because the functions are monitored, gathering usage metrics is an easier task; these metrics can be used for budgetary planning and forecasts.
To start, select a small proof of concept (POC) to build experience and gain confidence in using Serverless functions, and develop a best practice that works for your business. Building an application portfolio can help drive this effort and prioritize key systems. The portfolio can be leveraged to build system blueprints and system architecture diagrams: the blueprints show the interactions between systems at a logical level, while the architecture diagrams provide a detailed view of the application interactions. Gradually working through the system architecture will highlight key areas where Serverless functions can replace particular components or processes.

Rise of Serverless Functions




Recently I have been to various meetups and conferences around Serverless architecture. Lately this architecture has been gaining ground within the IT industry as IT makes a rapid ascent up the abstraction curve. Barriers that existed in the past are dissolving before our eyes: from virtualization, to cloud compute, to containers, and now Serverless. Serverless allows developers to deploy functional components of code to a cloud platform for execution. Once the function is complete, the resources for that function are reallocated to the next process. The developer's function now runs in a totally on-demand execution model.

If leveraged in the right way, this becomes an excellent value proposition for particular components of applications. All the developer needs to do is deploy their code; there is no need to worry about the infrastructure below, scaling, hosting, or any of the other traditional activities needed to get code running. The value proposition is that you're only charged for the execution time of your function. Batch services, event-based activities, and scheduled jobs are a prime sweet spot for this technology.

Architects and developers have to start taking a hard look at their applications to see which pieces can leverage this technology. Areas of an application that could see radical spikes from an influx of demand can leverage these functions to offload processing to a background event.

There are a growing number of players offering services in this space: Amazon AWS/Lambda, IBM OpenWhisk/Bluemix, and Microsoft Azure/Azure Functions. OpenWhisk can also be downloaded and run within your own environment, but it seems that the greatest value is achieved when IT doesn't have to be responsible for the underlying resources for particular workloads.