Cognitive Listening

For a decade now, we’ve lived in a “social” era – through networks such as LinkedIn, Facebook, and IBM Connections.  Social networks have given us the tools to engage in dialogue, share ideas, and find new information.  A simple example is the now-ubiquitous comments section seen on most websites.  Someone posts content, and someone writes back in the form of a comment.  Collectively, those comments reflect public perception, understanding, and support.  But to get a sense of the public’s reaction, you have to read through all of them.

This is where the emerging cognitive era can help.  Rather than manually reading the comments, one could simply use a cognitive service to classify the emotions of commenters.  One such service is IBM’s AlchemyAPI.  Below I’ve combined Alchemy’s emotion analysis with IBM Connections’ comments to generate a “social reaction” to my post.

Connections Sentiment

Admittedly, people aren’t really angry with my post – maybe it’s the exclamation marks being used by commenters … but you get the point.  In isolation, this is a neat trick.  But when you apply it on a larger scale, it gives you the ability to listen cognitively to the social network.  For example, an active forum of genuinely angry customers could trigger intervention by a customer support representative.  Or, combined with other services like concept extraction, it could tell us which areas of the company, initiative, or project employees are struggling with.  The possibilities and outcomes are substantial, which is why cognitive is more than just technology. It’s a new era of business and computing.

Getting Started

  1. Lots of information exists on using AlchemyAPI.  Start out by creating an account on Bluemix and adding the service.
  2. I used a tool called Greasemonkey to add the “Reaction widget” to IBM Connections Blog pages.  Think of Greasemonkey as a way of creating small, personal applications that run only in your browser.
  3. Adapt my widget below to experiment with content and APIs.
// ==UserScript==
// @name        Blog Entry Emotion Analyzer
// @namespace   http://demos.ibm.com
// @include     https://apps.*.collabserv.com/blogs/*/entry/*
// @include     https://w3-connections.ibm.com/blogs/*/entry/*
// @version     1
// @require     https://code.jquery.com/jquery-3.1.0.min.js
// @require     https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.2.2/Chart.min.js
// @grant       GM_xmlhttpRequest
// ==/UserScript==


console.log("Starting up Blog Entry Emotion Analyzer Widget");

// setup the widget in right side column
var sidebar = $( ".lotusColRight" )

if(sidebar.length){
  // any html added to DOM MUST USE SINGLE QUOTES
  sidebar.append("<div aria-expanded='true' name='reaction_section_mainpart' class='lotusSection' role='complementary' aria-label='Tone' aria-labelledby='section_reaction_label'><label class='lotusOffScreen' aria-live='polite' id='reaction_section_hint_label'>Expanded section</label><h2 style='cursor:default'><span id='section_authors_label' class='lotusLeft'>Reaction</span></h2><div id='section_reaction' class='lotusSectionBody'><span class='lotusBtn lotusLeft'><a id='analyzeButton' role='button' href='javascript:;'>Analyze Comments</a></span><canvas id='watsonChart' width='300' height='300'></canvas></div></div>");
  
  // attach an event handler to do the analysis when button is clicked
  $("#analyzeButton").click(function() {
    
    // inform the user something is happening
    $("#analyzeButton").text("Analyzing ...")
    
    // get the html of the blog entry
    var entryHtml = $("div.entryContentContainer");

    // get the html of the blog comments
    // var commentsHtml = $("#blogCommentPanel"); // Connections Cloud
    var commentsHtml = $( "div[dojoattachpoint='commentsAP']" ); // Connections on-prem

    // prefer analyzing the comments; fall back to the entry itself
    if(commentsHtml.length) {
      post(commentsHtml.html());

    } else if(entryHtml.length) {
      post(entryHtml.html());

    } else {
      console.error("Could not find comments or entry text; nothing to analyze");
    }
  });
} else {
  console.error("No sidebar found in HTML; can't add widget");
}

function post(html) {
  // any HTML text sent to AlchemyAPI needs to be encoded
  html = encodeURIComponent(html);
  
  console.log("Sending text to AlchemyAPI: " + html);
  
  // send the html to watson APIs
  GM_xmlhttpRequest({
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded"
    },
    url: "https://watson-api-explorer.mybluemix.net/alchemy-api/calls/html/HTMLGetEmotion",
    data: "apikey=<use your own API key>&outputMode=json&html=" + html,
    onload: function(response) {
      console.log(response.responseText);

      // received response; construct the widget
      createChart(response);
    },
    onerror: function(response) {
      console.error(response.responseText);
    }
  });
}
 
function createChart(response) {
  console.log("Creating chart");
  
  // remove the analyze button from the view
  $("#analyzeButton").hide();
  
  // convert the API response to JSON
  var json = JSON.parse(response.responseText);
  
  // set up the chart
  var ctx = $("#watsonChart");
  
  if(!ctx.length) {
    console.error("Canvas for chart not found");
  }
  
  var data = {
    labels: ["Anger", "Disgust", "Fear", "Joy", "Sadness"],
    datasets: [
      {
        label: "Sentiment",
        backgroundColor: [
          'rgba(255, 99, 132, 0.2)',
          'rgba(75, 192, 192, 0.2)',
          'rgba(153, 102, 255, 0.2)',
          'rgba(255, 206, 86, 0.2)',
          'rgba(54, 162, 235, 0.2)'

        ],
        borderColor: [
          'rgba(255,99,132,1)',
          'rgba(75, 192, 192, 1)',
          'rgba(153, 102, 255, 1)',
          'rgba(255, 206, 86, 1)',
          'rgba(54, 162, 235, 1)'

        ],
        borderWidth: 1,
        data: [
          json.docEmotions.anger,
          json.docEmotions.disgust,
          json.docEmotions.fear,
          json.docEmotions.joy,
          json.docEmotions.sadness,
        ],
      }
    ]
  };
  
  // add the chart to the view
  var myBarChart = new Chart(ctx, {
      type: 'horizontalBar',
      data: data,
      options: {
        title: {
          display: false
        },
        legend: {
          display: false
        }
      }
  });
}

Installing Greasemonkey Reaction Widget

  1. Launch your Firefox browser.
  2. Head over to the Greasemonkey addon page.
  3. Click the “Add to Firefox” button.
  4. You’ll then see a little monkey on the toolbar.
  5. Copy the script above to the clipboard.
  6. Click “Add New User Script”.
  7. Click “Use Script From Clipboard”.
  8. Change the script as needed.

New User Script

Greasemonkey Script

Greasemonkey Editor

How It Works

A few things to point out:

  • The top of the script defines where the “application” can run.  I’ve made it so that the widget is added on IBM Connections Cloud and IBM’s internal Connections deployment.  You should update the @include lines to reflect your server installation.  The @include directives also restrict the application to Blog entry pages; it does not currently run on wiki or forum pages, for example.
  • The script will add a button to the right sidebar.  Pressing the button invokes the AlchemyAPI.
  • The text sent to AlchemyAPI is obtained from the Comments section of the post.  All we’re doing here is grabbing the HTML from inside your browser and making an API call.  AlchemyAPI does the rest.
  • I’m using Chart.js to create the chart.  I’ve used it before on other blog posts.
  • The colors of the emotions in the chart are similar to the “Inside Out” characters. 😉

Inside Out

Happy coding!

Digital Experience and the Internet of Things

Recently I created a Digital Experience + Internet of Things concept.  It started when I found a really neat Internet of Things Foundation demo.  Give it a try, and you’ll see your smartphone moving in real-time using only your browser.

IoT Demo App

This demo is part of IBM’s Internet of Things Foundation – a service available on Bluemix, which is a platform that provides services like this along with “runtimes” to build applications.  A runtime might be a traditional Java server like WebSphere Liberty or a more modern runtime like Node.js.  Personally I enjoy writing applications in Node.js, but I’d rather build this application in IBM Digital Experience.  A few reasons why:

  • Security.  It seems like every Node.js sample application I see has some comment like /* your authentication code here */.  That stuff is hard … there’s a reason someone didn’t actually write it.
  • Context.  Say we build an IoT application where we can see data like speed, temperature, and movement in real time.  Alone that’s interesting, but if there’s a problem, a maintenance manual would be helpful.  Or maybe a list of online technicians with relevant skills to assist.  Combining context with content is powerful, but in the “app-only” model more scope means more code.
  • Integration.  An IoT platform is there to obtain and store information from the “things”.  To get that information, you need to use APIs – that’s an integration need.  And then there’s the parsing of data and web presentation and all the stuff – again – that needs to be as simple as possible.

Here’s the concept that I built.  It comes in two forms: an overview page with a list of devices deployed and a detail page that shows specifics about the device.

Let’s dissect the overview page.

IoT Overview Page

The chart and the table are web content.  And both make use of the Digital Data Connector (DDC).  In short, DDC fetches the data from the IoT platform.  It also parses the incoming data – JSON in this case.  What happens next is the really cool part.  The web developer doesn’t touch the data.  They use placeholders in their HTML to refer to the data – no parsing, no “where did this come from”.  They just know that if they use the “id” placeholder, the real value from the data will be shown.  Visually it looks like this: on the left is the web developer writing HTML in Digital Experience, and on the right is the raw data a developer using an API would see.  The arrows show how DDC bridges the two.
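To make the placeholder idea concrete, here’s a tiny sketch of the kind of substitution DDC performs on the developer’s behalf.  The {{…}} placeholder syntax and the device fields are illustrative only (DDC’s real placeholders use WCM tag syntax); the point is that the markup references named values and never parses the JSON itself.

```javascript
// Illustrative only: mimic DDC substituting named values from parsed JSON
// into a developer's markup. DDC's real placeholder syntax is WCM tags.
function render(template, device) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return key in device ? String(device[key]) : match;
  });
}

// Raw data as an API developer would see it (sample shape, not the real feed)
var device = { id: "vans-iphone", type: "iot-phone", status: "connected" };

// What the web developer writes: markup with placeholders, no parsing code
var template = "<tr><td>{{id}}</td><td>{{type}}</td><td>{{status}}</td></tr>";

console.log(render(template, device));
// <tr><td>vans-iphone</td><td>iot-phone</td><td>connected</td></tr>
```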

IoT DDC

What are some other fun facts?

  • The chart on the left uses DDC and web content too.  But in addition to the HTML markup seen in the screenshot, it also emits JavaScript code.  And that code works with a JavaScript library called Chart.js.
  • The data is being delivered directly to Digital Experience and not to the browser.  This feature is known as the Outbound HTTP Proxy (formerly AJAX Proxy).  It’s an important point because A) to the user, it’s all coming from Digital Experience and B) not all services will allow browser-to-service (CORS) communication.
  • The Proxy I mentioned also supports authentication to external services.  I was able to exploit the Bluemix demo easily because the user credentials were visible in the web app.  Conversely, Digital Experience allows me to pass the user’s credentials or a shared credential (Credential Vault) to the IoT platform from the server rather than the web app.  Just one more thing that made this easier.

Next, let’s look at the detail page (click it to animate).

IoT Live View

The web content you see on the left is contextual.  This is the example I gave earlier – based on what I’m seeing, what else might be helpful to display?

The graph you see with my phone moving up and down is data that is being sent from the IoT platform.  I re-used the Chart.js library from the other page to graph the data points in real-time.  And these data points are being sent via an MQTT Javascript client that is communicating with the IoT platform.

To build the MQTT client, I used IBM’s Script Portlet.  The Script Portlet allows me to write a simple web application using nothing more than a browser.  (It’s like 80 lines of code!)

IoT Script Portlet

But rather than use the web editor you see here, I developed the application locally.  This allowed me to use my favorite IDE.  When the app was ready, I simply pushed a button and published it in Digital Experience thanks to the local developer tools.

Now for the technical details.

To use DDC, I would suggest simply reading Stuart’s post on developerWorks.  Here are the properties needed for the WP List Rendering Profile Service.

IoT DDC Properties

Don’t forget to import the SSL certificate for Bluemix and to set the AJAX proxy digital_data_connector_policy URL; both are documented in the article.  To test the AJAX Proxy (Outbound HTTP Proxy), access the following URL.  You should get data back.

http://<your portal>/wps/proxy/https/play.internetofthings.ibmcloud.com/api/v0002/bulk/devices?_limit=10&hpaa.slotid=iot

The hpaa.slotid=iot is what adds the shared credentials from the Credential Vault to the request.
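As a sketch, that proxy URL can be derived mechanically from the target service URL.  The helper below is illustrative only (the portal host is a placeholder), following the /wps/proxy/&lt;scheme&gt;/&lt;host&gt;/&lt;path&gt; pattern from the example above.

```javascript
// Illustrative helper: rewrite a target service URL into the Outbound HTTP
// Proxy form shown above, appending the Credential Vault slot ID.
// "portal.example.com" is a placeholder for your portal host.
function proxyUrl(portal, target, slotId) {
  var u = new URL(target);
  var sep = u.search ? "&" : "?";
  return "http://" + portal + "/wps/proxy/" + u.protocol.replace(":", "") +
         "/" + u.host + u.pathname + u.search + sep + "hpaa.slotid=" + slotId;
}

console.log(proxyUrl(
  "portal.example.com",
  "https://play.internetofthings.ibmcloud.com/api/v0002/bulk/devices?_limit=10",
  "iot"
));
// http://portal.example.com/wps/proxy/https/play.internetofthings.ibmcloud.com/api/v0002/bulk/devices?_limit=10&hpaa.slotid=iot
```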

The MQTT Client application is the following.

<div style="display:none" data-script-portlet-original-tag="head">
  <script type="text/javascript" src="[Plugin:ScriptPortletElementURL element="js/require.js"]"></script>
  <script type="text/javascript">
    requirejs.config({
      baseUrl : "/"
    });

    require(["ibmiotf"], function(Client) {
      console.log("loaded IOTF library");

      var data = {
        labels: [],
        datasets: [
          {
            label: "Acceleration (Y)",
            fill: false,
            lineTension: 0.1,
            backgroundColor: "rgba(75,192,192,0.4)",
            borderColor: "rgba(75,192,192,1)",
            borderCapStyle: 'butt',
            borderDash: [],
            borderDashOffset: 0.0,
            borderJoinStyle: 'miter',
            pointBorderColor: "rgba(75,192,192,1)",
            pointBackgroundColor: "#fff",
            pointBorderWidth: 1,
            pointHoverRadius: 5,
            pointHoverBackgroundColor: "rgba(75,192,192,1)",
            pointHoverBorderColor: "rgba(220,220,220,1)",
            pointHoverBorderWidth: 2,
            pointRadius: 1,
            pointHitRadius: 10,
            responsive: false,
            data: []
          }
        ]
      };

      var ctx = document.getElementById("myChart");
      var myLineChart = new Chart(ctx, {
        type: 'line',
        data: data
      });

      var appClientConfig = {
        "org" : "play",
        "id" : "vans-iphone",
        "auth-key" : "<probably should get your own>",
        "auth-token" : "<ditto>"
      };

      var appClient = new Client.IotfApplication(appClientConfig);

      console.log("loaded IOTF client " + appClient);

      appClient.connect();

      appClient.on("connect", function () {
        console.log("connected");
        appClient.subscribeToDeviceEvents("iot-phone", "vans-iphone", "+", "json");
      });

      appClient.on("deviceEvent", function (deviceType, deviceId, eventType, format, payload) {
        console.log("Device Event from :: " + deviceType + " : " + deviceId + " of event " + eventType + " with payload : " + payload);
        var json = JSON.parse(payload);
        // this is a hack to ensure that when the device is offline the chart
        // does not push new data entries
        if (json.d.ay != data.datasets[0].data[data.datasets[0].data.length - 1]) {
          myLineChart.data.labels.push("");
          myLineChart.data.datasets[0].data.push(json.d.ay);
          myLineChart.update();
        }
      });
    });
  </script>
</div>
<div data-script-portlet-original-tag="body">
  <canvas id="myChart" width="300" height="150"></canvas>
</div>

Notice this snippet.

requirejs.config({
  baseUrl : "/"
});

I’m setting the base path where requirejs will look for the ibmiotf module.  This means the ibmiotf.js file must be at http://<webserver>/ibmiotf.js, for example.  In my setup, I placed it on the IBM HTTP Server (htdocs folder).  I did this because I had difficulty getting requirejs to play nicely with the Script Portlet.  The ibmiotf.js module can be found in the dist folder of /iot-nodejs on GitHub (iotf-client-bundle.min.js).

Chart.js was added as a theme module and profile.  This allowed me to simply change the profile of the overview and detail pages to include the charting functionality.  Be careful that OS files don’t sneak their way onto the server (._Chart.js seen in the screenshot).  This usually results in the theme code failing because there’s a foreign file it does not understand.

IoT Chart Theme Module

iot_profile

Happy coding!

New Way to Learn 2016

We just wrapped up our New Way to Learn tour for worldwide business partners.  If you missed it, we covered 133 technical and business sessions over two months.  IBM Business Partners can access all the sessions in the #NWTL community on Connections Cloud.  Sessions that I presented are listed below from my SlideShare.

Introduction to Single Sign-On

Single Sign-On with Active Directory

Introduction to the Social Business Toolkit

IBM Digital Experience Theme Customization

Building Collaborative Document Solutions with Connections Docs 2.0 and Box

 

Building Collaborative Document Solutions with Connections Docs 2.0 and Box

Welcome back!  In previous posts, we explored the new 3rd party support in Connections Docs 2.0 and built a custom module to communicate with SharePoint 2013.  This time, we’ll be integrating with our friends at Box.  What’s nice about building this integration is that it raises production-level questions:

  • How do I architect a hybrid (cloud and on-prem) solution?
  • How will authentication work (e.g. OAuth2, cookies, SAML)?
  • What’s the best way to initiate a co-editing session?  (e.g. build my own solution or use Box’s extension points)

The result is a solution that behaves like so.

Download Video: MP4

Admittedly, there’s one step I don’t like.  It’s the login to Docs.  I wrote the solution, but there’s one small issue I’m working through.  Once resolved, I’ll update this post.

The steps you see in the video have the following sequence.

Box Architecture

  1. User begins in Box and chooses the file to be edited in Connections Docs.
  2. Box sends the user to the Connections Docs server.  And in doing so, Box provides Docs with needed information like the file ID and the OAuth code.
  3. The user’s browser then connects to the Docs server (you’ll see the address change in the browser).
  4. The Docs application uses the OAuth code to retrieve an OAuth token directly from Box.  This token allows Docs to act on behalf of the Box user (e.g. downloading and uploading the file).
  5. The Docs application connects to my custom code.  This is needed because Box doesn’t yet know how to give the file to Docs.  So my code acts as that bridge.
  6. The file is downloaded from Box and opened in the Docs editor.
  7. On save, the file is then uploaded back to Box.
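Step 4, the code-for-token exchange, is a standard OAuth2 authorization-code grant.  As a sketch, the body of the POST that Docs sends to Box’s token endpoint looks like the following; the code, client ID, and secret values here are placeholders.

```javascript
// Build the form body for the OAuth2 authorization-code exchange.
// All parameter values below are placeholders.
function tokenRequestBody(code, clientId, clientSecret) {
  return new URLSearchParams({
    grant_type: "authorization_code",
    code: code,
    client_id: clientId,
    client_secret: clientSecret
  }).toString();
}

// The actual request (not executed here) would be:
//   POST https://app.box.com/api/oauth2/token
//   Content-Type: application/x-www-form-urlencoded
console.log(tokenRequestBody("abc123", "my_client_id", "my_client_secret"));
// grant_type=authorization_code&code=abc123&client_id=my_client_id&client_secret=my_client_secret
```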

So what does it take to do all this?

Box SDK

Box has an easy-to-use Java SDK.  I’ll be using it to do most of the work: downloading a file, getting the metadata, getting information about the user, etc.  This keeps my code minimal.  For example, here’s everything needed to download and upload the file.

private BoxAPIConnection getApi() {
  String bearer = settings.get("Authorization");

  if (bearer != null && !bearer.equals("")) {
    bearer = bearer.substring(bearer.indexOf("Bearer") + 7);
    logger.finest("Using Box access token " + bearer);
    return new BoxAPIConnection(bearer);
  } else {
    logger.severe("No Authorization header found; OAuth not possible");
  }

  return null;
}

@Override
public void open(String fileId, OutputStream out) throws IOException {
  logger.entering(BoxRepository.class.getName(), "open");

  BoxFile boxFile = new BoxFile(getApi(), fileId);
  BoxFile.Info info = boxFile.getInfo();

  logger.fine("Retrieved " + fileId + " from Box " + info.getID());

  boxFile.download(out);

  logger.exiting(BoxRepository.class.getName(), "open");
}

@Override
public void save(String fileId, InputStream in) throws IOException {
  logger.entering(BoxRepository.class.getName(), "save");

  BoxFile boxFile = new BoxFile(getApi(), fileId);
  boxFile.uploadVersion(in);

  logger.exiting(BoxRepository.class.getName(), "save");
}

@Override
public JSONObject getMeta(String fileId) {
  logger.entering(BoxRepository.class.getName(), "getMeta");

  JSONObject o = new JSONObject();

  BoxFile boxFile = new BoxFile(getApi(), fileId);
  BoxFile.Info info = boxFile.getInfo();

  if (info != null) {
    o.put(DocRepository.ID, fileId);
    o.put(DocRepository.VERSION, info.getVersion().getID());
    // TODO : Confirm works as expected
    if (info.getExtension() != null) {
      o.put(DocRepository.MIME, info.getExtension());
    }
    // FIXME : What if the extension is not present in name
    o.put(DocRepository.NAME, info.getName());
    o.put(DocRepository.DESCRIPTION, info.getDescription());
    o.put(DocRepository.SIZE, info.getSize());
    o.put(DocRepository.CREATED,
        DocRepositoryUtil.formatTime(info.getContentCreatedAt().getTime()));
    o.put(DocRepository.MODIFIED,
        DocRepositoryUtil.formatTime(info.getContentModifiedAt().getTime()));

    // created_by
    JSONObject c = new JSONObject();
    c.put(DocRepository.ID, info.getCreatedBy().getID());
    c.put(DocRepository.NAME, info.getCreatedBy().getName());
    c.put(DocRepository.EMAIL, info.getCreatedBy().getLogin());
    o.put(DocRepository.CREATED_BY, c);

    // modified_by
    JSONObject m = new JSONObject();
    m.put(DocRepository.ID, info.getModifiedBy().getID());
    m.put(DocRepository.NAME, info.getModifiedBy().getName());
    m.put(DocRepository.EMAIL, info.getModifiedBy().getLogin());
    o.put(DocRepository.MODIFIED_BY, m);

    // permissions
    JSONObject p = new JSONObject();
    EnumSet<Permission> permissions = info.getPermissions();

    if (permissions != null) {
      p.put(DocRepository.READ,
          String.valueOf(permissions.contains(Permission.CAN_DOWNLOAD))); // String not bool!
      p.put(DocRepository.WRITE,
          String.valueOf(permissions.contains(Permission.CAN_UPLOAD))); // String not bool!
    } else {
      // FIXME : does this mean it's not shared or it's owned
      p.put(DocRepository.READ, "true");
      p.put(DocRepository.WRITE, "true");
    }

    o.put(DocRepository.PERMISSIONS, p);
  }

  logger.exiting(BoxRepository.class.getName(), "getMeta");

  return o;
}

To use the SDK, I’ve placed box-java-sdk-2.0.0 and its Maven dependencies inside F:\IBM\Docs\WebSphere\AppServer\lib\ext.  The ext folder is part of WebSphere’s classloader lookup.  Since the various JARs total ~4MB, you can place them in the ext folder rather than inside your web application.

OAuth2

You may have noticed that in my getApi() function, I’m using the Authorization header.  This is my authentication mechanism.  Box tells Docs who the user is by way of the OAuth code.  Docs then exchanges the code for a token, and that token is used in my code.  You also saw me log in to Docs.  I think this is unnecessary, and I’m currently doing it because I’m having a different issue.  Once I resolve it, the OAuth token will be the user’s identity, and Docs will use it to retrieve additional user information from Box.

Box OAuth

Box has a good write up on configuring OAuth on their end.  Read it here.  After setting up my application on Box, I need to get the OAuth information to configure Docs.

Box OAuth

Box assigns the client_id and client_secret for you.  Add the URL to Docs in the redirect_uri, for example https://docs.demos.ibm.com/docs/driverscallback.  One point of caution: be consistent when entering this URL in various places.  OAuth will fail if the redirect_uri here is different from the one Docs sends.  Check your concord-config.json file for entries like

"docs_callback_endpoint" : "https://docs.demos.ibm.com/docs/driverscallback"

Whatever you have here should match the redirect_uri (or vice versa).

Docs OAuth

Next we need to configure Docs to use the client_id and client_secret values.  See the article here on doing so.  Admittedly, I did not get this right the first time.  Here are the examples from IBM.

./wsadmin.sh -lang jython -user xx -password xx -f customer_credential_mgr.py -action
add -customer docs.demos.ibm.com -key oauth2_client_id -value
"l7xxf61984f99f404575a781d47c6bfebdca"
./wsadmin.sh -lang jython -user xx -password xx -f customer_credential_mgr.py -action
add -customer docs.demos.ibm.com -key oauth2_client_secret -value
"cc692ce34451418e86d9b231ee34af65"

Some helpful points:

  • The value used for customer should match the “customer_id” : “docs.demos.ibm.com” entries in concord-config.json and viewer-config.json.
  • You will issue two wsadmin commands to store oauth2_client_id and oauth2_client_secret.  In the beta, I think this was documented differently.
  • The wsadmin commands are storing data in CONCORDDB.CUSTOMER_CREDENTIAL should you need to verify.

You’ll also need to confirm that concord-config.json and viewer-config.json are properly set to use OAuth.  This is done in the following sections (concord-config.json on DMgr01 as the example).

{
  "id" : "external.rest",
  "class" : "com.ibm.docs.repository.external.rest.ExternalRestRepository",
  "config" : {
    "s2s_method" : "oauth2",
    "customer_id" : "docs.demos.ibm.com",
    "oauth2_endpoint" : "https://app.box.com/api/oauth2/token",
    "j2c_alias" : "",
    "s2s_token" : "123456789",
    "token_key" : "docstoken",
    "onbehalf_header" : "docs-user",
    "media_meta_url" : "http://docs.demos.ibm.com/mydocs/DocServlet?id={ID}&mode=meta",
    "media_get_url" : "http://docs.demos.ibm.com/mydocs/DocServlet?id={ID}&mode=content",
    "media_set_url" : "http://docs.demos.ibm.com/mydocs/DocServlet?id={ID}&mode=content",
    "docs_callback_endpoint" : "https://docs.demos.ibm.com/docs/driverscallback",
    "repository_home" : "http://docs.demos.ibm.com/mydocs/DocServlet"
  }
}
{
  "id" : "external.rest",
  "class" : "com.ibm.docs.authentication.filters.ExternalAuth",
  "config" : {
    "s2s_method" : "oauth2"
  }
}
{
  "id" : "external.rest",
  "class" : "com.ibm.docs.directory.external.ExternalDirectory",
  "config" : {
    "profiles_url" : "",
    "s2s_method" : "oauth2",
    "customer_id" : "docs.demos.ibm.com",
    "oauth2_endpoint" : "https://app.box.com/api/oauth2/token",
    "j2c_alias" : "",
    "s2s_token" : "123456789",
    "token_key" : "docstoken",
    "onbehalf_header" : "docs-user",
    "docs_callback_endpoint" : "https://docs.demos.ibm.com/docs/driverscallback",
    "keys" : {
      "id_key" : "id",
      "name_key" : "name",
      "display_name_key" : "display_name",
      "email_key" : "email",
      "photo_url_key" : "photo_url",
      "org_id_key" : "org_id",
      "url_query_key" : "userid"
    },
    "bypass_sso" : "false"
  }
}

Note that the oauth2_endpoint is https://app.box.com/api/oauth2/token.  This is the second step in the OAuth dance – not the first.

SSL

I’ve used SSL (https) in a few places – most notably for the Box OAuth endpoint.  Docs (i.e. WebSphere) will be going out to Box to exchange the OAuth code for a token.  To do that over SSL, we must import the SSL certificates into WebSphere.  We also need to do this because, as you see in the architecture diagram, the Box SDK connects to api.box.com and upload.box.com over SSL.  If you do not do this, you will see SSL handshake exceptions in the log.

The steps to do so are documented in various places, for example here.  I’ve imported the certificates into the Cell Default Trust store for app.box.com, upload.box.com, and api.box.com.  Just do all three as a best practice – today Box uses the same cert for two of their servers, which is why you only see two certs listed in my trust store.

Box SSL

Box SSL 2

X-FRAME-OPTIONS

OMG there’s more?  Almost done.  But this one really confounded me.  Box is going to open the Docs app inside an IFrame.  For that to work correctly, we need certain security settings applied in the Docs HTML response.  I knew why the problem existed (the developer console in Firefox or Chrome will tell you it’s an issue), but I didn’t know how to resolve it.  Basically, we need to change the X-FRAME-OPTIONS: SAMEORIGIN header to ALLOW-FROM https://app.box.com.  All those blog posts that say you can do this with an Apache unset header are wrong; it just didn’t work.  Fortunately (after a lot of decompiling), I found an undocumented way.  Add the following before the last brace at the end of your concord-config.json.

"x-frame-options" : {
  "allow_option" : "ALLOW-FROM",
  "allow_uri" : "https://app.box.com"
}

This will ensure that the header is set from the Docs application code.
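A quick way to sanity-check the result is to inspect the X-Frame-Options header on a Docs response in the browser’s developer tools.  The helper below is only an illustration of that check, not part of the solution.

```javascript
// Illustrative check: does an X-Frame-Options value permit framing
// from the given origin?
function allowsFraming(xFrameOptions, origin) {
  var parts = xFrameOptions.trim().split(/\s+/);
  return parts[0].toUpperCase() === "ALLOW-FROM" && parts[1] === origin;
}

console.log(allowsFraming("ALLOW-FROM https://app.box.com", "https://app.box.com")); // true
console.log(allowsFraming("SAMEORIGIN", "https://app.box.com")); // false
```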

Restart Docs and test.

This example was not easy.  So virtual high five for me, and if you run into issues, post a comment.

Happy coding …

Building Collaborative Document Solutions with Connections Docs 2.0 and SharePoint 2013

In a previous blog entry, I explored Connections Docs 2.0’s 3rd party support.  In this post, I’ll actually build support for a 3rd party – SharePoint 2013.  This post is pretty technical and is meant to provide working example code. If you have questions, feel free to leave a comment.

Getting Started

A few days ago, I had never used SharePoint.  But I was asked to connect Connections Docs and SharePoint 2013.  And where there’s an API, there’s a way.  SharePoint 2013 has RESTful APIs to get and store documents – specifically the Files and Folders API as documented on MSDN.

Let’s review.  To integrate a 3rd party we simply need to provide information about a file (metadata) and the file itself (binary).  We can use two SharePoint REST endpoints to do just that.

<app web url>/_api/web/getfilebyserverrelativeurl('/Shared Documents/filename.docx')
<app web url>/_api/web/getfilebyserverrelativeurl('/Shared Documents/filename.txt')/$value

SharePoint has a peculiar file identifier; it has spaces and slashes – just like a path on your desktop. Using “/Shared Documents/filename.txt” as an example, we’d end up with the following Docs URL.

http://docs.demos.ibm.com/docs/app/doc/external.rest//Shared%20Documents/filename.docx/edit/content

See those double slashes or notice that we have /filename.docx/edit/content?  Docs is going to fail if given this URL.

One possible solution is to encode the file identifier so that it doesn’t contain slashes or spaces.  I’ll use base64 encoding to demonstrate.

Docs Adapters

First, base64 encode the file ID (in green).  The result is a human unreadable value (in blue), but it will preserve the slashes.  I’m also going to slip in my adapter name (in orange).  This way my code can simultaneously get the correct file ID as well as the intended backend.  If we want a different backend, we just change the adapter name.  This approach also gives you a way to pass additional parameters to the 3rd party.  Pretty flexible.
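Here’s a minimal sketch of that ID scheme, assuming a hypothetical “sharepoint” adapter name and a “.” separator.  Note it uses a URL-safe base64 variant, since standard base64 output can itself contain “/” and “+” characters.

```javascript
// Encode: "<adapter>.<url-safe base64 of file path>"
// The "sharepoint" adapter name and "." separator are assumptions for
// illustration, not the article's actual implementation.
function encodeFileId(adapter, filePath) {
  var b64 = Buffer.from(filePath, "utf8").toString("base64");
  // make it URL-safe: no '/', '+', or '=' in the encoded ID
  return adapter + "." + b64.replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

function decodeFileId(encoded) {
  var dot = encoded.indexOf(".");
  var b64 = encoded.substring(dot + 1).replace(/-/g, "+").replace(/_/g, "/");
  return {
    adapter: encoded.substring(0, dot),
    filePath: Buffer.from(b64, "base64").toString("utf8")
  };
}

var id = encodeFileId("sharepoint", "/Shared Documents/filename.txt");
console.log(id); // no slashes or spaces, safe to embed in the Docs URL

var decoded = decodeFileId(id);
console.log(decoded.filePath); // /Shared Documents/filename.txt
```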

Apache HTTPComponents Pitfall

I’ll be using the Apache HTTPComponents library to help with the REST communication.

Regarding SharePoint authentication, I’m simply supplying a username and password.  This is similar to basic authentication, but Microsoft does things a bit differently.  And this led to some technical frustration.

Docs ships with HTTPComponents 4.1.1.  Unfortunately, version 4.1.1 has a bug in NTCredentials, which is how we need to authenticate to SharePoint.  And the latest version of HTTPComponents will not work (you’ll get issues due to Docs loading the older classes).  So I’m using httpclient-4.2.6 and httpcore-4.2.5.  You can get these from Maven.

If you’re using OAuth as the authentication strategy, YMMV.

Adapter Design

I have a few integration examples in development.  To handle multiple 3rd parties, my code uses an adapter pattern (in software engineering speak).  The mechanics of my core code and Docs don’t vary.  What varies is the 3rd party.  And the adapter pattern allows me to delegate how communication to the 3rd party is done.

Even though every adapter is different, each is going to need to do the same high level operations.

  1. Get the metadata about a file
  2. Open the file from the 3rd party
  3. Save the file to the 3rd party

So I’ve created an abstract class DocRepository to define these requirements.

abstract public void open(String fileId, OutputStream out) throws IOException;
abstract public void save(String fileId, InputStream in) throws IOException;
abstract public JSONObject getMeta(String fileId);

Notice the InputStream and OutputStream. These connect to the Docs server.  The OutputStream is used to write the file to Docs.  The InputStream is used to read the file from Docs.  The adapter’s job is to know how to speak “3rd party” – handling connections, authentication, error checking, etc.
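As a minimal illustration of what an adapter does with those streams, here’s the copy idea in isolation: whatever bytes come from the 3rd party are piped straight through to the stream Docs hands us. (The real code below uses IOUtils.toByteArray to the same effect; the class and method names here are my own.)

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal sketch of the adapter's piping job: read from the 3rd party
// repository, write to the stream provided by Docs.
public class StreamPipe {

    static void pipe(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }

    public static void main(String[] args) throws IOException {
        // stand-ins for the repository response and the Docs stream
        InputStream fromRepository =
                new ByteArrayInputStream("file contents".getBytes());
        ByteArrayOutputStream toDocs = new ByteArrayOutputStream();
        pipe(fromRepository, toDocs);
        System.out.println(toDocs.toString()); // prints "file contents"
    }
}
```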

SharePoint Adapter

Setting Up the HTTP Client

Let’s take a look at the code that communicates with SharePoint and sets up authentication, the HttpClient.

 private HttpClient getHttpClient() throws IOException {
     // TODO : You will want to do something more consumable; perhaps OAuth
     // figure out the user's password
     Properties props = new Properties();
     InputStream stream = getClass().getResourceAsStream(
             "/com/ibm/demos/docs/ext/SharePointUsers.properties");
     props.load(stream);

     // FIXME: a bit of a hack just to show functionality
     // use vstaub and not vstaub@demos.ibm.com
     String user = settings.get(settings.get(ONBEHALF_HEADER));
     user = user.substring(0, user.indexOf("@"));

     String password = (String) props.get(user);

     // set up the client with NTLM authentication
     CredentialsProvider credsProvider = new BasicCredentialsProvider();
     credsProvider.setCredentials(
             new AuthScope(AuthScope.ANY),
             // map the user to a stored password
             new NTCredentials(user, password, settings.get(HOST), "domain"));

     DefaultHttpClient httpClient = new DefaultHttpClient();
     httpClient.setCredentialsProvider(credsProvider);

     return httpClient;
 }

How does it know whom to authenticate?  Recall that Docs will make a REST request to your implementation (the 3rd party).  This request carries some headers with it.  Here’s a printout of what my servlet receives.

[1/27/16 11:57:57:223 EST] 000000c9 DocServlet 1 Incoming request http://docs.demos.ibm.com/mydocs/DocServlet
[1/27/16 11:57:57:223 EST] 000000c9 DocServlet 1 docstoken=123456789
[1/27/16 11:57:57:223 EST] 000000c9 DocServlet 1 docs-user=vstaub@demos.ibm.com
[1/27/16 11:57:57:223 EST] 000000c9 DocServlet 1 User-Agent=Jakarta Commons-HttpClient/3.1
[1/27/16 11:57:57:223 EST] 000000c9 DocServlet 1 Host=docs.demos.ibm.com

Notice the docs-user header.  This is how Docs informs the 3rd party of the user’s identity.  I’m logged in as vstaub@demos.ibm.com on the Docs server.  My code can then take the value vstaub@demos.ibm.com and use it to authenticate me with SharePoint. I’ve chosen to do this by looking up my password in a properties file.

“And I’m supposed to just trust that this docs-user is who he says he is?” Yep, but also notice the docstoken header.  This is a server-to-server (s2s) secret the 3rd party can use to validate the incoming request.  Keep in mind that this is Docs server to 3rd party interaction – not the user’s browser.  But if you need more assurances, there are other mechanisms – like using cookies rather than an s2s token.  See the documentation for more details.
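To sketch the validation idea (the class and method names here are hypothetical, not from the Docs sample): before trusting docs-user, the 3rd party compares the docstoken header against its own configured secret and rejects the request on a mismatch. The header names and values are the ones shown in the log above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of s2s validation: only trust the docs-user header
// when the shared docstoken secret checks out.
public class S2sValidator {

    private final String expectedToken;

    public S2sValidator(String expectedToken) {
        this.expectedToken = expectedToken;
    }

    /** Returns the trusted user id, or null if the token check fails. */
    public String authenticate(Map<String, String> headers) {
        String token = headers.get("docstoken");
        if (token == null || !token.equals(expectedToken)) {
            return null; // reject the request (e.g. respond with a 401)
        }
        return headers.get("docs-user");
    }

    public static void main(String[] args) {
        S2sValidator validator = new S2sValidator("123456789");
        Map<String, String> headers = new HashMap<>();
        headers.put("docstoken", "123456789");
        headers.put("docs-user", "vstaub@demos.ibm.com");
        System.out.println(validator.authenticate(headers));
    }
}
```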

** Update ** During a fresh deployment, I saw an error related to NTLM scheme not supported by the HttpClient. I did not see this on my server, but there were many code revisions, and it’s possible that something was stuck in the classloader.  If you run into this, you may want to add this line to the HttpClient.

httpClient.getAuthSchemes().register(AuthPolicy.NTLM, new NTLMSchemeFactory());

Getting Metadata from SharePoint

Next let’s look at the code that obtains metadata about the file.  (The code is incomplete – for example, the modification details and permissions are stubbed).

public JSONObject getMeta(String fileId) {
     logger.entering(SharePointRepository.class.getName(), "getMeta");

     String filename = DocRepositoryUtil.encodeSpaces(
             DocRepositoryUtil.getFilename(fileId));

     String url = settings.get(HOST)
             + "/_api/web/getfilebyserverrelativeurl('"
             + filename + "')";

     JSONObject o = new JSONObject();
     JSONObject d = getJson(url, Request.GET);

     if (d != null) {
         // ID is set by com.ibm.demos.docs.DocServlet
         // o.put(DocRepository.ID, "<ID>");

         // o.put(DocRepository.MIME, "application/msword");
         // if the extension is not in name, set mime above
         o.put(DocRepository.NAME, DocRepositoryUtil.getFilename(fileId));
         o.put(DocRepository.VERSION,
                 d.get("MajorVersion") + "." + d.get("MinorVersion"));
         o.put(DocRepository.DESCRIPTION, d.get("Name"));
         o.put(DocRepository.SIZE, d.get("Length"));
         o.put(DocRepository.CREATED, d.get("TimeCreated"));
         o.put(DocRepository.MODIFIED, d.get("TimeLastModified"));

         // created_by
         JSONObject c = new JSONObject();
         c.put(DocRepository.ID, "sdaryn");
         c.put(DocRepository.NAME, "Samantha Daryn");
         c.put(DocRepository.EMAIL, "sdaryn@demos.ibm.com");
         o.put(DocRepository.CREATED_BY, c);

         // modified_by
         JSONObject m = new JSONObject();
         m.put(DocRepository.ID, "sdaryn");
         m.put(DocRepository.NAME, "Samantha Daryn");
         m.put(DocRepository.EMAIL, "sdaryn@demos.ibm.com");
         o.put(DocRepository.MODIFIED_BY, m);

         // permissions
         JSONObject p = new JSONObject();
         p.put(DocRepository.READ, "true"); // String not bool!
         p.put(DocRepository.WRITE, "true"); // String not bool!
         o.put(DocRepository.PERMISSIONS, p);
     }

     logger.exiting(SharePointRepository.class.getName(), "getMeta");

     return o;
 }

Getting the actual data from SharePoint is done by getJson().

 private JSONObject getJson(String url, Request type) {
     JSONObject d = null;

     try {
         HttpClient httpClient = getHttpClient();

         if (httpClient != null) {
             try {
                 HttpRequestBase request = null;

                 switch (type) {
                 default:
                 case GET:
                     request = new HttpGet(url);
                     break;
                 case POST:
                     request = new HttpPost(url);
                     break;
                 }

                 request.setHeader("accept",
                         "application/json; odata=verbose");

                 logger.finest("Executing request "
                         + request.getRequestLine());

                 HttpResponse response = httpClient.execute(request);

                 String json = EntityUtils.toString(response.getEntity());
                 logger.finest("Received data from SharePoint " + json);
                 d = JSONObject.parse(json);
                 d = (JSONObject) d.get("d"); // nested object in d:

             } finally {
                 httpClient.getConnectionManager().shutdown();
             }
         }
     } catch (Exception e) {
         e.printStackTrace();
     }

     return d;
 }

Reading the File from SharePoint

The code to obtain a file is actually quite succinct.  Notice that I’m reading the data from SharePoint and simply writing it to the OutputStream.  This effectively pipes data from SharePoint to the Docs server.

 public void open(String fileId, OutputStream out) throws IOException {
     logger.entering(SharePointRepository.class.getName(), "open");

     String filename = DocRepositoryUtil.encodeSpaces(
             DocRepositoryUtil.getFilename(fileId));

     HttpClient httpClient = getHttpClient();

     if (httpClient != null) {
         try {
             HttpGet get = new HttpGet(settings.get(HOST)
                     + "/_api/web/getfilebyserverrelativeurl('" + filename
                     + "')/$value");

             logger.finest("Executing request " + get.getRequestLine());

             HttpResponse response = httpClient.execute(get);

             logger.finest(response.getStatusLine().toString());

             // pipe the file data to the output
             out.write(IOUtils.toByteArray(response.getEntity()
                     .getContent()));
         } finally {
             httpClient.getConnectionManager().shutdown();
         }
     }

     logger.exiting(SharePointRepository.class.getName(), "open");
 }

Writing the File to SharePoint

And finally, the write operation is a combination of the code seen earlier.  We call getJson() to obtain a form digest from SharePoint, which is then sent as a header in the POST to SharePoint.

public void save(String fileId, InputStream in) throws IOException {
     logger.entering(SharePointRepository.class.getName(), "save");

     String filename = DocRepositoryUtil.encodeSpaces(
             DocRepositoryUtil.getFilename(fileId));

     // 1: Get the Digest for the POST
     JSONObject context = (JSONObject) getJson(
             settings.get(HOST) + "/_api/contextinfo", Request.POST).get(
             "GetContextWebInformation");
     String digest = (String) context.get("FormDigestValue");

     // 2: Send the data in POST
     HttpClient httpClient = getHttpClient();

     if (httpClient != null) {
         try {
             HttpPost post = new HttpPost(settings.get(HOST)
                     + "/_api/web/getfilebyserverrelativeurl('" + filename
                     + "')/$value");
             post.setHeader("X-HTTP-Method", "PUT");
             post.setHeader("X-RequestDigest", digest);

             ByteArrayEntity entity = new ByteArrayEntity(
                     IOUtils.toByteArray(in));

             post.setEntity(entity); // pipe the input

             logger.finest("Executing request " + post.getRequestLine());

             HttpResponse response = httpClient.execute(post);

             logger.finest(response.getStatusLine().toString());
         } finally {
             httpClient.getConnectionManager().shutdown();
         }
     }

     logger.exiting(SharePointRepository.class.getName(), "save");
 }

And that’s it.  It may seem like a lot of code, but it’s not.  In fact, in my next post, we’ll see just how succinct this interaction can be.

In the meantime, happy coding!

It’s disruptive! No, it’s probably not.

Van's Disruption Meme

Nearly a decade after its initial introduction by Bower and Christensen, I first learned about disruption theory as a student in business school.  And whether it’s my own newfound understanding or journalistic overuse, the term disruption seems to be everywhere.  But is disruption really everywhere?

“Disruption” describes a process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses.

To most of us, disruption is simply the overtaking of corporate Goliath by startup David.  Is Tesla disruptive?  Is Uber disruptive?  Again to most, the answer is probably, “Of course.”

But there’s more to disruption than the outcome between market winners and losers. In his recent HBR article, Christensen summarizes his seminal theory and analyzes whether Uber is disrupting the taxi industry. If you’ve never investigated what really makes something disruptive, read the articles in the links.  You may find what you once thought of as disruptive – well – isn’t.

Using cURL with IBM Connections Cloud

I love Java. But there are times that writing a program is more work than it’s worth.  And to the novice, trying to get set up with a JVM, IDE, etc only adds to the time commitment.

So I re-introduce you to cURL (I’ve mentioned it a few times on the blog).  What is cURL?  It’s like a browser – only without the user interface.  cURL gets and sends the raw text data to and from a server.  This is what you see when you use the “View Source” option in your web browser.

I’ll use cURL to populate a bunch of Connections Cloud communities quickly. (You could do this for on-premises as well.)  For example, let’s say my company just moved to Connections Cloud. And for every network shared folder we previously used to stay organized (terrible), we’d rather use a Connections Cloud community (awesome).  The reason to use cURL here is that creating a community is very easy, and it’s something you’ll do once or only occasionally.  So a scripted approach is more efficient than writing code.

Let’s get to it.  For reference, review the cURL scripts I have laying around in cURLConnectionsCloud.zip. Just unzip it to any Windows computer.

cURL

You can either download cURL or use the one I’ve packaged in my cURLConnectionsCloud.zip.  I’d recommend using mine since it works with the rest of the examples.

Setup

Every cURL script I create starts with some setup to initialize parameters like the server URL, username, and password.  The first time you run the scripts, they will prompt for a user name and password.  Anything run subsequently is done in the context of that user name (e.g. My Communities).

SetupCurl.bat

The below command sets the path to the cURL executable.  It also ensures that basic authentication is used and the username:password pair is included any time a Connections Cloud script is run.

set curl=%~dp0/curl/curl.exe -L -k -u %cnx_username%:%cnx_password%

SetupCnx.bat

This script sets the URL to the server.  It also prompts for credentials if they were not provided previously.

@echo off
REM CA1 Test Server
REM set cnx_url=https://apps.collabservnext.com
REM North America Production Server
set cnx_url=https://apps.na.collabserv.com
IF DEFINED cnx_url (echo %cnx_url%) ELSE (set /p cnx_url= Connections URL:)
IF DEFINED cnx_username (echo %cnx_username%) ELSE (set /p cnx_username= Connections ID:)
IF DEFINED cnx_password (echo **masked**) ELSE (set /p cnx_password= Connections Password:)

Usage

Next we need to create a community.  This is done simply by sending text to the Connections Cloud server.

The Script

The cURL script looks like the following.

@echo off
call ../SetupCnx.bat
call ../SetupCurl.bat
%curl% -v -X POST --data-binary @%1 -H "Content-Type: application/atom+xml" %cnx_url%/communities/service/atom/communities/my

A couple of points:

  • -v is the verbose flag; I use it to see everything that happens. You can remove it if you’d like
  • --data-binary @%1 means that I am sending a file to the server, and the file name is provided as input on the command line
  • -H "Content-Type: application/atom+xml" is a required setting; you need to set a header specifying the content type per the API doc
  • %cnx_url%/communities/service/atom/communities/my is the URL to the Connections endpoint per the API doc

To create the community, all that’s needed is to create an XML file and run the following command.

C:\IBM\workspaces\connections\cURL\communities>CreateCommunity.bat CommunityInput.xml

The Input

The above command has CommunityInput.xml at the end.  This is the input file used to create the community, and the XML is easy on the eyes as well.  If we had multiple communities, I would write a few more lines in the script to substitute the list of folders for the title field.  Or you could create more input files … it’s a lot easier to edit text than to write code.
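To illustrate the substitution idea, a throwaway helper along these lines could stamp the folder names into a template. This is purely a sketch: the TITLE_GOES_HERE placeholder and the class name are my own convention, and in practice the template would be the full CommunityInput.xml contents.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper to generate input XML for several communities at
// once by substituting each folder name into a template string.
public class CommunityInputGenerator {

    public static String fill(String template, String title) {
        return template.replace("TITLE_GOES_HERE", title);
    }

    public static void main(String[] args) {
        String template = "<title type=\"text\">TITLE_GOES_HERE</title>";
        List<String> folders = Arrays.asList("Marketing", "Sales", "HR");
        for (String folder : folders) {
            // each result would become the body of its own input file
            System.out.println(fill(template, folder));
        }
    }
}
```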

<?xml version="1.0" encoding="UTF-8"?>
<entry xmlns="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app"
 xmlns:snx="http://www.ibm.com/xmlns/prod/sn">
 <title type="text">Community Name Goes Here</title>
 <content type="html">Community Description Goes Here</content>
 <author>
 <name>Van Staub</name>
 <email>van_staub@us.ibm.com</email>
 <snx:userid>20002888</snx:userid>
 <snx:userState>active</snx:userState>
 </author>
 <contributor>
 <name>Van Staub</name>
 <email>van_staub@us.ibm.com</email>
 <snx:userid>20002888</snx:userid>
 <snx:userState>active</snx:userState>
 </contributor>
 <category term="community" scheme="http://www.ibm.com/xmlns/prod/sn/type"></category>
 <snx:communityType>public</snx:communityType>
</entry>

I’ve boldfaced the areas you might want to change, but use the API doc as a guide to what else you can set.  Most importantly, the snx:userid is either your GUID for Connections on-premises or your subscriber ID for Connections Cloud.

That’s it.

  1. Unzip my sample.
  2. Update the CommunityInput.xml.
  3. Run CreateCommunity.bat

So next time you need to get something completed quickly or just want to experiment with the APIs, take a look at the cURL scripts I posted.  Most of them should work …

Happy scripting!

Building Collaborative Document Solutions with Connections Docs 2.0

Connections Docs (formerly IBM Docs) will soon release its next major update, Connections Docs 2.0.  And with it, the IBM team adds a significant new capability: integration with third party document repositories.  If you’re an independent software vendor already storing, organizing, or sharing documents, you can now easily add document-centric collaboration to your existing offering.  Let’s get started.

Docs 2.0

Overview

The general idea is fairly basic.  Connections Docs takes care of all the real-time co-editing, commenting, tracking changes, conversion of file formats, etc.  You just need to supply the document to Connections Docs via a programmatic interface.  Your interface is responsible for retrieving and storing the document as well as returning some information about it.  Connections Docs 2.0 supports two integration interfaces: CMIS and REST.  I’ll focus on the latter because with REST you can code in Java, Node.js – anything really – and communicate with any document repository.

Installation

Connections Docs 2.0 installs into a WebSphere Application Server cluster.  The process to do a WebSphere install is not covered here.  But I’ve linked to a few external posts on the major steps.

  1. Install WebSphere 8.5.
  2. Create a single node cluster.  (This link shows a custom profile.  You can just select Cell in the Environment Selection screen.)
  3. Add a web server to the cluster.  (This link is really good, but it’s specific to Portal. You don’t need to add the rewrite rules, and there will be no wp_profile.)

With the above pre-conditions, you can install Connections Docs.  All steps are as screenshots in the file Connections Docs 2.0 Install.  A couple of points as you go through the steps:

  • Install Packages
    • Ensure the “Other content management systems” package is used.  This is the option for third party ISVs.
    • I have not selected the Extension packages.  These are only used with IBM Connections.  Presumably, you will not be installing Connections Docs alongside IBM Connections.
  • Node Identification
    • I used the defaults for cluster and node names.
    • The webserver you installed earlier should be listed.  If you don’t want to do this step, you could likely enter a bogus URL in the “Enter URL” textfield and later access Docs using the internal ports.
  • Integration with Other Content Management Systems
    • This is the important screen.  I used my own implementation during my installation.  But since we’ll be using a sample provided by IBM later in this post, we’ll enter the following values into this screen.
    • Repository type: REST
    • URL for the file metadata: http://<your server information>/docs-sample/files/{ID}/meta
    • URL for getting/setting file content: http://<your server information>/docs-sample/files/{ID}/content
    • Call authentication method: s2s_token
    • Server-to-server token key: token
    • Server-to-server token value: 123456789
    • Act as a user: as-user
    • User profiles endpoint: <Leave this blank>
    • Repository home: http://<your server information>/docs-sample/files
  • Client-side mount points
    • I chose to use local directories rather than NFS shares.  Create these directories manually prior to installation.
  • Editor Server Cluster
    • Fully qualified host name and address of Connections file server: <add a bogus URL; this is a bug>
    • Fully qualified host name and address of email notification service: <add a bogus URL; this is a bug>
  • Restart Web Server
    • Yes

Now grab a cup of coffee because the install takes about an hour on my VM.

For those using the beta code, see the manual step in the troubleshooting section at the end of this post.

To confirm the installation, access the URL http://<your server information>/docs/api/list?method=fileType.  You should receive a JSON response with the following data.

{".ods":"20480",".xls":"20480",".odt":"20480",".pptx":"51200",".txt":"20480",".ppt":"51200",".xlsx":"20480",".csv":"5120",".doc":"20480",".odp":"51200",".docx":"20480"}

Security

Docs relies on the user’s Java security Principal.  There are a few ways to approach security: LDAP, SAML, OAuth, etc.  When you installed the WebSphere server, security should have been enabled.  You can log in to WebSphere’s console to add users to the file-based repository.  Access https://<deployment manager url>:<port>/ibm/console, go to Users and Groups -> Manage Users, and use the Create button to add new users.

Integration via REST

Using the settings from above as an example, the out-of-the-box document retrieval process works like this.

  1. The user begins in the 3rd party application and elects to edit a document.  The 3rd party application would perform any operations to do so.  For example, the document may need to be locked in the 3rd party repository or authentication performed.  Finally the 3rd party application should redirect the user to the Docs application.
  2. If the user is not authenticated in WebSphere, a redirect to the login page will occur.  Note, this may not occur or be necessary depending on configuration of your specific server.
  3. Connections Docs will request metadata about the document identified by the file_id URL parameter.
  4. The 3rd party application must respond with specific JSON data describing the document and permissions that can be performed on the document.  Special note: include the extension in the “name” property.  If you do not, the mime type will not be recognized and a failure will occur.  Alternatively, you can add a “mime” JSON property with the extension as the value.
  5. Connections Docs will then request the actual document.
  6. The 3rd party application sends over the document.
  7. The user is redirected to the Docs application where the document is opened ready to be edited.
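For step 4, the metadata JSON has roughly this shape, inferred from the DocRepository constants used in my SharePoint adapter’s getMeta code elsewhere on this blog. The key names shown here are my assumption – check the IBM sample for the exact property names – and note that the permission values are strings, not booleans.

```json
{
  "name": "filename.docx",
  "version": "1.0",
  "description": "filename.docx",
  "size": "20480",
  "created": "2016-01-27T11:57:57Z",
  "modified": "2016-01-27T11:57:57Z",
  "created_by": {
    "id": "sdaryn",
    "name": "Samantha Daryn",
    "email": "sdaryn@demos.ibm.com"
  },
  "modified_by": {
    "id": "sdaryn",
    "name": "Samantha Daryn",
    "email": "sdaryn@demos.ibm.com"
  },
  "permissions": {
    "read": "true",
    "write": "true"
  }
}
```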

Docs Integration

 

There are optional, additional steps if you have integrated profiles.  This is not covered here [yet], but the process is essentially the same.  Given an endpoint to the 3rd party repository, Docs can query external user information in JSON format.

{
 "id" : "5c11a0c0-7f6f-1033-982d-eba7a40afa7a", 
 "name" : "docs_tester", 
 "display_name" : "docs_tester", 
 "email" : "docs_tester@mail.com", 
 "photo_url" : "https://domain/profiles/id/photo.png", 
 "org_id" : "default_org" 
}

Integration Sample

Fortunately, there is an IBM reference implementation located here.  This is boldfaced because I overlooked this fact in the beta documentation.  Don’t you do the same. IBM’s implementation is a servlet that retrieves and stores documents from a directory inside the web module.  It’s trivial to extend this example to store and retrieve from disk, database, etc.

Sample Configuration

Download the code and open it with your IDE.  We need to update the configuration file.  In the Java src folder, expand the package com.ibm.docs.api.rest.sample.filters.  You’ll see a config.json file.  Update the file with the following contents.

{
 "s2s_method": "s2s_token",
 "s2s_token": "123456789",
 "onbehalfof_key" : "as-user"
}

The above tells the sample to use the token mechanism and which header identifies the user.  The sample code is currently written to look for the “token” header and validate it with the s2s_token property in the config.json.  Note that these must match the same settings we used when installing the server.

  • Server-to-server token key: token
  • Server-to-server token value: 123456789
  • Act as a user: as-user

Sample Installation

Next export and install the web module (or EAR) on the IBMDocsMember1 server.

Install the WAR using WebSphere Console
Ensure the web module is mapped to both the web server and IBMDocsMember1 server.
Docs Sample Install
Ensure the context matches the URLs used in the installation wizard.

 

And to be certain that everything is properly mapped in the HTTP server’s plugin, now is a good time to update the web server.  In WebSphere, do the following:

  • Generate Plugin
  • Propagate Plugin
  • Restart the HTTP server

Docs Web Server

Testing

If all goes well, you should be able to perform the following actions.

Download Video: MP4

Troubleshooting and Reference

Here are a few tips and tricks as you build your first integration.

Important URL Examples

  • https://docs.demos.ibm.com:9051/ibm/console
  • http://docs.demos.ibm.com/docs/login
  • http://docs.demos.ibm.com/docs/api/list?method=fileType
  • http://docs.demos.ibm.com/docs/driverscallback?repository=rest&file_id=test.ods

Beta Configuration Step

For beta users, you’ll need to create a mock <install root>\WebSphere\AppServer\profiles\AppSrv01\config\cells\docsCell01\LotusConnections-config\LotusConnections-config.xml file.  I’ve attached my LotusConnections-config for this purpose.  If you do not, you’ll see errors.  After you make the update, restart Docs.

[12/10/15 16:12:53:142 EST] 00000070 ConnectionsCo W com.ibm.connections.httpClient.ConnectionsConfigHelper loadConfig SONATA: Connections configuration file [F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\config\cells\docsCell01\LotusConnections-config\LotusConnections-config.xml] does NOT exist.

Script to Start Deployment Manager

@ECHO OFF

call time /t

echo Starting Deployment Manager …

F:\IBM\Docs\WebSphere\AppServer\profiles\Dmgr01\bin\startManager.bat

call time /t

PAUSE

Script to Start Docs

@ECHO OFF

call time /t

echo Starting Docs …

call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\startNode.bat

call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\startServer.bat IBMConversionMember1
call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\startServer.bat IBMDocsMember1
call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\startServer.bat IBMDocsProxyMember1
call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\startServer.bat IBMViewerMember1

call time /t

PAUSE

Script to Stop Deployment Manager

@ECHO OFF

call time /t

echo Stopping Deployment Manager …

set username=wasadmin
set password=password

call F:\IBM\Docs\WebSphere\AppServer\profiles\Dmgr01\bin\stopManager.bat -username %username% -password %password%

call time /t

PAUSE

Script to Stop Docs

@ECHO OFF

call time /t

echo Stopping Docs…

set username=wasadmin
set password=password

call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\stopServer.bat IBMConversionMember1 -username %username% -password %password%
call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\stopServer.bat IBMDocsMember1 -username %username% -password %password%
call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\stopServer.bat IBMDocsProxyMember1 -username %username% -password %password%
call F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\bin\stopServer.bat IBMViewerMember1 -username %username% -password %password%
call time /t

PAUSE

Docs Configuration

If you find that you need to change a configuration setting, review the <install root>\WebSphere\AppServer\profiles\Dmgr01\config\cells\docsCell01\IBMDocs-config\concord-config.json file.  This contains all the settings used during the installation wizard.  Should you need to make a change, update this file and synchronize the nodeagent, then restart the Docs servers. The copy at <install root>\WebSphere\AppServer\profiles\AppSrv01\config\cells\docsCell01\IBMDocs-config must reflect your changes for the new settings to take effect.

Application Security

Make absolutely sure that Application Security is enabled.  If it is not, you will receive 401 errors when accessing Docs and errors in the log similar to the following.

[12/9/15 14:14:36:442 EST] 000000b3 ExternalAuth  W   Request is not authorized while accessing URL: /docs/api/list

This is an easy fix.

Docs Security Enabled

When All Else Fails

Review the SystemOut.logs in F:\IBM\Docs\WebSphere\AppServer\profiles\AppSrv01\logs.  Specifically see the logs inside IBMDocsMember1.

Building Social Applications using Connections Cloud and WebSphere Portal: SAML and Single Sign-On

If you’ve been following along, we’ve created a custom solution that combines WebSphere Portal and Connections Cloud to create a socially enabled web site.  To access information in Connections Cloud, OAuth is used as the mechanism to exchange data.  Unfortunately, OAuth does not do everything.  If a user were to follow a link from Portal to Connections Cloud, he or she would need to log in to Connections Cloud.  What gives?

The reason is that WebSphere Portal is what authenticates to get the user’s data, but the actual user’s browser is not authenticated.  The result feels like a disconnect for users and non-technical observers.  The solution to this problem is SAML or Security Assertion Markup Language.  I’ve written about SAML on this blog previously, see Using SAML with SmartCloud for Social Business.  What I’ll do here is add to that work to create a solution that uses WebSphere Portal.

Design

The general flow goes a bit like this:

  1. “Something” triggers the SAML process.  It could be by a strict precondition (like right after you login) or dynamically (something realizes that you have not yet authenticated).
  2. Send the user to a web application hosted on Portal.  This web application (the actual page a user visits) is designed to construct the SAML token.
    1. The SAML token is signed using a certificate previously exchanged with IBM.  There is a manual Support process that you must follow for this to work.
  3. The web page then POSTs the SAML assertion to Connections Cloud.
    1. Connections Cloud decrypts the token, inspects the user’s identity and allows access if appropriate.

Rationale

Why would I ever do this?!?!  There are existing solutions I could use: Tivoli Federated Identity, Microsoft Active Directory Federation Server, Shibboleth.  But I needed something narrow in scope.  I don’t want an identity server – I already have one, Portal.  And I don’t want to add more servers to the existing deployment.  So I’ve created a module that does only one thing: determines who you are from Portal, creates a SAML assertion, and sends it to Connections Cloud.

That and I had the code laying around … it just needed a use case.

Implementation

This is a fairly technical project.  Even my eyes glaze over when I start hearing about ciphers and key chains, but the main moving parts are as follows.

SAML Servlet

The SAML Servlet listens for incoming requests.  In doing so, it will do the following:

  1. Figure out the user’s identity.  The web module is protected, and thus all users must be authenticated by WebSphere to access it.
  2. Construct the SAML token.
  3. Generate a web page with the form that sends the token to Connections Cloud.  (Javascript will submit the form automatically for the user.)
    1. There’s also a bit of code that handles the redirect (302) coming from Connections Cloud if the admin has configured this servlet as the sole identity provider.
package com.ibm.sbt.saml.impl.servlet;
 
import java.io.IOException;
 
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
 
import com.ibm.sbt.saml.ISamlIdentityProvider;
import com.ibm.sbt.saml.ISamlSigner;
import com.ibm.sbt.saml.SamlCreator;
import com.ibm.sbt.saml.impl.SimpleSaml11Creator;
import com.ibm.sbt.saml.impl.SimpleSamlSigner;
 
public class Saml11Servlet extends HttpServlet {
	private static final long serialVersionUID = 1L;

	private Saml11ServletConfig config;
	private ISamlSigner signer;
	private SamlCreator creator;
	private ISamlIdentityProvider idProvider;

	private final String emailParam = "email.domain";

	public void init() throws ServletException {
		super.init();

		config = new Saml11ServletConfig(this.getServletConfig());
		signer = new SimpleSamlSigner(config.getKeyStorePath(),
				config.getKeyStorePassword(), config.getKeyStoreAlias());
		creator = new SimpleSaml11Creator(config, signer);

		// see http://www-01.ibm.com/support/knowledgecenter/SSHRKX_8.5.0/mp/dev-portlet/add_jaas.dita?lang=en
		// for additional ways to get the email from a logged in user
		// be careful as not all WebSphere servers use VMM (i.e. Federated Security)
		idProvider = new PrincipaltoEmailIdentityProvider(this.getServletConfig().getInitParameter(emailParam));
	}

	protected void doGet(HttpServletRequest request,
			HttpServletResponse response) throws ServletException, IOException {

		// incoming 302 from Connections Cloud has TARGET parameter
		String target = request.getParameter("TARGET");

		String identity = idProvider.getUserIdentity(request.getUserPrincipal().getName());
		String token = creator.create(identity, null);

		response.getWriter().print(getForm(config.getEndpoint(), token, target));
	}

	private String getForm(String endpoint, String token, String target) {
		return "<html xml:lang=\"en\" xmlns=\"http://www.w3.org/1999/xhtml\"><head>"
				+ "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\">"
				+ "<title>SAML POST response</title></head><body>"
				+ "<form method=\"post\" action=\"" + endpoint + "\"><p>"
				+ "<input name=\"TARGET\" value=\"" + target + "\" type=\"hidden\">"
				+ "<input name=\"SAMLResponse\" value=\"" + token + "\" type=\"hidden\">"
				+ "<noscript><button type=\"submit\">Sign On</button>"
				+ "<!-- included for requestors that do not support javascript --></noscript>"
				+ "</p></form>"
				+ "<script type=\"text/javascript\">setTimeout('document.forms[0].submit()', 0);</script>"
				+ "Please wait, signing on...</body></html>";
	}
}
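
Step 1 above relies on ordinary Java EE container security to establish the user’s identity.  A minimal sketch of the web.xml entries that protect the module might look like this (the role name and FORM auth method are illustrative, not taken from the project; in WebSphere you’d map the role to “All Authenticated” users at deployment):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>SAML Servlet</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>AllAuthenticated</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>FORM</auth-method>
</login-config>
<security-role>
  <role-name>AllAuthenticated</role-name>
</security-role>
```

With this in place, request.getUserPrincipal() in doGet() is guaranteed to be non-null.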

Identity Provider

This is pretty basic.  I’m assuming that the user currently has a Portal session.  If so, I grab the Principal (e.g. wpadmin) and append a configurable email domain.  Connections Cloud requires the identity in email-address format.  You can implement the identity lookup any way you want, but the final value must be an email address that is also stored in Connections Cloud.

package com.ibm.sbt.saml.impl.servlet;
 
import com.ibm.sbt.saml.ISamlIdentityProvider;
 
public class PrincipaltoEmailIdentityProvider implements ISamlIdentityProvider {
 
	private final String domain;
 
	public PrincipaltoEmailIdentityProvider(String domain){
		this.domain = domain;
	}
 
	@Override
	public String getUserIdentity(String userId) {
		return userId + "@" + domain;
	}
}

SAML Token Creator

Things are about to get interesting.  With the user’s identity, we need to construct the SAML token.  I’ve chosen a SAML 1.1 implementation … because it was easier.  This code simply takes the template XML and substitutes the appropriate values; the result is really just XML.  But this is where mistakes happen: each substituted value must be correct in both content and format (e.g. dates).

package com.ibm.sbt.saml.impl;
 
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Iterator;
import java.util.Map;
import java.util.Map.Entry;
import java.util.SimpleTimeZone;
 
import org.apache.commons.lang.StringEscapeUtils;
 
import com.ibm.sbt.saml.ISamlConfig;
import com.ibm.sbt.saml.ISamlSigner;
import com.ibm.sbt.saml.SamlCreator;
 
public class SimpleSaml11Creator extends SamlCreator {
 
	// SAML 1.1 response template; the $...$ placeholders are replaced in getToken()
	private static final String SAML_11_TEMPLATE = "<samlp:Response xmlns:samlp=\"urn:oasis:names:tc:SAML:1.0:protocol\" "
			+ "xmlns:saml=\"urn:oasis:names:tc:SAML:1.0:assertion\" MajorVersion=\"1\" MinorVersion=\"1\" "
			+ "ResponseID=\"$RESPONSE_ID$\" IssueInstant=\"$ISSUE_INSTANT$\" Recipient=\"$RECIPIENT$\">"
			+ "<samlp:Status><samlp:StatusCode Value=\"samlp:Success\" /></samlp:Status>"
			+ "<saml:Assertion AssertionID=\"$ASSERTION_ID$\" Issuer=\"$ISSUER$\" "
			+ "IssueInstant=\"$ISSUE_INSTANT$\" MajorVersion=\"1\" MinorVersion=\"1\">"
			+ "<saml:Conditions NotBefore=\"$NOT_BEFORE$\" NotOnOrAfter=\"$NOT_AFTER$\">"
			+ "<saml:AudienceRestrictionCondition><saml:Audience>$AUDIENCE$</saml:Audience>"
			+ "</saml:AudienceRestrictionCondition></saml:Conditions>"
			+ "<saml:AuthenticationStatement AuthenticationInstant=\"$AUTH_INSTANT$\" "
			+ "AuthenticationMethod=\"urn:oasis:names:tc:SAML:1.0:am:password\">"
			+ "<saml:Subject><saml:NameIdentifier "
			+ "Format=\"urn:oasis:names:tc:SAML:1.0:assertion#emailAddress\">$USER_NAME$</saml:NameIdentifier>"
			+ "<saml:SubjectConfirmation><saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer"
			+ "</saml:ConfirmationMethod></saml:SubjectConfirmation></saml:Subject>"
			+ "</saml:AuthenticationStatement>$ATTRIBUTES$</saml:Assertion></samlp:Response>";

	public static final String ATTRIBUTE_FORMAT = "<saml:Attribute AttributeName=\"$ATTR_NAME$\" "
			+ "AttributeNamespace=\"$ATTR_NAMESPACE$\">$ATTR_VALUES$</saml:Attribute>";
	public static final String ATTRIBUTE_VALUE_FORMAT = "<saml:AttributeValue>$ATTR_VALUE$</saml:AttributeValue>";
 
	public SimpleSaml11Creator(ISamlConfig config, ISamlSigner signer) {
		super(config, signer);
	}
 
	@Override
	protected String getToken(String userId, Map<String, String[]> userAttrs) {
		Date now = new Date();
		SimpleDateFormat format = new SimpleDateFormat(DATE_FORMAT);
		Calendar cal = Calendar.getInstance(new SimpleTimeZone(0, "GMT"));
		format.setCalendar(cal);
 
		long validLength = (long) 60000 * config.getTokenExpiration();
 
		// Setup the time for which SAML assertion is valid
		Date notBefore = new Date(now.getTime() - validLength);
		Date notAfter = new Date(now.getTime() + validLength);
 
		String saml = SAML_11_TEMPLATE.replace("$ISSUE_INSTANT$",
				format.format(now));
 
		saml = saml.replace("$RESPONSE_ID$", now.getTime() + "");
		saml = saml.replace("$AUDIENCE$", config.getEndpoint());
		saml = saml.replace("$RECIPIENT$", config.getEndpoint());
		saml = saml.replace("$ISSUER$", config.getIssuer());
		saml = saml.replace("$ASSERTION_ID$", now.getTime() + "");
		saml = saml.replace("$NOT_BEFORE$", format.format(notBefore));
		saml = saml.replace("$NOT_AFTER$", format.format(notAfter));
		saml = saml.replace("$AUTH_INSTANT$", format.format(now));
		saml = saml.replace("$USER_NAME$", StringEscapeUtils.escapeXml(userId));
 
		StringBuilder allAttrs = new StringBuilder();
 
		if (userAttrs != null) {
			Iterator<Entry<String, String[]>> iterator = userAttrs.entrySet()
					.iterator();
			while (iterator.hasNext()) {
				Entry<String, String[]> entry = iterator.next();
				boolean attHasValue = false;
 
				// Setup the attribute values
				String attrValues = "";
				String[] values = entry.getValue();
				// Make sure values is not null
				if (values != null) {
				for (int j = 0; j < values.length; j++) {
					String value = values[j];
					// Only add the attribute if there is a value to add
					if (value != null && value.length() > 0) {
							attHasValue = true;
							attrValues += ATTRIBUTE_VALUE_FORMAT.replace(
									"$ATTR_VALUE$",
									StringEscapeUtils.escapeXml(values[j]));
						}
					}
				}
 
				if (attHasValue) {
					// Setup the attribute name and namespace
					String key = entry.getKey();
					String attribute = ATTRIBUTE_FORMAT.replace("$ATTR_NAME$",
							StringEscapeUtils.escapeXml(key));
					attribute = attribute.replace("$ATTR_NAMESPACE$",
							StringEscapeUtils.escapeXml(key));
					attribute = attribute.replace("$ATTR_VALUES$", attrValues);
					allAttrs.append(attribute);
				}
			}
		}
		saml = saml.replace("$ATTRIBUTES$", allAttrs.toString());
 
		return saml;
	}
}
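
The date format deserves special attention: SAML consumers expect UTC timestamps in exactly the DATE_FORMAT pattern used above, so a formatter left on the server’s local time zone produces tokens that fail validation.  A minimal illustration of the GMT formatting in isolation:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.SimpleTimeZone;

class SamlDateExample {
    public static void main(String[] args) {
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        // Force GMT; without this the formatter uses the JVM's default time zone
        format.setCalendar(Calendar.getInstance(new SimpleTimeZone(0, "GMT")));
        System.out.println(format.format(new Date(0L))); // prints 1970-01-01T00:00:00.000Z
    }
}
```

Note that the 'Z' in the pattern is a literal character, not a time-zone field, which is why the calendar must be forced to GMT explicitly.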

SAML Signer

Now that there’s an XML SAML token, we need to sign it.  The signer class doesn’t do the actual signing; that happens in the SAML Creator class.  The signer class’s job is to produce the private key and X509Certificate.  I’ve chosen to simply pull them off disk from a configurable location.  A better implementation would get them from the built-in WebSphere keystores.  And since it’s not that interesting, I’ve left it out of the blog post (though it’s in the project code).
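
For completeness, here’s a minimal sketch of what a disk-based signer could look like.  This is not the project’s implementation: the JKS keystore type, the shared store/key password, and the inlined stand-in interface are all assumptions made so the sketch compiles on its own.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.cert.X509Certificate;

// Stand-in for the project's com.ibm.sbt.saml.ISamlSigner interface
interface ISamlSigner {
    PrivateKey getPrivateKey();
    X509Certificate getX509Cert();
}

class SimpleSamlSigner implements ISamlSigner {

    private final PrivateKey privateKey;
    private final X509Certificate certificate;

    SimpleSamlSigner(String keyStorePath, String password, String alias) {
        try (FileInputStream in = new FileInputStream(keyStorePath)) {
            // Assumes a JKS keystore with the same password on store and key
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(in, password.toCharArray());
            privateKey = (PrivateKey) ks.getKey(alias, password.toCharArray());
            certificate = (X509Certificate) ks.getCertificate(alias);
        } catch (Exception e) {
            throw new IllegalStateException("Unable to load signing key from " + keyStorePath, e);
        }
    }

    @Override
    public PrivateKey getPrivateKey() {
        return privateKey;
    }

    @Override
    public X509Certificate getX509Cert() {
        return certificate;
    }
}
```

Loading the keystore once in the constructor (rather than per request) matters here, since the servlet creates the signer in init() and reuses it for every token.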

SAML Creator

And now we need to bring it all together. Take the XML, sign it with the certificate and hand it back to the servlet for posting to Connections Cloud.

package com.ibm.sbt.saml;
 
import java.io.ByteArrayInputStream;
import java.io.StringWriter;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.logging.Logger;
 
import javax.xml.crypto.dsig.CanonicalizationMethod;
import javax.xml.crypto.dsig.DigestMethod;
import javax.xml.crypto.dsig.Reference;
import javax.xml.crypto.dsig.SignatureMethod;
import javax.xml.crypto.dsig.SignedInfo;
import javax.xml.crypto.dsig.Transform;
import javax.xml.crypto.dsig.XMLSignature;
import javax.xml.crypto.dsig.XMLSignatureFactory;
import javax.xml.crypto.dsig.dom.DOMSignContext;
import javax.xml.crypto.dsig.keyinfo.KeyInfo;
import javax.xml.crypto.dsig.keyinfo.KeyInfoFactory;
import javax.xml.crypto.dsig.keyinfo.X509Data;
import javax.xml.crypto.dsig.spec.C14NMethodParameterSpec;
import javax.xml.crypto.dsig.spec.TransformParameterSpec;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
 
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
 
import com.ibm.ws.util.Base64;
 
public abstract class SamlCreator {
 
	private final Logger logger = Logger.getLogger(SamlCreator.class.getName());
 
	public static final String DATE_FORMAT = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
 
	protected ISamlConfig config;
	protected ISamlSigner signer;
 
	protected abstract String getToken(String userId,
			Map<String, String[]> userAttrs);
 
	public SamlCreator(ISamlConfig config, ISamlSigner signer) {
		this.config = config;
		this.signer = signer;
	}
 
	public String create(String userId, Map<String, String[]> userAttrs) {
		logger.fine("Creating SAML token for " + userId);
 
		String saml = getToken(userId, userAttrs);
 
		logger.finest("SAML token = " + saml);
 
		Document doc;
		try {
			DocumentBuilderFactory dbfac = DocumentBuilderFactory.newInstance();
			dbfac.setValidating(false);
			dbfac.setNamespaceAware(true);
			DocumentBuilder docBuilder = dbfac.newDocumentBuilder();
			ByteArrayInputStream is = new ByteArrayInputStream(
					saml.getBytes("UTF-8"));
			doc = docBuilder.parse(is);
 
			logger.finest("Successfully converted SAML token to " + doc.getClass().getName());
 
			NodeList nodes = doc.getDocumentElement().getChildNodes();
			for (int i = 0; i < nodes.getLength(); i++) {
				// Sign the SAML assertion
				if (nodes.item(i).getNodeName().equals("saml:Assertion")) {
					if (config.signAssertion()) {
						logger.fine("Signing SAML assertion element");
						signSAML((Element) nodes.item(i), null, "AssertionID");
					} else {
						logger.fine("Skipping signing of SAML assertion element");
					}
				}
			}
 
			logger.fine("Signing SAML response");
 
			// Sign the entire SAML response
			Element responseElement = doc.getDocumentElement();
			signSAML(responseElement,
					(Element) responseElement.getFirstChild(), "ResponseID");
 
			// Transform the newly signed document into a string for encoding
			TransformerFactory fac = TransformerFactory.newInstance();
			Transformer transformer = fac.newTransformer();
			StringWriter writer = new StringWriter();
			transformer.transform(new DOMSource(doc), new StreamResult(writer));
			String samlResponse = writer.toString();
 
			logger.finest("SAML token = " + samlResponse);
 
			logger.fine("Base64 encoding SAML token");
 
			// Encode the string and return the response
			return new String(Base64.encode(samlResponse.getBytes("UTF-8")));
		} catch (Exception e) {
			e.printStackTrace();
		}
 
		return null;
	}
 
	private void signSAML(Element element, Element sibling, String referenceID) {
		logger.fine("Signing element " + referenceID);
 
		try {
			// this needs to be here due to Java bug in 1.7_25
			// http://stackoverflow.com/questions/17331187/xml-dig-sig-error-after-upgrade-to-java7u25
			element.setIdAttribute(referenceID, true);
			XMLSignatureFactory fac = XMLSignatureFactory.getInstance("DOM");
 
			DOMSignContext dsc;
			if (sibling == null) {
				dsc = new DOMSignContext(signer.getPrivateKey(), element);
			} else {
				dsc = new DOMSignContext(signer.getPrivateKey(), element,
						sibling);
			}
 
			DigestMethod digestMeth = fac.newDigestMethod(DigestMethod.SHA1,
					null);
			Transform transform = fac.newTransform(Transform.ENVELOPED,
					(TransformParameterSpec) null);
			List list = Collections.singletonList(transform);
			String refURI = "#" + element.getAttribute(referenceID);
			Reference ref = fac.newReference(refURI, digestMeth, list, null,
					referenceID);
 
			CanonicalizationMethod canMeth = fac.newCanonicalizationMethod(
					CanonicalizationMethod.EXCLUSIVE_WITH_COMMENTS,
					(C14NMethodParameterSpec) null);
			List refList = Collections.singletonList(ref);
			SignatureMethod sigMeth = fac.newSignatureMethod(
					SignatureMethod.RSA_SHA1, null);
			SignedInfo si = fac.newSignedInfo(canMeth, sigMeth, refList);
 
			KeyInfoFactory kif = fac.getKeyInfoFactory();
			List x509Content = new ArrayList();
			x509Content.add(signer.getX509Cert());
			X509Data xd = kif.newX509Data(x509Content);
			KeyInfo ki = kif.newKeyInfo(Collections.singletonList(xd));
 
			XMLSignature signature = fac.newXMLSignature(si, ki);
			signature.sign(dsc);
 
			logger.fine("Successfully signed element " + referenceID);
		} catch (Exception e) {
			// TODO: throw instead of catch
			e.printStackTrace();
		}
	}
}
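
One portability note on the final step: com.ibm.ws.util.Base64 is a WebSphere-internal class.  If this code ever runs outside WebSphere on Java 8 or later, the JDK’s own encoder is a drop-in replacement.  A sketch (not what the project code uses; the sample string just stands in for the signed XML):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class EncodeExample {
    public static void main(String[] args) {
        String samlResponse = "<samlp:Response/>"; // stands in for the signed SAML XML
        // Base64-encode the UTF-8 bytes, exactly as create() does with the WebSphere class
        String encoded = Base64.getEncoder()
                .encodeToString(samlResponse.getBytes(StandardCharsets.UTF_8));
        System.out.println(encoded); // prints PHNhbWxwOlJlc3BvbnNlLz4=
    }
}
```

The encoded string is what ends up in the hidden SAMLResponse form field that the servlet posts to Connections Cloud.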

Demo

If all goes well, the result should look something like this.

Download Video: MP4


And you’ll probably need the com.ibm.sbt.saml project code.

This code is provided as-is, for educational purposes.  FWIW I’m not a SAML expert; so if you post a question, there’s a good chance I won’t know the answer.

Happy coding.

Building Social Applications using Connections Cloud and WebSphere Portal: Social Portal Pages

We’re going to use Portal’s theme framework to add the necessary CSS and JS files to our social pages.  Using this approach, we’ll no longer need to include the dependencies in our script portlets.  Pages that have social script portlets on them can simply have the relevant theme profile applied.  Another benefit is that by using Portal’s profile feature, the various browser requests are centralized into a single download to reduce the time taken to load the page.

Creating the Theme Modules

Let’s begin by adding new theme modules.  The modules will include the following resources on the page:

  • The Social Business Toolkit SDK’s Javascript dependency, for example /sbt.sample.web/library?lib=dojo&ver=1.8.0&env=smartcloudEnvironment
  • CSS files from Connections Cloud, for example /connections/resources/web/_style?include=com.ibm.lconn.core.styles.oneui3/base/package3.css

You can read how to create the module framework in the Knowledge Center.  Since the CSS files are located on a remote server, I need to create a “system” module.  This is essentially creating a plugin with the relevant extensions.  It’s a web project (WAR) with a single plugin.xml file.  The contents of my plugin.xml are as follows.

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin id="com.ibm.sbt.theme"
        name="Social Business Toolkit Theme Modules"
        version="1.0.3"
        provider-name="IBM">
  <extension
      id="sbtSdkExtension"
      point="com.ibm.portal.resourceaggregator.module">
    <module
        id="sbtSdk"
        version="1.0.3">
      <capability
          id="sbt_sdk"
          value="1.0.3">
      </capability>
      <title
          lang="en"
          value="Social Business Toolkit SDK">
      </title>
      <description
          lang="en"
          value="Social Business Toolkit SDK">
      </description>
      <contribution
          type="head">
        <sub-contribution
            type="js">
          <uri
              value="{rep=WP CommonComponentConfigService;key=sbt.sdk.url}/sbt.sample.web/library?lib=dojo&amp;ver=1.8.0&amp;env=smartcloudEnvironment">
          </uri>
        </sub-contribution>
        <sub-contribution
            type="css">
          <uri
              value="{rep=WP CommonComponentConfigService;key=sbt.cc.url}/connections/resources/web/_style?include=com.ibm.lconn.core.styles.oneui3/base/package3.css">
          </uri>
        </sub-contribution>
        <sub-contribution
            type="css">
          <uri
              value="{rep=WP CommonComponentConfigService;key=sbt.cc.url}/connections/resources/web/_style?include=com.ibm.lconn.core.styles.oneui3/sprites.css">
          </uri>
        </sub-contribution>
        <sub-contribution
            type="css">
          <uri
              value="{rep=WP CommonComponentConfigService;key=sbt.cc.url}/connections/resources/web/_lconntheme/default.css?version=oneui3&amp;rtl=false">
          </uri>
        </sub-contribution>
        <sub-contribution
            type="css">
          <uri
              value="{rep=WP CommonComponentConfigService;key=sbt.cc.url}/connections/resources/web/_lconnappstyles/default/search.css?version=oneui3&amp;rtl=false">
          </uri>
        </sub-contribution>
      </contribution>
    </module>
  </extension>
</plugin>

You could use the actual server’s path in the XML, for example https://apps.collabservnext.com/<some css resource>.  But instead I’m using a substitution rule

{rep=WP CommonComponentConfigService;key=sbt.cc.url}

that swaps the key in the plugin.xml for a value defined by a WebSphere Resource Environment Provider.  The only reason I did this was so that I could configure the URLs in WebSphere rather than hard-code them in the plugin.xml.

SBT REP

The other thing I’m doing is telling the SBT SDK which environment to configure by referencing sbt.sample.web/library?lib=dojo&amp;ver=1.8.0&amp;env=smartcloudEnvironment.  This saves me from having to specify the endpoint manually in the SBT scripts I write later.  And notice the &amp; notation: you’ll need to escape the ampersands in the plugin.xml.

Create your web module and deploy to your server.  You can use the Theme Analyzer tools in Portal’s administration interface to pick up the new modules.  Just go to the Control Center feature and invalidate the cache.

Invalidate Theme

Then review the system modules to locate the sbt_sdk one.

sbtSdk Module

Profiles

To actually use the module, we need to build a theme profile.  A profile is a recipe of which modules should be loaded for a particular page’s functionality.  In addition to the sbtSdk module, we’ll need other IBM-provided or custom modules loaded for pages to work properly.  Profile creation is rather straightforward, and you can use the existing profiles as a starting point.  See those in WebDAV, for example; I use AnyClient to connect to my Portal at http://127.0.0.1:10039/wps/mycontenthandler/dav/fs-type1.  Once there, you can peruse the profiles under the default theme.

I’ve created a SBT Profile that includes the SDK and Cloud modules I created earlier.

{
 "moduleIDs": ["getting_started_module",
 "wp_theme_portal_85",
 "wp_dynamicContentSpots_85",
 "wp_toolbar_host_view",
 "wp_portlet_css",
 "wp_client_ext",
 "wp_status_bar",
 "wp_theme_menus",
 "wp_theme_skin_region",
 "wp_theme_high_contrast",
 "wp_layout_windowstates",
 "wp_portal",
 "wp_analytics_aggregator",
 "wp_oob_sample_styles",
 "dojo",
 "wp_draft_page_ribbon",
 "sbtSdkModule",
 "sbtCloudModule"],
 "deferredModuleIDs": ["wp_toolbar_host_edit",
 "wp_analytics_tags",
 "wp_contextmenu_live_object",
 "wp_content_targeting_cam",
 "wcm_inplaceEdit"],
 "titles": [{
 "value": "Connections Cloud",
 "lang": "en"
 }],
 "descriptions": [{
 "value": "This profile has modules necessary for viewing pages that contain portlets written with the Social Business Toolkit SDK and Connections Cloud banner integration",
 "lang": "en"
 }]
}

This JSON file is then added to my default Portal theme using a WebDAV client.

SBT Profile WebDav

You’ll likely need to again invalidate the theme cache for the profile to be available for the next section.

Page Properties

To enable the profile on a page, we need to update the page properties.  The result of this process is that the aforementioned Javascript and CSS files get added to any page that has the profile enabled.

SBT Profile

And that’s it.  Now any developer can begin authoring “social” script portlets with nothing more than the page profile and a bit of web code.