Cognitive Listening

For a decade now, we’ve lived in a “social” era – through networks such as LinkedIn, Facebook, and IBM Connections.  Social networks have given us the tools by which we can engage in dialogue, share ideas, and find new information.  A simple example is the now-ubiquitous comments section seen on most websites.  Someone posts content, and someone writes back in the form of a comment.  Collectively, those comments reflect public perception, understanding, and support.  But to get a sense of the public’s reaction, you have to read through all of the comments.

This is where the emerging cognitive era can help.  Rather than manually read the comments, one could simply use a cognitive service to classify the emotions of commenters.  One such service is IBM’s AlchemyAPI.  Below I’ve combined Alchemy’s emotion analysis with IBM Connections’ comments to generate a “social reaction” to my post.

Connections Sentiment

Admittedly, people aren’t really angry with my post – maybe it’s the exclamation marks being used by commenters … but you get the point.  In isolation, this is a neat trick.  But when you apply it on a larger scale, it gives you the ability to listen cognitively to the social network.  For example, an active forum of genuinely angry customers could trigger intervention by a customer support representative.  Or, combined with other services like concept extraction, it could tell us which areas of the company, initiative, or project employees are struggling with.  The possibilities and outcomes are substantial, which is why cognitive is more than just technology. It’s a new era of business and computing.

Getting Started

  1. Lots of information exists on using AlchemyAPI.  Start out by creating an account on Bluemix and adding the service.
  2. I used a tool called Greasemonkey to add the “Reaction widget” to IBM Connections Blog pages.  Think of Greasemonkey as a way of creating small, personal applications that run only in your browser.
  3. Adapt my widget below to experiment with content and APIs.
// ==UserScript==
// @name        Blog Entry Emotion Analyzer
// @namespace
// @include     https://apps.**/entry/*
// @include     */entry/*
// @version     1
// @require
// @require
// @grant       GM_xmlhttpRequest
// ==/UserScript==

console.log("Starting up Blog Entry Emotion Analyzer Widget");

// set up the widget in the right side column
var sidebar = $( ".lotusColRight" );

if (sidebar.length) {
  // any html added to DOM MUST USE SINGLE QUOTES
  sidebar.append("<div aria-expanded='true' name='reaction_section_mainpart' class='lotusSection' role='complementary' aria-label='Tone' aria-labelledby='section_reaction_label'><label class='lotusOffScreen' aria-live='polite' id='reaction_section_hint_label'>Expanded section</label><h2 style='cursor:default'><span id='section_reaction_label' class='lotusLeft'>Reaction</span></h2><div id='section_reaction' class='lotusSectionBody'><span class='lotusBtn lotusLeft'><a id='analyzeButton' role='button' href='javascript:;'>Analyze Comments</a></span><canvas id='watsonChart' width='300' height='300'></canvas></div></div>");
  // attach an event handler to do the analysis when the button is clicked
  $("#analyzeButton").click(function() {
    // inform the user something is happening
    $("#analyzeButton").text("Analyzing ...");
    // get the html of the blog entry
    var entryHtml = $("div.entryContentContainer");

    // get the html of the blog comments
    // var commentsHtml = $("#blogCommentPanel"); // Connections Cloud
    var commentsHtml = $( "div[dojoattachpoint='commentsAP']" ); // Connections on-prem

    // decide whether you want to use AlchemyAPI against the comments or the entry
    if (commentsHtml.length) {
      post(commentsHtml.html());
    } else if (entryHtml.length) {
      post(entryHtml.html());
    } else {
      console.error("Could not find text entry; can't add widget");
    }
  });
} else {
  console.error("No sidebar found in HTML; can't add widget");
}

function post(html) {
  // any HTML text sent to AlchemyAPI needs to be encoded
  html = encodeURIComponent(html);
  console.log("Sending text to AlchemyAPI: " + html);
  // send the html to the Watson APIs
  GM_xmlhttpRequest({
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded"
    },
    url: "", // your AlchemyAPI emotion endpoint goes here
    data: "apikey=<use your own API key>&outputMode=json&html=" + html,
    onload: function(response) {
      // received response; construct the widget
      createChart(response);
    },
    onerror: function(response) {
      console.error("AlchemyAPI request failed: " + response.status);
    }
  });
}

function createChart(response) {
  console.log("Creating chart");
  // remove the analyze button from the view
  $("#analyzeButton").remove();
  // convert the API response to JSON
  var json = JSON.parse(response.responseText);
  // the emotion scores come back in docEmotions
  var emotions = json.docEmotions;
  // set up the chart
  var ctx = $("#watsonChart");
  if (ctx.length === 0) {
    console.error("Context for chart not found");
    return;
  }
  var data = {
    labels: ["Anger", "Disgust", "Fear", "Joy", "Sadness"],
    datasets: [{
      label: "Sentiment",
      backgroundColor: [
        'rgba(255, 99, 132, 0.2)',
        'rgba(75, 192, 192, 0.2)',
        'rgba(153, 102, 255, 0.2)',
        'rgba(255, 206, 86, 0.2)',
        'rgba(54, 162, 235, 0.2)'
      ],
      borderColor: [
        'rgba(255, 99, 132, 1)',
        'rgba(75, 192, 192, 1)',
        'rgba(153, 102, 255, 1)',
        'rgba(255, 206, 86, 1)',
        'rgba(54, 162, 235, 1)'
      ],
      borderWidth: 1,
      data: [emotions.anger, emotions.disgust, emotions.fear, emotions.joy, emotions.sadness]
    }]
  };
  // add the chart to the view
  var myBarChart = new Chart(ctx, {
    type: 'horizontalBar',
    data: data,
    options: {
      title: {
        display: false
      },
      legend: {
        display: false
      }
    }
  });
}
Installing Greasemonkey Reaction Widget

  1. Launch your Firefox browser.
  2. Head over to the Greasemonkey addon page.
  3. Click the “Add to Firefox” button.
  4. You’ll then see a little monkey on the toolbar.
  5. Copy the script above to the clipboard.
  6. Click “Add New User Script”.
  7. Click “Use Script From Clipboard”.
  8. Change the script as needed.

New User Script

Greasemonkey Script

Greasemonkey Editor

How It Works

A few things to point out:

  • The top of the script defines where the “application” can run.  I’ve made it so that the widget will be added to IBM Connections Cloud and IBM’s Connections deployment.  You should update the @include line to reflect your server installation.  The @include directive also says to run the application only on Blog entry pages.  It does not currently run on a wiki or forum page for example.
  • The script will add a button to the right sidebar.  Pressing the button invokes the AlchemyAPI.
  • The text sent to AlchemyAPI is obtained from the Comments section of the post.  All we’re doing here is grabbing the HTML from inside your browser and making an API call.  AlchemyAPI does the rest.
  • I’m using Chart.js to create the chart.  I’ve used it before on other blog posts.
  • The color of the emotions in the chart is similar to the “Inside Out” characters. 😉

Inside Out

Happy coding!

Using cURL with IBM Connections Cloud

I love Java. But there are times that writing a program is more work than it’s worth.  And for the novice, trying to get set up with a JVM, IDE, etc. only adds to the time commitment.

So I re-introduce you to cURL (I’ve mentioned it a few times on the blog).  What is cURL?  It’s like a browser – only without the user interface.  cURL gets and sends the raw text data to and from a server.  This is what you see when you use the “View Source” option in your web browser.

I’ll use cURL to populate a bunch of Connections Cloud communities quickly. (You could do this for on-premises as well.)  For example, let’s say my company just moved to Connections Cloud. And for every network shared folder we previously used to be organized (terrible), we’d rather use a Connections Cloud community (awesome).  The reason to leverage cURL to do this is that creating the community is very easy. And it’s something you’ll do once or occasionally.  So a scripted approach is more efficient than writing code.

Let’s get to it.  For reference, review the cURL scripts I have lying around.  Just unzip the package to any Windows computer.


You can either download cURL yourself or use the one I’ve packaged in my sample.  I’d recommend using mine since it works with the rest of the examples.


Every cURL script I create starts with some setup to initialize parameters like server URL, username, and password.  The first time you run the scripts, it will prompt for user name and password.  Anything run subsequently will be done in the context of this user name (e.g. My Communities).


The below command sets the path to the cURL executable.  It also ensures that basic authentication is used and that the username:password pair is included any time a Connections Cloud script is run.

set curl=%~dp0/curl/curl.exe -L -k -u %cnx_username%:%cnx_password%


This script sets the URL to the server.  It also prompts the user for credentials if they were not provided previously.

@echo off
REM CA1 Test Server
REM set cnx_url=
REM North America Production Server
set cnx_url=
IF DEFINED cnx_url (echo %cnx_url%) ELSE (set /p cnx_url= Connections URL:)
IF DEFINED cnx_username (echo %cnx_username%) ELSE (set /p cnx_username= Connections ID:)
IF DEFINED cnx_password (echo **masked**) ELSE (set /p cnx_password= Connections Password:)


Next we need to create a community.  This is done simply by sending text to the Connections Cloud server.

The Script

The cURL script looks like the following.

@echo off
call ../SetupCnx.bat
call ../SetupCurl.bat
%curl% -v -X POST --data-binary @%1 -H "Content-Type: application/atom+xml" %cnx_url%/communities/service/atom/communities/my

A couple of points:

  • -v is the verbose flag; I use it to see everything that happens. You can remove it if you’d like
  • --data-binary @%1 means that I am sending a file to the server and the file name is provided as input on the command line
  • -H "Content-Type: application/atom+xml" is a required setting; you need to set a header specifying the content type per the API doc
  • %cnx_url%/communities/service/atom/communities/my is the URL to the Connections endpoint per the API doc

To create the community, all that’s needed is to create an XML file and run the following command.

C:\IBM\workspaces\connections\cURL\communities>CreateCommunity.bat CommunityInput.xml

The Input

The above command has CommunityInput.xml at the end.  This is the input file that is used to create the community. The input XML file is easy on the eyes as well.  If we had multiple communities, I would write a few more lines in the script to substitute the list of folders for the title field.  Or you could create more input files … it’s a lot easier to edit text than program.

<?xml version="1.0" encoding="UTF-8"?>
<entry xmlns="" xmlns:app="" xmlns:snx="">
 <title type="text">Community Name Goes Here</title>
 <content type="html">Community Description Goes Here</content>
 <author>
  <name>Van Staub</name>
  <snx:userid>GUID or subscriber ID goes here</snx:userid>
 </author>
 <contributor>
  <name>Van Staub</name>
  <snx:userid>GUID or subscriber ID goes here</snx:userid>
 </contributor>
 <category term="community" scheme=""></category>
</entry>

Change the title, content, and name values to suit. But use the API doc as a guide of what you can additionally set.  Most importantly, the snx:userid is either your GUID for Connections on-premises or your subscriber ID for Connections Cloud.
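Where I mentioned substituting the list of folders for the title field, a small script can stamp out one input file per former shared folder. Here’s a sketch in Python (the folder names, file naming, and template details are mine, not from the original scripts; fill in the namespace and scheme values per the API doc):

```python
import os
from xml.sax.saxutils import escape

# Minimal Atom entry mirroring CommunityInput.xml above.
# The category scheme is left blank; take it from the Communities API doc.
TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<entry xmlns="http://www.w3.org/2005/Atom">
 <title type="text">{title}</title>
 <content type="html">Community for the former shared folder: {title}</content>
 <category term="community" scheme=""></category>
</entry>
"""

def write_inputs(folder_names, out_dir="."):
    """Write one CommunityInput-style XML file per folder name."""
    paths = []
    for name in folder_names:
        path = os.path.join(out_dir, "CommunityInput-%s.xml" % name.replace(" ", "_"))
        with open(path, "w", encoding="utf-8") as f:
            # escape() keeps characters like & legal in the XML
            f.write(TEMPLATE.format(title=escape(name)))
        paths.append(path)
    return paths
```

Each generated file can then be fed to CreateCommunity.bat in a loop.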

That’s it.

  1. Unzip my sample.
  2. Update the CommunityInput.xml.
  3. Run CreateCommunity.bat

So next time you need to get something completed quickly or just want to experiment with the APIs, take a look at the cURL scripts I posted.  Most of them should work …

Happy scripting!

Building Social Applications using Connections Cloud and WebSphere Portal: Social Portal Pages

We’re going to use Portal’s theme framework to add the necessary CSS and JS files to our social pages.  Using this approach, we’ll no longer need to include the dependencies in our script portlets.  Pages that have social script portlets on them can simply have the relevant theme profile applied.  Another benefit is that by using Portal’s profile feature, the various browser requests are centralized into a single download to reduce the time taken to load the page.

Creating the Theme Modules

Let’s begin by adding new theme modules.  The modules will include the following resources on the page:

  • The Social Business Toolkit SDK’s Javascript dependency, for example /sbt.sample.web/library?lib=dojo&ver=1.8.0&env=smartcloudEnvironment
  • CSS files from Connections Cloud, for example /connections/resources/web/_style?

You can read how to create the module framework in the Knowledge Center.  Since the CSS files are located on a remote server, I need to create a “system” module.  This is essentially creating a plugin with the relevant extensions.  It’s a web project (WAR) with a single plugin.xml file.  The contents of my plugin.xml are as follows.

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin id=""
        name="Social Business Toolkit Theme Modules"
        version="1.0.0">
 <extension point="com.ibm.portal.resourceaggregator.module" id="sbt_sdk">
  <module id="sbt_sdk">
   <title lang="en" value="Social Business Toolkit SDK"/>
   <description lang="en" value="Social Business Toolkit SDK"/>
   <contribution type="head">
    <sub-contribution type="js"><uri value="{rep=WP CommonComponentConfigService;key=sbt.sdk.url}/sbt.sample.web/library?lib=dojo&amp;ver=1.8.0&amp;env=smartcloudEnvironment"/></sub-contribution>
    <sub-contribution type="css"><uri value="{rep=WP CommonComponentConfigService;}/connections/resources/web/_style?"/></sub-contribution>
    <sub-contribution type="css"><uri value="{rep=WP CommonComponentConfigService;}/connections/resources/web/_style?"/></sub-contribution>
    <sub-contribution type="css"><uri value="{rep=WP CommonComponentConfigService;}/connections/resources/web/_lconntheme/default.css?version=oneui3&amp;rtl=false"/></sub-contribution>
    <sub-contribution type="css"><uri value="{rep=WP CommonComponentConfigService;}/connections/resources/web/_lconnappstyles/default/search.css?version=oneui3&amp;rtl=false"/></sub-contribution>
   </contribution>
  </module>
 </extension>
</plugin>

You could use the actual server’s path, for example <some css resource>, in the XML. But I’m using a substitution rule

{rep=WP CommonComponentConfigService;}

that will swap the corresponding keys in the plugin.xml for the values defined by WebSphere’s Resource Environment Provider.  The only reason I did this was so I could configure the URLs from WebSphere rather than hard code them into the plugin.xml.


The other thing I’m doing is telling the SBT SDK which environment I want configured by referencing sbt.sample.web/library?lib=dojo&amp;ver=1.8.0&amp;env=smartcloudEnvironment.  This alleviates me from having to manually specify the endpoint in the SBT scripts I write later.  And notice the &amp; format: you’ll need to escape the ampersands in the plugin.xml.

Create your web module and deploy to your server.  You can use the Theme Analyzer tools in Portal’s administration interface to pick up the new modules.  Just go to the Control Center feature and invalidate the cache.

Invalidate Theme

Then review the system modules to locate the sbt_sdk one.

sbtSdk Module


To actually use the module, we need to build a theme profile.  A profile is a recipe of which modules should be loaded for a particular page’s functionality.  In addition to the sbtSdk module, we’ll need other IBM-provided or custom modules loaded for pages to work properly.  Profile creation is rather straightforward.  You can use the existing profiles as a starting point.  See those in webdav, for example; I use AnyClient to connect to my Portal server.  Once there, you can peruse the profiles under the default theme.

I’ve created a SBT Profile that includes the SDK and Cloud modules I created earlier.

{
 "moduleIDs": ["getting_started_module",
  "sbt_sdk"],
 "deferredModuleIDs": ["wp_toolbar_host_edit"],
 "titles": [{
  "value": "Connections Cloud",
  "lang": "en"
 }],
 "descriptions": [{
  "value": "This profile has modules necessary for viewing pages that contain portlets written with the Social Business Toolkit SDK and Connections Cloud banner integration",
  "lang": "en"
 }]
}

This JSON file is then added to my default Portal theme using a webdav client.

SBT Profile WebDav

You’ll likely need to again invalidate the theme cache for the profile to be available for the next section.

Page Properties

To enable the profile on a page, we need to update the page properties.  The result of this process is that the aforementioned Javascript and CSS files get added to any page that has the profile enabled.

SBT Profile

And that’s it.  Now any developer can begin authoring “social” script portlets with nothing more than the page profile and a bit of web code.




Sizing an IBM Connections Server

What’s a Sizing

A sizing is a recommendation on how big (or small) your server needs to be to handle an estimated workload.  Will you need a server with two cores or eight cores? This is a difficult question to answer. The reason it’s difficult is that you need to estimate the behavior of your users. How many users will be active during the day? How many times will they use a wiki or a blog or their homepage? If you’re new to Connections, you’ll probably shrug your shoulders and say, “I have no idea.”

I completely expect this response. If you’ve read Predictably Irrational, we humans are just plain bad at assigning value to something new or independent. How much should this cost? “I don’t know, how much is the competitor’s?” How big should the server be?  “I don’t know, how big is our current server?” We need an anchor, a point of reference, to begin the conversation.


Fortunately, IBM has a team that facilitates sizings. You complete a questionnaire filling in answers to questions like how many users, how many will use these applications, how often, etc. The Techline team runs the numbers, and you’re given a helpful report on everything from the number of processors and memory needed to the disk space required in year one. Very helpful.

But when you look at the sizing questionnaire, you’ll find many default suggestions. I’ve read enough sizings to surmise that overwhelmingly people tend to take the defaults. This is better than shrugging your shoulders, but still not ideal.


Let’s go back to our need for an anchor. If most of us simply take the defaults, which answers do change? The most obvious distinction between clients is their user population.  Below are two graphs where the Connections defaults remain the same but the number of active users varies.

The first graph shows the number of requests by application for 100 to 5,000 active users.  The minimum suggested size of the application server for this entire group is 2 cores.

2 Core Sizing

Next, we increase the users: starting at 25,000 and increasing to 100,000 active users.  The suggestion for the 25,000 and 50,000 populations is 4 cores.  The 100,000 population is 12 cores.

4 to 12 Core Sizing

The number of cores suggested excludes high availability.  But for clarity, let’s consider the 12 core example.  It uses quad-core servers; thus we have 3 servers with 4 cores each. High availability means we add one more server with four cores, for a total of 16 licensed cores.
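That server math is easy to fumble in a spreadsheet, so here it is as a tiny helper (the quad-core server size and the single HA spare are the assumptions from the example above):

```python
import math

def licensed_cores(suggested_cores, cores_per_server=4, ha_spares=1):
    """Round suggested cores up to whole servers, then add HA spare servers."""
    servers = math.ceil(suggested_cores / cores_per_server)  # 12 cores -> 3 servers
    return (servers + ha_spares) * cores_per_server          # + 1 spare -> 16 licensed cores
```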

Beware, Math Ahead

I used data to build the above graphs.  An alternative approach is to use the same data and build a model that estimates the sizing.  I’ve previously used Excel and SPSS Statistics to create such models. But this time I wanted to use SPSS Modeler.  With Modeler you can 1) run multiple numeric models at the same time and select the best and 2) refine the inputs to get down to the most accurate model.  I did the latter, since I assumed a regression was the best model and included factors that made sense for what I was estimating.  For example, consider memory.  More people using the server means we need more memory, right?  Yes, but the questionnaire also asks whether Connections Content Manager is used.  Using this feature inherently requires more memory, and it made sense to include this factor in the model.  The result of adding this factor was an increase in correlation from 65% to 83%.  Statistical arguments and skepticism aside, this is a good thing.

Connections SPSS Model

Creating a Model

SPSS doesn’t do everything for us. Or maybe it does, and I’m just a novice. But I still needed to take the data I collected from the sizing questionnaires and put that into a suitable format to build my model. Recall that the questionnaire is asking questions on user behavior: how many users, how many times do they use this application, etc. While that gives us average load, it doesn’t tell us peak load. We must make sure that the server can handle the maximum, peak, usage or bad things will happen (i.e. crash).

There are two questions that create a peak load estimate.

What is the typical length of a day in hours?

What multiplier should be applied for the load during peak hour?

The calculations then look like this for each of the Connections applications: homepage, blogs, activities, etc.

RegisteredUsers 10,000
x ActivePercent 10%
= ActiveUsersCount 1,000
x AppPercent 75%
= AppUsers 750
x AppUse 2
= AppDailyUseCount 1,500
÷ DayLength 8
= AppHourlyUseCount 188
 x Multiplier 2
= AppPeakCount 375

I do this calculation for each of the Connections applications and then add up all of the peak counts to give me a total peak count. You could argue that all applications are not equal.  The activities application is more resource intensive than blogs. You would be correct. But the goal here is to build an estimate and blending works out better than considering each application’s count individually. If I break out the individual counts (and I did investigate this), I see negative correlations as the model tries to overfit the data. So the total peak count is what I settled on.

So What?

Assuming you’re still here after all that, you get to ask, “So what?” Well now I can take the total peak hour request count I calculated and simply plug it into an equation given by Modeler.

TotalPeakCount * 0.00004074 + 1.213 = SuggestedCores
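The whole chain – the questionnaire walk-down above plus Modeler’s equation – can be sketched in a few lines (the function names are mine; the sample numbers mirror the worked example):

```python
def app_peak_count(registered, active_pct, app_pct, daily_uses, day_hours, peak_multiplier):
    """Estimate peak-hour requests for one Connections application."""
    active = registered * active_pct      # 10,000 x 10% = 1,000
    app_users = active * app_pct          # 1,000  x 75% = 750
    daily = app_users * daily_uses        # 750    x 2   = 1,500
    hourly = daily / day_hours            # 1,500  / 8   = 187.5
    return hourly * peak_multiplier       # 187.5  x 2   = 375

def suggested_cores(total_peak_count):
    """The linear model produced by SPSS Modeler."""
    return total_peak_count * 0.00004074 + 1.213
```

Sum app_peak_count over every application to get the TotalPeakCount fed into suggested_cores.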

Bringing It All Together

Forms Experience Builder (FEB) is one of my favorite IBM products.  It gives you a drag and drop way to create a form with all the programmatic control that you’d expect from IBM. I’ve recreated much of the Connections sizing questionnaire as a FEB form. This allows the user to answer questions and run a quick calculation to get an instant sizing. What’s happening behind the scenes is that the calculations I showed above are occurring. The results are then fed into the SPSS Modeler equation to create the instant sizing.

You can take this concept one step further. A model is only as good as the data that built it. With less data, you typically have more error. So the ability to submit authentic data back to FEB allows me to augment the SPSS Modeler’s data set.  The model is then rebuilt, and the overall accuracy of the FEB form’s suggestion improves.
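That feedback loop amounts to a least-squares refit as new (peak count, cores) pairs arrive. A minimal sketch of the refit, without SPSS (ordinary least squares over illustrative points):

```python
def fit_line(points):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

Append each newly submitted data point and re-run the fit; the coefficients drift toward the real workload behavior.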

Take the form for a test drive, but remember that it does not replace working directly with IBM. For purchasing or production decisions, continue to engage Techline for validated and accurate sizings.

Analyzing WebSphere Memory

You don’t need to be an expert in a particular application such as WebSphere Portal or IBM Forms to do a quick analysis of memory.  Here’s how.

Get a Heapdump

The easiest way to create a heapdump is using the WebSphere Console.

  1. Start the administrative console.
  2. In the navigation pane, click Troubleshooting > Java dumps and cores.
  3. Select the server_name for which you want to generate the heap dump.
  4. Click Heap dump to generate the heap dump for your specified server.

Heap Dump WAS Console

Should this approach not work, use wsadmin to generate the heapdump.

  1. Start the wsadmin scripting client. You have several options to run scripting commands, ranging from running them interactively to running them in a profile.
  2. Invoke the generateHeapDump operation on a JVM MBean.

For example, this is my script to launch the wsadmin client on the local machine.

C:\IBM\FormsExperienceBuilder\WebSphere\AppServer\bin\wsadmin.bat -host localhost -port 8071 -user wasadmin -password password

Next, I run the following commands on the wsadmin prompt.

set jvm [$AdminControl queryNames WebSphere:type=JVM,process=TranslatorServer,node=formsNode01,*]
$AdminControl invoke $jvm generateHeapDump

There are a few settings you need to know.

  • The SOAP port as seen in the first command.
  • The process and node in the second command.

You can get these from the WebSphere Console; see the previous screenshot. The Server and Node are listed in the table.  These values correspond to the process and node values substituted in the set command. You can find the SOAP port easily by viewing Servers -> WebSphere application servers -> <your desired server> -> Communications -> Ports. Look for the SOAP_CONNECTOR_ADDRESS.

The output of these approaches will tell you where the heapdump is located. It will be a file with the PHD extension.

Use the Memory Analyzer Toolkit (MAT)

I’ve used MAT for years. It’s awesome at what it does, which is much more than looking at a pretty graph.  Download it here. To open the PHD file generated by WebSphere, you’ll need to add the DTFJ plugin.  Find instructions here.

Now it’s super simple to analyze the PHD. Use File -> Open heap dump -> <select your PHD file>. Let’s take a look at a PHD I recently analyzed.

Memory Analyzer

This is a leak suspect report (the default option in the wizard after opening a heap dump). It’s basically saying that there’s a possible memory leak. Why? Because the WebSphere server’s Java heap is currently set to 1 GB.  Of that, 937MB is being used by one object.  So 90% of the heap is being used by one object – that seems like a leak … or maybe it isn’t.

This is a server that is simply running out of memory because the workload is too high. You can drill down into how the heap is allocated. Use the Open Dominator Tree for entire heap button.

Dominator Tree

The FormCache object is using 90% of the total heap – we can see that in the percentage column, and it’s expected given the graph in the leak suspects report. Look at what’s in this object. It’s a collection of CachedFormInstance private objects. And there are 860 of them! These range in size from 15MB down to 4.7MB (shown in the screenshot). Together these 860 entries make up that 937MB heap allocation.

This is where you ask the question of whether this is abnormal. Is this runaway code, poor caching behavior, or working as designed? A developer needs to answer this question, but at least you have background that something is not quite right.

Proactive Monitoring

Let’s say we’ve identified a problem and made an attempt to resolve it. This could be adding more memory to the server, increasing the Java heap, or even application code changes. How can you proactively monitor the application server? One way is to take heapdumps at regular intervals and specifically review the object allocations.  You can use MAT’s Object Query Language feature to select the object type and view the number of instances. For example, where we previously saw 860 entries of the CachedFormInstance, here we see only 9. This could indicate that a code problem is resolved or that the server is under relatively low load.


Another approach is to monitor the heap in real time. To do this, I use VisualVM, which can be downloaded from the VisualVM website.  To monitor a remote application server, add the JMX settings to the WebSphere Application Server’s JVM process definition, as seen here.

VisualVM WebSphere
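The JMX settings in question are the standard JVM remote-management flags; something along these lines goes into the server’s generic JVM arguments (the port is an example, and disabling authentication and SSL is only sensible in a lab):

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
```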

Now you can create a JMX connection in VisualVM, as seen in my example.


Notice that I can see the heap – both the current size and how much has been used. The sawtooth line shows heap usage over time. In this case, it looks like the heap is growing until garbage collection kicks in to reclaim the space. You would want to look at heap growth over time or large one time increases or drops and investigate accordingly.


IBM at SugarCon 2013

Last week, I attended SugarCon in New York to demonstrate “Social Selling” in the IBM booth.  This was a great opportunity to showcase the potential to create social sales experiences by combining products such as IBM Connections and SugarCRM.  A demonstration of the social selling experience is available on YouTube.

Some also asked whether the live demo could be provided.  I’ve uploaded our live demo, which is a story board of web pages – just enough to tell the social sales story and see it in action.

Also at the event were several IBM business partners.  VoiceRite showcased their Sametime Connector for SugarCRM.  Alacrinet created an exceptional web sales experience by combining WebSphere Portal, IBM Web Content Manager, IBM Connections, and SugarCRM.  And Highland Solutions demonstrated “Semantic CRM”, which uses IBM Connections and SugarCRM to surface sales collateral in Sugar dashlets.  A demonstration of Semantic CRM is available on YouTube.

And what visit to New York would be complete without a view of the skyline?

SugarCon Skyline


Building an IBM OAuth Consumer in PHP

An increasing number of IBM products make use of OAuth, for example IBM Connections and IBM SmartCloud for Social Business. For those developers new to OAuth, there is an excellent sample application on developerWorks. It’s great – if you’re writing Java. But I’ve recently worked with two business partners who are PHP shops.  They obviously required a PHP OAuth consumer, which leads me to this post.  I’ll reference the IBM SmartCloud for Social Business OAuth steps, but the OAuth process is the same on IBM Connections (with one minor implementation difference). Let’s get started.

First, there are some constants that will be used throughout the code.

First are the standard OAuth parameters.

class OAuthParam {
	const ACCESS_TOKEN = 'access_token';
	const REFRESH_TOKEN = 'refresh_token';
	const EXPIRES_IN = 'expires_in';
	const CLIENT_ID = 'client_id';
	const CLIENT_SECRET = 'client_secret';
	const CALLBACK_URI = 'callback_uri';
	const AUTHORIZATION_CODE = 'authorization_code';
	const CODE = "code";
	const GRANT_TYPE = 'grant_type';
	const RESPONSE_TYPE = 'response_type';
	const OAUTH_ERROR = 'oauth_error';
}

Next, there are the respective server endpoints to obtain tokens.

class IBMOAuthEndpoints {
	// IBM SmartCloud for Social Business
	const SC4SB_AUTH_PATH = '/manage/oauth2/authorize';
	const SC4SB_TOKEN_PATH = '/manage/oauth2/token';
	// IBM Connections
	const CNX_AUTH_PATH = '/oauth2/endpoint/connectionsProvider/authorize';
	const CNX_TOKEN_PATH = '/oauth2/endpoint/connectionsProvider/token';
}

And finally, some constants are specific to IBM SmartCloud for Social Business.

class SmartCloud {
	const E1_SERVER = '';
	const C1_SERVER = '';
	const ACCESS_TOKEN_EXPIRATION = 7200;	// 2 hours
	const REFRESH_TOKEN_EXPIRATION = 7776000;	// 90 days
}

I have created two sample scripts. The first is for SmartCloud.

// vendor application (must be SSL for callback to work)
// this is the current URL hosting this PHP script
$callbackUrl = 'https://localhost/SC4SB/SC4SBOAuth2_0.php';
// Step 1: Register the application
$clientId =  'app_20072407_1349820195761';
$clientSecret = '1250ec8a59c5f9a8f68dc77d341ab32bd2d71d81ccf98d77174765d58233a1ebf980b496d7bff7b366973363cde82ed3b1e21174461c39f8a97d5c6a4a09a2b65efb29dcd8c64a4f3acde395ef453c5fd441365dfe8ee90aa9416774796ffe3d189662219fd384fcb86a119637964a0a31d5fd5084e47e8b50'; // not really a secret
// Step 2: Obtain authorization code
// Step 3: Exchange authorization code for access and refresh tokens
$sc4sbOAuth = new IBMOAuthV2(SmartCloud::C1_SERVER, $clientId, $clientSecret, $callbackUrl, 
		IBMOAuthEndpoints::SC4SB_AUTH_PATH, IBMOAuthEndpoints::SC4SB_TOKEN_PATH);
$accessToken = $sc4sbOAuth->getAccessToken();
// Step 4: Use the access token to allow API access
$ch = curl_init(SmartCloud::C1_SERVER . "/api/bss/resource/customer");
curl_setopt_array($ch, $sc4sbOAuth->options);
curl_setopt($ch, CURLOPT_HTTPHEADER, array($sc4sbOAuth->getAuthorizationHeader()));
print curl_exec($ch);

And a similar sample for Connections.

const CNX_SERVER = '';
// vendor application this is the current URL hosting this PHP script
$callbackUrl = 'https://localhost/Connections/ConnectionsOAuth2_0.php';
// Step 1: Register the application
$clientId =  'php-sample';
$clientSecret = 'NXl0GROIu9p6YFKU4i1LI5qZ9OnrBL14y8QQg68brJ7GPEdgn0ed9tI'; // not really a secret
// Step 2: Obtain authorization code
// Step 3: Exchange authorization code for access and refresh tokens
$sc4sbOAuth = new IBMOAuthV2(CNX_SERVER, $clientId, $clientSecret, $callbackUrl,
		IBMOAuthEndpoints::CNX_AUTH_PATH, IBMOAuthEndpoints::CNX_TOKEN_PATH);
$accessToken = $sc4sbOAuth->getAccessToken();
// Step 4: Use the access token to allow API access
$ch = curl_init(CNX_SERVER . '/connections/opensocial/oauth/rest/activitystreams/@me/@all');
curl_setopt_array($ch, $sc4sbOAuth->options);
curl_setopt($ch, CURLOPT_HTTPHEADER, array($sc4sbOAuth->getAuthorizationHeader()));
print curl_exec($ch);

Let’s look at each of the steps individually.

Step 1: Register the application

Registration is straightforward. See the documentation for IBM SmartCloud for Social Business and IBM Connections. Here is an example of my commands for IBM Connections.

OAuthApplicationRegistrationService.addApplication('php-sample', 'PHP Sample', 'https://localhost/Connections/ConnectionsOAuth2_0.php')
clientSecret = OAuthApplicationRegistrationService.getApplicationById('php-sample').get('client_secret')
print clientSecret

Step 2: Obtain authorization code

We’ll begin with the request for an authorization code. The IBMOAuthV2 class first checks whether your browser already has an access token or a refresh token from a previous session; these are stored as cookies. If neither is found, the following code executes, and the user is prompted to log in to IBM SmartCloud for Social Business or Connections. By logging in, the user consents to the third party’s use of his or her data.

private function getAuthorizationCode(){
	$code = $this->getUrlParam($_SERVER['REQUEST_URI'], OAuthParam::CODE);
	if($code == NULL){
		// Step 2: Obtain authorization code
		syslog(LOG_INFO, 'Obtaining authorization code for client ID ' . $this->clientId);
		$url = $this->sc4sbUrl . $this->authPath . '?' .
				OAuthParam::RESPONSE_TYPE . '=' . OAuthParam::CODE .
				'&' . OAuthParam::CALLBACK_URI . '=' . $this->callbackUrl .
				'&' . OAuthParam::CLIENT_ID . '=' . $this->clientId;
		syslog(LOG_INFO, $url);
		header('Location: ' . $url);
		// the result is SC4SB or Connections returning to the callbackUrl
	} else {
		return $code;
	}
}

After login, the user is redirected back to the third party’s application.  The authorization code is provided as part of the redirect, which is then used to obtain the access token.

Step 3: Exchange authorization code for access and refresh tokens

private function exchangeTokens(){
	// Step 3: Exchange authorization code for access and refresh tokens
	$code = $this->getAuthorizationCode();
	syslog(LOG_INFO, 'Authorizing client ID ' . $this->clientId . ' using authorization code ' . $code);
	$endpoint = $this->sc4sbUrl . $this->tokenPath;
	syslog(LOG_INFO, 'OAuth consumer created for ' . $endpoint);
	$ch = curl_init($endpoint);
	$fields = OAuthParam::CALLBACK_URI . '=' . urlencode($this->callbackUrl) .
	'&' . OAuthParam::CLIENT_SECRET . '=' . urlencode($this->clientSecret) .
	'&' . OAuthParam::CLIENT_ID . '=' . urlencode($this->clientId) .
	'&' . OAuthParam::GRANT_TYPE . '=' . urlencode(OAuthParam::AUTHORIZATION_CODE) .
	'&' . OAuthParam::CODE . '=' . urlencode($code);
	syslog(LOG_INFO, 'Adding POST fields ' . $fields);
	curl_setopt_array($ch, $this->options);
	curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
	curl_setopt($ch, CURLOPT_POST, TRUE);
	// if the result is false, check curl_error($ch);
	$result = curl_exec($ch);
	if($result === FALSE){
		syslog(LOG_ERR, 'Failed Step 3');
		syslog(LOG_ERR, 'Authorization server returned ' . curl_getinfo($ch, CURLINFO_HTTP_CODE));
		// TODO Goto error page
	} else {
		syslog(LOG_INFO, 'Authorization server returned ' . curl_getinfo($ch, CURLINFO_HTTP_CODE));
		// If the request is successful, the following parameters are returned
		// in the body of the response with an HTTP response code of 200:
		// refresh_token, access_token, issued_on, expires_in, token_type
		// the response body differs between Connections and SC4SB
		$result = $this->normalizeBodyData($result);
		parse_str($result, $this->tokenData);
		// you can store the refresh token to request a new access token in the future
		// e.g. for up to 90 days
		syslog(LOG_INFO, 'Successfully obtained the following parameters from SmartCloud for Social Business');
		syslog(LOG_INFO, print_r($this->tokenData, TRUE));
	}
}

The above code makes a direct request from the third party’s application server to either IBM SmartCloud for Social Business or Connections. It’s important to be aware of “who” makes the request – the application server, not the user’s browser. Any firewalls or port restrictions between the third party application and the IBM products can cause the request to fail.

You may have noticed that I have a helper function normalizeBodyData.

private function normalizeBodyData($body){
	syslog(LOG_INFO, 'Normalizing response body ' . $body);
	// Connections body
	// {"access_token":"czv49WRypFyFsFpAJOqlzwQ7jyMsW7SXKMcGXknP","token_type":"bearer","expires_in":43199,"scope":"","refresh_token":"k05LL9G1QzlwGeSQhcWXHi2Drq04wgD0ZCbw2vw9T4VtowSXnZ"}
	// normalize to the SC4SB format
	$cnxTokens = array("\"", ":", ",", "{", "}");
	$sc4sbTokens = array("", "=", "&", "", "");
	$normalized = str_replace($cnxTokens, $sc4sbTokens, $body);
	syslog(LOG_INFO, 'Normalized ' . $normalized);
	return $normalized;
}

The response format from IBM SmartCloud for Social Business and IBM Connections differs. I’ve chosen to normalize the Connections response to the SmartCloud format for use in the next step.  Now with the access token available, the third party application can make an API call.
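As a side note, since the Connections body is JSON, an alternative to character-by-character replacement is to decode it directly. Below is a minimal sketch (the helper name is mine, not part of the original class) that yields an associative array for either response format:

```php
<?php
// Sketch: return the token response as an associative array, whether the
// server sent JSON (Connections) or form-encoded pairs (SC4SB).
function normalizeTokenResponse($body) {
	$decoded = json_decode($body, true);
	if (is_array($decoded)) {
		// Connections returns JSON; json_decode handles it directly
		return $decoded;
	}
	// SC4SB returns form-encoded key=value pairs; fall back to parse_str
	parse_str($body, $tokenData);
	return $tokenData;
}
```

This avoids edge cases where a token value could itself contain one of the replaced characters.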

Step 4: Use the access token to allow API access

For example, access the activity stream in IBM Connections. While the code below retrieves data, the third party could also create an activity stream entry – and that entry would appear as though it were made by the user who approved the access, not by the third party.

// Step 4: Use the access token to allow API access
$ch = curl_init(CNX_SERVER . '/connections/opensocial/oauth/rest/activitystreams/@me/@all');
curl_setopt_array($ch, $sc4sbOAuth->options);
curl_setopt($ch, CURLOPT_HTTPHEADER, array($sc4sbOAuth->getAuthorizationHeader()));
print curl_exec($ch);

Another example is accessing the Business Support System in SmartCloud.

// Step 4: Use the access token to allow API access
$ch = curl_init(SmartCloud::C1_SERVER . "/api/bss/resource/customer");
curl_setopt_array($ch, $sc4sbOAuth->options);
curl_setopt($ch, CURLOPT_HTTPHEADER, array($sc4sbOAuth->getAuthorizationHeader()));
print curl_exec($ch);

Using the data returned from the above code is where /* your code goes here */.
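As an illustrative sketch of what that code might look like – the field names below follow the OpenSocial activity stream format, so verify them against your server’s actual response – you could decode the JSON and pull out each entry’s title:

```php
<?php
// Hypothetical sketch: list the title of each activity stream entry
// returned by curl_exec. Field names ("list", "title") are assumptions
// based on the OpenSocial activity stream format.
function listEntryTitles($json) {
	$stream = json_decode($json, true);
	$titles = array();
	if (isset($stream['list']) && is_array($stream['list'])) {
		foreach ($stream['list'] as $entry) {
			if (isset($entry['title'])) {
				$titles[] = $entry['title'];
			}
		}
	}
	return $titles;
}
```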

One final step is obtaining a new access token. For SmartCloud, the access token used to make API calls expires in two hours. The code I’ve written stores the access token as a cookie whose expiration is based on the response from the server. The code also stores the refresh token, which – for SmartCloud – is valid for up to 90 days and may be used to obtain new access tokens. After the refresh token expires, you’ll need to perform the entire OAuth dance again. The code I’ve written is functional, but edge cases related to exactly when the refresh token expires need to be handled better. One final note: while SmartCloud for Social Business lists several ways to make the refresh request, I’ve found only the parameterized URL approach works.
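To sketch the cookie bookkeeping (the function and parameter names here are mine, not from the downloadable code), the cookie expiration timestamps can be derived from the token response: the access token cookie lives as long as the server says the token does, while the refresh token cookie gets the longer lifetime.

```php
<?php
// Sketch (illustrative names): compute cookie expiration timestamps from
// the token response. The refresh token cookie outlives the access token
// cookie so a new access token can be requested without user interaction.
function tokenCookieExpirations(array $tokenData, $now, $refreshTokenLifetime = 7776000) {
	return array(
		// access token cookie expires when the server says the token does
		'access_token'  => $now + (int)$tokenData['expires_in'],
		// refresh token is valid for up to 90 days on SmartCloud
		'refresh_token' => $now + $refreshTokenLifetime,
	);
}
```

The returned timestamps would be passed as the `expire` argument of `setcookie()`.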

Step 5: Get a new access token after the access token has expired

private function refreshAccessToken(){
	syslog(LOG_INFO, 'Refreshing access_token using refresh token ' . $this->getRefreshToken());
	$refreshParams = OAuthParam::CLIENT_SECRET . '=' . $this->clientSecret .
	'&' . OAuthParam::CLIENT_ID . '=' . $this->clientId .
	'&' . OAuthParam::GRANT_TYPE . '=' . OAuthParam::REFRESH_TOKEN .
	'&' . OAuthParam::REFRESH_TOKEN . '=' . $this->getRefreshToken();
	if($this->tokenPath == IBMOAuthEndpoints::SC4SB_TOKEN_PATH){
		// set SC4SB refresh query on the URL
		syslog(LOG_INFO, 'Adding SC4SB refresh query');
		$ch = curl_init($this->sc4sbUrl . $this->tokenPath . '?' . $refreshParams);
		curl_setopt_array($ch, $this->options);
	} else {
		// set Connections POST parameters
		syslog(LOG_INFO, 'Adding Connections refresh POST fields ' . $refreshParams);
		$ch = curl_init($this->sc4sbUrl . $this->tokenPath);
		curl_setopt_array($ch, $this->options);
		curl_setopt($ch, CURLOPT_POSTFIELDS, $refreshParams);
		curl_setopt($ch, CURLOPT_POST, TRUE);
	}
	// if the result is false, check curl_error($ch);
	$result = curl_exec($ch);
	if($result === FALSE){
		syslog(LOG_ERR, 'Failed Step 5');
		syslog(LOG_ERR, 'Authorization server returned ' . curl_getinfo($ch, CURLINFO_HTTP_CODE));
		return;
	}
	// If the request is successful, the following parameters are returned
	// in the body of the response with an HTTP response code of 200:
	// refresh_token, access_token, issued_on, expires_in, token_type
	// the response body differs between Connections and SC4SB
	$result = $this->normalizeBodyData($result);
	parse_str($result, $this->tokenData);
	syslog(LOG_INFO, 'Obtained new access_token ' . $this->tokenData[OAuthParam::ACCESS_TOKEN]);
	// store the new tokens as cookies for subsequent requests (helper name assumed)
	$this->setTokenCookies($this->tokenData[OAuthParam::ACCESS_TOKEN],
			$this->tokenData[OAuthParam::REFRESH_TOKEN], $this->tokenData[OAuthParam::EXPIRES_IN]);
}

Download the code and happy coding.

Socialog: Social Knowledge Sharing Using IBM Connections

Every software user has experienced some form of error message – the message that appears unexpectedly, usually when you haven’t saved your work. Readers of PC Magazine were amused by the reader-submitted, often-humorous error messages in its section, “Abort, Retry, Fail.” For those in the software industry, error messages and their useful counterpart, trace logging, help identify the causes of program failures. But some of these messages – even for the experienced – still prompt the question,

What does that mean?

Among the industry professionals who regularly review the detailed messages contained in log files are the developers of IBM’s Software Group (SWG). SWG creates enterprise-class software used in nearly every industry by many of the world’s leading companies. When this software fails to function as expected, analysis of a product’s log files is one of the first steps in the solution process. In fact, gathering and submitting log files is such an important requirement that many product teams have created “must gather” documents describing the log information needed before a support ticket is submitted.

Generally speaking, all software has the same technical lifecycle: development, deployment, and support. At each stage knowledge is certainly gained, possibly stored, and sometimes shared. For companies the size of IBM, this process repeats itself on a massive scale. The developer asks, “Was that bug fixed?” The services engineer deploying the solution asks, “What does this error mean?” And the software support engineer wonders, “Which error in this log file matters?” These questions are examples of tacit knowledge – knowledge that is difficult to transfer through written or verbal means.

Tacit doesn’t mean the knowledge cannot be learned – only that the learning process often requires a great deal of personal interaction. Think of an apprenticeship: the apprentice must engage in a period of learning, a knowledge transfer from the more experienced “master”. In an extremely large company such as IBM, this is difficult for numerous reasons:

  • The person with the knowledge may no longer be available.
  • Teams may be geographically distributed leading to difficulty sharing information.
  • Teams may be unaware of other teams with important knowledge.
  • The apprentice may fixate on irrelevant details and fail to emphasize relevant information.

Such communication failures lead to delays in the problem determination process often through rediscovery of existing knowledge or an inability to match pertinent information with expert assistance. Once tacit knowledge is created, how can IBM facilitate its transfer to each actor in the software lifecycle?

To aid in the tasks of knowledge creation and discovery, I created the IBM community source project Socialog. Software engineers most often use text editors when reviewing logs. Unfortunately, the act of knowledge discovery (the reviewing of a log file) is separate from the documentation of this knowledge (the knowledge channels). Finally, the usage of such knowledge may not be clear to inexperienced professionals. To better align these tasks, Socialog provides a text editor platform that enhances log analysis while providing the means to document knowledge. For example, the same way a reader may write in the margin as he or she reads a book, Socialog provides the ability to annotate log messages with additional information.

Socialog Annotations
An annotation on select text

This novel approach allows the reader to have more information in one source, saving time and energy on data gathering. These annotations can also be shared to IBM Connections blogs.

Socialog Connections
Annotations in Socialog and stored on IBM Connections

Users of the Socialog application will automatically receive this shared information. Sharing surfaces the so-called “wisdom of the crowds.” The result is a solution that breaks knowledge out of team-based or product-based silos providing efficient use to the widest audience possible. The Socialog mantra is

Stop micro-blogging and start micro-technoting.

Socialog also creates a virtuous feedback loop by combining the sharing and receiving of knowledge within a core business activity. Consider the business problems this social approach solves.

  • Onboarding. What if you are a new team member? Would you even know what to look for in a log?
  • Attrition. What if someone leaves your team? Is their knowledge documented somewhere for others’ benefit? How long would it take to apply that knowledge?
  • Encapsulation. What if you need context on an error message related to your problem, but it falls within another team’s domain?
  • Distribution. What if what you’ve learned isn’t suitable for existing knowledge channels?

By providing a platform to review logs while integrating content from Connections, knowledge is stored, efficiently surfaced, and flows across intra-organizational boundaries. Consumers can either apply this knowledge to their current analysis or seek out the experts – the authors of the annotations. A famous computer engineer is quoted as saying, “Given enough eyeballs, all bugs are shallow.” For an organization of any size, given enough knowledge, any problem can be solved quickly. The concept behind Socialog – sharing and applying information more easily – has real effects on the productivity of IBM’s engineers and clients while also realizing the cost reductions that great social strategies deliver.

Socialog is an IBM internal application. It is currently being tested in emerging markets where there is a need to transfer skills to new knowledge engineers. The following video highlights how Socialog is used in an IBM Support Engineering role. There are quite a few business-specific features described before the social capabilities. A social demonstration is available three quarters of the way into the video.


Social Rendering Licensing

WCM Social Rendering

Portal includes Social Rendering, which adds Connections content to Portal pages by using WCM. Does Social Rendering require a WCM license? No – unless you plan on customizing the presentation templates. This is similar to the blog, wiki, and article templates entitlement found in Portal Server.