Showing posts with label Measuring Knowledge. Show all posts

Wednesday, September 30, 2009

Defining ... Some Thoughts

This seems to be the season for fundamental re-thinks. It began with Dave Snowden's post about alternatives to the CKO, which delved into the relationship between business units and KM. I had published a poll on the same topic (which is open till 10th October), and blogged about Dave's thoughts. And then there is something I have been thinking about for a few days (the reason I haven't blogged about it earlier is simply laziness) ... how could one define KM? I came across this post by Dave Snowden, defining KM, which I think is a very good description of what KM should be doing in an organization.
I think the definition Dave gives describes KM quite well:

The purpose of Knowledge Management is to provide support for improved decision-making and innovation throughout the organization. This is achieved through the effective management of human intuition and experience augmented by the provision of information, processes and technology together with training and mentoring program.

Improved decision-making ... this is something information systems promised more than a decade back. Though decisions did improve, there is still room for decision-making to improve further. How, one may ask? Till now, the paradigm of decision-making hasn't considered that decision-making is not a perfectly rational process. In other words, decisions aren't always made on perfectly rational assumptions, or on the information available. Even if, theoretically, all possible information were available (which it can't be), there would still be a factor x which is not totally definable, and which cannot be externalized, that influences decision-making. Could we call this tacit knowledge? Probably. Could we call this experience? Maybe. Whatever we call it, this remains a major aspect of Knowledge Management.

Add to this the fact that it is not usually possible for everyone to have access to all the information required to make a decision. This is not only because of systemic constraints, but also because there is usually no single definition of what information is relevant, or required, for making a decision. In some scenarios there is, but not in all. Given this, one aspect of KM is to connect people with sources of knowledge, whether repositories or people, and to give them access to knowledge, directly or indirectly, which may be relevant for decision-making. This is the essential value-proposition for tools like social networking.

Another aspect which Dave mentioned is the positioning of KM in the organization. The essence is that at a centralized level, KM needs to be synchronized with the strategic imperatives of the organization, while implementation should be done at a localized level. Implementation of KM initiatives should be within the context of localized business requirements. This has a number of benefits. One, it ensures that while overall KM is aligned with strategic requirements, at the point of implementation KM is aligned with the specifics of business requirements. Two, it creates a level of ownership for KM initiatives among business units. Three, it is easier to measure the impact of KM initiatives in a highly localized context, where it is easy to define the way KM can impact the business, than at a generic level.

Friday, July 17, 2009

Conversations, Networks, Measurement

This is a question I have been thinking about for some time. The question of ROI ... and how this impacts the way we look at KM. The question is simply this ... how does one measure the impact of KM initiatives on the financial health of the organization? This question can be answered depending on how you understand it. Simply put, anything that has a financial impact has an impact either on revenues or on costs, whether directly or indirectly. Suresh Nair posted a comment over at the post, where he refers to the whole discussion from the perspective of need. Do we need something? If yes, and it would have an impact on the performance of the organization, go for it. But then, another aspect to look at is whether it is worth it if you go for it. And that is probably the trickier question to answer. How does the CFO decide that it's worth investing in a Social Networking tool, for example?

This question can be looked at in two parts. One, content, and the other, collaboration. Let's look at content first. This is a little simpler to address. To begin with, if you have a document, you could always look at it, and at how much effort this document would save. So, for example, if a document reduces rework, or reduces cycle-time by, say, 10%, that's a 10% reduction in cost for that particular process. It's a different matter that this kind of determination is by itself not completely accurate, but if you get the opinion of enough people, you could come up with a number which is reasonably accurate, at least in theory.
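As a rough illustration of that estimate (all figures below are made up purely for the example; in practice each number would come from asking practitioners familiar with the process), the arithmetic could be sketched as:

```python
# Back-of-the-envelope estimate of savings from a reusable document.
# All figures are hypothetical; the point is only the shape of the sum.

def estimated_savings(process_cost_per_run, runs_per_year, reduction_estimates):
    """Average several people's estimates of the cycle-time/cost reduction
    (as fractions, e.g. 0.10 for 10%) and apply it to the annual process cost."""
    avg_reduction = sum(reduction_estimates) / len(reduction_estimates)
    return process_cost_per_run * runs_per_year * avg_reduction

# Five practitioners guess the reduction a knowledge document brings.
estimates = [0.08, 0.10, 0.12, 0.10, 0.10]
savings = estimated_savings(process_cost_per_run=2000,
                            runs_per_year=150,
                            reduction_estimates=estimates)
print(f"Estimated annual savings: {savings:.0f}")
```

Averaging over enough opinions is what lends the number whatever accuracy it has; the model itself stays deliberately naive.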

This brings us to the second point, about conversations. And this is where it gets tricky. How do you measure the impact of conversations? Here, let's look at it in two parts. One, when taking a decision to invest in a tool to enable conversation, how does the organization even know how, and in what form, conversations are actually going to happen? There is probably no way to determine this, given that one of the key factors is the adoption rate, and even once you move past that, the nature of the conversation, according to the basic paradigm, is something you cannot regulate. The other aspect, if you already have such a platform, is how to actually determine what value conversations are adding. This is where the paradigm of ROI seems to face some resistance.

There was a recent post by Jon Husband about assessing productivity, where he describes some of the aspects of networks, and hence conversations, which make measuring them tricky. To quote:

• They multiply rapidly because the value of a network increases exponentially with each additional connection.
• They become faster and faster because the denser the interconnections, the faster the cycle time.
• They subvert (unnecessary) hierarchy because previously scarce resources such as information are available to all.
• Network interactions yield volatile results because echo effects amplify signals.
• Networks connect with other networks to form complex adaptive systems whose outcomes are inherently unpredictable.
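The first point can be made concrete with one common (if rough) formalization, Metcalfe's law, under which the number of possible connections, and hence the notional value of the network, grows with the square of the number of members. A small sketch (the "value" here is purely notional, not a financial measure, and this formula is my illustration, not from the quoted post):

```python
# Potential pairwise connections in a network of n members (Metcalfe-style).
# The "value" is notional: it only shows how fast connections multiply.

def potential_connections(n):
    """Number of distinct pairs among n members: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, potential_connections(n))
# Ten times the members yields roughly a hundred times the connections.
```

Whichever exact growth law one prefers, the point stands: connections multiply far faster than headcount, which is exactly what makes their value hard to pin a number on in advance.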

The interesting thing to see from these is that networks can open up ways of working which are new, which we haven't yet seen in organizations. For example, the idea of bypassing hierarchies. This is something which is enabled by the network. Does this lead to quicker decision-making? Probably it does. What is the financial impact of quicker decision-making? We don't know. Can this be measured in the context of specific decisions? I think so. An organization I was interacting with a few years ago had a servicing scenario where the service engineer, if facing a problem he could not solve, would travel back to the office, consult his manager, who would tell him the solution, and then he would go back to the customer site, check whether the spares required for solving the problem were available, and if not, would come back to the office, order the spares, and, once they arrived, repair the product. With handheld devices, the engineer could interact directly with his counterparts across the country and quickly get a solution, either through a Knowledge Base or through interactions with other service engineers, and reduce the time taken to repair. At the same time, if the spares weren't available, the engineer could broadcast a request for spares to other engineers, who could provide them if they had stock they didn't require. There is value in these conversations.

But these are specific examples, and on the whole, it is not so simple to determine this kind of value from conversations, or from networks. But can we at least define scenarios in which conversations can create value, in a specific context? As I have written before, measurements of possible improvements make more sense in a specific context than when broad-based. And if you can take this to the context of the business process, you can at least begin to understand the applicability. Within this context, it is rather easy to identify how conversations could create value. Not that this exercise is feasible across the entire organization, but it at least illustrates the value of conversations in the context of the organization.

Another thing that Jon mentions:

Continuous flows of information are the raw material of an organization’s value creation and overall performance.

This is the idea on which the concept of ERP was based, too: making the relevant information available to the relevant people, so that they could take effective decisions and processes could be streamlined. The only thing is, ERPs focus on transaction processing, where data is made available across organizational silos or departments, while we are talking about ideas and experiences being made available in a similar manner. It is a little easier to quantify the impact of data sharing (production planning cycle time reduced by 20%, say).

To take an example, I am having a conversation with Nirmala about something which, we realized, we were both thinking about. The whole interaction was sparked by a comment on twitter (and also on facebook), and brought out a conversation which could lead to ideas coming out of it. These ideas could have some form of value. Even so, it would be impossible to quantify this to begin with. And even then, if it is a new idea, it is easier to quantify the value, while if it is just a sharing of ideas, making people more effective in their work, then this is tricky to measure, too. Add to this that even if nothing comes out of the idea, I would at least have learnt something, which again is very difficult to quantify.

Any thoughts, please feel free to comment.

Tuesday, January 13, 2009

A Conversation ...

You must all be following the news, and all the goings-on at Satyam. Again, I am not a management expert, so it is better for all that I don't comment on that and expose my ignorance. Having said that, I would say that equating this to large-scale corruption in Indian industry would be quite similar to speculating about large-scale corruption in American industry in the wake of Enron. Which, to my mind, is not reasonable.

Coming now to the point of this post ... a few days back, the Satyam stock was seeing quite a bit of fluctuation in trades on the stock exchange. I was having a talk with my friend, Arvind Dixit, and asked him who the people buying Satyam at this stage are. I won't write here about his hypothesis, but he mentioned an interesting thing ... something to the effect that if it were a brick-and-mortar company, it would have been easy to figure out the fair value of the Satyam stock, but because the nature of Satyam's business is knowledge-based, it is very difficult to determine the fair value of the stock.

I would see this based on two considerations:

1. It is inherently very difficult to determine fair value based on intangibles.
2. Since knowledge is the main ingredient of the business, and this knowledge is inherently carried by people, it becomes all the more difficult to determine the fair value, because as people leave, they take away a large part of the resources of the organization with them.

And this is what I wanted to write about ... the fact that it is very difficult to determine the value of knowledge to the organization, or to society at large. However, it would not be correct to assume from this that there is no value (nobody would agree even if you said that). The fact is, while there are exhaustive mechanisms for valuing tangibles, there is still little by way of mechanisms to value intangibles. One could estimate value based on the projected revenues which could be generated from this knowledge, but this is at most a proxy measure.
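To illustrate what such a proxy measure might look like (with hypothetical numbers, and a deliberately naive model of my own for illustration), one could discount projected knowledge-attributable revenues back to a present value:

```python
# A naive proxy valuation: discount projected revenues attributed to the
# organization's knowledge. Figures and the attribution itself are
# hypothetical; as argued above, this is at most a proxy, not the value
# of the knowledge itself.

def proxy_value(projected_revenues, discount_rate):
    """Present value of a list of yearly revenue projections."""
    return sum(r / (1 + discount_rate) ** t
               for t, r in enumerate(projected_revenues, start=1))

# Three years of projected knowledge-attributable revenue, 10% discount rate.
value = proxy_value([100.0, 110.0, 121.0], discount_rate=0.10)
print(round(value, 2))
```

The fragility is visible in the inputs: both the projections and the share of revenue attributed to knowledge are guesses, and the people carrying the knowledge can walk out the door, taking the projections with them.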

Thursday, December 11, 2008

Of Measurements ... Again

A lot of people have written a lot about the value of measurements. Most of us know the dictum that whatever can't be measured can't be managed. And this interesting post by Moria Levy about Measurement also starts with this dictum. But that's where she moves away from what a lot of folks are saying.

A lot has been written about the utility, or futility, of measurements, especially when it comes to intangibles. This is because of the basic definition of something intangible, which the dictionary gives as ...

existing only in connection with something else, as the goodwill of a business.

Now, if something exists only in connection with something else, how do we measure it? And is it really important to measure it? Maybe it isn't.

Take the example of knowledge ... Moria has forcefully described how and why measurement may not be the best thing to have happened to humanity since ... (fill in with whatever you like!). Apart from the usual issue that most measurements we do today are about where we have been, rather than where we are headed, an important thing to point out is that in most scenarios it is not possible to identify cause-and-effect relationships between things. It is easy if we can keep all other variables constant, but that is easier said than done. As I have written before, KM is possibly one of many initiatives being run in the organization, and as such, it is really difficult to identify cause-and-effect relationships which can define what led to which operational improvement. Like the swimmer's dilemma I have written about earlier (although that was in the context of training, it is equally applicable here).

Another aspect of this definition is that if intangibles exist only in connection with something else, the only way to measure them is by measuring those something-elses. This is why I have been talking about the whole idea of proxy measures, which means that we cannot, and maybe should not, have a universal definition for the measurement of KM, but should rather derive these definitions based on the context in which they are applicable.

Wednesday, October 29, 2008

Measurement And E2.0 ...

Back after a week ... and, Diwali! Here's wishing all of you a Happy Diwali and a Prosperous New Year. The Muhurat trading session yesterday had most stocks going up on the BSE, so that's a nice start.

Andrew McAfee has a rather interesting conversation going ... about a topic which tends to attract about the most divergent views when it comes to social computing ... yes, you got it ... measurement. Andrew has written a rather interesting post about the whole idea of rating knowledge workers, encapsulating a large range of divergent views on the subject.

What I believe comes out of the entire discussion is that while the whole idea of putting a rating on someone's contribution to a social computing platform runs quite against the entire idea of social computing, there has to be a way this can be addressed. After all, when we look at anything from an organizational perspective, there has to be a way of finding out whether we are on the right track, and whether there need to be changes to the way things are being done.

There could be two ways of looking at this ... one could be in terms of a performance appraisal type of rating on contributions and knowledge sharing efforts, and the other in terms of community feedback on these. While the first could end up stifling the entire effort (because this would look at it more quantitatively, rather than qualitatively ... how many blog posts could your boss go through to give you a rating ...), the second option is actually quite in line with the overall idea of social computing.

Let's take an example ... when someone from your network posts something on their profile, say on facebook, you, and lots of others, have the means to comment on it. These comments are essentially feedback, and could work as a form of ranking of this contribution. Take this one step further, into the organizational context ... if people had the possibility of giving you stars (ya, this is something I picked up from my son ... they get stars for doing well at school), they could show their appreciation of whatever you have contributed. The nice part is that there is no limit to the supply of these stars ... so you don't necessarily rank someone to the exclusion of someone else, and, considered over the larger audience, this could be a reasonable way for people to show their appreciation of your work, and at the same time work well in terms of recommending things to others.

In addition to this, different people look at the same contribution from different perspectives. An expert looks at it trying to understand how well it could communicate a concept to a larger audience, a novice could look at it to learn something new, while someone who is simply trying to solve a problem would look at it from the perspective of relevance. Aggregating feedback from such diverse viewpoints would, I think, give an overall qualitative perspective.
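A minimal sketch of what such aggregation could look like (the page names and roles here are invented for illustration): collect unlimited "stars" per contribution, tagged with the giver's perspective, and summarize both overall and per audience:

```python
# Aggregate "star" feedback on contributions. Stars are unlimited, so
# appreciating one contribution never ranks another down.
from collections import defaultdict

def aggregate_stars(feedback):
    """feedback: list of (contribution, giver_role, stars) tuples.
    Returns {contribution: {"total": int, "by_role": {role: int}}}."""
    summary = defaultdict(lambda: {"total": 0, "by_role": defaultdict(int)})
    for contribution, role, stars in feedback:
        summary[contribution]["total"] += stars
        summary[contribution]["by_role"][role] += stars
    return summary

# Hypothetical feedback on two pages, from different perspectives.
feedback = [
    ("faq-page", "expert", 2),
    ("faq-page", "novice", 5),
    ("faq-page", "problem-solver", 3),
    ("sales-blog", "novice", 1),
]
summary = aggregate_stars(feedback)
print(summary["faq-page"]["total"])
print(dict(summary["faq-page"]["by_role"]))
```

Keeping the per-role breakdown is the point: the same total can mean "experts rate it highly" or "novices find it useful", and those are different qualitative signals.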

In other words, if we take a scenario where feedback could be gathered by the larger community, this could be a reasonably nice way of understanding how the entire idea of social computing is working in the organization.

Wednesday, October 8, 2008

Leadership and Social Computing ...

A rather interesting post by Rachel Happe ... on the distinction between the wisdom of crowds and mob rule ... interesting reading ... more so because it brings some sobriety to the euphoria around social computing. Having said that, the key point I think Rachel brings out is the idea of leadership. And this is something I have experienced in my interactions with different organizations.

Especially within the context of the organization, leadership plays a critical role. As I have written before, the difference between successful adoption of, and hence deriving benefits from, social computing and Knowledge Management initiatives, and the other way round, comes to a large extent from the leadership, and the attitude of leadership towards these initiatives. Now, leadership is not the only parameter here, but it is definitely one of the most important parameters in determining how an organization is going to take to the larger social computing picture.

If we have an organization where leaders look askance at blogs (there are quite a few organizations where senior management, and I am equating them with leadership, look at blogging as a waste of time), then the probability of the organization adopting blogging on a large scale is quite low. Similarly for communities ... One of the paradoxes about communities is that while they are supposed to be self-forming and self-governing, they really cannot sustain themselves without some amount of stimulus provided by the organization itself, and when I say organization here, I am really talking about leadership.

Which brings us to the question ... how to get the leadership to buy into these initiatives. A lot has been written about this, but more and more, the ROI concept comes in. Managers need to see what benefit the organization gets from investing time and effort into an initiative like adopting web 2.0 technologies, in order to justify the investment of resources into this rather than into other initiatives competing for the same funding. Having said this, ROI is not a concept which lends itself easily to calculation when it comes to knowledge, for reasons I have written about before. This is not to say that we can do without something as basic as this in the minds of the decision-makers. Now, I am not writing about a score-card here, but some measures for performance (which are usually already in place), and their relation with KM initiatives, is something which needs to be developed. And this, to my mind, can be developed only within the context of a specific scenario, rather than being generalized.

Tuesday, October 7, 2008

ROI And Training ... Again

A very interesting post by Jay Cross about ROI ... it got me thinking. A question which has been coming up time and again in discussions I have been having with friends is the extent to which the way we measure ROI has been responsible for the crisis the markets are facing. Or is it, at all? Hey ... I am not a management guru, and hence I don't even claim to know whether it is or not.


There is, however, something which I have been thinking about, and this post actually brought this out quite well. Especially the part where he says …

Making strategic decisions is fundamentally different from making operating decisions. Senior leadership uses gut feel, informed judgment, and vision to set direction. Managers at lower levels decide what projects to fund by describing the logic of how they will help carry out the strategy; this is where running the numbers is useful. ROI hurdles help identify the projects with the greatest potential return. They don’t address the big picture.

This is an interesting thought, if we take it forward. When we talk about vision, we are not talking about this quarter, or the next. We are, instead, talking about a process of reaching from point A to point B, whatever these points may be. The question is, if, in this process, some of the measures take a hit for a quarter or two, sort of giving up some short-term gains for longer-term gains, do these trade-offs actually come onto the radar, or the intelligence dashboards, of business leaders?

Consider this ... there are a number of construction projects going on in Delhi these days, in preparation for the Commonwealth Games, 2010. Now, these project sites are not a pretty sight as of now, but by the time they are completed, it's going to be a different picture altogether. Should one give up on a not-so-pretty near-term picture in order to attain a nicer picture in the long term?

In this context, let's look at training. Let's remember ... training is usually work in progress. When people come out of a training, they have learnt some things, and they are yet to learn some more, which is where the experience of applying the concepts they have learnt on the job comes into the picture. The first question, hence, is: at what point should we measure the ROI of training? The traditional means are feedback forms which participants fill out at the end of the training, when they have no idea how relevant the training has been, and how well it has equipped them to deliver work on the job. So does this mean that effectiveness should be measured at a later point? Here, the question that comes up is: to what extent can operating improvements be attributed to training, and to what extent can they be attributed to experience, on-the-job learning, or collaboration?

Let's look at it this way ... you could train someone to swim ... or they could learn to swim by themselves once pushed into the deep end of the pool (with the lifeguard around, of course ...). The person who was trained to swim wouldn't be able to appreciate the effectiveness of the training, because he never experienced the effort required in learning to swim on his own, while the other person never really got trained, so again, he is not the right person to judge.

Friday, September 12, 2008

Knowledge Scorecard ...

No, I am not coming up with a new knowledge scorecard. Rather, this is about some of the things I have been reading ... about measuring knowledge. Rather interesting reading, though I would think they are based on assumptions we might want to question.

The first assumption of measuring the knowledge inventory of the organization is that the knowledge, and the person who holds the knowledge, are two separate, independent things. Not only does this treat knowledge as a thing, it also makes the assumption that you can have knowledge even if you abstract the knower from the scene. This assumption may not be quite valid. Of course, when we talk about explicit knowledge, we assume that it holds, but once we believe that all knowledge is directly or indirectly tacit, the assumption breaks down. The question that then comes up is how one measures something which doesn't exist on its own.

Another assumption is that knowledge is a "thing" which can be measured. This assumes that knowledge is an object which can exist by itself, which, as we have seen, is not necessarily correct. Add to this the idea that what you cannot measure, you cannot manage, and the mix becomes heady ... but then, the question to ask here would be ... is the term management apt when it comes to KM?

The answer to this measurement dilemma, though, can be simple ... we can measure something based on its manifestation. What is the manifestation of knowledge? Improvements in the way things are done. Great ... this is a nice, indirect way to measure ... after all, if there is no mechanism to directly measure something, then we use something indirect to measure it ... think dark matter! The only thing is, this indirect measurement must change in different scenarios. In other words, it must be something relevant to the context in which we are measuring, as I have written before!

Monday, July 28, 2008

Value of KM

Admittedly, there's plenty written about the subject. And we are still nowhere close to what could be a framework for measuring the value of KM. So why am I writing about this? I came across an interesting blog post by Jenna Sweeney about the measurement of Training ... and, looked at closely, Training and Knowledge Management are related, or so I have thought for a long time.

The basic point that Jenna is making here is that measurement must be done in the context of whatever you are measuring. And this is quite valid for the entire question of the value of KM. First of all, KM means different things to different people ... and if this is so, it is quite difficult to come up with adequate measurement norms. Leave aside the fact that even if one were able to come up with these norms, it would still be very difficult to measure, because of the basic structure of knowledge. And this is something I have written about before ... that when we are measuring something as nebulous as knowledge, it is a nice idea not to abstract it from its context and try to build up something generic, but instead to stick to things which are specific to the context of the measurement.

Wednesday, July 16, 2008

Art Fry and Social Computing

I was reminded of the story of how Post-Its were invented. Though this post is not about Post-Its, you might find the story an interesting read, if you look closely at it. From what I read ...

The marketing people did some surveys with potential customers, who said they didn't see the need for paper with a weak adhesive. Fry said, "Even though I felt that there would be demand for this product, I didn't know how to explain this in words. Even if I found the words to explain, no one would understand ..." Instead, Fry distributed samples within 3M and asked people to try them out. The rest was history.

The part about not being able to explain in words, and even if one found the words to explain, no one understanding, reminds me of social computing. Strange how one thing can lead to another, isn't it? This, to my mind, is the beauty of human thought. One doesn't know what thought might lead where. The interesting part here is that, like Post-Its, senior management usually doesn't see the need for sticky web-pages where people can scribble their thoughts. However, just give these pages to them, and people could come up with quite interesting uses. And the interesting thing is, these may not be restricted to just the usual things.

Why should a wiki be used only for maintaining project plans and communications, or for preparing presentations? Why can't a systems administrator create a wiki for maintaining help and FAQs for a new system? Or a sales guy create a blog to keep track of the orders he has closed this quarter, so that, for reporting, he doesn't have to go back asking for reports, but can just go to his blog and get the numbers from there? Or why can't just about anybody write down their objectives or targets for the year on a wiki page, and track their achievements against those targets in the wiki, so that come appraisal time, they could just send the link of the wiki to their boss (if one is feeling adventurous, that is ... otherwise, copy-paste and send it in an email!)?

The point I am trying to make is that, given the chance, people could come up with uses of social computing technology which were probably not even thought of. There are, of course, the usual, well-defined ways of using them, but these may be just a few among many.

Of course, if usage cannot be completely predicted, the next question that arises is whether anything like ROI can be predicted with any reasonable level of confidence. I don't think so. Of course, the question still remains whether one could tag ROI to something as intangible as social computing (simply because there is usually no causal relationship between the tools and the outcome ... the tools are the software, and the outcome occurs in the heads of people!). Though, of course, something which keeps coming back to me is that if a senior manager is to make an investment, surely they would need to make sure it is worth it. And this, to my mind, is where the catch lies. This ROI is to be experienced, not necessarily calculated, to begin with!

Saturday, June 21, 2008

This Book I Am Reading

These days, I am reading a book titled Shadows of the Mind ... written by Roger Penrose. This is a rather interesting book ... one that I would definitely recommend to anyone who is even remotely interested in human thinking. Though, of course, you would need to make sure you are at your most alert when reading it (using a language slightly closer to English would actually have been a wonderful idea ...).

Just so you know ... I am still on chapter 1. Though soon to move to chapter 2! Now, that would be an achievement (and if you read the book, you would quite agree with me!). The basic point of the part I am reading now is that there is the aspect of understanding "what needs to be done", and that of being aware of "why it needs to be done". What Sir Roger Penrose argues (to my mind, quite effectively) is that while the former can be easily captured by any intelligence, in the form of mathematical algorithms (I would stretch this to the hilt, and say something similar about documented information, or, if I may use the term ... explicit knowledge!), the latter, in other words awareness of what we are doing, and why it needs to be done to achieve a particular objective, is the tricky part.

And this is where I would extend the logic from chapter 1 of the book to the two aspects of Knowledge Management I deal with ...

Codification, which is my fancy word for documented information

Collaboration, which, to a lot of folks, is the "other" part of KM

And this is where I would like to make the point that while what some folks call KM 1.0 focussed on the former, it is the latter which is the trickier part. One of the points Sir Roger goes on to make ...

It also allows us to have some kind of direct route to another person's experiences, so that one can "know" what the other person must mean by a word ...

This is where I would like to bring out the importance of collaboration ... from the basic premise that there is something beyond the objective (I am using the term loosely here) nature of things, and this is where managerial imagination comes into the picture: to imagine an organization where this can be tapped into. And this is something a large part of web 2.0 technologies are focussing on.

This also reinforces the point that some aspects of Knowledge, and hence of Knowledge Management must remain beyond measurement, at least till such a time as we can generate a framework which is scientific, and can bring these into the scientific fold (though this is something which the book argues against ... something i would surely write about again).

Tongue in cheek ... there are always ideas relating to our field of work from domains which are not necessarily related. Something i have written about before.

Tuesday, June 17, 2008

Web 2.0 in the Organization

Dave Snowden has written a very interesting post ... Small one ... You could read it here. Very interesting thought. More often than not, we use things for stuff they werent even remotely intended for.

Something we saw yesterday ... I am in Bangalore right now, and i was in my room last night, with a few friends, knocking off a few Beers. As usual, the wall-mounted bottle opener proved inadequate for opening the Beer bottles. And, what did the guys open the bottles with? You wouldnt guess ... A spoon. Whoever invented the spoon would never have imagined that.

On a more serious note ... i was once working with a client. They were using enterprise software (read ERP), and interestingly, they were using a particular feature of the application. As it turned out, that was an undocumented feature (euphemism for bug), and when they upgraded the software ... what do you know ... the "feature" went away, and they were no longer able to do something they were able to do earlier. Nobody would have guessed they would actually have been using that.

Which is why i quite agree with Dave when he quotes ...

When I worked at IBM we were asked (in 1990) to 6Sigma our CICS development team. The gurus told us that the next release of CICS could only have 6 bugs (or APARs as we called them). This was ridiculous, but luckily a colleague ran a report and showed that IBM program products had extremely strong positive correlation of profitability with APAR rate. That is, the products with the most APARs were the most profitable. This is because great products, like CICS, get used for lots of things we didn't think of and for which we didn't test. Mediocre products only get used for what the tests cover. Bad products don't get used at all and so generate almost no bugs.

In all probability, you would be using things in ways which the guys who made them never even dreamt of. And this is something which i hold even when it comes to adopting technology ... especially in the web 2.0 world ... More often than not, you can roll-out some application to the users, and you would find them using these in ways you never were able to imagine in those requirements documents you had written. Which is why, i believe that especially with technology in the web 2.0 space, it would be wise to simply launch this in the organization, and wait and watch ... you would find over a period of time, usage emerging ... new, and in all probability, innovative usage for these tools. And, it is not in the interest of the knowledge managers, or the larger community, to restrict this usage.

In other words ... usage, and hence benefits would tend to be more emergent rather than being pre-defined when it comes to collaboration, or social software. This is a challenge especially to the ROI school of thought, because this very phenomenon would make it quite difficult to actually measure something like this. Remember ... ROI of spoon? While this is something we are all grappling with, the other side of the coin also is quite relevant, that is, how does management decide whether to invest or not, unless they can see benefits. Having said this, though, there is also the viewpoint that whether you like it or not, social computing is here to stay ... whether within or outside the firewall. More beneficial to adopt it, and see benefits as they emerge. Only thing is, most managers are not comfortable with the idea of something emerging over a period of time. What we dont realize is that most technologies do actually emerge. The internet wasnt invented ... sure, the technology was, but the usage ... thats something that emerged over a period of time. Same is true of web 2.0, too.

Emergent technology means looking at how people use it over a period of time, and then looking at how you would like to guide this technology into the business processes in the organization. Which again is something which, in my opinion, would happen sooner or later ... something i have written about before.

Tuesday, June 10, 2008

Of Measurement ...

Wikipedia defines Measurement as ...

The estimation of the magnitude of some attribute of an object ... (theres more, but this quite sums it up!)

Now, when it comes to measuring KM, we are not even sure what the object is, and what attribute of this object we are trying to measure. As such, there doesnt seem to be a direct mechanism for measuring the impact of KM, because the impact of KM is not on KM itself, but on some business processes. Now, this is what makes this so nebulous. The business processes vary from one part of the organization to another, and hence, the impact of KM on these also varies from one part of the organization to another. In this kind of a scenario, can there be a direct way to measure the impact of KM? I am not talking about measuring KM itself (i dont think that would make sense), because KM cannot be the end in itself.

I came across a post by Dave Snowden about setting targets for KM. Dave is spot on ... if you are setting targets for KM, you really havent understood KM. Especially the part where he says ...

The early abortive attempt involved things like requiring x documents contributed to a community of practice or similar measures. Net result there was meaningless material been published to achieve a target along with plagiarism in many cases.

I quite agree with the observation here. Having said this, there are two thoughts i wanted to add:

1. The idea of the software which gives thank you credits ... sounds like a nice idea. The crazy part, i think, is the part where these credits are encashed. I would look more at the possibility of generating social capital for folks who are earning these points. Something like "Featured Blogger of the Month" ... or, some such idea?

2. Having said that setting targets for KM is quite akin to taking the wrong road, the point is, that managers need to figure out the return on the money being invested in KM. Since there are no direct ways, we need to rely on indirect measures. Something i have written about here.

The way i see it ... the impact of KM shows up in the improvements it delivers in business processes, and the kinds of results it enables there. There is no direct measurement of KM which is possible ... or desirable.

Wednesday, November 21, 2007

KM -- Tool or Function

I was at the KM India Summit last week. Which explains the long time since the last post. Well well ... Traffic in Delhi can have that effect. As you can see, the who's who of the knowledge fraternity in India were there ... and talking. Dr. Rory Chase delivered some interesting insights. And, Dave Snowden was there ... via telephone. A little bit of a disappointment not being able to meet him, but he did deliver a talk which was very insightful (had read a bit of it on his blog, which should go to show the power of web 2.0 ...).

Well ... one theme which came out of the summit throughout, and something which you couldnt help observing was this ... More and more people were talking about KM being used to achieve a particular thing ... Which is the way it should be ... End of the day, KM cannot be the end in itself, but has to be a means towards a larger business goal. Having said this, there are two things which I thought need a little more reflection ... Not because I disagree with the idea of KM being a means to a larger business objective, but because the larger discussions threw up a few questions in my mind ...

1. It seems to me that in a lot of scenarios, KM is encroaching ... on the domains which used to be those of other functions in the organization. It could be production, or operations, or it could be quality, or it could be sales, or finance, or hr ... Its one thing that KM enables these functions, and another thing to have things being drawn from other functions, packaged together, and labelled KM. Having said that, it is also a fact that the business demarcations between functions are blurring as the world around us gets more and more multi-disciplinary. But, somewhere I think there is a little bit of confusion about where KM should fit into the jigsaw. Of course this would be different for different contexts, and for different problems, but KM cant be all things to all people.

2. Taking the previous point forward, the logical conclusion from this is ... KM can either be a function, or a tool. As a function, I think the definition of KM is blurred in the current applicability context, which leaves us with one option ... a tool. The question this then throws up is: if KM is a tool (and this is something which explains a lot of the things and practices taken up by KM practitioners), how does one measure a tool? Does this then mean that our efforts to look at measurement of the effectiveness of our KM efforts are misguided?

Thursday, November 8, 2007

Communities and Participation

A conundrum which a lot of us have been facing ... How do we know where our social computing efforts are headed? Something I have been thinking about for some time, and thought I would blog about some of my thoughts on this, basically with reference to online communities within the organizational context. One of the most important aspects of the social computing initiative in the organization could be the communities into which the folks out there organize themselves (ok, so they do need some kind of structure to guide, but not to enforce ...), and the way these communities are generating value. I am not talking about these communities generating thoughts, but generating value, which is the next logical step.

It would have been quite simple, if communities were like task-forces ... Come together for a particular project, complete the project, and over. This would lend these communities to measurement which could be performed by traditional project management tools. Only thing, communities are not task-forces. The question to answer before we can proceed towards measurement is, to my mind, why these communities are created. Why do we find that people organize themselves into communities? Is it because man is essentially a social animal? I dont think so ... Our social instincts can be satisfied by the world around us. Especially when we are talking about communities in the organizational context. The reason, therefore, has to be a little more complex. Actually, I dont think it is. People form themselves into communities for two reasons ...

1. Because they see value in the communities.
2. Because they can form communities, therefore they do.

I would think, in the organizational context, its the first point which plays an important role in the emergence of communities. Which means that if we are to define the ROI of communities, we need to do this from the perspective of the community members. With a difference ... First of all, we cannot have a cost and value calculation for this, so to this extent this is different from the traditional ROI calculations. To begin with, we could calculate, more in terms of value perceived versus value expected. Remember, communities are as effective or otherwise as their members see them to be. So, the starting point for any such measurement has to be the expectations of the members, and the perceptions of the members. I would think, over a set of people, the answers we get are going to be the aggregated opinion of the community about itself.

Thing is, you ask people, they are not going to reply in terms of numbers. How is a community performing at 60% any different from a community performing at 70%, for example? Its not. So, we need some proxies which can supplement these. Broadly, I think these proxies can be divided into three parts:

1. Membership

2. Activity

3. Interactions

As you can see, any of these dimensions alone would not be enough to determine how a community is shaping up. A large community may actually not be too effective, if these members are not really contributing to the cause of the community. On the other hand, the activity in the community (the number of people logging in regularly to read posts, the number of posts), and the interactions (the number of responses to posts, follow-up posts), alone would not be enough, because in a small community, these may not be reaching a sizeable part of the intended audience. This is, obviously, assuming these to be proxies for the real thing, because the real thing is something we cant really measure, at least not as of now. More on this as I work this out ...
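Just to make the idea of combining these proxies a little more concrete, here is a minimal sketch. The function name, the weights implicit in the min(), and the normalization target are all my own made-up assumptions for illustration, not any standard community metric:

```python
def community_health(members, active_members, posts, responses,
                     target_size=100):
    """Combine membership, activity and interaction proxies into a 0..1 score."""
    # Membership: how much of the intended audience has the community reached?
    membership = min(members / target_size, 1.0) if members else 0.0
    # Activity: what fraction of members actually do something (log in, post)?
    activity = active_members / members if members else 0.0
    # Interactions: are posts sparking responses, or landing in silence?
    interaction = min(responses / posts, 1.0) if posts else 0.0
    # No single dimension suffices, so score by the weakest link: a large
    # but silent community, or a chatty but tiny one, both end up low.
    return min(membership, activity, interaction)
```

The min() is simply one (deliberately pessimistic) way of encoding the point made above, that any one dimension alone is not enough; a weighted average would be a gentler alternative.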

Friday, October 5, 2007

Measurement and Business Processes

I got back from a trip to one of my favourite cities ... Hyderabad. And, when one talks about Hyderabad, then can thoughts of Biryani be far behind? I am continuing this from my previous post ... where I was writing about Frank Jania's comment on Luis Suarez's post ... about measuring deltas, and deriving the efficacy of KM from there on.

Two things which I think we need to keep in mind ...

1. The approach of measurement must be applied to specific business processes ... We cannot have a universal measurement (deltas of operating parameters over a period of time) for the efficacy of KM. The question that this brings up ... If you are starting a KM initiative in your organization, then what are the things you must consider to determine where the pilot should be done?

  • is the process customer-facing
  • are process participants geographically dispersed
  • is the process cross-functional
  • is this a line or support function
  • is this a cost centre
  • is this politically sensitive

These questions must be answered to determine whether a particular process in the organization is a suitable candidate for being the pilot.
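One way to answer these questions consistently across candidate processes is to turn the checklist into a comparative score. This is a sketch only ... the criteria weights below are hypothetical assumptions of mine, and every organization would weigh these differently:

```python
# Hypothetical weights for the pilot-selection checklist above.
# Positive weights favour a process as a KM pilot; negative weights count against it.
PILOT_CRITERIA = {
    "customer_facing": 2,           # visible business impact
    "geographically_dispersed": 2,  # the collaboration need is real
    "cross_functional": 1,          # knowledge must cross silos anyway
    "line_function": 1,             # line rather than support function
    "cost_centre": -1,              # harder to show top-line impact
    "politically_sensitive": -2,    # the improvement pie is already contested
}

def pilot_score(process_profile):
    """Score a candidate process; a higher score suggests a better KM pilot."""
    return sum(weight for criterion, weight in PILOT_CRITERIA.items()
               if process_profile.get(criterion, False))
```

For example, a customer-facing, dispersed, but politically sensitive process would score 2 + 2 - 2 = 2, and candidates could then simply be ranked.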

2. Be prepared to have people dispute the fact that KM is responsible for process improvement. For example, in any typical organization, there are usually a number of initiatives which are running, for example, TQM, Six Sigma, Lean, FMS ... And, all of these are claimants to being the root cause for process improvement. What this means is that the process improvement pie could be politically sensitive, and might need to be treated carefully.

Wednesday, October 3, 2007

Business Case for KM ... Continued ...

A very interesting post by Luis Suarez, where he is trying to make a case for social computing ... and, I think this logic can be applied to the entire concept of KM ... and, a comment on the post by Frank Jania can, indeed, take this entire discussion about measuring the value of KM one step forward.

I have been using the term ROI for measuring the value addition from KM, primarily for want of a better word. However, I think Luis makes a very strong point in favour of looking at ROI in a new way. This is necessitated by the fundamentally different nature of the intangibles we are working with today. The important point, though, is that while ROI, the way it is defined, may not be an apt measure for measuring the value of knowledge, this is no reason to not measure, because end of the day, the CFO would need to look at some numbers to determine whether the investment is worth it.

Frank takes the discussion a few steps forward, with the idea of deltas. I think this could be a good starting point towards an evolving concept for measuring knowledge, because this takes into consideration the fact that there is a baseline (today), and there is a point of measurement (the future). The only thing here is, KM is usually only one of the improvement initiatives which run in an organization, and hence, there would be many candidates who could claim that the improvement has happened because of them. Which means that there is no way to isolate the impact of KM from other initiatives. So, for example, it is not certain whether Frank would have been able to close the sale without collaborating. He might have, but then, we are conjecturing.
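The mechanics of the deltas idea are simple enough to sketch: record a baseline for a few operating parameters of a business process, measure again after the KM initiative, and report the change. The parameter names and figures below are made up for illustration ... and, as argued above, the delta by itself still cannot isolate KM's contribution from that of other concurrent initiatives:

```python
def process_deltas(baseline, current):
    """Return the change in each operating parameter, absolute and percent."""
    deltas = {}
    for metric, before in baseline.items():
        after = current.get(metric, before)  # unmeasured metrics show no change
        deltas[metric] = {
            "absolute": after - before,
            "percent": (after - before) / before * 100 if before else None,
        }
    return deltas

# Hypothetical operating parameters, before and after the KM initiative.
baseline = {"avg_days_to_close_sale": 40, "proposals_reused": 5}
current = {"avg_days_to_close_sale": 32, "proposals_reused": 12}
```

Here the sale-closure time drops by 8 days (20%), while proposal reuse goes up 140% ... numbers a CFO can at least look at, even if attribution remains conjecture.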

This, to my mind, is an essential issue which we need to address, because when we look at knowledge, there is no linear relationship between cause and effect, but rather, what I like to call circular relationships between multiple causes and multiple effects. And, we are not even close to being able to figure out these relationships. Having said that, I think Frank has come up with an interesting concept to take the thought process forward.

Tuesday, October 2, 2007

Social Computing and Decision Making

I am reading (rather, revisiting), a rather nice Book ... The Fifth Discipline ... And, there are a few things which I am seeing in a light in which I didnt quite see them the last time round that I read the book.

One of the concepts that Peter Senge talks about is the concept of Reinforcing Feedback. The idea behind this is that as a particular phenomenon occurs in a system, this works like a kind of a Pygmalion effect. This means that whatever is happening in the system, gets reinforced, or magnified in amplitude in the same direction. So, for example, a pattern of growth gets magnified towards greater growth.

What is the primary means for this happening? To my mind, the most important aspect of this is information flow. So, for example, if you have a good product, some people will buy it, and will generate positive word of mouth, and because of this more people will buy it, and so on ... No matter what we are referring to, the primary reason for this, to my mind, is information flow.

While we have been living in the information age for some time now, today is qualitatively different. Today, with the emergence of web 2.0 technologies, the flow of information has found uncountable new channels, and information today flows in directions, and using mechanisms which were unthinkable even a few years back.

What this implies is that the adoption of web 2.0 would tend to increase the magnitude of amplification for reinforcing feedback. What this means is that companies had better be prepared to make sure they dont goof up. Any slip-up would get circulated across the world, over the blogosphere, or in online communities, and using numerous other forms.
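The compounding at work here can be sketched in a few lines. This is a toy model of reinforcing feedback as described above, not anything from Senge's book: word of mouth, good or bad, converts (or deters) some fraction of the current buyers' audience each period, and web 2.0, on this argument, simply makes that fraction larger:

```python
def word_of_mouth(initial_buyers, sentiment, amplification, periods):
    """Compound growth (or decline) of buyers driven by information flow.

    sentiment: +1 for positive word of mouth, -1 for negative.
    amplification: fraction by which each period's word of mouth
    grows (or shrinks) the buyer base -- a made-up assumption.
    """
    buyers = float(initial_buyers)
    for _ in range(periods):
        # Whatever is happening gets reinforced in the same direction.
        buyers *= (1 + sentiment * amplification)
    return round(buyers)
```

With 100 buyers and 10% amplification, two periods of positive word of mouth give 121 buyers; two periods of negative word of mouth leave 81. Raise the amplification (faster information flow) and the same slip-up compounds far more quickly.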

Now, there is the question of ROI on Social Computing. This is something which is still highly nebulous, and as Luis mentions in his post, social computing (or KM for that matter) doesnt necessarily lend itself to ROI calculations (one of the reasons being the structural differences between material, and knowledge). However, this could be a reasonably convincing argument for adoption of social computing (its anyway coming at you, so why would you want to ignore it? And, even if you did, its better for the organization to have a presence in the web 2.0 world, so it can make sure its voice is heard, and the people commenting on the company dont have a field day!).

Monday, October 1, 2007

Measurements and KM

The latest post on Jay Cross' blog is a rather interesting one. Theres a point he makes ...

"In brief, you measure the impact of informal learning the same way you measure the impact of any investment in the organization: by its outcomes. Are people able to do their jobs? Are they challenged? Are they working in top form?"

In a nutshell, the point here is, that unless we look at KM as an investment, we are looking at the wrong side of the picture. Whether people are able to do their work better or not will not show up on the financial statements. Though, one could argue that it would, over a period of time, but then, there is no way that a cause and effect relationship can be built up, when they are so separated by space and time.

Which is why, it is important, as I have written earlier, that KM, and hence Knowledge, must be measured within the context of business results, rather than in a vacuum. Of course, this requires an understanding of two things: One, that training and KM are investments, which should be looked at accordingly, and two, that business outcomes can be related in some way or the other with knowledge within the organization. This would actually follow from this page I came across.

Sunday, September 30, 2007

Generating Knowledge

While this is a question that has been asked over and over, without us being any closer to the answer, nevertheless, it is important to develop an understanding of how people generate knowledge. For, unless we understand this, how can we, or today's organizations, find out how to maximize this, how to facilitate knowledge creation? While I am not trying to come up with a solution, I am trying to develop a framework. I would think its a simplistic one, but this is not meant to be an elaborate one. Also, I believe that if you take a complex concept, and put it into a particular context, then it becomes quite simple, almost like describing something with an example.

Of course, this doesnt have a starting point (a circle doesnt have one), but we must begin somewhere. So, we will begin with an individual. This individual, through reading, training, or observation, develops an understanding about something. This understanding is placed in a context (which actually could be quite different from the original context, in which it was developed, for example, classroom versus on-the-job) by actually doing, and experiencing. This is the first step.

Once this individual knowledge has been developed, it is shared. This sharing is done in a number of ways, like storytelling (this is getting to be a bit of a fad), or at times, even unknowingly. For example, others might actually learn something simply by observing you, or seeing what you do in a particular scenario, and so on. This sharing of knowledge is by itself generating new knowledge, in the form of a shared understanding, or a shared model. This is the second step, and leads to a repetition of the first step.

I admit that this is a rather simplistic model, but it takes care of a few things. To begin with, it takes care of the concept of generating knowledge by sharing. Also, it takes care of the multitude of vectors which facilitate the movement of knowledge in directions and ways which are not fully understood (though it doesnt quite explain them). The drawback is that it doesnt take into consideration the nature of the knowledge being considered, whether explicit or implicit. This model assumes that the mechanism of assimilating and sharing varies based on the classification, but not the basic process.

I believe this model can be used to develop more complex models for the generation of knowledge in the organization, and I look at this only as a first step. Of course, if you look at this closely, you will be able to discern the similarity with the SECI model, but the fact is, this is focussing on the individual mechanism of learning, while the SECI model describes the mechanisms for transformation of knowledge from one form into another. Needless to say, this model must be looked at in conjunction with the SECI model, to develop a fuller picture.

All thoughts, feedback, doubts, clarifications ... more than welcome!