Saturday, 31 October 2015

In this excerpt from the podcast between Keith Swenson of Fujitsu North America and Peter Schooff, Managing Editor, they discuss how to identify robust processes, how to tell if a process is not robust, and what to do about it.
PETER: So just to start it off, how would you define a robust process?
KEITH: It's probably better if I start by defining what it isn't. A lot of people think that a robust process is something that, when you start it, is completely opaque: it's fire-and-forget, it's 100% reliable, it does what it's supposed to do. The problem is that that kind of 100% reliability is only possible in isolated and idealized environments, and a business process has to deal with the realities of your organization, so it can't really run that way. So when I define a robust process, I define it as a process that, when it works, you have confidence that it did it right, and when it can't work, it tells you clearly what the problem is and allows itself to be restarted in some sort of controlled manner so that you can eventually get to completion, and to success.
PETER: That makes sense, and when you start out by saying what it isn't, that basically tells me there are a lot of non-robust processes out there. Can you give me some examples of non-robust processes that you've seen?
KEITH: Well, I can. The thing is that many of these business processes are designed by programmers, and programmers are used to working in more or less idealized environments. When I design a program to run on a particular system, I can code that program, I can test it, I can get all the bugs out, and I can make it run on that one system very, very reliably. When we work with distributed systems, it's a little bit different. One example I have is from a customer I was working with recently, and as many of these things go, it was a legacy modernization project. This was for human resources, and they wanted to onboard employees effectively. They had six legacy systems out there that did various things like allocating a badge to someone, or setting up an account, or whatever it is, and they didn't want to rip and replace all of those; they wanted to leverage them. So they built a master process: it started, you entered all the information about the person, and then it would call off to these other processes. But the reason it wasn't reliable was that as you pass the information from one system to the other, there is a chance that one of those systems may fail. It simply does not guarantee that everything worked right.
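The controlled-restart idea behind a robust master process can be sketched as follows. This is an illustrative example with made-up step names, not code from the project Keith describes: on failure it reports clearly what broke and where to resume, instead of silently losing track.

```python
# Illustrative sketch: a master onboarding process that calls several
# hypothetical legacy steps in order, records the first failure, and
# returns enough information to be restarted from that point.

def issue_badge(employee):
    pass  # placeholder for a call to a legacy badging system

def create_account(employee):
    pass  # placeholder for a call to a legacy account system

def enroll_payroll(employee):
    raise ConnectionError("payroll system unreachable")  # simulated failure

STEPS = [issue_badge, create_account, enroll_payroll]

def onboard(employee, start_at=0):
    """Run steps in order; on failure, report which step failed and
    at what index, so the process can later be resumed there."""
    for i, step in enumerate(STEPS[start_at:], start=start_at):
        try:
            step(employee)
        except Exception as exc:
            return {"status": "failed", "step": step.__name__,
                    "resume_at": i, "error": str(exc)}
    return {"status": "complete"}

result = onboard({"name": "Ada"})
# result identifies the failed step and the index to restart from
```

Once the payroll system is back, calling `onboard(employee, start_at=result["resume_at"])` picks up where the process stopped rather than re-running the steps that already succeeded.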
PETER: Gotcha. So you have a failure: what is the key to hurdling this failure, getting over it, and fixing that process?
KEITH: Yes, it's important to remember that failures of this type are always going to happen, and you need to design around them so that you can recover. You need to expect them and to recover from them. When I see these highly fragile processes, I often think of a Rube Goldberg machine. Are you familiar with Rube Goldberg?
PETER: Sure, that's one of those extremely complex machines that does something basically trivial, like flipping a light switch, correct?
KEITH: That's right. They're very humorous, and they involve a bunch of very complicated steps; if every step completes correctly, you end up getting your successful result. What is humorous about that situation is that it is very easy to see how every little step could fail: if any little thing went just a bit wrong, the whole thing would break. Many of these business processes are being designed the same way. They are designed either by business people without a lot of experience in the area, or by programmers who are used to working in idealized environments, and they hand information off. System A passes the information to System B, which does some processing on it and passes it back, which then passes it to System C, and it looks a little bit like a Rube Goldberg machine. If everything works perfectly, it works. But if any little thing fails along the way, the whole thing stops, and the system may not be able to recover from that.
PETER: Right. So how would distributed systems handle something like this? And can you also touch on microservices?
KEITH: Oh, those are two good questions. The first is that because it's a distributed system, let me talk a little bit about reliability in a distributed environment. In a localized environment, we make use of things called transactions, a software engineering concept which allows you to guarantee consistency. If you start in a consistent state and you start a transaction, your program can make some updates to the data, and either all of the updates will be done or none of them will be done. This is very, very important to make sure that you go from one consistent state to another consistent state; you never end up with half of it written and half of it not. So we understand transactions very well in the software engineering field, and there's been a lot of research into distributed transactions. That's where you have several systems all involved in one transaction so that they all succeed or they all roll back. The problem is that as you include more and more systems, the cost goes up exponentially: the amount of memory needed, the amount of time needed, the amount of compute resources. As you add systems, you will reach a point where you simply cannot add another system to the transaction, because it becomes too expensive and your systems slow down far too much.
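The all-or-nothing behavior Keith describes can be sketched with a local transaction. This is an illustrative example (the table and values are made up) using SQLite, whose connection object commits on success and rolls back on error:

```python
# A minimal sketch of transaction atomicity: a simulated crash between
# two updates, after which neither update survives.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("crash before the second update")  # simulated failure
except RuntimeError:
    pass  # the partial update was rolled back automatically

# The state is still consistent: no money has left alice's account.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

As Keith notes, this guarantee is cheap on one local database but becomes prohibitively expensive as more independent systems are pulled into a single distributed transaction.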
PETER: That makes sense.
KEITH: You also have the problem that you can end up with deadlocks. As you have these open transactions spread through your organization, another transaction that comes along and needs some of those resources can be blocked, and that risk also increases exponentially. So what we actually do is make small islands of reliability. We make a small group of servers that is going to be reliable and provides a function such as ERP. Then you have another set of servers that is reliable and provides a function such as payroll, or something like that. Between these islands of reliability, programmers sometimes want to link the islands with a reliable messaging system. This is why I was invited to speak at the robustness in BPM seminar held in Monterey this year, to talk about how you can make large systems reliable. I won't go into a lot of the details here, but the fact is that those reliable environments work very well on their own; if you attempt to bridge them with reliable messaging, however, it doesn't work. People who design business processes, or at least the system architects who implement those processes, need to understand that you can't just assume you have a reliable system.
PETER: Right. So you're talking about basically creating some kind of feedback loop, correct?
KEITH: That is correct. That's where I see it happening. I worked on a different system where we had a problem for a large international bank, and the system was handling requests quite easily. There were hundreds of people using the system, but every now and then it would get very, very slow. It would start to take up tremendous amounts of memory and lots of CPU time and would just escalate out of control. What was happening was that the programmers deep inside the system had decided that if a particular system call failed, it was probably because the other system was offline, so they would just try again. Instead of giving up, it would keep retrying. Down deep in the code there were these loops trying, trying, trying again, and we actually figured out later that something was coded wrong: it was never going to succeed. But it kept trying, so a transaction that normally took fifty milliseconds was now taking twenty, thirty, forty seconds. Instead of having to handle only one or two transactions at a time, the system was handling fifty or sixty; memory blew up, everything slowed down, and everybody was unhappy. So the principle we took from this was: when you see a failure, immediately throw an exception and stop processing. Do not continue processing when you hit an error. This is the concept of fail fast, and fail fast is critical for large systems to be stable when they encounter an error. As soon as you hit an error, you throw an exception, you roll everything back to the beginning, and you record the fact that the error happened. You roll the transaction back so that you're in a consistent state, but you remember the error happened and you somehow surface that for users to be able to discover and deal with.
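The fail-fast principle Keith describes can be sketched in a few lines. This is an illustrative example (the function and request names are made up): instead of retrying a doomed call in a loop, the failure is recorded and propagated immediately so the caller can roll back.

```python
# Sketch of fail fast: on the first error, record it where users can
# discover it, then re-raise so processing stops at once.
error_log = []

def call_remote_system():
    raise ConnectionError("system offline")  # this call will never succeed

def process_fail_fast(request):
    try:
        call_remote_system()
    except ConnectionError as exc:
        error_log.append({"request": request, "error": str(exc)})  # surface it
        raise  # stop immediately; do NOT loop and retry here

try:
    process_fail_fast("txn-1")
except ConnectionError:
    pass  # the caller rolls the transaction back to a consistent state here
```

Contrast this with the bank's original code, which looped on the failing call: each request then held memory and CPU for tens of seconds instead of milliseconds, and the backlog of in-flight transactions grew until the whole system degraded.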

Process Center of Excellence

Establishment and refinement of a “Process Center of Excellence” (PCoE) is one of the initiatives we’re currently working on with our clients.
Though many companies have a process management team or teams within the enterprise, frequently there are capability gaps which result in sub-optimal value delivery of the function. Our role often involves assessing current PCoE maturity levels then developing a Blueprint and associated Roadmap to Success.
Key questions commonly requiring attention are:
  • Is there clear visibility and linkage to enterprise strategic value drivers?
  • Is there demonstrable alignment between business management, process management and technology management, ideally including business executive sponsorship?
  • Does the organization have a command of the core value processes, both customer-facing and business management-focused?
  • Has the organization adopted a proven development methodology, typically involving a continuous improvement-aligned agile/iterative framework coupled to extensible best practices?
  • Is there an excellence program in place to ensure repeatable excellence in design, development and deployment of process solutions (technology-based and other)?
  • Are there mature “adoption management” practices in place to ensure that desired approaches gain durable traction?
  • Are there process-centric metrics in place which demonstrate value delivery and awareness?
  • Have near-term and mid-term roadmaps been established which persistently elevate process competencies and map out process improvement priorities?
 While other areas often warrant attention, focusing on these core topics produces a clear understanding of strategic priorities, current capabilities and an appropriate roadmap to success.

Friday, 30 October 2015

5 Essential Principles for Understanding Analytics

I’m convinced that the ingredient for the effective use of data and analytics that is in shortest supply is managers’ understanding of what is possible. Data, hardware, and software are available in droves, but human comprehension of the possibilities they enable is much less common. Given that problem, there is a great need for more education on this topic. And unfortunately, there aren’t a lot of other good options out there for non-quantitative managers who want to learn about analytics. MOOCs and traditional academic courses mostly focus on methods. And while there are lots of executive programs in “Accounting and Finance for Nonfinancial Managers,” there aren’t any that I know of on “Analytics for Non-Quantitative Managers.”
I have designed or taught in analytics programs for managers at Babson, Harvard, MIT, Boston University, and University College Cork, so I have some opinions about what content ought to be included. If you’re a potential consumer of programs like these, make sure the one you sign up for has the components you will read about below. Or do some targeted reading in these areas.
Identifying and Framing the Analytical Problem: A proper quantitative analysis starts with recognizing a problem or decision and beginning to solve it. In decision analysis, this step is called framing, and it’s one of the most critical parts of a good decision process. There are various sources that lead to this first step, including pure curiosity (a manager’s common sense or observation of events), experience on the job, or the need for a decision or action.
At this early stage, the analytics haven’t yet come into play. The decision to forge ahead with some sort of analysis may be driven by a hunch or an intuition. The standard of evidence at this point is low. Of course, the whole point of a quantitative analysis is to eventually test your hunch against data. (And that’s the key difference between analytical thinkers and others: they test their hunches with evidence and analysis.)
What managers need to focus on in the framing stage is that they have systematically identified and assessed the problem, and that they have considered alternate framings. It may be helpful to discuss the issue with quantitative analysts who have a sense of how alternative framings might be pursued. (If you want to know more about framing an analytical problem, I’ve written a whole chapter about it in my book, Keeping Up With the Quants.)
Working with Quantitative People: Speaking of quantitative analysts, it’s really important for managers to establish a close working relationship with them. You have the understanding of the business problem; your “quant” has the understanding of how to gather data on it and analyze it. In order for this relationship to work, each party needs to reach out to the other. You, as the largely non-quantitative manager, need to help your analyst understand your problem fully, perhaps by having them work in the relevant area of the business for several days. Your quant needs to communicate with you in normal business language, engage with your issue, and work at it until you’re satisfied. Your analyst may not be particularly good at interfacing with managers, and you may be intimidated by quantitative analysis. But somehow you need to find common ground.
Understanding Different Types of Data and Their Implications: These days, you’ll hear a lot about big data and how valuable it can be to your business. But most managers don’t really understand the difference between big and small data, and they use the term “big data” indiscriminately. How you refer to your data doesn’t matter much, but it’s important to know about the differences between various types.
Small data—which, despite its name, is extremely useful—is data that’s of manageable size (able to fit on a single server), that’s already in structured form (rows and columns), and that changes relatively infrequently. It’s most likely to come from your organization’s transaction systems such as financial systems, CRM, or order management. This type of data has probably been analyzed for many years. It doesn’t get much press these days, but it’s essential for knowing your customers, understanding your company’s financial performance, and tuning your supply chain.
Big data is unruly. It’s too big to fit on a single server, relatively unstructured, and fast-moving. It’s more likely to be about the world outside your business transactions—what your customers and prospects are saying on social media, what they’re telling your call center reps, and how they’re moving around your store. Big data offers great opportunity, but it’s often a challenge to get it into a structured form that can be easily analyzed. If you want to pursue it, your quant partner probably needs to be a data scientist.
Understanding Different Types of Analytics and Their Implications: For many years, the vast majority of analytics were descriptive—simple reports or dashboards with numbers about what happened in the past. But that’s not the only type out there. Predictive analytics use statistical models on data about the past to predict the future. Prescriptive analytics create recommendations for how workers can make decisions in their jobs. Most managers need some urging to adopt the less familiar predictive and prescriptive analytics, which are typically far more valuable than the descriptive variety. A few years ago, I did a video explaining the difference between descriptive, predictive, and prescriptive analytics that will come in handy for managers who need a refresher. These are still very important, but now I am increasingly focused on a new type: automated analytics. These analytical decisions are made not by humans, but by computers. Many common analytical decisions, such as those about issuing credit by banks or insurance policies, are made entirely automatically. They portend a lot of change in how we organize and manage analytics within firms, and may even pose a threat to many decision-makers’ jobs.
Exploring Internal and External Uses of Analytics: Finally, managers need to be aware of the distinction between internal and external uses of analytics. Historically, analytics were used almost exclusively to support internal decisions. That’s still useful, of course, but now companies are also using data and analytics to create new products and services. And it’s not just the digital players you would expect, like Google and LinkedIn; mainstream firms like GE, Monsanto, and several large banks are pursuing such “data products.” This is a new option for organizations that managers need to understand and explore.

Getting a grasp on these fundamentals won’t make you an analytics expert, but it will make you a more effective consumer of this important resource. And in today’s business world, not knowing about analytics can be dangerous to your and your company’s prosperity.

The Winning Formula to a Successful Change Team

I’ve been asked by many of my clients: "What’s the best way to pull a successful change team together?" Now, there are many views on this out there but I like to keep it simple! As Bruce Lee once said: "simplicity is genius". 
Firstly, you need to look at the type of person you want to entrust with improving your organization – Attitude is everything! A positive, can-do attitude wins every time for me over any number of MBA qualifications, so let’s take a look at teams…
There are lots of definitions as to what makes a team. One of the better definitions states: “… a team is a group of people who co-operate and work together to achieve a goal in a way which allows them to accomplish more than individuals working alone could have achieved.”
Ideally, they will all be working together towards commonly understood, shared and achievable objectives.

Obviously, when a new team is formed, it is very rare that it will perform effectively from the start. There is a development process that each successful team must go through. This development process has three distinct stages:
  • Chaotic
  • Formal
  • Mature

Chaotic Stage

The chaotic stage is the very first stage that a new team goes through. As the name suggests, a team in the chaotic stage of development will exhibit typical characteristics. These include:
  • Inadequate planning
  • Not enough time given to setting clear, agreed objectives
  • Making too many assumptions, particularly about objectives, targets and team roles
  • Underestimating problems
  • No clear procedures, agreed ways of working or development of understanding
  • Poor communication within the team (during discussions, some team members will dominate while others will not be able to get their ideas heard)
  • Everyone tending to talk at once (this, coupled with poor listening skills, will lead to ideas that are lost)
  • Leadership is either non-existent, unclear, too heavy-handed or not accepted by the rest of the team
It is easy to see that teams in the chaotic stage will fail more often than they succeed. However, these failures will be ‘explained away’ and ‘glossed over’ instead of being analyzed objectively.
Chaotic teams will be fun for a while. Team members will tend to overcome uncertainty by diving headlong into the task without really giving thought as to what they are doing or how they are working together.
Eventually though, the team will begin to react against the chaotic stage by becoming more formal in its approach. There is a danger, however, that the team will overreact and introduce formal procedures that are too rigid and restraining.

The Different Stages of your Change Team
Now we’ve looked at the early stages of forming a team, let’s discuss the ‘formal’ and ‘mature’ stages:

Formal Stage

The formal stage is likely to be more successful, but this success will be limited by inflexibility. The team at this stage of its development tends to be too regimented and fails to utilize the full capability of all its members. This is due largely to over-rigidly defined roles within the team. Probably the worst case of this is that far too much dependence is placed on the leader to co-ordinate, plan, and make decisions, as well as to control.
Typical characteristics of formal teams include:
  • Strong likelihood of overreacting - becoming too formal in procedures and roles
  • Strong leadership often seems the answer to the problems of the chaotic stage - the leader is often criticized for failing to be strong enough during the chaotic stage
  • Formal/specific roles (timekeeper/secretary etc.)
  • Poor flexibility
  • Not making use of capabilities of members

Mature Stage

Gradually, the team begins to develop out of the formal stage and starts to take ‘liberties’ with its own procedures without slipping back into chaos. This progress to maturity is not guaranteed, however.
If the team rebels against the rigidities of the formal stage too early, it can easily slip back to the chaotic stage.
On the other hand, some teams get stuck in the formal stage and never quite reach the mutual understanding and trust needed to move to maturity.
The breakthrough to the mature stage usually occurs when the team realizes that some parts of their formal procedures are of no use to the particular task they are doing. The team cuts corners and finds it can cope.
Typical characteristics of mature teams include:
  • Procedures for objective setting, planning, discussions etc. that are agreed and based on the task or situation
  • Procedures that are flexible rather than rigid
  • Team roles that are more relaxed, with members contributing on the basis of what they can offer in a specific situation, not according to rigidly defined responsibilities
  • Good communication within the team (ideas and suggestions are given freely, listened to, and built on)
  • Leadership style is more ‘involving and participative’ as the situation dictates
The team in its mature stage of development is likely to be very effective.

Getting your team to continue to perform
In one of my recent posts I discussed what makes a successful change team. So, you've formed a team and you know it will go through the chaotic, formal, and mature stages, but how can you get it there and ensure it continues to perform at a stellar level?

Team Reviews

To move from the chaotic stage to the mature stage, the team must build on its experiences. It is commonly believed that you learn from experience. Actually, you gain understanding from reviewing and thinking about your experiences. You can then draw appropriate conclusions and plan to modify your behavior in the future.
You learn by doing and understand by reviewing

The Learning Cycle

There is a procedure for conducting a team review. It is a skill that should be learned and practiced.


Start by analyzing your team results:
  • Did you achieve the task?
  • What aspects of the task did you do well?
  • What aspects of the task did you not complete satisfactorily?
  • Did individual roles, contributions, activities and attitudes benefit the team?
REMEMBER, your analysis should be factual and honest.
Analyze why you performed the way you did. You need to shift the focus of the review from ‘what happened’ to ‘WHY it happened’! Your analysis should cover both task- and process-related aspects: how your team approached the task, and how you as team members worked together.
Having focused on the causes you can now identify what strengths you can maintain and develop and what weaknesses you need to overcome. Again, this should include both process and task-related aspects of your performance.
It is important at this stage that you adopt the ‘Pareto Principle’. Try to concentrate on the ‘significant few’ factors or weaknesses - the ones that are causing you the most problems. Once you have eliminated these, you can then move on to the lesser concerns.
REMEMBER, do not try to change everything immediately. Instead, use the Continuous Improvement principle of aiming for steady, continuous progress rather than risky wholesale change.
Plan for improved performance in the future.
Try not to forget, a good review is just as concerned with the future as it is with the past.
You are not carrying out a review as a ‘post-mortem’; you are attempting to understand what you have done in the past so that you can improve in the future.
Having carried out your review you can develop a plan which:
  • Maintains and builds on the strengths of the team
  • Overcomes the weaknesses identified
Most importantly, your plan for improvement should be:
  • Feasible
  • Practical
  • Understood by everyone
  • Able to identify who will do what
  • Clearly directed and agreed by all for the next task
This completes the learning cycle.

Thursday, 29 October 2015

Stay hungry...Stay foolish. Amazing Steve Jobs Speech at Stanford

Agile Development: The Three Artifacts of Event Processing

TIBCO BusinessEvents

“Digitalization” is the impact that inter-connections between people and technology are having on organizations. France, for instance, went from 14% of its population connected and online in 2000 to more than 85% in 2014, and smartphone penetration rose from 25% in 2012 to 50% in 2015. This phenomenon is still accelerating because we are now connecting things to the digital ecosystem.


But what is the impact of this phenomenon exactly? The most notable and disruptive effects are the increasing speeds at which companies need to interact with customers, react to market conditions, and differentiate through innovation. Digital communication channels are nearly instantaneous, and customers will disengage from providers that cannot keep pace with the speed of interactions. Machine data is now delivered in sub-seconds, and a company’s key to success is its ability to assimilate, understand, and react to it faster and more intelligently than others.
Today’s technology yields multiple ways to achieve this speed. However, one absolute requirement for this transformation is AGILITY. With the speed at which new digital companies innovate and at which customers can now disengage, the normal delivery cycles of months or years for large monolithic business applications cannot continue.
One way to gain the necessary agility is with event-driven architectures using products like TIBCO BusinessEvents as the foundation. There are three artifacts that make event processing a vector for agility: events, data context, and rules.


An event is the starting point, a sign something has happened. In legacy transactional systems, transaction start, abort, or commit are events. In SOA architectures, each service call is an event. In the Internet of Things, each action taken by a person becomes an event, and sensors generate information events. Events are also aggregated from heterogeneous sources. When you leverage your legacy investment and use it to enrich a new source of data, you build intelligence.
A payment running through the traditional transactional system is the action of moving money from a buyer to a seller. However, if you consider a payment as an event, a much wider variety of opportunities is suggested. The payment event includes where the payment was made, to whom, how much, and when—key data that, in addition to triggering the payment transaction, can also serve multiple parts of the business: fraud prevention, marketing, logistics, and more.


Data context, information you can set around the processing of events, enables reacting differently—more appropriately—to the same event:
Event: Customer complains about bad service.
Context: Customer has high loyalty score.
Action: Make compensation offer A.
Event: Customer complains about bad service.
Context: No history.
Action: Send service guarantee email.
Remember, the time between event and action must be kept to a minimum; typically, the read/write times of big databases cannot keep up. Rather, data must be maintained in memory for immediate availability. A key characteristic of good event processing solutions like TIBCO BusinessEvents is a native combination of both events and contextual data in the same development and runtime environment, allowing both iterative and parallel evolution.
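The event-plus-context pattern above can be sketched in a few lines. This is a generic illustration, not TIBCO BusinessEvents' actual API; the customer IDs and loyalty threshold are made-up assumptions. The contextual data is held in an in-memory store so the action can be chosen without a slow database round trip:

```python
# Illustrative sketch: the same complaint event triggers different
# actions depending on in-memory contextual data about the customer.
customer_context = {  # kept in memory for immediate availability
    "c1": {"loyalty_score": 92},  # customer with a high loyalty score
    "c2": {},                     # customer with no history
}

def handle_complaint(customer_id):
    """React to a 'customer complains about bad service' event using
    whatever context we hold for that customer."""
    context = customer_context.get(customer_id, {})
    if context.get("loyalty_score", 0) >= 80:  # assumed threshold
        return "make compensation offer A"
    return "send service guarantee email"

actions = [handle_complaint("c1"), handle_complaint("c2")]
```

The same event type yields two different actions, which is exactly the leverage context provides: the reaction becomes more appropriate without the event itself changing.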
A UML modeling language, such as that present in BusinessEvents, enables easy design, maintenance, and evolution of complex data and event models. In addition, contextual data objects usually have a life cycle, the accuracy of which can be challenging to maintain when it spans multiple systems. The ability to design event processing state models for these data objects eliminates this challenge and allows maintaining lifecycles in real time by collecting events from many disparate sources.


Rules are the final key artifact, bringing intelligence and action by correlating events and contextual data. Rules, composed of conditions and actions, continuously evaluate whenever an event arrives or the contextual data object changes. When a rule applies, its action is executed.
Rules are a key contributor to the agility we are after because they:
  • Enable you to act without having to grasp the entire problem. You identify and focus on one specific situation, and create the rule(s) required to deal with it.
  • Let you enrich or update event processing logic easily, simply by modifying existing rules or creating and deploying additional ones.
  • Determine and complete procedures efficiently by grouping rules according to scenario and assigning the right groups of people to create them. For payment events for example, one group can create fraud rules, another marketing rules.
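The rule structure described above, conditions paired with actions and grouped by scenario so different teams can own different groups, can be sketched generically. This is an illustrative example, not TIBCO BusinessEvents' rule language; the thresholds and event fields are assumptions:

```python
# Sketch of rules as (condition, action) pairs, grouped by scenario and
# evaluated whenever an event arrives.
rules = {
    "fraud": [  # owned, say, by the fraud team
        (lambda e: e["amount"] > 10_000,
         lambda e: f"flag payment {e['id']}"),
    ],
    "marketing": [  # owned by the marketing team
        (lambda e: e["amount"] > 100,
         lambda e: f"offer loyalty points to {e['payer']}"),
    ],
}

def on_event(event):
    """Evaluate every rule group against the incoming event and run the
    actions of any rules whose conditions apply."""
    results = []
    for group in rules.values():
        for condition, action in group:
            if condition(event):
                results.append(action(event))
    return results

payment = {"id": "p1", "amount": 25_000, "payer": "acme"}
fired = on_event(payment)
```

Adding a new business behavior means appending one (condition, action) pair to the relevant group; nothing else in the system has to change, which is the agility the article is after.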
With these three capabilities, TIBCO BusinessEvents helps you keep up with the pace of digital interactions. It further allows you to reduce the time needed for changes to the system, thanks to its unique ability to address both a technical audience, focused on the technical challenges of building low-latency, large-scale platforms, and a business audience, focused on generating value from their business expertise.
With TIBCO BusinessEvents, developers can expose rules that drive parameters in a business-friendly web interface, with Excel-like decision tables and rule templates that translate logic into business language. Business experts can then adapt behavior without waiting for a long development process. Compare this agility to classical procedural programming, where changes have to be evaluated across the entire process.
Watch this video on orchestrating events, data context, and rules to create agility—and a demo (starts at 5:00) of the business-friendly web interface in action.

Wednesday, 28 October 2015

2016: CIOs And CMOs Must Rally To Lead Customer-Obsessed Change Now

In the coming weeks Forrester will publish its annual set of predictions for our major roles, industries, and research themes — more than 35 in total. These predictions for 2016 will feature our calls on how firms will execute in the Age of the Customer, a 20-year business cycle in which the most successful enterprises will reinvent themselves to systematically understand and serve increasingly powerful customers.
In 2016, the gap between customer-obsessed leaders and laggards will widen. Leaders will tackle the hard work of shifting to a customer-obsessed operating model; laggards will aimlessly push forward with flawed digital priorities and disjointed operations. It will require strong leadership to win, and we believe that in 2016 CMOs will step up to lead customer experience efforts. They face a massive challenge: Years of uncoordinated technology adoption across call centers, marketing teams, and product lines make a single view of the customer an expensive and near-impossible endeavor. As a result, in 2016 companies will be limited to fixing their customer journeys.
CMOs will have good partners, though. As they continue to break free of IT gravity and invest in business technology, CIOs will be at their sides. 2016 is the year that a new breed of customer-obsessed CIOs will become the norm. Fast-cycle strategy and governance will be more common throughout technology management and CIOs will push hard on departmental leaders to let go of their confined systems to make room for a simpler, unified, agile portfolio.
Firms without these senior leadership efforts will find themselves falling further behind in 2016, with poor customer experience ratings impacting their bottom line. Look for common symptoms of these laggards: Poorly coordinated investment in digital tools, misguided efforts to invent new C-level titles, and new products with unclear business models.
We will begin publishing our predictions for the CMO and CIO roles on November 2nd in conjunction with our Age of the Customer Executive Summit. A steady stream of predictions will follow in the days after that. In the meantime, I'm providing a sneak peek at our predictions documents by identifying the top 10 critical success factors for winning in the age of the customer:
  1. Disrupt leadership: CEOs will need to consider significant changes to their leadership teams to win in a customer-led, digital market; CEOs who hang on to existing leadership structures simply to preserve current power arrangements will create unnecessary risk.
  2. Institute a customer-obsessed operating model: Companies that shift to customer-obsessed operations will gain sustainable differentiation; those that preserve old ways of doing business will begin the slow process of failing.
  3. Connect culture to business success: Those that invest in culture to fuel change will gain significant speed in the market; those that avoid or defer culture investments will lose ground in the market.
  4. Personalize the customer experience (CX): Customers will reward companies that anticipate their individual needs and punish those that have to relearn basic information at each touchpoint.
  5. Implement multidiscipline CX strategies: Companies that transform operations to deliver high-value, personalized experiences will drive a wedge between themselves and laggards just executing CX tactics.
  6. Operate at the speed of disruptors: Leaders will animate their scale, brand, and data while operating at the speed of disruptors; laggards will continue to be surprised and play defense in the market.
  7. Evolve loyalty programs: Companies that find ways for customers to participate with their brand and in product design will experience new and powerful levels of affinity; companies that try to optimize existing loyalty programs will see little impact to affinity or revenue.
  8. Convert analytics to customer value: Leaders will use analytics as a competitive asset to deliver personalized services across human and digital touchpoints; laggards will drown in big data.
  9. Master digital: Companies that become experts in digital will further differentiate themselves from those that dabble in a set of digital services that merely decorate their traditional business.
  10. Elevate privacy as a differentiator: Leaders will extend privacy from a risk and legal consideration to a position to win customers; companies that relegate privacy to a niche consideration will play defense and face churn risk.
As you can tell, we’re expecting 2016 to be another year of rapid change as firms learn to cope with and respond to empowered customers and agile competitors. The decisions companies make, and how fast they act, will determine whether they thrive or fail in the age of the customer.
To learn more, download our guide to the Top 10 Success Factors To Determine Who Wins And Who Fails In The Age Of The Customer. In-depth analysis on these themes and more will be available on our blog in the coming weeks.

Tuesday, 27 October 2015

Success breeds success with social collaboration

For any social collaboration initiative, one of the biggest challenges is making the leap from the early adopter stage to broad, cross-organisation use of the platform. Whether you’ve built your initiative on a foundation of viral, grass-roots adoption of a free tool, or have sensibly started small and focused, with a carefully identified use case and a group of enthusiastic pilot users, translating that success into new use cases, and engaging employees who may be resistant to changing the way they work, can be extremely difficult. It is often the point at which such initiatives peter out.

Often, the biggest missed opportunity is in failing to fully capitalise on that early success, to make sure you extract every bit of value from it. Some of this is about demonstrating credibility; if you can show senior leaders the tangible value that teams and individuals are getting from using the technology, it will help them to buy into what you are trying to do. Some of it is about publicity; finding ways to stay in people’s consciousness is a constant challenge, especially when day-to-day business problems inevitably pull their attention away. The more positive stories you can find to share with people, the more you can reinforce the idea that this change is here to stay; it’s not just another passing fad.

But perhaps more important is that sharing your success stories – the use cases for your platform that actually deliver value in a business context, whether through making people’s jobs easier, saving them time, or enabling things to happen that simply couldn’t have happened without their use of the social collaboration technology – allows people to understand WHY. This is probably the biggest challenge for people in adopting social collaboration; they simply don’t know why and how this is relevant and potentially valuable to them. Context is extremely important; firstly to show what collaboration means in the context of your particular organisation, given its industry and culture for example, and secondly to show what it means in the context of an individual’s particular role or in a particular team. 

Replication is your starting point – the more times you can replicate your early adopter use case across the business, the better. But more than that, you want to inspire people, to help them come up with their own ideas for how the technology could help them. A great idea is to build these success stories into your training courses, like Springer Nature (formerly Macmillan Science and Education) has done, and encourage people in that setting to discuss potential ways they could emulate this, or to come up with alternatives.

The final point I want to mention here is the advocate opportunity. Not only is it important to share your successful use cases far and wide, it’s also critical to highlight the individuals and teams involved in those successes. This is something that is often forgotten, but it can provide a fantastic boost to your efforts, as these early adopters are often your very best advocates, especially if they were not wholly behind you to start with. The more you can celebrate these teams across the organisation, and encourage them to share their experiences with their peers themselves, rather than you doing it as a third party, the more real and genuine they will come across. This is ultimately your goal; you want people to collaborate because it is worthwhile, and these individuals are the perfect spokespeople for that. Celebrate them well.

The Convergent Enterprise


Jeff Rauscher, Director of Solutions Design for Redwood Software discusses how companies can use their legacy mainframes to gather and process big data.
Gathering and processing strategic information for business can take a great deal of effort. For years, companies have depended on information technology (IT) to bring large volumes of business data down to a manageable and meaningful size. This year marks the 50th anniversary of one of the most trusted and reliable business technologies out there: the IBM System/360, the world’s first commercially available mainframe computer.
Business IT owes much of its success, as well as its basic structure, to mainframe design. It is robust technology that businesses will use well into the future. Earlier this year, Gavin Clarke, a journalist at The Register, reported that 96 of the world’s top 100 banks run S/360 descendants, with mainframes processing roughly 30 billion transactions per day.
Mainframe Backbone
Reliable mainframe technology is the backbone of many businesses, particularly financial organizations. However, when the mainframe co-exists with layers of distributed, cloud and virtual technologies—as it almost always does—it can be difficult to maintain process visibility and efficiently execute coordinated cross-system processes. As a result, business and IT are now at a crossroads. The challenge for both is to use innovative new technology alongside reliable older solutions to gather and process data faster and more accurately than ever.
Just last year Gartner predicted that “By 2016, 70% of the most profitable companies will manage their processes using real-time predictive analytics or extreme collaboration.” For some organizations, the question of exactly how this will happen still remains. For top performers, the answer lies in better engineering across the processes that support the business, bringing seemingly disparate elements together.
Automate for Convergence
Recent research from The Hackett Group found that “top-performing IT organizations focus on automation and complexity reduction as essential IT strategy elements.” The Hackett study goes on to explain that these top performers automate up to 80% more business processes than other organizations. As a result, these companies carry 70% less complexity in their technology, spend 25% less on labor, and use 40% fewer applications for every 1,000 users.
Automated processes guarantee a level of coordination, consistency and accuracy that’s impossible otherwise. By automating processes across diverse technologies, platforms and locations, IT organizations coordinate and tame complexity to realize immediate benefits from everything they have.
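The coordination claim above is easiest to see in a small example. The sketch below is a hypothetical job-chain runner in plain Python (the step names and functions are invented for illustration, not a Redwood Software API): it executes steps that could target different systems, stops on failure, and reports exactly which step broke so the chain can be restarted from there rather than silently continuing.

```python
# Hypothetical sketch of cross-system job-chain automation.
# The runner executes named steps in order; on failure it stops and
# reports which step failed, so the chain can be resumed from that
# point instead of rerun from scratch.

def run_chain(steps):
    """steps: list of (name, callable). Returns (completed_names, error)."""
    completed = []
    for name, action in steps:
        try:
            action()
        except Exception as exc:
            return completed, f"{name} failed: {exc}"
        completed.append(name)
    return completed, None

def extract_mainframe():  # e.g. pull a batch report off the mainframe
    pass

def load_warehouse():     # e.g. push results into the analytics store
    pass

def notify():             # simulate a failing downstream system
    raise RuntimeError("SMTP down")

done, err = run_chain([
    ("extract-mainframe", extract_mainframe),
    ("load-warehouse", load_warehouse),
    ("notify", notify),
])
print(done, err)
# ['extract-mainframe', 'load-warehouse'] notify failed: SMTP down
```

Because every step, whatever platform it touches, goes through the same runner, the coordination and error reporting are uniform — which is the essence of the platform-agnostic automation argument.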
Companies need to process more data faster for business intelligence, customer service and competitive advantage. At the same time, business leadership demands deeper and more detailed analytics.  The mainframe is a tremendous ally in the effort to manage so much information. So are myriad other technologies.
To get the most from a complex IT environment, organizations need platform-agnostic automation. With this single advantage, divergent IT converges to function accurately, quickly and in concert. It’s the best way to deliver value from your long-term investments while you build for the next 50 years.

Monday, 26 October 2015

The Business/Process/IT Partnership (Your Secret Ingredient For Process Transformation Success)

Over the past 20 years, I’ve been involved in many process initiatives, working closely with business and IT execs. And I’ve seen some efforts work while others go south. But it’s only this year that I so clearly understood why some organizations zoom ahead in process transformation while others stumble, go in circles, or abandon the chase. My own big “ah-ha” moment came at a large business process excellence event in 2012, when I looked across an audience of 500-600 process practitioners and realized there were very, very few CIOs, IT leaders, or even technologists in attendance. Then I remembered some past technology events when my references to process technologies, techniques, or methodologies drew blank stares. Why was that? It was because few process experts were in the room! That made me start wondering — what’s wrong with this picture?
More and more senior execs are embarking on business process transformation. These new changes are catalyzed by many things: new business models made imperative by a new market leader, new threats or opportunities because of rapid changes in technology, a much more customer-centric business strategy, aggressive global expansion plans, or something else. As a result, many organizations are working to become process-driven by 2020.
But . . . successful business process transformation requires an unprecedented level of collaboration among business owners, process teams, and IT, three groups not accustomed to working together. IT usually limits participation to technical topics or doesn't get involved. Process teams lean toward continuous improvement and can be technology phobic. And the business functions? While a small number of visionary execs seek transformation, most are content with incremental improvement that falls short of transformation, and others don’t know how to get started. These organizations may get their initiatives off the ground, but they usually have a weak partnership across the business functions, process teams, and IT, like the one depicted here:
Weak Business/Process/IT Transformation Partnership

In contrast, here’s an example of a strong partnership that has been forged across these three essential groups:
Strong Business/Process/IT Transformation Partnership

For business process transformation programs to succeed, the organization will need a strong (and most likely, a radically new) partnership among all three participants. The good news is that business-oriented CIOs can and will play a crucial role in process initiatives. And there are a growing number of examples in which business, process, and IT leaders are getting together to form multidisciplinary business process management teams.