Self Service or Selfish Service?

Is your self-service actually selfish service?

Personally, I love this message. All too often the operators of a service sit down and try to make it more “efficient” or “streamlined.” What they really mean is efficient and streamlined for themselves.

This leads to poor adoption and continues a long-standing abuse of the customer experience. The customer is left asking: What’s in it for me? Am I getting better, faster service?

In reality these self-service portals are an afterthought and aren’t truly integral to the service experience!

Read the full article here:

What do you think? Is self-service integral to your service experience? What value is added for your customers and teams through self-service?

Better Service Visibility = Better Delivery!

Service disruptions happen when change and release management activities collide.

How do we prevent these collisions from occurring in the first place?

Even the most sophisticated teams are subject to these problems. Why?

No matter how much planning and automation you have, there are still outages!

Now the service desk is getting hammered with calls and a VP is irate over not being able to reach his “key” system. No one is happy. The world is on fire!

We planned. We strategized… We have GREAT tools! We have GREAT PEOPLE! We AUTOMATE!!!

Why? Why me? Why us?

While you may have planned accordingly, followed the good practice handbook to the letter and thought you understood the decisions in the change advisory board (CAB), you still had collisions. Why?

Because you made decisions based on incomplete information.

There are MANY systems of record that hold critical information related to service delivery. That information is often not all in a single database — such as your ITSM system.

Vacation and business event information? It’s in your messaging system (Exchange).

Customer-specific case information? It’s in your CRM.

Release information? It’s in your ITSM system (ServiceNow).

If key data related to change and release decisions is not all in the same system, the effort to correlate it may be painful and time-consuming, but ultimately it is worth it if service is improved. Figure out how to get it correlated, even if that means a spreadsheet. Reduce the risk by knowing what is what.
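As a minimal sketch of what that correlation might look like, here is a hypothetical example (the event records, system names, and labels are illustrative, not from any real integration) that pulls date-ranged entries from several systems of record into one list and flags cross-system collisions:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical event record; in practice each source (Exchange, the CRM,
# ServiceNow) would be queried through its own API or export.
@dataclass
class Event:
    source: str   # which system of record this came from
    label: str
    start: date
    end: date

def overlaps(a: Event, b: Event) -> bool:
    """Two events collide if their date ranges intersect."""
    return a.start <= b.end and b.start <= a.end

def find_collisions(events):
    """Return every pair of overlapping events from different systems."""
    hits = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if a.source != b.source and overlaps(a, b):
                hits.append((a.label, b.label))
    return hits

events = [
    Event("Exchange",   "DBA team vacation",      date(2015, 3, 2), date(2015, 3, 6)),
    Event("ServiceNow", "ERP release 4.2",        date(2015, 3, 4), date(2015, 3, 5)),
    Event("CRM",        "Key-account escalation", date(2015, 3, 9), date(2015, 3, 10)),
]
print(find_collisions(events))  # the release lands during the DBA vacation
```

Even a toy correlation like this surfaces the collision before the change window opens, which is exactly the visibility a unified view provides.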

Video (1:56): a demonstration of a unified calendar view.

We built Kinetic Calendar to enable real-time visibility into key data from multiple applications. It’s more important than ever to be able to cross-reference data from those systems. Request a free demo here.

Be a Provider… NOT a Broker!

At Kinetic Data we’ve been talking for years about Service Integration and Management (SIAM) and building software products to enable Service Providers to deliver at scale. Understanding the SIAM concept has real value for enterprises looking to achieve successful delivery where service models are distributed across fulfillment silos and customer experience is of paramount importance.

Most Shared Service IT organizations have an understanding of the brokering concept with respect to infrastructure delivery. In this context, the brokering concept is often referred to as the Hybrid Cloud infrastructure model. In this model, Corporate IT is typically the central provider of infrastructure services, while the actual components making up deployed technology stacks live both internally (corporate data centers) and externally (partner-provided, Cloud-based data centers). Often, Corporate IT may involve many back-end partners in providing those infrastructure components.

At a high-level, the Service Brokering concept appears to solve challenges associated with delivering enterprise IT service in the complex world of today’s global economy. In this model, services are made up of component functions where fulfillment tasks are sourced to provider-partners responsible for delivering their individual part.  While this may seem like a broker model, the reality is that if you view things from the customer’s perspective, the “Service Broker” concept doesn’t make sense at all.  

When I think about my experiences with brokers, some are great and some are not.  Regardless of how good the broker, I’ve ended up (as the customer) having to directly interact with downstream providers to resolve issues related to the service I’ve procured.  I’ll spare the gory details, but offhand I can think of examples with healthcare, investments, house-buying and home repair that make up my experiences.

Each time an issue came up in the delivery of a complex service (such as a home purchase) and I had to get involved in solving it, it was time-consuming, costly and frustrating. More than once, I decided that regardless of how good the broker was in my initial interaction, I would not use them for the same service in the future, as it was easier for me to handle things directly with the downstream provider. That’s an anecdote for IT outsourcing if you are keeping score at home!

Ultimately, the underlying issue with all of these challenging “Service Broker” experiences I have lived through was the difference between my perception and the reality of the service model I was procuring.

As a customer, I expected an experience where the service being provided was truly integrated end-to-end regardless of who was doing the fulfillment. What I got was a disparate and distributed service experience that was not integrated and left me looking for an alternative provider for the future.

So, with respect to Enterprise IT and the idea of “Service Brokering”, think about:

  • A customer procures (requests or buys) a service and expects delivery of it, not just “part of it”.
  • That customer has an expectation (SLA) for that service with corporate IT. It’s not the customer’s responsibility to coordinate sub-contractor agreements (OLAs) between back-end fulfillers that comprise the component Sub-Services, nor is it in their interest to have any complexity added to their experience.
  • They don’t care if Vendor A is responsible for Sub-Service 1, and Vendor B is responsible for Sub-Service 2. All they want is simple access to the service and a great experience in its delivery.

If there’s an issue with a downstream fulfillment by Vendor B, it’s ridiculous to expect a customer to care about a missed OLA or further, to get involved in the resolution of a stalled service.  When they come to get service from Corporate IT, they expect a great experience by a Service Provider, not a Service Broker.

If you understand what goes into end-to-end service delivery where there is a focus on customer experience, Service Brokering is nothing more than marketing-speak: another attempt by industry vendors to re-label what already exists and sell it to you as “new.” The multi-sourced delivery model has existed for decades. It is not new, and there are real Service Providers out there who truly understand the value of Service Integration in driving excellent customer experience!

Remember: What matters most is customer experience. Be a Service Provider, NOT a Broker!

Congratulations to CareTech Solutions – Named Best in KLAS for Seventh Straight Year

CareTech Solutions was recently named Best in KLAS (again) for healthcare IT outsourcing by KLAS Research. The 2014 award marks the seventh consecutive year that CareTech has been named a Best in KLAS category leader in IT outsourcing.

As reported, “CareTech’s top ranking for IT outsourcing (Extensive) is included in the ‘2014 Best in KLAS Software and Services’ report, intended to help healthcare providers identify the best healthcare vendors across multiple segments based on clients’ feedback.” The award is based on input from healthcare professionals and clinicians at thousands of hospitals, clinics and other facilities.

Continue reading “Congratulations to CareTech Solutions – Named Best in KLAS for Seventh Straight Year”

Customer Satisfaction Soars at ATS with Enterprise Request Management

As noted here before (and here and here), enterprise request management (ERM) is a business-efficiency strategy that reduces service delivery costs while increasing user satisfaction. Combining a single intuitive portal for requesting any type of enterprise service with back-end process automation, the ERM approach simplifies request management for employees, accelerates service delivery, and ensures first-time fulfillment.

What does that look like in the real world? Continue reading “Customer Satisfaction Soars at ATS with Enterprise Request Management”

How a Healthcare IT Service Provider Saved $4.7 Million with ERM

Hospitals are under pressure on multiple fronts to reduce costs. One key cost-reduction strategy is outsourcing non-core functions, such as IT, in order to focus more on effective and efficient patient care.

CareTech Solutions is a healthcare-focused IT service provider, supporting 400,000 end-users across more than 200 hospital clients, with a focus on “creating value for clients through customized IT solutions that contribute to improving the patient experience while lowering healthcare costs.”

To make it easier and faster for busy nurses and hospital staff to request IT services, and resolve technical issues more quickly, CareTech began implementing an enterprise request management (ERM) strategy with help from Kinetic Data.

Handling close to 250,000 service requests through its online service request portal in 2013, CareTech estimated “the annual costs savings and productivity gains for its collective operation to be an impressive $4.7 million.”

Read the full story here.


The Fundamentals of Service Delivery

By Brett Norgaard

Sound journalism addresses a fundamental set of questions—who, what, where, when, how, and why—in the makeup of a good story. The reader learns facts and gains knowledge from reading a well constructed piece. Similarly, a well constructed service will engage the user and provide a memorable experience. Service Blueprinting is a technique that maps out the service interaction from the perspective of what the user sees as well as how they interact with the “onstage” visible and “backstage” invisible service delivery elements. There is also a visualization of the underlying support infrastructure that the service provider uses in the delivery of the service.

With the advent of online shopping, social media and the proliferation of mobile devices, today’s demanding users expect the same always-on connectivity, interactivity and self-service that they receive from Amazon, Facebook and their iPhones. Yet service providers face an even higher bar: they must not only deliver those characteristics, but also handle greater organizational demands for security, compliance, privacy and multiple levels of approvals, along with more complex service requests than typical consumer interactions. Examples of these more complex processes include employee on-boarding, transitioning a new client onto the service platform, or doing both simultaneously while integrating with enterprise applications. And service providers have to do this for multiple clients who may in turn have multiple divisions, departments, or offices.

And a service provider’s current offering is always pitted against the “claims of better service” by rivals waiting in the wings. So savvy service providers are adding interactive feedback into the service flow: instead of waiting for the service to conclude before gathering feedback, they capture valuable, relevant feedback in real time that can be acted upon if triggers indicate a breach.

Good service for one client may not be good for another. It depends on what the service goals are for each client. Consider the case of a Health Care IT Service Provider that we work with—one hospital client sought to optimize around their doctors while another wanted to optimize around their patients.

As you can see, the demands on a service provider to deliver a well constructed, engaging, interactive, secure, compliant, and unique service for each client are no small order. The people, processes, and technology need to support all aspects of the service.

Starting with the technology, a platform that is configuration-based, secure, scalable, and operates in either a dedicated or multi-tenant mode is the first step. Next, building on that platform, the service provider should have several “experience shaping” levers at its disposal for its people to pull. The ability to request services and/or products, with bundling, unique approvals, and fulfillment, is key. The ability to view times, dates, and resources, and to obtain status, is also important. And it is vital to be able to gather real-time feedback so the service can be rectified if it goes off track. Taken together, these service levers can shape the flow of service to the unique needs of each client, truly addressing their business requirements and their processes.

Now there’s a story that you can write where you shape the who, what, where, when, how and why of great service.

Three Keys to Making Multi-tenancy Work

There is still some debate over whether multi-tenancy is a prerequisite for cloud computing, but doubters are getting harder to find. Nearly two years ago (an eternity in Internet time), David Linthicum, blogging in InfoWorld, called the dispute “silly.”  “Let’s get this straight right now,” he wrote, “Cloud computing is about sharing resources, and you can’t share resources without multi-tenancy.”

Even so, there are differences of opinion about what makes a good multi-tenant application strategy. First, to be clear on what multi-tenancy is, Wikipedia defines it as “a principle in software architecture wherein a single instance of the software runs on a server and serves multiple client organizations (tenants).” In contrast, a single-tenant cloud app, even if it runs on a virtual server partition, is almost identical to the old hosted ASP model, which dates back to the 1990s. And by now it’s abundantly clear that multi-tenancy can lower a customer’s costs and offer significantly more value over time. (See Alok Misra in InformationWeek.)

But not all multi-tenant applications are alike. Their cost and value—especially value—are heavily dependent on architectural and design considerations. The multi-tenant experience is often likened to leasing an apartment or condo versus owning a home. As anyone who has ever spent any time renting knows, the quality of the experience often depends on your landlord. In situations wherein our software is being used by a multi-tenant-application “landlord,” Kinetic Data has always tried to leverage best practices in the multi-tenant environment. To us, these boil down to three key considerations:

Be flexible, not monolithic.

Since all tenants in a multi-tenant environment share the same application, it may seem logical that the application must be the same for all tenants. If you accept that notion, you have to conclude that while a multi-tenant application may provide some broad value to all customers, its monolithic nature limits the unique value it can provide to customers with unique needs.

Kinetic Data set out to disprove that notion years ago, at least in the BMC Remedy® IT service management world. We acknowledged that at the application-run-time engine and application-data tiers, the same instance of BMC Remedy has to serve all customers in the same ways. This is the essence of multi-tenancy, and there’s no way around it.

But there are other tiers atop those two basic tiers, and each can be configured to provide unique service experiences. The trick is utilizing what we call cloning or templating, using metadata in the outward-facing applications customers actually use (in our case, Kinetic Request and Kinetic Task). In the Kinetic world, there is a clear separation between the compiled BMC Remedy run-time engine/application data and the metadata-based templates customers use to create unique branding, workflow processes, portals, and forms. In this way, each Kinetic Data customer can change the templates they use without affecting other tenants. This “isolation of processes” thus allows the creation of unique customer applications that are easily configured and deployed without sacrificing the security and integrity of the underlying application.
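The cloning/templating idea can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Kinetic Data’s actual implementation: one shared base template (standing in for the shared run-time tier), with each tenant getting its own deep copy that can be customized without touching anyone else’s.

```python
import copy

# Shared base template, standing in for the common run-time tier.
BASE_TEMPLATE = {"branding": "default", "workflow": ["submit", "approve", "fulfill"]}

class TenantCatalog:
    """Per-tenant template metadata layered over one shared base."""

    def __init__(self):
        self._templates = {}

    def register(self, tenant: str):
        # Each tenant starts from a clone of the base ("cloning/templating"),
        # so later edits never touch the shared original or other tenants.
        self._templates[tenant] = copy.deepcopy(BASE_TEMPLATE)

    def customize(self, tenant: str, key: str, value):
        self._templates[tenant][key] = value

    def template(self, tenant: str):
        return self._templates[tenant]

catalog = TenantCatalog()
catalog.register("acme")
catalog.register("globex")
catalog.customize("acme", "branding", "acme-blue")
print(catalog.template("globex")["branding"])  # still "default"
```

The deep copy is the whole point: it is what gives each tenant isolation of processes while the underlying engine stays shared.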

Security must be architected in, not added on.

In the early days of multi-tenant cloud computing, there was a lot of concern about data security. After all, multiple independent tenants share the same application, each with their own data sensitivity and security requirements. How do multi-tenant application providers ensure that each tenant’s data is kept separate and secure?

In Kinetic Data’s case, we use the “row-level” security model already built into BMC Remedy. This model ensures that row-level records are locked to a specific company ID. As a result, the ability to query, view, and modify records is restricted to users with company (or companies) privileges that allow access to those records.
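In spirit, row-level security is just a mandatory filter on every query. The sketch below is a hypothetical illustration (the records, company names, and `query` helper are invented for the example, not BMC Remedy’s API): each row carries a company ID, and a caller only ever sees rows matching their company privileges.

```python
# Every record is tagged with a company ID at write time.
records = [
    {"id": 1, "company": "acme",   "summary": "Reset VPN token"},
    {"id": 2, "company": "globex", "summary": "Provision laptop"},
    {"id": 3, "company": "acme",   "summary": "Onboard new hire"},
]

def query(user_companies, rows):
    """Return only rows whose company ID is in the caller's privileges."""
    return [r for r in rows if r["company"] in user_companies]

acme_user = query({"acme"}, records)
print([r["id"] for r in acme_user])  # [1, 3]
```

Because the filter is applied on every read, a tenant cannot query, view, or modify another tenant’s rows even though all rows live in the same instance.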

Employ multiple implementation models.

Customizability, unique value and ironclad data security—those are the pillars upon which Kinetic Request and Kinetic Task are built. But we also took another need into consideration. In the ideal multi-tenant world, companies may want to use applications on a strictly company-specific basis or share certain services among vendors, campuses, clients, partners, and others, or maintain the ability to do both. That’s why the Kinetic Data architecture is based on these three specific implementation models.

In the Kinetic Data company-specific model, companies can configure uniquely branded catalogs, portals and service items without affecting the branding or processing of any other company in the same BMC Remedy instance. In the shared Kinetic Data model, companies can share these items while maintaining the row-level security of the BMC Remedy environment. And in the blended Kinetic Data model, companies can share items while maintaining company-specific service items.
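The difference between the three models comes down to which service items a given company’s users can see. As a hypothetical sketch (the model names are from the text; the function and data are invented for illustration):

```python
# The three implementation models described above.
COMPANY_SPECIFIC = "company-specific"  # unique catalog per company
SHARED = "shared"                      # one catalog shared across companies
BLENDED = "blended"                    # shared items plus company-specific ones

def visible_items(model, company, shared_items, company_items):
    """Which service items a company's users see under each model."""
    if model == COMPANY_SPECIFIC:
        return company_items.get(company, [])
    if model == SHARED:
        return shared_items
    if model == BLENDED:
        return shared_items + company_items.get(company, [])
    raise ValueError(f"unknown model: {model}")

shared = ["Password reset"]
per_company = {"acme": ["Acme-branded onboarding"]}
print(visible_items("blended", "acme", shared, per_company))
# ['Password reset', 'Acme-branded onboarding']
```

Row-level security still applies underneath in every model; the choice only governs how catalogs are composed per tenant.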

So what makes multi-tenancy work? In Kinetic Data’s case, it works thanks to:

  • Customizable user interfaces for each tenant and the ability to configure unique tenant processes without global effects, via isolation of processes;
  • A flexible deployment model (unique or shared services, or a combination thereof); and
  • Data integrity and segmentation.

To learn more, see this technical brief on Kinetic Data’s Multi-tenancy Strategy.