Nicholas Yancer, Author at ServiceNow Guru
https://servicenowguru.com/author/nicholasyancer/

We Need to Talk About Workspaces
https://servicenowguru.com/system-ui/we-need-to-talk-about-workspaces/
Thu, 05 Dec 2024

Introduction

It seems like every release brings with it a new Workspace, and with it either some new functionality or a shiny new coat of paint over some familiar capability. With the introduction of the Next Experience in the San Diego release, ServiceNow began the parade of “Configurable Workspaces,” or “Experiences,” that have now become the vehicle for enabling enhanced AI and other advanced capabilities on the platform.

Here we will look at the state of these Workspaces and how they impact the usage, architecture, and design of solutions for ServiceNow. For more detailed information, see the ServiceNow documentation: Next Experience UI (servicenow.com).

Getting to know ServiceNow Workspaces and Next Experience

Workspace Overview

Prior to the Next Experience, ServiceNow dipped its toes in the enhanced UI waters by introducing the Agent Workspace (along with the ability to create your own Workspaces using the same framework). Agent Workspace laid the groundwork for what we now know as "Configurable Workspaces," but as of the Washington DC release it is no longer shipped, supported, or available for activation. Thus, for the remainder of this article the term "Workspace" is used to refer to the Next Experience configurable workspaces.

The Next Experience uses an implementation of “Web Components” (learn more about Web Components here) to encapsulate functionality within discrete units on a page to achieve the following benefits:

  1. Allow complex functionality to be packaged into self-contained and reusable units
  2. Avoid code sprawl encountered when reusing controls that require complex HTML, scripts, and styles
  3. Prevent conflicts between different implementations of similar blocks of code where styles or events and functions may overlap

Each component functions in a similar way to an “Interface” (see this page for a good description of interfaces in object-oriented programming) in that it defines a set of inputs (if necessary) and returns a set of outputs (also if necessary) while leaving the details of how the functionality is implemented up to the internal code. It is, in other words, a promise of a specified or agreed upon result but not a promise of how that result is achieved. This means that any JavaScript library can be used to implement the code within the component. And this is precisely why the Next Experience was built using this methodology: the internal workings of the component can be changed in any future release to use more efficient, simpler, or just different libraries without the platform needing to be entirely re-architected. The component is effectively future proofed as long as the new implementation uses the same inputs and outputs and returns the same result.
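In plain-JavaScript terms (a minimal sketch with made-up names, not actual Next Experience APIs), the contract idea looks like this: two components implement the same agreed inputs and outputs in different ways, so either can be swapped in without callers noticing.

```javascript
// Contract: formatDuration(seconds) -> a "<h>h <m>m" string.
// How each component fulfills that contract is an internal detail.
const componentV1 = {
  formatDuration(seconds) {
    // Original implementation: arithmetic inline
    return `${Math.floor(seconds / 3600)}h ${Math.floor((seconds % 3600) / 60)}m`;
  },
};

// A later release could rebuild the internals with a different library or
// approach; as long as the inputs and outputs match, nothing else changes.
const componentV2 = {
  formatDuration(seconds) {
    const hours = Math.floor(seconds / 3600);
    const minutes = Math.floor((seconds - hours * 3600) / 60);
    return [hours + "h", minutes + "m"].join(" ");
  },
};

// Both honor the same promise of a result, not a promise of how:
for (const component of [componentV1, componentV2]) {
  console.log(component.formatDuration(3720)); // "1h 2m" from both
}
```

Either object satisfies the "interface," which is exactly the future-proofing the Next Experience component model is after.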

That is a big deal.

However, it’s important to note that, as of this writing, Workspaces are not meant to replace every UI on the Now Platform. There are now three primary interfaces in ServiceNow: Service Portal for the end-user experience; Workspace for the fulfiller experience; and what is now known as “Core UI” (or the “backend” or the “admin UI” or the “frameset UI” or… well, it goes by many names) for System Administration as well as other Fulfiller processes that have not yet been gifted with their own workspace (emphasis on “yet”).

Workspaces do not yet support the “responsive” layout that Service Portals offer, and with continued development on several portals (Employee Center and the Customer and Consumer Service portals, for example) it does not appear that there is any rush to replace Service Portal with the Next Experience just yet.

Now that we understand a bit more about Workspaces, let’s look a little closer at some of the benefits and challenges they present.

The Promise of Workspaces

Going back to the Agent Workspace, the concept of a dedicated space that would ease access to information and supercharge productivity was the driving force behind introducing this new user experience paradigm. The idea was to encapsulate the various things a fulfiller would need to do work while remaining in a single browser tab. From a focused landing page and a targeted set of record lists to a structured work area that could use nested tabs to ease navigation without losing your place, the promise of Workspaces was to simplify the fulfiller experience under a single pane of glass (I promise that’s the last time I’ll use that term in this article) to make work as efficient as possible.

Often fulfillers need additional context when working on a task, such as information about a user, configuration item, Customer Account, or other related entity. Workspaces offer a consolidated view of this related information by either presenting it as a sidebar for the current record or allowing related records to open in a new tab within the same page so that users don’t have to navigate away from a record or open a new browser tab (which affects the browser’s history stack and can often cause frustration when the “back” button takes you somewhere you didn’t expect).

Additionally, most of the new capabilities (including the fast-expanding GenAI solutions) are exclusively released for and accessible from the new Workspaces. Now Assist, Playbooks (the portions for fulfillers), Recommended Actions, and other capabilities are not accessible in the Core UI, so adoption of these capabilities will also require adoption of Next Experience Workspaces.

If you have not already at least explored the various Workspaces, now is a good time to get started as they will only become more ingrained in the platform.

The Challenge of Workspaces

Every ray of sunshine casts a shadow, and it is no different with Workspaces. Along with all the promise, benefits, and new capabilities come real, and not insignificant, challenges to adoption, development, and maintenance.

Challenge 1: Silos and Sprawl

Considering users first, one of the main challenges of adopting Workspaces is the sheer number of them. Each workspace is designed for a specific Persona and Use Case, and the functionality is designed to support it. Unlike Service Portals, where any page can be used within any portal, each page (and with it, the functionality offered by the page) in a Workspace is defined only for that Workspace and cannot be used or accessed from elsewhere. This poses significant usability challenges when a user’s responsibilities cross multiple personas. In these cases, they may have to toggle between multiple Workspaces as they work through their processes.

A specific example relates to the intersection of Request Fulfillment and Asset Management. For many organizations, the Service Desk (the consummate IT Fulfiller) is responsible for fulfilling hardware and software requests for their end user base. Along with this responsibility, they may also manage stock for the equipment they provide. So, what happens when a user requests a new laptop?

Well, when Procurement is in use the first thing to happen (after approvals) is a "Sourcing" task for the entire Request. This task is meant to identify how the requested hardware will make its way to the fulfiller to supply to the requester. The request may be fulfilled from local stock, it may require transfer from another stockroom, or it may need to be purchased from a supplier. Each of those paths takes the fulfiller into asset- and procurement-management activities that are handled in a different, purpose-built Workspace than the one where the Request itself is worked.

As a result of such purpose-built workspaces, it can often be a challenge for a user to know where they need to be for a specific task or a specific step in a larger workflow. Clear and accessible documentation, good training, and some clever design to bridge across Workspace silos (such as providing navigation options when a task is best worked in another Workspace) can all help mitigate this risk.

It may also be possible to mitigate some of this conflict by adding capabilities to other workspaces; however, this would require a non-trivial level of effort to "copy" a page and its functionality from one Workspace to another. The copied pages and other artifacts would also need to be kept up to date with changes to the source page.

Challenge 2: Self-Configuration

Beginning with Homepages, one of the most powerful capabilities that ServiceNow offers is the ability to empower users to create what they need to be as productive as they can. Being able to build targeted reports, consolidate them onto a page, and share them with your team has been a mainstay of the platform since its inception, and is the main reason I fell in love with it so many years back. It was the root of my career transformation and the inspiration for my mission to spread that transformation as far and as wide as I can.

Now let’s talk Landing Pages. Each Workspace hosts a Home or Landing page, and it is possible to create “Variants” of the landing page that can be surfaced to a user depending on the roles they have (we will talk further about Variants a bit later). The Landing page is meant to provide key data for the user upon entering the Workspace to help answer the question “What do I need to work on next?” So, it would seem this is akin to Dashboards. But alas, this is not the case.

Landing pages, while they support Variants, are not adjustable or sharable by the user. Each Variant must be built by someone with what amounts to administrative access within the Workspace. Dashboards are still accessible through the Platform Analytics Workspace (see the documentation for more information), and it is possible to add Dashboards to a dedicated page within a Workspace (see the documentation for a specific implementation for the CSM Configurable Workspace), but given the newly siloed nature of Workspaces (see Challenge 1) it is no longer as intuitive or seamless.

Workspaces do still offer a level of personal configuration, specifically by allowing you to define your own “Lists” for quick and specific access beyond what is configured for the Workspace in general. The interface also still allows you (for the most part) to configure list layouts and to personalize forms, and certain pages offer personalization or configuration preference options depending on the page content. However, the loss of quickly and easily creating and sharing Dashboard content is a big one.

There are not a lot of options available to mitigate these challenges, other than providing good training and documentation to ensure users know where to find things like Dashboards and understand what they are able to configure themselves. You can also look to add a “Dashboard” page to each Experience, which will ensure that users remain in the experience when clicking through any report content to view the lists and records.

Challenge 3: Development and Maintenance Complexity

ServiceNow released UI Builder as a way to configure and develop within the Next Experience framework. They also allow building custom components (although they generally discourage this). However, building a custom component requires significant additional expertise and advanced tooling that most seasoned platform developers and architects do not have, because components must be built off-platform using a command line interface or another IDE. And for many of us, the learning curve is a bit too steep.

The power and promise of ServiceNow has always been that it obscures the underlying complexity of building an Enterprise-grade application, allowing folks with moderate scripting abilities to build amazing experiences. Next Experience introduces an entirely new lexicon along with purpose-built architecture that looks similar to, but is distinctly different and separate from, familiar entities like Script Includes and UI Actions.

One need only attempt to explore an existing page in a Workspace to quickly grasp the complexity and multi-layered architecture upon which many pages are built. Often sub-pages are nested within a Viewport in an existing component, and that sub-page may contain additional viewports wherein additional pages are nested, and so on. It is often a struggle to locate the component you are looking to investigate.

Additionally, the nature of Page Variants can make testing a challenge. Each Variant is given an order within a page route, and the first variant for which a user matches an Audience (as well as a match on any page conditions) will display when accessing that route. As an administrator, it can be difficult to access a particular Variant when testing, as the “all roles” nature of the admin role means that you will likely match the Audience of the first Variant by order. That fact can make changes to a variant somewhat difficult to test, although you can impersonate a user with the intended Audience in another session to make testing a bit easier.
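The ordering behavior can be modeled with a short sketch (hypothetical data structures and role names, not the platform's actual variant engine): variants are checked in order, and the first Audience match wins, which is why an all-roles admin almost always lands on the first variant.

```javascript
// Return the first variant (by order) whose audience matches any of the
// user's roles; return null when nothing matches.
function selectVariant(variants, userRoles) {
  const byOrder = [...variants].sort((a, b) => a.order - b.order);
  return byOrder.find(v => v.audienceRoles.some(r => userRoles.includes(r))) || null;
}

const variants = [
  { name: "Manager landing", order: 100, audienceRoles: ["sn_manager"] },
  { name: "Agent landing", order: 200, audienceRoles: ["itil"] },
];

// An admin with "all roles" matches the first variant by order...
console.log(selectVariant(variants, ["admin", "sn_manager", "itil"]).name); // "Manager landing"

// ...so impersonating a user who holds only the target audience's role is
// the practical way to reach a later variant for testing.
console.log(selectVariant(variants, ["itil"]).name); // "Agent landing"
```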

The ServiceNow developer community has been hard at work delivering content and enablement for UI Builder. At the moment, I can only recommend that you invest the time to explore the available content, leverage the collective community for advice and support (as this amazing community has done for decades now), and build up your capabilities and comfort level with UI Builder. Additionally, a light touch is the best approach: make modifications in a Workspace only as a last resort, use Page Variants where possible, and document everything comprehensively. In the meantime, keep open communication with ServiceNow's product managers, who are very active and open to dialog, with the intent of working collaboratively to ensure the platform continues to work for all of us.

Challenge 4: Capability Gaps

New Workspaces are introduced quickly in response to an ever-changing environment. As a result, sometimes the functionality they are meant to replace is not completely covered in the first release. ServiceNow has adopted an agile approach to this challenge, with frequent store releases occurring outside of the Major Family Release schedule aimed at providing evolving capabilities at a faster pace.

As an example, consider the evolution of the Project Workspace. When it was first released, it was limited to the new Planning Console and lacked the “Details” page to allow users to see the complete Project record; the other navigation options still linked out to the “classic” project workspace. Over the course of several Store releases, the “Classic” pages were added to the new Workspace, and as of Xanadu if you do a fresh install then the “Classic” navigation is now gone completely. There remain, as of this writing, several functions that elude the new Workspace, such as adding Test Phases to the Project from the Planning Console and preventing Child Tasks from being added to an Agile Phase.

The best way to mitigate this challenge is to carefully plan your adoption strategy. Establish a minimum capability threshold below which you cannot adopt a Workspace, and then monitor the road map and releases to know when that threshold is reached. You can also identify which pieces of functionality may still be accessible outside of the Workspace (or whether there is a way to embed them into a tab or modal) and explore a hybrid, phased approach to adoption. ServiceNow does a good job of regularly adding capabilities to the Workspaces, so it is likely only a matter of time until you reach critical mass and can begin adopting.

The Path Forward

Although I spent more time focusing on the challenges posed by Workspaces, my intent is not to cause despair. Having spent time working within these experiences, and considering their constant growth, the steady stream of impressive capabilities, and the simple fact that they are not going anywhere, I am hopeful that Workspaces and the tools underlying them will continue to evolve and fulfill the ServiceNow mission of making it as easy as possible to "enable regular people to create meaningful applications to route work through an enterprise."

The power of the Now Platform lies within its community. That includes not only the users, administrators, and developers that use the platform to carry out their mission, but the folks at ServiceNow that enable those users, administrators, and developers by listening to their needs and producing a product that is unmatched in its ability to empower and inspire every day. My hope is that we continue to collaborate to make it as easy as possible to create and deliver value from this amazing platform.

Applying the Sync/Async Pattern in Flow Designer
https://servicenowguru.com/graphical-workflow/applying-sync-async-pattern-flow-designer/
Thu, 18 Jul 2024

Introduction

Here is the scenario: you need to use up-to-date data from a record in a transaction that may take a long time to process, but you don’t want the user to have to wait for that transaction in the UI. You may, for example, need to send a Comment or Work Note to another system through an API call. That can take a few seconds or more, and it is possible that multiple comments could be entered by the time you send the API call, all the while multiple API calls are stacking up. Let’s see how we can use a Flow Designer Sync/Async design pattern to optimize user experience.

One way to handle this is to accomplish part of the action synchronously, within the "Business Rule Loop," by logging an Event and stashing the data you need (for example, the Comment or Work Note) in an event parameter so that it is up to date at the time of the transaction, and then to process the longer portion asynchronously, outside of the client transaction, using a Script Action. This Sync/Async design pattern ensures that you capture the data you want to send and return control to the User as quickly as possible, while still being able to send that "correct" data to the other system regardless of how long it takes to process.
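As a rough plain-JavaScript model of that pattern (illustrative names, not platform APIs): the synchronous step stashes the comment in the event payload, so the handler that runs later sends the value captured at transaction time, even though more comments arrived in the meantime.

```javascript
const eventQueue = [];

// Synchronous part (the "Business Rule"): capture the data right now and
// stash it in the event parameters.
function queueCommentEvent(record) {
  eventQueue.push({ recordId: record.id, comment: record.latestComment });
}

// Asynchronous part (the "Script Action"): runs later, outside the client
// transaction, and uses the stashed value rather than re-reading the record.
function processQueuedEvents() {
  return eventQueue.splice(0).map(ev => `API call for ${ev.recordId}: ${ev.comment}`);
}

const incident = { id: "INC0010001", latestComment: "First comment" };
queueCommentEvent(incident);

incident.latestComment = "Second comment"; // another comment lands before processing
queueCommentEvent(incident);

const results = processQueuedEvents();
console.log(results);
// Each stacked-up call still carries the comment it was queued with
```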

Is it possible to accomplish this same pattern using low- or no-code capabilities with Flow Designer? It is!

If you are not familiar with Flows and Subflows, you may want to check out the documentation first.

Foreground Flows for Synchronous Processing

By default, Flows are set to run in the background so that they do not slow down the user experience. This means that they do not run as part of the Before/After Business Rule transaction (the Business Rule Loop mentioned previously), but instead run outside of the transaction. There is an option to run Flows in the Foreground, which will run them during the Business Rule Loop and make them part of the Client transaction. It is important to make these types of Flows as efficient as possible to avoid negatively impacting the user experience.

One benefit for running Flows in this way is that you have access to the Trigger Record at runtime, so the data the user changed is available to you immediately. If you try to retrieve a Journal Entry asynchronously, it is possible that another entry has been made by the time you try to retrieve it, so you cannot guarantee that you have the right data when the asynchronous code executes.
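A tiny model of the journal (assumed behavior for illustration; getJournalEntry(1) returns the newest entry) shows why asynchronous retrieval cannot be trusted here:

```javascript
const journal = [];
function addComment(text) { journal.push(text); }
function getJournalEntry(n) { return journal.slice(-n).join("\n"); } // newest last

addComment("Please reset my password");
const capturedInForeground = getJournalEntry(1); // value at transaction time

addComment("Never mind, it works now"); // arrives before any background code runs
const capturedInBackground = getJournalEntry(1); // newest entry, not the trigger

console.log(capturedInForeground); // "Please reset my password"
console.log(capturedInBackground); // "Never mind, it works now"
```

Only the foreground read is guaranteed to be the comment that actually triggered the transaction.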

Our first requirement, getting the “right” Comment or Work Note, is met by retrieving it in a Foreground Flow. Let’s try this out by creating a Flow that will run in the Foreground when an Incident has a new Comment and log the Comment. We will add to this later, but for now we just want to get this up and running.

‘Sync Flow on Incident Comment’ Flow
Flow name: Sync Flow on Incident Comment
Trigger: Record > Created or Updated
Table: Incident
Condition: Active is true AND Additional comments changes
Run Trigger: For each unique change
Advanced Options
Where to run the flow: Run flow in foreground
ACTIONS

  1. Action: Log
    • Level: Info
    • Message (toggle scripting for this input):

      // Retrieve the most recent Comment from the Incident
      return "From the Flow:\n\n" + fd_data.trigger.current.comments.getJournalEntry(1);
      

This log will show up in the system log table with a “Source” value of “Flow Designer.” Activate this flow and test it out by adding a Comment to any active Incident and checking the logs.

Note that you could create a custom Action to retrieve the Journal Entry from the Incident to provide a no-code method to retrieve the data for future use, but for now we’ll stick with using a script in the Message field to keep things simple.

Great! Now we can get the comment, but how do we do something with it without having to make the user wait until we are done?

Subflow With Inputs

We will use a Subflow to handle the asynchronous portion. To do this, we need to pass our synchronously obtained data to the Subflow so that it does not have to retrieve potentially newer information when it runs. We’ll cover how to make this Subflow run asynchronously from the Flow a bit later.

For this example, we will simulate a potentially slow external API call by adding a wait timer to our Subflow.

First, we create a new Subflow and add inputs for the Incident record and the Comment that we want to handle in the API call. This ensures that we have access to the comment that was entered at the time we trigger the Subflow and that we are not retrieving a more recently entered comment when the Subflow executes.

Next, we log a timestamp for when the Subflow begins processing. Then we add a 10-second Wait (you can increase this if you want, to allow yourself time to enter multiple Comments while the Subflow waits to execute during your testing). Finally, we log out the Comment from the Subflow Input. We can also add another log step where we repeat the script we used to retrieve the most recent Comment from the Incident to see if it differs from the Comment we received as a Subflow Input.

‘Async Subflow Handle Incident Comment’ Subflow
Subflow name: Async Subflow Handle Incident Comment
INPUTS & OUTPUTS

  1. First Input
    • Label: Incident
    • Name: incident
    • Type: Reference.Incident
    • Mandatory: True
  2. Second Input
    • Label: Comment
    • Name: comment
    • Type: String
    • Mandatory: True

ACTIONS

  1. Action: Log
    • Level: Info
    • Message: Async Subflow, before Wait.
  2. Flow Logic: Wait for a duration of time
    • Duration Type: Explicit duration
    • Wait for: 10s
  3. Action: Log
    • Level: Info
    • Message: From subflow, after wait. Input comment: [data pill] Subflow Inputs > Comment [data pill]
  4. Action: Log
    • Level: Info
    • Message (toggle scripting for this input):
      // Log out the most recent journal entry from the Incident
      return 'From subflow, after wait. Most recent journal entry:\n\n' + fd_data.subflow_inputs.incident.comments.getJournalEntry(1);
      

Notice that we do not create any Subflow outputs. Because we plan to run this asynchronously, the Flow that calls our Subflow will no longer be running when the Subflow completes and will not be able to do anything with any outputs. Additionally, any error handling for the actions carried out by the Subflow needs to be contained within the Subflow and not passed to the calling Flow for additional handling. For example, you may usually pass the API response status and body back to the calling Flow to check for and respond to any errors (like creating an Incident), but you will now need to do this within the Subflow.

Publish the Subflow to make it available for use. We now have the Asynchronous component of our Flow Designer Sync/Async design pattern.

Final Subflow, with Inputs shown in detail.

Call the Subflow from the Flow

We now have a Subflow to handle the Comment, but we need to add it to the Flow to complete our Flow Designer Sync/Async design. Add a new step to the Flow, select the “Subflow” option, and then locate the Subflow you created. Set the Inputs to be the Incident trigger record and the most recent Comment (the same value you passed into the Log step, which we scripted).

To make this Subflow run Asynchronously, make sure that the “Wait for Completion” checkbox is unchecked. Doing so will allow the Flow to complete and immediately return control to the Client transaction and complete the remainder of the Business Rule Loop without waiting for the Subflow.

For testing, we will add another Log step after we call the Subflow to observe the sequence of events.

Your Flow should now look like this:

‘Sync Flow on Incident Comment’ Flow
Flow name: Sync Flow on Incident Comment
Trigger: Record > Created or Updated
Table: Incident
Condition: Active is true AND Additional comments changes
Run Trigger: For each unique change
Advanced Options
Where to run the flow: Run flow in foreground
ACTIONS

  1. Action: Log
    • Level: Info
    • Message (toggle scripting for this input):

      // Retrieve the most recent Comment from the Incident
      return "From the Flow:\n\n" + fd_data.trigger.current.comments.getJournalEntry(1);
      
  2. Subflow: Async Subflow Handle Incident Comment
    • Wait For Completion: False
    • Incident: [data pill] Trigger > Incident Record [data pill]
    • Comment (toggle scripting for this input):
      // Return the most recent journal entry
      return fd_data.trigger.current.comments.getJournalEntry(1);
      
  3. Action: Log
    • Level: Info
    • Message: From sync flow, after calling Subflow.

Final Flow, showing the values passed to the Subflow inputs in detail.

NOTE: After testing this multiple times, I discovered that the "Wait for a duration of time" block in the Subflow is necessary to make the Subflow truly asynchronous. Testing without the timer indicates that the main Flow will still hang while the Subflow is running, even though the "Wait for Completion" box is unchecked. It is unclear whether this is intended behavior. For now, I recommend adding a one-second wait to the Subflow in addition to unchecking the "Wait for Completion" box when calling the Subflow, to ensure that the Client transaction does not pause while the Subflow completes. You are welcome to try this out on your own. I used gs.sleep() in the script for one of the Subflow Log steps instead of the "Wait for a duration of time" step and observed that the client transaction did appear to hang even when the "Wait For Completion" box was unchecked for the Subflow.

Summary

Now we have seen how to implement a Flow Designer Sync/Async design pattern that would normally be accomplished using Events and Script Actions. This gives us another tool to use for optimizing the User Experience while also ensuring access to accurate data for long-running asynchronous transactions.

Getting Smart About Intelligence
https://servicenowguru.com/generative-ai/getting-smart-about-intelligence/
Thu, 30 May 2024

Artificial Intelligence (AI) has been building in capability, acceptance, and adoption over the past few years, and that trend has rapidly increased with the emergence of Generative AI (GenAI). ServiceNow has been building up AI-enabled capabilities on the platform for a while now and made it a key topic for Knowledge 24 soon after the launch of GenAI capabilities on the platform. 

With so much attention given to AI and all the different terms associated with the technology, understanding exactly what AI capabilities ServiceNow has to offer can be a challenge. Here, we look at the current state of AI offerings on the ServiceNow platform to gain a better understanding of the tools at our disposal. 

Intelligence on the ServiceNow Platform

AI, ML, PI, GenAI: these are all terms and acronyms you most likely have heard before. Are they all the same thing? Not quite.

Artificial Intelligence (AI) is the umbrella term for all of the other capabilities. Machine Learning (ML) is a type of AI that analyzes data to find patterns and derives models that can be applied to assist with decision making. Predictive Intelligence (PI) is a subset of Machine Learning that takes data (typically structured or semi-structured) as an input and uses it to predict an output. 

Generative AI (GenAI) is an expansion of Machine Learning that adds to models the ability to create new data based on existing data given an input. This is a significant difference from PI models, where the possible outputs or range of outputs are known in advance. For example, a PI model can predict what product a person is going to buy based on past purchases, the time it will take for a Case to be resolved by a customer service agent, or a relevant Knowledge Article based on a search query or data from an Incident.  

GenAI can go further to do things like build a tailored product description highlighting the features that are most important to a buyer, recommend steps that can be taken to resolve a Case by extracting troubleshooting and resolution steps from similar cases, or present a unique response based on content from one or more relevant Knowledge Articles. Predictive Intelligence will display the product or the article, whereas GenAI will synthesize more targeted content based on those products and articles. This is a subtle but powerful distinction.

So what capabilities does ServiceNow offer? The ServiceNow Documentation lists all AI-enabled capabilities and, as of May 2024, even features AI capabilities of its own, with a Beta launch of summarization and code explanation on nearly every page. For now, we focus on two of the most widely used capabilities: Predictive Intelligence and Now Assist, ServiceNow's GenAI solution.

The AI-enabled capabilities on the platform share a common goal: replace low-value, repetitive activities and surface the right information in front of users in order to accelerate decision making, complete work faster, and improve overall accuracy and efficiency.

Now Assist for Code Generation

Predictive Intelligence

As of the Washington DC release, ServiceNow includes three predictive frameworks for creating, training, and using PI models on the platform: classification, similarity, and clustering. 

The Classification framework can predict a field value based on structured and semi-structured data. The most common example, and one of the earliest used capabilities, is predicting the category of a record based on inputs like the short description. It can be used to set nearly any choice or reference value on a record, provided there is a set list of possible values and a strong correlation between the fields used to make the prediction; this makes automatic assignment a particularly valuable use case, considering that reassignment is one of the main causes of delay in Incident and Case resolution. 
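To build intuition for how a classification model maps structured text to a field value, here is a toy word-frequency classifier in JavaScript. This is purely illustrative: the real Classification framework is trained and invoked through the platform, and the field and category names below are hypothetical.

```javascript
// Illustrative only: a toy keyword-scoring classifier, NOT ServiceNow's
// Predictive Intelligence implementation. Records and categories are made up.
const trainingData = [
  { shortDescription: "cannot connect to vpn from home", category: "Network" },
  { shortDescription: "vpn connection drops every hour", category: "Network" },
  { shortDescription: "outlook will not open attachments", category: "Software" },
  { shortDescription: "excel crashes when saving file", category: "Software" },
];

// Count how often each word appears under each category.
function train(records) {
  const model = {};
  for (const { shortDescription, category } of records) {
    model[category] = model[category] || {};
    for (const word of shortDescription.toLowerCase().split(/\s+/)) {
      model[category][word] = (model[category][word] || 0) + 1;
    }
  }
  return model;
}

// Score each category by matching training words; return the best guess
// plus a crude confidence (that category's share of the total score).
function predict(model, shortDescription) {
  const words = shortDescription.toLowerCase().split(/\s+/);
  const scores = {};
  let total = 0;
  for (const category of Object.keys(model)) {
    scores[category] = words.reduce((s, w) => s + (model[category][w] || 0), 0);
    total += scores[category];
  }
  const best = Object.keys(scores).sort((a, b) => scores[b] - scores[a])[0];
  return { category: best, confidence: total ? scores[best] / total : 0 };
}

const model = train(trainingData);
const result = predict(model, "vpn keeps disconnecting from home office");
console.log(result.category); // → "Network"
```

Note that this "confidence" score is what makes a prediction usable for automation: a record is only auto-assigned when the score clears a threshold you choose.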

The Similarity framework identifies other existing records that are similar to a given record. A common use case for this framework is to identify Knowledge Articles that relate to an Incident or a Case. Another use case is to find similar Case or Incident records that can be used to identify troubleshooting or resolution steps that may apply to the current Case or Incident. It can also be used to find a Change Request that caused or may have contributed to an Incident. These capabilities all help to accelerate issue resolution, minimize impact to users and customers, increase efficiency, and boost user and customer satisfaction. 
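The core idea behind record similarity can be sketched with bag-of-words vectors and cosine similarity. The actual Similarity framework uses far more sophisticated vectorization, so treat this as a conceptual sketch with invented record text.

```javascript
// Conceptual sketch only: compare records via bag-of-words cosine similarity.
function toVector(text) {
  const vec = {};
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    vec[w] = (vec[w] || 0) + 1;
  }
  return vec;
}

function cosine(a, b) {
  const words = new Set([...Object.keys(a), ...Object.keys(b)]);
  let dot = 0, na = 0, nb = 0;
  for (const w of words) {
    dot += (a[w] || 0) * (b[w] || 0);
    na += (a[w] || 0) ** 2;
    nb += (b[w] || 0) ** 2;
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Rank existing records (e.g. Knowledge Articles) by similarity to a new Incident.
function findSimilar(target, candidates, topN = 3) {
  const tv = toVector(target);
  return candidates
    .map((c) => ({ text: c, score: cosine(tv, toVector(c)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topN);
}

const articles = [
  "how to fix email sync issues on mobile devices",
  "resetting your vpn password",
  "printer troubleshooting guide",
];
const ranked = findSimilar("email not syncing on mobile", articles);
console.log(ranked[0].text); // the email sync article ranks first
```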

The Clustering capability identifies groups of records that are similar to one another. The key distinction from Similarity is that we do not start with a given record to make a comparison. Instead, we look at a population of records and divide them into groups of similar records. This is useful to help identify common issues, potential common causes, potential Major Incidents or Issues, or other shortfalls like knowledge gaps or single points of failure. The most common use case is Major Incident and Major Issue Management, where Clustering can identify the impact and help locate a common cause. It is also helpful for proactive Problem Management, to help identify and resolve common causes of minor issues before they have a larger impact. 
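To make the distinction from Similarity concrete, here is a minimal sketch of the clustering idea: greedily group records by word overlap (Jaccard similarity) without any starting record. The threshold and records are invented, and real clustering solutions are far more sophisticated.

```javascript
// Minimal clustering sketch: greedy grouping by Jaccard word overlap.
function words(text) {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

function jaccard(a, b) {
  const inter = [...a].filter((w) => b.has(w)).length;
  const union = new Set([...a, ...b]).size;
  return union ? inter / union : 0;
}

// Each record joins the first cluster it resembles, or starts a new one.
function cluster(texts, threshold = 0.3) {
  const clusters = [];
  for (const t of texts) {
    const tw = words(t);
    const home = clusters.find((c) => jaccard(tw, words(c[0])) >= threshold);
    if (home) home.push(t);
    else clusters.push([t]);
  }
  return clusters;
}

const groups = cluster([
  "email server not responding",
  "email server down",
  "printer out of toner",
]);
console.log(groups.length); // → 2 (the two email records cluster together)
```

A cluster of many "email server" records, none individually severe, is exactly the kind of signal that flags a potential Major Incident or a common underlying cause.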

Although deprecated as of the Washington DC release, the Regression framework worked like Classification but predicted a numeric value, such as time to resolve. Its limited utility is likely why it is no longer supported.

Now that we understand the PI frameworks and a few of their use cases, here are some considerations to keep in mind when implementing them.

Most important is identifying the outcomes you want to achieve with these capabilities (as is the case when implementing any capability). Is there value in automatically assigning a category to your records? You can assess this by thinking about the amount of time spent on categorization, whether incorrect categorization causes any issues, and what other processes or decisions rely on categorization. Despite the heavy attention paid to defining the right categories, most organizations include category in their reporting but have no actionable use for it. Consider instead whether automatically assigning a record to the right group with a predictive model, rather than static assignment rules, may produce better outcomes.

Data quality is almost as important as outcomes. Predictive models need to be trained to produce good, usable solutions, and they need to be trained on your data. The quality of the predictions depends directly on the quality and accuracy of the data used to train the models. You can't make good predictions with bad data. Do you currently have processes in place to ensure correct data for your models (both the input fields used to make the prediction and the output field you want to predict)? If not, then focus on improving data quality first in order to be successful with Predictive Intelligence.
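A first, simple data-quality check is measuring the fill rate of the fields a model depends on. The sketch below runs over an in-memory array for illustration; the record and field names are hypothetical.

```javascript
// Quick data-quality audit: what fraction of candidate training records
// actually populate a given field? Records and field names are made up.
function fillRate(records, field) {
  const filled = records.filter(
    (r) => r[field] && String(r[field]).trim() !== ""
  ).length;
  return records.length ? filled / records.length : 0;
}

const incidents = [
  { short_description: "VPN down", category: "network" },
  { short_description: "Laptop broken", category: "" },
  { short_description: "", category: "hardware" },
];
console.log(fillRate(incidents, "category")); // ≈ 0.67, likely too sparse to train on
```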

You must also have a sufficient quantity of data to train the models. The recommended number of records is at least 30,000, with a maximum of 300,000. More records are not always better, as the data you train on must accurately reflect what you want to predict. However, the more quality data you provide to the models, the better the predictions will be.

Another key consideration is to manage expectations. When training a predictive solution, there will be a tradeoff between the coverage of the solution (i.e. the expected percentage of records that will be able to produce a prediction) and accuracy (i.e. the percentage of predicted values deemed correct). No solution will be able to predict a value for 100% of all records with 100% accuracy, so you need to decide which is more important and tune your models accordingly. 
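The coverage/accuracy tradeoff can be seen directly by varying the confidence threshold below which predictions are discarded. The prediction data below is made up for illustration.

```javascript
// Sketch of the coverage/accuracy tradeoff: a higher confidence threshold
// raises accuracy but lowers coverage. Prediction data is invented.
function evaluate(predictions, threshold) {
  const accepted = predictions.filter((p) => p.confidence >= threshold);
  const correct = accepted.filter((p) => p.predicted === p.actual).length;
  return {
    coverage: predictions.length ? accepted.length / predictions.length : 0,
    accuracy: accepted.length ? correct / accepted.length : 0,
  };
}

const predictions = [
  { predicted: "Network", actual: "Network", confidence: 0.95 },
  { predicted: "Software", actual: "Software", confidence: 0.80 },
  { predicted: "Hardware", actual: "Software", confidence: 0.55 },
  { predicted: "Network", actual: "Hardware", confidence: 0.40 },
];

console.log(evaluate(predictions, 0.0)); // coverage 1.0, accuracy 0.5
console.log(evaluate(predictions, 0.7)); // coverage 0.5, accuracy 1.0
```

Tuning comes down to choosing the point on this curve that matches your priorities: a help desk automating assignment may prefer high accuracy with lower coverage, while a triage aid may tolerate lower accuracy for broader coverage.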

Finally, ensure you have a plan in place to continually train your solutions. As you bring in new data, it is critical to continue training the solutions so that they remain relevant to the issues you are addressing. A solution that had good accuracy and coverage 12 months ago may no longer be able to make good predictions today due to different applications being deployed, changes in groups or organizational structure, migration to the cloud, or other factors. Be sure that you allocate adequate time and resources to continually retrain the models and review their performance. 

You can find more information on training, tuning, and maintaining predictive solutions in the ServiceNow Documentation.

Generative AI

There is a lot of excitement for and attention to GenAI right now, but also a lot of skepticism and caution. Workers in a knowledge economy are rightly concerned that some or all of their job functions may be replaced by AI. Current AI tools do not yet rise to that level; instead, they stand to replace tedious and repetitive tasks and help work get done faster. 

ServiceNow recognizes the best possible use for these tools, a fact made clear by naming the product Now Assist (rather than Now Replace). The current GenAI capabilities included in Now Assist (referred to by ServiceNow as "Skills" when delivered as part of a use case) generally fall into two broad categories: summarization and content creation. New capabilities are being introduced quite rapidly, so be sure to keep tabs on the ServiceNow Now Assist Documentation for updates.

Summarization skills are useful for quickly getting up to speed on an issue or other type of ticket when a handoff occurs, whether from another agent or from an automated process like an Alert or the Virtual Agent. They also include use cases like generating Resolution Notes for an Incident or Work Order based on activity from the record history. This can save huge amounts of time, as users no longer have to manually review the activity, highlight key points, and distill that information; the summary is ready for you to pick up work or to glean insights and draw conclusions.

Content generation skills cover such use cases as drafting Knowledge Articles as well as code, Playbook, and Flow generation. In these cases, the system presents a recommended artifact for the user to then adjust and use if they wish. Often getting to this starting point saves crucial time, and these items can then be knitted together to provide the desired outcomes. 

There are also capabilities that combine both summarization and content generation. Now Assist for AI Search provides actionable answers to search queries that are generated or selected from the content of relevant search results. In other words, Now Assist can present a uniquely generated answer or can surface an existing answer that it deems the best match. Now Assist for the Virtual Agent simplifies topic discovery by eliminating the time and effort needed to set up and train NLU models or refine keywords and by using AI search capabilities to provide more meaningful and actionable results during the chat conversation. This results in a faster time to value for search and Virtual Agent through easier setup and more relevant results. 

As with PI models, it is important to consider the outcomes you want from GenAI solutions. Fortunately, Now Assist provides more targeted use cases so that outcomes are clear from the start when you select which components you want to implement. 

Even with Generative AI, the quality of data that the system uses to generate a response is critical to the utility of the response. Detailed notes must be kept on tasks and other activities that are meant to be summarized. For example, if you do not record actions taken or activities that occurred in the work notes for an Incident or a Case, and you have not used the Additional Comments to communicate, then there will not be any content for Now Assist to summarize. 

The output of GenAI solutions is a starting point, not a final product. AI is still developing rapidly, and as such, effective governance must ensure that the right solutions are deployed and producing results, that their performance meets realistic expectations, and that the outputs are used in the right way. We cannot blame the AI model if it generates code that we copy and paste into place but never validate that it functions and performs the way we want.

Conclusion

The ideal use of AI is to replace low-value work in order to enable us to focus on the aspects of our jobs that matter most. GenAI capabilities are not meant to displace any of the PI capabilities but rather to complement them and provide a richer, more intelligent experience throughout the platform.

Armed with our new understanding, we are ready to define our outcomes and make smarter choices about how we use intelligence to achieve them.

The post Getting Smart About Intelligence appeared first on ServiceNow Guru.
