ServiceNow Guru https://servicenowguru.com/

Be a Choice List “Thief” https://servicenowguru.com/system-definition/be-a-choice-list-thief/

Do you find yourself duplicating the same choices across different forms and tables in ServiceNow? Manually updating these lists can be a real time-consuming hassle.

Here’s the good news! In my second article about powerful dictionary configurations, I will showcase a time-saving technique to reuse choice list values across your instance. This ensures consistency in your data management and is particularly beneficial for multilingual environments, as you only need to translate the main choice list.

Use case 1: Linking Catalog Variables to existing Choice Options

There is a Choice field on a table, and we need to replicate this field on a Catalog Item or Record Producer form with the same choices.

For example, adding the Incident Category drop-down to the “Create Incident” Record Producer. On the left of the screenshot below is the Incident Category field as displayed in the Service Operations Workspace, and on the right is the Create Incident Record Producer in the Employee Service Center Service Portal.

  • To get these values onto the “Create Incident” Record Producer Form, create a new Select Box Variable to hold the values that need to be displayed.

  • Next, instead of creating mirror Choice options in the Question Choices Related List, navigate to the “Type Specifications” section on the Variable form. Enter the following values and save the variable:
    • Choice table: Incident [incident]
    • Choice field: Category

  • Now when we navigate back to the form in the Service Portal, we can see that there is a new Category field on the form, and it already has Choice values that exactly match the Incident Category selections. If you change the available Choices for the Incident Category field, the Variable drop-down values will automatically reflect those changes without having to repeat the configuration on the Variable record.

Use case 2: Reuse Choice List values in another field

Sometimes you need to replicate a set of choice options across several tables and fields to create consistency as data moves from one record to another, like Asset and CI attributes that exist on both records. –OR– You have several fields on the same form with an identical set of Choice options; for example, a design decision to use Yes/No Choice fields instead of Checkbox fields.

  • First, choose which table and field will drive the other fields’ options. The example will use Incident Category.
  • Navigate to the Dictionary record for the other field(s) that will be driven by the Incident Categories.
  • Find the Choice List Specification tab. If you don’t see the Choice table field, you may have to switch to Advanced view.
  • Enter the following values and save the Dictionary record:
    • Choice table: Incident [incident]
    • Choice field: Category

That’s it! Now, when you change the main field’s list, all of the other choice lists will reflect the change.
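
Tip: if you adopt this pattern widely, it helps to be able to audit which fields borrow their choices from a given source. Below is a minimal background-script sketch; it assumes the Dictionary form’s “Choice table” and “Choice field” labels map to the choice_table and choice_field columns on sys_dictionary.

var dict = new GlideRecord('sys_dictionary');
dict.addQuery('choice_table', 'incident'); // assumed column behind the "Choice table" label
dict.addQuery('choice_field', 'category'); // assumed column behind the "Choice field" label
dict.query();
while (dict.next()) {
    // name = the table the field lives on, element = the field's column name
    gs.info(dict.getValue('name') + '.' + dict.getValue('element') + ' reuses the Incident Category choices');
}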

Summary

This article explored two simple ways to reuse choice list values in the ServiceNow platform. If you are a customer that supports multiple languages, you’ve saved yourself even more time, because you only have to translate the main field’s Choices and those translations are reflected everywhere else as well.

By leveraging these techniques, you can streamline your ServiceNow configuration and save valuable time!

We Need to Talk About Workspaces https://servicenowguru.com/system-ui/we-need-to-talk-about-workspaces/

Introduction

It seems like every release brings with it a new Workspace, and with it either some new functionality or a shiny new coat of paint over some familiar capability. With the introduction of the Next Experience in the San Diego release, ServiceNow began the parade of “Configurable Workspaces,” or “Experiences,” that have now become the vehicle for enabling enhanced AI and other advanced capabilities on the platform.

Here we will look at the state of these Workspaces and how they impact the usage, architecture, and design of solutions for ServiceNow. For more detailed information, see the ServiceNow documentation: Next Experience UI (servicenow.com).

Getting to know ServiceNow Workspaces and Next Experience

Workspace Overview

Prior to the Next Experience, ServiceNow dipped their toes in the enhanced UI waters by introducing the Agent Workspace (and the ability to create your own Workspaces using the framework). This laid the groundwork for what we now know as “Configurable Workspaces,” but Agent Workspace itself, as of the Washington DC release, is no longer shipped, supported, or available for activation. Thus, for the remainder of this article the term “Workspace” is used to refer to the Next Experience configurable workspaces.

The Next Experience uses an implementation of “Web Components” (learn more about Web Components here) to encapsulate functionality within discrete units on a page to achieve the following benefits:

  1. Allow complex functionality to be packaged into self-contained and reusable units
  2. Avoid code sprawl encountered when reusing controls that require complex HTML, scripts, and styles
  3. Prevent conflicts between different implementations of similar blocks of code where styles or events and functions may overlap

Each component functions in a similar way to an “Interface” (see this page for a good description of interfaces in object-oriented programming) in that it defines a set of inputs (if necessary) and returns a set of outputs (also if necessary) while leaving the details of how the functionality is implemented up to the internal code. It is, in other words, a promise of a specified or agreed upon result but not a promise of how that result is achieved. This means that any JavaScript library can be used to implement the code within the component. And this is precisely why the Next Experience was built using this methodology: the internal workings of the component can be changed in any future release to use more efficient, simpler, or just different libraries without the platform needing to be entirely re-architected. The component is effectively future proofed as long as the new implementation uses the same inputs and outputs and returns the same result.

That is a big deal.

However, it’s important to note that, as of this writing, Workspaces are not meant to replace every UI on the Now Platform. There are now three primary interfaces in ServiceNow: Service Portal for the end-user experience; Workspace for the fulfiller experience; and what is now known as “Core UI” (or the “backend” or the “admin UI” or the “frameset UI” or… well, it goes by many names) for System Administration as well as other Fulfiller processes that have not yet been gifted with their own workspace (emphasis on “yet”).

Workspaces do not yet support the “responsive” layout that Service Portals offer, and with continued development on several portals (Employee Center and the Customer and Consumer Service portals, for example) it does not appear that there is any rush to replace Service Portal with the Next Experience just yet.

Now that we understand a bit more about Workspaces, let’s look a little closer at some of the benefits and challenges they present.

The Promise of Workspaces

Going back to the Agent Workspace, the concept of a dedicated space that would ease access to information and supercharge productivity was the driving force behind introducing this new user experience paradigm. The idea was to encapsulate the various things a fulfiller would need to do work while remaining in a single browser tab. From a focused landing page and a targeted set of record lists to a structured work area that could use nested tabs to ease navigation without losing your place, the promise of Workspaces was to simplify the fulfiller experience under a single pane of glass (I promise that’s the last time I’ll use that term in this article) to make work as efficient as possible.

Often fulfillers need additional context when working on a task, such as information about a user, configuration item, Customer Account, or other related entity. Workspaces offer a consolidated view of this related information by either presenting it as a sidebar for the current record or allowing related records to open in a new tab within the same page so that users don’t have to navigate away from a record or open a new browser tab (which affects the browser’s history stack and can often cause frustration when the “back” button takes you somewhere you didn’t expect).

Additionally, most of the new capabilities (including the fast-expanding GenAI solutions) are exclusively released for and accessible from the new Workspaces. Now Assist, Playbooks (the portions for fulfillers), Recommended Actions, and other capabilities are not accessible in the Core UI, so adoption of these capabilities will also require adoption of Next Experience Workspaces.

If you have not already at least explored the various Workspaces, now is a good time to get started as they will only become more ingrained in the platform.

The Challenge of Workspaces

Every ray of sunshine casts a shadow, and it is no different with Workspaces. Along with all the promise, benefits, and new capabilities come real, and not insignificant, challenges to adoption, development, and maintenance.

Challenge 1: Silos and Sprawl

Considering users first, one of the main challenges of adopting Workspaces is the sheer number of them. Each workspace is designed for a specific Persona and Use Case, and the functionality is designed to support it. Unlike Service Portals, where any page can be used within any portal, each page (and with it, the functionality offered by the page) in a Workspace is defined only for that Workspace and cannot be used or accessed from elsewhere. This poses significant usability challenges when a user’s responsibilities cross multiple personas. In these cases, they may have to toggle between multiple Workspaces as they work through their processes.

A specific example relates to the intersection of Request Fulfillment and Asset Management. For many organizations, the Service Desk (the consummate IT Fulfiller) is responsible for fulfilling hardware and software requests for their end user base. Along with this responsibility, they may also manage stock for the equipment they provide. So, what happens when a user requests a new laptop?

Well, when Procurement is in use the first thing to happen (after approvals) is a “Sourcing” task for the entire Request. This task is meant to identify how the requested hardware will make its way to the fulfiller to supply to the requester. The request may be fulfilled from local stock, it may require transfer from another stockroom, or it may need to be purchased from a supplier.

As a result of such purpose-built workspaces, it can often be a challenge for a user to know where they need to be for a specific task or a specific step in a larger workflow. Clear and accessible documentation along with good training can help mitigate this risk, as well as some clever design to try to bridge across Workspace silos, such as providing navigation options when a task is best worked in another Workspace.

It may also be possible to mitigate some of this conflict by adding capabilities to other workspaces; however, this would require a non-trivial level of effort to “copy” a page and its functionality from one Workspace to another. The copied pages and other artifacts would also need to be kept up to date with changes to the source page.

Challenge 2: Self-Configuration

Beginning with Homepages, one of the most powerful capabilities that ServiceNow offers is the ability to empower users to create what they need to be as productive as they can. Being able to build targeted reports, consolidate them onto a page, and share them with your team has been a mainstay of the platform since its inception, and is the main reason I fell in love with it so many years back. It was the root of my career transformation and the inspiration for my mission to spread that transformation as far and as wide as I can.

Now let’s talk Landing Pages. Each Workspace hosts a Home or Landing page, and it is possible to create “Variants” of the landing page that can be surfaced to a user depending on the roles they have (we will talk further about Variants a bit later). The Landing page is meant to provide key data for the user upon entering the Workspace to help answer the question “What do I need to work on next?” So, it would seem this is akin to Dashboards. But alas, this is not the case.

Landing pages, while they support Variants, are not adjustable or sharable by the user. Each Variant must be built by someone with what amounts to administrative access within the Workspace. Dashboards are still accessible though the Platform Analytics Workspace (see the documentation for more information), and it is possible to add Dashboards to a dedicated page within a Workspace (see the documentation for a specific implementation for the CSM Configurable Workspace), but given the newly siloed nature of Workspaces (see Challenge 1) it is no longer as intuitive or seamless.

Workspaces do still offer a level of personal configuration, specifically by allowing you to define your own “Lists” for quick and specific access beyond what is configured for the Workspace in general. The interface also still allows you (for the most part) to configure list layouts and to personalize forms, and certain pages offer personalization or configuration preference options depending on the page content. However, the loss of quickly and easily creating and sharing Dashboard content is a big one.

There are not a lot of options available to mitigate these challenges, other than providing good training and documentation to ensure users know where to find things like Dashboards and understand what they are able to configure themselves. You can also look to add a “Dashboard” page to each Experience, which will ensure that users remain in the experience when clicking through any report content to view the lists and records.

Challenge 3: Development and Maintenance Complexity

ServiceNow released UI Builder as a way to configure and develop within the Next Experience framework. They also allow building custom components (although they generally discourage this). However, building a custom component requires significant additional expertise and advanced tooling that most seasoned platform developers and architects do not have, as components must be built off-platform in a command line interface or other IDE. And for many of us, the learning curve is a bit too steep.

The power and promise of ServiceNow is that it obscures the underlying complexity of building an Enterprise-grade application and allows folks with moderate scripting abilities to build amazing experiences. Next Experience introduces an entirely new lexicon along with purpose-built architecture that looks similar to, but is distinctly different and separate from, familiar entities like Script Includes and UI Actions.

One need only attempt to explore an existing page in a Workspace to quickly grasp the complexity and multi-layered architecture upon which many pages are built. Often sub-pages are nested within a Viewport in an existing component, and that sub-page may contain additional viewports wherein additional pages are nested, and so on. It is often a struggle to locate the component you are looking to investigate.

Additionally, the nature of Page Variants can make testing a challenge. Each Variant is given an order within a page route, and the first variant for which a user matches an Audience (as well as a match on any page conditions) will display when accessing that route. As an administrator, it can be difficult to access a particular Variant when testing, as the “all roles” nature of the admin role means that you will likely match the Audience of the first Variant by order. That fact can make changes to a variant somewhat difficult to test, although you can impersonate a user with the intended Audience in another session to make testing a bit easier.

The ServiceNow developer community has been hard at work delivering content and enablement for UI Builder. At the moment, I can only recommend that you invest the time to explore the available content, leverage the collective community for advice and support (as this amazing community has done for decades now), and build up your capabilities and comfort level with UI Builder. Additionally, a light touch is the best approach: I recommend making modifications in a Workspace only as a last resort, using Page Variants where possible, and documenting them very comprehensively. In the meantime, keep open communication with ServiceNow’s product managers, who are very active and open to dialog, with the intent of working collaboratively to ensure the platform continues to work for all of us.

Challenge 4: Capability Gaps

New Workspaces are introduced quickly in response to an ever-changing environment. As a result, sometimes the functionality they are meant to replace is not completely covered in the first release. ServiceNow has adopted an agile approach to this challenge, with frequent store releases occurring outside of the Major Family Release schedule aimed at providing evolving capabilities at a faster pace.

As an example, consider the evolution of the Project Workspace. When it was first released, it was limited to the new Planning Console and lacked the “Details” page to allow users to see the complete Project record; the other navigation options still linked out to the “classic” project workspace. Over the course of several Store releases, the “Classic” pages were added to the new Workspace, and as of Xanadu if you do a fresh install then the “Classic” navigation is now gone completely. There remain, as of this writing, several functions that elude the new Workspace, such as adding Test Phases to the Project from the Planning Console and preventing Child Tasks from being added to an Agile Phase.

The best way to mitigate this challenge is to carefully plan your adoption strategy. Establish a minimum capability threshold below which you cannot adopt a Workspace and then monitor the road map and releases to know when that threshold is reached. You can also identify which pieces of functionality may still be accessible outside of the Workspace (or whether there is a way to embed them into a tab or modal) and explore a hybrid, phased approach to adoption. ServiceNow does a good job of regularly adding capabilities to the Workspaces, so it is likely only a matter of time until you reach critical mass and can begin adopting.

The Path Forward

Although I spent more time focusing on the challenges posed by Workspaces, my intent is not to cause despair. Having spent time working within these experiences, seeing the constant growth and the steady stream of impressive capabilities, and accepting the simple fact that they are not going anywhere, I am hopeful that Workspaces and the tools underlying them will continue to evolve and fulfill the ServiceNow mission of making it as easy as possible to “enable regular people to create meaningful applications to route work through an enterprise.”

The power of the Now Platform lies within its community. That includes not only the users, administrators, and developers that use the platform to carry out their mission, but the folks at ServiceNow that enable those users, administrators, and developers by listening to their needs and producing a product that is unmatched in its ability to empower and inspire every day. My hope is that we continue to collaborate to make it as easy as possible to create and deliver value from this amazing platform.

Implementing a Factory Pattern on ServiceNow https://servicenowguru.com/scripting/implementing-a-factory-pattern-on-servicenow/

From time to time I’ve run into a situation where I need to branch to use a different set of code depending on some attribute or circumstance, but regardless of the branch the code is essentially doing the same thing, it just has to do it a different way. To put more of a point on this, imagine that you need to integrate to update an incident on another system…but it’s not just one other system, it’s potentially many and they are all different vendors. In the end, you need to update a record on a target system, but how you do that is most likely different as each system will have its own API to consume to do this. One way to accomplish it would be to build an if/then or switch block with each block calling the right code to interact with the other system. But another, more flexible way, is to build a factory that dynamically gives you the code you need based on attributes you have defined in your instance.

The factory analogy is apt here and is a fairly common pattern in programming.  At a high level:

  • Factory: The factory represents a central machine that produces different items.
  • Products: The items on the conveyor belt are the products created by the factory.
  • Clients: The workers who take items off the conveyor belt are the clients using the factory.

There are a couple of key components to this technique, and we’re going to walk through them. As we work through an example, we will tie what we build to the above concepts to connect the dots!

First, let’s simplify and refine the above general example. We have two external systems we need to integrate with to create/update incidents. Each external system has its own REST API to call with a unique payload. We know which system we have to integrate with by the company associated with the caller on the incident in our system. There are two companies right now, CompanyOne & CompanyTwo, that we need to integrate with.

At a high level, the steps involved in creating our factory are:

  1. Create the code that would call each company’s API correctly.
  2. Create a blank/dummy script include (this will be our factory).
  3. Get a GlideRecord reference to our factory.
  4. Overwrite the script value from step 3 with the string that represents one of our script includes from step 1.
  5. Run the GlideRecord from step 4 through GlideScopedEvaluator to get an object.

Now in more detail….

Step 1

Create script includes that call the respective APIs. As a standard we’ll refer to these script includes as CompanyOneAPI & CompanyTwoAPI. These script includes contain the unique logic needed to consume the respective company’s API. So far, so good…nothing really ‘new’, right? Here’s the new part: in each script, the method that is called to fire things off is named the same, something like ‘execute’. So to start the integration in each case you would call:

new CompanyOneAPI().execute()

or

new CompanyTwoAPI().execute()
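
For context, here is a minimal sketch of what one of these script includes might look like; the endpoint, headers, and payload are purely illustrative placeholders, and the real logic depends entirely on the vendor’s API:

var CompanyOneAPI = Class.create();
CompanyOneAPI.prototype = {
    initialize: function() {},

    // Shared entry point: every company API class exposes execute()
    execute: function(data) {
        var request = new sn_ws.RESTMessageV2();
        request.setHttpMethod('POST');
        request.setEndpoint('https://companyone.example.com/api/incidents'); // placeholder endpoint
        request.setRequestHeader('Content-Type', 'application/json');
        request.setRequestBody(JSON.stringify(data));
        var response = request.execute();
        return response.getBody();
    },

    type: 'CompanyOneAPI'
};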

Step 2

Now create a ‘dummy’ script include. You can name it anything, but something descriptive is always recommended and helpful. The script include does NOT need any actual script in it, but you can keep the default script generated when creating a new script include. This script include will serve as our ‘factory’: it takes some input and gives us a ‘product’. Here’s an example:

Name: Factory

Description: Just used by "factories" as a record for dynamically creating other objects.

Script:

var Factory = Class.create();
Factory.prototype = {
     /*
     This script include is used purely as a placeholder to build other objects for instantiation. It is referenced in other script includes; the script attribute is 'written' to but never saved. That
     script is then executed to dynamically give the object that is needed based on the attributes passed into the "factory".
     */
     initialize: function() {},
     type: 'Factory'
};

 

Step 3

Now that we have our “factory” script include we need to get a GlideRecord to our Factory script include:

var grFactory = new GlideRecord("sys_script_include");
grFactory.get("api_name", "Factory");

Step 4

Next, set the script value of the script include from step 3 to the name of one of the script includes that does the actual integration (created in step one):
grFactory.script = "CompanyOneAPI";
Notes:
  • If “CompanyOneAPI” is in a specific scope, that scope must be included as well (i.e. <scope_name>.CompanyOneAPI).
  • We do NOT ever save ( .update() ) the grFactory GlideRecord.

Step 5

Finally, you take the grFactory and run it through GlideScopedEvaluator to get an instantiated script, and then an object:

var evaluator = new GlideScopedEvaluator();
var model = evaluator.evaluateScript(grFactory, 'script');
var handler = new model();

“handler” can now be used as if you’d done this:

new CompanyOneAPI();

This gives us our ‘product’ from the ‘factory’.

Putting it all together

Why go through all the hassle above? Take a look at step 4 again. We explicitly set the script to be “CompanyOneAPI”…but what if we wrapped steps 3-5 in a utility and allowed the name of the script include we want instantiated to be passed in as a parameter? Something like:

getInstance = function(script_include_name) {
     //Get a reference to the placeholder script include and populate its script field with the name of the script include passed in.
     var grFactory = new GlideRecord("sys_script_include");
     grFactory.get("api_name", "x_snc_trng_util.Factory");
     grFactory.script = script_include_name;
     //Use GlideScopedEvaluator to evaluate the script above, returning an instantiated handler.
     var evaluator = new GlideScopedEvaluator();
     var model = evaluator.evaluateScript(grFactory, 'script');
     var handler = new model();
     return handler;
};
Now we can pass in any script include name and get an instance of it. Couple this with a data model that allows us to look up the right script include based on the company it is for, and we have the foundation for something truly flexible as CompanyThree, CompanyFour, and so on get added. To do the lookup there are a couple of obvious approaches:
  • Add an attribute to core_company to contain the script include string
  • Add a look-up table that ties a reference to the company to the script include string (and possibly some other attribute for further delineation).

As we’ve gone the path of the second option for most of our implementations, we’ll look at that one at a high level:

 

Company (Ref)    Attribute (Str)         Script (Str)
CompanyA         integration.incident    CompanyOneAPI
CompanyB         integration.incident    CompanyTwoAPI

 

Now we have a table we can use to look up the specific API script for a company. In our example the company would come from the caller on the incident.  We would use that in the look-up table to find the right API to use:

var LookUpGr = new GlideRecord("LookupTable");
LookUpGr.addEncodedQuery("company=<company_sys_id>^attribute=integration.incident");
LookUpGr.query();

Then, assuming we found a look-up record we could take the script string and create our handle:

if (LookUpGr.next()) {
     var data = {<JSON object with info needed for API call>};
     //Get an instantiated JavaScript object
     var handler = getInstance(LookUpGr.getValue('script'));
     //Take that object and run its execute method. Each company API should have the same method to call (i.e. execute)
     var response = handler.execute(data); //evaluate response
}

The last line is essentially the ‘client’ that takes the ‘product’ from the factory and uses it.

Now that we have the above framework, when we get the next company that we need to integrate with we’d need to:

  1. Create CompanyThreeAPI (with an execute method) that calls the other system’s API to create/update incidents.
  2. Create a mapping in the look-up table for CompanyThree

Hopefully the above is a pattern you can find useful, as I am sure there are many other applications where it can be applied. Please let me know what you think or if there are any questions!

Exporting Records to CSV Using ServiceNow Flow Designer https://servicenowguru.com/design/exporting-records-to-csv-using-servicenow-flow-designer/

Exporting data from ServiceNow is a common requirement for many organizations, whether for reporting, data analysis, or integration with other systems. One of the powerful features of ServiceNow is the Flow Designer, which allows you to automate processes without writing code. In this post, we will walk through how to create a Flow in ServiceNow that exports records to a CSV file.

Prerequisites

Before we begin, ensure you have the following:

– Admin access to ServiceNow: You’ll need the necessary permissions to create and manage Flows.
– Knowledge of Flow Designer: Basic understanding of how Flow Designer works.
– Records to Export: Identify the records or the table you want to export.

Step 1: Identify the Records to Export

First, determine the table from which you want to export records. For example, you might want to export incident records, change requests, or custom records. Let’s say you want to export incident records that are in the “Resolved” state.

Step 2: Create a New Flow

1. Navigate to Flow Designer: Go to the application navigator and type `Flow Designer`. Click on it to open the Flow Designer interface.

2. Create a New Flow:
– Click on `+ New` to create a new Flow.
– Give your Flow a meaningful name, such as “Export Resolved Incidents to CSV.”

 

Step 3: Add a Trigger

The Flow needs a trigger to start the process. Depending on your requirement, you can choose different types of triggers.

– Scheduled Trigger: This allows you to export records at regular intervals (e.g., daily, weekly).
– Click on `+ Add Trigger`.
– Select `Scheduled` as the trigger type.
– Configure the schedule as per your requirement, for instance, daily at midnight.

Step 4: Add an Action to Fetch Records

Now, you need to add an action to fetch the records that you want to export.

1. Add Action:
– Click on `+ Add Action` under the trigger.
– Choose `ServiceNow Core` as the action category.
– Select `Look Up Records` as the action.

2. Configure the Action:
– Choose the table from which you want to fetch records, e.g., `Incident [incident]`.
– Define the conditions to filter the records. For example, `State is Resolved`.
– Store the output in a variable, for instance, `incident_records`.

Step 5: Add an Action to Create CSV Content

To create a CSV file, you need to convert the fetched records into a CSV format.

1. Add Action:
– Click on `+` below the previous action.
– Choose `Script` as the action category.
– Select `Run Script`.

2. Write the Script:
– In the script editor, write a script that converts the records into CSV format. Here’s a sample script:

(function execute(inputs, outputs) {
    var records = inputs.incident_records;
    var csvContent = "Number,Short Description,State\n";

    records.forEach(function(record) {
        csvContent += record.number + ',' + record.short_description.replace(/,/g, '') + ',' + record.state + '\n';
    });

    outputs.csvContent = csvContent;
})(inputs, outputs);

– This script assumes that you want to export the incident number, short description, and state.
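
Note that the sample script avoids breaking the CSV by stripping commas out of the short description. If you need to preserve the original text, a hedged alternative (not part of the original script) is standard RFC 4180-style quoting inside the forEach loop:

// Wrap each value in double quotes and double any embedded quotes
function csvEscape(value) {
    return '"' + String(value).replace(/"/g, '""') + '"';
}

// e.g. csvEscape('Printer, 3rd floor "HP"') returns "Printer, 3rd floor ""HP"""
csvContent += [record.number, record.short_description, record.state].map(csvEscape).join(',') + '\n';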

 

 

Step 6: Add an Action to Save the CSV File

The next step is to save the CSV content as a file in ServiceNow.

1. Add Action:
– Click on `+` below the previous action.
– Choose `ServiceNow Core`.
– Select `Create Record`.

2. Configure the Action:
– Select `Attachment [sys_attachment]` as the table.
– Set the `Table Name` to the table where you want to attach the file, e.g., `Incident [incident]`.
– Provide a name for the file, e.g., `Resolved_Incidents.csv`.
– Set the `Content` field to the `csvContent` variable from the previous script.
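
Note: writing attachment content through a plain Create Record action can be fiddly depending on your instance. An alternative sketch is to replace steps 5 and 6 with a single Run Script action that writes the file via the GlideSysAttachment API; the target record lookup and the inputs.target_sys_id input are illustrative assumptions, not part of the original Flow:

(function execute(inputs, outputs) {
    // Look up the record the CSV should be attached to (illustrative target)
    var target = new GlideRecord('incident');
    if (target.get(inputs.target_sys_id)) {
        // write(record, fileName, contentType, content) returns the new attachment's sys_id
        var attachment = new GlideSysAttachment();
        outputs.attachment_id = attachment.write(target, 'Resolved_Incidents.csv', 'text/csv', inputs.csvContent);
    }
})(inputs, outputs);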

Step 7: Test the Flow

Before running the Flow in production, it’s crucial to test it.

1. Save the Flow: Ensure all steps are configured correctly and save the Flow.

2. Test the Flow: Click on `Test` to manually trigger the Flow and verify that it works as expected.

Step 8: Activate the Flow

Once you have tested the Flow and confirmed it works correctly, activate it by toggling the Flow status to `Active`.

Conclusion

With this Flow in place, you now have an automated process to export records from ServiceNow to a CSV file. This Flow can be further customized to meet your specific needs, such as exporting different fields, filtering records based on more complex criteria, or integrating the export process with other workflows.

Using ServiceNow’s Flow Designer to export records is a powerful way to automate data handling, saving time and reducing manual effort. Give it a try and see how it can streamline your workflows!

Custom queue event handling in ServiceNow – Implementation steps https://servicenowguru.com/integration/custom-queue-event-handling-servicenow/

Background

Looking at the ServiceNow native processes, one can easily see that a big portion of them are event-based rather than synchronous. This is especially true for processes that are not critical to the user experience or that are not dependencies of other business logic.

In a nutshell, an event is logged in a queue and when system resources are available, the event is picked up and processed by the associated Script Action.

 

Below is a simple visual representation of the process along with an explanation (source: Steven Bell):

0. I register my new event in the Registry, create my Script Action associated with that event, and if needed my Script Include which could be called by the Script Action. Registering my event tells the Worker to listen for that event, and that it will be expected to do something with it.

1. Something executes a gs.eventQueue statement which writes an event record on the queue. BTW, this is not an exhaustive list.

2,3,4. The event worker(s), whose job it is to listen for events listed in the Registry, picks up the event and sees if there is a Script Action(s) associated with the registered event.

5,6. If a Script Action is found to run then it is executed which in turn may execute my Script Include if I choose.

 

Remember the info message that appears when adding a role to a user or a group?

What’s happening behind the scenes: an event is logged to the queue and the roles are added to the group at the first possible moment, when the system has resources for it. Usually this is near real-time, but if higher-priority operations are already queued, this will wait until they free up some processing power.

Now if one is implementing an application based on synchronous logic that occupies almost all the system resources, this may lead to performance implications, slowing down the instance tremendously.

One possible approach in such cases is to shift from synchronous processing to event-based processing, which will lead to better performance.

But since events are being logged (unless another queue is explicitly specified) to the default queue, we might run into performance issues again.

Here comes the custom queue implementation. It is nothing more than a separate queue to which events can be queued explicitly, leveraging the fifth parameter of gs.eventQueue() API (more on that later).

 

Implementation

The implementation process is similar to the normal event-based logic implementation. We need to have:

  • An event registered in the event registry
  • A Business rule or any other logic to fire the event
  • A Script action to process the event
  • Custom queue processor

I will not discuss the first three, because these are pretty straightforward, and docs are easily available.
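
Still, for orientation, here is a minimal sketch of the trigger and handler, assuming an event registered as custom.incident.process (all names here are illustrative):

// Business Rule (e.g. after insert/update on Incident): fire the event
gs.eventQueue('custom.incident.process', current, current.getValue('number'), null);

// Script Action bound to 'custom.incident.process': runs when the event is processed.
// Inside a Script Action, 'current' is the record the event was queued with,
// and event.parm1 / event.parm2 hold the two string parameters.
gs.info('Processing ' + event.parm1 + ' for ' + current.getDisplayValue());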

Custom queue processor implementation

The easiest way to create a processor for a custom queue is to:

  • go to System Scheduler -> Scheduled Jobs -> Scheduled Jobs
  • find a job with Trigger type = interval (e.g. ‘text index events process’)

  • change the name (it can be anything) and replace ‘text_index’ with the name of your custom queue inside the fcScriptName=javascript\:GlideEventManager(<HERE>).process(); line
  • set Next action to be in the near future, e.g. 30 seconds from the current moment (this is very important in order to get the job running)

  • (optional) edit the Repeat interval (a short repeat interval may have some negative impact on system performance, but at the same time, the lower the repeat interval, the sooner your event will be picked up and processed)
  • Right click -> Insert and stay! Do not Save/Update!

You can have one or more custom queues, depending on the purpose. These must be aligned with the system resources – nodes, semaphores, workers. I will not go deeper on these, more information can be found in the Resources chapter below.

Logging an event to a specific (custom) queue

The gs.eventQueue() API accepts 5 parameters:

  • Event name
  • GlideRecord object
  • Param1
  • Param2
  • (optional) queue

This fifth optional parameter ‘queue’ is the one that tells the system to which event queue an event should be logged.

To log an event to the custom queue we have created above (‘custom_queue_one’) we can use the following line of code:

gs.eventQueue('event.name', grSomeObject, null, null, 'custom_queue_one');

 

NB: queue name (fifth parameter) must be exactly the same as the one we’ve passed to the GlideEventManager during the process creation above.

Everything else (Script Actions, etc.) is the same as with normal event logging.

 

Good practices

  • Just because you can, doesn’t mean you should – this implementation is applicable only to cases where huge amounts of records must be processed (see Performance chapter)
  • Naming is important – give your queues and processors readable names
  • For optimal performance, multiple custom queues can be created to handle a particular event. In this case, the event logging must be done in a way that ensures even distribution between the queues. To better organize these, one possible approach can be to:
    • Create a script include, holding an array with the names of your event queues
    • Use the following line of code to randomly distribute events to the queues:

gs.eventQueue('event.name', grSomeObject,  null, null, event_queues[Math.floor(Math.random()*event_queues.length)]);

where event_queues is an array containing the names of your queues
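
A minimal sketch of that script include and its call site follows; the queue names are illustrative and must match the names passed to GlideEventManager in your custom queue processors:

// Script include holding the custom queue names
var CustomQueueConfig = Class.create();
CustomQueueConfig.prototype = {
    initialize: function() {},
    // Each name must match a processor's GlideEventManager('<name>') argument
    getQueues: function() {
        return ['custom_queue_one', 'custom_queue_two', 'custom_queue_three'];
    },
    type: 'CustomQueueConfig'
};

// Call site: pick a queue at random for even distribution
var event_queues = new CustomQueueConfig().getQueues();
gs.eventQueue('event.name', grSomeObject, null, null, event_queues[Math.floor(Math.random() * event_queues.length)]);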

 

Performance

  • Even though we implement this approach to achieve performance, for a low number of transactions it does not yield any gain because of the Repeat interval – the longer it is, the longer the overall wait time will be
  • For a large number of transactions (thousands of records), the achieved performance gain can be really significant. In one of my implementations, I was able to achieve 30x faster execution.

 

More information

QR Code-Based Incident Creation from URL Parameters https://servicenowguru.com/client-scripts-scripting/qr-code-based-incident-creation-from-url-parameters/

Imagine you could report issues with just a quick scan of a QR code. In this article, I’ll show you how to set up a system in ServiceNow that lets users scan QR codes to create incident reports. This method makes it easy to report problems with meeting rooms, printers, or any other equipment in your office.

Use Case: Reporting Incidents with QR Codes

Let’s start with the use case. Picture QR codes placed in various locations like meeting rooms, printers, or other office equipment. These QR codes contain URLs with parameters that correspond to the sys_id of the Configuration Item (CI) or asset in the ServiceNow CMDB. When a user scans the QR code, it directs them to an Incident Record Producer with important fields such as Location and Asset Information already filled in. This links the reported issue directly to the correct asset and location, allowing the helpdesk team to quickly identify and resolve the problem, while freeing the user from having to manually enter all the details.

Technical Solution: Catalog Client Script for Parsing URL Parameters

To make this work, we need a catalog client script that reads the URL parameters and maps them to variables with the same name as the “key” in the URL. Here’s the script:

Catalog Client Script

/**
 * Populates catalog variable fields with values from URL parameters.
 * @function
 * @throws {Error} Throws an error if an unexpected situation occurs.
 */
function onLoad() {
    try {
        /** Get the parameters from the URL */
        var _window = window ? window : this.window;
        var urlParams = new URLSearchParams(_window.location.search);

        /** Iterate over each parameter in the URL */
        urlParams.forEach(function(value, key) {
            try {
                /** Check if a variable with the same name exists in the catalog form */
                var variable = g_form.getControl(key);
                if (variable) {
                    /** Set the value in the catalog variable field */
                    g_form.setValue(key, value);
                }
            } catch (innerError) {
                /** Handle errors that occur during iteration */
                console.error('Error during parameter iteration:', innerError);
            }
        });
    } catch (error) {
        /** Handle errors that occur during the main execution */
        console.error('An unexpected error occurred:', error);
        /** You may choose to throw the error again or handle it differently based on your requirements. */
    }
}

How the Script Works

  1. Get the Parameters from the URL: The script starts by checking if the window object is available. It then creates a URLSearchParams object from the URL’s query string, which allows easy access to the URL parameters.
  2. Iterate Over Each Parameter: The script uses the forEach method to iterate over each key-value pair in the URL parameters.
  3. Check for Matching Variables: For each parameter, the script checks if there is a corresponding variable in the catalog form with the same name as the key. It does this using the g_form.getControl(key) method.
  4. Set the Variable Value: If a matching variable is found, the script sets its value to the corresponding value from the URL parameter using the g_form.setValue(key, value) method.
  5. Error Handling: The script includes error handling to catch and log any errors that occur during the main execution or the iteration over parameters. This ensures that any issues are logged for debugging purposes.

Implementation Steps

Here’s how you can set up the QR code solution:

  1. Create a Variable Set: Go to Service Catalog > Catalog Variables > Variable Sets and create a new variable set. Name it something generic and understandable, such as “URLParameterMapper” or “URLVariableBinder,” following your organization’s naming conventions. Add the catalog client script provided above to this variable set.
  2. Generate QR Codes: Create QR codes that encode URLs pointing to the Incident Record Producer. Make sure the URLs include parameters for the CI/asset sys_id (an example URL format is shown after these steps).
  3. Create Record Producers or Catalog Items: Set up record producers or catalog items with their own variables or variable sets. Either match the variable names to the keys in the URL parameters, or create the QR code URL so that the keys match the variable names in the record producer. Otherwise, the variables won’t map correctly and won’t populate from the URL.
  4. Add the Variable Set: Add the variable set from step 1 to the record producers or catalog items created in step 3.
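
For illustration, a QR code URL for this setup might look like the following. This assumes an Employee Center portal using the standard sc_cat_item page and a Record Producer with variables named cmdb_ci and location – every value is a placeholder to substitute with your own:

https://<instance>.service-now.com/esc?id=sc_cat_item&sys_id=<record_producer_sys_id>&cmdb_ci=<ci_sys_id>&location=<location_sys_id>

When the form loads, the catalog client script above copies the cmdb_ci and location parameters into the variables with those same names.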

Conclusion

By following these steps, users can easily raise incidents by scanning QR codes, with the incident form pre-filled with relevant information from the URL parameters. This streamlines the process of incident creation and ensures accurate reporting of issues related to specific CIs or assets.

Leveraging ServiceNow with Highcharts: Transforming Data into Insight https://servicenowguru.com/service-portal/leveraging-servicenow-with-highcharts-transforming-data-into-insight/

In today’s data-driven world, organizations are constantly seeking innovative solutions to transform complex datasets into actionable insights. ServiceNow, a leading cloud-based platform for digital workflows, provides extensive data handling and management capabilities. However, to truly present data effectively, integrating a robust tool like Highcharts can elevate the presentation and interpretation of that data. This article explores how combining ServiceNow with Highcharts can transform raw data into clear, insightful visual representations, aiding businesses in making better-informed decisions. We’ll delve into the benefits of this integration, practical use cases, and tips for seamless implementation, setting the stage for a more intuitive and strategic approach to enterprise data analysis.

Highcharts is a flexible, feature-rich JavaScript library that allows developers to create interactive charts for the web. By integrating Highcharts with ServiceNow, you can transform static data into engaging visualizations, providing your users with a more intuitive, interactive experience. In this article, I’ll walk you through the process of integrating Highcharts with ServiceNow, showcasing use cases, and tips to get the most out of this powerful combination.

Why Integrate Highcharts with ServiceNow?

ServiceNow provides basic reporting and visualization features, but for more advanced charting needs—such as interactive dashboards or detailed drill-down capabilities—Highcharts offers several advantages:

  • Customization: Highcharts offers a wide range of chart types (line, bar, pie, scatter, etc.), and the ability to customize almost every aspect of the chart.
  • Interactivity: With Highcharts, users can interact with charts by zooming, clicking data points, and receiving real-time feedback, making data exploration more engaging.
  • Real-time Data: You can easily configure charts to update in real-time, providing up-to-the-minute insights.
  • Extensibility: Highcharts offers numerous plugins and extensions, allowing you to extend functionality as needed.

By combining ServiceNow’s robust data management with Highcharts’ visualization capabilities, organizations can create more effective reports and dashboards that drive better decision-making.

 

 

Licensing:

For this article, which is for educational purposes, I will download version 11.4.3 of the library and use it in my PDI. For commercial use, it is recommended that you purchase licenses directly through their website. Another option is to use the version available within the platform, which in my research (<instance-name>/scripts/glide-highcharts.js) is 9.3.2.

Using the paid version:

In my article “Importing Excel to ServiceNow” I showed how to import a library into SN. Just follow the same steps to import the Highcharts library.

Using the version available on the platform:

Just create a JS Include in the Portal Theme that points to the library already available on the platform (the /scripts/glide-highcharts.js path mentioned above).

 

Basic usage example:

For this first example I will create a simple widget with hard-coded data just to show how to use the library. To make things easier I will use a basic example that can be found in the documentation. This example was made using the version of the library that already comes with the platform.

 

Widget Basic Highchart Example
Widget Component: HTML Template
Script:

<div class="page-header">
  <h1>ServiceNow Guru - Basic Highchart Example</h1>
</div>

<div id="container" style="width:100%; height:400px;"></div>

 

Widget Basic Highchart Example
Widget Component: Client Script
Script:

var chartConfigObj = {
  chart: {
    type: 'bar',
  },
  title: {
    text: 'Fruit Consumption',
  },
  xAxis: {
    categories: ['Apples', 'Bananas', 'Oranges'],
  },
  yAxis: {
    title: {
      text: 'Fruit eaten',
    },
  },
  series: [
    {
      name: 'Jane',
      data: [1, 0, 4],
    },
    {
      name: 'John',
      data: [5, 7, 3],
    },
  ],
};

var chart = Highcharts.chart('container', chartConfigObj);

 

In this example, we see that to build the chart we need to pass an object containing the chart settings we want. The library is huge and I could spend hours here talking about each item in this object and about each customization possibility, but the library documentation is incredibly complete and can answer all questions.

 

Now let’s make our graph a bit more realistic and bring real data into it.

 

Widget Highchart Pie Example
Widget Component: Server Script
Script:

//variables that will be used to customize the graph
data.font_family = options.font_family || 'SourceSansPro,Helvetica,Arial,sans-serif';
data.fill_color = options.fill_color || '#fff';

data.title = options.title || 'Chart Title';
data.title_color = options.title_color || '#fff';
data.title_size = options.title_size || '14px';
data.title_weight = options.title_weight || 'normal';

data.label_color = options.label_color || '#fff';
data.label_size = options.label_size || '12px';
data.label_weight = options.label_weight || 'normal';
data.label_connector_color = options.label_connector_color || '#fff';

data.slice_label_size = options.slice_label_size || '14px';
data.slice_label_outline = options.slice_label_outline || 'transparent';
data.slice_label_opacity = options.slice_label_opacity || 1;
data.slice_label_weight = options.bar_label_weight || 'bold';

data.table = options.table || 'change_request';
data.agg_field = options.agg_field || 'risk';

data.graphData = [];

//get total requests
var reqCount = new global.GlideQuery(data.table).count();

// get the total requests by risk
var agg = new GlideAggregate(data.table);
agg.addAggregate('COUNT', data.agg_field);
agg.orderBy(data.agg_field);
agg.query();
while (agg.next()) {

    var count = parseInt(agg.getAggregate('COUNT', data.agg_field));
    var percentTemp = 100 * count / reqCount;
    var percent = Math.round(percentTemp * 100) / 100;

    var objReq = {
        name: agg.getDisplayValue(data.agg_field),
        y: count,
        percent: percent.toFixed(2)
    };

    // if the risk is Very High the pie should be sliced and red
    if (agg.getValue(data.agg_field) == 1) {
        objReq.sliced = true;
        objReq.selected = true;
        objReq.color = 'red';
    }

    data.graphData.push(objReq);

}

 

Widget Highchart Pie Example
Widget Component: Client Script
Script:

var chartConfigObj = {
  chart: {
    type: "pie",
    backgroundColor: c.data.fill_color,
  },
  title: {
    text: c.data.title,
    style: {
      fontFamily: c.data.font_family,
      color: c.data.title_color,
      fontSize: c.data.title_size,
      fontWeight: c.data.title_weight,
    },
  },
  tooltip: {
    formatter: function () {
      return (
        "<b>" +
        this.point.name +
        " </b>: " +
        this.y +
        " </b> (" +
        this.point.percent +
        "%)"
      );
    },
  },
  plotOptions: {
    series: {
      allowPointSelect: true,
      cursor: "pointer",
      dataLabels: [
        {
          enabled: true,
          distance: 20,
          connectorColor: c.data.label_connector_color,
          formatter: function () {
            return this.point.name + " </br>" + this.point.percent + "%";
          },
          style: {
            fontFamily: c.data.font_family,
            color: c.data.label_color,
            fontSize: c.data.label_size,
            fontWeight: c.data.label_weight,
            textOutline: c.data.bar_label_outline,
          },
        },
        {
          enabled: true,
          distance: -30,
          format: "{y}",
          style: {
            textOutline: c.data.slice_label_outline,
            fontSize: c.data.slice_label_size,
            opacity: c.data.slice_label_opacity,
            fontWeight: c.data.slice_label_weight,
            textAlign: "center",
          },
        },
      ],
    },
  },
  series: [
    {
      name: "Percentage",
      colorByPoint: true,
      data: c.data.graphData,
      dataLabels: {
        style: {
          color: c.data.label_color,
          fontSize: c.data.label_size,
          textOutline: c.data.slice_label_outline,
          fontWeight: c.data.label_weight,
        },
      },
    },
  ],
};

var chart = Highcharts.chart("containerPie", chartConfigObj);


Widget Highchart Pie Example
Widget Component: HTML Template
Script:

<div id="containerPie"></div>


Widget Highchart Pie Example
Widget Component: Option Schema
JSON:

[
    {
        "name": "table",
        "section": "other",
        "default_value": "change_request",
        "label": "Table",
        "type": "string"
    },
    {
        "name": "agg_field",
        "section": "other",
        "default_value": "risk",
        "label": "Field used to aggregate",
        "type": "string"
    },
    {
        "name": "font_family",
        "section": "other",
        "default_value": "SourceSansPro,Helvetica,Arial,sans-serif",
        "label": "Font family",
        "type": "string"
    },
    {
        "name": "fill_color",
        "section": "other",
        "default_value": "#fff",
        "label": "Fill color",
        "type": "string"
    },
    {
        "name": "title",
        "section": "other",
        "default_value": "Chart title",
        "label": "Title",
        "type": "string"
    },
    {
        "name": "title_color",
        "section": "other",
        "default_value": "#fff",
        "label": "Title color",
        "type": "string"
    },
    {
        "name": "title_size",
        "section": "other",
        "default_value": "14px",
        "label": "Title Size",
        "type": "string"
    },
    {
        "name": "title_weight",
        "section": "other",
        "default_value": "normal",
        "label": "Title weight",
        "type": "string"
    },
    {
        "name": "label_color",
        "section": "other",
        "default_value": "#fff",
        "label": "Label Color",
        "type": "string"
    },
    {
        "name": "label_size",
        "section": "other",
        "default_value": "11px",
        "label": "Label size",
        "type": "string"
    },
    {
        "name": "label_weight",
        "section": "other",
        "default_value": "normal",
        "label": "Label weight",
        "type": "string"
    },
    {
        "name": "label_connector_color",
        "section": "other",
        "default_value": "#fff",
        "label": "Label connector color",
        "type": "string"
    }
]

 

Now, with everything shown in this article, you can build complex dashboards like the one we saw at the beginning.


Conclusion

Integrating Highcharts with ServiceNow allows you to take your data visualization to the next level, enabling more interactive, detailed, and dynamic reporting. With its wide range of customization options and ability to handle real-time data, Highcharts is an ideal tool for organizations looking to enhance their ServiceNow dashboards and reports. Whether you’re visualizing incident trends, service performance, or asset utilization, this integration can provide critical insights to drive better decision-making.

By following the steps in this article, you’ll be well on your way to creating powerful, interactive charts in ServiceNow, enabling users to explore and understand complex data sets with ease.

The post Leveraging ServiceNow with Highcharts: Transforming Data into Insight appeared first on ServiceNow Guru.

]]>
https://servicenowguru.com/service-portal/leveraging-servicenow-with-highcharts-transforming-data-into-insight/feed/ 3
ServiceNow Checklist Automation: Simplifying Catalog Task Management https://servicenowguru.com/scripting/servicenow-checklist-automation-simplifying-catalog-task-management/ https://servicenowguru.com/scripting/servicenow-checklist-automation-simplifying-catalog-task-management/#comments Wed, 02 Oct 2024 10:00:08 +0000 https://servicenowguru.com/?p=15533 Automating checklist creation for tasks in ServiceNow can save significant time and ensure consistency across your IT processes. This guide will show you how to implement this using a Script Include, Business Rule (or Flow Designer), and System Properties. You will use the checklist and checklist_item tables to automatically create and manage checklists for catalog tasks. The system property stores checklist configurations in

The post ServiceNow Checklist Automation: Simplifying Catalog Task Management appeared first on ServiceNow Guru.

]]>
Automating checklist creation for tasks in ServiceNow can save significant time and ensure consistency across your IT processes. This guide will show you how to implement this using a Script Include, Business Rule (or Flow Designer), and System Properties.

You will use the checklist and checklist_item tables to automatically create and manage checklists for catalog tasks. The system property stores checklist configurations in a JSON format, allowing you to customize checklist items and define their mandatory status for different catalog items.

If you need to manage a large number of checklists, you can create a custom table in your scoped application to store the configurations. A custom table will help you manage the data more easily and offer better flexibility for future updates. In this case, you will need to modify the Script Include to retrieve checklist data from the new table instead of the system property.

The automation is triggered by a Business Rule in the Global Scope, which ensures that tasks cannot be closed unless all mandatory checklist items are completed. By automating this process, you can maintain consistency and reduce manual work, especially in IT service management workflows.

Key Components

System Property

The system property, catalogTask.checklists, stores the configuration for the checklists. This property uses a JSON structure to define which checklists and items are associated with specific catalog items. Each catalog item in the configuration includes the following:

  • Country or Catalog: You can group the checklist by Catalog or Country at the highest level.
  • Catalog Item Sys ID: Each catalog item is linked to specific service request tasks, and each task can have its own checklist.
  • Checklist Name: The name of the checklist associated with the catalog item.
  • Items: A list of individual checklist items, where each item includes:
    • name: The name of the checklist item.
    • order: The display order of the checklist item.
    • mandatory: A boolean that determines whether the item must be completed before the task can be closed.

If you are handling many checklists, using a custom table within your scoped application will scale better. The custom table can store the checklist data, and you can modify the Script Include to pull data from that table instead of the system property.
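
If you take that route, the lookup inside the Script Include (shown later in this article) might look something like the sketch below. This is a minimal illustration, assuming a hypothetical custom table named x_916860_autocklst_checklist_config with columns catalog_item, checklist_name, and items_json; it is not the article's implementation.

getChecklistData: function(catalogItemSysId) {
    /* Hypothetical custom table holding one configuration row per catalog item */
    var configGR = new GlideRecord('x_916860_autocklst_checklist_config');
    configGR.addQuery('catalog_item', catalogItemSysId);
    configGR.query();
    if (!configGR.next())
        return null;

    try {
        /* items_json stores the same items array used in the system property */
        return {
            checklist_name: configGR.getValue('checklist_name'),
            items: JSON.parse(configGR.getValue('items_json'))
        };
    } catch (e) {
        gs.error('Invalid items JSON for catalog item ' + catalogItemSysId + ': ' + e.message);
        return null;
    }
},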

Checklist and Checklist Item Tables

The checklist and checklist_item tables store the checklists and their items for each catalog task. When you create a task, the Business Rule checks the system property to determine if a checklist is required. If the checklist data exists, the script inserts records into the checklist table and adds individual items to the checklist_item table.

The checklist table links the checklist to the specific catalog task using the document field. The checklist_item table stores each checklist item with details such as the name, order, and completed status. The complete field tracks whether each item has been completed.
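
As a quick illustration of how these two tables connect, the following background-script sketch (using only the document and complete fields described above; the sys_id value is a placeholder) lists the incomplete items on a task's checklist:

var taskSysId = '<catalog_task_sys_id>'; // placeholder

var checklistGR = new GlideRecord('checklist');
checklistGR.addQuery('document', taskSysId);
checklistGR.query();
if (checklistGR.next()) {
    var itemGR = new GlideRecord('checklist_item');
    itemGR.addQuery('checklist', checklistGR.getUniqueValue());
    itemGR.addQuery('complete', false);
    itemGR.orderBy('order');
    itemGR.query();
    while (itemGR.next()) {
        gs.info('Incomplete item: ' + itemGR.getValue('name'));
    }
}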

You can also use the Checklist Template as an alternative, provided that your governance structure allows for easy management of these configurations in production environments. This approach may simplify the process, especially if you’re looking to manage checklist setups more efficiently.

Business Rule in Global Scope

The Business Rule triggers the checklist creation process when you create a new catalog task and validates the checklist completion when the task is updated. The rule queries the checklist and checklist_item tables to make sure that all mandatory items are complete before allowing the task to close.

The Business Rule needs to be in the Global Scope to use the setAbortAction(true) method, which prevents the task from closing if any mandatory checklist items are incomplete. You cannot use this method in scoped applications because they do not allow aborting database transactions directly.

System Property Configuration

Create a system property named catalogTask.checklists in your scoped application (the Script Include below reads it with its scope prefix, x_916860_autocklst.catalogTask.checklists) and store your JSON configuration defining the checklists and their items.

Example JSON:

{
  "checklists": {
    "<Country or Catalog>": {
      "sys_id_of_Item1": {
        "checklistName": {
          "items": [
            {
              "name": "Item1 Name1",
              "order": 1,
              "mandatory": true
            },
            {
              "name": "Item1 Name2",
              "order": 2,
              "mandatory": false
            },
            {
              "name": "Item1 Name3",
              "order": 3,
              "mandatory": true
            }
          ]
        }
      },
      "sys_id_of_Item2": {
        "checklistName": {
          "items": [
            {
              "name": "Item2 Name1",
              "order": 1,
              "mandatory": false
            },
            {
              "name": "Item2 Name2",
              "order": 2,
              "mandatory": true
            }
          ]
        }
      }
    }
  }
}


Script Include

The ChecklistCreator Script Include creates and manages checklists for catalog tasks. It retrieves the checklist configuration from the system property and determines whether to create a checklist for a specific catalog task.

When a checklist needs to be created, the script uses the catalog item sys_id to retrieve the relevant checklist data from the system property. If the data exists, the script automatically creates the checklist and its items for the task.

The system property defines which checklist items are mandatory. The script checks the actual items in the checklist_item table to ensure they are marked as complete. If all mandatory items are complete, the task proceeds. If not, the script blocks the task from closing and displays a message to the user.

Scoped Script Include Name: ChecklistCreator

var ChecklistCreator = Class.create();
ChecklistCreator.prototype = {
    initialize: function() {},

    /**
     * Retrieves the checklist data for a catalog item from the system property.
     * 
     * @param {String} catalogItemSysId - The sys_id of the catalog item.
     * @returns {Object|null} - Returns the checklist data as a JSON object if it exists, null otherwise.
     */
    getChecklistData: function(catalogItemSysId) {
        try {
            /* Get the JSON from the system property */
            var jsonString = gs.getProperty('x_916860_autocklst.catalogTask.checklists');
            if (!jsonString) {
                gs.error('System property x_916860_autocklst.catalogTask.checklists is not set or is empty');
                return null;
            }

            /* Parse the JSON string */
            var checklists;
            try {
                checklists = JSON.parse(jsonString);
            } catch (e) {
                gs.error('Failed to parse JSON from system property: ' + e.message);
                return null;
            }

            /* Retrieve the checklist data for the catalog item.
               Walks the structure shown in the example JSON:
               checklists -> <Country or Catalog> -> <catalog item sys_id> -> <checklist name> -> items */
            var groups = checklists.checklists || {};
            for (var groupKey in groups) {
                var group = groups[groupKey];
                if (group.hasOwnProperty(catalogItemSysId)) {
                    for (var checklistName in group[catalogItemSysId]) {
                        /* Flatten to the shape the other methods expect */
                        return {
                            checklist_name: checklistName,
                            items: group[catalogItemSysId][checklistName].items
                        };
                    }
                }
            }

            return null;
        } catch (e) {
            gs.error('Unexpected error occurred in getChecklistData: ' + e.message);
            return null;
        }
    },

    /**
     * Creates checklists and items for a catalog task based on the retrieved checklist data.
     * 
     * @param {String} catalogTaskSysId - The sys_id of the catalog task.
     * @param {Object} checklistData - The checklist data retrieved from the system property.
     */
    createChecklistsForCatalogTask: function(catalogTaskSysId, checklistData) {
        try {
            if (!checklistData) {
                gs.error('No checklist data provided for catalog task: ' + catalogTaskSysId);
                return;
            }

            var items = checklistData.items;
            if (!items) {
                gs.error('Checklist items not found for the specified catalog task and checklist name');
                return;
            }

            /* Check if a checklist already exists for the catalog task */
            var existingChecklistGR = new GlideRecord('checklist');
            existingChecklistGR.addQuery('document', catalogTaskSysId);
            existingChecklistGR.query();
            if (existingChecklistGR.next()) {
                gs.info('Checklist already exists for catalog task: ' + catalogTaskSysId);
                return;
            }

            /* Create checklist record */
            var checklistSysId;
            try {
                var checklistGR = new GlideRecord('checklist');
                checklistGR.initialize();
                checklistGR.document = catalogTaskSysId;
                checklistGR.name = checklistData.checklist_name;
                checklistGR.table = 'sc_task';
                checklistSysId = checklistGR.insert();
            } catch (e) {
                gs.error('Failed to create checklist record for catalog task: ' + e.message);
                return;
            }

            if (!checklistSysId) {
                gs.error('Failed to create checklist record for catalog task: ' + catalogTaskSysId);
                return;
            }

            /* Create checklist items */
            try {
                items.forEach(function(item) {
                    var checklistItemGR = new GlideRecord('checklist_item');
                    checklistItemGR.initialize();
                    checklistItemGR.checklist = checklistSysId;
                    checklistItemGR.name = item.name;
                    checklistItemGR.order = item.order;
                    checklistItemGR.mandatory = item.mandatory;
                    checklistItemGR.insert();
                });
            } catch (e) {
                gs.error('Failed to create checklist items: ' + e.message);
                return;
            }

            gs.info('Checklist and items created successfully for catalog task: ' + catalogTaskSysId);
        } catch (e) {
            gs.error('Unexpected error occurred in createChecklistsForCatalogTask: ' + e.message);
        }
    },

    /**
     * Checks if all mandatory checklist items are checked for a given catalog task.
     * 
     * @param {String} catalogTaskSysId - The sys_id of the catalog task.
     * @param {String} checklistSysId - The sys_id of the checklist.
     * @returns {Boolean} - Returns true if all mandatory checklist items are completed, false otherwise.
     */
    areMandatoryChecklistItemsChecked: function(catalogTaskSysId, checklistSysId) {
        try {
            // Retrieve the catalog task record
            var catalogTaskGR = new GlideRecord('sc_task');
            if (!catalogTaskGR.get(catalogTaskSysId)) {
                gs.error('Catalog task record not found: ' + catalogTaskSysId);
                return false;
            }

            var ritmSysId = catalogTaskGR.getValue('request_item'); // Get the RITM sys_id
            var ritmGR = new GlideRecord('sc_req_item');
            if (!ritmGR.get(ritmSysId)) {
                gs.error('RITM record not found for catalog task: ' + catalogTaskSysId);
                return false;
            }

            var catalogItemSysId = ritmGR.getValue('cat_item'); // Get the catalog item sys_id

            // Get checklist data for the catalog item from the system property
            var checklistData = this.getChecklistData(catalogItemSysId);
            if (!checklistData) {
                gs.error('No checklist data found for catalog item: ' + catalogItemSysId);
                return false;
            }

            var items = checklistData.items;
            if (!items || items.length === 0) {
                gs.error('No checklist items found for catalog item: ' + catalogItemSysId);
                return false;
            }

            // Check if all mandatory checklist items are completed
            for (var i = 0; i < items.length; i++) {
                var checklistItem = items[i];
                if (checklistItem.mandatory) { // Only check mandatory items
                    var checklistItemGR = new GlideRecord('checklist_item');
                    checklistItemGR.addQuery('checklist', checklistSysId);
                    checklistItemGR.addQuery('name', checklistItem.name); // Match the item name from property
                    checklistItemGR.query();

                    if (!checklistItemGR.next() || checklistItemGR.getValue('complete') !== '1') { // Use 'complete' field to check completion
                        gs.info('Mandatory checklist item not completed: ' + checklistItem.name);
                        return false;
                    }
                }
            }

            // All mandatory items are completed
            return true;
        } catch (e) {
            gs.error('Unexpected error occurred in areMandatoryChecklistItemsChecked: ' + e.message);
            return false;
        }
    },


    type: 'ChecklistCreator'
};


Business Rule

The Global Scope Business Rule runs when a catalog task is created (inserted) or updated (on state change). The ChecklistCreator Script Include is called to manage checklist creation and validation.

  • On Insert: When you create a catalog task, the Business Rule retrieves the requested item (RITM) linked to the task. It uses the catalog item sys_id to get the corresponding checklist data from the system property. If the data is found and the short description of the task matches the checklist name, the checklist is created.
  • On Update (When Task Is Closing): When the catalog task is updated to a closed state, the Business Rule checks whether all mandatory checklist items are complete. The associated checklist record is queried, and the ChecklistCreator Script Include validates the completion status of mandatory items. If any mandatory item is incomplete, the task closure is blocked, and an error message is shown.

Global Scope Business Rule
Name: Create and Validate Checklists
When: Before INSERT and UPDATE
Table: Catalog Task
Condition:
Add appropriate condition so this BR executes for your Catalog items

/**
 * Business Rule to create and validate checklists for catalog tasks.
 * 
 * This Business Rule triggers on both insert and update operations:
 * - On insert: it creates a checklist for the catalog task if defined in the system property.
 * - On update: it validates if all mandatory checklist items are completed before closing the task.
 * 
 * @param {GlideRecord} current - The current record being processed (sc_task).
 * @param {GlideRecord} previous - The previous version of the record (null when async).
 */
(function executeRule(current, previous /*null when async*/) {
    try {
        // Instantiate the ChecklistCreator Script Include from the scoped application
        var checklistCreator = new x_916860_autocklst.ChecklistCreator();

        /**
         * Case 1: Checklist Creation on Insert
         * 
         * If a new catalog task is inserted, the script checks if a checklist needs to be created 
         * based on the catalog item and the short description of the task.
         */
        if (current.operation() == 'insert') {
            // Get the associated Requested Item (RITM) record
            var ritmGR = current.request_item.getRefRecord();
            if (ritmGR.isValidRecord()) {
                // Get the catalog item sys_id from the RITM record
                var catalogItemSysId = ritmGR.getValue('cat_item');
                // Retrieve checklist data from the system property
                var checklistData = checklistCreator.getChecklistData(catalogItemSysId);

                // If checklist data is found and the short description matches the checklist name, create the checklist
                if (checklistData && current.getValue("short_description") == checklistData.checklist_name) {
                    checklistCreator.createChecklistsForCatalogTask(current.sys_id, checklistData);
                }
            }
        }

        /**
         * Case 2: Checklist Validation on Update
         * 
         * If the catalog task is being updated to the Closed Complete state, the script checks 
         * if all mandatory checklist items are completed before allowing the task to close.
         */
        if (current.operation() == 'update' && current.state.changesTo('3')) {  // '3' represents Closed Complete state
            // Retrieve the associated checklist record for the current task
            var checklistGR = new GlideRecord('checklist');
            if (checklistGR.get('document', current.sys_id)) {
                // Validate if all mandatory checklist items are checked
                var allChecked = checklistCreator.areMandatoryChecklistItemsChecked(current.sys_id, checklistGR.sys_id);

                // If not all mandatory checklist items are checked, abort the closure and show an error message
                if (!allChecked) {
                    gs.addErrorMessage('There are mandatory checklist items that are not completed.');
                    current.setAbortAction(true);  // Prevent closure if mandatory items are not checked
                }
            }
        }

    } catch (e) {
        // Log any unexpected errors that occur during the execution of the Business Rule
        gs.error('Unexpected error in Business Rule Create and Validate Checklists: ' + e.message);
    }
})(current, previous);


Summary

By using ServiceNow’s Script Include and Business Rule, you can automate checklist creation and validation for catalog tasks, ensuring consistency and efficiency. You define checklists dynamically in a system property using a JSON format, which allows you to customize checklist items for each catalog task, including the order, mandatory status, and checklist names.

For smaller setups, the system property works well. However, for larger checklist configurations, you can create a custom table in your scoped application to store the checklist data. This makes managing a large number of checklists easier and more flexible.

With this automated process, you prevent task closures until all mandatory checklist items are completed. This reduces manual work, minimizes errors, and ensures consistent task management across your organization.

The post ServiceNow Checklist Automation: Simplifying Catalog Task Management appeared first on ServiceNow Guru.

]]>
https://servicenowguru.com/scripting/servicenow-checklist-automation-simplifying-catalog-task-management/feed/ 2
Overview of ServiceNow Hardware Asset Management https://servicenowguru.com/hardware-asset-management/overview-of-servicenow-hardware-asset-management/ Mon, 30 Sep 2024 23:26:39 +0000 https://servicenowguru.com/?p=17027 Introduction Hardware asset management is the systematic process of tracking, managing, and optimizing the lifecycle of physical IT assets to maximize their value and minimize risks. Hardware assets are items with financial value whose management involves aspects such as inventory, contracts, cost, deprecation, warranty etc. In contrast, configuration items (CIs) represent components supporting IT service

The post Overview of ServiceNow Hardware Asset Management appeared first on ServiceNow Guru.

]]>
Introduction

Hardware asset management is the systematic process of tracking, managing, and optimizing the lifecycle of physical IT assets to maximize their value and minimize risks. Hardware assets are items with financial value whose management involves aspects such as inventory, contracts, cost, depreciation, and warranty. In contrast, configuration items (CIs) represent components supporting IT service delivery whose operational attributes and relationships need to be managed. Some items, like computers and network devices, are typically managed as both Asset and CI. Effective management of hardware assets is crucial for organizations to remain competitive in today’s rapidly changing digital landscape. ServiceNow Hardware Asset Management (HAM) offers a powerful solution that helps streamline asset tracking, automate asset lifecycle management, optimize costs, and improve regulatory compliance.

Hardware Asset Management (HAM)


Business Benefits

ServiceNow HAM revolutionizes the management of hardware assets throughout their lifecycle, from asset planning to disposal, delivering substantial benefits like those highlighted below:

  • Cost Optimization: ServiceNow HAM gives visibility into asset inventory data, insights about the total cost of ownership for assets, better automation of asset operations, complete lifecycle management of assets, depreciation calculations, etc. This enables HAM teams to optimize costs by taking actions to improve asset utilization, reduce hardware maintenance expenditures, curb unnecessary purchases, avoid penalties related to non-compliance, etc.
  • Compliance and Risk Mitigation: With its robust auditing and reporting features, ServiceNow HAM improves the capability of organizations to adhere to regulatory requirements, maintain accurate asset records, foster a culture of accountability, identify and mitigate risks related to assets, etc.
  • Operational Efficiency: Streamlined asset tracking helps asset operations teams understand information like current location, user, model, warranty, end of life, contract end dates, etc. It helps to save time and effort for various activities like locating specific assets, identifying whether assets are still covered under warranty, tracking and renewing contracts before expiry, replacing assets before their end of life, etc. Faster identification of affected hardware and availability of various operational attributes of the asset in its associated configuration item record will help the support teams speed up incident management. Automated capture of asset data by scanning via mobile application enhances the efficiency and accuracy of operational activities like audits and disposals.
  • Enhanced Visibility and Decision-Making: The real-time insights about asset status, usage, cost, etc., provided by ServiceNow HAM empower organizations with data-driven decision-making capabilities, enabling proactive planning and optimized resource allocation.
  • Productivity and satisfaction improvements: The enhanced visibility and operational efficiency also translate into higher productivity for the asset operations teams, support teams, procurement teams, finance teams, asset managers, and end users. The availability of self-service catalogs to request assets and efficient management of incidents and requests by the support teams will reduce frustration and increase satisfaction for the end users.
  • Sustainability Management: Timely maintenance and repair of assets help extend their useful life. Thus, it reduces the number of premature asset replacements and hence the frequency of e-waste generated from that. Implementation of a proper hardware asset disposal process followed across the organization will help to ensure that end-of-life assets are recycled or disposed of in an environmentally responsible manner, minimizing the release of hazardous materials into the environment.
  • Cybersecurity improvements: Implementing mechanisms to prevent unauthorized access to asset information can reduce the possibility of data breaches. Effectively managing assets throughout their lifecycle should enable the assets to become less vulnerable to attacks. The presence of a proper asset disposal process which involves activities like backing up and subsequent wiping of software and data from the devices before disposal helps to prevent data losses or data breaches and the security and compliance issues resulting from those.


Implementation Approach and Best Practices

Embarking on the ServiceNow HAM implementation journey involves a structured approach encompassing the following key steps:

  • Assessment: Conduct a thorough examination of the current hardware asset management process and other related processes such as IT service management and configuration management. If the organization is already using ServiceNow, additionally analyze the current state of the instance, i.e., understand the modules currently available on the instance, any existing configurations or customizations related to hardware asset management, the data currently populated, the various data sources, integrations, etc. If the organization is currently using some other asset management tool, analyze that tool to understand how it is configured, what data is populated, how asset operations are managed through it, etc. The assessment will involve discussions with various stakeholders like the asset manager, configuration manager, asset operations teams, procurement teams, finance teams, service management teams, legal teams, etc.
  • Planning: Define the scope and objectives of your initial implementation of HAM. It should cover device types, models, business use cases, operational activities, etc. Prepare a detailed roadmap for the items that you plan to implement in the initial phase and a high-level roadmap of the items that you plan to implement as part of future scope. This roadmap should be revisited periodically and updated based on factors like the latest product features, process best practices, changes in the organization’s business objectives, etc. Based on the findings from the assessment and considering the scope of the initial implementation phase as per the roadmap, prepare a detailed project plan for the initial implementation.  
  • Process definition: The to-be HAM process should be established after understanding the current state of the process and finalizing the scope and objectives of HAM implementation. The process flows should be defined and documented for all the relevant asset lifecycle management areas like asset forecasting, sourcing, procurement, receiving, stock management, deployment, maintenance, repair, replacement, retirement, disposal, etc. The process document should also cover all other important information like business objectives, roles and responsibilities for hardware asset management, governance structure, policies and procedures, integrations with other processes (configuration management, change management, incident management, service request management, etc.), reporting and KPIs, audits, contract management, etc. The process should be aligned with the ServiceNow OOB features as much as possible while ensuring that it meets the client’s business objectives from HAM implementation.
  • Data Preparation: Foundation data like companies, departments, cost centers, locations, users, groups, etc. needs to be populated accurately. This may involve efforts to set up integrations and populate the required foundation data if the HAM implementation is part of a fresh ServiceNow instance implementation program. For implementation of HAM on an instance where this data is already in place, a review of the data followed by creating or updating missing information is sufficient (while giving higher priority to data that will be referenced in asset records). It is also important to finalize the model categories, models, and contracts that are in scope for the initial implementation. This scoping is critical for HAM licensing calculations, as the calculation is based on the HAM resource categories for which we opt in. Reviewing and enhancing the quality of CMDB data is important for the CI classes which have asset classes corresponding to them. For example, the Computer CI class (cmdb_ci_computer) is used to track the operational information of computer assets, which are stored in the Hardware assets table (alm_hardware). So, it is important to reduce data quality issues like duplicate CIs in the Computer CI class.
  • Plugin considerations: Only basic capabilities like Asset records tracking, Model records tracking, Stockrooms, Purchase orders, and Transfer orders are supported by the asset management features available in the base ServiceNow platform. For advanced capabilities like model normalization, content library, lifecycle dates automation, asset lifecycle automation, automated asset actions, enhanced mobile capabilities, asset dashboards, HAM workspace, etc., the HAM pro plugin (sn_hamp) is required. Use of this plugin is recommended as these are critical capabilities for effective hardware asset management. It is a paid plugin that can be activated only by ServiceNow personnel. Other recommended plugins for HAM that are not activated by default are Procurement, Cost management, and Expanded Asset and Model Classes.
  • HAM module setup: A high-level overview of important configurations and customizations to be performed on the instance is given below:
    • Activate the HAM pro plugin and any other supporting plugins required (E.g. Procurement)
    • Opt-in to the required resource categories. This determines which all model categories are normalized when the normalization job runs. End user computers, Mobile devices, Network Gear, and Server are the 4 resource categories available. The HAM licensing costs are calculated based on the resource categories which are opted in.
    • Configure the model categories and models as required. Populate any missing model data which is currently not present or automatically populated. The models which are approved for use in the organization should be made available in the Catalog for end users to order. Devices which need to be managed in bulk should be tracked as consumables (E.g. Inexpensive items like keyboard and mouse)
    • Enable Asset Action and Swapped CI on the related list of Incidents and Changes.
    • Review the OOB flows available for various asset operational activities like asset lifecycle management activities and contract renewals. Wherever required, create a copy of those flows with modifications to meet the organization’s business objectives.
    • Validate that catalog items required for asset lifecycle flows are active. If anything is inactive, enable them if that use case is in scope.
    • Populate the stockroom data and set up stock rules if needed. Make sure that a stockroom manager is assigned to each stockroom that is populated.
    • Ensure that normalization scheduled jobs are active and working properly. Opt-in to the HAM content service if you wish to send your asset data to the content service team. Execute the jobs to download data from the content library if you wish to kick-start normalization.
    • Please ensure that Asset-CI synchronization is active. Review the Asset-CI field mappings and status mappings, and only make changes to them if there is a valid business justification. Here are a couple of best practices to consider regarding CIs:
      • For all CIs with a corresponding asset record, it is recommended to update the Asset record to trigger changes to the synchronized fields on the CI record.
      • Keep non-asset CIs, such as IP addresses and ports, out of the scope of hardware asset management. They should only be managed as CIs.
  • Data population and Management: If the organization is currently using another asset repository, take steps to migrate the asset data into ServiceNow. Ensure that only the asset records and attributes that are still relevant for the organization are migrated. Proper cleansing and mapping of data should be done to complete the migration. Appropriate methods like ServiceNow discovery or integrations using Service Graph Connectors (E.g. SCCM, JamF, Intune) should be used to automate the population of operational information of the assets into the CMDB. Ensure that configurations are in place to automate the synchronization of asset data with the corresponding configuration item’s data. The other types of integrations that may be required are integrations with procurement systems, shipping carrier systems, contract management systems, and vendor repositories (E.g. Fetching asset warranty from Lenovo). Monitoring the renewal dates of hardware asset contracts and completing their renewals before the expiry date is critical to avoid compliance, legal, financial, or business continuity risks. Hence the contract information should be populated in the Contract table (ast_contract) for all the contracts which apply to the hardware assets in scope. The contracts should be linked with the assets covered using the ‘Asset covered’ related list. Another important aspect to consider is configuring the appropriate access rights to asset-related data like product models, contracts, assets, reports, etc., based on the organization’s requirements to prevent unauthorized access to data.
  • Workspace, Reports and Dashboards: Verify that the hardware asset workspace, hardware asset overview dashboard, asset management executive dashboard, contract workspace etc., are available and populated with data. Create custom reports and dashboards if required. Access to these various reports and dashboards should be configured so that only relevant stakeholders can access the data.
  • Go-Live and Evaluation: Gather feedback from all the key stakeholders for the initial phase of HAM implementation done on a non-prod instance. Make any changes if required, then deploy the initial implementation of ServiceNow HAM organization-wide. This can be implemented incrementally in production across multiple sprints in an agile manner.
  • Training and Change Management: Contributions from all the people involved are essential for the success of the hardware asset management program. So, it is critical to implement a comprehensive training program tailored to different user roles and responsibilities. Develop and execute a robust change management strategy to foster user adoption and minimize disruption. Training documents should be created to train about the HAM process, governance, standard operating procedures, product features, asset data, reports, etc. The documentation should be kept in the ServiceNow knowledge base or an appropriate SharePoint repository accessible to relevant stakeholders. Customized training tailored to their roles should be delivered to various stakeholders like executives, governance teams, asset managers, asset operation teams, contract management teams, procurement teams, finance teams, incident management teams, configuration management teams, etc. It is also recommended to send periodic mailers about data management, HAM features, best practices, governance, etc. to appropriate stakeholders to reinforce their learning. Refresher training sessions should also be conducted periodically. All the relevant documentation should be periodically reviewed and updated if needed.
  • Operations and Governance: The operational activities should be conducted on an ongoing basis as defined in the finalized HAM process document. Apart from the process flows and RACI defined in the process document, it is recommended to create detailed Standard operating procedures (SOPs) for asset operational activities and manage the SOP document in a common repository (E.g. ServiceNow knowledge base). Any new members joining the HAM operations team should be provided with a formal knowledge transfer and they should be given access to the repository containing all the relevant reference artifacts created for HAM. All the HAM-related plugins should be monitored periodically to ensure that they are updated to the latest version available. There should be strong governance as per the governance structure established as part of the process definition. Frequent meetings should be held between the key stakeholders to ensure proper governance.
  • Continual improvements: Various asset management stakeholders like IT Asset Manager, HAM process owner, Asset operation teams, ServiceNow product owner, etc., should periodically review the current state of HAM implementation. Instance performance, incidents, defects, etc., should be frequently monitored to identify areas for improvement. Implement enhancements to deliver the remaining features outlined in the roadmap, any other features identified as improvements, and product features newly introduced by ServiceNow. Continual process improvements should also be performed periodically by considering the gaps identified and the latest process best practices. The process documents, SOPs, training documents, and any other HAM artifacts should be reviewed periodically and updated whenever any process or product changes are implemented.


Conclusion

An effectively implemented and managed Hardware Asset Management program is a strategic investment that empowers organizations to optimize asset utilization, enhance operational efficiency, and foster a culture of compliance. With the right approach, people, processes and a commitment to continuous improvement, ServiceNow HAM can transform an organization’s IT asset management practices, paving the way for better control, compliance, and cost savings.

The post Overview of ServiceNow Hardware Asset Management appeared first on ServiceNow Guru.

]]>
GlideQuery Cheat Sheet https://servicenowguru.com/scripting/glidequery-cheat-sheet/ https://servicenowguru.com/scripting/glidequery-cheat-sheet/#comments Tue, 10 Sep 2024 11:13:13 +0000 https://servicenowguru.com/?p=15657 GlideQuery is a modern, flexible API introduced to simplify and streamline database operations in ServiceNow. It provides a more intuitive and readable approach to querying data, replacing the traditional GlideRecord scripts with a more elegant and performant solution. Whether you are a novice developer just getting started with ServiceNow or an experienced professional looking to

The post GlideQuery Cheat Sheet appeared first on ServiceNow Guru.

]]>
GlideQuery is a modern, flexible API introduced to simplify and streamline database operations in ServiceNow. It provides a more intuitive and readable approach to querying data, replacing traditional GlideRecord scripts with a more elegant and expressive solution. Whether you are a novice developer just getting started with ServiceNow or an experienced professional looking to optimize your workflows, understanding and mastering GlideQuery is essential.

In this comprehensive guide, we will take you through the fundamentals of GlideQuery, from basic concepts and syntax to advanced techniques and best practices. By the end of this article, you will have a solid grasp of how to leverage GlideQuery to enhance your data handling capabilities in ServiceNow, making your development process more efficient and your applications more responsive.

Let’s dive in and explore the power of GlideQuery, unlocking new potentials in your ServiceNow development journey.

Principles of GlideQuery

Peter Bell, a software engineer at ServiceNow, initially developed this API as a tool for his team. Gradually, the API became an integral part of the platform, officially integrated in the Paris release.

This API is versatile, compatible with both global and scoped applications. In the latter case, developers need to prefix their API calls with “global” to ensure proper functionality:

// In a global application:
var user = new GlideQuery('sys_user');

// In a scoped application:
var user = new global.GlideQuery('sys_user');

The API is written entirely in JavaScript and uses GlideRecord under the hood as a second layer. Rather than replacing GlideRecord, its purpose is to enhance the development experience by minimizing errors and simplifying usage for developers.

I. Fail Fast

Detect errors as quickly as possible, before they become bugs.

GlideQuery introduces a new error type, NiceError, which facilitates the diagnosis of query errors: it tells you exactly what is wrong in your code so you can fix it quickly. Examples:

Field name validation

The API checks whether the field names used in the query are valid and returns an error if any of them are not. In some cases this is extremely important, as it can prevent major issues. In the example below, using GlideRecord would delete ALL records from the user table because the field name is written incorrectly: by default, GlideRecord silently drops an invalid query condition (unless strict query mode is enabled), so the query matches every record. Note that the line that would delete the records has been commented out for safety, and we have added a row count and a while loop just to display the records that would be deleted.

var gr = new GlideRecord('sys_user');
gr.addQuery('actived', '!=', true);
gr.query();
//gr.deleteMultiple();  commented for safety 
gs.info(gr.getRowCount());
while (gr.next()) {
    gs.info(gr.getDisplayValue('name'));
}

Using GlideQuery the API returns an error and nothing is executed:

var myGQ = new GlideQuery('sys_user')
    .where('activated', '!=', true)
    .deleteMultiple();

Returns:

NiceError: [2024-07-22T12:58:23.681Z]: Unknown field 'activated' in table 'sys_user'. Known fields:
[
  "country",
  "calendar_integration",
  etc.

Choice field validation

The API checks whether the option chosen for the field exists in the list of available options when it is of type choice. In the example below, using GlideRecord, the script would return nothing:

var gr = new GlideRecord('incident');
gr.addQuery('approval', 'donotexist'); //invalid value for the choice field 'approval'
gr.query();
while (gr.next()) {
    gs.info(gr.getDisplayValue());
}

Using GlideQuery, the API returns an error and nothing is executed:

var tasks = new GlideQuery('incident')
    .where('approval', 'donotexist')
    .select('number')
    .forEach(gs.log);

Returns:

NiceError: [2024-07-22T12:57:33.975Z]: Invalid choice 'donotexist' for field 'approval' (table 'incident'). Allowed values:
[
  "not requested",
  "requested",
  "approved",
  "rejected"
]

Value type validation

The API checks whether the type of value chosen for the field is correct.

var tasks = new GlideQuery('incident')
    .where('state', '8') // invalid value for the choice field 'state'
    .select('number')
    .forEach(function (g) {
        gs.info(g.number);
    });

Returns:

NiceError: [2024-07-22T12:56:51.428Z]: Unable to match value '8' with field 'state' in table 'incident'. Expecting type 'integer'

II. Be JavaScript (native JavaScript)

GlideQuery works with native JavaScript objects, bringing more familiarity to the way queries are made and reducing the learning curve. With GlideRecord we often have problems with the type of the returned value:

var gr = new GlideRecord('sys_user');
gr.addQuery('first_name', 'Abel');
gr.query();

if (gr.next()) {
    gs.info(gr.first_name);
    gs.info(gr.first_name === 'Abel');
    gs.info(typeof(gr.first_name));
}

Returns:

*** Script: Abel
*** Script: false
*** Script: object

As we can see, Abel is different from Abel!
The reason for this confusion is that GlideRecord returns a Java object.
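
A common workaround is to coerce the Java string into a native JavaScript string before comparing, for example:

var gr = new GlideRecord('sys_user');
gr.addQuery('first_name', 'Abel');
gr.query();

if (gr.next()) {
    var firstName = String(gr.first_name); // coerce the Java object to a native string
    gs.info(firstName === 'Abel'); // true
}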

Using GlideQuery we don’t have this problem:

var user = new GlideQuery('sys_user')
    .where('first_name', 'Abel')
    .selectOne('first_name')
    .get(); // this method can throw error if no record is found

gs.info(user.first_name);
gs.info(user.first_name === 'Abel');
gs.info(typeof (user.first_name));

Returns:

*** Script: Abel
*** Script: true
*** Script: string

III. Be Expressive

Do more with less code! Simplify writing your code!
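
As a small sketch of what this looks like in practice (an illustration, not an example from the official docs), compare counting inactive users with both APIs:

// GlideRecord: several statements
var gr = new GlideRecord('sys_user');
gr.addQuery('active', false);
gr.query();
gs.info(gr.getRowCount());

// GlideQuery: a single chained expression
gs.info(new GlideQuery('sys_user').where('active', false).count());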


Performance

Using GlideQuery can increase processing time by around 4%, mainly due to the conversion of the Java object to JavaScript. However, keep in mind that we often end up doing this conversion manually after a GlideRecord query anyway.


Stream vs. Optional


The API works together with two other APIs: Stream and Optional. Methods that return a single record (such as selectOne, get, and getBy) return an Optional object, while select, which can return several records, returns a Stream, which behaves much like an array. These objects can be manipulated according to the methods of each API.


Practical Examples


Before we start with the examples, it is necessary to make two points clear:

  • The primary key of the table (normally the sys_id) is always returned, even if it is not requested in the select.
  • Unlike GlideRecord, GlideQuery does not return all record fields. We need to specify the names of the fields we want returned:
    .selectOne(['field_1', 'field_2', 'field_n'])


I. selectOne

It is very common that we only need 1 record and in these cases we use selectOne(). This method returns an object of type Optional and we need a terminal method to process it:

a) get

// Searches for the user and if it does not exist returns an error
var user = new GlideQuery('sys_user')
    .where('last_name', 'Luddy')
    .selectOne('first_name')
    .get(); // this method can throw error if no record is found

gs.info(JSON.stringify(user, null, 4));

If success:

Script: {
    "first_name": "Fred",
    "sys_id": "5137153cc611227c000bbd1bd8cd2005"
}

If it fails (the sample output below was generated with the non-existent last name 'Luddyx'):

NiceError: [2024-07-21T22:28:48.274Z]: get() called on empty Optional: Unable to find a record with the following query:
GlideQuery<sys_user> [
  {
    "type": "where",
    "field": "last_name",
    "operator": "=",
    "value": "Luddyx",
    "whereClause": true
  }
]

b) orElse

Optional method used to handle queries that do not return any value.

var user = new GlideQuery('sys_user')
    .where('last_name', 'Luddy')
    .selectOne(['first_name'])
    .orElse({  //Method in the Optional class to return a default value.
        first_name: 'Nobody'
    });

gs.info(JSON.stringify(user, null, 4));

If success:

Script: {
    "first_name": "Fred",
    "sys_id": "5137153cc611227c000bbd1bd8cd2005"
}

If fail:

Script: {
    "first_name": "Nobody"
}

c) ifPresent

var userExists = false;
new GlideQuery('sys_user')
    .where('last_name', 'Luddy')
    .selectOne(['first_name'])
    .ifPresent(function (user) {
        gs.info(user.first_name + ' - ' + user.sys_id);
        userExists = true;
    });

gs.info(userExists);

If user exists:

*** Script: Fred - 5137153cc611227c000bbd1bd8cd2005
*** Script: true

If not:

*** Script: false

d) isPresent

var userExists = new GlideQuery('sys_user')
    .where('last_name', 'Luddy')
    .selectOne(['first_name'])
    .isPresent();

gs.info(userExists);

If user exists:

*** Script: true

If not:

*** Script: false

e) isEmpty

var userExists = new GlideQuery('sys_user')
    .where('last_name', 'Luddy')
    .selectOne(['first_name'])
    .isEmpty();

gs.info(userExists);

If user exists:

*** Script: false

If not:

*** Script: true

II. get

Returns a single record using sys_id

var user = new GlideQuery('sys_user')
    .get('62826bf03710200044e0bfc8bcbe5df1', ['first_name', 'last_name'])
    .orElse({
        first_name: 'Nobody',
        last_name: 'Nobody'
    });

gs.info(JSON.stringify(user, null, 4));

If user exists:

*** Script: {
    "sys_id": "62826bf03710200044e0bfc8bcbe5df1",
    "first_name": "Abel",
    "last_name": "Tuter"
}

If not:

*** Script: {
    "first_name": "Nobody",
    "last_name": "Nobody"
}

III. getBy

Returns a single record (even if there is more than 1 record) using the keys used as parameters.

var user = new GlideQuery('sys_user')
    .getBy({
        first_name: 'Fred',
        last_name: 'Luddy'
    }, ['city', 'active']) // select first_name, last_name, city, active
    .orElse({
        first_name: 'Nobody',
        last_name: 'Nobody',
        city: 'Nowhere',
        active: false
    });

gs.info(JSON.stringify(user, null, 4));

If user exists:

*** Script: {
    "first_name": "Fred",
    "last_name": "Luddy",
    "city": null,
    "active": true,
    "sys_id": "5137153cc611227c000bbd1bd8cd2005"
}

If not:

*** Script: {
    "first_name": "Nobody",
    "last_name": "Nobody",
    "city": "Nowhere",
    "active": false
}

IV. insert

The insert method takes an object as a parameter, where each property must be the name of a field we want to fill. It returns an Optional containing the data of the inserted object plus the sys_id. We can also request extra fields in the return, such as fields that are populated automatically:

var user = new GlideQuery('sys_user')
    .insert({  
        active: false,
        first_name: 'Thiago',
        last_name: 'Pereira',
    },['name'])
    .get()

gs.info(JSON.stringify(user, null, 4));

Returns:

*** Script: {
    "sys_id": "40f6e42147efc210cadcb60e316d43be",
    "active": false,
    "first_name": "Thiago",
    "last_name": "Pereira",
    "name": "Thiago Pereira"
}

V. update

The API has a method for when we want to update just one record. To use this method, the field used in the where() must be the primary key; this way, exactly one record is updated.

var user = new GlideQuery('sys_user')
    .where('sys_id', '40f6e42147efc210cadcb60e316d43be') //sys_id of the record created in the insert example
    .update({ email: 'thiago.pereira@example.com', active: true }, ['name'])
    .get();
gs.info(JSON.stringify(user, null, 4));

Returns:

*** Script: {
    "sys_id": "40f6e42147efc210cadcb60e316d43be",
    "email": "thiago.pereira@example.com",
    "active": true,
    "name": "Thiago Pereira"
}

VI. updateMultiple

var myQuery = new GlideQuery('sys_user')
    .where('active', true)
    .where('last_name', 'LIKE', 'Pereira')
    .updateMultiple({ active: false });

gs.info(JSON.stringify(myQuery, null, 4));

If success:

*** Script: {
    "rowCount": 1
}

If fail:

*** Script: {
    "rowCount": 0
}

VII. insertOrUpdate

This method receives an object with the key(s) to perform the search. If one of the keys is a primary key (sys_id), it searches for the record and updates the other fields entered in the object. If no primary key is passed or the sys_id is not found, the method will create a new record. In this method we cannot select fields other than those passed in the object.

// Create a new record even though a user already exists with the values
var user = new GlideQuery('sys_user')
    .insertOrUpdate({
        first_name: 'Thiago',
        last_name: 'Pereira'
    })
    .orElse(null);
    
gs.info(JSON.stringify(user, null, 4));

Returns:

*** Script: {
    "sys_id": "5681fca1476fc210cadcb60e316d43b6",
    "first_name": "Thiago",
    "last_name": "Pereira"
}
// Update an existing record
var user = new GlideQuery('sys_user')
    .insertOrUpdate({
        sys_id: '40f6e42147efc210cadcb60e316d43be', //sys_id of the record created in the insert example
        first_name: 'Tiago',
        last_name: 'Pereira'
    })
    .orElse(null);

gs.info(JSON.stringify(user, null, 4));

Returns:

*** Script: {
    "sys_id": "40f6e42147efc210cadcb60e316d43be",
    "first_name": "Tiago",
    "last_name": "Pereira"
}
// Creates a new record as the sys_id does not exist
var user = new GlideQuery('sys_user')
    .insertOrUpdate({
        sys_id: 'xxxxxxxxxxxxxxxxxxxx',
        first_name: 'Thiago',
        last_name: 'Pereira2'})
    .orElse(null);
gs.info(JSON.stringify(user, null, 4));

Returns:

*** Script: {
    "sys_id": "50e338ed47afc210cadcb60e316d4364",
    "first_name": "Thiago",
    "last_name": "Pereira2"
}

VIII. deleteMultiple

There is no method to delete just one record. To do this we need to use deleteMultiple together with where() on a primary key (see the sketch after the example below). The method does not return any value.

var user = new GlideQuery('sys_user')
    .where('last_name', 'CONTAINS', 'Pereira2')
    .deleteMultiple();
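
So, to remove a single specific record, constrain the query by the primary key (a quick sketch; the sys_id below is the record created in the insert example):

new GlideQuery('sys_user')
    .where('sys_id', '40f6e42147efc210cadcb60e316d43be')
    .deleteMultiple();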

IX. whereNotNull

Note: We will talk about dot walking later.

new GlideQuery('sys_user')
    .whereNotNull('company')
    .whereNotNull('company.city')
    .select('name', 'company.city')
    .forEach(function (user) {
        gs.info(user.name + ' works in ' + user.company.city)
    });

Returns:

*** Script: Lucius Bagnoli works in Tokyo
*** Script: Melinda Carleton works in London
*** Script: Jewel Agresta works in London
*** Script: Christian Marnell works in Prague
*** Script: Naomi Greenly works in London
*** Script: Jess Assad works in Tokyo
etc.

X. limit

new GlideQuery('sys_user')
    .whereNotNull('company')
    .whereNotNull('company.city')
    .limit(2)
    .select('name', 'company.city')
    .forEach(function (user) {
        gs.info(user.name + ' works in ' + user.company.city)
    });

Returns:

*** Script: Mildred Gallegas works in Rome
*** Script: Elisa Gracely works in Rome

XI. select

We often need queries that return several records and in these cases we use select(). This method returns an object of type Stream and we need a terminal method to process it:

a) forEach

var arrIncidents = [];
new GlideQuery('incident')
    .where('state', 2)
    .limit(2)
    .select('number')
    .forEach(function (inc) {
        gs.info(inc.number + ' - ' + inc.sys_id);
        arrIncidents.push(inc.sys_id);
    });

if (arrIncidents.length == 0)
    gs.info('Not found.');

If records are found:

*** Script: INC0000025 - 46f09e75a9fe198100f4ffd8d366d17b
*** Script: INC0000029 - 46f67787a9fe198101e06dfcf3a78e99

If none are found:

*** Script: Not found.

b) toArray

Returns an array containing the items from a Stream. The method requires a parameter specifying the maximum size of the array, which cannot exceed 100.

var users = new global.GlideQuery('sys_user')
    .limit(20)
    .select('first_name', 'last_name')
    .toArray(2); // max number of items to return in the array

gs.info(JSON.stringify(users, null, 4));

Returns:

*** Script: [
    {
        "first_name": "survey",
        "last_name": "user",
        "sys_id": "005d500b536073005e0addeeff7b12f4"
    },
    {
        "first_name": "Lucius",
        "last_name": "Bagnoli",
        "sys_id": "02826bf03710200044e0bfc8bcbe5d3f"
    }
]

c) map

Used to transform each record in a Stream.

new GlideQuery('sys_user')
    .limit(3)
    .whereNotNull('first_name')
    .select('first_name')
    .map(function(user) {
        return user.first_name.toUpperCase();
    })
    .forEach(function(name) {
        gs.info(name);
    });

Returns:

*** Script: SURVEY
*** Script: LUCIUS
*** Script: JIMMIE

d) filter

Note: filtering a Stream always happens in the script, after the query has already executed. Therefore, whenever possible, apply conditions with where() before the select() to avoid a loss of performance.
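
To illustrate that note, here is a minimal sketch contrasting the two approaches, using a simple active flag as the condition. The original example below it filters on password complexity, which cannot be expressed as a where():

// Preferred: the condition is evaluated by the database
new GlideQuery('sys_user')
    .where('active', true)
    .select('name')
    .forEach(function (u) {
        gs.info(u.name);
    });

// Slower: every record is fetched, then filtered in the script
new GlideQuery('sys_user')
    .select('name', 'active')
    .filter(function (u) {
        return u.active;
    })
    .forEach(function (u) {
        gs.info(u.name);
    });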

var hasBadPassword = function(user) {
    return !user.user_password ||
        user.user_password.length < 10 ||
        user.user_password === user.last_password ||
        !/\d/.test(user.user_password) || // no numbers
        !/[a-z]/.test(user.user_password) || // no lowercase letters
        !/[A-Z]/.test(user.user_password); // no uppercase letters
};

new GlideQuery('sys_user')
    .where('sys_id', 'STARTSWITH', '3')
    .select('name', 'email', 'user_password', 'last_password')
    .filter(hasBadPassword)
    .forEach(function(user) {
        gs.info(user.name + ' - ' + user.sys_id)
    });

Returns:

*** Script: Patty Bernasconi - 3682abf03710200044e0bfc8bcbe5d17
*** Script: Veronica Achorn - 39826bf03710200044e0bfc8bcbe5d1f
*** Script: Jessie Barkle - 3a82abf03710200044e0bfc8bcbe5d10

e) limit

Both the GlideQuery and Stream APIs have a limit() method. GlideQuery's limit() is applied as part of the database query, which is more performant; Stream's limit() only discards records after they have already been fetched. For this reason, whenever possible, call limit() before select().

new GlideQuery('task')
    .orderBy('priority')
    .limit(3) // Good: calling GlideQuery's limit method
    .select('assigned_to', 'priority', 'description')
    //.limit(3) // Bad: calling Stream's limit method
    .forEach(function(myTask) {
        gs.info(myTask.priority + ' - ' + myTask.assigned_to)
    });

Returns:

*** Script: 1 - 5137153cc611227c000bbd1bd8cd2007
*** Script: 1 - 681b365ec0a80164000fb0b05854a0cd
*** Script: 1 - 5137153cc611227c000bbd1bd8cd2007

f) find

Returns the first item found in the Stream according to the given condition. Important notes:

  • Returns an Optional, which may be empty if no item is found; use ifPresent() to run a function only when a match exists.
  • The first item in the Stream is returned if no condition is entered.

var hasBadPassword = function(user) {
    return !user.user_password ||
        user.user_password.length < 10 ||
        user.user_password === user.last_password ||
        !/\d/.test(user.user_password) || // no numbers
        !/[a-z]/.test(user.user_password) || // no lowercase letters
        !/[A-Z]/.test(user.user_password); // no uppercase letters
};
var disableUser = function(user) {
    var updated = new GlideQuery('sys_user')
        .where('sys_id', user.sys_id)
        .update({
            active: false
        }, ['user_name'])
        .get();
    gs.info(JSON.stringify(updated, null, 4));
};

new GlideQuery('sys_user')
    .select('name', 'email', 'user_password', 'last_password')
    .find(hasBadPassword)
    .ifPresent(disableUser);

Returns:

*** Script: {
    "sys_id": "0e826bf03710200044e0bfc8bcbe5d45",
    "active": false,
    "user_name": "ross.spurger"
}
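
The Optional returned by find() also supports orElse(), so we can supply a default when nothing matches; a minimal sketch reusing hasBadPassword:

var firstBad = new GlideQuery('sys_user')
    .select('name', 'user_password', 'last_password')
    .find(hasBadPassword)
    .orElse(null);

if (firstBad)
    gs.info('First bad password belongs to ' + firstBad.name);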

g) reduce

Reduces the Stream to a single value by combining each item with an accumulator; the second argument is the accumulator's initial value. Here we find the longest first name:

var longestFirstName = new global.GlideQuery('sys_user')
    .whereNotNull('first_name')
    .select('first_name')
    .reduce(function (acc, cur) {
        return (cur.first_name.length > acc.length) ? cur.first_name : acc;
    }, '');

gs.info(JSON.stringify(longestFirstName));

Returns:

*** Script: "Sitemap Scheduler User"

h) every

Executes a function on each item in the Stream. If the function returns true for every item, the method returns true; otherwise it returns false.

var numToCompare = 10;
//var numToCompare = 1000;

var hasOnlyShortDescriptions = new global.GlideQuery('task')
    .whereNotNull('description')
    .select('description')
    .every(function(t) {
        return t.description.length < numToCompare;
    });

gs.info(hasOnlyShortDescriptions);

If numToCompare is 10, returns:

*** Script: false

If numToCompare is 1000, returns:

*** Script: true

i) some

Executes a function on each item in the Stream. If the function returns true for at least one item, the method returns true; otherwise it returns false.

var numToCompare = 10;
//var numToCompare = 1000;

var hasLongDescriptions = new global.GlideQuery('task')
   .whereNotNull('description')
   .select('description')
   .some(function (t) { 
      return t.description.length > numToCompare; 
   });

gs.info(hasLongDescriptions);

If numToCompare is 10, returns:

*** Script: true

If numToCompare is 1000, returns:

*** Script: false

j) flatMap

FlatMap is very similar to map, but with two differences:

  1. The function passed to flatMap must return a Stream.
  2. flatMap unwraps (flattens) the returned Streams into a single Stream, so the outer query can return the combined result, as in the example below:
var records = new global.GlideQuery('sys_user')
    .where('last_login', '>', '2015-12-31')
    .select('first_name', 'last_name')
    .flatMap(function(u) {
        return new global.GlideQuery('task')
            .where('closed_by', u.sys_id)
            .where('short_description', 'Prepare for shipment')
            .select('closed_at', 'short_description')
            .map(function(t) {
                return {
                    first_name: u.first_name,
                    last_name: u.last_name,
                    short_description: t.short_description,
                    closed_at: t.closed_at
                };
            });
    })
    .toArray(50);

gs.info(JSON.stringify(records, null, 4));

Returns:

*** Script: [
    {
        "first_name": "Thiago",
        "last_name": "Pereira",
        "short_description": "Prepare for shipment",
        "closed_at": "2021-10-04 13:43:48"
    },
    {
        "first_name": "Thiago",
        "last_name": "Pereira",
        "short_description": "Prepare for shipment",
        "closed_at": "2021-10-04 13:40:22"
    },
    {
        "first_name": "Thiago",
        "last_name": "Pereira",
        "short_description": "Prepare for shipment",
        "closed_at": "2021-10-04 13:42:14"
    }
]

Be careful: this performs a nested query for each user returned by the outer query, which can cause performance problems on large result sets.

XII. parse

Similar to GlideRecord’s addEncodedQuery, but it does not accept all operators. Currently only the following operators are supported:

=, !=, >, >=, <, <=, ANYTHING, BETWEEN, CONTAINS, DOES NOT CONTAIN, DYNAMIC,
EMPTYSTRING, ENDSWITH, GT_FIELD, GT_OR_EQUALS_FIELD, IN, INSTANCEOF, LIKE,
LT_FIELD, LT_OR_EQUALS_FIELD, NOT IN, NOT LIKE, NSAMEAS, ON, SAMEAS, STARTSWITH

var myTask = GlideQuery.parse('task', 'active=true^short_descriptionLIKEthiago^ORDERBYpriority')
    .limit(10)
    .select('short_description', 'priority')
    .toArray(10); // 10 is the max number of items to return in the array

gs.info(JSON.stringify(myTask, null, 4));

Returns:

*** Script: [
    {
        "short_description": "thiago teste update 2",
        "priority": 1,
        "sys_id": "8b2c540b47ce0610cadcb60e316d4377"
    }
]

XIII. orderBy / orderByDesc

Sorts the query results by the given field: ascending with orderBy(), descending with orderByDesc().

var users = new global.GlideQuery('sys_user')
    .limit(3)
    .whereNotNull('first_name')
    .orderBy('first_name')
    //.orderByDesc('first_name')
    .select('first_name', 'last_name')
    .toArray(3); 

gs.info(JSON.stringify(users, null, 4));

Returns:

*** Script: [
    {
        "first_name": "Abel",
        "last_name": "Tuter",
        "sys_id": "62826bf03710200044e0bfc8bcbe5df1"
    },
    {
        "first_name": "Abraham",
        "last_name": "Lincoln",
        "sys_id": "a8f98bb0eb32010045e1a5115206fe3a"
    },
    {
        "first_name": "Adela",
        "last_name": "Cervantsz",
        "sys_id": "0a826bf03710200044e0bfc8bcbe5d7a"
    }
]

XIV. withAcls

It works in the same way as GlideRecordSecure, enforcing ACLs on the query so that only records and fields the current user is allowed to read are returned.

var users = new GlideQuery('sys_user')
    .withAcls()
    .limit(4)
    .orderByDesc('first_name')
    .select('first_name')
    .toArray(4);

gs.info(JSON.stringify(users, null, 4));

XV. disableAutoSysFields

Prevents the system fields (sys_updated_on, sys_updated_by, etc.) from being set automatically when inserting or updating a record.

new GlideQuery('task')
    .disableAutoSysFields()
    .insert({ description: 'example', priority: 1 });

XVI. forceUpdate

Forces an update to the record even when no field values have changed. Useful, for example, when we want business rules to run.

new GlideQuery('task')
    .forceUpdate()
    .where('sys_id', 'd71b3b41c0a8016700a8ef040791e72a')
    .update();

XVII. disableWorkflow

Disables the running of business rules and other engines for this operation, equivalent to GlideRecord's setWorkflow(false).

new GlideQuery('sys_user')
    .disableWorkflow() // ignore business rules
    .where('email', 'bob@example.com')
    .updateMultiple({ active: false });

XVIII. Dot Walking

GlideQuery supports dot walking through reference fields in both where() and select() by using dotted field names; dot-walked values come back as nested objects.

var tokyoEmployee = new GlideQuery('sys_user')
    .where('company.city', 'Tokyo')
    .selectOne('name', 'department.id')
    .get();
gs.info(JSON.stringify(tokyoEmployee, null, 4));

Returns:

*** Script: {
    "name": "Lucius Bagnoli",
    "department": {
        "id": "0023"
    },
    "sys_id": "02826bf03710200044e0bfc8bcbe5d3f"
}

XIX. Field Flags

Some fields support metadata, the most common case being the display value. Append the “$” symbol to the field name, followed by the desired metadata flag. Metadata supported so far:

  1. $DISPLAY = getDisplayValue
  2. $CURRENCY_CODE = getCurrencyCode
  3. $CURRENCY_DISPLAY = getCurrencyDisplayValue
  4. $CURRENCY_STRING = getCurrencyString
new GlideQuery('sys_user')
    .whereNotNull('company')
    .limit(3)
    .select('company', 'company$DISPLAY')
    .forEach(function(user) {
        gs.info(user.company$DISPLAY + ' - ' + user.company);
    });

Returns:

*** Script: ACME Italy - 187d13f03710200044e0bfc8bcbe5df2
*** Script: ACME Italy - 187d13f03710200044e0bfc8bcbe5df2
*** Script: ACME Italy - 187d13f03710200044e0bfc8bcbe5df2

XX. aggregate

Used when we want to perform aggregations. GlideQuery has methods for this that are often easier to use than GlideAggregate.

a) count

Using ‘normal’ GlideAggregate

var usersGa = new GlideAggregate('sys_user');
usersGa.addAggregate('COUNT');
usersGa.query();
usersGa.next();
gs.info(typeof (usersGa.getAggregate('COUNT')));
var userCount = parseInt(usersGa.getAggregate('COUNT'));
gs.info(typeof (userCount));
gs.info(userCount);

Returns:

*** Script: string
*** Script: number
*** Script: 629

Using GlideQuery

var userCount = new GlideQuery('sys_user').count();
gs.info(typeof(userCount));
gs.info(userCount);

Returns:

*** Script: number
*** Script: 629

Note that in addition to using fewer lines of code, GlideQuery returns a number while GlideAggregate returns a string.
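
count() can be combined with where() like any other part of the query; a minimal sketch:

var activeCount = new GlideQuery('sys_user')
    .where('active', true)
    .count();

gs.info(activeCount + ' active users');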

b) avg

avg(), max(), min() and sum() return an Optional, so we use orElse() to supply a default when no value can be computed.

var avgMods = new GlideQuery('cmdb_ci')
    .avg('sys_mod_count')
    .orElse(0);
gs.info(avgMods);

Returns:

*** Script: 7.533

c) max

var maxMods = new GlideQuery('cmdb_ci')
    .max('sys_mod_count')
    .orElse(0);
gs.info(maxMods);

Returns:

*** Script: 172

d) min

var minMods = new GlideQuery('cmdb_ci')
    .min('sys_mod_count')
    .orElse(0);
gs.info(minMods);

Returns:

*** Script: 0

e) sum

var totalMods = new GlideQuery('cmdb_ci')
    .sum('sys_mod_count')
    .orElse(0);
gs.info(totalMods);

Returns:

*** Script: 20972

f) groupBy

Groups the aggregation results by a field:

new GlideQuery('incident')
    .aggregate('count')
    .groupBy('state')
    .select()
    .forEach(function (g) {
        gs.info(JSON.stringify(g, null, 4));
    });

Returns:

*** Script: {
    "group": {
        "state": 1
    },
    "count": 13
}
*** Script: {
    "group": {
        "state": 2
    },
    "count": 19
}
etc.

g) aggregate

aggregate() computes an aggregation such as 'avg' or 'sum' over a field, and can be combined with groupBy():

new GlideQuery('task')
    .groupBy('contact_type')
    .aggregate('avg', 'reassignment_count')
    .select()
    .forEach(function (g) {
        gs.info(JSON.stringify(g, null, 4));
    });

Returns:

*** Script: {
    "group": {
        "contact_type": ""
    },
    "avg": {
        "reassignment_count": 0.0033
    }
}
*** Script: {
    "group": {
        "contact_type": "email"
    },
    "avg": {
        "reassignment_count": 1
    }
}
etc.

h) having

having() filters grouped results by an aggregate value, much like SQL's HAVING clause:

new GlideQuery('core_company')
    .aggregate('sum', 'market_cap')
    .groupBy('country')
    .having('sum', 'market_cap', '>', 0)
    .select()
    .forEach(function (g) {
        gs.info('Total market cap of ' + g.group.country + ': ' + g.sum.market_cap);
    });

Returns:

*** Script: Total market cap of : 48930000000
*** Script: Total market cap of USA: 5230000000

 

 
