Performance Archives - ServiceNow Guru
https://servicenowguru.com/category/performance/

Custom queue event handling in ServiceNow – Implementation steps
https://servicenowguru.com/integration/custom-queue-event-handling-servicenow/
Tue, 29 Oct 2024 09:57:11 +0000

Background

Looking at the ServiceNow native processes, one can easily see that a big portion of them are event-based rather than synchronous. This is especially true for processes which are not critical to the user experience, or which are not dependencies of other business logic.

In a nutshell, an event is logged to a queue, and when system resources are available, the event is picked up and processed by the associated Script Action.

 

Below is a simple visual representation of the process, along with an explanation (source: Steven Bell):

0. I register my new event in the Registry, create my Script Action associated with that event, and, if needed, my Script Include which could be called by the Script Action. Registering my event tells the Worker to listen for that event, and that it will be expected to do something with it.

1. Something executes a gs.eventQueue statement, which writes an event record to the queue. (BTW, this is not an exhaustive list.)

2, 3, 4. The event worker(s), whose job is to listen for events listed in the Registry, pick up the event and check whether there are any Script Actions associated with the registered event.

5, 6. If a Script Action is found, it is executed, which in turn may execute my Script Include if I choose.

 

Remember the info message when adding a role to a user or a group:

What’s happening behind the scenes: an event is logged to the queue, and the roles are added to the group at the first possible moment, when the system has resources for it. Usually this is near real-time, but if higher-priority operations are already queued, this will wait until they free up some processing power.

Now, if one is implementing an application based on synchronous logic that occupies almost all the system resources, this may lead to performance implications, slowing down the instance tremendously.

One possible approach in such cases is to shift from synchronous processing to event-based processing, which will lead to better performance.

But since events are logged to the default queue (unless another queue is explicitly specified), we might run into performance issues again.

Here comes the custom queue implementation. It is nothing more than a separate queue to which events can be sent explicitly, leveraging the fifth parameter of the gs.eventQueue() API (more on that later).

 

Implementation

The implementation process is similar to a normal event-based logic implementation. We need to have:

  • An event registered in the event registry
  • A Business rule or any other logic to fire the event
  • A Script action to process the event
  • Custom queue processor

I will not discuss the first three, because these are pretty straightforward, and docs are easily available.
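As a minimal sketch of the second piece, an after business rule that fires the event might look like the following. The event name 'incident.custom.process' is hypothetical for illustration – it must match whatever you registered in the event registry:

```
// After business rule on Incident: instead of processing inline,
// log the event and let the Script Action handle it asynchronously.
// 'incident.custom.process' must exist in the event registry.
(function executeRule(current /*, previous */) {
    // parm1 and parm2 can carry any string data the Script Action needs
    gs.eventQueue('incident.custom.process', current, current.getValue('number'), null);
})(current);
```

The Script Action registered for this event then does the heavy lifting outside the user's transaction.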

Custom queue processor implementation

The easiest way to create a processor for a custom queue is to:

  • go to System Scheduler -> Scheduled Jobs -> Scheduled Jobs
  • find a job with Trigger type = interval (e.g. ‘text index events process’)

  • change the name (it can be anything) and replace ‘text_index’ with the name of your custom queue inside the fcScriptName=javascript\:GlideEventManager(<HERE>).process(); line
  • set Next action to a time in the near future, e.g. 30 seconds from the current moment (this is very important in order to get the job running)

  • (optional) edit the Repeat interval (a short repeat interval may have some negative impact on system performance, but at the same time, the lower the repeat interval, the sooner your event will be picked up and processed)
  • Right-click -> Insert and Stay! Do not Save/Update!

You can have one or more custom queues, depending on the purpose. These must be aligned with the system resources – nodes, semaphores, workers. I will not go deeper into these; more information can be found in the Resources chapter below.
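After following the steps above, the processor job's script boils down to a single call – shown here assuming the custom queue is named custom_queue_one:

```
// Script executed by the scheduled job on each interval:
// drains and processes pending events on the named custom queue.
GlideEventManager('custom_queue_one').process();
```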

Logging an event to a specific (custom) queue

The gs.eventQueue() API accepts 5 parameters:

  • Event name
  • GlideRecord object
  • Param1
  • Param2
  • (optional) queue

This fifth optional parameter ‘queue’ is the one that tells the system to which event queue an event should be logged.

To log an event to the custom queue we have created above (‘custom_queue_one’), we can use the following line of code:

gs.eventQueue('event.name', grSomeObject, null, null, 'custom_queue_one');

 

NB: the queue name (fifth parameter) must be exactly the same as the one we passed to GlideEventManager during the processor creation above.

Everything else (Script Actions, etc.) is the same as in normal event logging.

 

Good practices

  • Just because you can, doesn’t mean you should – this implementation is applicable only to cases where huge amounts of records must be processed (see the Performance chapter)
  • Naming is important – give your events and processors readable names
  • For optimal performance, multiple custom queues can be created to handle a particular event. In this case, events must be logged in a way that ensures even distribution between the queues. To better organize these, one possible approach is to:
    • Create a script include holding an array with the names of your event queues
    • Use the following line of code to randomly distribute events across the queues:

gs.eventQueue('event.name', grSomeObject, null, null, event_queues[Math.floor(Math.random() * event_queues.length)]);

where event_queues is an array containing the names of your queues
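For readability, the random selection can be wrapped in a small helper – a minimal sketch, where the queue names below are placeholders for the custom queues you actually created:

```javascript
// Placeholder queue names – replace with the custom queues you created.
var eventQueues = ['custom_queue_one', 'custom_queue_two', 'custom_queue_three'];

// Picks one queue at random, giving a roughly even spread over time.
function pickQueue(queues) {
    return queues[Math.floor(Math.random() * queues.length)];
}

// Usage inside your business rule or script include:
// gs.eventQueue('event.name', grSomeObject, null, null, pickQueue(eventQueues));
```

Random selection is simplest; a round-robin counter stored in a system property would give a stricter distribution at the cost of an extra read/write per event.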

 

Performance

  • Even though we implement this approach to improve performance, for a low number of transactions it does not yield any gain, because of the Repeat interval – the longer the interval, the longer the overall wait time
  • For a large number of transactions (thousands of records), the performance gain can be really significant. In one of my implementations, I was able to achieve 30x faster execution.

 

More information

Applying the Sync/Async Pattern in Flow Designer
https://servicenowguru.com/graphical-workflow/applying-sync-async-pattern-flow-designer/
Thu, 18 Jul 2024 13:21:28 +0000

Introduction

Here is the scenario: you need to use up-to-date data from a record in a transaction that may take a long time to process, but you don’t want the user to have to wait for that transaction in the UI. You may, for example, need to send a Comment or Work Note to another system through an API call. That can take a few seconds or more, and it is possible that multiple comments could be entered by the time you send the API call, all the while multiple API calls are stacking up. Let’s see how we can use a Flow Designer Sync/Async design pattern to optimize user experience.

One way to handle this is by accomplishing part of this action synchronously within the “Business Rule Loop” by logging an Event and stashing the data you need (for example, the Comment or Work Note) in an event parameter so that it is up to date at the time of the transaction, and then processing the longer portion asynchronously outside of the client transaction using a Script Action. This Sync/Async design pattern ensures that you are capturing the data you want to send and returning control to the User as quickly as possible while still being able to send that “correct” data regardless of how long it takes to process to the other system.
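In script terms, the classic version of this pattern looks roughly like the sketch below. The event name 'x_app.comment.added' and the external-send helper are hypothetical, shown only to make the sync/async split concrete:

```
// Synchronous part – an after business rule on Incident:
// capture the comment at transaction time and queue the event.
// 'x_app.comment.added' is a hypothetical event registry entry.
(function executeRule(current /*, previous */) {
    var comment = current.comments.getJournalEntry(1); // latest entry only
    gs.eventQueue('x_app.comment.added', current, comment, null);
})(current);

// Asynchronous part – the Script Action registered for the event:
// event.parm1 still holds the comment captured above, even if newer
// comments were added while the event waited in the queue.
// sendCommentToExternalSystem(current, event.parm1); // hypothetical helper
```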

Is it possible to accomplish this same pattern using low- or no-code capabilities with Flow Designer? It is!

If you are not familiar with Flows and Subflows, you may want to check out the documentation first.

Foreground Flows for Synchronous Processing

By default, Flows are set to run in the background so that they do not slow down the user experience. This means that they do not run as part of the Before/After Business Rule transaction (the Business Rule Loop mentioned previously), but instead run outside of the transaction. There is an option to run Flows in the Foreground, which will run them during the Business Rule Loop and make them part of the Client transaction. It is important to make these types of Flows as efficient as possible to avoid negatively impacting the user experience.

One benefit for running Flows in this way is that you have access to the Trigger Record at runtime, so the data the user changed is available to you immediately. If you try to retrieve a Journal Entry asynchronously, it is possible that another entry has been made by the time you try to retrieve it, so you cannot guarantee that you have the right data when the asynchronous code executes.

Our first requirement, getting the “right” Comment or Work Note, is met by retrieving it in a Foreground Flow. Let’s try this out by creating a Flow that will run in the Foreground when an Incident has a new Comment and log the Comment. We will add to this later, but for now we just want to get this up and running.

‘Sync Flow on Incident Comment’ Flow
Flow name: Sync Flow on Incident Comment
Trigger: Record > Created or Updated
Table: Incident
Condition: Active is true AND Additional comments changes
Run Trigger: For each unique change
Advanced Options
Where to run the flow: Run flow in foreground
ACTIONS

  1. Action: Log
    • Level: Info
    • Message (toggle scripting for this input):

      // Retrieve the most recent Comment from the Incident
      return "From the Flow:\n\n" + fd_data.trigger.current.comments.getJournalEntry(1);
      

This log will show up in the system log table with a “Source” value of “Flow Designer.” Activate this flow and test it out by adding a Comment to any active Incident and checking the logs.

Note that you could create a custom Action to retrieve the Journal Entry from the Incident to provide a no-code method to retrieve the data for future use, but for now we’ll stick with using a script in the Message field to keep things simple.

Great! Now we can get the comment, but how do we do something with it without having to make the user wait until we are done?

Subflow With Inputs

We will use a Subflow to handle the asynchronous portion. To do this, we need to pass our synchronously obtained data to the Subflow so that it does not have to retrieve potentially newer information when it runs. We’ll cover how to make this Subflow run asynchronously from the Flow a bit later.

For this example, we will simulate a potentially slow external API call by adding a wait timer to our Subflow.

First, we create a new Subflow and add inputs for the Incident record and the Comment that we want to handle in the API call. This ensures that we have access to the comment that was entered at the time we trigger the Subflow and that we are not retrieving a more recently entered comment when the Subflow executes.

Next, we log a timestamp for when the Subflow begins processing. Then we add a 10-second Wait (you can increase this if you want, to allow yourself time to enter multiple Comments while the Subflow waits to execute during your testing). Finally, we log out the Comment from the Subflow Input. We can also add another log step where we repeat the script we used to retrieve the most recent Comment from the Incident to see if it differs from the Comment we received as a Subflow Input.

‘Async Subflow Handle Incident Comment’ Subflow
Subflow name: Async Subflow Handle Incident Comment
INPUTS & OUTPUTS

  1. First Input
    • Label: Incident
    • Name: incident
    • Type: Reference.Incident
    • Mandatory: True
  2. Second Input
    • Label: Comment
    • Name: comment
    • Type: String
    • Mandatory: True

ACTIONS

  1. Action: Log
    • Level: Info
    • Message: Async Subflow, before Wait.
  2. Flow Logic: Wait for a duration of time
    • Duration Type: Explicit duration
    • Wait for: 10s
  3. Action: Log
    • Level: Info
    • Message: From subflow, after wait. Input comment: [data pill] Subflow Inputs > Comment [data pill]
  4. Action: Log
    • Level: Info
    • Message (toggle scripting for this input):
      // Log out the most recent journal entry from the Incident
      return 'From subflow, after wait. Most recent journal entry:\n\n' + fd_data.subflow_inputs.incident.comments.getJournalEntry(1);
      
      

Notice that we do not create any Subflow outputs. Because we plan to run this asynchronously, the Flow that calls our Subflow will no longer be running when the Subflow completes and will not be able to do anything with any outputs. Additionally, any error handling for the actions carried out by the Subflow needs to be contained within the Subflow and not passed to the calling Flow for additional handling. For example, you may usually pass the API response status and body back to the calling Flow to check for and respond to any errors (like creating an Incident), but you will now need to do this within the Subflow.

Publish the Subflow to make it available for use. We now have the Asynchronous component of our Flow Designer Sync/Async design pattern.


Final Subflow, with Inputs shown in detail.

Call the Subflow from the Flow

We now have a Subflow to handle the Comment, but we need to add it to the Flow to complete our Flow Designer Sync/Async design. Add a new step to the Flow, select the “Subflow” option, and then locate the Subflow you created. Set the Inputs to be the Incident trigger record and the most recent Comment (the same value you passed into the Log step, which we scripted).

To make this Subflow run Asynchronously, make sure that the “Wait for Completion” checkbox is unchecked. Doing so will allow the Flow to complete and immediately return control to the Client transaction and complete the remainder of the Business Rule Loop without waiting for the Subflow.

For testing, we will add another Log step after we call the Subflow to observe the sequence of events.

Your Flow should now look like this:

‘Sync Flow on Incident Comment’ Flow
Flow name: Sync Flow on Incident Comment
Trigger: Record > Created or Updated
Table: Incident
Condition: Active is true AND Additional comments changes
Run Trigger: For each unique change
Advanced Options
Where to run the flow: Run flow in foreground
ACTIONS

  1. Action: Log
    • Level: Info
    • Message (toggle scripting for this input):

      // Retrieve the most recent Comment from the Incident
      return "From the Flow:\n\n" + fd_data.trigger.current.comments.getJournalEntry(1);
      
  2. Subflow: Async Subflow Handle Incident Comment
    • Wait For Completion: False
    • Incident: [data pill] Trigger > Incident Record [data pill]
    • Comment (toggle scripting for this input):
      // Return the most recent journal entry
      return fd_data.trigger.current.comments.getJournalEntry(1);
      
  3. Action: Log
    • Level: Info
    • Message: From sync flow, after calling Subflow.

Final Flow, showing the values passed to the Subflow inputs in detail.

NOTE: After testing this multiple times, I discovered that the “Wait for a duration of time” block in the Subflow is necessary to make the Subflow truly Asynchronous. Testing without the timer indicates that the main Flow will still hang while the Subflow is running, even though the “Wait for Completion” box is unchecked. It is unclear whether this is intended behavior. For now, I recommend adding a one-second wait to the Subflow while also unchecking the “Wait for Completion” box when calling the Subflow, to ensure that the Client transaction does not pause for the Subflow to complete. You are welcome to try this out on your own. I used gs.sleep() in the script for one of the Subflow Log steps instead of the “Wait for a duration of time” step and observed that the client transaction did appear to hang even when the “Wait For Completion” box was unchecked for the Subflow.

Summary

Now we have seen how to implement a Flow Designer Sync/Async design pattern that would normally be accomplished using Events and Script Actions. This gives us another tool to use for optimizing the User Experience while also ensuring access to accurate data for long-running asynchronous transactions.

How to Cut Your Storage Footprint (and Bill) by Using Clone Options in ServiceNow
https://servicenowguru.com/service-now-general-knowledge/cut-storage-footprint-bill-using-clone-options/
Tue, 18 Jun 2024 13:09:20 +0000

Reducing storage costs in ServiceNow can be a game-changer for organizations looking to optimize their IT expenditures. Leveraging advanced clone options effectively can significantly cut down your storage footprint as well as help secure your data by only cloning what is needed.  This means we must be good stewards of our resources even in Non-Production Environments (NPE).

Most instances have a limit on storage included per instance.  If you do not know what your storage limits are, work with your account team to ask how many TB per instance you have.  What many of us do not realize is that this also applies to our non-production instances.  This is where it really starts to add up!  If my pipeline goes DEV -> Test -> Pre-Prod -> Prod, then every 1TB over our limit in production is now 4X.  Imagine having an instance with a footprint of 100TB and then cloning that to all my non-production environments.  That would be 400TB of storage consumed!  Instead of using our IT budget for awesome features like AIOps, HRSD or GenAI, we would be stuck footing a storage bill ☹.  This is where something as simple as using Advanced Clone Options and Clone Profiles can be a game changer.

Clone Home Page

At the time of writing this article we are using the Washington release, which includes a new feature: the Clone Homepage.  We will be using the Clone Homepage for the purposes of this article.  Navigate to “System Clone -> Home” and you should see the new home page like this:

ServiceNow Clone Homepage Washington Release

Creating a clone profile:

Before we can get to the advanced settings, I like to use a clone profile.  This way I can tweak the advanced clone settings once and they apply to all future clones.  Let’s go create our first clone profile!

To get here from our home page we are going to:

  1. Click Configurations – that will take you to the overview page for configurations.
  2. Click on “View all” under Clone profiles – This will show you all the clone profiles on your instance
  3. Click on “New” – this will create our new Clone Profile

ServiceNow Clone Configuration Homepage Washington Release

ServiceNow Clone Profiles Create New Profile Washington Release
Advanced Clone Settings:

There are currently a few options for setting up our clone profile.  In my opinion, ServiceNow’s DOCS site does not explain each setting well enough for us to make an informed decision on what will work best for our organization.  Let’s take a few seconds and go over each one of them so you can make the best decisions for your instance.

Naming our Clone

Let’s give our Clone Profile a name and set it as the default.  This will ensure that this profile is used for all future clones by default, and remind you how awesome the Gurus are here at SNGuru!

ServiceNow Naming a Clone Profile Washington Release

General Settings

Now let’s go through the current options available on the “General” tab as this is where we will start to see some space savings in NPE!

  • The amount of data copied from the Task Table:
    • This is a MAJOR setting to help save on storage.
    • Some instances have never archived and have 10 years of data in the task table, do we really need all that data in NPE?
  • Apply On-Demand Backup
    • This is a personal preference, and the setting description does a good job of explaining what it does.

Exclusion Settings:

Next up is our Exclusions tab, and it is an important one.  This is the second part that can move the needle BIG time for storage.  To give you an idea, I have seen an instance with an audit log in excess of 40TB, and this was a game changer for our clone times as well as our storage footprint.  OOTB, our clone profile had 182 tables in the “Exclusions list”.  You’re probably asking: how do I know what tables to add here to save on storage?

I cannot provide screenshots for this portion so you will have to follow along via text.  If you have screenshots of this please add them into the comments below.

  1. Log in to the Support portal (formerly known as HI).
  2. Go to the “Automation Store” and find the item called “Database Footprint”.
  3. Once the item is open, select your production instance.
  4. Select the number of largest tables by size. It defaults to 10, so I select 40.
  5. Hit submit.

After hitting submit you will be presented with the top tables on your production instance that are taking up space.  From here we need to evaluate each table and decide if we can add it to our exclusion list!  Here are a few tables I exclude from clones to save on storage:

  • sys_flow_* //Any flow tables that have runtime data and logs
  • pa_snapshots
  • discovery_log
  • cmdb_multisource_data
  • sys_journal_field
  • cmdb

Once you have all the tables added, here are the settings we use for our clones to ensure we exclude tables in our Exclusion list, Audit and Attachment data.

Preserver Settings:

These won’t save you on storage, but we did not want to leave anything out, so as not to cause confusion.  I prefer to have each Non-Production instance have its own theme – the uglier the better, so I know it is not Production.  The settings below reflect my preferences.

Scheduling your Clone:

All the hard work is out of the way with our new clone profile, and it should be a breeze scheduling our next clone and all future clones.

Navigate to “System Clone -> Home” and select “Request clone”

 

On our clone request form, we should see our profile show up automagically, because we set it as the default above when we named it.  If not, select your Clone Profile and watch the settings apply.  As for locking your settings, I prefer to lock them to ensure that the clone moves forward with the settings I chose when requesting it.

Conclusion

By incorporating these clone options, you not only have an opportunity to save on storage costs but also enhance the performance and security of your sub-production instances.  Let us know in the comments how you plan to use these and let’s see who can save the most on their storage.  Happy Cloning!

How to use a ServiceNow Read Replica
https://servicenowguru.com/performance/how-to-use-servicenow-read-replica/
Tue, 04 Jun 2024 14:56:54 +0000

We have all heard about dynamic scaling or horizontal scaling in the cloud but can our SaaS offering from ServiceNow scale like this?  The answer, YES!  The platform can scale with more app nodes, but the real horsepower comes from scaling the back end using Read Replicas.

Before we go further, I would like to take a second and give a brief description for those of us asking, “What is a read replica and do I really need one?”  A read replica in ServiceNow is a database instance configured to handle read-only operations, helping to offload traffic from the primary database, improve performance, and improve the user experience through faster response times.

Do I have a Read Replica?

Before you go check if you have a Read Replica, please don’t everyone at once start banging down ServiceNow’s doors demanding Read Replicas.  Not everyone will have a Read Replica, nor will everyone need one.  Having a Read Replica is based on instance size, performance and many other factors.  Work with your Account Team to determine if one is right for your instance.

One of the larger instances I have worked on had a database footprint of 100TB+ and had multiple read replicas.  You’re probably asking, “How do I know if I need a read replica?” – we will not cover that here; however, we will go over how to check whether you have one:

First we can check our “Secondary Database Pools” to see if we have data in the table:

sys_db_pool

Once we verify that we have read replicas, we need to check our “categories”:

sys_db_category

If both of these tables show up on your instance and have data in the tables, then you have a read replica!  Now onto how we use it.

How to use it:

ServiceNow will tune your queries to automatically go to a read replica more often than not, without you, the customer, doing a thing.  However, there are some scenarios where you will want to use a query category to ensure a query is forced to the correct place, aka a Read Replica!

From our previous steps, we can see our categories (sys_db_category) but the two I find myself using to start with are “reporting” for report-type queries and scripts and “reroute” for most other things.

Scenario 1:

I want to query data and have NO intention to write back to the database:

In this scenario, I want to bring back a list of all Active incidents.  This can be an expensive query in a large-scale instance and could put undue load on the primary database.  Using the “setCategory” in our GlideRecord will force this query to use the read replicas configured within the “reroute” category.

// Return a list of all active incidents
var gr = new GlideRecord('incident');
gr.addActiveQuery();
gr.setCategory('reroute'); // This forces the query to a read replica
gr.query();
while (gr.next()) {
    // process each active incident (read-only work only)
}

Scenario 2:

In this scenario, I have a reporting engineer who needs to report off-platform and will be making an API query to bring back those same Active incidents.  Since this is reporting and we don’t want to hammer our primary DB we should instruct our API consumer to pass in a parameter that will ensure that this API query goes to a read replica:

URL Parameter:  sysparm_query_category=reporting

https://instance.service-now.com/api/now/table/incident?sysparm_query=active%3Dtrue&sysparm_query_category=reporting

When NOT to use a Read Replica:

If we are going to check whether a record exists before inserting it, then we should NOT use a Read Replica.  This is due to replication lag between the primary and the replica.
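For example, a check-then-insert pattern like the sketch below should stay on the primary database (the table and field names here are hypothetical):

```
// Deliberately NO setCategory() here: a read replica may lag behind
// the primary, miss a just-inserted row, and let a duplicate through.
var gr = new GlideRecord('u_external_ticket'); // hypothetical table
gr.addQuery('u_correlation_id', correlationId); // hypothetical field
gr.query();
if (!gr.next()) {
    gr.initialize();
    gr.setValue('u_correlation_id', correlationId);
    gr.insert();
}
```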

In closing, Read Replicas are not the only way to improve performance but are part of a holistic architecture to help improve platform performance.  How are you using read replicas today or how do you plan on using them in the future?  Let us know in the comments!

Pro tip:  Trawl your slow query log for the top 20 queries and fix those first. Then do the same next month.
