
Lead System Architect 7.4 Student Guide

© 2018 Pegasystems Inc., Cambridge, MA
All rights reserved.

Trademarks
For Pegasystems Inc. trademarks and registered trademarks, all rights reserved. All other trademarks or service marks are property of their respective holders. For information about the third-party software that is delivered with the product, refer to the third-party license file on your installation media that is specific to your release.

Notices
This publication describes and/or represents products and services of Pegasystems Inc. It may contain trade secrets and proprietary information that are protected by various federal, state, and international laws, and distributed under licenses restricting their use, copying, modification, distribution, or transmittal in any form without prior written authorization of Pegasystems Inc.

This publication is current as of the date of publication only. Changes to the publication may be made from time to time at the discretion of Pegasystems Inc. This publication remains the property of Pegasystems Inc. and must be returned to it upon request. This publication does not imply any commitment to offer or deliver the products or services described herein.

This publication may include references to Pegasystems Inc. product features that have not been licensed by you or your company. If you have questions about whether a particular capability is included in your installation, please consult your Pegasystems Inc. services consultant.

Although Pegasystems Inc. strives for accuracy in its publications, any publication may contain inaccuracies or typographical errors, as well as technical inaccuracies. Pegasystems Inc. shall not be liable for technical or editorial errors or omissions contained herein. Pegasystems Inc. may make improvements and/or changes to the publication at any time without notice.

Any references in this publication to non-Pegasystems websites are provided for convenience only and do not serve as an endorsement of these websites. The materials at these websites are not part of the material for Pegasystems products, and use of those websites is at your own risk.

Information concerning non-Pegasystems products was obtained from the suppliers of those products, their publications, or other publicly available sources. Address questions about non-Pegasystems products to the suppliers of those products.

This publication may contain examples used in daily business operations that include the names of people, companies, products, and other third-party publications. Such examples are fictitious and any similarity to the names or other data used by an actual business enterprise or individual is coincidental.

This document is the property of:
Pegasystems Inc.
One Rogers Street
Cambridge, MA 02142-1209
USA
Phone: 617-374-9600
Fax: 617-374-9620

www.pega.com

DOCUMENT: Lead System Architect Student Guide
SOFTWARE VERSION: Pega 7.4
UPDATED: January 2018

CONTENTS

COURSE OVERVIEW
  Before you begin
  Introducing the business scenario
    Front Stage Event Booking business scenario

ENTERPRISE DESIGN
  Designing Pega for the enterprise
    Introduction to designing Pega for the enterprise
    Designing the Pega enterprise application
    Application deployment and design decisions
    Security design principles
    Reporting and data warehousing
    How to define a release management approach
    Pega application monitoring
    Case interaction methods from external applications
  Setting up the Pega Platform
    Introduction to setting up Pega Platform
    Deployment options
    High availability
    Cluster topologies
    Planned and unplanned outages
    Hardware sizing estimation

STRATEGIC APPLICATION, AI AND AUTOMATION DESIGN
  Leveraging Pega applications
    Introduction to leveraging Pega applications
    Benefits of leveraging a Pega application
    Pega's application offerings
    How to customize a Pega application
  Leveraging AI and robotic automation
    Introduction to leveraging AI and robotic automation
    Artificial intelligence and robotic automation comparison

ASSET DESIGN AND REUSE
  Starting with Pega Express
    Introduction to starting with Pega Express
    Benefits of Pega Express
    Development roles
    How to ensure adoption of Pega Express
    How to reuse assets created in Pega Express
  Designing for specialization
    Introduction to designing for specialization
    Object Oriented Development in Pega
    Specialization design considerations
    How to choose the application structure
    Specialization and component applications
    Ruleset, class, and circumstance specialization
    Specializing an application by overriding rulesets
    Pattern inheritance and organization hierarchy specialization
    Specialization use cases
  Promoting reuse
    Introduction to promoting reuse
    Relevant records
    How to leverage built-on applications and components
    Application versioning in support of reuse
    The role of a COE in reuse

CASE DESIGN
  Designing the case structure
    Introduction to designing the case structure
    How to identify cases
    Case processing
    Case design - example one
    Case design - example two
  Assigning work
    Introduction to assigning work
    Push routing and pull routing
    How to leverage work parties in routing
    Get Next Work
    How to customize Get Next Work

DATA MODEL DESIGN
  Designing the data model
    Introduction to designing the data model
    Data model reuse layers
    How to extend a data class to higher layers
    How to maintain data integrity
  Extending an industry framework data model
    Introduction to extending an industry foundation data model
    Industry foundation data model benefits
    How to extend an industry foundation data model
    How to use integration versioning

USER EXPERIENCE DESIGN
  User experience design and performance
    Introduction to user experience design and performance
    How to identify functionality that impacts UX
    User experience performance optimization strategies
    How to design the user experience to optimize performance
  Conducting usability testing
    Introduction to conducting usability testing
    Usability testing
    How to conduct usability testing

SECURITY
  Defining the authorization scheme
    Introduction to defining the authorization scheme
    Authorization models
    How to create roles and access groups for an application
    How to configure authorization
    Rule security mode
  Mitigating security risks
    Introduction to mitigating security risks
    Security risks
    Content security policies
    Rule Security Analyzer
    How to secure an application

REPORTING
  Defining a reporting strategy
    Introduction to defining a reporting strategy
    How to define a reporting strategy
  Designing reports for performance
    Introduction to designing reports for performance
    Impact of reports on performance
    How to configure an application to improve report performance
    How to tune the database to improve report performance

BACKGROUND PROCESSING
  Designing background processing
    Introduction to designing background processing
    Background processing options
    Asynchronous integration
    Default agents

DEPLOYMENT AND TESTING
  Defining a release pipeline
    Introduction to defining a release pipeline
    DevOps release pipeline
    Best practices for team-based development
    Continuous integration and delivery
    Release pipeline testing strategy
    Modular development deployment strategies
  Assessing and monitoring quality
    Introduction to assessing and monitoring quality
    How to establish quality standards on your team
    How to create a custom guardrail warning
    How to customize the rule check-in approval process
  Conducting load testing
    Introduction to conducting load testing
    Load testing
    How to load test a Pega application
    Load testing best practices

POST PRODUCTION EVENTS
  Estimating hardware requirements
    Introduction to estimating hardware requirements
    Hardware estimation events
    How to submit a hardware sizing estimate request
  Handling flow changes for cases in flight
    Introduction to handling flow changes for cases in flight
    Flow changes for cases in flight
    How to manage flow changes for cases in flight
    How to use problem flows to resolve flow issues
    How to manage problem flows from the Flow Errors landing page
  Extending an application
    Introduction to extending an application
    How to extend existing applications

COURSE SUMMARY
  Lead System Architect summary

COURSE OVERVIEW


Before you begin

Lead System Architect overview
The way you build software changes when you use the Pega Platform and Pega Standard Applications. As the lead system architect, you are expected to know about all aspects of Pega, including enterprise application design, environment choices such as cloud or on-premise, artificial intelligence and robotics, and DevOps. You need to know the technical advantages of Pega and how to oversee one or more development teams to quickly implement Pega technology. You guide the creation of scalable solutions that perform well and deliver a great user experience. Leading this endeavor can be a daunting task.

In this course, you learn how to make the most appropriate design choices in the context of a business scenario. The Pega technologies discussed in this course each have a significant level of depth. As a result, you learn how to architect the best possible solution using both the skills you learn in this course and all of the resources available to you.

The exercises in this course are based on Pega Platform 7.3. You may notice slight navigation changes in the Pega Platform in subsequent releases; the provided instructions and screen captures can be easily applied to the newer versions. In other cases, supplemental material has been added to the course content. Look for references to the current version. Exercises in this course apply to: Pega Platform 7.3, Pega Platform 7.4.

Objectives
After completing this course, you should be able to:
• Design the Pega application as the center of the digital transformation solution
• Describe the benefits of starting with a Pega customer engagement or industry application
• Recommend appropriate use of robotics and artificial intelligence in the application solution
• Leverage assets created by business users who are building apps in Pega Express
• Design case types and data models for maximum reusability
• Design an effective reporting strategy
• Design background processes, user experience, and reporting for optimal performance
• Create a release management strategy, including DevOps when appropriate
• Ensure your team is adhering to development best practices and building quality application assets
• Evolve your application as new business requirements and technical challenges arise

Prerequisites
To succeed in this course, students should:
• Possess the Senior System Architect certification (CSSA)
• Have at least twelve months of experience building Pega applications
• Complete prerequisite course work. Refer to the Certified Lead System Architect page on the PDN for details.
• Pass the Lead System Architect Readiness exam


Introducing the business scenario

Front Stage Event Booking business scenario
Front Stage Event Booking assists customers with booking large-scale, high-profile corporate and musical events, hosting between 5,000 and 18,000 guests per event. Front Stage has been in business for 30 years and uses a range of technology. Some technology is old, such as the reservation system that runs on a mainframe. Some technology is new, such as the most recent mobile application that helps sales executives track leads.

Front Stage relies on the Information Technology (IT) department to maintain legacy applications, as well as to support their highly mobile sales organization. In the past, IT created applications that were difficult to use and did not meet the needs of the end users of those applications. In some cases, a new application slowed the business instead of making users more productive.

Front Stage is aware of several smaller event booking companies that are using newer technology to gain a competitive edge. These smaller companies have started to cut into the corporate event booking segment, and Front Stage sees a dip in sales in this segment as a result. Front Stage's CEO, Joe Schofield, recognizes that if Front Stage avoids investing in technology to transform the way it operates, Front Stage will be out of business in two years.

Your mission: Architect a Pega solution for Front Stage
During this course, you create an event booking solution for Front Stage. Using the business scenario document, you apply what you learn in each lesson to design the best technical solution to meet Front Stage's vision of digital transformation.


ENTERPRISE DESIGN


Designing Pega for the enterprise

Introduction to designing Pega for the enterprise
Pega is enterprise-grade software designed to transform the way organizations do business and the way they serve customers. The Pega application not only works with existing enterprise technologies, but also leverages those technologies to provide an end-to-end architectural solution.
After this lesson, you should be able to:
• Describe the design thinking and approach for architecting Pega for the enterprise
• Describe deployment options and how those deployment choices can affect design decisions
• Describe how Pega interacts with existing enterprise technologies
• Describe the design approach when architecting a Pega application


Designing the Pega enterprise application
You can easily be overwhelmed by the number of external applications and channels you need to work with to deliver a single application to your business users. With this in mind, the following video describes how to design the end-to-end Pega enterprise application, starting with Pega in the middle of your design.

Transcript
Pega is not just another web application that sits in your library of web or mobile apps. Pega radically transforms the way organizations do business. Pega can drastically reduce costs, build customer journeys, and fully automate work. Your job, as a lead system architect, is to take the digital transformation vision and turn it into business applications that perform real work for real people and drive business outcomes for even the largest of organizations.

It is easy to be overwhelmed by all of the existing technologies, channels, and integrations to legacy systems, and by trying to figure out how Pega fits into the big picture. But if you start with Pega in the middle and work your way out to those channels and systems of record, one application at a time, the vision becomes reality, release by release. The entire digital transformation of a large organization is not realized in one release of the application. At the start of a project, you probably only know a portion of what the end-to-end architecture will look like, and that is OK.

Instead of thinking channel-in or system-up, think Pega-out: intelligently routing work from channels through back-end systems, adding automation where it makes sense, and thinking end to end at all times. Whether you are designing your application with Pega Platform or starting with a Pega CRM or industry application, designing with Pega in the middle and thinking one application at a time allows you to implement your application based on what you know today, and gives you the freedom and flexibility to design for whatever comes tomorrow.


Application deployment and design decisions
Pega works the same regardless of the environment in which it is running. Pega runs the same on Pega Cloud as it does on a customer cloud, such as Amazon Web Services (AWS) or Google Cloud, and as it does on-premise. No matter the environment, Pega follows the standard n-tier architecture you may already recognize.

Because Pega is software that writes software, you can run your application anywhere or move it from one environment to another. For example, you could start building your application on a Pega Cloud environment, then move your application to an on-premise environment. The application functions the same way. Consider these two environment variations when designing your application:
• Requirements to deploy an enterprise archive (.ear)
• Requirements to use multitenancy

Enterprise archive (.ear) deployment
Pega can be deployed as an enterprise archive (.ear) or a web archive (.war). Use an enterprise archive (.ear) deployment if you have one or more of the following requirements:
• You need to use message-driven beans (JMS MDB) to handle messaging requirements
• You need to implement two-phase commit or transactional integrity across systems
• You need to implement Java Authentication and Authorization Service (JAAS) or Java Enterprise Edition (JEE) security
• You have enterprise requirements that all applications run on a JEE-compliant application server
Otherwise, a .war deployment is sufficient. For a listing of supported application servers and corresponding deployment archive types, see the Platform Support Guide.
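To make the first requirement concrete, the following is a minimal sketch of a JEE message-driven bean, the kind of component for which Pega recommends an .ear deployment. This is standard JEE code rather than a Pega rule, and the queue name, class name, and hand-off comment are hypothetical illustrations only.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // A message-driven bean needs an EJB container; this is the kind of
    // messaging requirement for which an .ear deployment is recommended.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/BookingRequestQueue")
    })
    public class BookingRequestListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    String payload = ((TextMessage) message).getText();
                    // Hand the payload to whatever service starts or updates the corresponding case.
                    System.out.println("Received booking request: " + payload);
                }
            } catch (Exception e) {
                // Rethrow so the container can redeliver or dead-letter the message.
                throw new RuntimeException("Failed to process JMS message", e);
            }
        }
    }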

Multitenancy
Multitenancy allows you to run multiple logically separate applications on the same physical hardware. This approach uses a shared layer for common processing across all tenants, yet allows isolation of data and customization of rules and processes specific to each tenant.

Multitenancy supports the business process outsourcing (BPO) model using a Software as a Service (SaaS) infrastructure. For example, assume the shared layer represents a customer service application offered by ServiceCo. Each partner of ServiceCo is an independent BPO providing services to a distinct set of customers. The partner (tenant) can customize processes unique to its business and can leverage the infrastructure and shared rules that ServiceCo provides.
When designing for multitenancy, consider:
• Release management and software development lifecycle – The multitenant provider must establish guidelines for deploying and managing instances, and must work with tenants to deploy, test, and monitor applications.
• Multitenant application architecture – The multitenant provider must describe the application architecture to the tenants and explain how tenant layers can be customized.
• System maintenance – Maintenance activities in a multitenant environment affect all tenants. For example, when a system patch is applied, all tenants are affected by the patch.
• Tenant life cycle – The multitenant provider and tenant must work together to plan disk and hardware resources based on the tenant's plans for the application.
• Tenant security – A multitenant environment has two administrators: the multitenant provider administrator and the tenant administrator. The multitenant provider sets up the tenant and manages security and operations in the shared layer. The tenant administrator manages security and operations of the tenant layer.
For more information on multitenancy, see the Multitenancy help topic.

KNOWLEDGE CHECK
Name two situations in which you need to make additional design considerations with respect to how the application is deployed.
Enterprise tier deployment (.ear) requirements and use of multitenancy.


Security design principles
Like performance, security is always a concern, no matter what application you work on or design. Whether on-premise or in a cloud environment, failing to secure your application exposes the organization to huge risk and can result in serious damage to the organization's reputation. Take security design and implementation very seriously and start the security model design early.

Your organization's security standards
Your organization likely has standards for how all applications authenticate users and what data can be accessed based on role. You may also be required to use third-party authentication tools when invoking web services, or when another application calls Pega as a service. Ask the enterprise architecture team or technical resources at the organization for the security standards so you know what you need to account for in your design and implement in the application.

An organization's security policies are often the result of industry regulatory requirements. Many industries have specific regulations on sharing data outside of the organization as well as within the organization. For example, in the United States, healthcare organizations comply with HIPAA (the Health Insurance Portability and Accountability Act). Educate yourself about the industry and government regulations that apply to the application you are designing.

If the application resides in a cloud environment or is a hybrid cloud/on-premise deployment, acquaint yourself with the network architecture and security protocols in place. Learn who is performing what role in maintaining the security of the application. For example, Pega describes the architecture, security controls, compliance with government standards, and monitoring services that Pega Cloud offers in the Pega Cloud Security Overview document. Work with the infrastructure teams at your organization to identify security contacts and the measures in place to protect application data and customer privacy.

Authentication design considerations
Authentication is proving to the application that you are who you say you are. Each organization has policies on how users are authenticated into the application. Most organizations use some form of single sign-on. If the organization is running an enterprise tier deployment, it may be using container-based authentication or JAAS or JEE security. If so, this affects how you design your authentication scheme and your application.


In short, the Pega application conforms to the organization's authentication policy. For more information on authentication protocols supported by Pega, see the PDN article Authentication in the Pega Platform.
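As a point of reference for what JAAS-based security involves, the following is a minimal, generic JAAS login sketch in plain Java. The login configuration entry name ("PegaAppLogin") and the credentials are hypothetical; the real configuration entry is owned by the application server and your security team, not by Pega.

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.login.LoginContext;

    public class JaasLoginExample {
        public static void main(String[] args) throws Exception {
            // Supplies the user name and password when the login module asks for them.
            CallbackHandler handler = callbacks -> {
                for (Callback callback : callbacks) {
                    if (callback instanceof NameCallback) {
                        ((NameCallback) callback).setName("operator@frontstage.com");
                    } else if (callback instanceof PasswordCallback) {
                        ((PasswordCallback) callback).setPassword("secret".toCharArray());
                    }
                }
            };
            // "PegaAppLogin" is a hypothetical entry in the server's JAAS login configuration.
            LoginContext context = new LoginContext("PegaAppLogin", handler);
            context.login();                     // throws LoginException on failure
            System.out.println(context.getSubject());
            context.logout();
        }
    }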

Authorization design considerations
Authorization is about who can do what and who can see what in the application. In general, give the minimum access needed to perform the job. This rule applies to both end users and developers. As you design your authorization scheme:
• Create a matrix of access roles, privileges, and attributes to be secured.
• Determine where to use role-based access control (RBAC) and attribute-based access control (ABAC) in your authorization scheme. For more information on RBAC and ABAC, see the PDN article Authorization models in the Pega Platform; a conceptual sketch of the difference follows this section.
• Define security on reports, attachments, and background processes. Background processes such as agents need an associated access group.
• Determine the level of auditing (history) required for each case type. Only write entries when necessary. Otherwise, you can impact performance when history tables become too large.
• Determine what level of rule auditing is required for developer roles.
• Secure developer access. Not every developer should have administrator rights. Your organization may also have restrictions on which developers can create activity rules or SQL connector rules.
• Leverage the Access Deny rule type. Some organizations enforce a deny-first policy. In this model, users must be explicitly granted privileges to access certain information. If you have similar requirements for the application you are designing, review usage of the Rule Security Mode setting on each access group. For more information on usage of this setting, see the PDN article Setting role privileges automatically for access group Deny mode.
Grasping the importance of security design and analysis of your application is essential. If you need a refresher, see the Customizing Security Requirements in Pega Applications course on Pega Academy. Also refer to the Security checklist for Pega Platform applications on the PDN throughout the design of your application.
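The distinction between RBAC and ABAC is easier to see in code. The following is a conceptual Java sketch only; it does not use Pega's access roles, Access When rules, or access control policies, and the role names and attributes are hypothetical.

    import java.util.Map;
    import java.util.Set;

    public class AccessCheckExample {
        // Role-based access control: the decision depends only on the user's roles.
        static boolean canOpenCase(Set<String> userRoles) {
            return userRoles.contains("BookingManager") || userRoles.contains("BookingCSR");
        }

        // Attribute-based access control: the decision also depends on attributes
        // of the user and of the specific case instance being accessed.
        static boolean canViewCase(Map<String, String> userAttrs, Map<String, String> caseAttrs) {
            return userAttrs.get("region").equals(caseAttrs.get("region"))
                    && !"Confidential".equals(caseAttrs.get("sensitivity"));
        }

        public static void main(String[] args) {
            System.out.println(canOpenCase(Set.of("BookingCSR")));   // true: role is enough
            System.out.println(canViewCase(
                    Map.of("region", "EMEA"),
                    Map.of("region", "EMEA", "sensitivity", "Public"))); // true: attributes match
        }
    }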

KNOWLEDGE CHECK
When should you begin the design of your security model?
Begin designing your security model as early as possible. Several factors can impact how you implement security in the application. Be aware of those factors to make sure your application meets the organization's security standards. Failing to meet these standards can prevent your application from going to production, and improperly securing your application opens your organization to unnecessary risk.


Reporting and data warehousing
Organizations often want to combine data from web applications, legacy applications, and other sources in order to make decisions in real time or near real time. To make these decisions, many organizations use business intelligence software to collect, format, and store the data, and provide software to analyze this data.


A data warehouse is a system used for reporting and data analysis. The data warehouse is a central repository of integrated data from one or more separate sources of data. The extract, transform, and load (ETL) process prepares the data for use by the data warehouse. The following conceptual image illustrates a typical end-to-end process of extracting data from systems of record and storing the data in the warehouse, then making that data available to reporting tools.

The key factor that determines whether you design your reports in the Pega application or leverage an external reporting tool is the impact on application performance. For example, if your reporting requirements state that you need to show how many assignments are in a workbasket at any given time, creating a report on the assignment workbasket table is appropriate. If you need to analyze multiple years of case information to perform some type of trending analysis, use reporting tools suited for that purpose instead. You can provide a link to those reports from the end user portal in the Pega application.

Business Intelligence Exchange (BIX)
Business Intelligence Exchange (BIX) allows you to extract data from your production application and format the data to make it suitable for loading into a data warehouse. BIX is an optional add-on product consisting of a ruleset and a stand-alone Java program that can be run from a command line. Data from the BIX process can be formatted as XML or comma-separated values (CSV), or can be output directly to a database. The following diagram depicts the process of extracting the data from the Pega database and preparing the data for use by downstream reporting processes.

For more information on BIX, see the help topic Business Intelligence Exchange.

Archiving and purging data
Another facet of the data management and warehousing solution is planning how and when to purge data from the production system. Over time, the work and history tables can grow significantly. In addition to making this data available for reporting from a data warehouse, create a strategy for managing the size of these tables. This strategy could include partitioning database tables or moving the data to a staging database. This strategy could also involve purging this data from the database after it has been archived in the warehouse.
Note: Pega provides a wizard for purging data from production tables. For more information on purging data using the wizard, see the help topic Purge/Archive wizard.


KNOWLEDGE CHECK
What is the primary reason for using an external reporting tool instead of Pega reporting?
An external reporting tool is used because of the potential impact on system performance. If you need a report that does heavy analysis or trending over large quantities of data, use a tool meant for that purpose. Pega can handle this type of reporting, but be aware of the impact on system performance, particularly when embedding reports in end user portals.


How to define a release management approach
Depending on the application release model, development methodologies, and culture of the organization, you see differences in the process and time frame in which organizations deliver software to production. Some organizations take more time moving new features to production because of industry and regulatory compliance. Some have adopted automated testing and code migration technologies to support a more agile delivery model.

Organizations recognize the financial benefit of releasing application features to end users and customers faster than their competitors, and many have adopted a DevOps approach to streamline their software delivery life cycle. DevOps is a collaboration between Development, Quality, and Operations staff to deliver high-quality software to end users in an automated, agile way. By continuously delivering new application features to end users, organizations can gain a competitive advantage in the market. Because DevOps represents a significant change in culture and mindset, not all organizations are ready to immediately embrace DevOps.

These are your tasks as the technical lead on the project:

1. Assess the organization's existing release management processes and tooling. Some organizations may already work with a fully automated release pipeline. Some organizations may use limited automated testing or scripts for moving software across environments. Some organizations may perform all release management tasks manually.
2. Design a release management strategy that achieves the goal of moving application features through testing and production deployment, according to the organization's release management protocols.
3. Evolve the release management process over time to an automated model, starting with testing processes. The rate of this evolution depends on the organization's readiness to adopt agile methodologies and rely on automated testing, software migration tools, and shared repositories.
Important: While setting up your team's release management practices, identify a Release Manager to oversee and improve these processes. The Release Manager takes care of creating and locking rulesets and ensures that incoming branches are merged into the correct version.


Release pipeline
Whether or not the organization releases software in an automated way, most organizations have some form of a manual (or semi-automated) release pipeline. The following image illustrates the checkpoints that occur in the release pipeline.

This pipeline highlights developer activities and customer activities. Developer activities include:
• Unit testing
• Sharing changes with other developers
• Ensuring changes do not conflict with other developers' changes
Once the developer has delivered changes to the customer, customer activities typically include:
• Testing new features
• Making sure existing features still work as expected
• Accepting the software and deploying it to production
These activities occur whether or not you are using an automated pipeline. The Standard Release process described in Application release management for Pega Platform explains the tasks of packaging and deploying changes to your target environments. If you are on Pega Cloud, be aware of certain procedures when promoting changes to production. For more information, see Change management in Pega Cloud.


Moving to an automated pipeline
In an organization that deploys software with heavy change management processes and governance, you contend with old ways of doing things. Explain the benefits of automating these processes, and explain that moving to a fully automated delivery model takes time. The first step is to ensure that the manual processes in place, particularly testing, have proven to be effective. Then, by automating bit by bit over time, a fully automated pipeline emerges.

When discussing DevOps, the terms continuous integration, continuous deployment, and continuous delivery are frequently used. Use the following definitions for these terms:
• Continuous integration – Continuously integrating changes into a shared repository multiple times per day
• Continuous delivery – Always ready to ship
• Continuous deployment – Continuously deploying or shipping (no manual process involved)
Automating and validating testing processes is essential in an automated delivery pipeline. Create and evolve your automated test suites using Pega Platform capabilities along with industry testing tools. Otherwise, you are simply automating the promotion of code to higher environments, potentially introducing bugs that are found by your end users and are more costly to fix. For more information on the DevOps pipeline, see the DevOps release pipeline overview.
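As an illustration of the kind of automated check that belongs in such a pipeline, here is a plain JUnit 5 unit test. The calculator class and the fee rule it verifies are hypothetical; Pega-specific rules would instead be covered by Pega's own automated unit testing capabilities and industry testing tools.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Runs on every build in a continuous integration pipeline, before changes are promoted.
    class BookingFeeCalculatorTest {

        @Test
        void totalFeeIncludesVenueSurcharge() {
            BookingFeeCalculator calculator = new BookingFeeCalculator();
            // A 10,000 base fee with a 5% surcharge should come to 10,500.
            assertEquals(10_500.00, calculator.totalFee(10_000.00, 0.05), 0.01);
        }
    }

    // Hypothetical class under test, shown only to keep the example self-contained.
    class BookingFeeCalculator {
        double totalFee(double baseFee, double surchargeRate) {
            return baseFee * (1 + surchargeRate);
        }
    }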

KNOWLEDGE CHECK
Is the goal of your release management strategy to move the organization to DevOps?
No. The goal of your release management strategy is to implement a repeatable process for deploying high-quality applications so users of that application can start realizing business value. Over time, as those processes become repeatable, they are ideal for automation. Continuous integration and continuous delivery (and eventually, continuous deployment) benefit the organization and often give it a competitive advantage.


Pega application monitoring
Many organizations have application performance monitoring (APM) tools in place to track and report on application performance and responsiveness. While these tools can report on data such as memory and CPU usage on your database and application servers, they do not provide detailed information about the health of the Pega application itself. Pega provides two tools designed to monitor the Pega application and provide recommendations on how to address the alerts it generates. These tools complement any APM tools you might be using to give you a complete picture of the health of your Pega application.
• Autonomic Event Services (AES) – AES monitors on-premise applications. AES is installed and managed on-site.
• Predictive Diagnostic Cloud (PDC) – Pega PDC is a Pega-hosted Software as a Service (SaaS) application that monitors Pega Cloud applications. PDC can also be configured to monitor on-premise applications.
The tool you use depends on your monitoring requirements and whether you want to customize the monitoring application. The following table compares AES and PDC.

                                          PDC            AES
    Hardware provisioning                 Pega           Customer
    Installation and upgrades             Pega           Customer
    Ability to customize                  Upon request   Fully customizable
    Release schedule                      Quarterly      Yearly
    Communication with monitored nodes    One-way        Two-way
    Active system management (restart
      agents, listeners, quiesce node)    Not available  Available

Both AES and PDC monitor the alerts and health activity for multiple nodes in a cluster. Both send you a scorecard that summarizes application health across nodes. The most notable difference, from an architecture standpoint, is that AES communicates two ways with the monitored nodes, allowing you to manage processes on those nodes, such as restarting agents and quiescing application nodes. You can use AES or PDC to monitor development, test, or production environments. For example, you can set up AES to monitor a development environment to identify any troublesome application area before promoting to higher environments.

The System Management Application (SMA) can be used to monitor and manage activity on an individual node. SMA is built on Java Management Extensions (JMX) and provides a standard API to monitor and manage resources either locally or by remote access.
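Because SMA is built on JMX, any standard JMX client can reach a node that has remote JMX access enabled. The sketch below connects to a hypothetical host and port and reads a standard JVM memory MBean; it does not show Pega-specific MBeans, whose names vary by release.

    import java.lang.management.MemoryMXBean;
    import javax.management.JMX;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class JmxHeapProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical host and port; the node must be started with remote JMX access enabled.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://pega-node-1.example.com:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection connection = connector.getMBeanServerConnection();
                // Read the standard JVM memory MBean exposed by every Java process.
                MemoryMXBean memory = JMX.newMXBeanProxy(connection,
                        new ObjectName("java.lang:type=Memory"), MemoryMXBean.class);
                System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
            }
        }
    }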


Pega Platform continually generates performance data (PAL) and issues alerts if the application exceeds thresholds that you have defined. The following diagram compares how AES and PDC access monitored nodes to gather and display that performance data.

For more information on AES, PDC, and SMA, see the following resources:
• Autonomic Event Services (AES) landing page
• Predictive Diagnostic Cloud (PDC) landing page
• System Management Application (SMA) help topic


KNOWLEDGE CHECK
What are some differences between PDC and AES?
The Autonomic Event Services (AES) application communicates with the monitored system in a two-way fashion; AES allows you to manage requestors, agents, and listeners from the AES console. The Predictive Diagnostic Cloud (PDC) only reads data from, and does not communicate back to, the monitored system.


Case interaction methods from external applications
You can expose Pega case types to external applications by generating mashup code or by generating microservice code from within the case type settings in Designer Studio. The method you choose depends on your use case and requirements.

Pega Web Mashup
Pega Web Mashup, formerly known as the Internet Application Composer (IAC), allows you to embed mashup code in any website architecture. Use this option when you need to embed Pega UI content into the organization's website, whether it is hosted on-premise or on Pega Cloud. For example, you could embed a credit card application case type into a bank's corporate website.

For more information on deployment and configuration options, see the Pega Web Mashup landing page on the PDN.


Microservices
A microservice architecture is a method for developing applications using independent, lightweight services that work together as a suite. In a microservices architecture, each service participating in the architecture:
• Is independently deployable
• Runs a unique process
• Communicates through a well-defined, lightweight mechanism
• Serves a single business goal
The microservice architectural approach is usually contrasted with the monolithic application architectural approach. For example, instead of designing a single application with Customer, Product, and Order case types, you might design separate services that handle operations for each case type. Exposing each case type as a microservice allows the service to be called from multiple sources, with each service independently managed, tested, and deployed.

While Pega Platform itself is not a microservice architecture, the Pega Platform complements the microservice architectural style for the following reasons:
• You can expose any aspect of Pega (including cases) as a consumable service, allowing Pega to participate in microservice architectures. For more information on the Pega API, see the Pega API for the Pega Platform PDN article. (A sketch of calling such a service follows this list.)
• You can create this service as an application or as an individual service that exists in its own ruleset.
• You can reuse services you create across applications, leveraging the Situational Layer Cake for additional flexibility in what each service can do, without overloading the service.
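For example, a consuming application can create a case over REST. The sketch below uses the Java 11 HTTP client; the endpoint path, case type ID, credentials, and request fields are assumptions based on the version 1 Pega API and the Front Stage scenario, so verify them against your installation's API documentation.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class CreateCaseExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical case type and property; replace with values from your application.
            String body = "{ \"caseTypeID\": \"FSG-Booking-Work-Event\", "
                        + "\"content\": { \"EventName\": \"Annual Meeting\" } }";
            String auth = Base64.getEncoder().encodeToString("operator:password".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://pega.example.com/prweb/api/v1/cases"))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Basic " + auth)
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // A successful create typically returns the new case ID in the response body.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }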

Tip: Microservice architecture is a broad topic. Researching benefits and drawbacks of this style before committing to a microservice architecture is recommended. For further guidance, see the Microservices article by Martin Fowler.

KNOWLEDGE CHECK
What is the difference between exposing a case type using a mashup and exposing a case type using a microservice?
A mashup allows you to embed the entire case type UI into the organization's web application(s). A microservice allows you to call a specific operation on a case type (or other Pega objects, such as assignments) to run a single-purpose operation from one or more calling applications.


Setting up the Pega Platform

Introduction to setting up Pega Platform
Your application can be deployed on-premise or in a cloud environment. To set up Pega Platform optimally for the application, you need to understand the profile and operational requirements of that environment. In this lesson, you look at deployment options, high availability, and hardware sizing, as well as planned and unplanned outages.
After this lesson, you should be able to:
• Compare deployment options
• Architect an environment with high availability
• Request hardware sizing estimates for an environment
• Take a node out of service with minimal disruption


Deployment options
Because of Pega Platform's standards-based open architecture, you have maximum flexibility in deploying and evolving your applications. Pega Platform can run on-premise in different operating system environments with any of the popular application servers and databases. In addition, Pega Platform can be made available as a cloud application for development, testing, and production. You can mix approaches, with development and test environments on cloud, and then move production-ready applications to an on-premise environment.

On-premise
On-premise refers to systems and software that are installed and operate on customer sites, instead of in a cloud environment.

Pega Platform requires two pieces of supporting software in your environment:
• A database to store the rules and work objects used and generated
• An application server that supports the Java EE specification, which provides a run-time environment and other services (such as database connections, Java Message Service (JMS) support, and connector and service interfaces to other external systems)

Cloud choice
Running on the cloud in any form is an attractive option for many organizations. Pega Platform provides flexible support across different cloud platforms and topology managers. Your platform choice depends on your needs and your environment.

The three basic models for deploying Pega Platform on the cloud are:
• Pega Cloud – Pegasystems' managed cloud platform service offering, architected for Pegasystems' applications. Pega Cloud offers the fastest time to value. For more information about Pega Cloud, see the PDN article Pega Cloud.
• Customer Managed Cloud – Customer-managed cloud environments run within private clouds or on Infrastructure-as-a-Service (IaaS) offerings delivered by providers such as Amazon Web Services, Microsoft Azure, or Google Cloud Platform.
• Partner Managed Cloud – Partner-managed cloud environments are owned and controlled by business partners. A partner-managed cloud delivers the Pega Platform as a custom hosting solution or purpose-built application service provider.

Pivotal Cloud Foundry
To simplify IT operations and automate system management tasks, deploy the Pega Platform on the Pivotal Cloud Foundry (PCF) Platform-as-a-Service (PaaS) infrastructure.

PCF is a topology manager. Using a topology manager like PCF still requires one of the cloud providers above. For more information, see the PDN article Deploying Pega Platform Service Broker on Pivotal Cloud Foundry by using the Ops Manager. To have greater control over the deployment, use BOSH to deploy the Pega Service Broker. For more information, see the PDN article Deploying the Pega Platform Service Broker on Cloud Foundry by using BOSH.


Docker container
Pega can run as a Docker container. Docker is a Container-as-a-Service (CaaS) infrastructure. Docker is a cost-effective and portable way to deploy a Pega application because you do not need any software except the Docker container and a Docker host system.

Developers use Docker to eliminate certain problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers. Enterprises use Docker to build agile software delivery pipelines. Containers provide a way to package software in a format that can run isolated on a shared operating system. For more information on Docker support, see the PDN article Pega Platform Docker Support.

KNOWLEDGE CHECK
What deployment options are supported for Pega Platform?
On-premise and cloud deployments are supported for Pega Platform. The three options for cloud are Pega Cloud, Customer Managed Cloud, and Partner Managed Cloud.


High availability
Application outages can be costly to organizations. The organization loses business when the application is not available, and it may also be subject to penalties and fines. An unplanned application outage can also damage the organization's reputation.

Availability is the percentage of time your application is functioning. High availability (HA) has no standard definition because up-time requirements vary. In general, HA refers to systems that are durable and likely to operate continuously without failure for a long time. You can correlate the business value of HA to the cost of the system being unavailable. An industry-standard way of referring to availability is in terms of nines:

    Availability (%)          Downtime/year
    99.9 (three nines)        8.76 hours
    99.99 (four nines)        52.56 minutes
    99.999 (five nines)       5.26 minutes
    99.9999999 (nine nines)   31.5569 milliseconds

High availability architecture
To reduce the risk of unplanned application downtime, design the application to withstand this risk. Designing a highly available application means building redundancy and failover into your application so that these risks are minimized. For example, one implementation can have several load balancers, physical and virtual machines, shared storage repositories, and databases.


Clustering
The concept of clustering involves taking two or more Pega Platform servers and organizing them to work together to provide higher availability, reliability, and scalability than can be obtained by using a single Pega Platform server. The application servers can be on-premise or in a cloud and must have a means of dynamically allocating servers to support increased demand. Pega Platform servers are designed to support redundancy among various components, such as connectors, services, listeners, and search. The exact configuration varies based on the specifics of the applications in the production environment.

Load Balancing
Load balancing is a methodology to distribute the workload across multiple nodes in a multinode clustered environment. Load balancers monitor node health and direct requests to healthy nodes in the cluster. Because requestor information (the session, PRThread, and clipboard pages) is stored in memory, Pega requires all requests from the same browser session to go to the same JVM. In network parlance, this is known as sticky sessions or session affinity. Session affinity is configured on the load balancer. It ensures that all requests from a user are handled by the same Pega Platform server.

Load balancing Pega nodes in a multinode clustered environment can be achieved by using hardware routers that support "sticky" HTTP sessions. Cisco Systems Inc. and F5 Networks Inc. are examples of vendors who offer such hardware. Software, virtual, and cloud-based load balancer solutions, such as Amazon EC2's elastic load balancing, are also available. The load balancers must support session affinity and cookie persistence. Production load balancers offer a range of options for configuring session affinity. The Pega Platform supports cookie-based affinity. You can configure cookies for high availability session affinity using the following settings:
• session/ha/quiesce/customSessionInvalidationMethod
• session/ha/quiesce/cookieToInvalidate

SSO Authentication
Single sign-on (SSO) authentication, though not required, provides a seamless experience for the end user. Without SSO, the user reauthenticates when the user session is moved to another node.

Shared Storage
Users' session data is persisted in shared storage in the event of failover or server quiesce. The shared storage allows stateful application data to be moved between nodes. Pega supports a shared storage system that can be a shared disk drive, a Network File System (NFS), or a database. All three of these options require read/write access so that Pega can write data. By default, Pega uses database persistence in an HA configuration. If an organization decides on a different shared storage system, it needs to make sure the shared storage integrates with Pega 7. It is essential to configure shared storage to support quiesce and crash recovery.

Split schema
The database tier must have a failover solution that meets production service-level requirements. The Pega Platform database has a split schema. With a split schema, the Pega database repository is defined in two separate schemas: Rules and Data. The Rules schema includes the rule base and system objects, and the Data schema includes data and work objects. Both can be configured during installation and upgrade. When users save a change to a process flow, they are saving a record to the Rules schema. The Data schema stores run-time information such as process state, case data, assignments, and audit history. The split schema design is used to support solutions that need to be highly available. A split schema can also be used to separate customer transactional data and rules from an operational perspective. For example, data typically changes more than rules, so it can be backed up more frequently. Rule upgrades and rollbacks can be managed independently from data.

With a split schema, rolling restarts can be performed to support upgrades, reducing server downtime. In this example, the rules schema is copied and upgraded once. Each node in the cluster is then quiesced, redirected to the updated rules schema, and then restarted one at a time. For more information on high availability configuration, see the High Availability Administration Guide.

KNOWLEDGE CHECK
Why is using single sign-on in a high-availability architecture recommended?
Because without SSO the user must reauthenticate when the user session is moved to another node; with SSO, the move is seamless to the user.


Cluster topologies

Horizontal and vertical cluster topologies
The Pega Platform is most often deployed as a traditional JEE application, running within a JVM inside an application server. Pega engines can be scaled vertically or horizontally and are typically deployed in conjunction with a load balancer and other infrastructure resources such as proxy and HTTP servers.
Note: Load balancers must support session affinity for users who are leveraging the Pega UI. Service clients can invoke Pega using either stateless or stateful sessions.
Horizontal scaling means that multiple application servers are deployed on separate physical or virtual machines. Vertical scaling means that multiple Pega 7 servers are deployed on the same physical or virtual machine by running them on different port numbers. Pega 7 natively supports a combination setup that uses both horizontal and vertical clusters. A cluster may have heterogeneous servers in terms of hardware and operating system. For instance, some servers can use Linux, and others can use Windows. Usually, the only restriction is that all servers in a cluster must run the same Pega 7 version. Pega engines can be deployed across heterogeneous platforms as shown here.


For example, a collection of WebSphere nodes can be used to support a human user community, and a JBoss node can be deployed to handle low-volume service requests made by other applications. The Pega engine can also be deployed on zSeries mainframes. This flexible deployment model allows Pega customers to build a process once and deploy it seamlessly across multiple platforms and channels without redesigning or rebuilding the process each time.

In-memory cache management options for multi-node cluster topologies
Pega Platform supports two in-memory cache management options for multi-node cluster topologies: Hazelcast and Apache Ignite.

    Option      Embedded    Client-Server
    Hazelcast   Default     Not Supported
    Ignite      Supported   Supported

Hazelcast
Hazelcast is embedded in the Pega Platform and is the default in-memory cache management option, or in-memory data grid, used by Pega in multinode cluster topologies.

Hazelcast is an open source in-memory data grid based on Java. In a Hazelcast grid, data is evenly distributed among the nodes of a cluster, allowing for horizontal scaling of processing and available storage. Hazelcast runs only embedded in every node and does not support a client-server topology.

Apache Ignite
Pega can use Apache Ignite instead of Hazelcast for in-memory cache management.

Apache Ignite is an in-memory cache management platform that can provide greater speed and scalability in large multinode clusters. Apache Ignite supports high-performance transactions, real-time streaming, and fast analytics in a single, comprehensive data access and processing layer.

Unlike Hazelcast, Apache Ignite supports client-server mode. Client-server mode provides greater cluster stability in large clusters and supports the ability for servers and clients to be scaled up separately. Use client-server mode for large production environments that consist of more than five cluster nodes, or if you experience cluster instability in clusters that contain fewer nodes. The number of nodes that can lead to cluster instability depends on your environment, so the decision to switch to client-server mode is made individually. Client-server mode is a clustering topology that separates Pega Platform processes from cluster communication and distributed features. The client-server clustering technology has separate resources and uses a different JVM from the Pega Platform. The client nodes are Pega Platform nodes that perform application jobs and call the Apache Ignite client to facilitate communication between Pega Platform and the Apache Ignite servers. The servers are stand-alone Apache Ignite servers that provide base clustering capabilities, including communication between the nodes and distributed features. At least three Apache Ignite servers are required for one cluster.

The client-server topology adds value to the business by providing the following advantages:
• The cluster member life cycle is independent of the application life cycle, since nodes are deployed as separate server instances.
• Cluster performance is more predictable and reliable because cluster components are isolated and do not compete with the application for CPU, memory, and I/O resources.
• Identifying the cause of any unexpected behavior is easier because cluster service activity is isolated on its own server.
• A client-server topology provides more flexibility, since clients and servers can be scaled independently.
For more information about enabling client-server mode, see the Pega Platform Deployment Guides.


KNOWLEDGE CHECK
When would you use the client-server cluster topology?
You use the client-server cluster topology for large production environments that consist of more than five cluster nodes or if you experience cluster instability.


Planned and unplanned outages
Applications become unavailable to users for two reasons: system maintenance and system crashes. In either situation, no loss of work should occur, and work can continue to be processed.

Planned outage
In a planned outage, you know when application changes are taking place. For example, if you need to take a node out of service to increase the heap size on the JVM, you can take that node out of service and move users to another node without users noticing any difference.

Quiesce

The process of quiescing provides the ability to take a Pega Platform server out of service for maintenance or other activities. To support quiescing, the node is first taken out of load balancer rotation. The Pega application passivates, or stores, the user session, and then activates the session on another node. Passivation works at the page, thread, and requestor level. The inverse of passivation is activation. Activation brings the persisted data back into memory on another node. You can quiesce a node from:

- The High Availability landing pages in Designer Studio (if you are not using multiple clusters)
- The System Management Application (SMA)
- The Autonomic Event Services (AES) application (recommended for use with multiple clusters)
- REST API services (starting in v7.4)
- A custom Pega Platform management console by incorporating cluster management MBeans

When quiesced, the server looks for the accelerated passivation setting. By default, Pega Platform sets the passivation timeout to five seconds. After five seconds, it passivates all existing user sessions. When users send another request, their user session activates on another Pega Platform server without loss of information. The five-second passivation timeout might be too aggressive in some applications. System administrators can increase the timeout value to reduce load. The timeout value should be large enough that a typical user can submit a request.
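As a purely conceptual illustration of passivation and activation (not Pega's internal implementation; the class and method names are invented for this sketch), the idea is that requestor state is serialized to storage that every node can reach, so any node can restore it on the next request:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Conceptual sketch only: passivation writes requestor state to shared storage so that
 *  any node can activate it later. This is not how Pega Platform implements it. */
public class SessionStoreSketch {
    // Stand-in for shared storage (database or shared file system) visible to all nodes.
    private static final Map<String, byte[]> SHARED_STORE = new ConcurrentHashMap<>();

    /** Passivate: persist the requestor's serialized clipboard pages and drop them from memory. */
    public static void passivate(String requestorId, byte[] serializedPages) {
        SHARED_STORE.put(requestorId, serializedPages);
    }

    /** Activate: another node pulls the persisted state back into memory on the next request. */
    public static byte[] activate(String requestorId) {
        return SHARED_STORE.remove(requestorId);
    }
}
```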


Once all existing users are moved from the server, the server can be upgraded. Once the process is complete, the server is enabled in the load balancer and quiesce is canceled.

High availability roles

Pega Platform provides two roles (PegaRULES:HighAvailabilityQuiesceInvestigator and PegaRULES:HighAvailabilityAdministrator) that you can add to access groups for administrators who manage highly available applications. The High Availability Quiesce Investigator role lets administrators perform diagnostics or debug issues on a quiesced system. When quiesced, the system reroutes all users without this role. The High Availability Administrator role gives administrators access to the High Availability landing pages. These users can also investigate issues on a quiesced system.

Out-of-place upgrade

Pega Platform provides the ability to perform out-of-place upgrades with little or no downtime. An out-of-place, or parallel, upgrade involves creating a new schema, migrating rules from the old schema to the new schema, and upgrading the new schema to the new Pega release. The data instances in the existing data schema are also updated. Once the updates are complete, the database connections are modified to point to the new schema, and the nodes are quiesced and restarted one at a time in a rolling restart.


In-place upgrade

Pega Platform also provides the ability to perform in-place upgrades, which can involve significant downtime because existing applications need to be stopped. Pre-upgrade scripts or processes may then need to be run. Before the new version of the Pega rulebase is imported, the database schema is updated manually or automatically using the Installation and Upgrade Assistant (IUA). EAR or WAR files, if used, are undeployed and replaced with the new EAR and WAR files, and the new archives are loaded. Afterward, additional configuration changes may be made using scripts or the Upgrade Wizard.


Unplanned outage

With Pega Platform configured for high availability, the application can recover from both browser and node crashes. Pega Platform uses dynamic container persistence for relevant work data. The dynamic container maintains UI and work states, but not the entire clipboard.

Node crash

Pega saves the structure of the UI and relevant work metadata on shared storage devices for specific events. When a specific value is selected on a UI element, the form data is stored in shared storage as a requestor clipboard page. When the load balancer detects a node crash, it redirects traffic to another node in the pool of active servers. The new node that processes the request detects the crash and uses the UI metadata in shared storage to reconstruct the UI. On redirection to a new node, the user must reauthenticate, so a best practice is to use Single Sign-on to avoid user interruption. Because the user's clipboard is not preserved after the crash, data that has been entered but not committed on assign, perform, and confirm harnesses is lost.

Browser crash

When the browser terminates or crashes, users connect to the correct server based on session affinity. The state of the user session is recovered without loss because the clipboard preserves both the UI metadata and any data entered on screens and submitted.

Crash recovery matrix

- UI is redrawn: Yes after a browser crash; yes after a node crash.
- User must reauthenticate: After a browser crash, no if the user is redirected to the same node, yes if the authentication cookie was lost. After a node crash, no with Single Sign-on, yes without Single Sign-on.
- Data entry loss: After a browser crash, none if the user is redirected to the same node; data not committed is lost if the authentication cookie was lost. After a node crash, data not committed is lost.

KNOWLEDGE CHECK

When would you quiesce a node? When taking the node out of service for maintenance or other activities.


Hardware sizing estimation

Pega offers the hardware sizing estimation service to help organizations with hardware resource planning. The sizing estimate can be applied to on-premises, Pega Cloud, or customer cloud implementations. Using either known or predicted data about the application's usage, Pega applies complex modeling techniques to estimate hardware sizing for the application server disk, CPU, JVM, and database requirements. The hardware sizing estimation team constantly reviews, updates, and enhances the model using global production field information and feedback, in-house metrics, and periodic performance benchmarks to provide the best estimate possible. Sizing estimates can be applied to any environment at any point in the project, from sales through enterprise planning and production. The hardware sizing estimate can be performed again after your application goes into production. Events such as adding new case types or adding more concurrent users require a new hardware estimate to ensure that your application can handle the additional load. The hardware sizing estimation service uses a questionnaire-based process to gather information. To request an estimate, send an email to [email protected].

KNOWLEDGE CHECK

Which environments can the hardware sizing estimate be requested for? The hardware sizing estimation can be applied to any environment at any point in a project.


STRATEGIC APPLICATION, AI, AND AUTOMATION DESIGN


Leveraging Pega applications

Introduction to leveraging Pega applications

Pega's customer engagement and industry applications can shorten the delivery time frame of your business application. By determining the differences, or gaps, between what the Pega application provides and your organization's requirements, you can deliver a minimum set of functionality to begin providing value in days and weeks, not months and years. By the end of this lesson, you should be able to:

- Explain the benefits of using Pega's application offerings
- Describe how the gap analysis approach impacts your application design tasks


Benefits of leveraging a Pega application

Building software applications that return an investment in the form of cost savings, automation, or new business can be a risky, time-consuming, and expensive endeavor for an organization. Organizations want to minimize as much risk, cost, and time investment as possible. To achieve those objectives, deliver an application with the minimum set of functionality that is lovable to the business. The minimum lovable product (MLP) provides an application with the minimum amount of functionality that your users love, and not just live with. Over time, you iterate on and improve that product or application as business needs change and evolve. By starting with a Pega application, you are far closer to the MLP than if you start your application design from the beginning.

Instead of starting with a lengthy requirements gathering process, you can demonstrate the functionality that the Pega application provides for you and compare that functionality to your business requirements. This process is called performing a gap analysis. Once you identify these gaps, you can design for the minimum amount of functionality needed to deliver the MLP.


KNOWLEDGE CHECK

What is the primary benefit of using a Pega application? You can more rapidly deliver business value (an MLP) to your end users when starting with a strategic application compared with creating an application design from scratch. Starting with a Pega application allows business users to comment on existing application features. Once the business identifies the differences (gaps), you only have to customize the application to fill those gaps.


Pega's application offerings

Pega offers two categories of applications:

- Customer engagement applications
- Industry applications

Customer engagement applications include Pega Customer Service, Pega Marketing, and Pega Sales Automation. The Customer Decision Hub centralizes all customer engagement activities and provides intelligent customer interactions, regardless of channel or application.

The industry applications provide solutions for:

- Communications & Media
- Energy & Utilities
- Financial Services
- Government
- Healthcare
- Insurance
- Life Sciences
- Manufacturing & High Technology

In some cases, the customer engagement applications intersect with an industry solution. For example, Customer Service for Financial Services and Pega Marketing for Financial Services are both customer engagement applications with a focus on the Financial Services industry.


Tip: For business context of what each customer engagement and industry application provides, see the Products page on pega.com.

KNOWLEDGE CHECK

Why do you need to know about Pega's customer engagement and industry application offerings? Pega's customer engagement and industry applications are key to rapidly delivering a solution that provides immediate business value. You need to know at a high level what each application provides and where to learn more about each application.


How to customize a Pega application

Even though each Pega application has a unique set of capabilities to meet specific industry needs, the process for customizing the application is the same regardless of the application. The following process describes how to customize and extend a Pega application.

- Acquaint yourself with application features
- Create your application implementation layer
- Perform the solution implementation gap analysis
- Define the Minimum Lovable Product (MLP)
- Record desired customizations using Agile Workbench
- Customize the application
- Evolve the application

Acquaint yourself with the Pega application features

To effectively demonstrate the application to your business users, you need to know the Pega application features and how those features relate to and solve the business problem at hand. Pega offers several resources to help you familiarize yourself with the application. Use the Pega Products & Applications menu on the PDN to navigate to the appropriate application landing page. Caution: If you are unaware of the features already provided to you by the Pega application, you will spend time and resources building features that already exist. Use the application overview in the Pega application itself to review application features.


Create the application implementation layer

The application implementation layer represents the container for your application customizations. As business users add case types with Pega Express, the application implementation layer houses the rules created by those users and any customizations you make to support them. The following diagram illustrates the generalized structure of a Pega application and how your application implementation layer fits into that structure.

Depending on the installation, a Pega application can have multiple built-on products to provide the full suite of functionality. For example, the following diagram illustrates the construction of the Pega Customer Service for Healthcare application, including the Pega Sales Automation and Pega Marketing applications.


Note: Use the New Application wizard to create the application implementation layer.

Perform the solution implementation gap analysis

The solution implementation gap analysis process allows you to demonstrate the application while discussing the organization's needs. This process guides the customer toward using capabilities that the application already provides instead of building new features. The goal is to create a first release that allows the organization to start seeing business value quickly. Note: The solution implementation gap analysis differs from the product gap analysis. The product gap analysis is performed early in the sales cycle to determine whether the Pega application is the right fit for the organization, or whether a custom application is required.

Define the Minimum Lovable Product

The Minimum Lovable Product (MLP) is the minimum functionality the business needs to get business value from the customized application. The MLP is also known as the first production release. You can address features that are prioritized after the MLP in subsequent iterations of the application. These subsequent iterations are known as extended production releases.

Record customizations using Agile Workbench

As you demonstrate the application with your business users, you can record the desired customizations with Agile Workbench. Agile Workbench integrates with Agile Studio to record feedback, bugs, and user stories to allow you and your team to customize the application accordingly. Agile Workbench also integrates with other project management tools such as JIRA.


Customize the application

After you have prioritized the backlog captured in Agile Workbench according to the MLP definition, you can start customizing the application. Typically, the MLP includes creating connections to back-end systems, configuring security and reports, and creating application-specific rules to meet immediate business requirements. For example, configuring coaching tips is a configuration step unique to Pega Customer Service.

Evolve the application

During the initial application demonstration with the business users, you captured the users' customization requirements and requests. Not all of those customizations were delivered with the first production release. As the business uses the application in production, users have new requirements and enhancement requests to improve the application. Continue to use Agile Workbench to capture feedback directly in the application. This way, you can evolve and improve your application over time and according to business needs.

KNOWLEDGE CHECK

What is the purpose of the Minimum Lovable Product (MLP)? The purpose of the MLP is to deliver value to the business as soon as possible, while allowing for subsequent iterations of the application to improve and deepen application functionality.


Leveraging AI and robotic automation

Introduction to leveraging AI and robotic automation

Artificial intelligence (AI) and robotic automation technologies change the way people work. AI and robotics both automate work, each in a different way. Choosing the most appropriate automation technology depends on the result you want to achieve. After this lesson, you should be able to:

- Compare AI and robotic automation technologies
- Identify opportunities to leverage AI in your application
- Identify opportunities to leverage robotic automation in your application


Artificial intelligence and robotic automation comparison

Artificial intelligence (AI) and robotic automation are similar in that they perform a task or tasks instead of a human being. AI and robotic automation solutions are not impacted by geography and, unlike human beings, are not prone to error. However, the application of each technology differs based on what you are trying to achieve. You could also design a solution that uses AI and robotic automation capabilities in tandem; they are not mutually exclusive technologies. Grasping the benefits and differences between AI and robotic automation allows you to identify opportunities to use these technologies and to design an application that can radically change the way the organization performs work. Pega offers the following technology to meet these needs:

- AI capabilities, in the form of the Intelligent Virtual Assistant, the Customer Decision Hub, and the Decision Management features
- Robotic automation capabilities, including Robotic Desktop Automation (RDA), Robotic Process Automation (RPA), and Workforce Intelligence (WFI)

The following table summarizes the key differences between Robotic Desktop Automation (RDA), Robotic Process Automation (RPA), Workforce Intelligence (WFI), and AI capabilities.

- Assists end users with routine manual tasks: RDA
- Fully replaces the end user's involvement in the task: RPA
- Identifies opportunities for process improvement: WFI
- Self-learning technology, requiring no programming: AI

Artificial intelligence

Artificial intelligence (AI) can be defined as anything that makes the system seem smart. An artificial intelligence solution learns from the data available to it. This data can be structured or unstructured (such as big data), and can include image, sound, or text inputs. The value of AI increases as the solution gains age and experience, not unlike a human being. For an AI solution to be self-learning, the AI solution uses experiences to form its basis of knowledge, not programmed inputs. The Adaptive Decision Manager (ADM) service is an example of adaptive learning technology. For example, when training an artificial intelligence solution to recognize a cat, you do not tell the AI to look for ears, whiskers, a tail, and fur. Instead, you show the AI pictures of cats. When the AI responds with a rabbit, you coach the AI to distinguish a cat from a rabbit. Over time, the AI becomes better at identifying a cat. This technology can be a powerful ally in building a profile of a customer's preferences and attributes.
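To make the idea of self-learning concrete, the following minimal Java sketch estimates the propensity that an offer is accepted purely from observed outcomes rather than from programmed rules. It is an invented toy example, not the ADM algorithm.

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal illustration of adaptive, self-learning scoring: the model is built
 *  from observed outcomes, not hand-coded rules. Not the ADM implementation. */
public class AdaptivePropensity {
    private final Map<String, int[]> outcomes = new HashMap<>(); // context -> {accepts, rejects}

    /** Record what actually happened after an offer was made in a given context. */
    public void learn(String context, boolean accepted) {
        int[] counts = outcomes.computeIfAbsent(context, k -> new int[2]);
        counts[accepted ? 0 : 1]++;
    }

    /** Estimated propensity to accept, with a weak prior so new contexts start near 0.5. */
    public double propensity(String context) {
        int[] c = outcomes.getOrDefault(context, new int[2]);
        return (c[0] + 1.0) / (c[0] + c[1] + 2.0);   // Laplace-smoothed accept rate
    }

    public static void main(String[] args) {
        AdaptivePropensity model = new AdaptivePropensity();
        model.learn("gold-customer:creditCardOffer", true);
        model.learn("gold-customer:creditCardOffer", true);
        model.learn("gold-customer:creditCardOffer", false);
        System.out.println(model.propensity("gold-customer:creditCardOffer")); // 0.6
    }
}
```

The more outcomes the model observes, the better its estimates become, which mirrors the statement above that the value of AI increases with age and experience.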


An AI solution can also predict the next action a customer will take. This ability allows an organization to serve the customer in a far more effective way. The organization can know why a customer is contacting them before the customer even calls. For example, an AI solution can guide a customer service representative to offer products or services that the customer actually wants, based on the customer's previous behavior. Predictive analytics provides this capability. The Customer Decision Hub combines both predictive and adaptive analytics to provide a seamless customer experience and only shows offers relevant to that customer. The Customer Decision Hub is the centerpiece of the Pega Sales Automation, Pega Customer Service, and Pega Marketing applications. AI uses natural language processing (NLP) to detect patterns in text or speech to determine the intent or sentiment of a question or statement. For example, a bank customer uses the Facebook Messenger channel to check his account balance. In the background, the bank's software analyzes the intent of the question in the message, performs the balance inquiry, and returns the response to the customer. The Intelligent Virtual Assistant is an example of NLP in action. Note: AI is a powerful tool. AI can also carry risk if you are not cautious. For more information on this topic, see the AI Customer Engagement: Balancing Risk and Reward presentation on Pega.com.

Robotic automation

Robotic automation is technology that allows software to replace human activities that are rule-based, manual, and repetitive. Pega robotic automation applies this technology with:

- Robotic desktop automation (RDA)
- Robotic process automation (RPA)
- Workforce intelligence (WFI)

Robotic desktop automation (RDA) automates routine tasks to simplify the employee experience. RDA mimics the actions of a user interacting with another piece of software on the desktop. For example, a customer service representative (CSR) logs in to five separate desktop applications to handle customer inquiries throughout the day. You can use RDA to log that CSR in to these applications automatically. This allows the CSR to focus on better serving the customer.


Usage of RDA is also known as user-assisted robotics.

Robotic process automation (RPA) fully automates routine and structured manual processes. No user involvement is required. With RPA, you assign a software robot to perform time-consuming, routine tasks with no interaction with a user. These software robots perform work on one or more virtual servers. For example, a bank requires several pieces of documentation about a new customer before the bank can onboard that new customer. Gathering this information can take one person an entire day. You can use RPA to gather these documents from one or more source systems. The software robot can perform the same process in minutes.


Use of RPA is also known as unattended robotics.

Workforce intelligence (WFI) connects desktop activity monitoring to cloud-based analytics to gain insight about your people, processes, and technology. WFI enables the organization to find opportunities to streamline processes or user behavior. For example, this technology can identify where a user is repeatedly copying and pasting, switching screens, or typing the same information over and over. This allows the organization to detect areas for process improvement. When you implement changes to those processes, the organization can realize significant time and money savings. For more information on RDA, RPA, and WFI, see the Pega Robotic Automation landing page on the PDN.


KNOWLEDGE CHECK

What characteristics distinguish artificial intelligence from robotic automation? What are some examples of robotic automation and artificial intelligence? Robotic automation mimics user behavior through software. Software robots perform routine, sometimes onerous, tasks instead of users. Robotic automation can solve problems where a web services or data warehousing solution was previously required. An example of robotic automation is Robotic Desktop Automation, which can invoke automations to gather data from legacy systems from the user's desktop. Artificial intelligence solutions learn based on available inputs. AI solutions also need to be trained to refine their ability to predict the future behavior of those interacting with the AI solution. The Customer Decision Hub and the Intelligent Virtual Assistant are two examples of Pega's implementations of AI.


ASSET DESIGN AND REUSE


Starting with Pega Express

Introduction to starting with Pega Express

Pega Express gives you the ability to collaborate with the business to quickly create new applications and, over time, build a library of assets that can be reused across the enterprise. After this lesson, you should be able to:

- Explain the benefits of Pega Express
- Describe the development roles
- Ensure the successful adoption of Pega Express
- Reuse assets created in Pega Express

Benefits of Pega Express

Pega Express is designed for everyone, and it enables you to build more through composition, templates, and reuse. Whether that means leveraging your company's IT assets, driving consistency and reuse through templates, or extending Pega applications, Pega Express accelerates your project. Leverage Pega Express for enablement, collaboration, and innovation.

Building an application with Pega Express in mind ensures guardrail compliance, transparency, and visibility.


KNOWLEDGE CHECK

Name three ways that Pega Express can accelerate your projects. Pega Express lets you leverage your company's IT assets, drive consistency and reuse through templates, or extend Pega applications.

Development roles

Pega Express is a development environment designed for new users with less technical expertise, such as business experts. Pega Express includes contextual help and tours for self-learning, enabling business experts to quickly create applications. Staff members certified in Pega act as coaches for teams of business experts to facilitate application development.

KNOWLEDGE CHECK

Pega Express is designed for what type of users? Pega Express is designed for new Pega users who are not technical experts.

How to ensure adoption of Pega Express


Leveraging Pega Express in the business can bring tremendous advantages, but there are some things to consider to ensure the successful adoption of Pega Express. First, establish governance and evaluate whether applications are fit for the program. Once the application is reviewed, pair the business team with a coach. Teams with little or no Pega Express experience rely heavily on the coach, while more experienced teams are more self-sufficient. Ensure that there are regular meetings between the project team and the coach. Publish a release train schedule and hold retrospectives to adjust what is not working. Create a community for business users to share their experiences and ask questions. Establish an access management and support strategy for applications in production. Develop a maturity model for your organization. Pega recommends a four-level maturity model.

Use the PDN Pega Express community to share ideas and ask questions.

KNOWLEDGE CHECK

Describe the recommended maturity model. Pega recommends a four-level maturity model, as shown in the image above.

How to reuse assets created in Pega Express

Ensure that assets are built to be reusable, and publish them for use in Pega Express. Existing assets can be refactored into reusable assets using the refactoring tools. As an organization's maturity model is implemented, more and more reusable enterprise assets are available. Manage the common assets through a center of excellence (COE).


KNOWLEDGE CHECK

How are reusable assets best managed? They are best managed by the COE.


Designing for specialization

Introduction to designing for specialization

Pega Platform provides various solutions for specializing applications to support ever-changing requirements. This lesson describes these specialization solutions and the best ways to apply them in your applications. After this lesson, you should be able to:

- Describe the principles and purposes of component applications
- Discuss the advantages of building application layers using multiple component applications
- Specialize an application by overriding rulesets in the built-on application
- Decide when to use ruleset, class, and circumstance specialization
- Decide when to use pattern inheritance and organization hierarchy specialization
- Analyze and discuss various approaches to specializing an application to support a specific set of requirements


Object Oriented Development in Pega

A consideration of Pega asset design and reuse starts with a brief mention of Object Oriented Development (OOD) principles, how Pega leverages them, and how Pega allows you to leverage them. According to Robert Martin, Object Oriented Development encompasses three key aspects and five principles.

Aspects of OOD

The following are the three essential aspects of OOD.

Encapsulation

Encapsulation is used to hide the values or state of a structured data object inside a class, preventing unauthorized parties' direct access to the object.

Inheritance

Inheritance is the ability for one object to take on the states, behaviors, and functionality of another object.

Polymorphism

Polymorphism lets you assign various meanings or usages to an entity according to its context. Accordingly, you can use a single entity as a general category for different types of actions.
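A compact, generic Java sketch (invented for illustration, not Pega code) shows all three aspects at once:

```java
// Encapsulation: internal state is private and reached only through methods.
abstract class Account {
    private double balance;                      // hidden state

    public double getBalance() { return balance; }
    public void deposit(double amount) { balance += amount; }

    // Polymorphism: each subtype supplies its own meaning for the same call.
    public abstract double monthlyFee();
}

// Inheritance: SavingsAccount takes on the state and behavior of Account.
class SavingsAccount extends Account {
    @Override public double monthlyFee() { return 0.0; }
}

class CheckingAccount extends Account {
    @Override public double monthlyFee() { return 5.0; }
}

public class OodAspectsDemo {
    public static void main(String[] args) {
        Account account = new CheckingAccount();  // used through the general type
        account.deposit(100.0);
        System.out.println(account.getBalance() - account.monthlyFee()); // 95.0
    }
}
```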

SOLID development principles

SOLID is a mnemonic for the five principles of OOD. According to Martin, OOD should adhere to these principles:

- Single Responsibility
- Open/Closed
- Liskov Substitution
- Interface Segregation
- Dependency Inversion

Single Responsibility

The Single Responsibility principle states that every module or class should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.


Open/Closed

The Open/Closed principle states that software entities (such as classes, modules, and functions) should be open for extension, but closed for modification. An entity can allow its behavior to be extended without modifying its source code. The Open/Closed principle is most directly related to extensibility in Pega. If implemented correctly, an object would not need to be changed if additional features are added to the object. Following this principle helps avoid maintenance-intensive ripple effects when new code is added to support new requirements.
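For example, the following hypothetical Java sketch stays closed for modification while remaining open for extension: new pricing behavior is added by writing a new DiscountRule, not by editing the calculator.

```java
import java.util.List;

// The calculator is closed for modification: adding a new discount never changes it.
interface DiscountRule {
    double discountFor(double orderTotal);
}

class LoyaltyDiscount implements DiscountRule {
    public double discountFor(double orderTotal) { return orderTotal * 0.05; }
}

class SeasonalDiscount implements DiscountRule {
    public double discountFor(double orderTotal) { return orderTotal > 100 ? 10.0 : 0.0; }
}

public class PriceCalculator {
    public static double finalPrice(double orderTotal, List<DiscountRule> rules) {
        double discount = rules.stream().mapToDouble(r -> r.discountFor(orderTotal)).sum();
        return orderTotal - discount;
    }

    public static void main(String[] args) {
        // New behavior arrives by extension (another DiscountRule), not by editing finalPrice.
        System.out.println(finalPrice(200.0, List.of(new LoyaltyDiscount(), new SeasonalDiscount()))); // 180.0
    }
}
```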

Liskov Substitution

The Liskov Substitution principle states that functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.

Interface Segregation

The Interface Segregation principle (ISP) states that no client should be forced to depend on methods it does not use. ISP splits interfaces that are very large into smaller and more specific ones so that consumers will only have to know about the methods that are of interest to them. ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy.

Dependency Inversion

The Dependency Inversion principle refers to a specific form of decoupling software modules. When following this principle, the dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are reversed, thus rendering high-level modules independent of the low-level module implementation details.
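A minimal Java sketch of the same idea (hypothetical names, not Pega APIs): the high-level CaseService depends only on the CaseRepository abstraction, so the low-level persistence detail can be swapped without touching it.

```java
// High-level policy depends on an abstraction, not on a concrete low-level module.
interface CaseRepository {
    void save(String caseId, String payload);
}

class JdbcCaseRepository implements CaseRepository {           // low-level detail
    public void save(String caseId, String payload) { /* write to a database */ }
}

class InMemoryCaseRepository implements CaseRepository {        // test double
    public void save(String caseId, String payload) { System.out.println("saved " + caseId); }
}

class CaseService {                                             // high-level module
    private final CaseRepository repository;
    CaseService(CaseRepository repository) { this.repository = repository; }  // dependency injected
    void resolve(String caseId) { repository.save(caseId, "status=Resolved"); }
}

public class DependencyInversionDemo {
    public static void main(String[] args) {
        new CaseService(new InMemoryCaseRepository()).resolve("C-1001");
    }
}
```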


Specialization design considerations

When deciding on the application layers to be developed, take into account the business requirements for specialization. A one-size-fits-all approach does not suit all projects. Selecting a design that introduces more specialization layers than are required adds complexity. This complexity increases the time, resources, and effort required to produce a Minimum Lovable Product (MLP).

Specialization considerations

Always follow object-oriented principles to ensure rules are extensible. For example, use parameterization and dynamic class referencing (DCR) to support specialization in the future. When considering specialization, be aware of the following:

- If there is no current requirement to specialize an application, there is no reason to immediately create two layers, one of which is a specialization layer.
- If specialization requirements do exist, then you might create a specialization layer. Otherwise, you could use circumstancing or pattern inheritance.

Single implementation application

Most development efforts can achieve the MLP by developing a single implementation application, either directly on the Pega Platform or by leveraging one or more of Pega's horizontal or industry applications.

A single application is the best approach in the following scenarios:

- The enterprise does not span multiple regions where business rules vary dramatically.
- The enterprise is only interested in completing the implementation of a framework developed by a vendor.
- The enterprise does not want to extend its own application.
- The enterprise has divisions that develop division-unique applications.


Framework application with multiple implementations

In special cases, the development effort may require a framework layer on which one or more implementation applications are built.

This diagram shows specialization of an application across different regions in North America. The procedures and policies specific to a region are layered on top. Every time the system interacts with a user or advances a case, it selects the policy and procedure that is most specific to the situation at hand. This means that only those policies and procedures that are specific to French-speaking Quebec, for example, need to be defined in that layer. For all other regional policies and procedures, the more generic layers below it are consulted in order. A framework application on which one or more implementation applications are built makes sense in the following scenarios:

- The enterprise spans multiple regions where business rules vary dramatically, and most of the core framework functionality will be reused across the enterprise.
- The enterprise customizes a core application to target distinct customer types, and business rules vary dramatically between customer types.


How to choose the application structure

When creating an application with the New Application wizard, you have the option to specify the application structure in the advanced settings. The application structure is either implementation or framework, and the default is implementation. In both cases, a single application is created, but with different purposes.

Selecting implementation creates an application for a specific business purpose. Users log in to the application to achieve numerous business outcomes. An implementation can be built on other implementations or frameworks.

A framework layer defines a common work-processing foundation. The framework contains the default configuration of the application that is specialized by the implementations. Do not run the framework on its own; there should always be at least one implementation. Users do not log in to a framework, but to an implementation of the framework. Implementations extend the elements of the framework to create a composite application that targets a specific organization or division. For example, the MyCo enterprise makes auto loans and has an auto loan framework that is composed of the assets needed for MyCo's standard auto loan process. Each division of MyCo extends that basic auto loan application to meet its specific divisional needs. For example, the commercial business line division's auto loan application needs to handle loan requests distinct from those of MyCo's personal line division. Any application, regardless of whether it is a framework or an implementation, can be built on other applications and leverage reusable components. Elements of an application can be specialized using class, ruleset, and circumstancing. Only create a framework layer if business requirements at the start of the project state that a framework layer specialized by several implementations is the most suitable specialization technique. Important: Only create a framework if you know you need one. Do not create a framework for the sake of future-proofing; depending on future requirements, creating one later may be more appropriate. Maintaining a framework comes at a cost that cannot be justified without evidence for its need in the near future.

KNOWLEDGE CHECK

When would you create a framework? When it is clear at the start of a project that an application needs to be specialized based on organization or division.


Specialization and component applications

When considering approaches to application specialization, think of applications as components rather than as frameworks. Components are part of a whole, like the wheels or the engine of a car. In contrast, a framework describes the essential, static structure, like the chassis of a car. Note: A framework can also be referred to as a foundation, model, template, or blueprint.

Pega component applications follow the object-oriented programming (OOP) open/closed principle. This principle states that an object does not need to be changed to support its use by other objects. In addition, objects need not change if additional features are added to the used object. This avoids maintenance-intensive ripple effects when new code is added to support new requirements. Modeling a business process according to the Template Design Pattern used by Pega Platform follows the open/closed principle. You define a foundation algorithm in a base class. The derived classes implement code at allowed extension points. When you use applications as components, you can take a modular approach to application configuration. You can create application layers by building on multiple component applications. Note: Do not confuse a component application with a component, which is a collection of rulesets used to create a small feature that can be added to any Pega Platform application.
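As a generic Java illustration of the Template Design Pattern described above (an invented example, not a Pega class), the base class owns the fixed algorithm and exposes extension points that derived classes fill in:

```java
// The foundation algorithm lives in the base class; subclasses implement the
// allowed extension points without touching the base algorithm.
abstract class OnboardingProcess {
    public final void run() {                        // fixed sequence: the "framework" part
        collectDetails();
        verifyIdentity();
        activate();
    }
    protected abstract void collectDetails();        // required extension point
    protected void verifyIdentity() { System.out.println("standard identity check"); }  // overridable default
    private void activate() { System.out.println("account activated"); }
}

class CommercialOnboarding extends OnboardingProcess {
    @Override protected void collectDetails() { System.out.println("collect company registration"); }
    @Override protected void verifyIdentity() { System.out.println("check corporate registry"); }
}

public class TemplateMethodDemo {
    public static void main(String[] args) {
        new CommercialOnboarding().run();
    }
}
```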

Applications as components

You can design Pega applications specifically for use as components. By definition, a component is recursive. That is, a component can comprise other components in the same way that objects can comprise other objects. For example, you can define applications as a small number of rulesets, each of which contains rules for a specific purpose, such as an application designed specifically for handling mail transactions. This approach is in contrast to adding all the rules to a single component ruleset. The term component implies that an object has a stable interface and can be tested separately according to published specifications. For example, an application could contain its own unit test code that lets you test the application on its own, making the application self-testable.

Layers and multiple built-on applications

Use built-on component applications to modularize functionality and promote reuse. Built-on applications encourage the use of Application Validation mode over ruleset prerequisites, and warnings related to use of the same ruleset across applications are avoided. For more information, see the PDN article Using multiple built-on applications. For more information about how Pega Platform processes various hierarchical structures of multiple built-on applications at design time, see the PDN article Application stack hierarchy for multiple built-on applications.


Ruleset, class, and circumstance specialization

Code developed according to object-oriented principles is inherently extensible. For this reason, rules can be specialized by ruleset, class, or circumstance.

Ruleset and class specialization

A framework is a common work-processing foundation that you extend to an implementation. Designing a framework requires knowledge of how more than one implementation would use it. Without this information, abstracting a model common to both implementations is difficult. Focus on the implementation at the beginning while watching for future specialization. The following image shows how Pega Platform supports ruleset and class specialization dimensions. In this example, the Center of Excellence (COE) team is responsible for locking down the base 01.01.01 application. The COE is also responsible for developing the next version of the foundation or base MyApp application (for example, 02.01.01).


The parent implementation applications, MyAppEU and MyAppUK, remain built on MyApp 01.01.01 until they are ready to upgrade to the 02.01.01 version. This is similar to upgrading applications to a newer version of Pega; in that case, Pega itself plays the role of the COE. The purpose of the Application Version axis is to permit evolution (upgrading and versioning) of the applications, including the locked-down foundation application.

Ruleset Specialization Example

An example of ruleset specialization is how Pega handles localization. Localization in Pega is accomplished using rulesets. After running the localization process, you complete the translation in progress by selecting the new default ruleset created by the system to store the translated strings. You select an organization ruleset so that the translations can be shared across applications. Then you add this ruleset to the access groups that require this translation. The system automatically displays the translated text when the user interface is rendered.

Circumstance specialization

Circumstancing is appropriate when specializing rules in the same work pool. One benefit of circumstancing is that it enables you to see the base rule and its original configuration. Creating a copy of rules for the purpose of specialization can create difficulty in identifying and reviewing the original rule. Circumstanced rules are displayed in the App Explorer and Records Explorer, enhancing rule maintenance. You cannot circumstance classes or applications. You can circumstance case type rules. Note: Locating rules that are similarly circumstanced requires a report that filters by pyCircumstanceProp and pyCircumstanceVal.


Specializing an application by overriding rulesets

To create an application by overriding rulesets in the built-on application, do the following:

1. Create a new ruleset using the Records Explorer.
2. In the Create RuleSet Version form, select the Update my current Application to include the new version option.
3. Copy the existing Application rule to the new ruleset and give the application a new name that represents its specialized purpose.
4. Open the new Application rule.
5. Configure the new application as built on the original application.
6. Remove every ruleset from the application rulesets list except the ruleset you created in step 1.
7. Open the original application again and remove the ruleset you created and added in step 1.
8. Create new access groups that point to the new Application rule you created in steps 2 and 3.

Note: A ruleset override application can be constructed without needing to run the New Application wizard.


Pattern inheritance and organization hierarchy specialization

You can use pattern inheritance as a special type of class specialization within an existing work pool. You can also leverage pattern inheritance to specialize applications according to organization structure. The Pega class naming convention is displayed in the following table. There are two optional implicit hierarchies within each class name: organization and class specialization. In the table below, a class name can be formed using any combination of values from each of the three columns.

- Optional organization: Org-, Org-Div-, or Org-Div-Unit-
- Optional application qualifier prefix plus the standard Data/Work/Int prefixes: [App]-Data-Class, [App]-Work-CaseType, or [App]-Int-Class
- Optional class specialization: -A, -B-C, or -B-D

Pattern inheritance specialization

Pattern inheritance can leverage Dynamic Class Referencing (DCR) to decide which case type class to construct. You can also change a case type's class name dynamically by using a Declare OnChange rule or data transform. The additional class names use pattern inheritance to derive the case type class name, as shown in this example:

Org-App-Work-CaseType
Org-App-Work-CaseType-A
Org-App-Work-CaseType-B

Similar to circumstancing, the requisite property values need to be available. For example, a parent class needs to have the requisite information available to determine which pattern-inheritance implementing class to use when instantiating a child case. Assuming that the number of specialized classes is relatively small, pattern-inheritance specialization enhances maintenance when using the App Explorer within Designer Studio. All specialized instances are grouped by class and can be viewed in one place. To avoid having to make multiple updates to every specialization of a specific rule, be careful not to over-extend the case types when using pattern-inheritance specialization. Also, do not rely heavily on direct-inheritance, subclass specialization. If numerous subclasses of a specific framework class use polymorphism to override a base class rule, a single change may require updating each override rule.
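Conceptually, pattern inheritance means that a rule not found on the most specific class is looked for on progressively shorter prefixes of the class name. The following Java sketch (an invented illustration, not Pega's rule-resolution code) walks that fallback chain:

```java
import java.util.Set;

/** Conceptual illustration only: walk a pattern-inheritance class name from most
 *  to least specific and use the first class for which a rule is defined. */
public class PatternFallback {
    public static String resolve(String className, Set<String> classesWithRule) {
        String current = className;
        while (!current.isEmpty()) {
            if (classesWithRule.contains(current)) {
                return current;
            }
            int dash = current.lastIndexOf('-');
            current = dash > 0 ? current.substring(0, dash) : "";   // drop the last segment
        }
        return null;   // nothing found anywhere in the pattern chain
    }

    public static void main(String[] args) {
        Set<String> defined = Set.of("Org-App-Work-CaseType", "Org-App-Work-CaseType-A");
        System.out.println(resolve("Org-App-Work-CaseType-B", defined)); // Org-App-Work-CaseType
        System.out.println(resolve("Org-App-Work-CaseType-A", defined)); // Org-App-Work-CaseType-A
    }
}
```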


Organization hierarchy specialization

The organization structure of the company can be used to specialize case types or reuse component applications. For example, some companies are so large that you can write applications specifically for one division. The code within those applications is likely not shared with other divisions. For such scenarios, you specialize case type class names containing a division modifier. For example:

ORG-DIV-APP-[Work/Data/Int]

Although rarely used, Pega Platform supports specialization down to the org unit level. In those situations, case type class names contain an org unit modifier. For example:

ORG-DIV-UNIT-APP-[Work/Data/Int]

If any application within the organization can reuse component applications, you can specify those applications as built-on applications of an enterprise layer application. Similarly, component applications capable of being reused by any application within a specific division could be specified as built-on applications of a specific division layer application.


Specialization use cases

This topic describes sample use cases and recommended specialization approaches.

Use case: Define the product at the enterprise level

3Phase Inc. is a large electronics manufacturing company that has been in business for 25 years. Over time, the ability of the company to stock new products and replacement parts has become increasingly complex. To manage this issue, the company created a separate Supply Chain Management (SCM) division. Currently, there is no need to specialize SCM applications by region. Design a specialization solution that supports this requirement.

Discussion and recommendation

The recommended specialization approach involves defining the product at the enterprise level. Utilities such as a product selector should also be defined at the enterprise level. This approach enables the SCM division and the Sales and Customer Service divisions to share the standard product data definition and utilities. SCM's mission is specialized—ensuring that products and replacement parts are in stock within different regions. The mission of SCM is outside the scope of Sales and Customer Service personnel responsibilities. As a result of SCM's specialized mission, you develop a unique set of application rules for the SCM division using the case structure: PHA-SCM-APP-[Work/Data/Int]. In the future, if you need to specialize rules for a particular region, you can investigate various approaches, such as:

- Circumstance rules by the name of the region, including case type rule circumstancing.
- Use pattern inheritance specialization by appending the region's acronym, prefixed with a dash, to case type class names.
- Create a new application specific to the region. You create a wrapper application built on the existing application. Then, you create a new implementation application built on the existing application.

Note: Only the first two approaches support all regions with a single application. The third option requires application switching.

Use case: Creating a component implementation layer

GameMatch is a social media company that specializes in introducing members to each other as they play different types of games. The process for setting up and playing the game is the same for any game. The rules for each game are different. The entire process runs in the context of the game that members decide they want to play. For the purpose of reporting, you store each interaction from game launch to match completion in different tables according to the selected game. Design a specialization solution that supports this requirement.


Discussion and recommendation

For the following reasons, the recommended specialization approach involves creating a component implementation layer:

- The entire end-to-end interaction is similar regardless of the game selected.
- The rules for playing each game are different.
- Requiring users to switch context from one interaction to another is acceptable.
- Separately persisting each interaction is desirable.

You develop an implementation application for each unique type of game built on a framework specialization.


Promoting reuse

Introduction to promoting reuse

When building an application, you should consider packaging certain assets separately to promote reuse. This lesson explains how to leverage relevant records and how to decompose an application into reusable built-on applications and components. The lesson also discusses the role of a COE in managing reuse. After this lesson, you should be able to:

- Simplify reuse with relevant records
- Leverage built-on applications and components for reuse
- Discuss the role of a COE in reuse


Relevant records

Relevant records designate records of a case or data type as important or reusable in a child class. Relevant records for a case type can include references to fields (properties), views (sections), processes (flows), or user actions (flow actions) that are explicitly important to your case. For a data type, relevant records designate the most important inherited fields (properties) for that data type. The relevant records can include records that are defined directly against the class of the case or data type and fields inherited from parent classes. Designating a record as a relevant record controls the majority of the prompting and filtering in the Case Designer and Data Designer. For example, user actions and processes defined as relevant records show up when adding a step in the Case Designer.

Fields marked as relevant for a case type define the data model of the case. Processes and user actions marked as relevant appear in Case Designer prompts to encourage reuse. Views marked as relevant appear as reusable views. Fields, views, processes, and user actions are automatically marked as relevant records when you create them within the Case Designer and Data Designer. You can manually designate relevant records on the Relevant Records landing page.

KNOWLEDGE CHECK

What impact does designating a record as a relevant record have? Relevant records control design-time prompting and filtering in several areas of Data Designer and Case Designer.


How to leverage built-on applications and components

An application can build on other applications and include components. Use built-on applications and components to create reusable assets. To create a solution that includes case types, you need an application. For example, you must create an application if you want to create a case for a bank transfer and send a check that can be reused across the organization. Applications can then build on that application and use the bank transfer case as is or specialize the case. Create a component to add a feature that has no case type and cannot run on its own (for example, reusable flows or correspondence, integration assets, or a function library). The following example shows the built-on applications and components that support a Claims application. This example is built on PegaRULES. You can also build your application on Pega solutions, such as a vertical Finance framework built on a horizontal Customer Service framework.

Numerous applications and components are available for reuse on the Pega Exchange. To contribute to the Pega Exchange, submit applications and components to make them available to the Pega community. For example, you can add the PegaDevOpsFoundation application as a sibling built-on application when using the Deployment Manager application to orchestrate CI/CD. For applications you want to display in the New Application wizard as potential built-on applications, select Show this application in the New Application wizard on the Application wizard tab of the application rule. Use the Components landing page (Designer Studio > Application > Components) to create and manage components. A component defines a set of ruleset versions and can have other components or applications as prerequisites. When you design and implement the initial version of an application, it includes multiple discrete pieces of functionality. As you become more familiar with the business needs of the organization, opportunities to reuse those components in other applications arise. To create reusable built-on applications and components, first identify the reusable components. Then, refactor the appropriate rules from the existing application into your new reusable built-on applications and components. Note: It is important to define relevant records for components, not just applications, to simplify and encourage their use. See pxSetRelevantRecord.

KNOWLEDGE CHECK

When would you create a component rather than an application? When creating a feature without a case type that is not runnable on its own


Application versioning in support of reuse

At the enterprise level, having a large number of specialized rulesets is possible. This raises the possibility that multiple applications reference the same ruleset, which generates a warning. Packaging rulesets into smaller applications or components helps minimize the issue. However, if you make major updates to rulesets within a built-on application, that application may no longer be consistent with the parent application or with other built-on applications. For example, an updated validation rule in a built-on application might enforce that added properties have values. This could be problematic to parent applications. In this example, you might consider upgrading the application version.

Reasons to version an application

These are reasons to version an application:

- The application is using upgraded built-on application versions.
- Rulesets in the application ruleset list have been versioned, added, or removed.

When versioning an application, you can control:

- The patch levels of the ruleset versions that the application specifies
- The versions of the application's built-on applications

You can also lock the application to prevent unauthorized updates. Tip: Detailed documentation of dependencies between applications benefits maintenance. Part of a Center of Excellence's (COE) role is to keep track of application dependencies.

Reasons not to version an application

There are valid reasons for not increasing an application version. A change could entail adding rules that are not used by the parent application. For example, adding properties that are not validated, or adding Rule-File-Binary rules, does not impact a parent application. Utility code consists of reusable functions and data transforms. Reusable utility code, such as functions and data transforms, can be placed in a specialized ruleset that is added to a built-on application. Parent applications at any version then have access to that utility code.


The role of a COE in reuse

To encourage appropriate and best use of the Pega Platform, organizations can create a Center of Excellence (COE). A COE is an internal organization that centralizes the resources and expert staff who support all project roles, including business analysts, lead system architects, and administrators.

Reuse is critical in getting the benefits and value of the Pega Platform. The responsibility of the COE is to manage and promote reuse across projects for the organization. If no one is responsible and accountable for reuse, assets are often reinvented.

KNOWLEDGE CHECK

How is a COE involved in reuse? The COE maintains a central repository of reusable assets for use across the organization.


CASE DESIGN


Designing the case structure

Introduction to designing the case structure

The case structure is one of the most important considerations when designing an application. Designing the case structure encompasses identifying the individual business processes and how you represent each process (cases, subcases, subflows). A poorly designed case structure may cause refactoring if present or future requirements become challenging to implement. After this lesson, you should be able to:

- Identify cases given a set of requirements
- Compare parallel processing options
- Explain advantages and disadvantages of subcase and subflow processing
- Determine when to use tickets


How to identify cases

A case type represents a group of functionality that facilitates both the design-time configuration and run-time processing of cases. Pega Platform provides many features that support case processing, such as case life cycle design (including stages and steps), reporting, and security. The LSA decides which Pega Platform features to use—flows, subflows, data objects, or other components—when designing a solution. In some applications, a case type is easily identified as it represents a single, straightforward business process. However, in other situations, producing a robust design with longevity may require careful consideration. In Pega Platform, other dependencies on the design include reporting, security, locking, extensibility, and specialization. Consider all requirements before creating the design, since this forms the foundation for the application. The case design begins by identifying the processes in the application. Next, you determine if subcases are warranted, including any additional cases that may benefit from case processing. Identify specialization needs for these cases. Consider future extensibility requirements before implementing the case designs. For example, a Purchase Request process involves identifying a list of items and the quantity of each item to be purchased. The list of items is reviewed and then forwarded for approval by one or more approvers. Then, the list of items is ordered. When the items are received, the process is complete. You can use several case designs to represent this business process:

- A single Purchase Request case for the entire process, with subprocesses for each line item
- A Purchase Request case to gather the initial request and spawn a single Purchase Order case
- A Purchase Request case to gather the initial request and spawn Purchase Order subcases for each line item

All provided solutions may be initially viable, but the optimal solution takes into account current and possible future requirements to minimize maintenance.

Guidelines for identifying cases

Basic questions to consider when identifying cases include:

• Does the case represent the item(s) that require processing?
• Is there a benefit to splitting the case (process) into multiple cases or subcases, or are parallel flows sufficient?
• Is a case or subcase really required?
• If a subcase is created, a general rule is that the processing of the main case depends on the completion of the subcase(s). Does this hold true?
• Are there other present or future requirements, such as reporting and security, that may be more easily implemented by adjusting the case design?


Carefully consider all of these questions. Some situations may not require additional case(s); creating them could instead result in an inefficient use of resources and an overly complex solution. Because creating cases involves additional processing, always ensure you have a good reason for creating additional cases. The Purchase Request example above illustrates these points:

• If there were security requirements based on the approval of individual line item types, you could implement the case design solution with subcases for individual line items.
• If there is no specific requirement for line item processing, the simple solution involving subprocesses for each line item is suitable.
• Adding a Purchase Order case may be unnecessary unless there is a requirement specifically stating the need for it (for example, reporting).

Case identification may be straightforward, or there may be situations where additional cases or processes could be advantageous. An example is data that supports the case processing, such as customer or account data. If updates to such data can benefit from the infrastructure that case processing provides, such as an approval process, security, or auditing, then providing a case may be more suitable than updating the data directly.


Case processing

Requestors execute the majority of case processing in Pega Platform. Each requestor executes within its own Java thread. The separate Java threads allow multiple requestors to perform actions in parallel. The most common requestor types are initiated by Services, Agents, or different users logging on to the system to process cases. The case design determines how efficiently the case is processed. An efficient case design accounts for steps or processes that can be performed in parallel by separate requestors. One example is leveraging subprocesses to gain approval from different users. Each approval process is performed by a separate requestor. A more complex example is queuing tasks to a standard agent in a multinode system. There are limitations to this type of parallel processing. Limitations differ with the case configuration and the type of processing implemented. The two major types of parallel processing are same-case processing and subcase processing.

Same-case processing

Same-case processing occurs when multiple assignments associated with the same case are created. Each assignment is initiated through a child process or subprocess that is different from the parent process. Multiple assignments for a single case are initiated through Split Join, Split For Each, or Spinoff subprocesses. The Split Join and Split For Each subprocesses pause and then return control to the main flow, depending on the return conditions specified in the subprocess shape. The Spinoff subprocess is not meant to pause the main flow because it is an unmanaged process. All of these subprocess options may result in multiple assignments being created, leading to different requestors processing the case (assuming the assignments are routed to different users). One limiting factor is locking. The default case locking mechanism prevents users from processing (locking) the case at the same time. This limitation is alleviated in Pega Platform by optimistic locking. Optimistic locking allows multiple users to access the case at the same time, locking the case only momentarily when an assignment is submitted. The drawback is that once the first user has submitted changes, subsequent users are prompted to refresh their cases before submitting their own changes. The probability of two requestors accessing the case at the same time is low, but the designer should be aware of this possibility and its consequences, especially when the requestor is not a user.
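The behavior described above can be pictured with a simple version check. The following Java sketch is a conceptual illustration only; the class and method names are hypothetical and are not Pega engine APIs. A submit succeeds only if the case has not been updated since it was opened, which is what the second user experiences as the prompt to refresh before submitting.

// Conceptual sketch of optimistic locking; not Pega engine code.
public class OptimisticCase {
    private long updateCount;          // incremented on every successful save
    private String data;

    public synchronized long open() {  // the reader records the version it saw
        return updateCount;
    }

    public synchronized void submit(long versionSeen, String newData) {
        if (versionSeen != updateCount) {
            // Another requestor saved first; the caller must refresh and retry,
            // which is what the end user sees as the "refresh" prompt.
            throw new IllegalStateException("Case was updated by another requestor; refresh required");
        }
        data = newData;
        updateCount++;                 // the momentary lock is held only for this update
    }
}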

Subcase processing

The primary difference between subcase and same-case processing is that one or more subcases are involved. The processes for each subcase may create one or more assignments for each subcase. Locking can be a limiting factor when processing these assignments. If the default locking configuration is specified for all subcases, then all subcases, including the parent, are locked while an assignment in any subcase is performed. This can be alleviated by selecting the Do Not Lock Parent configuration in the subcases. Locking is a significant difference between subflow and subcase parallelism.

Tip: With the correct locking configuration, simultaneous processing can take place without interruption for subcases, whereas a possibility of interruption exists when subflows are involved. This behavior must be accounted for, especially when automated tasks such as agents are involved. A locked parent case may prevent the agent from completing its task, so error handling must be incorporated to allow the agent to retry the task later. A design that leverages subcases with independent locking, such that the agent operates on the subcase, minimizes the possibility of lock contention. In general, lock subcases independently of the parent case unless there is a reason for also locking the parent case. When waiting for the subcases to complete processing, a wait step is used to pause the parent case. If subcases of the same type are involved, you configure the wait shape to allow the main case to proceed after all subcases are resolved.

If different types of subcases are involved, a ticket is used in conjunction with the wait shape to allow the parent case to proceed only after all subcases, regardless of the type, are completed. The AllCoveredResolved ticket is used and is triggered when all the covered subcases are resolved. You configure the ticket in the same flow as the wait shape, and you place the ticket in the flow at the location at which processing should continue. Configure the wait shape as a timer with a duration longer than the time to issue the ticket.

Subcase and subflow comparison

You have many factors to consider when deciding on a suitable case design. The following table summarizes some advantages of leveraging a design incorporating multiple cases or subcases.

Security: Class-based security offers more options for security refinement using multiple cases. Data security is increased because subcases contain only data pertinent to their own case.

Reporting: Reporting on smaller data sets may be easier and offers potential performance gains (this may be a disadvantage if a join is required).

Persistence: You have the ability to persist case data separately.

Locking: You have the ability to lock cases separately and process without interruption (significant in cases involving automated processing).

Specialization: Cases can be extended or specialized with a class, or can leverage the Case Specialization feature.

Dependency Management: Subcase processing can be controlled through the state of parent or sibling cases.

Performance: Pure parallel processing is possible because separate cases can be accessed by separate requestors.

Ad hoc processing: You can leverage the ad hoc case processing feature.

Advantages of a single case design involving only subflows are listed in the following table.

Data: Data is readily available; no replication of data is necessary.

Reporting: All data is accessible for reports.

Attachments: All attachments are accessible (coding is required for subcases).

Policy Override: Implementing this feature is easy. Managing "Suspend work" when multiple cases are involved is more complex.


Case design - example one

Consider the following requirements for an automobile manufacturing company automating an Illness and Injury Reporting application. Like many corporations, the automobile manufacturing company must log work-related fatalities, injuries, and illnesses. For example, if an employee contracts tuberculosis at work, then the illness must be logged in the company's safety records. Certain extreme events, such as death, must be reported to the regulatory agency immediately. These reports are called submissions. Submission processes and requirements differ by country. Some countries have additional rules based on state or province. Typically, these rules are more stringent forms of the national guidelines. There are also some guidelines that are specific to injury type. A small subset of injuries requires injury-specific fields to be filled in. For example, with hearing loss, the actual assessment of the loss, measured in decibels, must be recorded. The Illness and Injury Reporting application must support two processes. First, any injury or illness must be recorded. This is a guided and dynamic data-entry procedure that is specific to the regulations of the country in which the plant is located. The culmination of these entries is an electronic logbook. Second, the application must generate a summary of submission records for every plant per year. Each record summary must be verified, certified, and posted. Notably, severe events must be reported to the regulatory body of the corresponding country, and the status of this submission must be tracked. The reports for these record types are separate; there is never a need for a list of records that is a mix of Injury Records, Annual Summaries, and Submissions. However, because summaries are a culmination of injury records, and submissions are spawned by injury records, it is reasonable to assume that injury record information is included in summary and submission reports. The following image illustrates the requirements:


Design a case structure to support this application.

Solution

You probably identified three independent case types with no subcase relationships:

• Illness Injury is for logging each illness or injury event
• Annual Summary is to track the end-of-year report for each plant
• Submission is for those events that must be reported to the regulatory agency

Discussion

An Annual Summary appears to be only a report, but you create a case because the requirements explicitly state that the status of these reports must be tracked, indicating a dedicated process. Furthermore, these reports must contain static information. While the original content may be derived from a report, this content must be fixed and persisted. Create a Submission case since the requirements state that the Submission process, and the status of each submission, must be tracked. Submission tracking is performed independently of the original injury record, and so is best kept as a separate case. You might consider making Submission a subclass of Illness Injury, but a Submission is not a type of illness or injury. Submission is a case that is spawned by an Illness Injury case. Also, Submission is not a subcase of Illness Injury, because the Illness Injury case does not depend on the Submission processing being completed.


Case design - example two

Consider the following requirements for an automobile manufacturing company automating a warranty claim application. Two primary processes are supported by the application: a Warranty Claim process and a Recall process. For a warranty claim, a customer brings a car to the dealership because something is not working correctly. The dealer assesses the problem with the car, enters the claim details into the application, and then receives verification from the application that the repair is covered under warranty. The dealer is subsequently compensated for the work by the car manufacturer. Every warranty claim includes one or more claim tasks. Each claim task represents the work that must be completed to resolve the claim. Most warranty claims are simple and have a single Pay Dealer claim task. Some warranty claims require more complex processing if disputes are involved. Recalls are separate from warranty claims. Recalls cover the entire process, from initially notifying customers when a recall is necessary to compensating the dealer for the work completed to support the recall. One particular type of claim task is a "Part Return". This claim task stands apart from others in that it requires an additional set of business rules and its process is different.

Design a case structure to support this application.

Solution

At least two cases are possible: Recall and Warranty Claim. Recall has no dependencies but does have a distinct process. You might represent Recall as a standalone case. You have several design options for the Warranty Claim case.

One option is to create a stand-alone Warranty Claim case with conditional subprocesses spawned for each type of claim task. This approach is easy to implement, but it limits extensibility and the ability to specialize the claim tasks. Another option is to create the Warranty Claim case with a subcase for each claim task. This design option offers the flexibility to create specialized claim tasks such as Parts Return. The Warranty Claim case is the parent, or cover, case of the Claim Task case, since the Warranty Claim depends on all Claim Task cases resolving before the Warranty Claim case can be resolved. You represent the Parts Return case type as a subclass of the ClaimTask class to indicate that PartsReturn is a specific type of ClaimTask case. This is an important distinction between subclasses and subcases. The hierarchy for subcases is established in the Case Type rule, similar to the composition relationship between pages in a data model. A subclass indicates an is-a relationship and is indicated as such in the class structure.

Not enough information is provided in the requirements to determine which solution is more suitable for the Claim Task case design. If there are many specialization or extensibility requirements for the application, the latter design for the Claim task is a more suitable design.


Assigning work

Introduction to assigning work

A case is often assigned to a user to complete a task. For example, an employee expense requires approval by the manager of a cost center, or a refund is processed by a member of the accounting team. In this lesson, you learn how to leverage work parties in routing and how to customize the Get Next Work functionality to fulfill the requirements. After this lesson, you should be able to:

• Compare push routing to pull routing
• Leverage work parties in routing
• Explain the default Get Next Work functionality
• Customize Get Next Work


Push routing and pull routing

The two basic types of routing are push routing and pull routing.

Push routing logic is invoked during case flow processing to determine the next assignment for the case. Push routing occurs when a pyActivityType=ROUTE activity is used to create either a worklist or workbasket assignment. When routing to a worklist assignment, Pega can use multiple criteria to select the ultimate owner, such as availability (whether an operator is available or on vacation), the operator’s work group, operator skills, or current workload. You can even configure routing to a substitute operator if the chosen operator is not available. Pull routing occurs outside the context of a case creating an assignment. In standard portals, you can pull the next assignment to work on using Get Next Work by clicking Next Assignment at the top of the portal. It is also possible to pull an assignment to work on by checking Look for an assignment to perform after add? within a flow rule. The Get Next Work feature selects the most urgent assignment from a set of assignments shared across multiple users. Ownership of the fetched assignment does not occur until either MoveToWorklist is called or the user submits the fetched assignment's flow action. The GetNextWork_MoveToWorklist Rule-System-Settings rule must be set to true for the MoveToWorklist activity to be called. Note: MoveToWorklist is called from pzOpenAssignmentForGetNextWork following the successful execution of the GetNextWorkCriteria decision tree.


The following image illustrates how pyActivityType=ROUTE activities, when run in a case processing context, are used to achieve push routing. The image also illustrates how GetNextWork-related rules, executed in a non-case processing context, are used to achieve pull routing.

KNOWLEDGE CHECK

What feature in the Pega Platform supports the pull routing paradigm? Get Next Work


How to leverage work parties in routing

Adding work parties to case types allows for consistent design and can simplify the configuration of routing throughout the case life cycle. The layer of abstraction provided by work parties aids the developer by providing a dynamic, extensible, and reusable routing solution. Using work parties in your solutions also lets you leverage related base product functionality, such as a ready-to-use data model, validation, UI forms, and correspondence, which simplifies design and maintenance. The basic configuration of work parties has already been described in prerequisite courses. This topic concentrates on forming a deeper understanding of work party functionality and routing configuration so that the developer can fully leverage this functionality.

Understanding Work Party Behavior

Most of the existing code for work parties is contained in the Data-Party class, which provides the base functionality and orchestration for rules in derived classes. Several classes extend Data-Party, such as Data-Party-Operator, which overrides the base functionality. It is important to understand this polymorphic behavior since the behavior changes depending on the class used to define the work party. For example, work party initialization, validation, and display differ between work parties constructed from the Data-Party-Operator class and those constructed from the Data-Party-Person class. You are encouraged to review and compare the rules provided in the Data-Party class and its subclasses to gain an appreciation for the base functionality and specialization provided. An example of a polymorphic rule is the WorkPartyRetrieve activity. This activity is overridden within the Data-Party-Operator and Data-Party-Person derived classes.

The WorkPartyRetrieve activity is significant since it is invoked every time a page is added to the .pyWorkParty() pages embedded on the case. The .pyWorkParty() page group stores the individual parties added to the case. The property definition contains an on-change activity that ultimately invokes the WorkPartyRetrieve activity. It may be necessary to override the default behavior of some aspect of the work parties, such as validation or display. This can be performed either through ruleset specialization or by extending the work party class and overriding the required rules. If this is required, ensure the scope of the changes is correct so that you do not change behavior in unintended ways.

Configuring Work Party rules

A Work Party rule defines a contract that specifies the possible work parties that can be utilized in the case. Define work parties for the case in the standard work parties rule pyCaseManagementDefault. Use meaningful party role names to enhance application maintainability. Avoid generic names such as Owner and Originator. Generic, non-descriptive role names such as Owner can be subject to change and may not intuitively describe the party's role with respect to the case. You can use the visible on entry (VOE) option to add work parties when a case is created. VOE allows you to:

• Enable the user to add work parties in the New harness
• Automatically add the current operator as a work party
• Automatically add a different operator as a work party

Use the data transform CurrentOperator in the work parties rule to add the current operator as a work party when a case is created. You can create custom data transforms to add other work parties when a case is created. For example, you can create a data transform CurrentManager that leverages the organization structure to retrieve the manager of the current operator.

Initializing Work Parties

Prior to routing to a designated work party, the work party property values must be initialized. This can be performed at the time the case is created, as described above, using the VOE option on the work parties rule, or it can be performed dynamically during case processing. When dynamically setting the work party values, leverage the addWorkObjectParty activity. This activity also allows you to specify a data transform to initialize the values. However, use caution with this activity if the work party already exists and is not declared repeatable.

Note: Do not attempt to initialize and set the values on the work party page directly, as this may cause unintended results.

Assignment Routing Leveraging Work Parties

Two provided routing activities are commonly leveraged for work party routing:

• ToWorkParty: Routes an assignment to the worklist of the party specified by the party parameter.
• ToNewWorkParty: Routes an assignment to the worklist of the party specified by the party parameter if it exists, or adds a new work party as configured by the parameters and then routes the assignment to the new work party.

Note: As a best practice, define routing for every assignment, including the first assignment. This prevents routing issues if the case is routed back to the first assignment during case processing, or if the previous step is advanced automatically via an SLA.

KNOWLEDGE CHECK

What is the advantage of using work parties? Work parties allow for consistent design and configuration of routing throughout the case life cycle.


Get Next Work


Using the Get Next Work feature, your application can select the next assignment for a user. By choosing the best, most appropriate assignment to work on next, your application can promote user productivity, timeliness of processing, and customer satisfaction.



Users typically click Next Assignment in the Case Manager or Case Worker portal to retrieve assignments. An activity then starts and performs several steps to retrieve the assignments. The application calls the @baseclass.doUIAction activity, and that calls Work-.GetNextWork, and that immediately calls Work-.getNextWorkObject. What happens next depends on the configuration of the operator record of the user. If Get from workbaskets first is not selected, the Work-.findAssignmentInWorklist activity is invoked, followed by the Work-.findAssignmentInWorkbasket activity. If Get from workbaskets first is selected, the Work-.findAssignmentInWorklist activity is skipped and Work-.findAssignmentInWorkbasket is immediately invoked. The Work-.findAssignmentInWorklist and Work-.findAssignmentInWorkbasket activities retrieve the assignments with the Assign-Worklist.GetNextWork and Assign-WorkBasket.GetNextWork list views, respectively.

When multiple workbaskets are listed on a user operator record, the workbaskets are processed from top to bottom. If you configure an Urgency Threshold for the workbasket, then assignments with an urgency above the defined threshold are prioritized. Lower urgency assignments are considered only after all applicable workbaskets are emptied of assignments with an urgency above the threshold. If Merge workbaskets is selected, the listed workbaskets are treated as a single workbasket.


Instead of specifying the workbaskets to retrieve work from, you can select the Use all workbasket assignments in user's work group option to include all workbaskets belonging to the same work group as the user.

If you configure the case to route with the ToSkilledWorkbasket router, then the skills defined on the operator record of the user are considered when retrieving the next assignment. An assignment can have both required and desired skills. Only required skills are considered by Get Next Work.

Define the user's skills using the Skill and Rating fields on the operator record. Skills are stored in the pySkills property on the OperatorID page. Skills checking is not performed when users fetch work from their own worklist, since they would not own an assignment without the proper skills. The Get Next Work functionality ensures that users can only retrieve assignments from a workbasket if they have all of the required skills with at least the defined ratings.
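The skills check can be summarized as follows: every required skill on the assignment must appear on the operator record with a rating at least as high as the required rating; desired skills are ignored. The following Java sketch illustrates that rule only; it is a hypothetical helper, not the engine implementation.

import java.util.Map;

// Conceptual sketch: an operator qualifies for an assignment only if every
// required skill is present with at least the required rating. Desired skills
// are not considered, matching the behavior described above.
public final class SkillMatcher {
    public static boolean qualifies(Map<String, Integer> operatorSkills,
                                    Map<String, Integer> requiredSkills) {
        return requiredSkills.entrySet().stream()
                .allMatch(req -> operatorSkills.getOrDefault(req.getKey(), 0) >= req.getValue());
    }
}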


The Assign-Worklist.GetNextWork list view uses the default getContent activity to retrieve assignments. The Assign-WorkBasket.GetNextWork uses a custom get content activity getContentForGetNextWork to construct a query. The query varies based on Rule-System-Settings rules that start with GetNextWork_. By default the query compares the user's skills to the assignment's required skills, if any. Before the assignment returned by the list view is selected, the Assign-.GetNextWorkCriteria decision tree checks if the assignment is ready to be worked on and if the assignment was previously worked on by the user today. The assignment is skipped if it was previously worked on by the user today.

KNOWLEDGE CHECK

Where are the settings specified on the user's operator record applied when getting the next assignment? In the custom get content activity getContentForGetNextWork used in the AssignWorkBasket.GetNextWork list view


How to customize Get Next Work

You can customize Get Next Work processing to meet the needs of your application and your business operations. The most common customization requirement is adjusting the prioritization of work returned by Get Next Work. You change the prioritization of work by adjusting the assignment urgency. However, adjusting assignment urgency may not be a good long-term solution, since urgency can also be affected by other case or assignment urgency adjustments. A better long-term solution is to adjust the filter criteria in the Assign-WorkBasket.GetNextWork and Assign-Worklist.GetNextWork list views. For example, you can sort by the assignment's create date, or join the assignment with the case or another object to leverage other data for prioritization. Sometimes different work groups have different Get Next Work requirements. When different business requirements exist, customize the Get Next Work functionality so that both sets of requirements are satisfied; a change implemented to satisfy one requirement should not affect the solution to a different requirement. For example, if assignments for gold-status customers should be prioritized for customer service representatives (CSRs), but not for the accounting team, then the change implemented to prioritize gold customers for CSRs must not affect the prioritization for the accounting team. You can create several circumstanced list views if the requirements cannot be implemented in a single list view, or if a single list view is hard to understand and maintain. Use the Assign-.GetNextWorkCriteria decision tree to filter the results returned by the GetNextWork list view. You can define and use your own when rules in the GetNextWorkCriteria decision tree. Create circumstanced versions of the GetNextWorkCriteria decision tree if needed.

Note: Using the GetNextWorkCriteria decision tree for filtering has performance impacts since items are iterated and opened one by one. Always ensure the GetNextWork list view performs the main filtering.


Circumstance example: GetNextWork list view

Applies to: Assign-WorkBasket
Circumstanced by: OperatorID.pyWorkGroup = FacilityCoordinator@FSG
Criteria: .pxWorkGroup = FacilityCoordinator@FSG
Get These Fields: .pxUrgencyAssign Descending (1), .pxCreateDateTime Ascending (2)
Show These Fields: .pxUrgencyAssign, .pxCreateDateTime

Besides circumstancing the GetNextWork list view, it is also possible to circumstance the GetNextWorkCriteria decision tree for a particular work group. Other alternatives exist, such as specializing the getContentForGetNextWork activity to call a different decision rule to produce the desired results. When specializing any of these rules, it is important to implement changes efficiently to ensure the results are performant.
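The sort order used by the circumstanced list view above (assignment urgency descending, then create date ascending) can be expressed as a plain Java comparator. This is only an illustration of the prioritization logic; in the actual rule, the sorting is performed by the list view's database query, and the Assignment record here is a hypothetical stand-in for the assignment data.

import java.time.Instant;
import java.util.Comparator;

// Illustrative only: highest assignment urgency first; oldest assignment first on ties.
record Assignment(String key, int urgencyAssign, Instant createDateTime) {}

class GetNextWorkOrder {
    static final Comparator<Assignment> PRIORITY =
            Comparator.comparingInt(Assignment::urgencyAssign).reversed()
                      .thenComparing(Assignment::createDateTime);
}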

KNOWLEDGE CHECK

How can you change the Get Next Work prioritization without customizing the GetNextWork and GetNextWorkCriteria rules? By adjusting the assignment urgency


DATA MODEL DESIGN

This lesson group includes the following lessons:

• Designing the data model
• Extending an industry framework data model


Designing the data model

Introduction to designing the data model

Every application benefits from a well-designed data model. A well-designed data model facilitates reuse and simplifies maintenance of the application. At the end of this lesson, you should be able to:

• Design a data model for reuse and integrity
• Extend data classes
• Reuse data types
• Expose integration using a data type


Data model reuse layers

Designing a data model for reuse is one of the most critical areas in any software project. A well-designed data model has a synergistic effect, the whole being greater than the sum of its parts. In contrast, a poorly designed data model has a negative effect on the quality and maintainability of an application.

Designing reuse layers

To reinforce reuse, Pega emphasizes using a layered approach when constructing applications. Application layers mirror the subsumptive and compositional nested containment hierarchies that are inherent in object-oriented programming (OOP) software development. A compositional hierarchy follows from objects being components: objects can contain objects, which in turn can contain objects. When a class extends another class, it inherits the parent class's attributes and methods. A subsumptive hierarchy is the notion that inheritance is also a classification exercise, where newly derived classes have an "is-a" relationship to their parent class while further refining and narrowing the parent's characteristics to form a unique subset. Refinement and narrowing specialization can also be achieved using pattern inheritance, as the following example shows.

Org-App-Work-CaseType
• Org-App-Work-CaseType-A
• Org-App-Work-CaseType-B

Note: This type of specialization is typically implemented within the same application as opposed to an application built on the application that defines Org-App-Work-CaseType.

Inheritance: Hotel Rooms Request (FSG-Data-Hotel-RoomsRequest)

#   Name                           Label                      Inheritance Type
1   FSG-Data-Hotel-RoomsRequest    Hotel Rooms Request        Pattern
2   FSG-Data-Hotel                 Hotel                      Both
3   FSG-Data                       Organization Data Class    Pattern
4   FSG                            Top Level Class            Directed
5   Data-                          Data- classes              Directed
6   @baseclass                     @baseclass                 NA

A Rooms Request is not of the FSG-Data-Hotel class, but a Rooms Request can inherit attributes from that class. Take care not to use the inherited Hotel .Name property for anything other than the Hotel’s name. If desired, the RoomsRequest class could contain a page property of class FSG-Data-Hotel that references a D_Hotel data page with the .Name property used as the lookup parameter.


Extending data classes through application layers

Applications evolve over time and leverage existing functionality as opposed to re-inventing it. Applications are more manageable if implemented layer by layer, each layer serving a specific purpose. The use of layers to build applications is a good way to manage complexity, specifically the complexity that arises when dealing with inter-object dependencies. As shown in the dependency tree below, dependencies are grouped into levels.

The algorithm to derive dependency level is simple and recursive. In this example we begin with a base level 0 that contains reusable artifacts. The algorithm follows these steps:

1. Identify the set of classes within D_AllClasses where:
   a. If level = 0, the class may not have any dependencies.
   b. If level > 0, every class dependency is either bi-directional or points to a class in D_RemovedClasses.
2. Transfer this set of classes out of D_AllClasses into D_RemovedClasses.
3. If D_AllClasses is empty, halt.
4. Level = Level + 1.
5. Go to step 1.

Note: Persisted reference data is typically Level 0 only, but it can also be Level 1 or higher. A small number of consecutive levels approximate a layer. The lowest levels are synonymous with an Enterprise layer.
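A minimal sketch of that level-derivation algorithm follows. It assumes the dependency graph is supplied as a map from each class to the set of classes it depends on, and it treats mutual (bi-directional) dependencies as already satisfied, as in step 1b. The class and method names are illustrative, not Pega rule names.

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the level-derivation algorithm above. Level 0 classes have no
// dependencies; a class reaches level N > 0 once each of its dependencies is
// either bi-directional (mutual) or already transferred to the removed set.
public class DependencyLevels {

    public static Map<String, Integer> compute(Map<String, Set<String>> dependsOn) {
        Map<String, Integer> levelOf = new HashMap<>();
        Set<String> remaining = new HashSet<>(dependsOn.keySet());   // plays the role of D_AllClasses
        Set<String> removed = new HashSet<>();                       // plays the role of D_RemovedClasses
        int level = 0;
        while (!remaining.isEmpty()) {
            final int current = level;
            List<String> ready = remaining.stream()
                    .filter(c -> current == 0
                            ? dependsOn.get(c).isEmpty()
                            : dependsOn.get(c).stream().allMatch(d ->
                                    removed.contains(d)
                                    || dependsOn.getOrDefault(d, Set.of()).contains(c)))
                    .toList();
            if (ready.isEmpty()) {
                throw new IllegalStateException("No further classes can be leveled under these rules");
            }
            ready.forEach(c -> levelOf.put(c, current));
            removed.addAll(ready);
            remaining.removeAll(ready);
            level++;
        }
        return levelOf;
    }
}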


An OrgDivision-specific application is built and dependent on one or more enterprise applications or components. Each OrgDivision application would occupy levels in the dependency tree above the levels that constitute the Enterprise layer. The built-on levels shared by every OrgDivision application would constitute a Division layer. This would occur despite a single OrgDivision having the freedom to define data classes that have no dependencies.

Extending data classes through application layers example

The following diagram shows how data classes are extended through the application layers used in the FSG application.

The diagram also shows how the application layers are configured. The Email application is built on Pega. The Hotel application is built on the Email application. The Event Booking application is built on the Hotel application. Within the Pega application are various dependency levels. The complexity amongst those levels is managed by Pega. You could install FSG-Data classes and their respective rulesets as a COE managed Enterprise application. Such a COE managed application would be above Pega and beneath the Email application.


How to extend a data class to higher layers

One approach to extend a data class is to place data classes in a separate, non-work class ruleset. You typically do this at the Enterprise level. The New Application wizard does not generate a -Work class, but it does generate an -Data class. Creating -Work is redundant since the class extends Work-Cover-. Case types are never defined under -. The New Application wizard must be run to create a new implementation application where the name can imply that the application's purpose is generic across an enterprise. However, the application is an implementation of a business outcome (for example, the ability to schedule and pre-edit an email). When adding a data type to an enterprise application, Pega automatically creates the data type in -Data. This new class is created in the ruleset. This is a different ruleset from the enterprise application's case type ruleset. In contrast, when adding a data type to an implementation application built on an enterprise application, Pega asks whether the data type should be created at the enterprise level or the implementation level. The base data class --Data is created in the same ruleset as the --Work class. This does not mean that every App-related Data class should be added to that ruleset. You could also create a new ruleset synonymous with the , but where data classes are added to the -Data class. This approach allows applications to introduce a new, generic data type that can be referenced by other applications. The case-type-containing application would control the content of this enterprise-level Data ruleset. However, that enterprise-level Data ruleset would not be included in the case-type-containing application's ruleset list. This is done to satisfy the requirement that the same ruleset should not be included in multiple applications. For example, consider a product warranty processing application. It makes sense for such an application to define an -Data-Warranty class to define the basic attributes of a warranty (StartDate, ExpirationDate, TermsAndConditions). A warranty processing application might need to extend that class to meet its application-specific needs. Other applications may need warranty information but do not need to process warranties. Those applications benefit by having access to a ruleset that only contains warranty-related data class definitions. As a general rule, Enterprise-level data classes should only be aware of Enterprise-level Work- classes. Similarly, Implementation-level data classes should only be aware of Implementation-level Work- classes. How can an enterprise-level -Data-Warranty class interact with -Warranty-Work cases? The solution is to use both the Template design pattern and Dynamic Class Referencing (DCR).

Applying the Template Pattern and DCR

Suppose an enterprise-level -Data-Vehicle class exists in a specialized non- ruleset. That specialized ruleset can define a Work- list property named .VehicleList that contains -Data-Vehicle instances. A Vehicle section is included in the -Data-Vehicle class that contains an Add Vehicle To List button. When the button is clicked, the displayed -Data-Vehicle page is appended to .VehicleList.


To implement this feature, you associate an AddVehicleToList data transform to the button click action. Then you configure the AddVehicleToList data transform as shown in the following table.

Application Layer
  Classes and Properties: Org-App-Data-Vehicle-Car
  Data Pages: D_Vehicle; When .Type="Car", set .Class="Org-App-Data-Vehicle-Car"

Enterprise Layer
  Classes and Properties: Work-.VehicleList; Org-Data-Vehicle .Type; Org-Data-Vehicle-Car
  Data Pages: D_Vehicle; When .Type="Car", set .Class="Org-Data-Vehicle-Car"
  Data Transforms: Org-Data-Vehicle AddVehicleToList
    Update Primary: set .pxObjClass = D_Vehicle[.Type].Class
    If param.WorkPage = "", set param.WorkPage = "pyWorkPage"
    Append to WorkPage.VehicleList: Primary

The App Layer in the table above might be an Auto Insurance application that insures different types of vehicles, such as cars, trucks, and motorcycles. By overriding the data transform that sources the D_Vehicle data page, the Auto Insurance Quote application is free to extend the -Data-Vehicle class using either pattern or direct inheritance, typically the latter. The following table shows possible class names after pxObjClass is set within the AddVehicleToList data transform. Note that the organizational layer data class Org-Data-Vehicle can be in a ruleset other than the organization ruleset.

Enterprise Layer              App Layer
-Data-Vehicle                 -AutoQuote-Data-Vehicle
-Data-Vehicle-Car             -AutoQuote-Data-Vehicle-Car
-Data-Vehicle-Truck           -AutoQuote-Data-Vehicle-Truck
-Data-Vehicle-Motorcycle      -AutoQuote-Data-Vehicle-Motorcycle

Once the pxObjClass of the Primary page has been set, the AddVehicleToList data transform is free to leverage OOP polymorphism. Rules defined in specialized classes are invoked, overriding rules provided in the base class. For example, Org-Data-Vehicle defines a rule with the same name (pyRuleName) and class (pyRuleClass). The root class rule's implementation can simply be empty (stubbed out), assuming it will always be overridden by a rule in a more specific class. Note how the AddVehicleToList data transform uses a D_Vehicle lookup data page to set the vehicle class based on the vehicle type. A higher-layer application should be allowed to extend the -Data-Vehicle class without having to modify a rule in a locked ruleset. Unlocking the ruleset violates the Open-Closed Principle. A when/otherwise data transform that converts the type parameter to a class name may seem unconventional. However, the application that is being extended can override the type-to-class conversion data transform; the locked ruleset implementation never needs to change.
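The same pattern can be pictured in plain object-oriented terms. The sketch below is only an analogy for the data transform design described above; the class names are hypothetical and do not correspond to actual rules. A lookup table maps a type value to a concrete class, the base implementation is an empty stub, and a higher layer extends behavior by registering a more specific class without touching the locked base code.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Plain OOP analogy: a type-to-class lookup plays the role of the D_Vehicle
// data page, and the stubbed base method plays the role of the empty
// root-class rule that specialized classes override.
class Vehicle {                                   // analogous to the base -Data-Vehicle class
    void validate() { /* stubbed out; meant to be overridden */ }
}

class Car extends Vehicle {                       // analogous to -Data-Vehicle-Car
    @Override void validate() { System.out.println("Car-specific validation"); }
}

class VehicleFactory {
    private final Map<String, Supplier<Vehicle>> byType = new HashMap<>();

    // A higher layer extends behavior here, without touching the locked base code.
    void register(String type, Supplier<Vehicle> constructor) { byType.put(type, constructor); }

    Vehicle create(String type) {
        Vehicle v = byType.getOrDefault(type, Vehicle::new).get();
        v.validate();                             // polymorphic dispatch, as with specialized rules
        return v;
    }
}

For example, an application layer could call register("Car", Car::new), or register a more specific subclass, which parallels overriding the type-to-class conversion data transform in its own unlocked ruleset.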


How to maintain data integrity

Maintaining data integrity is crucial in applications that persist data that can be accessed and updated by multiple requestors. This data includes shared information that can be updated, such as reference data, where Pega is the system of record. You can mitigate data integrity issues by locking instances, avoiding redundancy, and accounting for temporal data.

Locking instances

On the Locking tab of a data class rule, select Allow locking to allow instances to be locked. The Locking tab is only displayed when the data class is declared concrete as opposed to abstract. By default, the class key defined on the General tab is used as the key.

Avoiding redundancy

A potential data integrity issue can arise if you are querying two different tables in a database for the same entity. If you are retrieving data from similar columns that exist in both tables, the values in those columns may be different. To avoid these potential conflicts, keep in mind the single source of truth principle. This is the practice of structuring information models and associated schemata such that every data element is stored exactly once. Within a case hierarchy, you may want to always use data propagation from a parent case to each child case. However, if the data propagated from the cover is subject to change, then accessing the data from pyWorkCover directly is better. You can use a data page if you need to access data from a cover's cover.


The following image illustrates the propagation of information from the WorkPage to a Claim case (W = WorkPage and C = Claim). W1 is the original WorkPage, W2 is W1’s cover, and C is W2’s cover.

It should be noted that the use of recursion in the above example could be avoided by defining a ClaimID property at Org-App-Work and ensuring the value of ClaimID is propagated to each child case. A subcase’s pxCoverInsKey would never change. Not setting the ClaimID property initially and propagating it leads to information loss which requires effort to recover. Any property not subject to change can similarly be propagated, especially for purposes such as reporting and security enforcement. Referencing pages outside a case has the additional benefit of reducing the case’s BLOB size. A smaller BLOB reduces the amount of clipboard memory consumed. The clipboard is able to passivate memory if it is not accessed within a certain period of time. Instead of maintaining large amounts of data within the BLOB as either embedded pages or page lists, consider storing that information in history-like tables, for example, tables which do not contain a BLOB column. These tables let you use data pages to retrieve the data as needed. As with any type of storage, consider exposing a sufficient number of columns in these tables to allow a glimpse of what the pages may contain while avoiding BLOB reads.

KNOWLEDGE CHECK

How does the single source of truth principle help in data integrity? This principle ensures that every data element is stored exactly once


Accounting for temporal data

Accounting for temporal data is another way to help ensure data integrity. Temporal data is valid within a certain time period. One approach to accommodating temporal data is to create data classes using a base class containing Version and IsValid properties. The base class can also contain properties suited for lists, such as ListType and SortOrder. Alternatively, you can accommodate temporal data using a custom rule. Custom rules can be the best approach for maintaining a list that contains a list. For example, assume you create a survey that contains a list of questions. Each question has a list of responses. You could define a Rule-Survey rule class and the following data model:

• QuestionList: a field group list of class Data-SurveyQuestion
• ResponseList: a field group list of class Data-SurveyResponse

The ResponseList is owned by each Data-SurveyQuestion. Using data pages to perform SnapShot Pattern lazy loading can compromise data integrity if the data page lookup parameters do not account for temporal data. The child case could load information newer than the information that the parent case communicated to a customer. You could use data propagation to child cases but this approach can cause redundant data storage. To avoid this issue, you can have child cases refer to their cover case for the information (the questions and possible responses) sent to the customer. Only the cover needs the SnapShot Pattern. The child case only needs to record the customer’s response(s) to each question.
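A minimal illustration of the temporal pattern follows, assuming a data class that carries a version, a validity flag, and an effective date range. The field and class names are hypothetical, not properties of an actual Pega data class.

import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch: select the record that was valid on a given date, preferring the
// highest version when several valid records overlap. This is the kind of
// lookup parameter a data page needs when its source data is temporal.
record TemporalRecord(String key, int version, boolean isValid,
                      LocalDate validFrom, LocalDate validTo) {}

class TemporalLookup {
    static Optional<TemporalRecord> asOf(List<TemporalRecord> records, LocalDate date) {
        return records.stream()
                .filter(TemporalRecord::isValid)
                .filter(r -> !date.isBefore(r.validFrom()) && !date.isAfter(r.validTo()))
                .max(Comparator.comparingInt(TemporalRecord::version));
    }
}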

KNOWLEDGE CHECK

What are two ways to accommodate temporal data? Use a data class with a version property, or use a custom rule.


Extending an industry framework data model

Introduction to extending an industry foundation data model

Pega offers a foundation data model for each major industry, including financial services, healthcare, and communications. Similar to leveraging a Pega application, using the industry foundation data model can give you an improved starting point when implementing your application. After this lesson, you should be able to:

• Identify benefits of using an industry foundation data model
• Extend an industry foundation data model


Industry foundation data model benefits

Pega's industry foundation data models allow you to leverage and extend an existing logical data model instead of building one from scratch. Pega offers industry data models for Healthcare, Communications and Media, Life Sciences, Insurance, and Financial Services. Instead of building data classes yourself, you can map the system of record properties to the data classes and properties of the industry data model. For example, the following image illustrates the logical model for the member data types for the Healthcare industry foundation.

You can embellish the industry foundation data classes to include additional properties from external systems of record as needed. Pega's industry foundation data models apply the Separation of Concerns (SoC) design principle: Keep the business logic separate from the interface that gets the data. The business logic determines when the data is needed and what to do with that data after it has been retrieved. The interface insulates the business logic from having to know how to get the data.


In Pega Platform, data pages connect business logic and interface logic. The following image illustrates the relationship between the data page, the data class, and the mechanism for retrieving the data.

This design pattern allows you to change the integration rules without impacting the data model or application behavior.

KNOWLEDGE CHECK

What are the two key benefits of using a Pega industry foundation data model? The industry foundation provides data pages that separate business logic from the source of the data (the interface), and it provides a robust starting point for data properties and classes.

Rather than directly extending the industry foundation data model, your project may mandate that you use an Enterprise Service Bus (ESB)-dictated data model. The goal of an ESB is to implement a canonical data model that allows clients who access the bus to talk to any service advertised on the bus.


The Pega development team does not define the canonical data model. However, the team may maintain the mapping between the canonical data model and the foundation data model. Note: It is not a best practice to use a custom data model or wrap the foundation data model. The best practice is to leverage the foundation data model and embellish it as needed.


How to extend an industry foundation data model

Follow this process to extend an industry foundation data model:

• Obtain industry foundation data model documentation
• Obtain the data model for the system of record
• Map the system of record data model to the industry foundation data model
• Add properties to the industry foundation data model and add data classes as needed
• Maintain a data dictionary

Before you begin the mapping process, determine which parts of the data to map. For example, when producing the initial minimal lovable product (MLP), it may not be necessary to map all of the data from the source before the go-live. In some situations, it may be easier to extend data using circumstancing and pattern inheritance. Note: Building web services can be an expensive and lengthy process. If you discover that you need to build a new web service, consider using robotic automation.

Obtain industry foundation data model documentation

The Community landing page for each industry foundation data model contains an entity relationship diagram (ERD) and a data dictionary. You need these documents to help you map the industry foundation data model to the system of record data model. Acquaint yourself with the relationships of the data types, classes, and properties that the industry foundation data model provides. For example, the Pega Customer Service data model has three main classes:

1. Pega-Interface-Account
2. Pega-Interface-Customer
3. Pega-Interface-Contact

In an industry such as banking, a customer typically has multiple accounts, such as checking and savings. A customer should be able to define multiple contacts per account. Therefore, the relationship between Account and Contact is many-to-many. Pega industry frameworks do not force consumers of their data models to use intermediary, many-to-many association classes. Instead, in true Separation of Concerns (SoC) fashion, Pega industry frameworks hide many-to-many relationship complexity by having data consumers reference appropriately named and parameterized data pages.
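Conceptually, the consumer only sees a parameterized accessor, while the association stays hidden behind it. The following Java sketch is an analogy only; the class and method names are hypothetical and are not the framework's actual data pages or classes.

import java.util.List;
import java.util.Map;
import java.util.Set;

// Analogy: the many-to-many Account/Contact association is resolved inside the
// accessor, so consumers never reference an association class directly.
class ContactDirectory {
    private final Map<String, Set<String>> contactIdsByAccount;   // hidden association
    private final Map<String, String> contactNameById;

    ContactDirectory(Map<String, Set<String>> association, Map<String, String> contacts) {
        this.contactIdsByAccount = association;
        this.contactNameById = contacts;
    }

    // Plays the role of a parameterized data page such as "contacts for an account".
    List<String> contactsForAccount(String accountId) {
        return contactIdsByAccount.getOrDefault(accountId, Set.of()).stream()
                .map(contactNameById::get)
                .toList();
    }
}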

Obtain the data model from the system of record

Work with the enterprise architecture team at your organization to obtain a model of the system of record. This data model documentation can take the form of:

• An entity relationship diagram
• A canonical data model (typically used in ESB solutions)
• A WSDL or XSD
• A spreadsheet

Regardless of format, this documentation serves as a source for mapping the industry model to the system of record.

Map the system of record data model to the industry foundation data model

The next step is to map the system of record data model to the industry foundation data model. To help with this process, use a tabular format, such as a spreadsheet, to record this information. The output is a reference document to use when mapping property values from the integration response to the application data structure. During this analysis, you may find that you need new properties for the application. For example, when mapping the healthcare industry foundation data model, you may find that you need a property to store information when a claim is submitted outside of the member's home country. Record the name and the class where the property resides, because you will need to add it to the application data model.

Add data classes and properties

Only create new data classes and properties if your application requires that data from the source system. Use the data classes and properties from the industry foundation data model as much as possible. If you create any new properties and data classes, generate the integration rules into the organization-level integration ruleset. Test each data page to ensure the mapping of the source data to the application data model is correct.

Tip: Over time, web service and REST service definitions can change. Use rulesets to maintain versions of integration rules. As a best practice, allow the center of excellence to manage and deploy these rules to development environments.

Maintain a data dictionary

If the data mapping is not recorded, it may be difficult or impossible for another team maintaining the model to reverse the mapping if necessary. A data dictionary is especially important if two or more source data items map to one output data item (this type of relationship is called surjection). For instance, the same type of information may exist in two different paths within the integration data model. Encourage your development team to document the meaning and proper use of data model properties.


KNOWLEDGE CHECK

What is the primary benefit of using an industry foundation data model? You do not have to create the data model yourself. You map the system of record data model to the industry foundation data model, and only add new fields if required by the application.

Circumstancing

You can also circumstance foundation rules in order to extend them. Circumstancing rules enables you to use the original foundation rule in your application without having to override the rule. Because the value of the circumstancing property can change dynamically within the requestor's context, you can use the base version of the foundation rule by default. For example, assume you need to extend a section in a foundation class. Rather than override the section, you can circumstance the section by adding a property to the Data-Admin-Operator-ID class. That property is visible within the top-level OperatorID clipboard page. For example, the property can be defined as:

OperatorID.PrimaryRole = FacilityCoordinator

You can derive the PrimaryRole property value using a declare expression, as shown in this example:

.PrimaryRole = @whatComesAfterLast(AccessGroup.pyAccessGroup, ":")

Pattern inheritance

Extending a foundation class using pattern inheritance is another option, which is similar to direct inheritance extension. You use Dynamic Class Referencing (DCR) to choose the appropriate pxObjClass value.


How to use integration versioning

Developing a Pega application in parallel with the development of an ESB, or another form of integration to a legacy system, is common. The integration data model has its own unique internal dependencies. The mapping code depends on the current state of the integration data model for conversion to the business logic data model. Even after an application is placed into production, these internal dependencies may change. The business data model remains the same, and the mapping code is the adapter that insulates the business data model from these changes. This is also known as loose coupling. To deal with changes to integration data models, generate the models with new integration base classes such as -Int-. Note that the new data model may create code redundancy. This issue is easily addressed by generating the mapping code into a different ruleset. Over time, when there is no need to return to a previous ruleset, you can remove the ruleset from the application definition. Ideally, you remove the unused ruleset when versioning the application. Changes to the integration root classes need to be accounted for. Use DCR to accommodate those changes. Since data pages support when conditions, you can use those conditions to determine which integration version to use based on the application version. You can also supply a Data-Admin-System-Setting value to a data page to accommodate the interface data model change.


USER EXPERIENCE DESIGN


User experience design and performance

Introduction to user experience design and performance

Application performance and user experience are naturally related. If the application's user interface is not responsive or performance is sluggish, the user experience is poor. After this lesson, you should be able to:

• Identify application functionality that can directly impact the user experience
• Describe strategies for optimizing performance from the perspective of the user
• Design the user experience to optimize performance


How to identify functionality that impacts UX

A good user experience is not only related to the construction of user views and templates. Thoughtful use of features such as background processing can impact the user experience and overall performance of the application. Consider these areas of application functionality that directly impact the user experience, and use the following guidelines to ensure the application provides the best possible experience for your end users:

• Background Processing
• SOR Pattern
• External Integration
• Network Latency
• Large Data Sets
• User Feedback
• Responsiveness

Background processing
Moving required processing to the background so the end user does not have to wait can improve the perceived performance of an application. This can also be referred to as asynchronous processing. While the user is busy reading a screen or completing another task, the application performs required tasks in another requestor session, apart from the user's requestor session. Scenarios where background processing can be leveraged include:

- SOR pattern
- External integration
- Network latency

Leverage the system of record (SOR) pattern
When leveraging the SOR pattern, required data is not kept with the case itself. Instead, the data is retrieved when needed at run time from an external SOR. In these scenarios, you can defer the load of the external data until after the initial screen is loaded.

Design for realistic integration response times
External integration with an SOR is almost always a requirement. When integrating with external systems, establish realistic expectations for the amount of time needed to load data retrieved from the external systems. By leveraging background and asynchronous processing, you can quickly render an initial user interface. This technique allows the end user to start working while the application gathers additional data. The application then displays the data as soon as it becomes available.

Estimate network latency accurately
Never underestimate the impact that network latency can have on the amount of time required to retrieve data from external systems. Whenever possible, colocate the Pega database on the same high-speed network as the application servers running the Pega Platform or engine. Keep the systems you are integrating with as close as possible to your data center. If a system you are integrating with is located very far away, consider using replicated data from a nearby data warehouse or proxy system. You can also use edge servers for web content that is referenced frequently.

Avoid usage of large data sets
When it comes to data, less is always better. Avoid retrieving large data sets. Keep your result sets as small as possible. Only retrieve the data that is immediately required for the task at hand. Consider aggregating data sets ahead of time by introducing data warehouses and multidimensional databases where quick response times are critical.

Provide meaningful user feedback
If it will take longer than a couple of seconds to load a screen, give the end user meaningful feedback about how much time is needed to complete the processing. Give the end user something else to do while the processing is taking place. You can also design the interaction so the user can opt to perform it in the background, or cancel it if it is taking too long. Always keep the end user in control.

Leverage responsive UI
As the form factor changes, leverage Pega's responsive UI support and only show the user what is absolutely necessary to complete the task at hand. Avoid creating the "Everything View" that tries to show every piece of information all at once. Move unnecessary or optional information off the screen as the screen size is reduced. Keep your user interfaces highly specialized and focused on individual and specific tasks.

Other performance issues that affect the user experience
Many application performance issues affect the user experience directly or indirectly. Because the application shares resources across users and background processes, an issue in another part of the application can affect the individual end user in some way. Sometimes, performance issues show up only after the application has been in production for several weeks, and users start to experience performance degradation over a period of days. Use performance tools and troubleshooting techniques to identify causes of poor performance. For more information, see the following PDN article: Support Play: A methodology for troubleshooting performance.

User experience performance optimization strategies
Poor application performance can prevent users from meeting service levels and can seriously impact the overall user experience. Careful design and usage of the UI components, case types, and service calls can help avoid performance issues. Apply the following strategies to provide an optimal user experience:

- Use layout groups to divide a rich user interface
- Leverage case management to divide complex cases
- Run service calls asynchronously
- Investigate PEGA0001 alerts

Use layout groups to divide a rich user interface
Loading a single, complex form into the user's session can impact performance. Design the user interface to allow the user to focus on completing a specific task. Techniques include:

- Breaking complex forms into logical modules that allow users to focus on an immediate goal
- Using layout groups to break long forms into logical groups

Once the form is divided into layout groups, design each layout group to use the deferred load feature. With this approach, only the data for the initially displayed layout group is loaded when the form loads. The data for the other layout groups loads dynamically when users select each layout group in the browser, as in the sketch that follows. Important: Use data pages as the source of data for deferred loading. Cache and reuse the data sets using appropriately sized and scoped data pages.
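For illustration only (the case, tab, and data page names are hypothetical), a deferred-load configuration might look like this:

  Layout group on the Claim review form:
    Tab "Details"     - renders immediately from the case data already in memory
    Tab "History"     - Defer load enabled; section sourced from D_ClaimHistory[ClaimID]
    Tab "Attachments" - Defer load enabled; section sourced from D_ClaimAttachments[ClaimID]

The form opens as soon as the Details tab is ready; the history and attachment queries run only if the user actually opens those tabs.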

Leverage case management to divide complex cases
Dividing complex cases into smaller, more manageable subcases allows you to process each subcase independently. This technique avoids loading a single large case into the user's session. For example, optimistic locking avoids the need to open parent cases when a subcase is opened. Each subcase opened in a separate PRThread instance executes asynchronously and independently of other cases.

Run service calls asynchronously
For any long-running service call, use the run-in-parallel option on the Connect method. This option allows the service call to run in a separate requestor session. When the service call runs in another requestor, users are not waiting for the service call to finish and can continue with other tasks.

Investigate PEGA0001 alerts
PEGA0001 alerts typically mask other underlying performance issues that can negatively impact the user experience. Leverage one of the Pega-provided performance tools such as AES or PDC to identify the underlying performance issue. Once you have identified the cause of the performance problem, redesign and implement a solution to address the problem.

Examples of alerts that are behind the PEGA0001 alert messages include:

- PEGA0005 — Query time exceeds limit
- PEGA0020 — Total connect interaction time exceeds limit
- PEGA0026 — Time to connect to database exceeds limit

Tip: In general, avoid retrieving large result sets. Only retrieve the minimal information required for the task at hand.

Design practices to avoid
Avoid these design practices:

- Misuse of list controls
- Uncoordinated parallel development

Misuse of list controls
Misuse of list controls is a common problem that can easily be avoided during the design of a solution. Configure autocomplete controls to fetch data from data pages that never return more than 100 rows of data. Limit drop-down list boxes to no more than 50 rows of data. Autocomplete controls negatively impact the user experience if:

- The potential result set is larger than 100 rows
- All the results in the list start with the same three characters

Reduce the result set size for all list controls; if more than 100 rows are needed, use a different UI component or data lookup mechanism. Source list controls from data pages that load asynchronously.

Uncoordinated parallel development
Uncoordinated parallel development efforts can also impact performance for the user. For example, multiple development teams could invoke the same web service returning the same result set multiple times and within seconds of each other. Multiple service calls returning the same result set waste CPU cycles and memory. To avoid this situation, devise a strategy for the development teams to coordinate web service calls through use of shared data pages.

How to design the user experience to optimize performance
The best way to prevent performance issues is to design the application to avoid them in the first place. Use the following techniques to optimize user interface performance and provide the best possible user experience:

- Leverage asynchronous and background processing
- Implement the system of record (SOR) data retrieval pattern
- Utilize deferred data loading
- Paginate large result sets
- Leverage data pages
- Use repeating dynamic layouts
- Maximize client-side expressions
- Use single-page dynamic containers
- Utilize layout refresh and optimized code
- Leverage new Pega Platform user interface features

Asynchronous processing options
Pega provides multiple mechanisms to run processing in parallel or asynchronously when integrating with external systems. For instance, you may initiate a call to a back-end system and continue your processing without blocking and waiting for the external system's response. This is useful when the external system processing time can be long and when the result of the processing is not needed immediately. This topic presents how the following Pega Platform features can be leveraged to improve the user experience.

Run connectors in parallel
Imagine the following scenario. In a claims application, you retrieve the data and policies for customers who call to file a claim. The data is retrieved using two connectors: GetCustomerData and GetPolicyList. To speed up the loading of customer data, you run the connectors in parallel. You can use the Run in Parallel option to accomplish this. In this case, each connector runs as a child requestor. The calling activity, the parent requestor, retains control of the session and does not have to wait for each connector, in succession, to receive its response. The Run in Parallel feature is useful when subsequent tasks can be performed while waiting for multiple connectors to retrieve their individual responses.
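A minimal activity sketch of this scenario follows. The step layout is indicative only; the connector names come from the scenario above, and parameter names can differ slightly between Pega versions:

  Step 1: Connect-SOAP   ServiceName: GetCustomerData   RunInParallel: true
  Step 2: Connect-SOAP   ServiceName: GetPolicyList     RunInParallel: true
  Step 3: Connect-Wait   WaitSeconds: -1   (wait until both child requestors have returned)
  Step 4: Data transforms map the two response pages into the claim case

Because steps 1 and 2 return immediately, the two calls overlap; the activity only blocks at the Connect-Wait step.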

Execute connectors using queued mode
Imagine the following scenario. You have a SOAP connector called UpdateCustomerData that updates a customer record in an external system. The response returned by the service is irrelevant for subsequent processing. Since the customer might be temporarily locked by other applications, you retry the execution if it fails. In addition to being executed synchronously and in parallel, the SOAP, REST, SAP, and HTTP connectors can also be executed in queued mode. Select queueing in the Processing Options section on the connector record's Service tab to configure queueing. When queueing is used, each request is queued and then processed later in the background by an agent. The next time that the agent associated with the queue runs, it attempts to execute the request. The queueing characteristics are defined in the connector's Request Processor.

Background processing
Background processing can also be leveraged to allow an initial screen to load, which allows the user to continue working while additional detailed information is retrieved. This strategy is particularly useful when using the SOR design pattern.

Pagination
Pagination can be leveraged to allow long-running reports to retrieve just enough information to load the first page of the report. As the user scrolls down to view the report, additional records are retrieved and displayed as they are needed. Use appropriate pagination settings on grids and repeating dynamic layouts to reduce the amount of markup used in the UI.

Deferred data loading
Deferred data loading can be used to significantly improve the perceived performance of a user interface. Through asynchronous or background processing, the screen is rendered almost immediately, allowing the user to get on with the task at hand while additional information is retrieved and displayed as it becomes available. Use defer load options to display secondary content.

Data pages
Use data pages as the source for list-based controls. Data pages act as a cached data source that can be scoped, invalidated based on configured criteria, and garbage collected.

Repeating dynamic layouts
Use repeating dynamic layouts for nontabular lists. Avoid multiple nested repeating dynamic layouts.

Consolidate server-side processing
Ensure that multiple actions that are processed on the server are bundled together so that there is only a single round trip.

Client-side expressions
Use client-side expressions instead of server-side expressions whenever possible. Whenever expressions can be validated on the client, they run on the browser. This is typically true for visibility conditions, disabled conditions, and required conditions. Enable the Client Side Validation check box (only visible when you have added an expression), and then tab out of the field.

Single-page dynamic containers
Use non-iFrame (iFrame-free) single-page dynamic containers; this is the default setting in Pega 7. Single-page dynamic containers are much lighter on the browser and enable better design and web-like interaction. Only embed sections if they are truly being reused, because Pega copies the configuration of included sections into the sections in which they are included. It is more efficient to reference a section by simply dragging and dropping the section into a cell.

Layout refresh and optimized code
Use refresh layout instead of refresh section to refresh only what is required. To reduce markup, use the Optimized code settings on the dynamic layout's Presentation tab.

Pega Platform features
Leverage all the newest technologies in Pega Platform for better client-side performance and smaller markup. The newest user interface technology is all HTML 5 and CSS 3.0. Take advantage of icon fonts and new menus. Use the least number of layouts and controls possible, and always use the latest components and configurations available. Use screen layouts, layout groups, dynamic layouts, dynamic containers, and repeating dynamic layouts. Avoid legacy accordions, column repeats, tabbed repeats, and freeform tables, as they run in quirks mode. Use layout groups rather than legacy tabs, which have been deprecated. Also avoid inline styles (not recommended although still available), smart layouts, and panel sets.

Conducting usability testing

Introduction to conducting usability testing
Usability testing is a method for determining how easy an application is to use by testing the application with real users in a controlled setting. Usability testing is an important part of designing and implementing a good user experience. After this lesson, you should be able to:

- Discuss the importance of usability testing
- Plan for usability testing
- Describe the six stages of usability testing

Usability testing
Planning for, and building usability testing into, the project plan is essential to designing a positive user experience. The goal of usability testing is to better understand how users interact with the application, and to improve the application based on the results of the usability tests. Usability testing involves interacting with real-world participants—the users of the application—to obtain unbiased feedback and usability data. During usability testing, you collect both quantitative and qualitative data concerning the user experience. The quantitative data is generally the most valuable. An example of quantitative data is: 73% of users performed the given task in 2 seconds or less. Sometimes qualitative data, such as a user's opinion of the software, is also collected. An example of qualitative data is: 67% of users tested agreed with the statement, "I felt the app was easy to use."

Who participates in usability testing?
Effective usability testing involves typical users of your application. This is critical in making usability testing work. Usability testing cannot be substituted by working with project managers, project sponsors, stakeholders, or business architects.

Why is usability testing so important?
Usability testing validates the ease of use of the user interface design so that time and resources are not spent on developing a poor user interface. Usability testing is done to collect valuable design data in the least amount of time possible. The output of usability testing helps identify usability issues. To make the testing effective, establish goals before you start planning.

What are some of the goals to set?
Set measurable metrics or Key Performance Indicators (KPIs) for quantitative testing. Some examples include:

- Reduce the number of steps to complete the approval process by 10%.
- Reduce the number of errors on a given transaction by 20%.
- Reduce response times throughout the application by at least 30%.

Who performs usability testing?
Usability testing in an enterprise environment is a serious task. The best person to perform it is someone who has experience conducting these tests with end users and documenting the feedback. Pega offers assistance with usability testing and recommends that you engage with Pega to review all screens before starting to develop them.

When do you perform usability testing?
Usability testing is conducted periodically throughout the software development life cycle and can help identify issues early.

How to conduct usability testing
Usability testing is a method for determining how easy an application is to use by testing the application with real users in a controlled setting. Users are asked to complete tasks to see where they encounter problems and experience confusion. Observers record the results of the test and communicate those results to the development team or product owner. To conduct a usability test of your application, you need to recruit test participants. Select test participants who have the same skills as your production application users. After you plan the usability test and recruit usability test participants, you conduct the usability test. Usability testing typically involves six stages:

1. Select tasks to test
2. Document sequence of testing steps
3. Decide on the testing method
4. Select participants
5. Conduct tests
6. Compile results

Select tasks to test
Work with the product owner to select the tasks to test. The selected tasks cover the most common and important use cases. For example, you are conducting usability testing for a time-sheet application. The product owner identifies three tasks for usability testing:

- Users enter their hours worked in the time-sheet application, and then submit the time sheet to their manager for approval.
- Users can view their vacation balance.
- Managers review and can approve time sheets.

Document sequence of testing steps
Break down each of the identified tasks into a sequence of steps and document them. Then, give this document to the usability testing participants. The following steps explain how to enter a time sheet.

1. Use the Timesheet application to enter your hours for the week.
2. Submit the time sheet for your manager's review and approval. Use the following table as a reference.

   Day of the week    Activity
   Mon                Attended training on Agile Methodology
   Tue                Attended training on Agile Methodology
   Wed                Attended training on Agile Methodology
   Thu                General Work
   Fri                General Work

3. Begin by adding the appropriate time codes for training and general work for each day of the week.
4. Review, and then submit your completed time sheet.

Decide on a testing method
Usability testing can be conducted in an unmoderated or moderated setting. The brevity and quality of your testing instructions are critical when conducting unmoderated testing, because no one can guide the testing participants if they need help. The benefit of conducting moderated testing is that you can immediately respond to user behavior and ask questions.

Select participants
Consult with the product owner to get the list of end users who will participate in the usability testing.

Conduct testing
Ensure that the usability testing participants understand the tasks and the sequence of steps. You want the usability testing participants to perform these tasks without assistance. Monitor the participants as they perform the testing, take notes, and measure all interactions. While participants perform the testing, they should also take notes based on what they observe.

Compile feedback
Compile feedback based on the notes provided by the testing participants, and further discussion with the participants. Consider measuring both user performance and user preference metrics. User performance and preference do not always match. Often users perform poorly when using a new application, but their preference ratings may be high. Conversely, they may perform well but their preference ratings are low.

SECURITY


Defining the authorization scheme

Introduction to defining the authorization scheme
In most cases, you want to restrict authenticated users from accessing every part of an application. You can implement authorization features that ensure users can access only the user interfaces and data that they are authorized to access. The Pega Platform provides a complementary set of access control features called Role-based access control and Attribute-based access control. After this lesson, you should be able to:

- Compare role-based and attribute-based access control
- Identify and configure roles and access groups for an application
- Determine the appropriate authorization model for a given use case
- Determine the rule security mode

Authorization models
Use authorization models to define a user's access to specific features of Pega Platform. For example, you can restrict an end user's ability to view data or perform certain actions on a specific instance of a class at run time. You can restrict a business or system architect's ability to create, update, or delete rules at design time, or determine access to certain application development tools such as the Clipboard or Tracer. The Pega Platform offers two authorization models that are different but complementary: Role-based access control (RBAC) and Attribute-based access control (ABAC). Role-based and Attribute-based access control coexist.

Role-based access control (RBAC)
Every application has distinct user roles that form the basis for authorization. You configure RBAC using the rule types described below.

You define Access of Role to Object (ARO) rules on a class basis. Pega navigates the class hierarchy and determines the most specific ARO relative to the class of the object for the user's roles. Any less specific AROs in the class hierarchy for the user's roles are ignored. The operation being performed is allowed if the most specific ARO allows the operation. If the user possesses multiple roles, the most specific ARO rules are determined for each role. The Pega Platform performs the operation if the operation is allowed in any of the most specific AROs for each role. Privileges provide better security because they are defined on individual rules. For example, in order to execute a flow action secured by a privilege, the user must possess the privilege. The privilege is granted through the most specific AROs for the class of the object and the user's roles. There is, however, an option on the role for inheriting privileges within AROs defined in the class hierarchy. Selecting this option provides the user with all privileges in the class hierarchy for AROs and user roles. In the following example, the role has the option for inheriting privileges selected. If the user works on an Expense Report case, the access rights are defined by the most specific row (TGB-HRApps-Work-ExpenseReport). Additional privileges are inherited from the class hierarchy (TGB-HRApps-Work and Work-).

Access classes and granted privileges in the example:

- Work-: access rights granted through production level 5; privileges AllFlows(5) and AllFlowActions(5)
- TGB-HRApps-Work: access rights granted through production level 5; privilege ManagerReports(5)
- TGB-HRApps-Work-ExpenseReport: access rights granted through production level 5; privilege SubmitExpenseReport(5)

Note: If a user has multiple roles, the roles are joined with an OR such that only one of the most specific AROs for each role needs to grant access in order to perform the operation.

Access Deny rules explicitly deny access to an operation. All operations are denied by default until the creation of AROs that grant access for a user with that role to perform the operation. If an access deny rule exists in the class hierarchy for a user's role that denies the operation, and an ARO for the same user's role grants the operation, the operation is denied.

KNOWLEDGE CHECK

Which Access of Role to Object is used if there are several available in the inheritance path? The most specific Access of Role to Object in the class hierarchy relative to the class of the object is identified and defines the access. Any less specific AROs in the class hierarchy are ignored.

Attribute-based access control (ABAC)
ABAC is optional and used in conjunction with RBAC. ABAC compares user information to case data on a row-by-row or column-by-column basis. You configure ABAC using Access Control Policy rules that specify the type of access, and Access Control Policy Condition rules defining a set of policy conditions that compare user properties or other information on the clipboard to properties in the restricted class.

You define access control policies for classes inheriting from Assign-, Data-, and Work- and use the full inheritance functionality of Pega Platform. Access control policy conditions are joined with AND when multiple same-type access control policies exist in the inheritance path with different names. Access is allowed only when all defined access control policy conditions are satisfied. Note: When both RBAC and ABAC models are implemented, the policy conditions in the models are joined with an AND. Access is granted only when both the RBAC policy conditions AND the ABAC policy conditions are met. In the following example, if the HR application user wants to update a Purchase case, the conditions for the access control policies defined in the class hierarchy are joined with AND. The user is granted access for updating the Purchase case only if WorkUpdate AND HRUpdate AND HRPurchaseUpdate all evaluate to true.

Access control policies defined in the example (policy type = condition name):

- Work-: Read = WorkRead, Update = WorkUpdate, Discover = WorkDiscover
- TGB-HR-Work: Update = HRUpdate, Delete = HRDelete, Discover = HRDiscover, PropertyRead = HRPropRead
- TGB-HR-Work-Purchase: Read = HRPurchaseRead, Update = HRPurchaseUpdate, PropertyRead = HRPurchasePropRead, PropertyEncrypt = HRPurchasePropEncrypt

To enable ABAC, in the Records Explorer, go to Dynamic System Settings and update the EnableAttributeBasedSecurity value to True.

KNOWLEDGE CHECK

Which access control policy is used if there are several available in the inheritance path? All access control policies having the same type and different names are considered. The conditions are joined with AND.


How to create roles and access groups for an application
Each user of an application has a defined role for processing cases. Some users can only create cases, while other users may be responsible for reviewing cases and determining case outcomes. Most applications allow one group of users to create and process cases, and a second group of users to approve or reject those cases.

For example, in an application for managing purchase requests, any user can submit a purchase request, but only department managers can approve purchase requests. Each group of users performs a specific role in processing and resolving the case.

Access role
An access role identifies a job position or responsibility defined for an application. For example, an access role can define the capabilities of LoanOfficer or CallCenterSupervisor. The system grants users specified capabilities, such as the capability to modify instances of a certain class, based on the access roles acquired at sign-on. Before you create roles for your application, identify the roles required in your application. A role defines what a user can do in the application. The role represents the functions a group of users perform for a specific application. For example, a role might represent managers, fulfillment operators, clerical workers, or auditors. A given user can be assigned to multiple roles. The access roles available to a user act as a collection of capabilities granted. For example, users with the role of fulfillment operator might have access to open customer order records, while users with the role of manager may have access to open and closed customer order records.

Applications have three default roles: :User, :Manager, and :Administrator. As a best practice, create application-specific roles, and do not use the default roles. You can clone the default roles as starting points for the application-specific roles. The naming convention used for roles is: application name, colon, role name. Use the Access Roles landing page (Designer Studio > Org & Security > Groups & Roles > Access Roles) to create new application-specific roles by cloning the default roles. Use an Access of Role to Object to grant access permissions to objects (instances) of a given class and named privileges to a role. Access permissions and named privileges can be granted up to a specified production level between 1 and 5 (1 being Experimental and 5 being Production) or conditionally via Access When rules.

Use an Access Deny rule to override access if an access group has multiple roles with conflicting privileges. Defining Access Roles that only contain Access Deny rules facilitates maintenance. Roles that only contain Access Deny rules can be described as Access Deny-only Access Roles.

Access group
An access group is associated with a user through the Operator ID record. The access group determines the access roles that the users in the access group hold. The naming convention used for access groups is: application name, colon, group of users. Splitting roles to allow for reuse is useful at times. For example, the roles in an HR application are Employee, HR generalist, HR manager, and executive. Both HR managers and executives can update delegated rules. In this case, create an additional role called DelegatedRulesAdmin. This role is assigned to both HR managers and executives.

KNOWLEDGE CHECK

What is the purpose of the default roles? To be used as the basis when creating application specific roles


How to configure authorization
The Role-based access control (RBAC) and Attribute-based access control (ABAC) authorization models always coexist. RBAC is defined for every user through the roles specified in the access group, and ABAC is optionally added to complement RBAC. You use RBAC to implement requirements where the user might be restricted to accessing specific UI components, such as the audit trail and attachments, or restricted from performing specific actions on a case using privileges. You can also use RBAC to restrict access to rules and application tools, such as Tracer and Access Manager, during design time. You use ABAC to restrict access to specific instances of classes. ABAC is the best practice for making authorization granular. Use ABAC instead of Access When rules in Access of Role to Object (ARO) rules. The following table shows the actions supported by RBAC and ABAC.

Action                        Description                                               RBAC   ABAC
Open/read instances           Open a case and view case data in reports and searches    X      X
Property Read in instances    Restrict data in a case the user can open                        X
Discover instances            Access data in a case without opening the case                   X
Modify/update instances       Create and update a case                                  X      X
Delete instances              Delete and update a case                                  X      X
Run report                    Run reports                                               X
Execute activity              Execute activities                                        X
Open rules                    Open and view a rule                                      X
Modify rules                  Create and update a rule                                  X
Privileges                    Execute rules requiring specified privileges              X

Note: You can only define ABAC for classes inheriting from Assign-, Data-, and Work-. Use the Access Manager to configure RBAC. ABAC is configured manually.

KNOWLEDGE CHECK

When do you configure ABAC? To complement RBAC by restricting actions on a specific instance.


Rule security mode
The Rule security mode setting on the access group helps enforce a deny-first policy. In a deny-first policy, users must be granted privileges to access certain information or perform certain actions. The rule security mode determines how the system executes rules accessed by members of the access group. The three supported rule security modes are Allow, Deny, and Warn.

Allow is the default and recommended setting. The system allows users in the access group to execute a rule that has no privilege defined, or to execute a privileged rule for which the user has the appropriate privilege. If more specific security is needed for an individual rule, specify a privilege for the rule.

Use Deny to require privileges for all rules and users. This setting is only recommended if your organization's security policies require a granular and strict security definition. If Deny is selected and a privilege is not defined for a rule, the system automatically generates a privilege for the rule and checks whether the user has been assigned that privilege. The privilege is made up of :Class.RuleName (5), for example, Rule-Obj-Flow:MyCo-Purchase-Work-Request.CREATE (5). The generated privilege is not added to the rule. If the user has the generated privilege, the system executes the rule. If the user lacks the generated privilege, the system denies execution and writes an error message to the PegaRULES log. If a privilege is defined for a rule, the system checks whether the user has the privilege defined on the rule. If not, the system checks whether the user has the generated privilege for the rule. If the user has either privilege, the system executes the rule. If the user has neither privilege, the system denies execution of the rule and logs an error message in the PegaRULES log.

Use Warn to identify missing privileges for a user role. The system performs the same checking as in Deny mode, but performs logging only when no privilege has been specified for the rule or the user role. The warning messages written to the PegaRULES log are used to generate missing privileges for user roles with the pyRuleExecutionMessagesLogged activity. Ensure sufficient time and resources are available to perform a system-wide test, including all expected users, before changing the rule security mode. See the PDN article Setting role privileges automatically for access group Deny mode.

KNOWLEDGE CHECK

When would you set the Rule security setting to Deny? When the organization's security policies require a granular security definition across the application.


Mitigating security risks

Introduction to mitigating security risks
Securing an application and ensuring that the correct security is set up is important. Correct security entails users only accessing cases they are allowed to access and only seeing data they are allowed to see. This lesson examines common mistakes that can open up vulnerabilities in the system, and how to address them to help avoid potential risks. After this lesson, you should be able to:

- Secure an application in the production environment
- Identify potential vulnerabilities with the Rule Security Analyzer
- Detect and mitigate attacks using Content Security Policies

Security risks
Every application includes a risk of tampering and unwanted intruders. When an application is developed traditionally using SQL or another language, vulnerabilities inherent to the language are included, leaving the systems open to attack. Tampering can occur in many ways, and is often difficult to detect and predict. URL tampering or cross-site scripting can easily redirect users to malicious sites, so taking the proper steps to protect your application is essential.

Developing applications using best practices ensures that rules are written properly and secures the application against threats. To maximize the integrity and reliability of applications, security features must be implemented at multiple levels. Each technique to strengthen the security of an application has a cost. Most techniques have one-time implementation costs, but some might have ongoing costs for processing or user inconvenience. You determine the actions that are most applicable and beneficial to your application. When initially installed, Pega Platform is intentionally configured with limited security. This is appropriate for experimentation, learning, and application development.

KNOWLEDGE CHECK

What can you do to mitigate security risks when developing applications? Follow best practices and take actions to strengthen the security.


Content security policies

Content Security Policies (CSP) are used as a layer of security that protects your browser from loading and running content from untrusted sources. The policies help detect and mitigate certain types of attacks on your application through a browser, including cross-site scripting (XSS) and data injection attacks. When a browser loads a page, it is instructed to include assets such as style sheets, fonts, and JavaScript files. The browser has no way of distinguishing script that is part of your application from script that has been maliciously injected by a third party. As a result, the malicious content could be loaded into your application. CSPs help protect your application from such attacks. Note: If an attack takes place, the browser reports to your application that a violation has occurred. CSPs are a set of directives that define approved sources of content that the user's browser may load. The directives are sent to the client in the Content-Security-Policy HTTP response header. Each browser type and version obeys as much of the policy as possible; if a browser does not understand a directive, that directive is ignored, and otherwise the policy is followed as written. Each directive governs a specific resource type that affects what is displayed in a browser. Special URL schemes that refer to specific pieces of unique content, such as data:, blob:, and filesystem:, are excluded from matching a policy of any URL and must be explicitly listed. CSPs are instances of the Rule-Access-CSP class in the Security category. To access the Content Security Policies in an application, you can:

- Specify the Content Security Policy on the Integration & Security tab of the application rule form
- Use the Application Explorer to list the Content Security Policies in your application
- Use the Records Explorer to list all the Content Security Policies that are available to you

For details on how to set Content Security Policies, see the help topic Policy Definition tab on the Content Security Policies form.
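For illustration only, a policy assembled from such directives is delivered as a response header like the one below; the host names are placeholders, and the directives you actually need depend on the content your application loads:

  Content-Security-Policy: default-src 'self'; script-src 'self' https://static.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data:; report-uri /csp-violation-report

A browser that honors this policy loads scripts only from your own origin and static.example.com, allows images from your origin and data: URLs, and posts violation reports to the report-uri endpoint.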


KNOWLEDGE CHECK

Content security policies help detect and mitigate certain types of attacks by __________. preventing browsers from loading and running content from untrusted sources


Rule Security Analyzer
The Rule Security Analyzer tool identifies potential security risks in your applications that may introduce vulnerabilities to attacks such as cross-site scripting (XSS) or SQL injection. Typically, such vulnerabilities can arise only in non-autogenerated rules such as stream rules (HTML, JSP, XML, or CSS), and custom Java or SQL statements. The Rule Security Analyzer scans non-autogenerated rules, comparing each line with a regular expressions rule to find matches. The tool examines text, HTML, JavaScript, and Java code in function rules and individual activity Java method steps, and other types of information depending on rule type. The Rule Security Analyzer searches for vulnerabilities in code by searching for matches to regular expressions (regex) defined in Rule Analyzer Regular Expressions rules. Several Rule Analyzer Regular Expression rules are provided as examples for finding common vulnerabilities. You may also create your own Rule Analyzer Regular Expression rules to search for additional patterns. The most effective search for vulnerabilities is to rerun the Rule Security Analyzer several times, each time matching against a different Regular Expressions rule. Important: Use trained security IT staff to review the output of the Rule Security Analyzer tool. They are better able to identify false positives and remedy any rules that do contain vulnerabilities. Running the Rule Security Analyzer before locking a ruleset is recommended. This allows you to identify and correct issues in rules before they are locked. The Rule Security Analyzer takes a couple of minutes to run through the different regular expressions. For more information, see the PDN article How to use the Rule Security Analyzer tool.
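As a purely illustrative example, and not one of the expressions shipped with the product, a custom regular expression such as the following could flag hand-written Java or SQL steps that build a query by concatenating a string literal with a variable, a common SQL injection pattern:

  SELECT\s+.*\+\s*\w+

Any rule line matching the expression is reported for review; a trained reviewer then decides whether the match is a real vulnerability or a false positive.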

KNOWLEDGE CHECK

The Rule Security Analyzer tool helps identify security risks introduced in __________ rules. non-autogenerated


How to secure an application
Find out who is responsible for application security in the organization and engage them from the start of the project to find out any specific requirements and standards, and what level of penetration testing is done.

Rules
Perform the following tasks:

- Ensure that properties are of the correct type (integers, dates, not just text).
- Run the Rule Security Analyzer and fix any issues.
- Fix any security issues in the Guardrail report.

Rulesets
Lock each ruleset version, except the production ruleset, before promoting an application from the development environment. Also, secure the ability to add versions, update versions, and update the ruleset rule itself by entering three distinct passwords on the Security tab of the ruleset record.

Documents
If documents can be uploaded into the application, complete the following tasks:

- Ensure that uploaded files are scanned by a virus checker. You can use the extension point in the CallVirusCheck activity to invoke the virus checker.
- Restrict the allowed file types by adding a when rule or decision table to the SetAttachmentProperties activity to evaluate whether a document type is allowed.

Authorization
Verify that the authorization scheme is implemented and has been extensively tested to meet requirements. Ensure the production level is set to an appropriate value in the System record. Set the production level to 5 for the production environment. The production-level value affects Rule-Access-Role-Obj and Rule-Access-Deny-Obj rules. These rules control the classes that can be read and updated by a requestor with an access role. If this setting interferes with valid user needs, add focused Rule-Access-Role-Obj rules that allow access instead of lowering the production level.

Authentication
Enable the security policies if out-of-the-box authentication is used (Designer Studio > Org & Security > Authentication > Security Policies). If additional restrictions are required by a computer security policy, add a validation rule. Set up time-outs of an appropriate length at the application server level, requestor level, and access group level.

Integration
Work with the application security team and external system teams to ensure connectors and services are secured in an appropriate way.

Operators and access groups
If the Pega Platform was deployed in secured mode, out-of-the-box users are disabled by default. If the platform was not deployed in secured mode, disable any users that are not used. Enable security auditing for changes to operator passwords, access groups, and application rules. Review the Unauthenticated access group to make sure that it has the minimum required access to rules.

Dynamic system settings
Configure the Dynamic System Settings in a production environment as described in the PDN article Security checklist for Pega 7 Platform applications. Note: Do not configure the Dynamic System Settings for a development environment, because they restrict the Tracer tool and other developer tools.

Deployment
When deploying an application to an environment other than development, limit or block access to certain features and remove unnecessary resources. Default settings expose an application to risk because they provide a known starting point for intruders. Taking defaults out of the equation reduces overall risk dramatically. Make the following changes to default settings:

- Rename prweb.war and deploy it only on nodes requiring it. Knowing the folder and content of prweb.war is a high security risk, as it provides access to the application.
- Remove any unnecessary resources or servlets from the web.xml. Rename default servlets where applicable, particularly PRServlet.
- Rename prsysmgmt.war and deploy it on a single node per environment. Deploy prsysmgmt.war on its own node, because someone could get the endpoint URL from the application server by taking the URL from the help pop-up window. Password protect access to the SMA servlet in the production environment.
- Rename prhelp.war and deploy it on a single node per environment.
- Rename prgateway.war, and rename and secure the prgateway servlet. The prgateway.war contains the Pega Web Mashup proxy server used to connect to a Pega application.

Database
Ensure that the system has been set up using a JDBC connection pool through the application server, rather than the database connection being defined in prconfig.xml. Limit the capabilities and roles that are available to the PegaRULES database account in environments other than development, to remove capabilities such as truncating tables, creating or deleting tables, or otherwise altering the schema. This limit on capabilities and roles might cause the View/Modify Database Schema tool to operate in read-only mode.
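As a minimal sketch only (Apache Tomcat syntax; the JNDI name, driver, credentials, and pool sizes shown here are placeholders that must match your own installation), a container-managed connection pool can be declared in the application server instead of in prconfig.xml:

  <!-- conf/context.xml on the Tomcat node -->
  <Resource name="jdbc/PegaRULES"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="org.postgresql.Driver"
            url="jdbc:postgresql://dbhost:5432/pega"
            username="pega_app_user"
            password="********"
            maxTotal="50"
            maxIdle="10"/>

The application then looks up the pool by its JNDI name, and the database credentials stay out of the deployed Pega archives.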

KNOWLEDGE CHECK

What can you do to mitigate security risks when developing applications? Follow best practices and take actions to strengthen the security.


REPORTING


Defining a reporting strategy

Introduction to defining a reporting strategy
Defining a reporting strategy goes beyond creating reports in Pega. Many organizations use a data warehousing solution and have distinct requirements for retaining data. After this lesson, you should be able to:

- Identify requirements that influence reporting strategy definition
- Discuss alternatives to traditional reporting solutions
- Define a reporting strategy for the organization

How to define a reporting strategy
Before you define your reporting strategy, assess the overall reporting needs of the organization. The goal is to get the right information to the right users when they need the information. Treat your reporting strategy design as you would any large-scale application architecture decision. A robust reporting strategy can help prevent future performance issues and help satisfy users' expectations. As you define your reporting strategy, ask yourself the following questions:

- What reporting requirements already exist?
- Who needs the report data?
- When is the report data needed?
- Why is the report data needed?
- How is the report data gathered and accessed?

What reporting requirements already exist?
Organizations rely on reporting and business intelligence to drive decisions in the organization. Sometimes, government and industry standards drive reporting needs. For example, executive management requires dashboard reports and Key Performance Indicators (KPIs) to drive strategic decisions for the business, oversee how the organization is performing, and take action based on that data. Line-level managers need reports to ensure their teams meet business service level agreements (SLAs). When defining a reporting strategy, inventory the reports that the business uses to make key decisions to help determine the overall reporting strategy.

Who needs the report data?
Once you have an inventory of these reports, create a matrix categorizing the user roles and the reports each role uses to make business decisions. For example, you may create throughput reports for various users. Line managers use the reports to show the throughput of team members. Managers use reports to optimize routing of assignments. Executives may want to see a summary of throughput over specific periods. The reports enable the executives to drill down into the results of individual departments and plan staffing requirements for these departments in the coming months. Individual team members can see their own throughput metrics to gauge how close they are to meeting their own goals and to work toward their incentives.

When is the report data needed?
Identify how frequently the data needs to be delivered. The outcome of your research affects configuration decisions such as report scheduling settings, agent schedules, and refresh strategies on data pages. Other factors, such as currency requirements, may play a role in your strategy. For example, you may have a data page that contains exchange rates. This data needs to be current on an hourly basis. In addition, the report that sources the data page must have a refresh strategy.

Related to frequency is the question of data storage and availability. The answer influences how you architect a data-archiving or table-partitioning approach. Implementing a data-archiving strategy and partitioning tables can help with the long-term performance of your reports.

Why is the report data needed?
This question is related to who needs the data. Existing reporting requirements influence what data the report must contain. As you research the need for each report, you may find the report data is not needed at all. For example, you may discover that no one in the organization reads the report or uses the report as the basis for any decisions. On the other hand, you may find opportunities to provide new reports. For example, a department could not create a necessary report because the current business process management system cannot extract the data. With the Pega application, you can extract this data using BIX and feed it to a data warehouse where managers perform analytics on resolved work.

How is the report data gathered?
Pega Platform offers several options for gathering report data within the application. The strategy you recommend depends on the type of reporting you are doing. If the organization requires heavy trending and business intelligence (BI) reporting, a data warehouse may be a better fit. If you want to display the status of work assignments on a dashboard in the application, a report definition with charting is appropriate.

Alternatives to standard reporting solutions
Although Pega offers powerful reporting capabilities, also consider alternatives to traditional reporting and data warehousing approaches. These approaches may be the best way to meet your reporting requirements. For example, you can use robotic automation to gather data from external desktop applications. Or, if you are using data for analytics, consider using adaptive and predictive decisioning features. If you need dynamic queries, you can also use freeform text search, such as Elasticsearch, instead of constructing a report definition to gather the data. With the growth in popularity of big data and NoSQL databases, freeform search is becoming more common. Starting in v7.4, you can run report definitions against Elasticsearch indexes instead of using SQL queries directly against the database. Be aware, however, that running report definitions against Elasticsearch indexes is disabled by default and does not apply to reports that use features not supported by Elasticsearch. If a report query cannot be run against Elasticsearch indexes, Pega Platform automatically uses an SQL query.

Designing reports for performance

Introduction to designing reports for performance
Poorly designed reports can have a major impact on performance. A report may run with no issues in a development environment. When run with production data, the report may perform poorly. This issue may impact performance for all application users. After this lesson, you should be able to:

- Explain the causes of poorly performing reports and the impact poor performance can have on the rest of the application
- Describe how to design reports to minimize performance issues
- Identify the cause of, and remedy, a poorly performing report

Impact of reports on performance
When an application is first put into production, a report may run with no issue and within established service level agreements (SLAs). As the amount of application data grows, the report may run more slowly. Poor report performance can cause memory, CPU, and network issues. These issues can affect all application users, not just the user running the report. To help you diagnose and mitigate these issues, Pega generates performance alerts when specific limits or thresholds are exceeded. For example, the PEGA0005 - Query time exceeds limit alert helps you recognize when queries are inefficiently designed or when data is loaded indiscriminately. For more information about performance alerts, see the PDN article Performance alerts, security alerts, and Autonomic Event Services. Important: Guardrail warnings alert you to reports that could have performance issues. Instruct your teams to address warnings before moving your application from development to target environments.

Memory impact
Large result sets can cause out-of-memory issues. The application places query results on the clipboard page of the users. If those pages are not managed, your application eventually shuts down with an out-of-memory (OOM) error.

CPU impact
Using complex SQL can also have a CPU impact on the database server. When the database is performing poorly, all users on all nodes are affected. Autonomic Event Service (AES) and Predictive Diagnostic Cloud (PDC) can help you identify the issues. Your database server administrator can set up performance monitoring for the database server.

Network impact
Sending large result sets over the network can cause perceived performance issues for individual users, depending upon their bandwidth, network integrity, and network traffic.

KNOWLEDGE CHECK

How can reporting affect performance across the application? Poorly designed queries can be CPU intensive, causing issues with the database. When the database is affected, all users are affected. Not managing large result sets can adversely affect application server memory and database memory. The most common consequence of improper management of large result sets is an out-of-memory error on the application server node.


How to configure an application to improve report performance

A report definition (and the Obj-* methods) is just a query. Pega Platform constructs and optimizes a query based on parameters defined in the application rule. Then, Pega Platform delivers the results of the query to a clipboard page, either to display to your end users or to use for other purposes, such as running an agent on the result set. The same principles you use in tuning a database query can be applied to designing reports for performance. You can configure the report definition and Obj-* methods you use, apply techniques at the database level, or choose an entirely different approach for gathering data, such as using robotic automation or Elasticsearch. The goal is to return data to your users in the most efficient way possible, with as little impact on other users as possible. The following techniques describe best practices for configuring rules within the application.

Use data pages when possible

The best approach to optimizing your report is to avoid running the report at all. Data pages can help you do that. If data already exists on a data page, use it. Design the refresh strategy to get data only when required. Use node-scoped data pages when possible.

Paginate results

Paginating results allows you to return groups of data at a time. Returning big groups of records may make it difficult for users to find the information they are looking for. For example, a report that returns 50 records at a time may be too much information for a user to sift through. Select the Enable Paging option on the report definition and specify the page size.

Optimize properties

If you expect to use a property in your selection criteria, optimize that property. Optimizing a property creates a column in the database table, which you can then index as needed. For more information about optimizing properties, see the help topic Property optimization using the Property Optimization tool.

Utilize declare indexes

Declare indexes allow you to expose embedded page list data. For example, the application stores Work-Party instances in the pr_index_workparty table. This allows you to write a report definition that joins work object data to work party data instead of extracting the work party data from the pzPvStream column (the BLOB), which can be expensive. For more information on how to utilize declare indexes, see the article How to create a Declare Index rule for an embedded property with the Property Optimization tool.
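The following is a minimal, hedged sketch of the kind of query this join enables. The schema, table, and column names (pegadata.pc_work, pzInsKey, pxInsIndexedKey, pyPartyRole, pyFullName) are illustrative assumptions and may not match your environment; verify the exposed columns in your own database before using a query like this.

    -- Illustrative only: join exposed work columns to the Declare Index table
    -- instead of unpacking party data from the pzPvStream BLOB.
    -- Table and column names are assumptions; verify them against your schema.
    SELECT w.pyID,
           w.pyStatusWork,
           p.pyPartyRole,
           p.pyFullName
    FROM   pegadata.pc_work            w
    JOIN   pegadata.pr_index_workparty p
           ON p.pxInsIndexedKey = w.pzInsKey
    WHERE  w.pyStatusWork = 'Open';

Because both sides of the join are exposed columns, the database can satisfy the query without touching the BLOB, which is the point of the declare index.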


Leverage a reports database

To reduce the impact of queries on the production database, you can run reports against a reports database (also known as an alternate database). This approach offloads demand from the production database to a replicated database. For more information on using a reports database, see the topic Setting up a reports database.

Avoid outer joins

Selecting Include all rows on the Data Access tab of the report definition can be costly. This option causes the system to use an outer join for the report, in which all instances of one of the classes are included in the report even if they have no matching instances in the other class. If possible, select Only include matching rows.
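To illustrate the difference in the SQL that results, here is a hedged sketch using hypothetical tables and columns (booking_work, hotel_data, HotelName, BookingID are illustrative only, not actual application tables):

    -- "Include all rows" corresponds to an outer join: every booking row returns,
    -- even when no matching hotel row exists, which is usually more expensive.
    SELECT b.pyID, h.HotelName
    FROM   booking_work b
    LEFT OUTER JOIN hotel_data h ON h.BookingID = b.pyID;

    -- "Only include matching rows" corresponds to an inner join,
    -- which returns only matched pairs and typically costs less.
    SELECT b.pyID, h.HotelName
    FROM   booking_work b
    INNER JOIN hotel_data h ON h.BookingID = b.pyID;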


How to tune the database to improve report performance

You can perform specific database tuning and maintenance tasks to help improve report performance. Enlist the help of your database administrator to perform these tasks and to provide additional guidance. These tasks vary depending on the database vendor you are using. Regardless of the database your application runs on, the following techniques can help you improve report performance.

Partition tables

Table partitioning allows tables or indexes to be stored in multiple physical sections. A partitioned index is like one large index made up of multiple smaller indexes. Each chunk, or partition, has the same columns but a different range of rows. How you partition your tables depends on your business requirements. For more information on partitioning Pega tables, see the article PegaRULES table partitioning.
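Partitioning DDL is vendor-specific; the following is a minimal PostgreSQL-style sketch, assuming a hypothetical history table partitioned by a create date-time column. The table name, columns, partition key, and quarterly ranges are all assumptions; your DBA should choose them based on actual data volumes and the partitioning features of your database.

    -- Illustrative PostgreSQL declarative range partitioning (hypothetical table)
    CREATE TABLE history_work (
        pzInsKey       VARCHAR(255) NOT NULL,
        pxTimeCreated  TIMESTAMP    NOT NULL,
        pyMessageKey   VARCHAR(255)
    ) PARTITION BY RANGE (pxTimeCreated);

    CREATE TABLE history_work_2018_q1 PARTITION OF history_work
        FOR VALUES FROM ('2018-01-01') TO ('2018-04-01');
    CREATE TABLE history_work_2018_q2 PARTITION OF history_work
        FOR VALUES FROM ('2018-04-01') TO ('2018-07-01');

Queries that filter on the partition key can then scan only the relevant partition instead of the whole table.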

Run Explain Plans on your queries

An Explain Plan describes the path the query takes to return a result set. This technique can help you determine if the database is taking the most efficient route to return results. You can extract the query with substituted values by using the Database profiler or by tracing the report while it runs. Once you have the query with substituted values, you can run the Explain Plan for the query in the database client of your choice.
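For example, once you have captured the query with substituted values, you might run something like the following. The syntax shown is Oracle's; PostgreSQL uses EXPLAIN or EXPLAIN ANALYZE, and other vendors have equivalents. The table, columns, and WHERE clause stand in for the captured report query and are assumptions only.

    -- Oracle-style Explain Plan for a captured report query (criteria are placeholders)
    EXPLAIN PLAN FOR
        SELECT pyID, pyStatusWork, pxCreateDateTime
        FROM   pegadata.pc_work
        WHERE  pyStatusWork = 'Open-InProgress'
        AND    pxCreateDateTime >= DATE '2018-01-01';

    -- Display the plan and look for full table scans or missing indexes
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);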

Create table indexes

After you have exposed one or more columns in a database table, you can create an index on those columns. Do not create too many indexes, because doing so can degrade performance. In general, create an index on a column if any of the following statements is true; a sample statement follows the list.

• The column is queried frequently.
• A referential integrity constraint exists on the column.
• A UNIQUE key integrity constraint exists on the column.
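A minimal sketch of such an index, assuming the pyStatusWork column on a pegadata.pc_work table is frequently used in report selection criteria (the index name, schema, and column are illustrative; adjust them for your environment):

    -- Index a frequently queried, exposed column; names are illustrative only
    CREATE INDEX ix_pc_work_statuswork
        ON pegadata.pc_work (pyStatusWork);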

Drop the pzPvStream column on pr_index tables

Pr_index tables do not require the pzPvStream column. Removing this column prevents replicated data from being returned to the application and taking up memory on the clipboard.
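Dropping the column is a one-line DDL change. This hedged example assumes the pr_index_workparty table; have your DBA run it only after confirming the table is a Declare Index table that does not need the BLOB.

    -- Remove the unused BLOB column from a Declare Index table (verify with your DBA first)
    ALTER TABLE pegadata.pr_index_workparty DROP COLUMN pzPvStream;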

Purge and archive data

Depending on the retention requirements for your application, consider archiving data to nearline or offline storage, either in another database table or in a data warehouse. Purging and archiving data that is either no longer needed or infrequently accessed can improve report performance because the application has a smaller set of records to consider when running the query. You can also use the Purge and Archive wizard to achieve this purpose. For more information about purging and archiving data, see the help topic Purge/Archive wizards. Important: Be sure to consider table relationships to ensure your archiving solution encompasses all application data.
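As a simple illustration of the pattern (not a replacement for the Purge/Archive wizard), resolved cases older than a cutoff might be copied to an archive table and then deleted. The table names, status test, and retention date are assumptions, and work_archive is assumed to be a pre-created copy of the work table's structure; a real solution must also move related assignment, history, and index rows, ideally in the same transaction.

    -- Illustrative archive-then-purge of old resolved work (hypothetical tables and cutoff)
    INSERT INTO work_archive
        SELECT * FROM pegadata.pc_work
        WHERE  pyStatusWork LIKE 'Resolved%'
        AND    pxCreateDateTime < DATE '2016-01-01';

    DELETE FROM pegadata.pc_work
        WHERE  pyStatusWork LIKE 'Resolved%'
        AND    pxCreateDateTime < DATE '2016-01-01';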

Load test with realistic production data volumes

You can prepopulate a staging environment with production-like data to test your reports with a realistic volume of data. Many organizations require that any sensitive information be removed (scrubbed) prior to running this type of test, and this scrubbing can take some time. Plan your testing accordingly.
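Scrubbing is typically a set of bulk updates run over the copied data before testing begins. A minimal sketch, assuming a hypothetical customer_data table with customer_id, email, and ssn columns (all names are illustrative; your scrubbing rules depend on your own data model and compliance requirements):

    -- Illustrative data scrubbing on a copied staging database (hypothetical table/columns)
    UPDATE customer_data
    SET    email = 'user' || CAST(customer_id AS VARCHAR(20)) || '@example.com',
           ssn   = 'XXX-XX-XXXX';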


BACKGROUND PROCESSING


Designing background processing

Introduction to designing background processing

The design of background processing is crucial to meeting business service levels and automating processes. Background processes must be carefully designed to ensure all work can be completed within the business service levels. Pega Platform provides several features that can be leveraged to provide an optimal solution. After this lesson, you should be able to:

• Evaluate background processing design options
• Configure asynchronous processing for integration
• Optimize default agents for your application


Background processing options

Pega Platform supports several options for background processing. You can use standard and advanced agents, service level agreements (SLAs), and the Wait shape to design background processing in your application.

Standard agents

Standard agents are generally preferred when you have items queued for processing. Standard agents allow you to focus on configuring the specific operations to perform. When using standard agents, Pega Platform provides built-in capabilities for error handling, queuing and dequeuing, and commits. By default, standard agents run in the security context of the person who queued the task. This approach can be advantageous in a situation where users with different access groups leverage the same agent. Standard agents are often used in an application with many implementations that stem from a common framework, and in default agents provided by Pega Platform. The Access Group setting on an Agents rule applies only to advanced agents, which are not queued. To always run a standard agent in a given security context, switch the queued access group by overriding the System-Default-EstablishContext activity and invoking the setActiveAccessGroup() java method within that activity. Queues are shared across all nodes. Throughput can be improved by leveraging multiple standard agents on separate nodes to process the items in a queue. Note: There are several examples of default agents using the standard mode. One example is the ServiceLevelEvents agent in the Pega-ProCom ruleset, which processes SLAs.

KNOWLEDGE CHECK

As part of an underwriting process, the application must generate a risk factor for a loan and insert the risk factor into the Loan case. The risk factor generation is an intensive calculation that requires several minutes to run. The calculation slows down the environment. You would like to have all risk factor calculations run automatically between the hours of 10:00 P.M. and 6:00 A.M. to avoid the slowdown during daytime working hours. Design a solution to support this requirement. Create a standard agent to perform the calculation. Include a step in the flow to queue the case for the agent. Pause the case processing and wait for the agent to complete processing. This solution delays the loan process and waits for the agent to resume the flow. It can take advantage of agents enabled on other nodes, which may reduce the time it takes to process all of the loan risk assessments.


KNOWLEDGE CHECK

You need to automate a claim adjudication process in which files containing claims are parsed, verified, and adjudicated. Claims that pass those initial steps automatically become cases for further processing. A single file containing up to 1,000 claims is received daily before 5:00 P.M. Claim verification is simple and takes a few milliseconds, but claim adjudication might take up to five minutes. Create a standard agent to perform the adjudication. Include a step in the flow to queue the case for the agent. Pause the case processing and wait for the agent to complete processing. Using the File service activity only to verify claims and then offloading the adjudication to the agent is preferred because it does not significantly impact the intake process. It can also take advantage of multinode processing if available. Furthermore, the modular design of the tasks allows for reuse and extensibility if required in the future. However, if you use the same file service activity for claim adjudication, it impacts the time required to process the file. Processing is only available on a single node, and there is little control over the time frame while the file service runs. Extensibility and error handling might also be more challenging. Consideration must be given to the time an agent requires to perform the task. For example, the time required to process the claims by a single agent is 5,000 minutes (83.33 hours). This is not suitable for a single agent running on a single node to complete the task. A system with the agent enabled on eight nodes could perform the task in the off-hours. If only a single node is available, an alternative solution is to split the file into smaller parts, which are then scheduled for different agents (assuming there is enough CPU available for each agent to perform its task).

Advanced agents

Use advanced agents when there is no requirement for queuing and the agent performs a recurring task. Advanced agents can also be used when there is a need for more complex queue processing. When advanced agents perform processing on items that are not queued, the advanced agent must determine the work to be performed. For example, if you need to generate statistics every midnight for reporting purposes, the output of a report definition can determine the list of items to process. Tip: There are several examples of default agents using the advanced mode, including the agent for full-text search incremental indexing, FTSIncrementalIndexer, in the Pega-SearchEngine ruleset. In situations where an advanced agent uses queuing, all queuing operations must be handled in the agent activity. Tip: The default agent ProcessServiceQueue in the Pega-IntSvcs ruleset is an example of an advanced agent processing queued items. When running on a multinode configuration, configure agent schedules so that the advanced agents coordinate their efforts. To coordinate agents, select the advanced settings Run this agent on only one node at a time and Delay next run of agent across the cluster by specified time period.


KNOWLEDGE CHECK

ABC Company is a distributor of discount wines and uses Pega Platform for order tracking. There are up to 100 orders per day, with up to 40 different line items in each order specifying the product and quantity. There are up to 5,000 varieties of wines that continuously change over time as new wines are added to and dropped from the list. ABC Company wants to extend the functionality of the order tracking application to determine recent hot-selling items by recording the top 10 items ordered by volume each day. This information is populated in a table and used to ease historical reporting. An advanced agent runs after the close of business each day, and it performs the following tasks:

• Opens all order cases for that day and tabulates the order volume for each item type
• Determines the top 10 items ordered and records these in the historical reporting table

The agent activity should leverage a report to easily retrieve and sort the number of items ordered in a day. When recording values in the historical table, a commit and error handling step must be included in the activity.
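The report the agent leverages amounts to a grouped, sorted query. A minimal sketch, assuming a hypothetical order_line_items table with item_id, quantity, and order_date columns (the real implementation would be a report definition over the application's exposed order case data, and older databases may need LIMIT or ROWNUM instead of FETCH FIRST):

    -- Illustrative "top 10 items by volume for a day" query (hypothetical table and columns)
    SELECT   item_id,
             SUM(quantity) AS total_ordered
    FROM     order_line_items
    WHERE    order_date = DATE '2018-01-15'
    GROUP BY item_id
    ORDER BY total_ordered DESC
    FETCH FIRST 10 ROWS ONLY;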

Service level agreements (SLAs)

Using SLAs is a viable alternative to using an agent in some situations. The escalation activity in an SLA provides a method for you to invoke agent functionality without creating a new agent. For example, if you need to conditionally add a subcase at a specific time in the future, adding a parallel step in the main case that incorporates an assignment with an SLA and escalation activity can perform this action. Tip: The standard connector error handler flow Work-.ConnectionProblem leverages an SLA to retry failed connections to external systems. An SLA must always be initiated in the context of a case. Any delay in SLA processing impacts the timeliness of executing the escalation activity. The SLA Assignment Ready setting allows you to control when the assignment is ready to be processed by the SLA agent. For example, you can create an assignment today, but configure it to be processed tomorrow. An operator can still access the assignment if there is direct access to the assignment through a worklist or workbasket. Note: Pega Platform records the assignment ready value in the queue item when the assignment is created. If the assignment ready value is updated, the assignment must be recreated for the SLA to act on the updated value.

Wait shape

The Wait shape provides a viable solution in place of creating a new agent or using an SLA. The Wait shape can only be applied to a case within a flow step, and it waits for a single event (timed or case status) before allowing the case to advance. Single-event triggers applied against a case represent the most suitable use case for the Wait shape; the desired case functionality at the designated time or status follows the Wait shape execution.


Asynchronous integration

Pega Platform provides multiple mechanisms to perform processing asynchronously. For instance, an application may initiate a call to a back-end system and continue processing without blocking and waiting for the external system's response. This approach is useful when the external system processing time can be an issue and when the result of the processing is not required immediately. A similar feature is also available for services, allowing you to queue an incoming request.

Asynchronous service processing

Several service types contain asynchronous processing options that leverage the standard agent queue. These service rules can be configured to run asynchronously, or to perform the request asynchronously after the initial attempt to invoke the service synchronously fails. In both cases, a queue item ID that identifies the queued request is returned to the calling application. This item ID corresponds to the queued item that records the information and state of the queued request. Once the service request is queued, the ProcessServiceQueue agent in the Pega-IntSvcs ruleset processes the queued item and invokes the service. The results of the service request are stored in the queued item, and the service request is kept in the queue until the results are retrieved. In the meantime, the calling application that initiated the service request stores the queue item ID and continues its execution. In most cases, the calling application calls back later with the queue item ID to retrieve the results of the queued service request. The standard activity @baseclass.GetExecutionRequest is used as a service activity by the service to retrieve the result of the queued request.

When configuring this option for the service, you must create a service request processor that determines the queuing and dequeuing options. The ProcessServiceQueue agent uses this information to perform its tasks.

Asynchronous connector processing

Several connector rules offer an asynchronous execution mode through queue functionality similar to asynchronous services. When leveraging this capability, the connector request is stored in a queued item for the ProcessConnectQueue agent in the Pega-IntSvcs ruleset to make the call to the service at a later time. The queued connector operates in a fire-and-forget style. This means that there is no response immediately available from the connector. Before choosing to use this asynchronous processing mechanism, assess whether the fire-and-forget style is suitable for your requirements. A connector request processor must also be configured for the asynchronous mode of operation. This configuration is similar to the asynchronous service configuration, with the difference being the class of the queued object.

Background connector invocation

Most connector rules have the capability to run in parallel by invoking the connectors from an activity using the Connect-* methods with the RunInParallel option selected. When the run-in-parallel option is selected, a connector runs as a child requestor. The calling activity continues the execution of subsequent steps. Use the Connect-Wait method to join the current requestor session with the child requestor sessions.

Note: If you configure several connectors to run in parallel, ensure that the response data is mapped to separate clipboard pages and that error handling is set up. If a slow-running connector is used to source a data page, the data page can be preloaded using the Load-DataPage method in an activity to ensure the data is available without delay when needed. Grouping several Load-DataPage requestors by specifying a PoolID is possible. Use the Connect-Wait method to wait for a specified interval, or until all requestors with the same PoolID have finished loading data.

KNOWLEDGE CHECK

In which situation would you consider asynchronous integration? When the response is not immediately required


Default agents

When Pega Platform is initially installed, many default agents are configured to run in the system (similar to services configured to run in a computer OS). Review and tune the agent configurations on a production system, because there are default agents that:

• Are unnecessary for most applications because the agents implement legacy or seldom-used features
• Should not run in production
• Run at inappropriate times by default
• Run more frequently than needed, or not frequently enough
• Run on all nodes by default, but should only run on one node

For example, by default, there are several agents in the Pega-DecisionEngine ruleset configured to run in the system. Disable these agents if decisioning is not applicable to your application(s). Enable some agents, such as the Pega-AutoTest agents, only in a development or QA environment. Some agents are designed to run on a single node in a multinode configuration. A complete review of agents and their configuration settings is available in the PDN article About Agent Queues. Because these agents are in locked rulesets, they cannot be modified. To change the configuration for these agents, update the agent schedules generated from the agents rule.

KNOWLEDGE CHECK

Why is it important to review and tune default agents? Because there might be agents that should not run on the environment or that need to be tuned to fit the application


DEPLOYMENT AND TESTING


Defining a release pipeline

Introduction to defining a release pipeline

Use DevOps practices such as continuous integration and continuous delivery to quickly move application changes from development through testing to deployment on your production system. This lesson explains how to use Pega Platform tools and common third-party tools to implement a release pipeline. After this lesson, you should be able to:

• Describe the DevOps release pipeline
• Discuss development process best practices
• Identify continuous integration and delivery tasks
• Articulate the importance of defining a test approach
• Develop deployment strategies for applications


DevOps release pipeline

DevOps is a culture of collaboration by development, quality, and operations to address issues in all three spaces. A continuous integration and delivery pipeline is an automated process to quickly move applications from development through testing to deployment.

Pega Platform includes tools to support DevOps, keeps an open platform, provides hooks and services based on standards, and supports most popular tools. The release pipeline in the following diagram illustrates the best practices for using Pega Platform for DevOps. At each stage in the pipeline, a continuous loop presents the development team with feedback on testing results.

In most cases, the system of record is a shared development environment. In large-scale deployments, this could be a central server where all the rule branches from different development environments are merged. The automation server plays the role of an orchestrator and manages the actions that happen in continuous integration and delivery. In this example, Pega's Deployment Manager is used as the automation server, but another equivalent tool, such as Jenkins, could be used.


The application repository stores the application archive for each successful build. The successful build is the version deployed to the higher environments. Typically, there is a development repository and a production repository. In this example, JFrog Artifactory is used as the application repository, but another equivalent tool could be used. For example, for Pega Cloud applications hosted in the Amazon Web Services (AWS) cloud computing service, S3 buckets fulfill the role of a repository. Note: Pega Platform is assumed to manage all schema changes.

KNOWLEDGE CHECK

What role does the automation server play in a release pipeline? The automation server plays the role of an orchestrator.


Best practices for team-based development

Pega Platform developers use agile practices to create applications in a shared development environment, leveraging branches to commit changes. Follow these best practices to optimize the development process:

• Leverage multiple built-on applications to develop smaller component applications. Smaller applications are easier to develop, test, and maintain.
• Use branches when multiple teams contribute to a single application. Use the Branches explorer to view quality, guardrail scores, and unit tests for branches.
• Peer review branches before merging. Create reviews and assign peer reviews from the Branches explorer, and use Pulse to collaborate with your teammates.
• Use Pega Platform developer tools, such as rule compare and rule form search, to determine how to best address any rule conflict.
• Hide incomplete or risky work using toggles to facilitate continuous merging of branches.
• Create PegaUnit test cases to validate application data by comparing expected property values to the actual values returned by running the rule.

Multiteam development flow

The following diagram shows how multiple development teams interact with the system of record (SOR).

The process begins when Team A requests a branch review against the system of record. A Branch Reviewer first requests conflict detection, then executes the appropriate PegaUnit tests. If the Branch Reviewer detects conflicts or if any of the PegaUnit tests fail, the reviewer notifies the developer who requested the branch review. The developer stops the process to fix the issues. If the review detects no conflicts and the PegaUnit tests execute successfully, the branch merges into the system of record. The ruleset versions associated with the branch are then locked. Remote Team B can now perform an on-demand rebase of the SOR application's rules into their system. A rebase pulls the most recent commits made to the SOR application into Team B's developer system. The SOR host populates a comma-separated-value Dynamic System Setting (D-S-S) named HostedRulesetList. Team B defines a repository of type Pega Repository that points to the SOR host's PRRestService. After Team B clicks the Get latest ruleset versions link within its Application rule and selects the SOR host's Pega Repository, a request goes to the SOR to return information about versions for every ruleset within the SOR's HostedRulesetList. Included in that information is each version's pxRevisionID. Team B's system then compares its ruleset versions to the versions in the response. Only versions that do not exist in Team B's system, or where the pxRevisionID does not match the SOR system's pxRevisionID, are displayed. Team B can then proceed with, or cancel, the rebase. Only versionable rules are included when a rebase is performed. Non-versioned rules such as Application, Library, and Function are not included in a rebase operation. For this reason, packaging Libraries as a component is desirable.

Always-locked ruleset versions option

When initially developing an application, open ruleset versions are necessary and desirable. At some point, a transition can be made to a model in which the application's non-branched rulesets always remain locked. When merging a branch, an option exists to choose Create new version and Lock target after merge to facilitate rebase operations. A system that requests a rebase from a ruleset's always-locked SOR host detects newly created and locked ruleset versions before proceeding with, or canceling, the rebase.


KNOWLEDGE CHECK

When would you use a release toggle? To exclude work when merging branches


Continuous integration and delivery

DevOps enables software delivery pipelines for applications. A continuous integration and continuous delivery (CI/CD) pipeline is an automated process to quickly move applications from development through testing to deployment.

The Pega pipeline

The following image depicts the high-level overview of the Pega pipeline. Different questions are asked during every stage of the pipeline. These questions can be grouped into two different categories:


• Developer-centric questions – Are the changes good enough to share, and do they work together with other developers' changes?
• Customer-centric questions – Is the application with the new changes functional as designed, as expected by customers, and ready for customers to use?

Drilling down to specific questions for each step in the pipeline:

• Ready to Share – As a developer, am I ready to share my changes with other developers? Ensure that the new functionality being introduced works and that critical existing functionality continues to work.
• Integrate Changes – Do all the integrated changes work together? Once all the changes from multiple developers have been integrated, does all the critical functionality still work?
• Ready to Accept – Do all the acceptance criteria for the application still pass? This is typically where the application undergoes regression testing to ensure functional correctness.
• Deploy – Is this application ready for real deployment? The final phase is where the fully validated application is deployed into production, typically after it has been verified in a preproduction environment.

Continuous integration

With continuous integration, application developers frequently check in their changes to the source environment and use an automated build process to automatically verify these changes. The Ready to Share and Integrate Changes steps ensure that all the necessary critical tests are run before integrating and publishing changes to a development repository. During continuous integration, maintain these best practices:

• Keep the product rule (Rule-Admin-Product) referenced in an application pipeline up-to-date.
• Automatically trigger merges and builds using the Deployment Manager. Alternatively, an export can be initiated using the prpcServiceUtils.bat tool.


• Identify issues early by running PegaUnit tests and critical integration tests before packaging the application. If any of these tests fail, stop the release pipeline until the issue is fixed.
• Publish the exported application archives into a repository, such as JFrog Artifactory, to maintain a version history of deployable applications.

KNOWLEDGE CHECK

What are the key characteristics of the continuous integration process? Integrate changes frequently; rapid testing and feedback; the ability to identify the cause of a failure and address it quickly

Continuous delivery

With continuous delivery, application changes run through rigorous automated regression testing and are deployed to a staging environment for further testing to ensure that the application is ready to deploy on the production system. In the Ready to Accept step, testing runs to ensure that the acceptance criteria are met. The Ready to Deploy step verifies all the necessary performance, scale, and compatibility tests to ensure the application is ready for deployment. The Deploy step validates in a preproduction environment, deploys to production, and runs postdeployment tests, with the potential to roll back as needed. Follow these best practices to ensure application quality:

• Use Docker or a similar tool to create test environments for user acceptance tests (UAT) and exploratory tests.
• Create a wide variety of regression tests through the user interface and the service layer.
• Check the tests into a separate version control system, such as Git.
• If a test fails, roll back the latest import.
• If all the tests pass, annotate the application package to indicate that it is ready to be deployed. Deployment can be performed manually or automatically.

KNOWLEDGE CHECK

What is the key purpose of the continuous delivery process? To ensure the application is ready for deployment on the production system by performing extensive regression testing


Release pipeline testing strategy

Having an effective automation test suite for your Pega application in your continuous delivery pipeline ensures that the features and changes you deliver to your customers are high quality and do not introduce regressions. At a high level, this is the recommended test automation strategy for testing your Pega applications:

• Create your automation test suite based on industry best practices for test automation.
• Build up your automation test suite by using Pega Platform capabilities and industry test solutions.
• Run the right set of tests at different stages.
• Test early and test often.

Industry best practices for test automation can be graphically shown as a test pyramid. Test types at the bottom of the pyramid are the least expensive to run, easiest to maintain, require the least amount of time to run, and represent the greatest number of tests in the test suite. Test types at the top of the pyramid are the most expensive to run, hardest to maintain, require the most time to run, and represent the least number of tests in the test suite. The higher up the pyramid you go, the higher the overall cost and the lesser the benefits.

UI-based functional and scenario tests

Use UI-based functional tests and end-to-end scenario tests to verify that end-to-end cases work as expected. These tests are the most expensive to run. Pega Platform supports automated testing for these types of tests through the TestID property in user interface rules. For more information, see the article Test ID and Tour ID for unique identification of UI elements. By using the TestID property to uniquely identify a user interface element, you can write dependable automated UI-based tests against any Pega application.


API-based functional tests

Perform API-based testing to verify that the integration of underlying components works as expected without going through the user interface. These tests are useful when the user interface changes frequently. In your Pega application, you can validate case management workflows through the service API layer using the Pega API. Similarly, you can perform API-based testing on any functionality that is exposed through REST and SOAP APIs. For more information on the Pega API, see the article Getting started with the Pega API.

Unit tests

Use unit tests for most of your testing. Unit tests look at the smallest units of functionality and are the least expensive tests to run. In an application, the smallest unit is the rule. You can unit test rules as you develop them by using the PegaUnit test framework. For more information, see the article PegaUnit testing.

Automation test suite

Use both Pega Platform capabilities and industry test solutions, such as JUnit, RSpec, and SoapUI, to build your test automation suite. When you build your automation test suite, run it on your pipeline. During the continuous integration stage, the best practice is to run your unit tests, guardrail compliance checks, and critical integration tests. These tests ensure that you get sufficient coverage, quick feedback, and fewer disruptions from test failures that cannot be reproduced. During the continuous delivery stage, a best practice is to run all your remaining automation tests to guarantee that your application is ready to be released. Such tests include acceptance tests, full regression tests, and nonfunctional tests such as performance and security tests. You receive the following benefits by running the appropriate tests at each stage of development:

• Timely feedback
• Effective use of test resources
• Predictable pipeline
• Reinforcement of testing best practices

KNOWLEDGE CHECK

Why is it recommended to use unit testing for most of the testing? Unit tests test the smallest units of functionality and are the least expensive tests to run.

Modular development deployment strategies

Dedicating a ruleset to a single case type helps to promote reuse. Other reasons to dedicate a ruleset to a single case type include:


• Achieving continuous integration (CI) branch-based development
• Encouraging case-oriented user stories, using Agile Studio's scrum methodology to manage project software releases
• Managing branches that contain rules that originate from different rulesets. When this occurs, a branch ruleset is generated, and the generated ruleset prepends the original ruleset's name to the branch name
• Accommodating multiple user stories in a branch
• Simplifying the ability to populate the Agile Workbench Work item to associate field when checking a rule into a branch

When you create a project within Agile Studio, a backlog work item is also created. When developing an application built on a foundation application, the Agile Studio backlog can be prepopulated with a user story for each foundation application case type. Case types appropriate for the Minimum Lovable Product (MLP) release can then be selected from that backlog. For more information, see the article Review Case Type Backlog.

Pega's Deployment Manager provides a way to manage CI/CD pipelines, including support for branch-based development. It is possible to automatically trigger a Dev-to-QA deployment when a single branch at a time successfully merges into the primary application. For this to occur, the rules checked into that branch must belong only to the primary application. When a case type class is created within a case type-specific ruleset, rules generated by Designer Studio's Case Designer view are also added to that ruleset. This is true even though Case Designer supports the ability to develop multiple case types within the same application.

Branch-based development review

Application Branches are managed within Designer Studio's App view.


While it is not necessary to dedicate a branch to a single case type, as seen in the following image, doing so simplifies the branch review process.

When a case-related rule in a case-specific ruleset is saved to a branch, a case-specific branch ruleset generates if one does not already exist. Changes made within the Case Designer that affect that rule occur within the branch ruleset’s version of that rule. When a branch ruleset is created, it is placed at the top of the application's ruleset stack.


The merge of a single branch is initiated from the Application view’s Branches tab by right-clicking on the branch name to display a menu.

At the end of the merge process, the branch will be empty when the Keep all source rules and rulesets after merge option is not selected. The branch can then be used for the next set of tasks, issues, or bugs defined in Agile Studio.


Deployment Manager branch-based development

Consider a scenario in which the Deployment Manager application, running on a separate orchestration server, is configured to automatically initiate a delivery when a single-branch merge completes successfully for an application. Also suppose the development environment application, built on the same PegaDevOpsFoundation application, configures the RMURL (Release Manager URL) Dynamic System Setting (D-S-S) to point to the orchestration server's PRRestService. When initiating a single-branch merge, the development environment sends a request to the Deployment Manager application. The Deployment Manager application orchestrates the packaging of the application within the development environment, the publishing of that package to a mutual Dev/QA repository, and the import of that package into the QA environment.

Application packaging

The initial Application Packaging wizard screen asks which built-on applications, in addition to the application being packaged, should be included in the generated product rule. Note that components are also mentioned, a component being a Rule-Application where pyMode = Component.


Multiple applications referencing the same ruleset is highly discouraged. Immediately after saving an application rule to a new name, warnings appear in both applications, one warning for each dual-referenced ruleset.

The generated warnings lead to the following conclusions:

• A product rule should contain a single Rule-Application where pyMode = Application.
• Product rules should be defined starting with applications that have the fewest dependencies, ending with applications that have the greatest number of dependencies.

The FSGEmail application would be packaged first, followed by the Hotel application, followed by the Booking application. While it is possible to define a product rule that packages a component only, there is no need to do so. The component can be packaged using the component rule itself as shown below.


Currently, the Deployment Manager only supports pipelines for Rule-Application instances where pyMode = Application. When an application is packaged, and that application contains one or more components, those components should also be packaged. If a built-on application has already packaged a certain component, that component can be skipped. In the following image, the FSGEmail application’s product rule includes the EmailEditor component. Product rules above FSGEmail (for example, Hotel and Booking) do not need to include the EmailEditor component.

The Open-closed principle applied to packaging and deployment

The goal of the Open-closed principle is to eliminate ripple effects. A ripple effect occurs when an object makes changes to its interface as opposed to defining a new interface and deprecating the existing interface. The primary interface for applications on which other applications are built, such as FSGEmail and Hotel, is the data required to construct the new interface using data propagation. If the EmailEditor component mandates a new property, the FSGEmail application needs to change its interface to applications that are built on top of it, such as the Hotel application. The Hotel application then needs to change its built-on interface to allow the Booking application to supply the value for the new mandated property. By deploying applications separately and in increasing dependency order, the EmailEditor component change eventually becomes available to the Booking application without breaking that application or the applications below it. Note: It is not a best practice to update all three applications (FSGEmail, Hotel, and Booking) using a branch associated with the Booking application.


Assessing and monitoring quality

Introduction to assessing and monitoring quality

Coupled with automated unit tests and test suites, monitoring the quality of the rules is crucial to ensuring application quality before application features are promoted to higher environments. After this lesson, you should be able to:

• Establish quality measures and expectations on your team
• Create a custom guardrail warning
• Customize the rule check-in approval process


How to establish quality standards on your team

Fixing a bug costs far more once the bug has reached production users. The pattern of allowing low-quality features into your production environment results in technical debt. Technical debt means you spend more time fixing bugs than working on new features that add business value. Allowing unreviewed or lightly tested changes to move through a continuous integration/continuous deployment (CI/CD) pipeline can have disastrous results for your releases. Establishing standard practices for your development team can prevent these types of issues and allows you to focus on delivering new features to your users. These practices include:

• Leveraging branch reviews
• Establishing a rule check-in approval process
• Addressing guardrail warnings
• Creating custom guardrail warnings
• Monitoring alerts and exceptions

Establishing these practices on your team helps to ensure that your application is of the highest quality possible before promoting to other environments or allowing the change's inclusion in the continuous integration pipeline.

Leveraging branch reviews

To increase the quality of your application, you or a branch development team can create reviews of branch contents. For more information on how to create and manage branch reviews, see the help topic Branch reviews. The Branch quality landing page aids the branch review process, displaying guardrail warnings, merge conflicts, and unit test results. It is important to maintain a high compliance score and to ensure code has been tested. The Deployment Manager's non-optional pxCheckForGuardrails flow will halt a merge attempt when a Get Branch Guardrails response shows that the weighted guardrail compliance score is less than the minimum-allowed guardrail score. Use Pulse to collaborate on reviews. Pulse can also send emails when a branch review is assigned and closed. Once all comments and quality concerns are addressed, you can merge the branch into the application.

Establishing check-in approval

You can enable and customize the default rule check-in approval process to perform the steps you deem necessary to maintain the quality of the checked-in rules. For example, you can modify the check-in approval process to route check-ins from junior team members to a senior team member for review.


Addressing guardrail warnings

The Application Guardrails landing page (Designer Studio > Application > Guardrails) helps you understand how compliant your application is with best practices, or guardrails. For more information on the reporting metrics and key indicators available on the landing page, see the help topic Application Guardrails landing page. Addressing the warnings can be time consuming. Review and address these warnings daily so they do not become overwhelming and prevent you from moving your application features to other environments.

Creating custom guardrail warnings

You can create custom guardrail warnings to catch certain types of violations. For example, your organization wants to place a warning on any activity rule that uses the Obj-Delete activity method. You can create a custom guardrail warning to display a warning that must be justified prior to moving the rule to another environment.

Monitoring alerts and exceptions

Applications with frequent alerts and exceptions should not be promoted to other environments. Use Autonomic Event Services (AES). For more information, see the article Introduction to Autonomic Event Services (AES). If you do not have access to AES, use the PegaRULES Log Analyzer (PLA) to download and analyze the contents of application and exception logs. For more information, see the topic PegaRULES Log Analyzer (PLA) on Pega Exchange.


How to create a custom guardrail warning

Guardrail warnings identify unexpected and possibly unintended situations, practices that are not recommended, or variances from best practices. You can create additional warnings that are specific to the organization's environment or development practices. Unlike rule validation errors, warning messages do not prevent the rule from saving or executing.

To add or modify rule warnings, override the empty activity called @baseclass.CheckForCustomWarnings. This activity is called as part of the Rule-.StandardValidate activity, which is called by, for example, Save and Save-As, and is designed to allow you to add custom warnings. You typically want to place the CheckForCustomWarnings activity in the class of the rule type to which you want to add the warning. For example, if you want to add a custom guardrail warning to an activity, place CheckForCustomWarnings in the Rule-Obj-Activity class. Place the CheckForCustomWarnings activity in a ruleset available to all developers. Configure the logic for checking if a guardrail warning needs to be added in the CheckForCustomWarnings activity. Add the warning using the @baseclass.pxAddGuardrailMessage function in the Pega-Desktop ruleset.

You can control the warnings that appear on a rule form by overriding the standard decision tree Embed-Warning.ShowWarningOnForm. The decision tree can be used to examine information about a warning, such as name, severity, or type, to decide whether to present the warning on the rule form. Return true to show the warning, and false if you do not want to show it.

KNOWLEDGE CHECK

When would you create a custom guardrail warning? To identify variances from best practices specific to the organization's environment or development practices


How to customize the rule check-in approval process

The rule check-in feature allows you to use a process to manage changes to the application. Use this feature to make sure that checked-in rules meet quality standards by ensuring they are reviewed by a senior member of the team. Pega Platform comes with the Work-RuleCheckIn default work type for the approval process. The work type contains standard properties and activities, and a flow called ApproveRuleChanges that is designed to control the rule check-in process.


For instructions on how to enable rule check-in approval, see the help topic Configuring the rule check-in approval process.


When the default check-in approval process is in force for a ruleset version, the flow starts when a developer begins a rule check-in. The flow creates a work item that is routed to a workbasket. The standard decision tree named Work-RuleCheckIn.FindReviewers returns the workbaskets. Rules awaiting approval are moved to the CheckInCandidates ruleset. By default, the review work items are assigned to a workbasket with the same name as the candidate ruleset defined in the Work-RuleCheckIn.pyDefault data transform. Override the Work-RuleCheckIn.FindReviewers decision tree if you want to route to a different workbasket, or to route to different workbaskets based on certain criteria. The approver can provide a comment and take one of three actions:

• Approve the check-in to complete the check-in process and resolve the rule check-in work item.
• Reject the check-in to delete the changed rule and resolve the rule check-in item.
• Send it back to the developer for further work to route the work item to the developer and move the rule to the developer's private ruleset.

Affected parties are notified by email about the evaluation results. You can enhance the default rule check-in approval process to meet your organization's requirements.

KNOWLEDGE CHECK

How can the rule check-in approval process help in monitoring quality? By ensuring rules are reviewed by senior members of the team before they are checked in


Conducting load testing

Introduction to conducting load testing

Load testing is an important part of preparing any application for production deployment. It helps identify performance issues that may only become apparent when the application is under load. Performance issues found in load testing are not easy to detect in a normal development environment. After this lesson, you should be able to:

• Design a load testing strategy
• Leverage load testing best practices


Load testing

Load testing is the process of putting demand on your application and measuring its response. Load testing is performed to determine a system's behavior under both normal and anticipated peak load conditions. Load testing helps identify the maximum operating capacity of an application as well as any bottlenecks, and determines which component is causing degradation. The term load testing is often used synonymously with concurrency testing, software performance testing, reliability testing, and volume testing. All of these are types of nonfunctional testing used to validate the suitability for use of any given software.

Load testing allows you to validate that your application meets the performance acceptance criteria, such as response times, throughput, and maximum user load. Pega Platform can be treated like any other web application when performing load testing. Tip: Performance testing requires skilled and trained practitioners who are able to design, construct, execute, and review performance tests, taking best practices into account. You can engage Pega's Performance Health Check service to help design and implement your load testing plan.


KNOWLEDGE CHECK

What question does load testing answer? Will the system meet the expected performance goals?


How to load test a Pega application

To load test your Pega application, you can use any web application load-testing tool, such as JMeter or LoadRunner. Before running a performance test, the best practice is to exercise the main paths through the application, including all those to be exercised by the test script, and then take a Performance Analyzer (PAL) reading for each path. Investigate and fix any issues that are exposed. Note: Load testing is not the time to shake out application quality issues. Ensure that the log files are clean before attempting any load tests. If exceptions and other errors occur often during routine processing, the load test results are not valid. Run the load test as a series of iterations, with goals identified by both business and technical metrics:

• Test environment baseline – The first test, to establish that the application, environment, and tools are all working correctly.
• Application baseline – A test run with one user or one batch process creating a case in a single JVM. Then increase to 10, and then to 100, users or cases created by the batch process.
• Full end-to-end test – The first full test of the application end to end, still in a single JVM.
• Failure in one JVM – Test what happens if there is a failure in one of the JVMs.
• Span JVMs based on the peak business and technical metrics/goals – This is iterated as much as needed to achieve the agreed success metrics.

Begin testing just with HTTP transactions by disabling agents and listeners. Then, test the agents and listeners. Finally, test with both foreground and background processing. The performance tests must be designed to mimic the real-world production use. Collect data on CPU utilization, I/O volume, memory utilization, and network utilization to help understand the influences on performance. Relate the capacity of the test machines to production hardware. If the test machines have 20 percent of the performance of the production machines, then the test workload should be 20 percent of the expected production workload. If you expect to use two or more JVMs per server in production, use the same number when testing.

KNOWLEDGE CHECK

Which tool is recommended for load testing a Pega solution? Use the web application load testing tool your organization is most familiar with.


Load testing best practices

Pegasystems has extensive performance load testing experience, based on hundreds of implementations. The following list provides ten best practices that help you plan for success in the testing and implementation of a Pega solution. For the entire list of best practices, see the PDN article Ten best practices for successful performance load testing, or click any of the following links to go directly to that best practice.

1. Design the load test to validate the business use
2. Validate performance for each component first
3. Script user log-in only once
4. Set realistic think times
5. Switch off virus-checking
6. Validate your environment first
7. Prime the application first
8. Ensure adequate data loads
9. Measure results appropriately
10. Focus on the right tests

KNOWLEDGE CHECK

Pega recommends testing an application with a step approach. What does that mean? First test with 50 users, then 100, 150, and 200, for example. Then generate a simple predictive model to estimate the expected response time for more users.


POST PRODUCTION EVENTS


Estimating hardware requirements

Introduction to estimating hardware requirements

Due to the number of users expected to be on the system at one time, the number of case types you expect to process on any given day, and other factors, your application needs appropriate computing resources. Pega offers the Hardware Sizing Estimate service to guide you through this process. By the end of this lesson, you should be able to:

• Identify events that cause a hardware (re)sizing
• Describe the process for submitting a hardware sizing estimate
• Submit a hardware sizing estimate request


Hardware estimation events

At the beginning of your project, someone sized the development, testing, and production environments based on the expected number of concurrent users, case type volume, and other factors that impact application performance. You may have been part of the initial application sizing exercise, depending on when you arrived on the project. When you perform formal load testing, you see how well your application performs according to key performance indicators (KPIs). Note: Throughout the development cycle, you can monitor performance of the Pega application using Predictive Diagnostic Cloud or Autonomic Event Services. The tool you use depends on whether you are on-premise, using Pega Cloud, or using another cloud environment. As you add new users and new functionality to the application, the environment infrastructure can become insufficient for what you are asking the application to handle. For example, your new commercial loan request application shortened the process from two weeks to two days. Because the commercial loan application is successful, the personal loan department wants to start using the application. You expect 10 times as many personal loan requests as commercial loan requests, and 700 new personal loan processors to start using the application. The effect is similar to adding water to a glass that is not large enough to hold the amount of water you need it to hold: to hold more water, you need a larger glass.

Consider initiating a new hardware sizing estimate if you are:

• Increasing the number of concurrent users
• Introducing a new Pega application, such as Pega Customer Service or Pega Sales Automation
• Increasing the number of background processes, such as agents or listeners
• Introducing a new case type
• Introducing one or more new integrations to external systems, including robotic automations

Pega offers the Hardware Sizing Estimate service to help you assess hardware needs based on current and planned application usage. Even if you are unsure if the application infrastructure needs modification, you can initiate a request with this service for guidance on how to proceed. The resulting estimate includes recommended settings for application server memory, number of JVMs, and database server disk and memory needed to support your application.

KNOWLEDGE CHECK

Why do you initiate a new sizing estimate when you are adding new functionality or users to your existing application?

Your current application infrastructure may be insufficiently sized to handle the load of new users, applications, or case types you plan to introduce. To avoid a degradation in application performance as you evolve your application, estimate new infrastructure requirements and implement whatever is necessary to support your application requirements.


How to submit a hardware sizing estimate request

You determined that new changes or enhancements to your application could impact the performance or stability of your production application. The Pega Hardware Sizing Estimate team can help you estimate the infrastructure needs for your application. The following steps illustrate how to submit a request to the Hardware Sizing Estimate team.

Initiate the request in one of two ways:

• If you are internal to Pega, create a request in the Hardware Sizing Estimate application. The application prompts you for the environment information needed to process your sizing estimate request.
• If you are not internal to Pega, send an email to [email protected]. The Hardware Sizing Estimate team sends you an Excel-based questionnaire to complete.

Important: Do not use an existing version of the questionnaire. The Hardware Sizing Estimate team constantly refines the sizing models, so always request a new questionnaire for each new hardware sizing estimate.

Whether you use the sizing request application or the questionnaire process, collect information about the current number of users and details about the database, application server, and JVM configuration. Work with your operations team to gather this information.


The Hardware Sizing Estimate team uses the information you supply about your application to produce an estimated sizing report. The process takes approximately five business days. The team processes requests on a first in, first out (FIFO) basis. The Hardware Sizing Estimate team sends you the sizing report when complete. Note: The process for sizing estimation is the same if your application is running on Pega Cloud. The Hardware Sizing Estimate team works with Pega Cloud provisioning to communicate environment sizing recommendations.


Handling flow changes for cases in flight

Introduction to handling flow changes for cases in flight

Business applications change all the time, and these changes often impact active cases in a production environment. Without proper planning, those active cases could fall through the cracks because of a deleted or modified step or stage. By planning your strategy for upgrading production flows, you ensure that active cases are properly accounted for and that the change is integrated seamlessly. This lesson presents three approaches to safely updating flow rules without impacting existing cases that are already in production. After this lesson, you should be able to:

• Identify updates that might create problem flows
• Choose the best approach for updating flows in production
• Use problem flows to resolve flow issues
• Use the Problem Flow landing page


Flow changes for cases in flight

Business processes frequently change, and these changes can impact cases that are being worked on. Without proper planning, in-flight cases could become stuck or canceled because of a deleted or modified step or stage. For example, assume you have a flow where a case goes from a Review Loan Request step, to a Confirm Request step, and then to a Fulfill Request step. If you remove the Confirm Request step during a process upgrade, what happens to open cases in that step? By planning your strategy for upgrading production flows, you ensure that in-flight cases are properly accounted for and that the upgrade is integrated seamlessly.

Possible reasons for problem flows

Because flow rules hold assignment definitions, altering a flow rule can invalidate existing assignments. The following are examples of why a problem may occur in a flow:

• You remove a step in which there are open cases. This change causes orphaned assignments.
• You replace a step with a new step of the same name. This change may cause a problem because flow processing relies on an internal name for each assignment shape.
• You remove or replace other wait points in the flow, such as a Subprocess or a Split-For-Each shape. These changes may cause problems because their shape IDs are referenced in active subflows.
• You remove a stage from a case life cycle while in-flight cases exist. In-flight cases are not able to change stages.

Parent flow information that affects processing

Run-time flow processing relies on flow information contained in assignments. Changing an active assignment's configuration within a flow, or removing the assignment altogether, will likely cause a problem. Critical flow-related assignment information includes:

• pxTaskName — the shape ID of the assignment shape to which it is linked. For example, Assignment1
• pyInterestPageClass — the class of the flow rule. For example, FSG-Booking-Work-Event
• pyFlowType — the name of the flow rule. For example, Request_Flow_0
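To make that dependency concrete, the minimal sketch below (plain Java, using Map and Set as stand-ins for the assignment's clipboard values and the deployed flow's shape inventory; none of this is Pega API) checks whether an assignment still points at a shape that exists in the current flow version, which is exactly the check that fails when a shape is removed or replaced:

```java
import java.util.Map;
import java.util.Set;

public class AssignmentFlowCheck {

    /**
     * Returns true if the assignment still points at a shape that exists in the
     * current version of its flow. "assignment" stands in for the assignment's
     * clipboard data; "shapeIdsByFlow" maps a flow name to the shape IDs present
     * in the deployed flow version.
     */
    static boolean isStillValid(Map<String, String> assignment,
                                Map<String, Set<String>> shapeIdsByFlow) {
        String flowName = assignment.get("pyFlowType");   // e.g. "Request_Flow_0"
        String shapeId  = assignment.get("pxTaskName");   // e.g. "Assignment1"
        Set<String> shapes = shapeIdsByFlow.get(flowName);
        return shapes != null && shapes.contains(shapeId); // false => problem flow territory
    }
}
```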


How to manage flow changes for cases in flight

There are three fundamental approaches to safely updating flows that are already in production. Because each application configuration and business setting is unique, choose the approach that best fits your situation.

Important: Whichever approach you choose, always test the assignments with existing cases, not just newly created cases.

Approach 1: Switch to the application version of in-flight cases

This approach allows users to process existing assignments without having to update the flows. Add a new access group that points to the previous application version. Then, add the access group to the operator ID so that the operator can switch to that application from the user portal. In this example, an application has undergone a major reconfiguration. You created a new version of the application that includes newer ruleset versions. Updates include reconfigured flows, as well as decisioning and data management functionality. You decided to create a new access group because the extent of the changes goes beyond flow updates.

Advantage: The original and newer versions of the application remain intact, since no attempt is made to backport enhancements added to the newer version.

Drawback: Desirable fixes and improvements incorporated into the newer application version are not available to the older version.

Care must be taken not to process a case created in the new application version when using the older application version, and vice versa. Both cases and assignments possess a pxApplicationVersion property. Security rules, such as Access Deny, can be implemented to prevent access to cases and assignments that do not correspond to the application version currently in use. The user's worklist can either be modified to display only cases that correspond to the current application version, or the application version can simply be displayed as a separate worklist column. Likewise, Get Next Work should be modified to return only workbasket assignments that correspond to the current application version.
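The separation logic itself is a simple equality test on pxApplicationVersion. The sketch below (plain Java; the Assignment record and method names are hypothetical stand-ins for a worklist report row, not a Pega API) shows the kind of filter a customized worklist or Get Next Work implementation would apply:

```java
import java.util.List;
import java.util.stream.Collectors;

public class VersionFilter {

    /** Hypothetical view of an assignment row returned by a worklist report. */
    record Assignment(String key, String pxApplicationVersion) { }

    /** Keep only assignments created by the application version currently in use. */
    static List<Assignment> forCurrentVersion(List<Assignment> worklist,
                                              String currentAppVersion) {
        return worklist.stream()
                .filter(a -> currentAppVersion.equals(a.pxApplicationVersion()))
                .collect(Collectors.toList());
    }
}
```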


Approach 2: Process existing assignments in parallel with the new flow

This approach preserves certain shapes, such as Assignment, Wait, Subprocess, Split-For-Each, and so on, within the flow even though newly created cases no longer use those shapes. The newer version of the flow is reconfigured so that new cases never reach the previously used shapes, yet existing assignments continue to follow their original path. In this example, you have redesigned a process so that new cases no longer use the Review and Correct assignments. You will replace them with Create and Review Purchase Request assignments. Because you only need to remove two assignments, you decide that running the two flow variations in parallel is the best approach.

You make the updates in the new flow version in two steps. First, drag the Review and Correct assignments to one side of the diagram. Remove the connector from the Start shape to the Review assignment. Keep the Confirm Request connector intact. This ensures that in-flight assignments can continue to be processed.


Second, insert the Create and Review Purchase Request assignments at the beginning of the flow. Connect the Review Purchase Request assignment to the Create Purchase Order Smart Shape using the Confirm Request flow action.

Later, you can run a report that checks whether the old assignments are still in process. If not, you can remove the outdated shapes in the next version of the flow.

Advantage: All cases use the same rule names across multiple versions.

Drawbacks: This approach may not be feasible given configuration changes. In addition, it may result in cluttered Process Modeler diagrams.

Approach 3: Move existing assignments

In this approach, you set a ticket attached within the same flow, change to a new stage, or restart the existing stage. In-flight assignments advance to a different assignment, where they resume processing within the updated version. You run a bulk processing job that locates every outdated assignment in the system affected by the update. For each affected assignment, the bulk processing should call Assign-.OpenAndLockWork followed by Work-.SetTicket, pxChangeStage, or pxRestartStage. For example, you can execute a Utility shape that restarts a stage (pxRestartStage).


The following example shows a bulk assignment activity using SetTicket:

After you have configured the activity, you deploy the updated flow and run the bulk assignment activity. Important: The system must be off-line when you run the activity.

Example

In this example, a small insurance underwriting branch office processes about 50 assignments a day; most are resolved within two days. In addition, there is no overnight processing. You run a bulk process because the number of unresolved assignments is relatively small and the necessary locks can be acquired during the evening. Note that it is not necessary to use the Commit method.

Advantage: A batch process activity directs assignments by performing the logic outside the flow. You do not need to update the flow by adding a Utility shape to the existing flow, so the migration logic stays out of the flow and upgrades are easier. The activity approach also facilitates flow configuration and maintenance in Pega Express.

Drawback: It might be impractical if the number of assignments is large, or if there is no time period when the background processing is guaranteed to acquire the necessary locks.
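The following is a minimal sketch of the bulk-update logic described above, written as plain Java. The AssignmentApi interface is a hypothetical stand-in for the activity methods named in the text (Assign-.OpenAndLockWork, Work-.SetTicket, pxChangeStage, pxRestartStage); it is not a real Pega API, and a real implementation would be an activity run as a batch process rather than standalone Java:

```java
import java.util.List;

public class BulkAssignmentUpdate {

    /** Hypothetical wrapper around the activity calls described in the text. */
    interface AssignmentApi {
        void openAndLockWork(String assignmentKey);   // Assign-.OpenAndLockWork
        void setTicket(String ticketName);            // Work-.SetTicket
        void changeStage(String stageName);           // pxChangeStage (or restart the stage)
    }

    /** Locate every outdated assignment, then reroute each one via a ticket. */
    static void rerouteOutdatedAssignments(List<String> outdatedAssignmentKeys,
                                           AssignmentApi api,
                                           String ticketName) {
        for (String key : outdatedAssignmentKeys) {
            api.openAndLockWork(key);   // acquire the lock before changing the case
            api.setTicket(ticketName);  // or api.changeStage(...), per the approach chosen
            // No explicit Commit step is needed, as noted in the example above.
        }
    }
}
```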


How to use problem flows to resolve flow issues

When an operator completes an assignment and a problem arises with the flow, the primary flow execution is paused and a standard problem flow starts. A standard problem flow enables an administrator to determine how to resolve the flow. Pega Platform provides two standard problem flows: FlowProblems for general process configuration issues, and pzStageProblems for stage configuration issues. The problem flow administrator identifies and manages problem flows on the Flow Errors landing page.

Note: As a best practice, override the default workbasket or problem operator settings in the getProblemFlowOperator routing activity in your application to route the problem to the appropriate destination.

Customizing FlowProblems

You can copy the FlowProblems flow to your application to support your requirements. Do not change the name key. In this example, you add a Send Email Smart Shape to each of the CancelAssignment actions so that the manager is notified when the cancellations occur.

Managing stage-related problem flows

Problem flows can arise from stage configuration changes, such as when a stage is removed or relocated. When an assignment cannot be processed because of a stage-related issue, the system starts the standard pzStageProblems flow.


In the following example, assume the Booking stage has been refactored into a separate case type. A Booking case creation step and a wait step were added at the end of the Request stage's process. As a result, you remove the now-unnecessary Booking stage from the parent case type's case life cycle. Finally, any in-flight assignments that existed in the Booking stage were not moved back to the Request stage with a bulk processing activity.

When a user attempts to advance a case formerly situated in the removed Booking stage, the pzStageProblems flow is initiated. Within this flow, the operator can use the Actions menu to select Change stage.

The operator can then manually move the case to another stage, the Request stage being the most appropriate choice.


For backward compatibility, consider temporarily keeping an outdated stage and its steps as they are. For newly created cases, use a Skip stage when condition in the Stage Configuration dialog to bypass the outdated stage.


How to manage problem flows from the Flow Errors landing page

Identify and manage problem flows using the Flow Errors landing page, accessed by navigating to Designer Studio > Case Management > Processes. The pzLPProblemFlows ListView report associated with this landing page queries worklist and work assignments where the pxFlowName property value starts with FlowProblems. These flow error assignments were initially routed to the operator and assignment type returned by the nonfinal getFlowProblemOperator activity. The default values are Broken Process and Workbasket, respectively. Each row in the report identifies an individual flow problem. Rows may reflect a common condition or unrelated conditions generated by multiple applications.

Use the following features to fix problem flows:

• Use Resume Flow if you want to resume flow execution beginning at the step after the step that paused. Use this option when the application performs a valid action that is not represented in the flow. For example, a flow contains a decision shape that evaluates a decision table. If the decision table returns a result that does not correspond to a connector in the flow, add the connector to the decision shape and resume the flow. The flow uses the already-determined decision result to select the appropriate connector and advance to the appropriate shape.
• Use Retry Last Step to resume flow execution by re-executing the step that paused. Use this option to resume flow execution that pauses due to a transient issue such as a connector timeout or failure, or if you resolve an error by reconfiguring a rule used by the flow. For example, if you add a missing result to a decision table to fix a flow error, select Retry last step to reevaluate the decision table and determine the appropriate flow connector.
• Use Restart Flow to start the flow at the initial step. If an issue requires an update to a flow step that the application already processed, resume the flow at the initial step.
• Use Delete Orphan Assignments to delete assignments for which the work item cannot be found. Use this option to resolve flow errors caused by a user lacking access to a needed rule, such as an activity, due to a missing ruleset or privilege. Selecting Delete Orphan Assignments resolves an assignment that a user is otherwise unable to perform.

Note: Always test updated flow rules with existing work objects, not only newly created ones.


Extending an application

Introduction to extending an application

Case specialization describes how an existing application can be transformed into a framework / model / template / blueprint application without having to rename the classes of existing case type instances. As an LSA, you are sometimes asked to take an existing application and evolve it into a foundation for more specialized implementations. After this lesson, you should be able to:

• Describe how an existing application can be transformed into a framework / model / template / blueprint application
• Extend an application to a new user population
• Split an existing user population


How to extend existing applications

Extending a production application can occur for various reasons, planned or unplanned. Some of these reasons include:

• The enterprise has planned to sequentially roll out extensions to a foundation application due to budgetary and development resource limitations.
• The enterprise has discovered the need to either extend the production application to a new set of users, or split the production application across subsets of its existing users.

In either situation, the resulting user populations access their own application derived from the original production application. The previous scenarios fall into two major approaches:

• Extending the existing production application to support a new user population
• Splitting the existing production application to support subsets of the existing user population

Within each of the two major approaches are two deployment approaches: either to a new database or to the same database.

Deployment approaches

Whether extending or dividing an application, you can host the user populations on either a new database or the original database.

Deploying to a new database

When you deploy the application to a new database, the data in both applications are isolated from each other. For instance, you can use the same table names in each database. Use ruleset specialization to differentiate the rules specific to each application's user population. This approach is similar to using foundation data model classes — embellishment is preferable to extension. You do not need to use class specialization.

Deploying to the original database

When you deploy to the original database, use class specialization to differentiate the data. Class specialization creates new Data-Admin-DB-ClassGroup records and work pools. As a result, case data is written to tables that are different from the original tables. Security enforcement between applications hosted on the same database is essential. Unlike case data, assignments and attachments cannot be stored in different database tables. You can avoid this issue by using Pega's multitenant system; discuss with the organization whether the multitenant system is a viable option. Applications, cases, and assignments contain various organization properties. Use these properties as appropriate to restrict access between applications hosted in the same database.


Organization properties

Level          Application             Case                              Assignment
Organization   pyOwningOrganization    pyOrigOrg, pyOwnerOrg             pxAssignedOrg
Division       pyOwningDivision        pyOrigDivision, pyOwnerDivision   pxAssignedOrgDiv
Unit           pyOwningUnit            pyOrigOrgUnit, pyOwnerOrgUnit     pxAssignedOrgUnit

The case also carries pyOrigUserDivision.

Run the New Application wizard to achieve class specialization. In the Name your application screen, click the Advanced Configuration link. Under Organization settings, enter at least one new value in the Organization, Division, and Unit Name fields.

Suppose the new user population is associated with a new division, and there is a requirement to prevent an operator in the new division from accessing an assignment created by the original division. The easiest solution is to implement a Read Work- Access Policy that references the following Work- Access Policy Condition:

pxOwnerDivision = Application.pyOwningDivision AND pxOwnerOrganization = Application.pyOwningOrganization

Alternatively, you can define an Access Deny rule. Note: Using De Morgan's law, define the access when rule invoked by the Access Deny rule as the negation of the condition a single access role's access when rule would test:

pxOwnerDivision != Application.pyOwningDivision OR pxOwnerOrganization != Application.pyOwningOrganization
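The note above is simply De Morgan's law applied to the access condition: the Access Deny condition is the logical negation of the condition an access-granting when rule would test. A minimal sketch in plain Java (the field names mirror the properties quoted above; the class and method names are illustrative only):

```java
public class AccessConditions {

    /** Condition an access-granting when rule would test. */
    static boolean mayRead(String ownerDivision, String ownerOrganization,
                           String owningDivision, String owningOrganization) {
        return ownerDivision.equals(owningDivision)
                && ownerOrganization.equals(owningOrganization);
    }

    /** Equivalent Access Deny condition: the negation, per De Morgan's law. */
    static boolean denyRead(String ownerDivision, String ownerOrganization,
                            String owningDivision, String owningOrganization) {
        return !ownerDivision.equals(owningDivision)
                || !ownerOrganization.equals(owningOrganization);
    }
}
```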

Extending an application to a new user population

If you extend an application to support a new user population, the extended application can be:

• An application previously defined as a foundation application
• An application that becomes a template, framework, blueprint, or model application on top of which new implementations are built

Extending the application to a new database

When deploying to a new database, ruleset specialization is sufficient to differentiate the existing application's user population. Use the ruleset override procedure described in the Designing for Specialization lesson to specialize the existing application and to define the new application.

Extending the application to an existing database

To support a new user population within an existing database, run the New Application wizard to generate an application that extends the classes of the existing application's case types. Then use the ruleset override procedure described in the Designing for Specialization lesson to specialize the existing application.


Splitting an application's existing user population

In some situations, you may want to split an application's existing user population into subsets. Each resulting subset accesses its own application built on the original application. When active cases exist throughout a user population and there is a mandate to subdivide that user population into two distinct applications, reporting and security become problematic. Cloning the existing database is not a good approach, because it makes it difficult to control duplicated processing, such as agents running against both copies.

Moving a subset of the existing user population to a new database

If you create a new database to support a subdivided user population, and immediate user migration is not required, you can gradually transition user/account data from the existing database to the new database. Ideally, transfer user/account data starting with classes that have the fewest dependencies. For example, attachment data does not reference other instances. Copy resolved cases for a given user/account to the new database, but do not purge resolved cases from the original system immediately. Wait until the migration process is complete for that user/account. Use the Purge/Archive wizard to perform this task (Designer Studio > System > Operations > Purge/Archive). Optionally, modify case data organization properties to reflect the new user population.

A requirement to immediately move a subset of an existing user population to a new database is more complex due to the likelihood of open cases. Use the Package Work wizard to perform this task (Designer Studio > Distribution > Package Work).

Creating subsets of the existing user population within the original database

The most complex situation is when immediate user population separation is mandated within the same database. To support this requirement, a subset of the existing cases must be refactored to different class names. Manipulating the case data model for an entire case hierarchy while a case is active is risky and complex. For this reason, seek advice and assistance before attempting a user population split for the same application within the same database.

Case type class names

Avoid refactoring every case type class name when splitting a user population within an existing database. Refactoring class names is a time-consuming process, and businesses prefer the most expedient and cost-effective change management process. The most cost-effective approach keeps the largest percentage of users in the existing work pool class and moves the smaller user population to a new work pool class. Pega auto-generates database table names, and Pega Express generates names for rules such as When rules, flow names, and sections. Case type class names need not exactly reflect their user populations. An application's name, its organization properties, and associated static content are sufficient to distinguish one specialized application from another.


The notion of defining a framework, foundation, template, model, or blueprint layer that abstracts a business process is sound. In the past, these foundation classes used the FW (FrameWork) abbreviation in their class names. Naming case classes with the FW abbreviation sometimes occurs at the beginning of the development process. If, during post-production, an implementation application becomes a framework application, its class name does not contain the FW abbreviation. The abbreviation is an optional, not a necessary, naming convention.


COURSE SUMMARY


Lead System Architect summary

Now that you have completed this course, you should be able to:

• Design the Pega application as the center of the digital transformation solution
• Describe the benefits of starting with a Pega customer engagement or industry application
• Recommend appropriate use of robotics and artificial intelligence in the application solution
• Leverage assets created by business users who are building apps in Pega Express
• Design case types and data models for maximum reusability
• Design an effective reporting strategy
• Design background processes, user experience, and reporting for optimal performance
• Create a release management strategy, including DevOps, when appropriate
• Ensure your team is adhering to development best practices and building quality application assets
• Evolve your application as new business requirements and technical challenges arise

Next steps

To further your learning and share in discussions pertinent to lead system architects, including the latest information on certification requirements, see the Lead System Architect Central space on the PDN.
