Salesforce is a mature SaaS application, yet many organizations are still plagued by network and application performance issues due to the unpredictability of the Internet and the complexity of the Salesforce platform. Unlike other SaaS applications, such as Office 365, Salesforce is not an “out-of-the-box” application. It’s typically customized by developers to suit the needs of the enterprise, and it often leverages third-party applications for business functions such as Marketing, Sales and Customer Support. User location can also impact performance, as Salesforce hosts each enterprise’s application environment from a specific data center.
In general, SaaS applications can be challenging to monitor given that you don’t own the application infrastructure, nor do you own all of the external network and service dependencies (ISPs, DNS, secure web gateways, etc.) your users rely on for a good digital experience. Traditional network and application monitoring tools, such as packet capture and flow analyzers, don’t work outside of your environment, leaving you blind to the performance of your critical SaaS applications.
Despite these challenges, it’s still possible to gain visibility into networks and services that you don’t manage. By leveraging active monitoring techniques and adopting a lifecycle approach that emphasizes readiness, you can get ahead of performance issues that impact user experience.
Salesforce Service Delivery Architecture
Salesforce serves its core platform from one of the dozens of instances it hosts in North America, Europe, and Asia-Pacific. Each enterprise can choose just one of these instances, which effectively ties it to a specific physical data center. For example, the Salesforce NA38 instance has historically been hosted in either Dallas, Texas or Phoenix, Arizona, with one of these serving as the active site and the other as backup. When accessing Salesforce’s front door (salesforce.com or login.salesforce.com), users connect to a data center that is local to their region; however, once authenticated, they connect to the data center that hosts their organization’s instance.
This service model can prove problematic for enterprises with a global presence, because whatever instance they choose, some users may be put at a disadvantage. The further a user is from the Salesforce data center, the more network hops and service providers their traffic is likely to traverse, leading to significant variability in performance.
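One simple way to see this split in practice is to compare how the front door and your organization’s instance resolve and connect from a given user location. The sketch below uses only Python’s standard library; na38.salesforce.com is a placeholder, so substitute the instance hostname your organization is actually assigned to.

```python
# Minimal sketch: compare DNS resolution and TCP connect time for the
# Salesforce front door versus your org's instance hostname.
# "na38.salesforce.com" is a placeholder; substitute your own instance.
import socket
import time

HOSTS = ["login.salesforce.com", "na38.salesforce.com"]

def probe(host, port=443):
    t0 = time.monotonic()
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    dns_ms = (time.monotonic() - t0) * 1000

    t1 = time.monotonic()
    with socket.create_connection((ip, port), timeout=5):
        connect_ms = (time.monotonic() - t1) * 1000
    return ip, dns_ms, connect_ms

for host in HOSTS:
    ip, dns_ms, connect_ms = probe(host)
    print(f"{host:28s} {ip:16s} DNS {dns_ms:6.1f} ms   connect {connect_ms:6.1f} ms")
```

Run from a branch office, this makes the distinction concrete: the front door typically resolves to a nearby region, while your instance may sit a continent away.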
Given Salesforce’s service delivery architecture, there are three main factors to consider when developing a monitoring strategy:
1 — Your network architecture can dramatically impact the availability and performance of Salesforce. Whether you backhaul traffic over an MPLS WAN to a central egress point for security purposes or connect your branch offices directly to the Internet (perhaps using a cloud-based security solution, such as Zscaler), consider how this will impact your user locations (a quick way to verify the actual egress path per location is sketched after this list).
2 — Internet transit relies on external dependencies outside of your control. These dependencies include ISPs, DNS, and cloud-based services such as Secure Web Gateway (SWG) providers. Unlike Microsoft, Salesforce does not operate an extensive global network, so the majority of transit hops to your Salesforce host data center will take place over the Internet, meaning there are more dependencies than Salesforce itself to manage. And because each of your user locations takes a different path to reach Salesforce, performance can vary significantly from site to site.
3 — Salesforce as an application is not the same for every enterprise. You can develop and run your own code and create custom packages that may include external dependencies, such as APIs and third-party applications. Because Salesforce is not an “out-of-the-box” application, you need to think about monitoring the health of your specific implementation of Salesforce, not just the Salesforce platform and underlying application infrastructure.
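Regarding the first point above, a quick way to confirm how each office actually reaches the Internet is to check the public IP address the Internet sees for that location and compare it against your known central-egress and SWG address ranges. The sketch below is a rough illustration; api.ipify.org is just one example of a public IP echo service, and the prefixes are placeholders for your own ranges.

```python
# Minimal sketch: check which public IP the Internet sees for this location,
# then classify the egress path. api.ipify.org is one example echo service;
# the address prefixes below are placeholders for your own known ranges.
import urllib.request

CENTRAL_EGRESS_PREFIXES = ["203.0.113."]   # placeholder: central data-center egress
SWG_PREFIXES = ["198.51.100."]             # placeholder: cloud security provider

def public_egress_ip(timeout=5):
    with urllib.request.urlopen("https://api.ipify.org", timeout=timeout) as resp:
        return resp.read().decode().strip()

ip = public_egress_ip()
if any(ip.startswith(p) for p in CENTRAL_EGRESS_PREFIXES):
    print(f"{ip}: traffic is backhauled to the central egress point")
elif any(ip.startswith(p) for p in SWG_PREFIXES):
    print(f"{ip}: traffic egresses through the cloud security provider")
else:
    print(f"{ip}: traffic appears to break out locally to the Internet")
```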
Getting Visibility Into Service Delivery
To gain insight into end-to-end performance for each of your Salesforce users, you need external visibility at both network and application layers. By placing a ThousandEyes Enterprise Agent in each of your locations, such as branch offices and data centers, you can run active tests to understand the availability and performance of not only Salesforce but every intermediary provider.
An HTTP server test in the ThousandEyes application provides application-layer data (e.g., DNS resolution time and wait time) as well as network path visualization and hop-by-hop metrics such as packet loss, latency, and jitter.
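For intuition about what those application-layer metrics represent, the following standard-library sketch approximates the per-phase timing (DNS, TCP connect, SSL handshake, and wait, i.e., time to first byte) for a single request to the Salesforce front door. It is a rough illustration of the phases an HTTP server test measures, not a replacement for agent-based monitoring.

```python
# Rough approximation of the per-phase timing an HTTP server test reports,
# using only the standard library: DNS, TCP connect, SSL handshake, and
# wait (time to first byte) for a single GET to the Salesforce front door.
import socket
import ssl
import time

HOST = "login.salesforce.com"

t0 = time.monotonic()
ip = socket.getaddrinfo(HOST, 443, proto=socket.IPPROTO_TCP)[0][4][0]
t_dns = time.monotonic()

sock = socket.create_connection((ip, 443), timeout=10)
t_conn = time.monotonic()

tls = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
t_tls = time.monotonic()

tls.sendall(f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode())
tls.recv(1)  # wait time ends when the first response byte arrives
t_wait = time.monotonic()
tls.close()

print(f"DNS:     {(t_dns - t0) * 1000:7.1f} ms")
print(f"Connect: {(t_conn - t_dns) * 1000:7.1f} ms")
print(f"SSL:     {(t_tls - t_conn) * 1000:7.1f} ms")
print(f"Wait:    {(t_wait - t_tls) * 1000:7.1f} ms")
```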
Understanding network and application performance for both the Salesforce front door and your production instance will give you visibility into the experience of your users and provide the clarity your IT team needs to pinpoint and troubleshoot issues quickly. Adding a transaction test that simulates a real user interaction, including authentication, can also be valuable in ensuring key business workflows are functional.
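A transaction test is scripted within the ThousandEyes platform, but the shape of the workflow is easy to illustrate with any browser-automation tool. The sketch below uses Playwright purely as an example; the credentials are placeholders, and the selectors are assumptions about the standard Salesforce login page that may need adjusting for your org.

```python
# Illustrative transaction sketch: load the Salesforce login page, authenticate,
# and wait for the app to render, timing the whole workflow. Playwright is used
# here purely as an example; ThousandEyes transaction tests are scripted within
# the platform itself. Credentials and selectors are placeholders/assumptions.
import time
from playwright.sync_api import sync_playwright

USERNAME = "user@example.com"   # placeholder credentials
PASSWORD = "change-me"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()

    t0 = time.monotonic()
    page.goto("https://login.salesforce.com")
    page.fill("#username", USERNAME)        # assumed field IDs on the login page
    page.fill("#password", PASSWORD)
    page.click("#Login")
    page.wait_for_selector("text=Home", timeout=30_000)  # assumed post-login marker
    print(f"Login workflow completed in {time.monotonic() - t0:.1f} s")

    browser.close()
```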
Getting to Root Cause
It can be challenging to troubleshoot availability and performance issues when there may be multiple networks and services between each of your users and Salesforce. IT groups often waste considerable cycles determining where a problem occurs, particularly once both their own network and Salesforce have been ruled out as the cause.
For example, in Figure 2, multiple user sites within an enterprise environment experienced degraded performance connecting to Salesforce. Using ThousandEyes, the enterprise was able to quickly determine that multiple nodes across two ISP networks were suffering packet loss. They were able to establish root cause without hunting within their environment or escalating to Salesforce—neither of which was the source of the issue.
The deep visibility ThousandEyes provides not only allows for rapid problem identification, but it also gives IT teams the evidence they need to escalate to the right party. There’s no need to guess who is at fault.
In the example in Figure 3, Salesforce was experiencing a significant outage event within their Dallas, Texas data center, which was affecting availability for many users trying to connect to the NA32 instance. The hop-by-hop path visualization and network metrics in the ThousandEyes application provided clear evidence that Salesforce was the source of the issue.
Sharing meaningful data with providers such as Salesforce can enable escalation and remediation to progress smoothly, without the need for finger pointing.
Adopting a Cloud Readiness Lifecycle
The key to achieving a good user experience for your Salesforce users and getting ahead of change is to get visibility early, so you can define success metrics, get to know your providers, and have the data you need to get to root cause quickly. ThousandEyes advocates a continuous lifecycle approach to monitoring, including a readiness phase that ensures issues are uncovered early, before they impact users (see Figure 4).
Ensuring an excellent user experience for any SaaS application is challenging—because there’s no steady state in the cloud. But with application-aware network visibility into your external environment, you can successfully navigate the cloud and achieve SaaS success.
Start monitoring Salesforce today with a ThousandEyes free trial. You’ll immediately be able to monitor SaaS applications from 150+ cities around the globe using ThousandEyes-managed Cloud Agents. You can also deploy Enterprise Agents inside your network to gain an inside-out view, as well as Endpoint Agents to understand the individual end-user experience. Want to learn more from our experts? Request a demo.