3 tips for website performance optimization
Monitoring performance metrics and user experience will deliver the visibility required to optimize performance.
With the public sector’s renewed focus on customer experience, it’s imperative that the front line of that experience – the website – is up and running well. Recent directives like the U.S. Public Participation Playbook have stressed that agency websites must be designed with the user in mind and must be available to provide the services that citizens need.
A slow or broken site is no longer acceptable. And that's not just for citizen-facing websites. Federal IT managers face the same challenge with internal sites, such as intranets and back-end resource sites, that provide both internal communication and services to the federal civilian and military workforce, such as in-theater campaign information or family services.
So what can federal IT pros do to keep ahead of the challenge, catch critical issues before they impact the user and keep external and internal sites running at optimal performance?
The answer is three-fold:
- Monitor key performance metrics on the back-end infrastructure that supports the website. On the network, that means checking bandwidth utilization and latency; for systems capacity, watch CPU, memory, disk space, database wait time and storage I/O (see the sketch after this list).
- Track customer experience and front-end performance from the outside, looking for slow web page elements, DNS issues and external application performance.
- Integrate back- and front-end information to get a complete picture, which will help quickly determine the root cause of a slow user experience and resolve the issue.
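As a concrete starting point for the first item, here is a minimal sketch of a script that samples a server's key capacity metrics and flags anything over a threshold. It uses the third-party psutil library, and the thresholds are illustrative assumptions rather than recommended values.

```python
# Minimal back-end health check; thresholds are illustrative, not prescriptive.
import psutil

THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 90.0}  # percent

def check_backend():
    readings = {
        "cpu": psutil.cpu_percent(interval=1),       # CPU utilization
        "memory": psutil.virtual_memory().percent,   # RAM in use
        "disk": psutil.disk_usage("/").percent,      # root volume usage
    }
    alerts = {k: v for k, v in readings.items() if v >= THRESHOLDS[k]}
    return readings, alerts

if __name__ == "__main__":
    readings, alerts = check_backend()
    print("readings:", readings)
    if alerts:
        print("ALERT -- over threshold:", alerts)
```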
Performance monitoring
Most federal IT pros understand the advantages of standard performance monitoring, but monitoring in real time is just not enough. To truly optimize internal and external site performance, the key is to have performance information in advance.
This advance information is best gained by establishing a baseline, then comparing activity to that standard. What does an average day look like? Are there spikes in traffic at the same time each day or at a certain time of year? With a baseline in place, a system can be configured to provide alerts based on information that strays from the baseline.
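A minimal sketch of that idea, assuming past readings for each hour of day are kept in a simple history store: compare the current reading to the historical mean for the same hour and alert when it strays beyond a chosen number of standard deviations (three here, as an illustrative choice).

```python
# Baseline-deviation alerting: flag readings more than `sigmas` standard
# deviations from the historical mean for this hour of day.
from statistics import mean, stdev

def is_anomalous(current, history, sigmas=3.0):
    """history: past readings for this metric at this hour of day."""
    if len(history) < 2:
        return False             # not enough data to form a baseline
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu
    return abs(current - mu) > sigmas * sd

# Example: requests/sec observed at 9:00 a.m. on previous weekdays
baseline = [410, 395, 425, 400, 415]
print(is_anomalous(640, baseline))   # True -- a spike worth an alert
print(is_anomalous(405, baseline))   # False -- within the normal band
```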
In fact, the goal is to get an alert that there may be a problem, so troubleshooting can start immediately and the root cause can be uncovered before it impacts customers – whether employees, warfighters or citizens. By anticipating an impending usage spike that will push capacity limits, the IT team can be proactive and avoid a slowdown.
That historical baseline will also help allocate resources more accurately and enable capacity planning. Capacity planning analysis provides a predictive advantage – it lets IT managers configure the system to send an alert when a resource limit may be reached based on historical analysis instead of an alert after a limit has been reached and the site is down.
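For illustration, here is one simple way such a predictive alert can work: fit a linear trend to recent usage samples and estimate how many days remain before a limit is reached. The sample data and the 90 percent limit are assumptions; production capacity planning tools use more sophisticated models.

```python
# Predictive capacity alerting: extrapolate a linear trend in disk usage
# to estimate how many days remain before the volume hits a limit.
def days_until_limit(samples, limit=90.0):
    """samples: daily usage percentages, oldest first."""
    n = len(samples)
    if n < 2:
        return None                      # need at least two points for a trend
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) \
            / sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None                      # usage is flat or shrinking
    intercept = y_mean - slope * x_mean
    return (limit - intercept) / slope - (n - 1)   # days from today

usage = [62.0, 63.5, 64.8, 66.4, 68.1]   # last five days of disk usage (%)
print(f"~{days_until_limit(usage):.0f} days until 90% -- alert now, not then")
```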
Automation is also a critical piece of performance monitoring. If the site crashes over the weekend, automated tools can restart it and send an alert when it's back up so the team can start troubleshooting.
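A bare-bones version of that watchdog might look like the following sketch. The health-check URL, the systemd service name and the alert hook are all placeholders for whatever a given environment actually uses.

```python
# Automated watchdog: poll the site, restart the (hypothetical) systemd
# service if it stops responding, and raise an alert once it recovers.
import subprocess
import time
import urllib.request

SITE_URL = "https://intranet.example.gov/health"   # placeholder endpoint
SERVICE = "httpd"                                   # placeholder service name

def site_is_up(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def alert(message):
    print(f"ALERT: {message}")   # swap in email/SMS/ticketing integration

while True:
    if not site_is_up(SITE_URL):
        alert(f"{SITE_URL} is down; restarting {SERVICE}")
        subprocess.run(["systemctl", "restart", SERVICE], check=False)
        if site_is_up(SITE_URL):
            alert(f"{SITE_URL} is back up -- begin root-cause troubleshooting")
    time.sleep(60)   # poll once a minute
```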
End-user experience monitoring
Understanding the customer experience is a critical piece of ensuring optimal site performance. Let’s say the back-end performance looks good, but calls are coming in from end-users that the site is slow. What’s the next step?
Ideally, IT staff would be able to mimic a user's experience from wherever that user is located, anywhere around the world. This allows the team to isolate an issue to a specific location. For page-formatting or other front-end issues, such point-to-point visibility – from the end user through the infrastructure and back to the end user – will help pinpoint the problem's root cause more effectively.
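One way to approximate this without a commercial tool is a small synthetic probe run from machines in each user location, timing DNS resolution and the full page fetch separately so a slowdown can be attributed to name resolution, the network or the server. The target URL below is a placeholder.

```python
# Synthetic end-user check, meant to run on a probe machine in each location:
# time the DNS lookup and the full page fetch as separate measurements.
import socket
import time
import urllib.request
from urllib.parse import urlparse

def probe(url):
    host = urlparse(url).hostname
    t0 = time.perf_counter()
    socket.getaddrinfo(host, 443)            # DNS lookup only
    dns_ms = (time.perf_counter() - t0) * 1000

    t1 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=15) as resp:
        body = resp.read()
    fetch_ms = (time.perf_counter() - t1) * 1000
    return dns_ms, fetch_ms, len(body)

dns_ms, fetch_ms, size = probe("https://www.example.gov/")
print(f"DNS {dns_ms:.0f} ms | fetch {fetch_ms:.0f} ms | {size} bytes")
```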
It is important to note that federal IT pros face a unique challenge in monitoring the end-user experience. Many monitoring tools are cloud-based and therefore cannot reach sites behind an agency firewall. If this is the case, be sure to find a tool that works inside the firewall and can monitor internal and external sites equally.
Data integration
Monitoring back-end performance against historical baselines and capacity planning analysis, alongside the end-user experience, will deliver real data – historical trends, accompanying analysis and factual metrics – from across the infrastructure. The ultimate objective is to bring all of this information together for visibility across the front end and back end alike, so the team knows where to start looking for any anomaly, no matter where it originates.
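As a simple illustration of that integration, the sketch below joins slow page loads logged by the front-end monitor with the back-end readings captured nearest the same timestamp, so triage starts in the right layer. The data shapes and sample values are assumptions.

```python
# Front-/back-end correlation: for each slow page load, look up the back-end
# sample nearest in time so the team knows which layer to investigate first.
from bisect import bisect_left

# (epoch seconds, page load ms) from the front-end monitor
slow_loads = [(1000, 8200), (1600, 9100)]
# (epoch seconds, back-end readings) from the infrastructure monitor
backend = [(900, {"cpu": 45, "db_wait_ms": 20}),
           (1020, {"cpu": 96, "db_wait_ms": 850}),
           (1580, {"cpu": 50, "db_wait_ms": 30})]

times = [t for t, _ in backend]
for t, load_ms in slow_loads:
    i = min(bisect_left(times, t), len(backend) - 1)
    # pick whichever neighboring sample is closer in time
    if i > 0 and abs(times[i - 1] - t) < abs(times[i] - t):
        i -= 1
    print(f"slow load ({load_ms} ms at t={t}) -> back end {backend[i][1]}")
# First load lines up with a CPU/database spike (back-end problem); the
# second shows a healthy back end, pointing at the front end or network.
```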
The goal is to improve visibility in order to optimize performance. The more data IT pros can muster, the better positioned they are to deliver that performance and give customers the best possible experience.