Blog Posts Archive - Pingdom
https://www.pingdom.com/blog/

Introduction to Observability
https://www.pingdom.com/blog/introduction-to-observability/
Mon, 15 Apr 2024

These days, systems and applications evolve at a rapid pace. This makes analyzing the internal performance of applications complex. Observability emerges as a path to efficient and effective operational insights. Imagine a team of doctors monitoring a patient’s vitals—heart rate, temperature, blood pressure. These readings, combined with observation of symptoms, paint a picture of the patient’s health. This allows doctors to diagnose issues and provide care. Observability works similarly for digital systems. It’s the ability to see inside and understand your software’s behavior, like doctors observing a patient.

By collecting and analyzing data such as resource usage, performance metrics, and error logs, observability gives you the pulse of your system, ultimately helping you catch problems early, optimize performance, and deliver a better user experience. As a result, when digital systems grow increasingly complex, observability becomes the key to figuring out what’s happening inside your systems.

With microservices, cloud deployments, and constant updates, observability isn’t just a nice-to-have—it’s essential. It empowers developers to identify and fix bugs faster, leading to less downtime. For businesses, it translates to smoother operations, reduced costs, and a competitive edge. This post is your guide to unlocking the power of observability. We’ll dive into its core principles, explore key components like logs, metrics, and tracing, and showcase the benefits it can bring. By the end, you’ll understand why observability matters and how to implement it for your digital systems.

What Is Observability?

Understanding observability starts with grasping its foundational principles. These principles act as the building blocks, forming the basis for the powerful insights observability provides into the inner workings of digital systems.

Real-Time Insights into System Behavior

  • Instant understanding: Observability provides an instantaneous view of your digital systems, allowing you to understand their behavior as events unfold.
  • Timely problem solving: This real-time perspective enables quick identification of issues.
  • Proactive management: Observability is not only about reacting to problems; it’s about being proactive. By leveraging real-time insights, you can foresee potential challenges and take preventive measures.

Comprehensive Data Collection

  • Full spectrum of information: Observability goes beyond surface-level data, collecting a comprehensive set of information.
  • Uncovering hidden patterns: By gathering diverse data, observability reveals hidden patterns and correlations.
  • Historical context: Comprehensive data collection isn’t just about the present; it also builds a historical context.

Key Components of Observability

Observability relies on three crucial components—tracing, logging, and metrics—which work together to provide a comprehensive view of system behavior.

Tracing

Tracing is like creating a digital map for every journey in your computer world. It helps you follow the path of data or requests, much like tracking a package as it moves from one place to another. When things slow down or go wrong, tracing acts like a detective, showing you exactly where the issue is happening. It’s a guide to give you insights into how everything is moving and helps you fix problems quickly.

Tracing allows you to pinpoint the exact step causing the problem, facilitating swift troubleshooting. Essentially, it’s a digital guide offering insights into the flow and performance of your processes.

Logging

Consider logging as the detailed diary of your digital journey. It diligently records significant events, errors, and warnings, creating a chronological record of your system’s activities. When issues arise, logs serve as a troubleshooting guide, offering context about what happened before, during, and after an event. Logs also act as an audit trail, ensuring accountability and compliance with regulations. They’re the historical narrative that helps you understand system behavior over time.

Metrics

Metrics are the numbers that tell you how well your computer is doing. They’re like the vital signs of a patient in a hospital. These numbers include information such as how fast your system responds to requests, how much memory it’s using, and if there are any errors. By keeping an eye on these metrics, you can catch potential issues before they become big problems. Metrics act like your system’s health report, giving you the data you need to keep it in top shape.
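
To make the three pillars a bit more concrete, here is a minimal, illustrative Python sketch using only the standard library; the endpoint name and the simulated work are made up. It emits log lines, records a latency metric, and tags everything with a trace ID so a single request can be followed across components:

import logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def handle_request(path):
    trace_id = uuid.uuid4().hex          # tracing: one ID follows the request everywhere
    start = time.perf_counter()
    logging.info("trace=%s start path=%s", trace_id, path)   # logging: what happened, and when

    time.sleep(0.05)                     # stand-in for real work (e.g., a database call)

    latency_ms = (time.perf_counter() - start) * 1000
    logging.info("trace=%s done path=%s latency_ms=%.1f", trace_id, path, latency_ms)
    return latency_ms                    # metrics: a number you can aggregate and alert on

if __name__ == "__main__":
    samples = [handle_request("/checkout") for _ in range(5)]
    print("avg latency_ms:", sum(samples) / len(samples))

In production, you would ship these signals to an observability backend rather than printing them, but the division of labor between traces, logs, and metrics stays the same.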

Holistic Understanding for Effective Troubleshooting and Optimization

Observability is more than seeing the data; it’s about harnessing that knowledge to improve your systems proactively. Here’s how a holistic understanding empowers you to troubleshoot effectively and optimize for success.

  1. Spot problems quickly: Tracing helps find issues in the system fast, like a digital detective pinpointing the exact trouble spots.
  2. Prioritize based on impact: Focus on critical issues first and leave minor hiccups for later to ensure smooth operations.
  3. Fine-tune performance continuously: Optimize resource allocation, configuration settings, and code based on data-driven decisions.
  4. Deliver seamless user experiences: Understand user journeys, identify pain points, and iterate based on real-time feedback.
  5. Pinpoint issues with precision: Identify the exact source of errors and performance bottlenecks.

Benefits of Implementing Observability

Using observability in your digital systems is like having a superpower. It makes everything work better and helps you fix problems quickly. Below are some of the benefits that make observability essential to implement in your applications.

Improved System Reliability

Embracing observability translates to a more reliable digital infrastructure.

  • Proactive issue detection: Observability allows for the early identification of potential issues before they escalate. It’s akin to having a warning system that spots anomalies, enabling proactive problem-solving.
  • Faster incident response: With observability, incident response becomes swift and precise. The ability to trace, log, and measure metrics in real time accelerates the identification and resolution of problems, minimizing the impact on users.

Enhanced Troubleshooting

Observability significantly elevates troubleshooting capabilities, fostering a deeper understanding of system intricacies.

  • Root cause analysis: Observability provides the tools for in-depth root cause analysis. Tracing, logging, and metrics work in tandem to uncover the underlying reasons for issues, aiding in the development of effective solutions.
  • Reduced downtime: By swiftly identifying and resolving issues, observability minimizes downtime. This reduction in system downtime ensures continuous service availability, contributing to a more resilient digital environment.

Optimized Resource Utilization

Observability facilitates a deep dive into resource metrics, allowing organizations to optimize resource utilization efficiently.

User Experience Enhancement

By providing insights into the user journey, observability helps you identify areas for improvement. This gives you the insight you need to enhance the overall user experience.

Difference Between Observability and Traditional Monitoring

Observability and monitoring are two closely related terms. You might think observability is similar to monitoring in that both examine the performance of your application. But under the hood, their differences provide valuable insights. This section serves as your decoder, clarifying the important distinctions between the two terms.

Monitoring as a Subset of Observability

Imagine observability as a vast ocean of functionality and knowledge. Monitoring dips its toe into that ocean—focusing on specific metrics and predefined thresholds—and captures only a fraction of the available data. Observability, on the other hand, dives deep, surfacing hidden patterns and signals and presenting a complete picture of the system’s health.

Comprehensive Insights vs. Surface-Level Data

Think of monitoring as a traffic light: red for errors, yellow for warnings, and green for all clear. Observability, however, is a detailed map. To put it simply, observability presents the logs, traces, and contextual information that help you understand the why behind the what. So, you know what went wrong and why. With monitoring alone, you know only that there’s an issue in the application while the root cause is still under investigation.

Proactivity vs. Reactivity

Monitoring is reactive, meaning it informs you when something’s gone wrong. Observability is proactive; it helps you anticipate and fix issues before they impact the user or system. It’s like having a weather forecast (observability) instead of just looking out the window (monitoring) when a storm hits.

Interconnected View vs. Isolated Checks

Observability links all the different parts of your system, giving you an interconnected view. It’s like seeing the entire ecosystem of a forest. Monitoring, on the other hand, might check individual trees without showing you the whole picture. Observability helps you understand how changes in one part affect the entire system.

In the realm of general observability concepts, where understanding the performance and health of IT environments is paramount, specialized tools like SolarWinds® Observability can ensure the optimal functioning of applications and infrastructure. In the next section, you’ll have a closer look at SolarWinds Observability, its different features, and capabilities.

SolarWinds Observability

SolarWinds Observability is a powerful platform designed to help you understand, monitor, and optimize your applications and systems. Imagine it as a friendly guide that not only tells you when something isn’t quite right but also explains why and helps you fix it before it becomes a big problem. Organizations love using SolarWinds because it gives their IT teams the backbone support they need to resolve issues quickly. It doesn’t just look at the surface but dives deep into the heart of your systems, giving you insights to keep everything running smoothly. Whether it’s spotting potential issues, understanding why something went wrong, or optimizing your system for peak performance, SolarWinds Observability has your back.

Features and Capabilities of SolarWinds Observability Platform

Explore the features that make SolarWinds Observability a standout solution for understanding and optimizing your digital applications.

  1. Metrics analysis: Monitor every vital sign of your system, like CPU, memory, network, databases, etc.
  2. Log detective: Scrutinize logs like a seasoned investigator, unearthing hidden clues and patterns.
  3. Trace the journey: Follow the path of each request, pinpointing bottlenecks and performance issues.
  4. Alerts on steroids: Receive smart, actionable alerts that cut through the noise and guide you to the problems that matter.
  5. Dashboards at your fingertips: SolarWinds Observability provides fully customizable dashboards to visualize your system’s health in real time, just the way you like it.
  6. Deep dives with insights: Analyze data comprehensively across all your tools, uncovering deeper insights and correlations with the broad set of analysis tools SolarWinds provides.
  7. Cloud-agnostic freedom: Monitor on-premises, cloud, or hybrid environments seamlessly, without boundaries.
  8. Openness for collaboration: Integrate with your favorite tools and workflows, building your perfect observability ecosystem.

How Does SolarWinds Enhance Observability in Diverse Environments?

SolarWinds Observability is an end-to-end solution that helps optimize and improve user experience for any digital application. Whether you’re running web servers in your own data center or managing applications across multiple cloud providers, SolarWinds Observability seamlessly integrates, gathering data and offering insights from every corner. This means no siloed information and no hidden weaknesses—just a complete picture of your entire digital world, regardless of its shape or size. Whether you’re navigating a cloud-based infrastructure, traditional on-premises servers, or a combination of both, SolarWinds smoothly integrates and speaks the language of each environment.

So, whether you’re a cloud-native startup or a seasoned on-premises veteran, SolarWinds Observability can be your trusted partner in navigating the ever-evolving digital landscape. Its adaptability ensures that you always have the insights you need to optimize performance, troubleshoot issues, and deliver exceptional user experience no matter where your systems reside.

Conclusion

On your journey through the world of observability, we’ve uncovered how to understand, troubleshoot, and optimize digital systems. Think of it as a strong backbone and toolkit for the entire application. From the basic principle of real-time insights to the capabilities of platforms such as SolarWinds Observability, you’ve learned what it takes to build a strong support system for your IT landscape. Remember, it’s not just about seeing what’s happening; it’s about understanding, fixing, and optimizing with ease. Whether you’re a tech wizard or just starting your digital adventure, this post has equipped you with the knowledge to navigate and excel in the ever-evolving realm of observability.


This post was written by Gourav Bais. Gourav is an applied machine learning engineer skilled in computer vision/deep learning pipeline development, creating machine learning models, retraining systems, and transforming data science prototypes into production-grade solutions.

Webpages Are Getting Larger Every Year, and Here’s Why it Matters
https://www.pingdom.com/blog/webpages-are-getting-larger-every-year-and-heres-why-it-matters/
Thu, 29 Feb 2024

Last updated: February 29, 2024

The average size of a webpage matters because it correlates with how fast users get to your content. People today have grown to expect good performance from the web. If your website takes more than 2.5 seconds to load, your users will probably never return. Further, the more data your webpage needs to download, the longer it will take—particularly on slow mobile connections. 

Balancing a rich experience with page performance is a difficult tradeoff for many publishers. We gathered statistics from the top 1000 websites worldwide to see how large their pages are. We’ll look at what’s driving this change and how you can track the size of your own company’s site. 

Recent trends 

According to the HTTP Archive, the current average page size of top sites worldwide is around 2,484 KB, and it has steadily increased over the years. This is based on measuring transferSize, which is the weight of the HTML payload as well as all of its linked resources (favicon, CSS files, images), once fully loaded (i.e., at the window.onload event). 
 

Graph of mean Kilobyte totals (April 2018 to July 2023) by The HTTP Archive 

With broadband speeds increasing yearly, publishers have added richer content to their webpages. This includes larger media such as images and video. It also includes increasingly sophisticated JavaScript behavior using frameworks like React and Angular. 

Additionally, the complete access-speed equation should also account for the average internet speed in the countries where your servers and users are located. No matter where your users are in the world, keeping your webpage sizes under the global average is more a necessity than just a good practice. 

Average Internet Speeds [Mbps] by Country in 2023 by Fastmetrics. © 2023 Fastmetrics, Inc. All rights reserved. 

Top 10 websites 

Here are the 10 most commonly visited websites globally in 2023: 

  1. google.com
  2. youtube.com
  3. facebook.com
  4. twitter.com
  5. wikipedia.org
  6. instagram.com
  7. reddit.com
  8. asuracans.com
  9. tiktok.com
  10. fandom.com

Imagine what these websites have to do in terms of load time, page speed, and file size to ensure they remain at the top. 

Actual webpage size 

With Google Chrome, it’s possible to manually check a webpage’s size transferred over the network when you load it. First, on the website whose page size you want to check, open the DevTools Network tab, and re-load the webpage. 

Chrome DevTools 

On the bottom right corner of the DevTools panel, you’ll see the amount of data transferred. This is the actual size of the webpage as transferred to your browser over the network. 

To perform the same task automatically for 1,000 sites, we wrote a Python scraper program (code on GitHub) that uses Selenium and Headless Chrome to calculate the actual total webpage sizes (including dynamic content loaded by JavaScript before the user starts interacting). 

Headless Chromium is a feature of Google’s browser starting with version 59. To use it, the chrome executable runs from the command line with the --headless option. We operate it programmatically with a special WebDriver for Selenium (Python-flavored in our case). We also use the Chrome DevTools Protocol to access Network.loadingFinished events via the RemoteWebDriver. For this, ChromeDriver runs standalone, listening by default on port 9515 on the local network, where we connect to it using Selenium. Additionally, performance logging is enabled in our code. 

All this has been provided in our sample code at github.com/jorgeorpinel/site-page-size-scraper.
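
For illustration only, here is a stripped-down sketch of the same idea (not the authors' actual script). It assumes Selenium 4 with Chrome and a matching ChromeDriver installed locally:

import json
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")                                   # run Chrome without a UI
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})  # enable DevTools performance logs

driver = webdriver.Chrome(options=options)
driver.get("http://google.com")

total_bytes = 0
for entry in driver.get_log("performance"):                 # Chrome DevTools Protocol events
    message = json.loads(entry["message"])["message"]
    if message["method"] == "Network.loadingFinished":
        total_bytes += message["params"].get("encodedDataLength", 0)

print(f"Transferred roughly {total_bytes / 1024:.0f} KB")
driver.quit()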

Context and limitations 

Some of the 1000 websites may be skipped by our tool, given the following rules: 

  • 10-second total page loading timeout; 
  • 10-second script execution timeout; 
  • Ignored when the response is empty; 
  • Scraper tool ran from the USA. (Some websites are not available or present different content when loaded from different locations.) 

Note: Some top webpages from other countries (e.g., China) didn’t load in the USA or redirected to global content landing pages. The correct way to measure them would be to load each from inside its country, but that goes beyond the scope of this article. 

Gathering the statistics 

We ran the tool providing a list of websites as its only argument: 

$ ./from_list.py 2023-10-04-alexa-topsites-1000.txt 

Loaded list of 1000 URLs: 
 Loading http://google.com... loadingFinished: 395332B, 1.96s 
 Loading http://youtube.com... loadingFinished: 1874222B, 3.16s 
 Loading http://facebook.com... loadingFinished: 1387049B, 1.21s 
 … 
 The average webpage size is 2.07MB from 892 processed websites... 

You can run the above script for yourself and then compare the results to manually loading those web pages with the DevTools. 

Optimize Large Websites 

A significant portion of your website’s audience, over one-third to be precise, will disengage if they encounter prolonged loading times for various elements such as icons, images, videos, GIFs, and other multimedia assets. Furthermore, nearly half of your visitors anticipate swift website interactions, expecting everything to unfold within a mere two seconds. This poses a substantial challenge, particularly when dealing with intricate animations, extensive JavaScript packages, and hefty media files. Therefore, it becomes imperative to employ strategies that optimize your website’s production bundle, making it as compact as possible for efficient data transmission. 

To delve deeper into this issue, let’s consider the user experience aspect. On average, users allocate approximately 5.94 seconds to scrutinize a website’s primary image. During this brief window, it is crucial to make a lasting impression. To achieve this, a careful selection of images that are not only relevant but also captivating is essential. It’s equally important to steer clear of distracting image sliders, as research suggests that users predominantly focus on the initial image in a slider, rendering subsequent slides largely unnoticed. 
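
Since hero images are often the heaviest assets on a page, recompressing and resizing them is one of the quickest wins. Below is a minimal sketch using the Pillow library (an assumption on our part; the filenames, dimensions, and quality setting are placeholders to adjust for your site):

from PIL import Image  # pip install Pillow

img = Image.open("hero-original.jpg")        # hypothetical source file
img.thumbnail((1600, 1600))                  # cap the longest side; aspect ratio is preserved
img.save("hero-optimized.jpg", quality=80, optimize=True)  # re-encode with lighter JPEG settings

print("Optimized image saved; compare file sizes before deploying.")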

Checking total size of your webpage with SolarWinds Pingdom 

A quick and easy way to check your total page size is using the SolarWinds® Pingdom® Website Speed Test. This free tool also uses real web browsers in dedicated servers distributed in different global locations to load and analyze website performance. It also adds significant insight into the composition of different aspects of a website’s performance. 

Basic performance analysis of google.com via Pingdom. 

The SolarWinds Pingdom online tool also separates page size by content type (images, scripts, etc.) and by domain (to differentiate resources coming from the same website, CDNs, third parties, etc.). 

Content size and Requests by content type 

SolarWinds Pingdom is an easy-to-use website performance and availability monitoring service that helps keep your websites fast and reliable. Signing up is free, and registered users can enjoy a myriad of tools such as page speed monitoring, real user tracking, root cause analysis, website uptime monitoring, nice-looking pre-configured reports, and a full REST API. 

To mention just one great feature among the SolarWinds Pingdom solution’s offerings, Real User Monitoring (RUM) is leveraged automatically to create greater insight into the regional performance of your website. Now you can get insight into how real users experience the performance of your site around the world. 

Experience Monitoring / Visitor Insights (RUM) map in Pingdom. 

Conclusion 

You already know your audience. Knowing your website’s page sizes will allow you to better control the performance and availability of your content and applications. Everyone loves a fast website! 

Sign up for a free trial of SolarWinds Pingdom to monitor your users’ digital experience, such as uptime monitoring, visitor insights, page speed monitoring, and immediate alerts. 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners. 

A Beginner’s Guide to Using CDNs
https://www.pingdom.com/blog/a-beginners-guide-to-using-cdns-2/
Wed, 28 Feb 2024

Last updated: February 28, 2024

Websites have become larger and more complex over the past few years, and users expect them to load instantaneously, even on mobile devices. The smallest performance drops can have big effects; just a 100ms increase in page load time can drop conversions by 7%. With competitors just a click away, organizations wishing to attract and retain customers need to make web performance a priority. One relatively simple method of doing this is by using content delivery networks (CDNs). 

In this article, we’ll explain how CDNs help improve web performance. We’ll explain what they are, how they work, and how to implement them in your websites.  

What is a CDN? 

A CDN is a distributed network and storage service that hosts web content in different geographical regions around the world. This content can include HTML pages, scripts, style sheets, multimedia files, and more. This lets you serve content from the CDN instead of your own servers, reducing the amount of traffic handled by your servers.  

CDNs can also act as a proxy between you and your users, offering services such as load balancing, firewalls, automatic HTTPS, and even redundancy in case your origin servers go offline (e.g., Cloudflare Always Online).  

Why Should I Use a CDN? 

CDNs offload traffic from your servers, reducing your overall load. They are also optimized for speed and, in many cases, offer faster performance, which can improve your SEO rankings. Since CDNs host data in centers located around the world, they literally move your content closer to your users. This can greatly reduce latency for some users and avoid downtime caused by data center outages or broken routes.  

How Do CDNs Work? 

A CDN consists of multiple data centers around the world called points of presence (PoPs). Each PoP is capable of hosting and serving content to users. CDNs route users to specific PoPs based on a number of factors, including distance, PoP availability, and connection speed.  

A PoP acts as a proxy between your users and your origin server. When a user requests a resource from your website such as an image or script, they are directed to the PoP. The PoP will then deliver the resource to the user if it has it cached.  

But how do you get the content to your PoP? Using one of two methods: pushing or pulling. Pushing requires you to send your content to the CDN beforehand. This gives you greater control over what content gets served by the CDN, but if a user requests content that you haven’t yet pushed, they may experience errors.  

Pulling is a much more automatic method, where the CDN automatically retrieves content that it hasn’t already cached. When a user requests content that isn’t already cached, the CDN pulls the most recent version of the content from your origin server. After a certain amount of time, the cached content expires and the CDN refreshes it from the origin the next time it’s requested.  
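
Conceptually, a pull-based edge cache behaves like the toy Python sketch below. Real CDNs are vastly more sophisticated; the origin URL and the 60-second TTL here are arbitrary, illustrative values:

import time
import urllib.request

CACHE = {}          # url -> (fetched_at, body)
TTL_SECONDS = 60    # how long a cached copy stays "fresh"

def get(url):
    now = time.time()
    cached = CACHE.get(url)
    if cached and now - cached[0] < TTL_SECONDS:
        return cached[1]                       # cache hit: served from the "PoP"
    with urllib.request.urlopen(url) as resp:  # cache miss: pull from the origin
        body = resp.read()
    CACHE[url] = (now, body)                   # store it for subsequent requests
    return body

page = get("https://example.com/")             # first call pulls from the origin
page = get("https://example.com/")             # second call is served from cache
print(len(page), "bytes,", len(CACHE), "cached object(s)")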

How Do I Choose a CDN? 

While CDNs work the same way fundamentally, they differ in a number of factors, including:  

Price 

Most CDNs charge based on the amount of bandwidth used. Some may also charge based on the number of cache hits (files served from cache), cache misses (retrievals from the origin), and refreshes. Others charge a fixed fee and allow a certain amount of bandwidth over a period of time. When comparing CDNs, you should estimate your bandwidth needs and anticipated growth to find the best deal.  
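
As a back-of-the-envelope way to compare offers, you can estimate monthly CDN bandwidth from your page weight and traffic. All numbers below are made up purely for illustration:

page_weight_mb = 2.2          # average page weight (MB)
monthly_pageviews = 500_000   # expected traffic
cache_hit_ratio = 0.9         # share of requests served by the CDN rather than the origin
price_per_gb = 0.08           # hypothetical CDN price (USD per GB)

cdn_gb = page_weight_mb * monthly_pageviews * cache_hit_ratio / 1024
print(f"~{cdn_gb:,.0f} GB/month via CDN, ~${cdn_gb * price_per_gb:,.2f} at ${price_per_gb}/GB")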

Availability and Reliability 

CDNs strive for 100% uptime, but perfect uptime is never guaranteed. Consider your availability needs and how each CDN supports them. Also, compare CDNs based on their PoP uptime rather than their overall uptime, especially in the regions you expect to serve. If possible, verify that your CDN offers fallback options such as routing around downed PoPs.  

PoP Locations (Regions Served) 

Depending on where your users are located, certain PoPs can serve your users more effectively. Choose a CDN that manages PoPs close to your users, or else you’ll miss out on many of the performance benefits that CDNs offer.  

How Do I Add a CDN to My Website? 

The process of adding a CDN to your website depends on where and how your website is hosted. We’ll cover some of the more common methods below.  

Web Hosting Provider 

If your website is hosted by a provider such as inMotion Hosting, HostGator, or 1&1, your provider may offer a CDN as a built-in or extra service. For example, Bluehost provides Cloudflare for free, enabled by default on all plans. You can still use a CDN if your host doesn’t explicitly support it, but it may fall under one of the following processes.  

Content Management System (CMS) 

Content management systems (CMSs) like WordPress and Squarespace often support CDNs through the use of plugins. For WordPress, Jetpack provides support for its own CDN automatically. Others such as W3TC, WP Super Cache, and WP Fastest Cache let you choose which CDN to direct users to.  

Self-Hosted 

Websites that you host yourself offer the greatest flexibility in choosing a CDN. However, they also require more setup. As an example, let’s enable Google Cloud CDN for a website hosted on the Google Cloud Platform (GCP).  

This example assumes you have a GCP account, a domain registered with a registrar, and a website hosted in Compute Engine, App Engine, or another GCP service. If you don’t already have a GCP account, create one and log into the Google Cloud Console.  

Step 1: Configure Your DNS Records 

Traditionally, the way to route your users to a CDN was to change the resource URLs in your website to point to URLs provided by the CDN. Most modern CDNs avoid this by managing your DNS records for you, letting you redirect users without requiring changes to your website.  

To configure Cloud DNS, view the Cloud DNS quickstart document and follow the instructions for creating a managed public zone. Don’t create a new record or a CNAME record yet, since we don’t yet have an IP address to point the DNS record to. In the screenshot below, we created a new zone called mydomain-example for the domain subdomain.mydomain.com.  

Creating a DNS zone in Cloud DNS. © 2019 Google, LLC. All rights reserved. 

After creating the zone, update your registrar’s domain settings to point to the Cloud DNS name servers. This will let you manage your domain records through Cloud DNS instead of through your registrar. For more information, visit the Cloud DNS documentation page on updating your domain’s name servers or refer to your registrar’s documentation.  

Step 2: Enable Cloud CDN  

With DNS configured, we now need to enable the CDN itself. With Cloud CDN, a load balancer must be selected as the origin. If you don’t already have a load balancer, you can follow these how-to guides to create one. For a standard HTTP/S website, follow this guide for specific instructions.  

With your load balancer created, follow these instructions to enable Cloud CDN for an existing backend service. Once your new origin is created, select it from the origin list. You will need the IP address displayed in the Frontend table to configure Cloud DNS, so make sure you copy it or keep this window open. The following screenshot shows an example Cloud CDN origin:  

Viewing origin details in Cloud CDN. © 2019 Google, LLC. All rights reserved. 

After retrieving your front-end IP address, return to Cloud DNS and select your zone. Create a new A record to point the domain to your origin’s IP address. You can find instructions on the Cloud DNS quickstart documentation page under creating a new record. This is shown in the screenshot below. Optionally, you can also create a CNAME record to redirect users from a subdomain, such as www.yourdomain.com.  

Creating a new DNS record set in Cloud DNS. © 2019 Google, LLC. All rights reserved. 

Step 3: Configure Your Web Server 

To ensure your content is properly cached, make sure your web server responds to requests with the correct HTTP headers. Cloud CDN only caches responses that meet certain requirements, some of which are specific to Cloud CDN. You will need to view your web server’s documentation to learn how to set these headers. Apache and Nginx provide guides with best practices for configuring caching.  
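
As one illustration of what setting "the correct HTTP headers" can look like in practice, here is how a small Flask application might attach a Cache-Control header to its responses. Flask is our assumption here, and the max-age value is arbitrary; consult the Cloud CDN documentation for the exact headers it requires:

from flask import Flask  # pip install flask

app = Flask(__name__)

@app.route("/static-page")
def static_page():
    return "<h1>Hello from the origin</h1>"

@app.after_request
def add_cache_headers(response):
    # Tell the CDN (and browsers) this response may be cached for an hour.
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response

if __name__ == "__main__":
    app.run(port=8080)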

Step 4: Upload Content to the CDN 

Most website operators don’t need to do anything to upload content, because the CDN will automatically cache resources from your server as people access your site. This is also known as the “pull method”. Alternatively, Google does allow you to push specific content you want to host by manually uploading it.  

How Does a CDN Impact Performance? 

To demonstrate the performance benefits of CDNs, we ran a performance test on a website hosted on the Google Cloud Platform. The website is a static single page website created with Bootstrap and the Full Width Pics template, and consists of seven high-resolution images, courtesy of NASA/JPL-Caltech. The server is a Google Compute Engine instance located in the us-east1-b region running Nginx 1.10.3.  

We configured the instance to allow direct incoming HTTP traffic. We also set up Google Cloud CDN for the instance. You can see a screenshot of the web page and networking timing of the site below using a waterfall chart.  

A waterfall chart of the test site using Chrome DevTools. © 2019 Google, LLC. All rights reserved. 

We then ran a performance test using SolarWinds® Pingdom®. Pingdom provides a page speed test that measures the time needed to fetch and render each element of a web page. We created two separate checks to test the origin server and CDN separately, then compared the results to see which method was faster. To maximize latency, we ran both checks from the Pingdom Eastern Asia location.  

Origin Results 

Running a speed test on the origin server resulted in a page load time of 3.68 seconds. The time to download the first byte from the server (shown as a blue line) was 318 milliseconds, meaning users had to wait one-third of a second before their device even began receiving content. Rendering the page (indicated by the orange line) took an additional 679ms, meaning users had to wait almost a full second to see anything on their screen. By the time the page finished rendering (green line), users had been waiting more than 3.5 seconds.  

Most of this delay was due to downloading the high-resolution images, but a significant amount of time was spent connecting to the server and waiting for content to begin transferring.  

Page load timeline when connecting to our test origin server. 

CDN Results 

With a CDN, we immediately saw a substantial improvement in load time. The entire page loaded in just 1.04 seconds, more than two seconds faster than the origin server. The most significant change is in the time to first byte (blue line), which dropped to just 7ms. This means our users began receiving content almost immediately after connecting to the CDN.  

Page load timeline when connecting to Google Cloud CDN. 

While there wasn’t a significant improvement in the DOM content load time (orange line), the connection and wait times dropped significantly. We also saw content begin to appear on the page as early as 0.5 seconds into the page load time. We can confirm this by looking at the film strip, which shows screenshots of the page at various points in the loading process. This is compared to the 1.5 seconds it took for the origin server to begin rendering content.  

Comparing the page rendering time with a CDN (bottom) and without a CDN (top). 

New Advancements in CDN Technology 

Over the years, CDN technology has seen significant advancements to cater to the growing demands of faster and more secure content delivery. Some of the notable advancements include:  

Edge Computing: By processing data closer to the user, edge computing reduces latency and improves content delivery speeds.  

5G Networks: The rollout of 5G networks globally has enhanced the performance of CDNs, providing faster and more reliable content delivery.  

These advancements continue to shape the CDN landscape, offering improved performance and new capabilities for website owners.  

Conclusion 

CDNs offer a significant performance boost without much effort on the part of organizations. The biggest challenge is finding out which CDN provider to choose. If you’re not sure which provider will benefit you the most, we benchmarked four of the most popular providers (Cloudflare, Fastly, AWS CloudFront, and Google CDN). While performance plays a major role in each provider’s viability, we also encourage you to factor in additional features, security, and integrations offered by the CDN.  

After setting up your CDN, you can check the performance difference using SolarWinds® Pingdom®. In addition to running one-time tests, you can use Pingdom to schedule periodic checks to ensure your website is always performing at its best. In addition, you can use Pingdom to constantly monitor your website’s availability and usability. Sign up for a Pingdom 30-day free trial.  

The Five Most Common HTTP Errors According to Google
https://www.pingdom.com/blog/the-5-most-common-http-errors-according-to-google/
Wed, 28 Feb 2024

Last updated: February 28, 2024

Sometimes when you try to visit a web page, you’re met with an HTTP error message. It’s a message from the web server that something went wrong. In some cases, it could be a mistake you made, but often, it’s the site’s fault. 

Each type of error has an HTTP error code dedicated to it. For example, if you try to access a non-existent page on a website, it leads to a 404 error. 

Now, you might wonder, which are the most common HTTP errors that people encounter when they surf the Web? That is the question we’ll answer in this article. 

Google to the Rescue 

We asked Google “5 most common HTTP errors” and this is what it gave us: 

This is the result of millions of web users telling us themselves what errors they encounter the most.  

People who encounter errors when they visit websites want to know more about that error. They’ll probably go to the nearest search engine to do so. 

For this, Google’s search statistics should give us a pretty good idea of how the most common HTTP errors compare amongst themselves. It’s a great tool for estimating the “popularity” of search terms. 

Using Google Insights for Search, we went through the five most common HTTP error codes above and compared them against each other. For this comparison, we chose the location “worldwide”. The period included all searches in 2023, and the type of search was limited to web search. When the dust settled from this little shootout, this is what we had: 

Note: Read our analysis on how Google collects data about the Internet and its users to understand better how Google works. 

The Top Five Errors, According to Google 

Here they are, listed and explained in reverse order, the five most common HTTP errors. Drumroll, please… 

5. HTTP Error 403 (Forbidden) 

This error is similar to the 401 error, but note the difference between unauthorized and forbidden. In this case, no login opportunity was available. This can happen, for example, if you try to access a (forbidden) directory on a website. 

To resolve an HTTP 403 error, the client or user should typically: 

  • Ensure they are using valid and authenticated credentials if required. 
  • Confirm they have the necessary permissions to access the resource. 

  • If IP blocking is suspected, ensure their IP address is not restricted. 

4. HTTP Error 401 (Unauthorized) 

This error happens when a website visitor tries to access a restricted web page but isn’t authorized to do so. The reason for this error is usually because of a failed login attempt. However, there can be more than one reason why this error occurs. Let’s look at the common ones: 

  1. Logging In: When a user tries to access a protected resource without being logged in, the server responds with a 401 error, prompting the user to provide their login credentials. 
  2. Client Authentication Required: This happens when the client requests a certain resource from the server without any authentication. In this case, the server indicates that the client must authenticate itself to get the requested resource by throwing the 401 error. This can happen if the client needs to provide a username and password or some other form of authentication token that is missing from the request. 
  3. Lack of Credentials: Another common reason for this error is that the client hasn’t provided valid authentication credentials in the request. These credentials are typically sent in the request header, for instance via Basic Authentication (username and password), a Bearer Token (OAuth 2.0), or API keys. 
  4. Invalid Credentials: In some cases, even when the client does provide credentials, they could be invalid or expired. The server will then respond with a 401 error. In this case, the client should reauthenticate and send the request again with valid credentials. 
  5. Insufficient Permissions: Sometimes, even with valid authentication credentials, the server may return a 401 error. This may be because the client doesn’t have the necessary permissions to access the resource. To get past this, the client should request the necessary permissions. 

3. HTTP Error 404 (Not Found) 

Most people are bound to recognize this one. A 404 error happens when you try to access a resource on a web server (usually a web page) that doesn’t exist. Some reasons for this can be a broken link, a mistyped URL, or that the webmaster has moved the requested page somewhere else (or deleted it). To counter the ill effects of broken links, some websites set up custom pages for them (and some of those are really cool). 

Some of the common causes for this error include: 

  • The URL in the client’s request is misspelled or contains a typographical error. 
  • The requested resource has been deleted or moved to a different location. 
  • The server’s configuration is incorrect, preventing it from serving the requested resource. 
  • The resource never existed on the server. 

Some common techniques can help in resolving an HTTP 404 error quickly: 

  • Double-check the URL to ensure there are no typographical errors. 
  • Verify that the resource being requested actually exists on the server and is accessible at the specified URL. 
  • If the resource has been moved, update the URL accordingly. 
  • If the error persists on a website, the website owner or administrator should ensure that their server’s configuration is correct and that any deleted or moved resources are appropriately redirected or handled. 

If you aren’t managing 404 errors on your website properly, it can have a negative effect on your website’s search engine ranking. It can also hamper the overall user experience of your website. Ensure that you have a dedicated “Not Found” page that appears to your users and lets them know that the web page they’re trying to access is not available. 

2. HTTP Error 400 (Bad Request) 

This is basically an error message from the web server telling you that your application (e.g., your web browser) accessed it incorrectly or that the request was somehow corrupted on the way. This can happen due to one or more of the following reasons: 

  1. Malformed Request: The most common reason for a 400 Bad Request error is that the request is malformed. The client’s HTTP request does not conform to the HTTP protocol’s standards. This could be due to the following: 
  • Missing or improperly formatted headers. 
  • Invalid or missing request parameters. 
  • Unsupported HTTP methods (e.g., using POST when GET is expected). 
  • Incorrect content length or content type headers. 
  2. Security Concerns: A 400 error may be used by the server to protect against potential security threats. For instance, requests with extremely long URLs could be used for denial-of-service attacks. 

Some common techniques can help in resolving an HTTP 400 error quickly (see the sketch after this list): 

  • Ensure the request’s syntax adheres to the HTTP standard. 
  • Verify that all required headers and parameters are included and correctly formatted. 
  • Ensure the correct HTTP method (GET, POST, PUT, DELETE, etc.) is used to access the resource. 
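
For instance, here is a hedged sketch using Python's requests library (the endpoint and payload are hypothetical). Letting the library serialize the body and set the Content-Type header avoids two of the most common causes of a 400:

import requests  # pip install requests

# json= serializes the payload and sets "Content-Type: application/json" for us,
# avoiding the malformed-body and missing-header causes described above.
resp = requests.post("https://api.example.com/items",   # hypothetical endpoint
                     json={"name": "widget", "qty": 3},
                     timeout=10)

if resp.status_code == 400:
    print("Bad request; server said:", resp.text)        # many APIs explain what was wrong
else:
    resp.raise_for_status()                               # surface any other error
    print("Created:", resp.json())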

1. HTTP Error 500 (Internal Server Error) 

The description of this error pretty much says it all. It’s a general-purpose error message for when a web server encounters some form of internal error. For example, the web server could be overloaded and therefore unable to handle requests properly. 
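
Because many 500s are transient (an overloaded server may recover seconds later), clients commonly retry them with backoff. Here is a minimal sketch using the requests library and urllib3's Retry helper; the URL, retry count, and backoff factor are illustrative only:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(total=3,                        # up to 3 retries...
              backoff_factor=1,               # ...waiting roughly 1s, 2s, 4s between attempts
              status_forcelist=[500, 502, 503, 504])

session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=retry))

resp = session.get("https://example.com/", timeout=10)
print(resp.status_code)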

There are various reasons why an HTTP 500 error might occur, including: 

  • Software bugs or programming errors in the server code. 
  • Overloaded or under-resourced server infrastructure. 
  • Issues with the server’s configuration. 
  • Database connection problems. 
  • Unexpected exceptions or crashes in server applications. 

Resolving an HTTP 500 error typically involves troubleshooting and debugging on the server side. Actions that can be taken include: 

  • Checking server logs for detailed error information. 
  • Identifying and fixing software bugs or configuration issues. 
  • Ensuring that the server infrastructure is properly configured and adequately resourced. 
  • Addressing any database or application-specific problems. 

HTTP Error Code Cheat Sheet 

When you’re faced with HTTP errors, a cheat sheet can come in handy for figuring out which HTTP error you’re dealing with and what it really means. 

As a primer, any HTTP status code in the form of 2xx is not erroneous; instead, it indicates a successful request and response. We can say the same thing for 3xx, although from an end-user perspective it might seem like an error code. Any status code in the form of 4xx or 5xx is definitely erroneous. 

Here’s the cheat sheet that summarizes this: 
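
In code form, the rule boils down to checking the first digit of the status code. A purely illustrative Python helper:

def classify(status_code):
    """Map an HTTP status code to its general class."""
    classes = {
        1: "Informational",
        2: "Success (not an error)",
        3: "Redirection (usually not an error)",
        4: "Client error",
        5: "Server error",
    }
    return classes.get(status_code // 100, "Unknown")

for code in (200, 301, 400, 401, 403, 404, 500):
    print(code, "->", classify(code))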

 Some Additional Comments on Website Errors 

We want to point out that all the error messages above are errors reported by the web server back to the visitor (that is the nature of HTTP errors; they come from the web server you are accessing). 

Needless to say, if you can’t access a website at all—for example, if its network is down—you won’t get an HTTP error back. Your connection attempt will simply time out. 

We should add that the results from Google actually match our own data quite well. As you might know, we here at SolarWinds® Pingdom® monitor websites and servers for a living (you can set up your own account by clicking here). When helping customers with problems, we have often come upon the dreaded (and pretty vague) HTTP error 500, “internal server error.” 

If you want to deliver a top-notch experience for your website users, learn how to analyze and improve page load performance.

Page Load Time vs. Response Time – What Is the Difference?
https://www.pingdom.com/blog/page-load-time-vs-response-time-what-is-the-difference/
Wed, 28 Feb 2024

Last updated: February 28, 2024

Page load time and response time are key metrics to monitor, and they can give you an in-depth understanding of how your website is performing. However, the difference between page load time and response time isn’t immediately obvious, and neither are the benefits of tracking them independently.  

In this article, we define page load time and website response time. This will help you figure out your overall web response times. We’ll also discuss what monitoring these metrics can teach you about your website and look briefly at how to improve response and loading times on your site so you can fully optimize your website for speed.  

Response Time 

Response time refers to the time it takes for an inquiry from a user to receive a response from a server. Response time can be broken down into five parts:  

  • DNS lookup—This is the time it takes to resolve the hostname to its IP address. If the DNS lookup time is high, this may indicate an issue with the DNS servers. 
  • Connection time—Referring to the time it takes to connect to the server, these results are generally used to identify network latency. High connection times are often caused by network or routing issues. 
  • Redirect time—This refers to the time it takes for any necessary HTTP redirects and any extra DNS lookups or connection time during this process. 
  • First byte— This refers to the time it takes for the first byte of data to transfer. Slow times here can signal issues with server load. 
  • Last byte—This refers to the time it takes to download the final server response. A problem here indicates a bandwidth issue, so you may need to upgrade your bandwidth to increase download speed. 

Response time is often defined as the time to first byte (TTFB), which is the time it takes for the browser to receive the first byte of data being transferred from the server.  
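
If you want a quick, rough TTFB reading without any tooling, you can time it yourself with Python's standard library. This is a simplification: it measures from just before the request is sent until the status line and headers have been read, and it doesn't break out DNS, connection, or redirect time; the hostname is a placeholder:

import time
from http.client import HTTPSConnection

host = "www.example.com"             # hypothetical host to test
conn = HTTPSConnection(host, timeout=10)

start = time.perf_counter()
conn.request("GET", "/")
resp = conn.getresponse()            # returns once the status line and headers have arrived
ttfb_ms = (time.perf_counter() - start) * 1000

body = resp.read()                   # reading the rest approximates the "last byte" step
total_ms = (time.perf_counter() - start) * 1000
conn.close()

print(f"TTFB ~ {ttfb_ms:.0f} ms, full response ~ {total_ms:.0f} ms, {len(body)} bytes")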

What is Response Time in Networking? 

Response time in networking refers to the time it takes for a system or network component to respond to a request. It encompasses various latency factors, including transmission delays, processing delays, and queuing delays. In networking, response time is crucial for assessing network performance.  

Good API Response Time 

A response time of 600 ms is generally considered good for a website’s initial server response. However, faster response times, ideally in the range of 100 to 500 ms, are often targeted for optimal user experience. A good API response time is typically below 100 ms. Fast API responses are crucial for real-time applications and a smooth user experience.  

Tracking Response Time with Pingdom 

With SolarWinds® Pingdom®, you can track response times via uptime monitoring. The uptime monitoring feature synthetically tests your website from more than 100 data centers located around the globe, reporting your site’s response times and alerting you immediately if any outages occur. You can also monitor your website from specific regions, such as Europe, North America, and Australia.  

Here at Pingdom, however, to ensure response times are as accurate as possible, we calculate response time in three parts:  

  • Time to first byte 
  • Time to receive headers 
  • Time to load the HTML of the site 

If you would prefer a response time recorded as TTFB, then you can use a ping check.  


Load Time

Page load time is a different but equally important metric. Load time is a simpler concept referring to the time it takes to download and display an entire individual webpage. This includes all page elements, such as HTML, scripts, CSS, images, and third-party resources.  

Here’s a typical request-response process contributing to load time:  

  • A user enters a URL and the browser makes a request to the server 
  • The web server processes the request and sends a response back to the browser 
  • The browser starts receiving page content 
  • The entire page loads and becomes available to the user to browse 

Load time is the elapsed time between a user submitting a URL and the entire page becoming available on the browser for the user to view. Consequently, you will find load times are often much higher than website response times.  

How to Find Website Response Time? 

You can find out the response time of a website using online tools and browser developer tools. Tools like Google’s PageSpeed Insights, GTmetrix, and Pingdom offer insights into response times and overall page performance. To check response times using browser developer tools, refer to Google’s DevTools documentation.  

Using Pingdom 

With Pingdom, you can monitor your page load times in two ways:  

  • Webpage load speed monitoring—Granular testing reports the size and load times of every element on your website, from HTML and CSS to fonts, images, and final load times. Additionally, it provides suggestions on how to improve load speed. 
  • Real user monitoring—Pingdom tracks real users on your site and reports the actual load times your visitors experience. You can use this data to better understand how load times for your site differ depending on factors like location, device, or browser. You can also track your load times over a period of up to 400 days, helping you see if any optimization strategies you have implemented are impacting site speed. 

Response Time vs. Page Speed 

Response time refers to the time it takes for a server to respond to a request, usually measured in milliseconds. It focuses on the server’s initial response. Page speed, on the other hand, encompasses the entire time it takes for a webpage to load fully for the user. It includes response time but also considers the time it takes to download all assets (HTML, images, scripts, etc.) and render the page in the browser.  

Page Load Time vs. Response Time—Which One Should You Monitor? 

The answer is both, of course. Page load time and response time are key metrics you should always track because they give you insight into the user experience of your visitors.  

In 2024, when network penetration and speed are higher than ever, it’s critical for websites to load in under 2-3 seconds. Any website that takes more than 3 seconds to load will lose out on potential customers. Moreover, websites on the first page of Google’s results load in less than 2 seconds. A good website load time is generally considered to be under 3 seconds, and faster load times of 1 to 2 seconds are often targeted for optimal user satisfaction and SEO performance.  

The longer it takes for your website to load, the more discontented your users are and the higher the bounce rates. Beyond the 2-3 second mark, every additional second leads to roughly 16% dissatisfaction among your visitors and decreases the conversion rate by around 4%. Almost two-thirds of your visitors will decide whether to buy something from your website based on its loading time.  

You want to ensure your pages are loading quickly and efficiently. Response time and page load time together determine how responsive your website feels to visitors. By monitoring both, you can quickly identify and fix issues as they arise, minimizing user disruption.  

Slow Response Times 

Slow response times can indicate many other intricate and specific issues:  

  • Struggling server—If your response times are consistently high, this may indicate your server is overloaded. Contact your web host—your response time data can help them solve the problem—but you may need to consider moving to a VPS or dedicated server package. 
  • Bandwidth—Bandwidth limitations can also contribute to slow response times. Contact your hosting provider and discuss the problem—it may be time to upgrade hosting plans. Investing in a high-quality content delivery network (CDN) service might also be a good option. 
  • Downtime—There’s often a direct correlation between high response times and downtime. If your response time is high, monitor your uptime to ensure your site isn’t suffering from ongoing outages. 

If your website goes down, Pingdom will run additional tests and perform a root cause analysis. This will help you see what has gone wrong and where, allowing you to quickly address the issue. You can also run a traceroute on any issue to inspect the network path and examine server response codes.  

Slow Load Times 

Many things can cause high load times. Pingdom provides in-depth reports on your page loading times, allowing you to drill down and identify the components causing issues.  

  • Performance grades—Pingdom performance grades are based on page load time and give you an overview of how your site is performing and where it needs to improve. 
  • Element size—Pingdom monitors each page element (including HTML, JavaScript, CSS, images, and more) and reports individual file sizes. This will help you identify any bloated components you need to optimize. Depending on the issue, you may need to compress your images, remove unnecessary custom fonts, gzip your files, or implement other optimization strategies. 
  • Element loading times—Pingdom also reports loading times for individual page elements and the order in which they load. Pingdom takes screenshots of the loading process at 50-millisecond intervals and presents them in a filmstrip so you can analyze what’s happening as your site loads and identify bottlenecks affecting webpage load speed. Depending on the results, you may need to look at altering the order in which scripts and styles load on your site. 

Final Thoughts on Page Load Time vs. Response Time 

Monitoring page load time and response time will give you key insights into how your website is performing and the user experience of your visitors. It will also help you evaluate your overall website’s web response times. Once you’ve set up website monitoring, analyze and use the data to make necessary improvements to your website. This will help ensure your website is consistently functioning at its optimal level.    

The post Page Load Time vs. Response Time – What Is the Difference? appeared first on pingdom.com.

Can gzip Compression Really Improve Web Performance? https://www.pingdom.com/blog/can-gzip-compression-really-improve-web-performance/ https://www.pingdom.com/blog/can-gzip-compression-really-improve-web-performance/#respond Mon, 26 Feb 2024 14:56:58 +0000 https://royal.pingdom.com/?p=27320 Last updated: February 26, 2024 The size of the web is slowly growing. Over the past decade, the average webpage weight grew by 356%, from about 484 KB to 2.205 MB. Considering 800 KB was the average size of a website in 2012, that’s an enormous difference. While it’s true that the global average internet […]

The post Can gzip Compression Really Improve Web Performance? appeared first on pingdom.com.

Last updated: February 26, 2024

The size of the web keeps growing. Over the past decade, the average webpage weight grew by 356%, from about 484 KB to 2.205 MB. Considering 800 KB was the average size of a webpage in 2012, that's an enormous difference. While it's true that the global average internet speed is increasing, users with slow, limited, or unreliable internet access often end up waiting. 

The question is, how do we keep websites fast even as they get bigger and bigger? One answer is in data compression using gzip. The good news is that it’s easy to enable and helps give your site an instant boost. We’ll show you how. 

What Exactly is gzip? 

Gzip is a data compression algorithm capable of compressing and decompressing files quickly. The name also refers to two other technologies: the software that compresses and decompresses files and the format in which those files are stored. Gzip can compress almost any file type, from plain text to images, and is fast enough to compress and decompress data on the fly. 

Using gzip on the Web 

Web servers use gzip to reduce the total amount of data transferred to clients. When a browser with gzip support sends a request, it adds “gzip” to its Accept-Encoding header. When the web server receives the request, it generates the response as normal and then checks the Accept-Encoding header to determine how to encode the response. If the server supports gzip, it uses gzip to compress each resource. It then delivers the compressed copies of each resource with an added Content-Encoding header, specifying that the resource is encoded using gzip. The browser then decompresses the content into its original uncompressed version before rendering it to the user. 
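To make that negotiation concrete, here's a small illustration (ours, with a placeholder URL) that sends a gzip-capable request and prints how the server encoded the response:

# The requests library decompresses the body transparently, but the
# Content-Encoding header still shows what was sent over the wire.
import requests

resp = requests.get(
    "https://example.com/",               # placeholder URL
    headers={"Accept-Encoding": "gzip"},  # advertise gzip support
)
print(resp.headers.get("Content-Encoding", "identity"))  # e.g. "gzip"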

However, this comes at a cost. Compression is a CPU-intensive process, and the more you compress a file, the longer it takes. Because of this, gzip offers a range of compression levels from 1 to 9; 1 offers the fastest compression speed but at a lower ratio, and 9 offers the highest compression ratio but at a lower speed. The gzip application uses level 6 by default, favoring higher compression over speed. Nginx, on the other hand, uses level 1, favoring higher speeds over file size savings. 
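To get a feel for that trade-off, here's a small experiment using Python's built-in gzip module, which wraps the same DEFLATE algorithm; the sample payload is made up and your numbers will vary.

import gzip
import time

# A repetitive, text-like payload roughly the size of a large stylesheet.
data = ("body { margin: 0; padding: 0; } " * 5000).encode()

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(data, compresslevel=level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes "
          f"in {elapsed_ms:.2f} ms")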

About 52% of websites use gzip, and nearly 89% use some form of compression, while nearly all modern browsers and web servers support it. The entire process is transparent to the user, fast enough to run on almost any device, and in some cases can reduce the size of a resource by 72%. 

Adding gzip to Your Website

There are two ways to compress web content: dynamically and statically. Dynamic compression compresses files when they’re requested by the user and is the default approach used by most web servers. Dynamic compression is useful for content that changes frequently, such as application-generated web pages. 

On the other hand, static compression compresses each file in advance and delivers this pre-compressed version when the original file is requested. Files that don’t change frequently—such as JavaScript, CSS, fonts, and images—benefit the most from static compression since they only need to be compressed once. This saves CPU time at the cost of a slightly longer deployment. 

Nginx 

Nginx supports gzip through the ngx_http_gzip_module module. 

Dynamic Compression 

To enable dynamic compression, just add gzip on; to your global, site, or location configuration block. The gzip module supports a number of different configurations, including the type of files to compress, the compression level, and proxying behavior. You can also set a minimum required file size, which prevents lower compression ratios or even larger file sizes for smaller files. 

For example, the following compresses HTML, CSS, and JS files larger than 1.4 KB using a compression level of 6, while allowing compression for all proxied requests: 

gzip on;                                               # enable dynamic compression
gzip_types text/html text/css application/javascript;  # MIME types to compress
gzip_min_length 1400;                                   # skip responses under 1.4 KB
gzip_comp_level 6;                                      # trade a little CPU for smaller files
gzip_proxied any;                                       # compress proxied requests too

Preventing Certain File Types from Being Compressed 

Several file formats—particularly image formats such as JPG, PNG, and GIF—are already compressed using their own algorithms. This is also true for many audio and video formats. Not only would these not benefit from gzip, but their sizes could actually increase. This is why gzip_types is limited to text-based files, since they benefit the most from compression. 

Supporting Proxies 

In some cases, proxy servers (e.g., Content Delivery Networks) can interfere with how gzipped content is delivered. Some proxies might cache gzipped resources without also caching their Content-Encoding, or even try to re-compress compressed content. The Vary HTTP response header specifies how proxies and caches handle compressed content and should be enabled whether dynamic or static compression is enabled. You can do this by adding gzip_vary on; to your configuration. 

Static Compression 

Static compression is available through the ngx_http_gzip_static_module module. This module isn't built by default, so you will need to build Nginx with the --with-http_gzip_static_module parameter provided. 

Once it’s built, you can enable it by using gzip_static on; in place of gzip on; in your configuration. 

Before you can use static compression, you will need to create a gzipped copy of each file you want to serve. When a client requests the original file, Nginx checks whether a compressed version is available by appending ".gz" to the original file name. If the compressed version isn't available, or if the client doesn't support gzip, Nginx delivers the uncompressed version. 
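The pre-compression step itself can be as simple as the sketch below; the document root and extension list are assumptions you would adapt to your own site, and gzip_static then picks up the .gz siblings automatically.

# Write a .gz sibling next to each text-based asset so Nginx's gzip_static
# module can serve the pre-compressed copy.
import gzip
from pathlib import Path

WEB_ROOT = Path("/var/www/html")               # assumed document root
EXTENSIONS = {".html", ".css", ".js", ".svg"}  # text-based assets only

for path in WEB_ROOT.rglob("*"):
    if path.is_file() and path.suffix in EXTENSIONS:
        target = path.parent / (path.name + ".gz")
        # Compress once at the highest level; requests pay no CPU cost later.
        target.write_bytes(gzip.compress(path.read_bytes(), compresslevel=9))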

Apache 

Apache supports gzip through the mod_deflate module. To load the plugin, add the following line to your Apache configuration file: 

LoadModule deflate_module modules/mod_deflate.so 

Dynamic Compression 

To enable dynamic compression, add SetOutputFilter DEFLATE to the section you want to configure. As with Nginx, you can enable gzip for the entire web server or for a specific configuration block. 

Apache also supports compression for certain file types, setting compression levels, and managing proxy settings. For example, you can limit compression to HTML files by using the AddOutputFilterByType directive: 

# Only compresses HTML files 
AddOutputFilterByType DEFLATE text/html 

Alternatively, you can compress all but certain file types using the SetEnvIfNoCase directive: 

# Compresses all file types except GIF, JPG/JPEG, and PNG 
SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip 

You can learn more about mod_deflate's options in the Apache documentation. 

Static Compression 

mod_deflate already provides support for pre-compressed files. However, it requires some additional configuration using mod_rewrite. The result is essentially the same as Nginx: Apache appends “.gz” to the original filename to find the pre-compressed version and serves it if it exists. This example uses this method to serve pre-compressed CSS files: 

RewriteCond "%{HTTP:Accept-encoding}" "gzip" 
RewriteCond "%{REQUEST_FILENAME}\.gz" -s 
RewriteRule "^(.*)\.css" "$1\.css\.gz" [QSA]

Benchmarking gzip Using Static and Dynamic Compression

To show the difference between compressed and uncompressed websites, we ran a page speed test on a website with three different configurations: one with compression enabled, one with compression completely disabled, and one serving only pre-compressed content. 

We created a basic website using Hugo and hosted it on an f1-micro Google Compute Engine instance running Nginx version 1.10.3 on Debian 9. For the gzip-enabled versions, we used the default settings for both Nginx and the gzip command-line application. For the static compression test, we only compressed the CSS, font, and JavaScript files. 

To run the test, we used a recurring page speed check to contact the site every 30 minutes. After four runs, we reconfigured and restarted the Nginx server for the next test. We dropped the first run to allow time for the Nginx server to warm up. We then averaged the remaining results and took a screenshot of the final test’s Timeline. 

Since the Debian Nginx package doesn’t have built-in support for static compression, we built Nginx from the Debian source with the module enabled. We verified that Nginx was using static compression and not dynamic compression by using strace to see which files were being accessed: 

# strace -p 3612 2>&1 | grep gz
[pid 3612] open("/var/www/html/css/bootstrap.min.css.gz", O_RDONLY|O_NONBLOCK) = 8
[pid 3612] open("/var/www/html/css/font-awesome.css.gz", O_RDONLY|O_NONBLOCK) = 8
[pid 3612] open("/var/www/html/css/custom.css.gz", O_RDONLY|O_NONBLOCK) = 8
[pid 3612] open("/var/www/html/js/jquery-1.11.3.min.js.gz", O_RDONLY|O_NONBLOCK) = 8
[pid 3612] open("/var/www/html/js/bootstrap.js.gz", O_RDONLY|O_NONBLOCK) = 8

No Compression Enabled

With no compression, the web server transferred 445.57 KB and took 329 ms to load.

Content Type    Size
HTML            5.15 KB
Images          119.82 KB
CSS             159.06 KB
JavaScript      161.54 KB
Total           445.57 KB
Timeline with no compression

Dynamic Compression Enabled

With dynamic compression, the web server transferred 197.6 KB and took 281 ms to load. The most significant savings came from the CSS and JavaScript files, with the size of the CSS files alone dropping by over 130 KB.

Content Type    Size
HTML            2.01 KB
Images          119.82 KB
CSS             28.86 KB
JavaScript      46.94 KB
Total           197.6 KB
Timeline with dynamic compression enabled.

Static Compression Enabled

With static compression, the web server transferred 197.2 KB and took 287 ms to load.

Content Type    Size
HTML            1.98 KB
Images          119.85 KB
CSS             28.63 KB
JavaScript      46.57 KB
Total           197.2 KB
Timeline with static compression.

Results

Both static and dynamic compression cut the amount of data our website transferred by roughly 56%, shrank the compressible text-based assets (HTML, CSS, and JavaScript) by more than 75%, and improved page load times by nearly 15%. For some files, such as bootstrap.min.css, gzip reduced the file size by over 83%. Although this was a small site with few optimizations, simply enabling gzip on the web server allowed for significant savings in load time. 

The fact that static compression performed roughly the same as dynamic compression also shows that for smaller sites, the CPU cost of dynamic compression is minor. Websites with larger files and higher traffic volumes will likely see more significant CPU usage and will benefit more from static compression. 

Conclusion 

With nearly universal support and a simple setup process, there’s little reason not to use gzip on your websites. Gzip is a fast and easy way to improve page speed performance while still delivering a high-quality experience to your users. See if your website supports gzip by running a free speed test, and sign up for a free SolarWinds® Pingdom® trial for more insights into your website’s performance. 

The SolarWinds and SolarWinds Cloud trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners. 

The post Can gzip Compression Really Improve Web Performance? appeared first on pingdom.com.

Troubleshooting End-User Issues With a DEM Tool https://www.pingdom.com/blog/troubleshooting-end-user-issues-with-a-dem-tool/ Thu, 23 Nov 2023 09:09:25 +0000 https://www.pingdom.com/?post_type=blog&p=34618 In the last decade, businesses have made massive investments in the digital economy with the goal of increasing operational efficiency and improving their customer or end-user experience. However, it isn’t rare for businesses to incur losses due to poor page load speed, failed transactions, or website errors. This is why businesses need to track end-user […]

The post Troubleshooting End-User Issues With a DEM Tool appeared first on pingdom.com.

In the last decade, businesses have made massive investments in the digital economy with the goal of increasing operational efficiency and improving their customer or end-user experience. However, it isn’t rare for businesses to incur losses due to poor page load speed, failed transactions, or website errors. This is why businesses need to track end-user experience in real time and resolve issues quickly. But because web applications depend on numerous third-party APIs, JavaScript and CSS components, hosting servers, and networking components, tracking every component and finding the root cause of issues can be a complex challenge. This is where digital experience monitoring (DEM) tools can help. DEM refers to emerging tools and techniques designed to help track user experience and performance issues by collecting and analyzing data from websites and applications from the user’s perspective. It includes both synthetic monitoring and real user monitoring (RUM).

How DEM Helps Troubleshoot End-User Experience Issues

Let’s discuss how DEM tools simplify and expedite the detection and troubleshooting of website performance issues to improve the end-user experience. 

Tracking Website Outages

Organizations need to be on top of their websites’ availability across all business-critical regions to ensure they can serve their customers 24/7. If a website outage remains undetected, it can lead to severe reputational and financial losses. Website administrators used to rely on the traditional ping utility to check response times and detect issues with website availability. However, modern DEM tools automate ping tests and alerts and assist in root cause analysis. With a DEM tool, organizations can configure the frequency of ping tests and track their websites’ uptime from different servers across the world. If they detect an outage, administrators need to drill down and find out whether it’s a coding error, DNS resolution problem, networking issue, or server issue with their web hosting provider. They also need to be proactive in communicating the outage to their end users. DEM tools can display a public status page with details of expected downtime. With regular uptime monitoring, organizations can ensure their website delivers a consistent experience to customers across all regions.
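As a simplified illustration of what such an automated availability check does under the hood (a generic sketch, not Pingdom's implementation; the URL and the alert action are placeholders):

# Request the page and flag anything that isn't a healthy response in time.
import requests

def check_uptime(url: str, timeout: float = 10.0) -> bool:
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

if not check_uptime("https://example.com/"):    # placeholder URL
    print("ALERT: site appears to be down")     # stand-in for email/SMS/Slack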

Monitoring Page Speed Issues

Modern websites have pages with several HTML, JavaScript, and CSS components along with images and videos. When a webpage loads all these components, it creates requests, which sometimes can slow down a website. Website administrators need to constantly monitor the performance of these components to detect whether any of them are behaving abnormally or taking longer than usual to load. DEM tools offer advanced page speed monitoring features designed to help capture the file sizes, load times, and other relevant details about page elements to detect which component or content piece is impacting page performance. Granular metrics and visualization make root cause analysis simpler. DEM tools also provide recommendations to improve test scores. Additionally, organizations can get a filmstrip timeline view of the page load performance to get a better sense of how a page loads. This helps them make informed decisions regarding minification of code, compression of images, implementation of lazy load, and more to improve the real and perceived end-user experience.

Resolving Critical Website Errors

Sometimes websites throw error messages, which can annoy end users. However, the error codes can be useful for web administrators to troubleshoot issues. For instance, missing content on a website can trigger a "404, page not found" error message. Such pages are common, as businesses have to retire or unpublish outdated pages from time to time. At times, migrating a website to a new CMS or domain can also lead to 404 errors.

Similarly, users often receive internal server errors (500) potentially caused by a range of issues within the website. DEM tools help administrators capture these errors over a period to identify trends such as peak traffic triggering such issues. A 503 error code generally indicates website congestion, which could be due to increased genuine traffic or a malicious attack. If such errors are frequent, teams can dig deep to assess their security and consider investing in content delivery networks. With CDN nodes reducing the load on the central host server, a website is less likely to show 503 error codes. Tracking and resolving such website errors in near-real time is crucial, and digital experience monitoring can help with this. With DEM tools, admins can get notified about errors like these through email, Slack, SMS, or any other preferred medium.
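For a rough idea of the trend analysis involved, the sketch below simply tallies client and server error codes from a standard combined-format access log; the log path is an assumption, and real DEM tools do this continuously and correlate it with traffic.

# Count 4xx/5xx status codes in an access log to spot spikes in 404s or 503s.
from collections import Counter
from pathlib import Path

LOG_FILE = Path("/var/log/nginx/access.log")   # assumed log location

counts = Counter()
for line in LOG_FILE.read_text().splitlines():
    parts = line.split('"')
    if len(parts) < 3:
        continue                               # skip lines that aren't combined format
    fields = parts[2].split()                  # status and size follow the request
    if fields and fields[0].startswith(("4", "5")):
        counts[fields[0]] += 1

for status, total in counts.most_common():
    print(f"{status}: {total}")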

Synthetic Transaction Monitoring

While real user monitoring techniques are useful for keeping track of production issues that are difficult to replicate in test or staging environments, developers can also detect such issues in production using synthetic transaction monitoring. This involves running test scripts in the production environment at a predefined frequency to surface issues with critical transactions or website workflows such as logins, filling and submitting a form, adding items to a shopping cart, and so on. Synthetic transaction monitoring allows admins and developers to detect issues before their end users encounter them. DEM tools make transaction monitoring simpler for admins by providing out-of-the-box code snippets to check critical transactions and no-code transaction recording features. It's also possible to set up transaction checks and alerts for keyword matches on critical pages, so admins can detect known error messages or confirm a URL redirects to the right page. 
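A stripped-down version of such a scripted check might look like the following; the URLs, form fields, test account, and expected keyword are all hypothetical and would be tailored to your own application.

# Synthetic transaction sketch: log in with a dummy test account and confirm
# an expected keyword appears on the page that follows.
import requests

BASE = "https://example.com"                                       # hypothetical site
CREDENTIALS = {"username": "synthetic-test", "password": "dummy"}  # test account

with requests.Session() as session:
    login = session.post(f"{BASE}/login", data=CREDENTIALS, timeout=10)
    dashboard = session.get(f"{BASE}/dashboard", timeout=10)

    if login.status_code != 200 or "Welcome" not in dashboard.text:
        print("ALERT: login transaction failed")   # hook up email/Slack/SMS here
    else:
        print("Login transaction OK")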

Conclusion

We’ve discussed how businesses can improve their end-user experience using real user monitoring and synthetic monitoring techniques. Though most businesses partially employ these techniques, they lack a holistic view of their website performance and user experience.

DEM tools like SolarWinds® Pingdom® can bridge this gap, helping them gain deeper insight into the user experience with correlated visibility into their websites’ real-world performance issues. We recommend a 30-day free trial to learn more about the SolarWinds Pingdom DEM tool and its benefits.

The post Troubleshooting End-User Issues With a DEM Tool appeared first on pingdom.com.

Exploring the Software Behind Facebook, the World’s Largest Social Media Site https://www.pingdom.com/blog/the-software-behind-facebook/ https://www.pingdom.com/blog/the-software-behind-facebook/#comments Tue, 07 Feb 2023 13:54:35 +0000 http://royalpingdom.wpengine.com/?p=6723 FacebookAt the scale that Facebook operates, a lot of traditional approaches to serving web content break down or simply aren’t practical. The challenge for Facebook’s engineers has been to keep the site up and running smoothly in spite of handling over two billion active users. This article takes a look at some of the software and techniques they use to accomplish that.

The post Exploring the Software Behind Facebook, the World’s Largest Social Media Site appeared first on pingdom.com.

At the scale that Facebook operates, several traditional approaches to serving web content break down or simply aren't practical.

The challenge for Facebook’s engineers has been to keep the site up and running smoothly in spite of handling over two billion active users. This article takes a look at some of the software and techniques they use to accomplish that.

Facebook’s Scaling Challenge

Before we get into the details, here are a few factoids to give you an idea of the scaling challenge that Facebook has to deal with:

  • Facebook had 2.96 billion users as of Q4 2022 (the service is available in over 100 languages)
  • Every 60 seconds: 317 thousand status updates are added, 147 thousand photos are uploaded, and 54 thousand links are shared on Facebook
  • Facebook users generate 8 billion video views per day on average, 20% of which are live broadcast
  • In 2021, Facebook had 40 million square feet of data center space among its 18 campuses around the globe that host millions of servers

Sources: 1, 2, 3

One interesting fact is that even at this enormous scale, Facebook (Meta) data centers are supported by 100% renewable energy.

Check out this blog post to learn more stats on the most used social media platforms.

Software That Helps Facebook Scale

In some ways, Facebook is still (kind of) a LAMP site, built on Linux, Apache, MySQL, and PHP, but it has had to change and extend its operation to incorporate many other elements and services, and to modify its approach to existing ones.

For example:

  • Facebook still uses PHP, but it has built a compiler for it so it can be turned into native code on its web servers, thus boosting performance.
  • Facebook uses Linux but has optimized it for its own purposes (especially in terms of network throughput).
  • Facebook uses MySQL, but primarily as a key-value persistent store, moving joins and logic onto the web servers, since optimizations are easier to perform there (on the "other side" of the Memcached layer). In 2022, Facebook migrated to MySQL 8.0.

Then there are the custom-written systems, like Haystack, a highly scalable object store used to serve Facebook’s immense amount of photos, or Scribe, a logging system that can operate at Facebook’s scale (which is far from trivial).

But enough of that. Let’s present (some of) the software that Facebook uses to provide us all with the world’s largest social network site.

Memcached

Memcached is by now one of the most famous pieces of software on the internet. It's a distributed memory caching system that Facebook (and a ton of other sites) uses as a caching layer between the web servers and MySQL servers (since database access is relatively slow). Through the years, Facebook has made a ton of optimizations to Memcached and the surrounding software (like optimizing the network stack).

Facebook runs thousands of Memcached servers with tens of terabytes of cached data at any one point in time. It is likely the world’s largest Memcached installation handling billions of requests per second.
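The basic pattern behind that caching layer is the classic cache-aside lookup. The sketch below is a generic illustration using the pymemcache client, not Facebook's code; the key scheme and the database call are made up.

# Cache-aside: try Memcached first, fall back to the database on a miss,
# then populate the cache so the next request is fast.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))           # assumed local Memcached instance

def query_database(user_id: int) -> bytes:
    return f"profile-for-{user_id}".encode()   # stand-in for a slow MySQL query

def fetch_user_profile(user_id: int) -> bytes:
    key = f"user:{user_id}"                    # made-up key scheme
    cached = cache.get(key)
    if cached is not None:
        return cached                          # cache hit: skip the database
    profile = query_database(user_id)
    cache.set(key, profile, expire=300)        # keep it warm for five minutes
    return profile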

HipHop for PHP and HipHop Virtual Machine (HHVM)

PHP, being a scripting language, is relatively slow when compared to code that runs natively on a server. HipHop converts PHP into C++ code which can then be compiled for better performance. This has allowed Facebook to get much more out of its web servers since Facebook relies heavily on PHP to serve content.

A small team of engineers (initially just three of them) at Facebook spent 18 months developing HipHop, and it was used for a few years. The project was discontinued back in 2013 and then replaced by HHVM (HipHop Virtual Machine).

https://developers.facebook.com/blog/post/2010/02/02/hiphop-for-php–move-fast/

Facebook dropped long-standing PHP support in HHVM version 4.0, in response to the improved performance of PHP 7's upstream interpreter.

Haystack

Haystack is Facebook’s high-performance photo storage/retrieval system (strictly speaking, Haystack is an object store, so it doesn’t necessarily have to store photos).

It has a ton of work to do; there are more than 260 billion images on Facebook, and each one is saved in four different resolutions, resulting in more than 20 petabytes of data. And the scale is constantly increasing, with users uploading one billion new photos (around 60 terabytes of data) each week.

And it's not just about being able to handle billions of photos; web performance is critical. As we mentioned previously, Facebook users upload around 147,000 photos every minute, which works out to roughly 2,450 photos per second.

BigPipe

BigPipe is a dynamic web page serving system that Facebook has developed. Facebook uses it to serve each web page in sections (called “pagelets”) for optimal performance. This approach is similar to the pipelining in modern microprocessors where multiple instructions are piped through different execution units to maximize performance.

For example, the chat window is retrieved separately, the news feed is retrieved separately, and so on. These pagelets can be retrieved in parallel, which is where the performance gain comes in, and it also gives users a site that works even if some part of it would be deactivated or broken.

Cassandra (Instagram)

Cassandra is a distributed storage system with no single point of failure. It's one of the poster children for the NoSQL movement and has been made open source (it's even become an Apache project). Facebook used it for its Inbox search.

Other than Facebook, a number of other services use it, for example Digg. We’re even considering some uses for it here at SolarWinds® Pingdom®.

Facebook abandoned Cassandra back in 2010, but Instagram has used it since 2012, where it replaced Redis.

Scribe

Scribe was a flexible logging system that Facebook used for a multitude of purposes internally. It was built to handle logging at the scale of Facebook and automatically handled new logging categories as they showed up (Facebook has hundreds). As of 2019, Scribe's GitHub repository states that the project is no longer supported or updated by Facebook, which probably means it's no longer in use.

Hadoop and Hive

Hadoop is an open source map-reduce implementation that makes it possible to perform calculations on massive amounts of data. Facebook uses this for data analysis (and as we all know, Facebook has massive amounts of data). Hive originated from within Facebook, and makes it possible to use SQL queries against Hadoop, making it easier for non-programmers to use.

Both Hadoop and Hive are open source (Apache projects) and are used by a number of big services, for example Yahoo and Twitter.

For more information, check out the article on “How Is Facebook Deploying Big Data?

Apache Thrift

Facebook uses several different languages for its different services. PHP is used for the front end, Erlang is used for Chat, and Java and C++ are also used in several places (and perhaps other languages as well). Apache Thrift is a cross-language framework, originally developed at Facebook for scalable cross-language services development, that ties these languages together and makes it possible for them to talk to each other efficiently at scale. This has made it much easier for Facebook to keep up its cross-language development.

Facebook has made Thrift open source and support for even more languages has been added.

Varnish

Varnish is an HTTP accelerator which can act as a load balancer and also cache content which can then be served lightning-fast.

Facebook uses Varnish to serve photos and profile pictures, handling billions of requests every day. Like almost everything Facebook uses, Varnish is open source.

React

React is an open-source JavaScript library created in 2011 by Jordan Walke, a software engineer at Facebook. Later, Facebook introduced React Fiber, which is a collection of algorithms for rendering graphics. Interestingly, React is now one of the world's most widely used JavaScript libraries. Read the story of how React became so successful.

https://dev.to/saamerm/did-facebook-really-slow-down-or-move-away-from-react-native-2fh5

Other Things That Help Facebook Run Smoothly

We have mentioned some of the software that makes up Facebook’s system(s) and helps the service scale properly. But handling such a large system is a complex task, so we thought we would list a few more things that Facebook does to keep its service running smoothly.

Gradual Releases and Dark Launches

Facebook has a system they called Gatekeeper that lets them run different code for different sets of users (it basically introduces different conditions in the code base). This lets Facebook do gradual releases of new features, A/B testing, activate certain features only for Facebook employees, etc.
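A heavily simplified illustration of the idea (not Facebook's actual Gatekeeper) is deterministic, percentage-based gating keyed on the user, so a feature can be turned on for a small slice of users and grown gradually; the feature name below is hypothetical.

# The same user always lands in the same bucket, so rollouts are stable.
import hashlib

def feature_enabled(feature: str, user_id: int, rollout_percent: int) -> bool:
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100             # stable bucket from 0 to 99
    return bucket < rollout_percent

# Example: a hypothetical "new_composer" feature enabled for 5% of users.
print(feature_enabled("new_composer", user_id=42, rollout_percent=5))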

Gatekeeper also lets Facebook do something called "dark launches", which activate elements of a feature behind the scenes before it goes live (without users noticing, since there are no corresponding UI elements). This acts as a real-world stress test and helps expose bottlenecks and other problem areas before a feature is officially launched. Dark launches are usually done two weeks before the actual launch.

Profiling of the Live System

Facebook carefully monitors its systems (something we here at Pingdom of course approve of), and interestingly enough it also monitors the performance of every single PHP function in the live production environment. This profiling of the live PHP environment is done using an open source tool called XHProf.

Gradual Feature Disabling for Added Performance

If Facebook runs into performance issues, there are a large number of levers that let them gradually disable less important features to boost performance of Facebook’s core features.

The Things We Didn’t Mention

We didn't go much into the hardware side in this article, but of course that is also an important aspect when it comes to scalability. For example, like many other big sites, Facebook uses a CDN to help serve static content. And then there are the many data centers Facebook has, including the 27,000-square-meter facility in Luleå, Sweden, launched in 2013; the 150,000-square-meter facility in Clonee, Ireland, launched in 2018; and the massive 11-story, 170,000-square-meter facility under development in Singapore since 2018.

And aside from what we have already mentioned, there is of course a ton of other software involved. However, we hope we were able to highlight some of the more interesting choices Facebook has made.

Facebook’s Love Affair with Open Source

We can’t complete this article without mentioning how much Facebook likes open source. Or perhaps we should say, “loves”.

Not only is Facebook using (and contributing to) open source software such as Linux, Memcached, MySQL, Hadoop, and many others, it has also made much of its internally developed software available as open source.

Examples of open-source projects that originated from inside Facebook include HipHop, Cassandra, Thrift, Scribe, React, GraphQL, PyTorch, Jest, and Docusaurus. Facebook has also open-sourced Flow, a static type checker for JavaScript that identifies issues as you code. If you're a JavaScript developer, definitely check it out; it can save you hours of debugging time.

(A list of open-source software that Facebook is involved with can be found on Facebook’s Open Source page.)

More Scaling Challenges to Come

Facebook has been growing at an incredible pace. Its user base keeps climbing and now includes nearly three billion active users, and who knows what it will be by the end of the year.

Facebook even has a dedicated “growth team” that constantly tries to figure out how to make people use and interact with the site even more.

This rapid growth means that Facebook will keep running into various performance bottlenecks as it's challenged by more and more page views, searches, uploaded images (including image formats and sizes), status messages, and all the other ways that Facebook users interact with the site and each other.

But this is just a fact of life for a service like Facebook. Facebook’s engineers will keep iterating and coming up with new ways to scale (it’s not just about adding more servers). For example, Facebook’s photo storage system has already been completely rewritten several times as the site has grown.

So, we’ll see what the engineers at Facebook come up with next. We bet it’s something interesting. After all, they are scaling a mountain that most of us can only dream of; a site with more users than most countries. When you do that, you better get creative.

If you’re interested in how the Internet works, be sure to check out our article on how Google collects data about you and the Internet.

Data sources: Various presentations by Facebook engineers, as well as the always informative Facebook engineering blog.

Note: This article first appeared on this blog back in 2009, and we have repeatedly updated the data to keep it current. 

The post Exploring the Software Behind Facebook, the World’s Largest Social Media Site appeared first on pingdom.com.

The Developer Obsession With Code Names – 200+ Interesting Examples https://www.pingdom.com/blog/the-developer-obsession-with-code-names-186-interesting-examples/ https://www.pingdom.com/blog/the-developer-obsession-with-code-names-186-interesting-examples/#comments Mon, 06 Feb 2023 13:43:58 +0000 http://royalpingdom.wpengine.com/?p=6620 Code names can be about secrecy, but when it comes to software development, it’s usually not so much about secrecy as it is about the convenience of having a name for a specific version of the software. It can be very practical to have a unique identifier for a project to get everyone on the […]

The post The Developer Obsession With Code Names – 200+ Interesting Examples appeared first on pingdom.com.

Code names can be about secrecy, but when it comes to software development, it’s usually not so much about secrecy as it is about the convenience of having a name for a specific version of the software. It can be very practical to have a unique identifier for a project to get everyone on the same page and avoid confusion. It can also be a great way to build excitement and cohesion in a development team.

And we want to name our darlings, don’t we?

So what kind of code names are developers out there coming up with? Here is a collection of code names for software products from companies like Google, Microsoft, Apple, Canonical, Red Hat, Adobe, Mozilla, Automattic and more. We tried to give some background information wherever possible. You’ll notice that some code name schemes are definitely more out there than others.

Mozilla Code Names

Mozilla has based most of the code names for different Firefox versions on parks.

An interesting aside is that Mozilla itself was originally the internal code name at Netscape for its Netscape Navigator project.

  • Phoenix – Firefox 1.0
  • Deer Park – Firefox 1.5
  • Bon Echo – Firefox 2
  • Gran Paradiso – Firefox 3
  • Shiretoko – Firefox 3.5
  • Namoroka – Firefox 3.6

Microsoft Code Names

Microsoft has a ton of products, and code names for most of them. When it comes to Windows, Microsoft seems largely obsessed with location names, with a few exceptions.

  • Janus – Windows 3.1
  • Snowball – Windows for Workgroups 3.11
  • Chicago – Windows 95
  • O’Hare – First version of Internet Explorer
  • Memphis – Windows 98
  • Daytona – Windows NT 3.5
  • Cairo – Windows NT 4.0
  • Whistler – Windows XP
  • Longhorn – Windows Vista
  • Vienna – Windows 7
  • Blue – Windows 8.1
  • Threshold – Windows 10 (RTM and 1511)
  • Redstone – Windows 10 (versions 1607, 1703, 1709, 1803 and 1809)
  • Santorini – Windows 10X (Cancelled)
  • Sun Valley – Windows 11

Canonical Code Names

Code names for Ubuntu versions always follow the pattern “adjective + animal”. The first Ubuntu release was called Warty Warthog because it was created in a short period of time and there wasn’t much time for polish. Canonical wanted to keep using “hog” in the version names, but soon abandoned that (after Hoary Hedgehog). If they hadn’t, Breezy Badger would have been code named Grumpy Groundhog.

Note also that as of Breezy Badger, the code names have been in alphabetical order.

  • Warty Warthog – Ubuntu 4.10
  • Hoary Hedgehog – Ubuntu 5.04
  • Breezy Badger – Ubuntu 5.10
  • Dapper Drake – Ubuntu 6.06
  • Edgy Eft – Ubuntu 6.10
  • Feisty Fawn – Ubuntu 7.04
  • Gutsy Gibbon – Ubuntu 7.10
  • Hardy Heron – Ubuntu 8.04
  • Intrepid Ibex – Ubuntu 8.10
  • Jaunty Jackalope – Ubuntu 9.04
  • Karmic Koala – Ubuntu 9.10
  • Lucid Lynx – Ubuntu 10.04
  • Maverick Meerkat – Ubuntu 10.10
  • Natty Narwhal – Ubuntu 11.04
  • Oneiric Ocelot – Ubuntu 11.10
  • Precise Pangolin – Ubuntu 12.04
  • Quantal Quetzal – Ubuntu 12.10
  • Raring Ringtail – Ubuntu 13.04
  • Saucy Salamander – Ubuntu 13.10
  • Trusty Tahr – Ubuntu 14.04
  • Utopic Unicorn – Ubuntu 14.10
  • Vivid Vervet – Ubuntu 15.04
  • Wily Werewolf – Ubuntu 15.10
  • Xenial Xerus – Ubuntu 16.04
  • Yakkety Yak – Ubuntu 16.10
  • Zesty Zapus – Ubuntu 17.04
  • Artful Aardvark – Ubuntu 17.10
  • Bionic Beaver – Ubuntu 18.04
  • Cosmic Cuttlefish – Ubuntu 18.10
  • Disco Dingo – Ubuntu 19.04
  • Eoan Ermine – Ubuntu 19.10
  • Focal Fossa – Ubuntu 20.04
  • Groovy Gorilla – Ubuntu 20.10
  • Hirsute Hippo – Ubuntu 21.04
  • Impish Indri – Ubuntu 21.10
  • Jammy Jellyfish – Ubuntu 22.04
  • Kinetic Kudu – Ubuntu 22.10
  • Lunar Lobster – Ubuntu 23.04

Apple Code Names

Just like Microsoft, Apple has several products, and code names for basically all of them. We focused on Mac OS. The influences for Apple’s Mac OS code names are pretty obvious. For a while they were mostly musical terms, and as of Mac OS X, the focus switched to big cats.

Fun little anecdote: System 7.5 was code named Mozart, but also Capone. Why Capone? Because like the famous gangster, it was meant to rule over Chicago (Windows 95).

We also have to mention Apple’s code name for A/UX (Apple Unix) 1.0: Pigs in Space.

  • Harmony – Mac OS 7.6
  • Tempo – Mac OS 8.0
  • Bride of Buster – Mac OS 8.1
  • Allegro – Mac OS 8.5
  • Sonata – Mac OS 9
  • Fortissimo – Mac OS 9.1
  • Moonlight – Mac OS 9.2
  • Cheetah – Mac OS X 10.0
  • Puma – Mac OS X 10.1
  • Jaguar – Mac OS X 10.2
  • Panther – Mac OS X 10.3
  • Tiger – Mac OS X 10.4
  • Leopard – Mac OS X 10.5
  • Snow Leopard – Mac OS X 10.6
  • Lion – Mac OS X 10.7
  • Mountain Lion – OS X 10.8
  • Mavericks – OS X 10.9
  • Yosemite – OS X 10.10
  • El Capitan – OS X 10.11
  • Sierra – macOS 10.12
  • High Sierra – macOS 10.13
  • Mojave – macOS 10.14
  • Catalina – macOS 10.15
  • Big Sur – macOS 11
  • Monterey – macOS 12
  • Ventura – macOS 13

Automattic Code Names

Starting after WordPress 1.0, Automattic has code named most WordPress releases after well-known jazz musicians.

  • Mingus – WordPress 1.2
  • Strayhorn – WordPress 1.5
  • Duke – WordPress 2.0
  • Ella – WordPress 2.1
  • Getz – WordPress 2.2
  • Dexter – WordPress 2.3
  • Brecker – WordPress 2.5
  • Tyner – WordPress 2.6
  • Coltrane – WordPress 2.7
  • Baker – WordPress 2.8
  • Carmen – WordPress 2.9
  • Thelonious – WordPress 3.0
  • Reinhardt – WordPress 3.1
  • Gershwin – WordPress 3.2
  • Sonny – WordPress 3.3
  • Green – WordPress 3.4
  • Elvin – WordPress 3.5
  • Oscar – WordPress 3.6
  • Basie – WordPress 3.7
  • Parker – WordPress 3.8
  • Smith – WordPress 3.9
  • Benny – WordPress 4.0
  • Dinah – WordPress 4.1
  • Powell – WordPress 4.2
  • Billie – WordPress 4.3
  • Clifford – WordPress 4.4
  • Coleman – WordPress 4.5
  • Pepper – WordPress 4.6
  • Vaughan – WordPress 4.7
  • Evans – WordPress 4.8
  • Tipton – WordPress 4.9
  • Bebo – WordPress 5.0
  • Betty – WordPress 5.1
  • Jaco – WordPress 5.2
  • Kirk – WordPress 5.3
  • Adderley – WordPress 5.4
  • Eckstine – WordPress 5.5
  • Simone – WordPress 5.6
  • Esperanza – WordPress 5.7
  • Tatum – WordPress 5.8
  • Arturo – WordPress 6.0

Google Code Names

Someone at Google clearly has a sweet tooth. All Android code names are pastries or desserts. (For those who wonder what FroYo is, it’s short for frozen yogurt.)

  • Cupcake – Android 1.5
  • Donut – Android 1.6
  • Eclair – Android 2.0/2.1
  • FroYo – Android 2.2
  • Gingerbread – Android 2.3
  • Ice Cream Sandwich – Android 4.0
  • Jelly Bean – Android 4.1-4.3
  • KitKat – Android 4.4
  • Lollipop – Android 5.0-5.1
  • Marshmallow – Android 6.0
  • Nougat – Android 7.0-7.1
  • Oatmeal Cookie (Oreo) – Android 8.0-8.1
  • Pistachio Ice Cream (Pie) – Android 9
  • Quince Tart – Android 10
  • Red Velvet Cake – Android 11
  • Snow Cone – Android 12
  • Tiramisu – Android 13
  • Upside Down Cake – Android 14

Adobe Code Names

Adobe’s code names for Photoshop largely seem to be movie related in one form or another, with names of movie characters, movie titles, and other references, some definitely more obscure than others.

  • Fast Eddy – Photoshop 2.0
  • Tiger Mountain – Photoshop 3.0
  • Big Electric Cat – Photoshop 4.0
  • Strange Cargo – Photoshop 5.0
  • Venus in Furs – Photoshop 6.0
  • Liquid Sky – Photoshop 7.0
  • Dark Matter – Photoshop CS
  • Space Monkey – Photoshop CS2
  • Red Pill – Photoshop CS3
  • Stonehenge – Photoshop CS4
  • White Rabbit – Photoshop CS5
  • Superstition – Photoshop CS6
  • Lucky 7 – Photoshop CC
  • Single Malt Whiskey Cat – Photoshop CC 2014
  • Dedicated to Thomas and John Knoll – Photoshop CC 2015 1, 1.2
  • Haiku – Photoshop CC 2015.5, 5.1
  • Big Rig – Photoshop CC 2017
  • White Lion – Photoshop CC 2018
  • B Winston – Photoshop CC 2019

Fedora Code Names

Fedora started off relatively thematic, with the code names for Fedora Core 1 through 5 all being in some way related to alcohol (wine or beer). After that, the relationships between the code names get much less consistent.

Update: Fedora uses a naming scheme where a new release has to have a relationship with the previous release. More info on their guidelines page.

  • Yarrow – Fedora Core 1
  • Tettnang – Fedora Core 2
  • Heidelberg – Fedora Core 3
  • Stentz – Fedora Core 4
  • Bordeaux – Fedora Core 5
  • Zod – Fedora Core 6
  • Moonshine – Fedora 7
  • Werewolf – Fedora 8
  • Sulphur – Fedora 9
  • Cambridge – Fedora 10
  • Leonidas – Fedora 11
  • Constantine – Fedora 12
  • Goddard – Fedora 13
  • Laughlin – Fedora 14
  • Lovelock – Fedora 15
  • Verne – Fedora 16
  • Beefy Miracle – Fedora 17
  • Spherical Cow – Fedora 18
  • Schrödinger’s Cat – Fedora 19
  • Heisenbug – Fedora 20
  • Twenty One – Fedora 21

Red Hat Linux Code Names

The geek presence is strong here. Note for example the (original) Battlestar Galactica reference for RHL 5.2 and 5.9: Apollo and Starbuck. Or it could be a coincidence, because there are many literary and mythical references here. For example, Starbuck is also a character in the novel Moby-Dick, The Sea-Wolf is a Jack London novel, and so on. Fun aside, one RHL version shares its name with a Mozilla product: Thunderbird.

  • Mother’s Day – RHL 1.0
  • Picasso – RHL 3.0.3
  • Colgate – RHL 4.0
  • Vanderbilt – RHL 4.1
  • Biltmore – RHL 4.2
  • Thunderbird – RHL 4.8
  • Mustang – RHL 4.9
  • Hurricane – RHL 5.0
  • Manhattan – RHL 5.1
  • Apollo – RHL 5.2
  • Starbuck – RHL 5.9
  • Hedwig – RHL 6.0
  • Cartman – RHL 6.1
  • Piglet – RHL 6.1.92
  • Zoot – RHL 6.2
  • Pinstripe – RHL 6.9.5
  • Guinness – RHL 7.0
  • Fisher – RHL 7.0.90
  • Wolverine – RHL 7.0.91
  • Seawolf – RHL 7.1
  • Roswell – RHL 7.1.93
  • Enigma – RHL 7.2
  • Skipjack – RHL 7.2.91
  • Valhalla – RHL 7.3
  • Limbo – RHL 7.3.29
  • Psyche – RHL 8.0
  • Shrike – RHL 9
  • Severn – RHL 9.0.93

Debian Code Names

All Debian releases are code named after character names from the film Toy Story. Remember Sid, the emotionally unstable, toy-destroying kid next door? That’s the permanent name for Debian’s unstable development distribution.

  • Buzz – Debian 1.1
  • Rex – Debian 1.2
  • Bo – Debian 1.3
  • Hamm – Debian 2.0
  • Slink – Debian 2.1
  • Potato – Debian 2.2
  • Woody – Debian 3.0
  • Sarge – Debian 3.1
  • Etch – Debian 4.0
  • Lenny – Debian 5.0
  • Squeeze – Debian 6.0
  • Wheezy – Debian 7.0
  • Jessie – Debian 8.0
  • Stretch – Debian 9.0
  • Buster – Debian 10.0
  • Bullseye – Debian 11

Final Words (Not in Code)

It’s not just software developers who are fond of code names. You’ll find code names wherever there’s some form of research and development going on. For example, Intel and AMD have code names for their processors, Microsoft has code names for each iteration of Xbox 360, Apple has code names for its various computers, and so on.

What we find interesting is the amount of creativity many put into these code names, often revealing cultural references and other obscure interests of its developers.

Want More?

Can’t get enough geekiness? On the theme of code names, take a look at project “Natick” the Microsoft underwater datacenter.  It looks as if taken straight from a 007-movie! Also, be sure to check out this massive collection of early computers.

Note: This article first appeared on this blog back in 2010, and we have slightly touched up the content in January 2023.

The post The Developer Obsession With Code Names – 200+ Interesting Examples appeared first on pingdom.com.

Web API Monitoring Explained: A Helpful Introductory Guide https://www.pingdom.com/blog/web-api-monitoring-explained-introductory-guide/ Fri, 25 Nov 2022 12:22:08 +0000 https://www.pingdom.com/?post_type=blog&p=35080 An API, application programming interface, is a collection of tools, protocols, and subroutines that can be used when building software programs or applications. APIs makes software development easier by providing reusable components and a set of clearly defined communication protocols. Recently APIs have come to mean web services, but there are also APIs for software […]

The post Web API Monitoring Explained: A Helpful Introductory Guide appeared first on pingdom.com.

An API, or application programming interface, is a collection of tools, protocols, and subroutines that can be used when building software programs or applications. APIs make software development easier by providing reusable components and a set of clearly defined communication protocols. Recently, APIs have come to mean web services, but there are also APIs for software and hardware libraries, operating systems, and databases.   

For Web APIs that support a web-based application in production, there are two main questions to keep an eye on:

  1. Is your Web API behaving as expected?
  2. What’s happening with your API when it’s not?

Fortunately, we have Web API monitoring for API developers, letting us track both.

This guide discusses API monitoring, what you can track, some hints for effective monitoring, and how to get started monitoring your Web APIs.

What Is API Monitoring, and Why Do It?

Web API monitoring is the process of tracking, logging, and analyzing the performance and availability of a Web API. By monitoring availability, developers can identify and address issues before they cause significant problems, sometimes before customers realize it.

Additionally, API monitoring can help developers optimize their API’s performance by identifying potential areas for improvement, like low-performing request responses.

Many different tools and services are available for API monitoring, including open-source options. Ultimately, deciding which tool to use will depend on the project’s needs. However, all API monitoring tools share one common goal: to help developers build better APIs.

The Benefits of API Monitoring

There are many benefits to Web API monitoring, including:

  • Monitoring uptime: Service-level agreements (SLAs) are at the heart of web-based services, and ensuring you meet them is vital to your business.
  • Improving the quality of APIs: By monitoring APIs, organizations can identify areas where they need improvement and make changes to improve the quality.
  • Tracking usage: API monitoring helps track trends and usage patterns for planning future updates, product releases, and potentially new product offerings.
  • Tracking performance: API monitoring can track performance over time and make changes to improve it.
  • Ensuring timely updates: One of the advantages of API monitoring is it can help organizations ensure their APIs are always up to date. API monitors can serve as a type of quality control for organizations.
  • Debugging issues: API monitoring helps debug problems and identify potential errors.
  • Reducing support costs: Organizations can save money on support costs by identifying and fixing problems with APIs before they cause significant issues.

Web API monitoring is an essential tool for any organization offering APIs. In many cases, it’s equally crucial to API development.

API Monitoring Tools

There are several Web API monitoring tools and services available to assist with monitoring and testing, including:

  • Pingdom®: A professional, enterprise-grade API monitoring tool for tracking API uptime, response time, and availability.
  • Prometheus®: An open-source monitoring tool providing enough functionality if your requirements are relatively low. It lacks some analysis capabilities and will only write logs to a local disk.
  • Graphite®: Another open-source monitoring tool. It’s easy to install and allows you to track application deployments to narrow down the root cause.
  • SoapUI®: A tool used to test the functionality of APIs, initially only for SOAP calls, but it has expanded to more types in recent years.
  • Postman®: A tool for testing APIs including several features for working with APIs, such as making test requests and inspecting responses.

An API Monitoring Sample Plan

There are many ways to set up an Web API monitoring program. Below is a basic outline of steps to get you started:

  1. Define your goals, requirements, and metrics. Which metrics are important will vary depending on your specific business goals, but some standard metrics include availability, response time, error rates, and throughput.
  2. Choose an appropriate monitoring tool based on your goals, requirements, metrics, budget, and technical capabilities. Install it and train your people to use it.
  3. Establish monitoring criteria such as uptime, performance, or usage trends to produce meaningful information supporting business goals.
  4. Set up regular testing by a human to ensure the API is functioning as expected and your monitoring tool is accurate.
  5. Review the results regularly and adjust to improve the quality of your API monitoring program. If business goals change, change the metrics to support these changes.
  6. Repeat, starting with step 3.

With the following tips, you can ensure your API monitoring program provides maximum value to your business.

API Monitoring Tips

Many variables go into API monitoring, but here are some things to keep in mind.

Monitor Everything

All of your Web APIs need coverage, including the one with only one customer using it once a month. Most likely, the API is critical to your customer’s business once a month, and you will hear about it when issues arise.

Monitor All the Time

Perform heartbeat checks every five to 15 minutes around the clock, 365 days a year. Your business is constantly moving, so your monitoring must keep up.

Ensure Accuracy

Whenever you have a human reviewing data or interacting with your Web API, there’s a chance for human error. Create automatic processes for testing wherever possible to avoid this issue. Additionally, validate the results you see in the monitoring tool with human-driven testing.

Prioritize

Not all metrics are created equal. Some will be more important to your business than others. Prioritize your most critical metrics and base your monitoring around them.

Test as Your Customer Uses the Service

Testing as your customer means accessing the Web API from outside your internal network, with the same credentials and security a customer would use. Also, set up a test user in production with valid, dummy data.

Test Where Your Customer Uses the Service

Geography can affect service times. If the issues are beyond your control (such as inadequate local infrastructure), having the data to show customers can soothe heated conversations.

Go Beyond Testing Only the Return Codes

I can’t tell you how many ‘200’ responses I’ve seen with an error message in the body. You won’t get all of these alerts set up right away, but you can determine ways to detect anomalies in “correct” responses over time. Deeper testing is another reason for having a dummy customer with valid data in production.
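A minimal sketch of that kind of deeper check follows; the endpoint and the expected fields are hypothetical, and a real check would assert on whatever your API contract promises.

# Don't trust the status code alone: parse the body and assert on its contents.
import requests

resp = requests.get("https://api.example.com/v1/orders/123", timeout=10)

body = {}
if "json" in resp.headers.get("Content-Type", ""):
    try:
        body = resp.json()
    except ValueError:
        pass  # a non-JSON body is itself suspicious

healthy = (
    resp.status_code == 200
    and "error" not in body                  # some APIs report errors in-band
    and body.get("status") == "complete"     # hypothetical expected field
)
print("OK" if healthy else f"ALERT: unexpected response: {resp.status_code} {body}")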

Have the Business Review Your Checks

You need to monitor what is promised to customers, not what your technical team has delivered. If you aren’t providing something promised, it’s better for your team to discover it vs. your customer.

Don’t Only Monitor Averages

A group of alerts for a set of customers can reveal issues with the underlying code or infrastructure. For example, the one customer with 10 times the data and long wait times can help you identify problems with your database queries or a need to build out a database cluster.

Don’t Sweat One-Off Anomalies

Ignoring alerts seems the opposite of the above point, but sometimes you’ll encounter sporadic unrepeated issues. A super-rare confluence of events could have caused the alert and is unlikely to occur again. Concentrate on replicable and reproducible problems, so you don’t spend time on issues distracting from your business goals.

API Monitoring First Steps

Luckily, it’s relatively easy to get started with a Web API monitoring tool.

First, go to a provider’s website and sign up for the service. Many paid providers, including Pingdom, give you a free 30-day trial to get a feel for their service.

Next, add your APIs to the dashboards.

Now set up your alerts.

All done. From this point forward, you’ll get real-time data on the status of your APIs.

Wrapping Up

Web API monitoring is a valuable tool for businesses and developers alike. Tracking Web API uptime, response time, availability, and other performance metrics will ensure your API functionality meets users’ expectations.

This post was written by Steven Lohrenz. Steven is an IT professional with 25-plus years of experience as a programmer, software engineer, technical team lead, and software and integrations architect. He blogs at StevenLohrenz.com about IT, programming, the cloud, and more.

The post Web API Monitoring Explained: A Helpful Introductory Guide appeared first on pingdom.com.
