Differences in web analytics systems (TNO report)

One of the presentations covered the research that TNO, in the persons of Almerima Jamakovic, Bart Gijsen and Martijn Staal, has done into differences in web analytics. The subtitle is: “Facts, myths and expectations.” This article contains the slides of the presentation and a summary of the findings.

Main conclusions:

  • Differences between measurements of web analytics packages are (for a good implementation) constant over long periods, but vary by site.
  • For web analytics, a maximum percentage difference between measurements of different packages can be specified. This gives good guidelines for assessing the reliability of web analytics and checking the implementation.
  • Dart and STIR data are, both in absolute terms and in trends, hardly comparable with web analytics data.
  • Explaining all the causes of the differences is very complex, because differences usually result from a very large number of causes. Attention should therefore go in advance to the accuracy of the implementation of the packages.

Summary (from TNO)

Differences in web analytics regularly lead to questions about the reliability of the data. Migration to another package, the ‘incidental’ comparison of the statistics with data from other packages, and the settlement of advertising can thus give rise to much discussion.

TNO, in cooperation with Blue Mango, Click Value, Maximum and Netprofiler, investigated the reliability of web analytics.

Important questions were to what extent differences are real and acceptable, and how to assess the reliability of the web analytics statistics for a specific implementation.

Project Approach: Differences in Web Analytics

A solid basis for understanding the differences was laid last year by Stone Temple with the Shootout report. In collaboration with the online marketing experts of Blue Mango, Click Value, Maximum and Netprofiler, TNO started a project in the second half of 2008 to understand the differences in web analytics measurements in more detail and to form a picture of the reliability of the data. The approach comprised two types of studies. First, the statistics data of several large Dutch websites were analysed together with STIR and Dart data.

Among others, Agis, Ilse Media, Typhoon, Univé Insurance and TNO itself cooperated in this analysis. Second, for additional insight into the differences and their causes, tests of different web analytics packages were executed on a website in a closed environment. In the closed environment the traffic was regulated, and variables such as click behaviour, IP addresses and browser types were adjustable. The packages tested in the closed environment are Google Analytics, Webtrends and Sitestat. In addition, the data analysis of some sites also included the packages HBX and SpeedTrap, as well as Dart and STIR data.

The differences between measurements of packages on a website are constant

The analysis shows that the values measured by the various website statistics packages often differ considerably. The direction and extent of the trends for the number of visitors, visits and page views, however, appear to be highly comparable. This means that the differences between the packages remain constant over longer periods. But although the differences between measurements are constant, the difference is not the same for each site. Where statistics package A gives structurally higher values than package B on one site, it may give lower values on another website. Important reasons for this appear to lie in the content and structure of the website.
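As a rough illustration of this constancy, the Python sketch below computes the week-by-week ratio between two packages and its coefficient of variation; a small, stable ratio points to a site-specific offset rather than an implementation problem. The weekly counts and the 5% cut-off are assumptions for illustration, not figures from the TNO report.

```python
# Illustrative check: is the offset between two packages constant over time?
# The weekly pageview counts are hypothetical example data.
from statistics import mean, stdev

package_a = [101_200, 98_400, 104_900, 99_700, 102_300, 100_800]
package_b = [93_100, 90_600, 96_500, 91_800, 94_000, 92_700]

# Week-by-week ratio between the two packages.
ratios = [a / b for a, b in zip(package_a, package_b)]

# Coefficient of variation of the ratio: small means the offset is stable.
cv = stdev(ratios) / mean(ratios)

print(f"mean ratio A/B: {mean(ratios):.3f}")
print(f"coefficient of variation of the ratio: {cv:.1%}")
if cv < 0.05:  # the 5% cut-off is an assumed rule of thumb
    print("Offset is stable over time: consistent with a good implementation.")
else:
    print("Offset varies over time: investigate the implementations.")
```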

What degree of difference is real and acceptable?

An important conclusion from the research is that the values of web analytics measurements are normally distributed. In addition, the analysis shows that an upper limit can be specified for the spread of statistics from well-configured packages. These statistical properties yield concrete rules of thumb for ‘acceptable’ or ‘real’ deviations. This allows website (statistics) managers to simply check the reliability of their web analytics implementation by comparing the percentage differences between the packages with a limit that depends on the number of packages in use. If the guidelines are not met, this in practice indicates implementation errors or incidents causing significant differences. This check works for both large and small websites, because the research shows that the number of visitors, visits and page views has only a minor influence on the results.
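The report’s actual limits are not reproduced in this article, but the idea behind them can be sketched: if each package measures the true value with a roughly normally distributed relative error, the max-min percentage difference grows with the number of packages, and a ‘rarely exceeded’ upper limit can be read off from the simulated distribution. The 5% relative standard deviation below is purely an assumption for illustration.

```python
# Monte Carlo sketch of a "Max-Min deviation rarely greater than" limit.
# Assumes each package measures the true value with a normally distributed
# relative error (sigma is an assumption, not a figure from the report).
import random

def simulate_limit(n_packages, sigma=0.05, trials=100_000, quantile=0.99):
    """99th percentile of (max - min) / min over packages measuring one site."""
    diffs = []
    for _ in range(trials):
        measurements = [random.gauss(1.0, sigma) for _ in range(n_packages)]
        diffs.append((max(measurements) - min(measurements)) / min(measurements))
    diffs.sort()
    return diffs[int(quantile * trials)]

for n in (2, 3, 4):
    print(f"{n} packages: max-min deviation rarely greater than "
          f"{simulate_limit(n):.0%}")
```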

Check the reliability of your web analytics here

For websites on which two or more packages are used, it is relatively easy to determine whether the statistics are well implemented. For websites with one package, it is also worthwhile to implement an additional (free) package such as Google Analytics, so that this check can be done.

For the reliability check, apply the following guidelines:

  • Determine, on the basis of the statistics data at a weekly or monthly level over a few periods, the percentage difference in visits, visitors and/or page views between the package with the highest and the package with the lowest value.
  • Select the table row with the number of packages that you have running on your site.
  • If your percentage is, over several periods, consistently greater than the percentage in the column “Max-Min deviation rarely greater than”, it is likely that the implementations of the WA packages differ so much (e.g. a set of tags is not on all webpages) that the data are not comparable. In that case, have the implementation checked by the ICT department or your analytics agency, or consider an audit. A minimal code sketch of this check follows below.
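A minimal sketch of the check described above, assuming placeholder limit values (the actual “Max-Min deviation rarely greater than” table is not reproduced here): it takes one period’s counts per package, computes the max-min percentage difference and compares it with the limit for that number of packages. Repeat this over several periods before drawing conclusions.

```python
# Reliability check sketch: compare the max-min percentage difference between
# packages with a limit per number of packages. The LIMITS values are
# placeholders, not the actual figures from the TNO report.
LIMITS = {2: 0.15, 3: 0.20, 4: 0.25}  # assumed "rarely greater than" limits

def check_implementation(counts_by_package: dict) -> bool:
    """True if the spread between packages is within the assumed guideline."""
    values = counts_by_package.values()
    deviation = (max(values) - min(values)) / min(values)
    limit = LIMITS[len(counts_by_package)]
    print(f"max-min deviation: {deviation:.1%} (limit {limit:.0%})")
    return deviation <= limit

# Example: monthly visits per package (hypothetical numbers).
ok = check_implementation({"Google Analytics": 112_000,
                           "Webtrends": 104_500,
                           "Sitestat": 99_800})
print("Implementation looks consistent" if ok else
      "Likely implementation error: check the tags on all pages or audit")
```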

Dart and STIR data are not comparable

In addition to the examination of the web analytics packages, the Dart and STIR data of some sites were examined. The assumption that these data are not comparable with the web analytics data is confirmed by the investigation. Because different objectives and analysis methods are used, the absolute data and trends are hardly or only to a limited extent comparable.

Causes of differences in WA packages

In the closed test, a number of causes were examined further. The basis for this was a non-exhaustive list of possible causes.

By directing pre-established traffic to webpages shielded from the outside world, more insight was gained into the different ways in which the WA packages measure. This revealed that the packages work largely the same and should therefore, in principle, give the same numbers.

The differences that still occur seem to be partly caused by Webtrends failing to count page views when the back arrow of the browser is used. In addition, none of the packages is able to fully filter out traffic coming from webbots. In general, the deviations appear to be caused mainly by a large number of relatively small causes. Explaining all of them is very complex, so the focus has to go to a proper implementation of the packages.
