What units are represented by the various letters:
kbs - 1,000 bits per second or 1,024?
Mbs - 1,000,000 bits per second or 1,048,576?
Is there any chance that the many speed test sites are consistent?
What units are ISPs quoting?
Does TCP Optimizer ever modify settings on the adapter card itself? When I look at the adapter in Device Manager there are several settings (CRC offload, full/half duplex, jumbo frames, etc.) that can be altered. Does TCP Optimizer change any of these, or only adapter-dependent values in the registry?
Speedtest.net seems to show the latency as the same as the ping time shown early in the test. When I run an NDT test, the RTT shown is almost always larger (sometimes much larger) than the ping RTT. Any comments?
Thanks much..............John
Units and other basic questions.
Broadband speeds are measured in bits per second. Strictly speaking, data rates use the decimal (SI) convention: 1 kbps = 1,000 bps and 1 Mbps = 1,000 kbps = 1,000,000 bps, and that is what ISPs quote for rates such as 256 kbps or 768 kbps. Some software and test sites instead use the binary convention, where 1 Mbps = 1,024 kbps, which is the main source of confusion.
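To make the two conventions concrete, here is a minimal sketch (plain Python; the 3,000,000 bps figure is just an example, not a measured value) converting the same raw rate both ways:

```python
# Convert a raw bits-per-second rate using both conventions.

def to_kbps(bps, binary=False):
    """Kilobits per second: SI divides by 1,000, binary by 1,024."""
    return bps / (1024 if binary else 1000)

def to_mbps(bps, binary=False):
    """Megabits per second: SI divides by 1,000,000, binary by 1,048,576."""
    return bps / (1024**2 if binary else 1000**2)

rate = 3_000_000  # bits per second (example figure)
print(f"SI:     {to_kbps(rate):.0f} kbps = {to_mbps(rate):.2f} Mbps")
print(f"Binary: {to_kbps(rate, True):.0f} kbps = {to_mbps(rate, True):.2f} Mbps")
# SI:     3000 kbps = 3.00 Mbps
# Binary: 2930 kbps = 2.86 Mbps
```

As the output shows, the same line speed reads about 5% lower under the binary convention, which is roughly the gap people notice between different test sites.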
Speed tests are not consistent due to various factors: traffic load on the test server, distance to the test server, the size of the test file relative to the line speed, etc. A more reliable way to test is to download a large file (relative to the line speed) from a nearby web server with good serving capacity, and use the transfer speed recorded after 10 minutes of transfer, in order to eliminate caching effects.
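If you want to automate that, here is a rough sketch of the idea in Python. The URL is a placeholder (substitute a large file on a nearby, well-provisioned server), and the warm-up/measure windows are scaled down for illustration; for a serious measurement use a much longer window as suggested above.

```python
import time
import urllib.request

URL = "http://example.com/largefile.bin"  # hypothetical; use a large nearby file
WARMUP = 10.0    # seconds to discard while TCP ramps up / caches warm
WINDOW = 30.0    # seconds of steady-state transfer to measure

start = time.monotonic()
window_start = None
measured = 0

with urllib.request.urlopen(URL) as resp:
    while True:
        chunk = resp.read(64 * 1024)
        if not chunk:
            break  # file ended early; pick a bigger file relative to line speed
        now = time.monotonic()
        if window_start is None and now - start >= WARMUP:
            window_start = now          # start the measurement window here
        elif window_start is not None:
            measured += len(chunk)
            if now - window_start >= WINDOW:
                break

if window_start is not None:
    elapsed = time.monotonic() - window_start
    print(f"~{measured * 8 / elapsed / 1e6:.2f} Mbps (SI) sustained")
else:
    print("File finished before the warm-up window ended; use a larger file")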
TCP Optimizer only adjusts registry values; it does not touch the adapter-level settings (offload, duplex, jumbo frames) you see in Device Manager.
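For the curious, here is a minimal sketch (Windows only, using Python's built-in winreg module) that reads the kind of TCP registry values such tools write. The value names below are the classic XP-era ones and are my assumption of what to look for; on a given system some may simply be absent until a tool has set them.

```python
import winreg

PARAMS = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PARAMS) as key:
    for name in ("TcpWindowSize", "Tcp1323Opts", "DefaultTTL"):
        try:
            value, _type = winreg.QueryValueEx(key, name)
            print(f"{name} = {value}")
        except FileNotFoundError:
            print(f"{name} not set (stack default in effect)")
```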
Speedtest.net shows the latency value and the ping time separately. Ping times are shown even before you click on any test server; latency is only reported after a test to a specific server completes. To compare RTT and latency fairly, check the latency value against a test server that is near the NDT test server.
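Note too that ping uses ICMP on an otherwise idle line, while NDT reports RTT sampled inside a loaded TCP transfer, which is one reason its numbers run higher. If you want a rough idle-line RTT without raw ICMP sockets, a sketch like this (the host is a placeholder; point it at the same server the test uses) times a TCP handshake instead:

```python
import socket
import time

HOST, PORT = "example.com", 80  # hypothetical target
samples = []
for _ in range(5):
    t0 = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=5):
        samples.append((time.monotonic() - t0) * 1000.0)

print(f"TCP-connect RTT: min {min(samples):.1f} ms, "
      f"avg {sum(samples) / len(samples):.1f} ms")
```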
It turns out there is a standard defining terms such as KiB and MiB, representing 1,024 bytes and 1,048,576 bytes respectively. See this Wikipedia article http://en.wikipedia.org/wiki/Kibibyte for some detail. See also http://en.wikipedia.org/wiki/Kibi#IEC_standard_prefixes for more detail.
The first (shorter) article points out that this standard has not been widely adopted.
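For what it's worth, here is a quick illustration of how the same byte count renders under the SI prefixes versus the IEC ones those articles describe:

```python
# Same byte count under SI (MB) vs IEC (MiB) prefixes.
def fmt(n_bytes):
    return (f"{n_bytes / 1e6:.2f} MB (SI)  |  "
            f"{n_bytes / 2**20:.2f} MiB (IEC)")

print(fmt(1_048_576))   # 1.05 MB (SI)  |  1.00 MiB (IEC)
print(fmt(5_000_000))   # 5.00 MB (SI)  |  4.77 MiB (IEC)
```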
The question I was originally trying to ask: is there any consistency in what the various speed test sites and ISPs mean when they specify kbs or Mbs? The HardwareGeeks sites show both kbs and Mbs values, and the ratio between them is 1024. Do other sites and ISPs use 1000 or 1024? Anyone have any comments?
Thanks................John