-- Harvey Newstrom <http://HarveyNewstrom.com> <http://Newstaff.com>
Jim Fehlinger wrote on Monday, April 02, 2001 7:48 pm,

> > _Going Inside..._ by John McCrone.
> From Chapter 3 "Ugly Questions About Chaos" (pp. 50-73):
>
> "[T]he success of the digital computer is founded on precisely its ability
> to squeeze out any uncertainty in its behaviour. Computers are built to
> follow the same steps and come to the same answers every time they run a
> program.
I always disagree when people claim that computers are predictable. They are not. There are too many variables that are not predictable. I have debugged too many problems that were not readily reproducible. I even developed a patent-pending method for determining whether a given set of source code produced a given set of executable code. This is because even source code is not predictable!
When writing to disk, the file may or may not exist. It may or may not be in use. The disk may or may not be full. The disk may or may not be remotely mounted on another computer over the network. When a write occurs, it may or may not hit a random I/O error. It may have been successful or not. It may or may not have been written on a bad sector. The file may or may not be fragmented across different segments. It may or may not be cached in memory rather than really written to disk directly. It may or may not be compressed. The disk may or may not use the newest Windows FAT partition format, or it may be in an older format. It may be a disk, a floppy, a CD, or a RAM disk in memory.
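To make the point concrete, here is a minimal sketch (in Python, with a hypothetical helper name `safe_write`) of how many distinct outcomes a single "write a file" operation has to account for. The specific error categories are illustrative assumptions, not an exhaustive list:

```python
import errno
import os
import tempfile

def safe_write(path, data):
    """Attempt one file write and report which of the many
    possible failure modes occurred (illustrative sketch)."""
    try:
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push past the OS cache toward the device
        return "ok"
    except FileNotFoundError:
        return "missing directory"       # the path may or may not exist
    except PermissionError:
        return "in use or not writable"  # the file may or may not be in use
    except OSError as e:
        if e.errno == errno.ENOSPC:
            return "disk full"           # the disk may or may not be full
        return "I/O error"               # bad sector, network mount, etc.

# The same call can succeed or fail depending purely on environment:
with tempfile.TemporaryDirectory() as d:
    print(safe_write(os.path.join(d, "out.bin"), b"data"))  # ok
print(safe_write("/no/such/dir/out.bin", b"data"))          # missing directory
```

Even this sketch lumps many distinct conditions into a handful of buckets; a production routine would have far more branches.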
I think you see my point. There are so many layers of complexity and abstraction, that there is no way to predict exactly what a computer will do. All these variables can occur in any combination. They may or may not work. They may or may not be faster or slower than last time. The program may or may not recover from an unusual error or delay compared to previous runs. When running it again, it may or may not run into the same situation.
Most programs do not recover from all possible errors or delay states. They work 99% of the time, but randomly fail or behave differently under different conditions. Anybody who uses the Windows operating system will tell you that computers are not reliable or predictable under most circumstances.
The main reason these processes are unpredictable is that the disk is a shared resource among dozens of simultaneous processes. The timing of each depends on memory fragmentation, disk fragmentation, CPU load, and micro-variations in CPU processor speed. There are so many variables that it is virtually impossible to duplicate the exact environment again. Add to that networking or human input, and the system becomes totally unpredictable. Even sunspots and random background radiation can affect network transmission speeds.
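The timing variability described above is easy to demonstrate: the sketch below (my own illustration, not from the original post) times two runs of byte-identical code on the same machine. Cache state, CPU load, and scheduler decisions almost always make the two wall-clock measurements differ:

```python
import time

def timed(fn):
    """Return the wall-clock duration of one call to fn."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def work():
    # Identical work on every call.
    sum(range(100_000))

# Two runs of the same code rarely take the same time;
# the environment, not the program, decides the difference.
t1 = timed(work)
t2 = timed(work)
print(t1, t2)
```

The result is repeatable only in the statistical sense: the program is deterministic, its timing is not.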
Back to the original reference: computers are not predictable. What is really occurring is that computers are abstracted. All of the computers can be programmed with the same source code. This does indeed allow them to be mass produced. However, they do not all function the same given the same source code. In the military, we had problems with rounding errors between different computers or chip sets. Even standard mathematical calculations do not come out the same on different machines, because the rounding differs. One divided by three times three should be one again. Some systems say yes. Others will say 0.9999999... and others will round in various other ways. That's just with a simple division. Start doing sine waves, electronics, and real-time calculations, and the results become even more fuzzy.
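The one-divided-by-three example can be reproduced on a single machine just by switching arithmetic modes; this sketch in Python shows three different answers to "what is 1/3 times 3", depending on whether integer, binary floating-point, or fixed-precision decimal arithmetic is used:

```python
from decimal import Decimal, getcontext

# Integer division discards the remainder outright:
print(1 // 3 * 3)        # 0

# IEEE-754 binary floats cannot represent 1/3 (or 0.1) exactly,
# so rounding quietly leaks into ordinary arithmetic:
print(0.1 + 0.2 == 0.3)  # False

# Decimal arithmetic rounds differently again, and the answer
# changes with the configured precision:
getcontext().prec = 7
print(Decimal(1) / Decimal(3) * 3)  # 0.9999999
```

Different machines or chip sets effectively pick different points in this space, which is exactly how the same source code yields different numerical results.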
This archive was generated by hypermail 2b30 : Mon May 28 2001 - 09:59:44 MDT