Using Bonnie
Bonnie spends a long time writing and reading disk files (with intermittent
progress reports), then produces a small but dense report.
There are options for controlling which disk Bonnie works with, how
big a file to use, how to label the output, and whether to generate HTML.
Getting Bonnie
First, you will have to fetch a copy of Bonnie from the
Textuality Web site; go to
http://www.textuality.com/bonnie and follow the pointers
from there.
There are other places around the Internet from which you can
get Bonnie, but this is the only site where it is maintained.
Bonnie is incredibly easy to build from source, but I make some
binary versions available as well for people who don't have C compilers
or want to save some time.
If you get the source code (which is available in shar and
gzipped tar formats), then after you've unpacked it, just try
typing
make
There is an excellent chance that this will just work.
If it doesn't, read the file named "Instructions".
The Bonnie Command Line
The Bonnie command line, given here in Unix style, is:
Bonnie [-d scratch-dir] [-s size-in-Mb] [-m machine-label] [-html]
All the things enclosed in square brackets may be left out.
The meaning of the things on this line is:
Bonnie
- The name of the program.
You might want to give it a different name.
If you are sitting in the same directory as the program, you might
have to use something like ./Bonnie.
-d scratch-dir
- The name of a directory; Bonnie will write and read scratch files
in this directory.
Bonnie does not do any special interpretation of the directory name;
it simply uses it as given.
Suppose you used -d /raid1/TestDisk; Bonnie
would write, then read back, a file whose name was something like
/raid1/TestDisk/Bonnie.34251.
If you do not use this option, Bonnie will write to and read from
a file in the current directory, using a name something like
./Bonnie.35152.
Bonnie does clean up by removing the file after using it; however,
if Bonnie dies in the middle of a run, it is quite likely that
a (potentially very large) file will be left behind.
-s size-in-Mb
- The number of megabytes to test with.
If you do not use this, Bonnie will test with a 100Mb file.
In this discussion, Megabyte means 1048576 bytes.
If you have a computer that does not allow 64-bit files, the
maximum value you can use is 2047.
It is important to use a file size that is several times the size
of the available memory (RAM) - otherwise, the operating system
will cache large parts of the file, and Bonnie will end up doing
very little I/O.
At least four times the size of the available memory is desirable.
-m machine-label
- This is the label that Bonnie will put on its report.
If you do not use this, Bonnie will use no label at all.
-html
- If you use this, Bonnie will generate a report in HTML form,
as opposed to plain text.
This is not all that useful unless you are prepared to write a
table header.
Bonnie Results
Before explaining each of the numbers, it should be noted that the
columns below labeled %CPU
may be misleading on
multiprocessor systems.
This percentage is computed by taking the total CPU time reported for
the operation, and dividing that by the total elapsed time.
On a multi-CPU system, it is very likely that application code and
filesystem code will be executing on different CPUs. On the final
test (random seeks), the parent process creates four child processes
to perform the seeks in parallel; if there are multiple CPUs it is
nearly certain that all will be involved. Thus, these numbers should
be taken as a general indicator of the efficiency of the operation
relative to the speed of a single CPU.
Taken literally, this could make a machine with 10 50-horsepower CPUs
appear less efficient than one with one 100-horsepower CPU.
Here is an example of some typical Bonnie output:
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
mystery 750 534 65.7 1236 22.5 419 17.5 564 74.3 1534 32.8 35.0 8.3
Reading the bottom line across, left to right:
mystery
- This test was run with the option
-m mystery
"mystery" is the label for the test.
750
- This test was run with the option
-s 750
Bonnie used a 750-Megabyte file to do the testing.
For the numbers to be valid, the computer had better not have had more
than about 200M of memory.
534
- When writing the file by doing 750 million putc() macro
invocations, Bonnie recorded an output rate of 534 K per second.
65.7
- When writing the file by doing 750 million putc() macro
invocations, the operating system reported that
this work consumed 65.7% of one CPU's time.
This is not very good; it suggests either a slow CPU or an inefficient
implementation of the stdio interface.
1236
- When writing the 750-Mb file using efficient block writes,
Bonnie recorded an output rate of 1,236 K per second.
22.5
- When writing the 750-Mb file using efficient block writes,
the operating system reported that this work
consumed 22.5% of one CPU's time.
419
- While running through the 750-Mb file just created, changing
each block, and rewriting it, Bonnie recorded an ability to
cover 419 K per second.
17.5
- While running through the 750-Mb file just created, changing
each block, and rewriting it, the operating
system reported that this work consumed 17.5% of one CPU's time.
564
- While reading the file using 750 million getc() macro
invocations, Bonnie recorded an input rate of 564 K per second.
74.3
- While reading the file using 750 million getc() macro
invocations, the operating system reported that this
work consumed 74.3% of one CPU's time. This is amazingly
high.
1534
- While reading the file using efficient block reads, Bonnie
reported an input rate of 1,534 K per second.
32.8
- While reading the file using efficient block reads,
the operating system reported that this work consumed
32.8% of one CPU's time.
35.0
- Bonnie created 4 child processes, and had them execute 4000
seeks to random locations in the file. On 10% of these seeks,
they changed the block that they had read and re-wrote it.
The effective seek rate was 35.0 seeks per second.
8.3
- During the seeking process, the operating
system reported that this work consumed 8.3% of one CPU's time.