Introducing the STHbench.sh benchmark script


Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
I think we should do some analysis of what other sites are doing and add some of those same benches where it makes sense to have comparability. I think we need to focus on integrating those value-add benchmarks that we have all talked about: nginx, apache, memcached, etc.

What do you guys think?
I do think nginx/ apache and memcached need to get done next. Would also posit that we may want to think about running pybench and 7-zip without PTS, since that is a big piece of divergent code between Ubuntu and other distributions right now.

The flip side to this is that this is just a script so users can run benchmarks on their own. It is also a decently easy stress test for the CPU/ memory. The real goal is that if anyone wants to test themselves they can just fire up a Linux distribution and run with three commands.
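To make the three-command idea concrete, the flow might look like this (the URL and script name are placeholders, and a local stub stands in for the real download here):

```shell
#!/bin/bash
# Stand-in for command 1: wget http://example.com/STHbench.sh
# (placeholder URL; substitute wherever the script ends up hosted)
printf '#!/bin/bash\necho "STHbench would run here"\n' > STHbench.sh

chmod +x STHbench.sh   # command 2: make it executable
./STHbench.sh          # command 3: run it
```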

On the comparison side, you are 100% right. Once we get this stable with those few open benchmarks, the next step is to get the parser/ uploader completed.
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Pseudo-coding for apache and nginx:
Install Apache
Install nginx
Config Apache port 80 - likely want to have a downloaded config file here
Config nginx port 81 (or 82 if you want) - likely want to have a downloaded config file here
Use siege against Apache
Use siege against nginx
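A rough bash sketch of those steps (the siege flags, ports, and the "Transaction rate:" parsing are assumptions based on siege's usual summary output, not a tested setup):

```shell
#!/bin/bash
# Hypothetical sketch: hit both already-configured servers with siege and
# pull out transactions/sec. Assumes apache on :80 and nginx on :81.

# siege normally ends with a summary line like:
#   "Transaction rate:       123.45 trans/sec"
parse_rate() {
    grep 'Transaction rate' | tr -s ' ' | cut -d' ' -f3
}

run_siege() {   # $1 = label, $2 = URL
    echo "$1: $(siege -b -t30S "$2" 2>&1 | parse_rate) trans/sec"
}

# run_siege apache http://localhost:80/
# run_siege nginx  http://localhost:81/
echo "Transaction rate:       123.45 trans/sec" | parse_rate   # parser demo, prints 123.45
```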

I wonder if we can find the code from here: Siege Benchmarks #1 - Centmin Mod - Menu based Nginx installer for CentOS servers (saw while googling)
JoeBlog Siege Home

If we are already downloading config files, maybe we can also provide some sort of test site so it isn't just static HTML or phpinfo? I wonder if there's something like WordPress or another tool we can download. WP would require a MySQL-compatible DB to run. I'm not sure that is a bad thing, other than that we would be installing a lot more stuff.
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Here is a try at apache that I am going to let run while watching football:

I downloaded apache 2.4.7 but it was a no-go since APR was missing (I even tried "./configure --with-included-apr")

Code:
#!/bin/bash
wget -N http://www.poolsaboveground.com/apache//httpd/httpd-2.4.7.tar.bz2 http://apache.mesi.com.ar//apr/apr-1.5.0.tar.gz http://apache.mesi.com.ar//apr/apr-util-1.5.3.tar.gz ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.32.tar.gz
tar xjf httpd-2.4.7.tar.bz2
tar xfz apr-1.5.0.tar.gz
tar xfz apr-util-1.5.3.tar.gz
tar xfz pcre-8.32.tar.gz

# Drop APR/APR-util into the httpd source tree so --with-included-apr finds them
mv apr-1.5.0 httpd-2.4.7/srclib/apr
mv apr-util-1.5.3 httpd-2.4.7/srclib/apr-util

cd pcre-8.32
./configure --prefix=/usr/tmp/pcre
make
make install
cd ..

cd httpd-2.4.7
./configure --prefix=/usr/tmp/apache --with-included-apr --with-pcre=/usr/tmp/pcre
make
make install

cd /usr/tmp/apache
sudo bin/apachectl start
Going to let that run whilst I watch football!
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Quick question for you all, is this script meant to run on clean systems only or also on existing systems?

I'm quite torn on the sudo requirement; there are ways to get around it, though troublesome. I'm not a fan of running software as root, especially if a) I don't know what it fully does without reading through it all, or b) I don't want packages installed that could conflict with what is currently installed or are generally not needed. I had to reread the thread to see what the history was; it really is there to remove dependencies and make it easier for users, so I understand the intent.

An example of conflicting packages could be the Apache example above: what if I have an existing install? Instead, you could build/run apache on an unprivileged port and get the same effect. Incompatible versions of glibc and whatnot can also be dangerous. I don't necessarily like things installing packages on the minimal OS installs for servers that I've deployed. I don't even like making random directories w/o telling the user. As an example, to run some of these tests today, I had to compile on head nodes and transfer over to a compute node, since those are minimal installs w/o compilers or libraries. Or if I were curious about how a machine runs at a cluster or a hosting company, I wouldn't necessarily have root at the site, but because I am a geek I'd want to run it.

On the other hand, as you get to the list of network tests and whatnot, we're going to start hitting firewall rule issues and SELinux problems on machines where you don't have admin rights. That's when the sudo helps, but I wouldn't want to give carte-blanche full su rights to a script containing lots of parts. Best practices and whatnot...

Just looked at PTS; you can definitely get around it by doing a LiveCD, and PTS installs everything in its own area, I think. I should go and grab it.

I can't think of the last time that I had to give admin rights to run a benchmark on a Windows machine. I do know that I get prompted to install dependencies, but once those are in, I shouldn't need to give admin rights to run.

Just random thoughts.
 

Jeggs101

Well-Known Member
Dec 29, 2010
1,529
241
63
Chuckleb - I'm totally onboard with what you are saying.

Here's the thing - if you use pre-compiled binaries, you are x86 only, no ARM.

nitrobass24 already suggested the great idea of eventually making this either a LiveCD or LiveCD-compatible, running it all in memory.

When you install Linux, you usually do not have all the build tools you need, so you basically need su for the yum/apt-get installs.

Now, I also think you bring up a really (really) good point:
Can we do a setup script and then a benchmark script? Or we could probably fork a benchmark-only script if we keep all of the necessary su commands in the first part.

I really like the 3 command full update/ suite thing but you are right, we do need a way to use this on existing systems.

I think we also need to get to a few command options. Right now, if you run the script it updates/ downloads/ installs/ runs/ deletes, and a rerun updates/ downloads/ installs/ runs/ deletes all over again. That seems like way too much downloading for a second run. If we had flags, we could download the script once, then just run certain benchmarks.
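Sketching the flag idea (all names here are hypothetical placeholders, and the `set --` line just simulates command-line arguments for the demo):

```shell
#!/bin/bash
# Hypothetical flag front end: -s does the one-time download/install,
# the other flags run individual benchmarks without re-downloading.

run_7zip()   { echo "running 7zip"; }     # placeholder test hooks
run_apache() { echo "running apache"; }
run_nginx()  { echo "running nginx"; }

do_setup=0
set -- -z -n   # demo only: pretend the user passed -z -n

while getopts "szan" opt; do
    case $opt in
        s) do_setup=1 ;;
        z) run_7zip ;;
        a) run_apache ;;
        n) run_nginx ;;
        *) echo "usage: $0 [-s] [-z] [-a] [-n]" >&2; exit 1 ;;
    esac
done

if [ "$do_setup" -eq 1 ]; then
    echo "setup/download would run here"
fi
```

Rerunning with just `-z` would then skip every download and go straight to the test.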
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
I actually don't mind compiling... You get cleaner and more optimized builds that way. We spend lots and lots of time building toolchains for our users just to squeeze out an additional few percent. Wait till you tune for MPI and math libraries... yuck. I had to recompile stream for every system I ran on due to the optimizer flags. I definitely don't advocate a binary-only model, and compilers are relatively easy to install; we just need to make it optional, in case the user can get a binary version they trust and want to use.

Yes, it is easy to leave previously downloaded files as a cache, or in my case, I could download to an NFS mount point and share. Etc. We definitely need a way to override defaults; half of my tweaks for core count and whatnot are my opinion, and others can do things they like.
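A small cached-fetch helper along those lines might look like this (a sketch; the STH_CACHE variable and the URL are invented for illustration, not part of the existing scripts):

```shell
#!/bin/bash
# Hypothetical cached-fetch helper: skip the wget when a previous run (or an
# NFS-shared cache directory) already has the tarball.

CACHE_DIR=${STH_CACHE:-$PWD}   # point at an NFS mount to share across nodes

fetch_cached() {   # $1 = URL
    local file
    file=$CACHE_DIR/$(basename "$1")
    if [ -e "$file" ]; then
        echo "cached: $file"
    else
        wget -O "$file" "$1"
    fi
}

# Demo without touching the network: pretend a previous run left the file behind
touch "$CACHE_DIR/example-1.0.tar.gz"
fetch_cached http://example.com/example-1.0.tar.gz
rm -f "$CACHE_DIR/example-1.0.tar.gz"
```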

My overall comment is that I'll use it if I trust it not to break things. I don't trust others to make root decisions for me; I would rather blame myself if things break. Or I could run it on systems I don't care about, but that's a lot fewer data points.
 

nitrobass24

Moderator
Dec 26, 2010
1,087
131
63
TX
Chuckleb - You make some excellent points. But I wanted to answer your initial question about whether this is to be run on an existing installation or a clean installation.

The genesis behind this is to create a set of benchmarks to be run on clean, minimal Ubuntu installations for machines that Patrick gets to review on the main site. I personally prefer RHEL-based systems, so I took the initiative and made it slightly more universal, and I think that is where I probably threw this thing off-course. :)

We need to keep this focused on the initial intent: to provide a baseline of tests that can easily and quickly be run on a clean system for review purposes. I think that if we can get the benchmark to a stable state (feature-wise and operationally) it will be easy enough to transform into a LiveCD/USB stick, which could then be run on existing systems without impacting existing installations, downloading and running code as root, etc.
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Good points nitrobass24.

The only thing to add is that while the goal was certainly to get a script that could be used to run against new hardware, the other goal is that anyone should be able to reproduce/ compare on a fresh system. The LiveCD/ USB stick would certainly help, but that is probably a quick next step after the main script is done (it should not take all that long really and would be extra useful).

I think we are still at the point where we are getting the benchmark to a stable state. From a test perspective we still have:
1. memcached
2. apache
3. nginx
4. possibly with an in-memory mysql application with #2 or #3
5. 7-zip non-pts
6. pybench non-pts

With that in mind, this running on ARM is a major plus. That is coming to the lower end of the market.

After those ~6 items are done, we are probably at the minimum viable script point. From there, the next steps are:
1. Ability to parse script to get data easily
2. Option to upload script
3. Option to run script multiple times
4. Option to run only selected tests
5. LiveCD/ USB

With CES this week I cannot test as much as I would want. Still, I think we are fairly close. I do want to get the next few done at least by the time Broadwell comes out. Also, if we can get the 1-6 benchmark list done, I will have the opportunity to at least ask if I can run it on new upcoming super-high-end multi-processor system(s) on January 16. I am under embargo on what that might be, but it is unreleased hardware. Not 100% sure I will get to run my benchmark, but I can always ask.

Jeggs101 - gave that apache version a shot to no avail.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
p7zip + C6100 comparisons

p7zip tests. Feel free to move the bz2 file Patrick ;). Added a quick check to see if the bz2 already exists. Requires g++ to compile; the build is parallel so it's fast. I chose to highlight the average results.

A quick look at the results shows 7zip to be very parallel-friendly. I've updated my spreadsheet of results in case anyone wants to compare or add results; I figure it helps to see how the numbers stack up and whether tests are running consistently.

http://goo.gl/t96G4f

Also, if anyone is interested in what different CPUs, HT and ESX virtualization does to the C6100, lines 5-9 are quite fun to interpret. Seems my instance of ESX halves the memory bandwidth as an example, but doesn't really affect the CPU performance (as expected).

Anyway, I might work on some of the other requested ones if I can get some time. That last batch of web/db/cache tests are all pretty related; I just need to figure out how I can do it safely on what I have instead of firing up another VM. I'm lazy tonight. ;)

Code:
#!/bin/bash


if [ ! -e p7zip_9.20.1_src_all.tar.bz2 ]
then
        wget https://dl.dropboxusercontent.com/u/124184/p7zip_9.20.1_src_all.tar.bz2
fi


tar xvfj p7zip_9.20.1_src_all.tar.bz2
cd p7zip_9.20.1
make -j"$(nproc)" >> /dev/null 2>&1   # parallel build, silenced


echo "Starting 7zip benchmark, this will take a while"
bin/7za b > output.txt   # overwrite so a rerun doesn't double up the Avr lines


compressmips=$(grep Avr output.txt | tr -s ' ' |cut -d" " -f4)
decompressmips=$(grep Avr output.txt | tr -s ' ' |cut -d" " -f7)


echo "Compress speed (MIPS):" $compressmips
echo "Decompress speed (MIPS):" $decompressmips
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Fun side note. Running these tests on all of my machines has turned up a few odd results, and I'm making my admins look at how the machines are set up and why the numbers differ so much between similar gear. Seems the users never noticed since the machines are good enough for them, and we don't run cross-system benchmarks often. Probably BIOS settings... now I have a renewed personal interest in getting these done. ;)
 

MiniKnight

Well-Known Member
Mar 30, 2012
3,072
973
113
NYC
Will give that one a try. 7zip is always well threaded since it is a mature compression app.

I would not have expected half on the ESXi machine. How many cores does it have?
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Not going to happen over press room wireless. Will move tomorrow evening.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
MiniKnight said:
Will give that one a try. 7zip is always well threaded since it is a mature compression app.

I would not have expected half on the ESXi machine. How many cores does it have?

That's the dual L5520s that are common, so 8 cores with HT on, 16 threads.
 

Chuckleb

Moderator
Mar 5, 2013
1,017
331
83
Minnesota
Mysql benchmark

This one took quite a few hours. There were two ways to do it: one is a simple yum install and run; the other is what I did. I chose the clean and isolated model so that it can be run on any system, even existing ones.

This will grab source, extract to a standalone directory, run as the current user, bind to a different non-privileged port to avoid conflict, set a dummy password, and kill itself when done. One more note: performance may vary depending on where the DB is stored, so running in /dev/shm is ideal, but I leave that to the person running it, as it will run in $PWD. Not everyone has access to 16GB /dev/shm stores...

Last part, I don't know which tests are useful so I tee'd the output into a log that can be culled. You can also choose to run single tests instead of all tests, which would speed things up.

This was tricky; mysql really wants to run as root and write in /var/lib... it took hours of testing and tweaking.

I have no results published and I dare not run it on my old Athlons ;). This is a really slow test, about 27 mins to run all tests on my dual E5-2665 running in /dev/shm. I will be the first to admit that I don't necessarily play with mysql enough to know how to tune it, so there are probably flags or configs to change to make it faster/better.

Lastly, these are the canned benchmarks with mysql. There are many more, but now that we have an isolated instance, we can throw others at it if they are better/more preferred.

Code:
#!/bin/bash


cores=$(grep "processor" /proc/cpuinfo | wc -l)


# If the system has a ramdisk at /dev/shm, we recommend using that. Decided not to make it mandatory since we don't know how much RAM a system has.
# Space used after extracting and tests is about 2.5GB, and Linux defaults to 50% of RAM as ramdisk.


mysqlbuild=$PWD/chroot-mysql


if [ ! -e mysql-5.6.15.tar.gz ]
then
        wget http://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.15.tar.gz
fi


if [ ! -e cmake-2.8.12.1-Linux-i386.tar.gz ]
then
        wget http://www.cmake.org/files/v2.8/cmake-2.8.12.1-Linux-i386.tar.gz
fi


tar xfz mysql-5.6.15.tar.gz
tar xfz cmake-2.8.12.1-Linux-i386.tar.gz
cd mysql-5.6.15


echo "Building configs..."
../cmake-2.8.12.1-Linux-i386/bin/cmake -Wno-dev -DCMAKE_INSTALL_PREFIX=$mysqlbuild -DMYSQL_TCP_PORT=5615 . >> /dev/null 2>&1


# Use half the cores on the system to speed up the build, but not too many. With too many jobs, dependencies can cause the build to fail
echo "Compiling..."
make -j $(( cores / 2 > 0 ? cores / 2 : 1 )) >> /dev/null 2>&1


echo "Installing..."
make install
cd $mysqlbuild
scripts/mysql_install_db


# Create a simple config, using an unprivileged port
echo "[mysqld]" > my.cnf
echo "basedir = $mysqlbuild" >> my.cnf
echo "port = 5615" >> my.cnf


bin/mysqld_safe --defaults-file=my.cnf &
sleep 10


# Set a disposable password
./bin/mysqladmin -u root password 'newdummypassword'


echo "Running tests, this will take a long time..."
cd sql-bench
./run-all-tests --user='root' --password='newdummypassword' --host='localhost.:5615' | tee output.txt


# Cleanup Mysqld instance
kill -TERM `cat $mysqlbuild/data/$HOSTNAME.pid`


echo "All mysql databases stopped"
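As a follow-up to culling the tee'd log, a grep along these lines might pull out just the per-test totals (this assumes sql-bench prints its usual "Total time:" summary lines; the sample line below is illustrative, not real results):

```shell
#!/bin/bash
# Hypothetical post-processing for the sql-bench output captured above:
# keep only the wallclock summary lines and squeeze the whitespace.

summarize() {
    grep 'Total time:' | tr -s ' '
}

# Demo with a line in the shape sql-bench prints (values are made up):
echo "Total time: 27 wallclock secs ( 5.10 usr  1.20 sys =  6.30 CPU)" | summarize
# Real use would be: summarize < output.txt
```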
 

Patrick

Administrator
Staff member
Dec 21, 2010
12,511
5,792
113
Just tried the MySQL on a dual E5-2697 V2 system I have in the FLIR test bed. Made my.sh with the above.

Code:
patrick@e5v2:~$ chmod +x my.sh
patrick@e5v2:~$ ./my.sh
--2014-01-11 11:15:42--  http://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.15.tar.gz
Resolving dev.mysql.com (dev.mysql.com)... 137.254.60.11
Connecting to dev.mysql.com (dev.mysql.com)|137.254.60.11|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://cdn.mysql.com/Downloads/MySQL-5.6/mysql-5.6.15.tar.gz [following]
--2014-01-11 11:15:43--  http://cdn.mysql.com/Downloads/MySQL-5.6/mysql-5.6.15.tar.gz
Resolving cdn.mysql.com (cdn.mysql.com)... 23.7.200.96
Connecting to cdn.mysql.com (cdn.mysql.com)|23.7.200.96|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 32794954 (31M) [application/x-tar-gz]
Saving to: ‘mysql-5.6.15.tar.gz’


100%[======================================>] 32,794,954  6.82MB/s   in 4.6s


2014-01-11 11:15:47 (6.73 MB/s) - ‘mysql-5.6.15.tar.gz’ saved [32794954/32794954]


--2014-01-11 11:15:47--  http://www.cmake.org/files/v2.8/cmake-2.8.12.1-Linux-i386.tar.gz
Resolving www.cmake.org (www.cmake.org)... 66.194.253.19
Connecting to www.cmake.org (www.cmake.org)|66.194.253.19|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2014-01-11 11:15:48 ERROR 404: Not Found.


tar (child): cmake-2.8.12.1-Linux-i386.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
Building configs...
./my.sh: line 32: ../cmake-2.8.12.1-Linux-i386/bin/cmake: No such file or directory
Compiling...
./my.sh: line 37: make: command not found
Installing...
./my.sh: line 41: make: command not found
./my.sh: line 42: cd: /home/patrick/chroot-mysql: No such file or directory
./my.sh: line 43: scripts/mysql_install_db: No such file or directory
./my.sh: line 52: bin/mysqld_safe: No such file or directory
./my.sh: line 57: ./bin/mysqladmin: No such file or directory
Running tests, this will take a long time...
./my.sh: line 62: ./run-all-tests: No such file or directory
cat: /home/patrick/chroot-mysql/data/e5v2.pid: No such file or directory
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
All mysql databases stopped
I think the cmake site is down. I used a browser to look at the site and saw:
Notice:
Some features of the site are currently offline while we perform scheduled maintenance. Normal service will be restored soon.
 

OBasel

Active Member
Dec 28, 2010
494
62
28
Hey can I give a thought here?

Why not write everything to tmpfs (/dev/shm)?

Would that help performance a bit? Also, if the machine is powered down, it would delete everything.
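One sketch of that idea: prefer /dev/shm when it's present and writable, and fall back to the current directory otherwise (no free-space check here; as noted earlier in the thread, not everyone has a 16GB /dev/shm):

```shell
#!/bin/bash
# Hypothetical helper: run benchmarks out of tmpfs when possible, so disk
# speed drops out of the results and everything vanishes on power-off.

pick_workdir() {
    if [ -d /dev/shm ] && [ -w /dev/shm ]; then
        mktemp -d /dev/shm/sthbench.XXXXXX
    else
        mktemp -d "$PWD/sthbench.XXXXXX"
    fi
}

workdir=$(pick_workdir)
echo "working in $workdir"
# ... run the benchmark with its data under $workdir ...
rmdir "$workdir"   # demo cleanup
```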