Thursday, June 19, 2014

Display HBA and WWN in HP-UX


First, display the available HBAs installed in your system:

# ioscan -kfnC fc
Class I  H/W Path     Driver S/W State   H/W Type     Description
====================================================================
fc    0  0/3/0/0/0/0  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp0
fc    1  0/3/0/0/0/1  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp1
fc    2  0/7/0/0/0/0  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp2
fc    3  0/7/0/0/0/1  fclp   CLAIMED     INTERFACE    HP AD355-60001
                         /dev/fclp3

The ioscan command above shows the devices fclp0 to fclp3. Then run fcmsutil to get the WWNs for each adapter:

# /opt/fcms/bin/fcmsutil /dev/fclp0

Vendor ID is = 0xXXXX
Device ID is = 0xXXXX
PCI Sub-system Vendor ID is = 0xXXXX
PCI Sub-system ID is = 0xXXXX
Chip version = 2
Firmware Version = 2.70X5 SLI-3 (Z3F2.70X5)
EFI Version = ZE3.21A3
EFI Boot = ENABLED
Driver-Firmware Dump Available = NO
Driver-Firmware Dump Timestamp = N/A
Topology = PTTOPT_FABRIC
Link Speed = 4Gb
Local N_Port_id is = 0xXXXXXX
Previous N_Port_id is = None
N_Port Node World Wide Name = 0x0000000000000000
N_Port Port World Wide Name = 0x0000000000000001
Switch Port World Wide Name = 0x1000000000000000
Switch Node World Wide Name = 0x1000000000000001
Driver state = ONLINE
Hardware Path is = 0/3/0/0/0/0
Maximum Frame Size = 2048
Driver Version = @(#) FCLP: PCIe Fibre Channel driver (FibrChanl-02), B.11.31.0909, Jun  5 2009, FCLP_IFC (3,2)
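
If you only need the WWNs, a quick loop over the adapters pulls them out of the fcmsutil output (this assumes the device files /dev/fclp0 through /dev/fclp3 from the ioscan output above):

for dev in /dev/fclp0 /dev/fclp1 /dev/fclp2 /dev/fclp3
do
    echo "$dev"
    # print only the node/port World Wide Name lines
    /opt/fcms/bin/fcmsutil $dev | grep "World Wide Name"
done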

Saturday, June 14, 2014

Load-Generating Tools

One important thing to keep in mind when load-testing is that there are only so many socket connections a single machine can open in Linux. This kernel limitation is known as the Ephemeral Ports Issue. You can extend the range (to some extent) in /etc/sysctl.conf, but basically a Linux machine can only have about 64,000 sockets open at once. So when load testing, we have to make the most of those sockets by making as many requests as possible over a single connection. In addition to that, we'll need more than one machine to do the load generation. Otherwise, the load generators will run out of available sockets and fail to generate enough load.
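
For example, the relevant knobs in /etc/sysctl.conf look something like this (the exact values here are just a reasonable starting point, not gospel):

# /etc/sysctl.conf -- widen the ephemeral port range on the load generators
net.ipv4.ip_local_port_range = 1024 65535
# let outgoing connections reuse sockets stuck in TIME_WAIT
net.ipv4.tcp_tw_reuse = 1

Apply the changes with sysctl -p.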

Apache Bench

I started with 'ab', Apache Bench. This is the simplest general-use http benchmarking tool that I know of. And it ships with Apache, so it's probably already on your system. Unfortunately, I could only get about 900 requests/sec using this. I've seen other people get up to 2,000 with it, but I could tell right away that 'ab' wasn't the tool for this job.
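
For reference, a typical ab run looks something like this (the target IP and the /test.txt page simply match the examples later in this post; -n is the total number of requests and -c the concurrency):

ab -n 100000 -c 100 http://192.168.122.10/test.txt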

Httperf

Next, I tried 'httperf'. This tool is more powerful, but still relatively simple and limited in its capabilities. Figuring out how many req/sec you'll be generating is not as straightforward as just passing it a number. It took me several tries to get more than a couple hundred req/sec. For example:

This creates 100,000 sessions, at a rate of 1,000 per second. Each session makes 5 calls, which are spread out by 2 seconds.

httperf --hog --server=192.168.122.10 --wsess=100000,5,2 --rate 1000 --timeout 5
Total: connections 117557 requests 219121 replies 116697 test-duration 111.423 s

Connection rate: 1055.0 conn/s (0.9 ms/conn, <=1022 concurrent connections)
Connection time [ms]: min 0.3 avg 865.9 max 7912.5 median 459.5 stddev 993.1
Connection time [ms]: connect 31.1
Connection length [replies/conn]: 1.000

Request rate: 1966.6 req/s (0.5 ms/req)
Request size [B]: 91.0

Reply rate [replies/s]: min 59.4 avg 1060.3 max 1639.7 stddev 475.2 (22 samples)
Reply time [ms]: response 56.3 transfer 0.0
Reply size [B]: header 267.0 content 18.0 footer 0.0 (total 285.0)
Reply status: 1xx=0 2xx=116697 3xx=0 4xx=0 5xx=0

CPU time [s]: user 9.68 system 101.72 (user 8.7% system 91.3% total 100.0%)
Net I/O: 467.5 KB/s (3.8*10^6 bps)
Eventually, I was able to get 6,622 connections/sec with these settings:

httperf --hog --server 192.168.122.10 --num-conn 100000 --ra 20000 --timeout 5
(A total of 100,000 connections are created, at a fixed rate of 20,000 per second.)
It has potential, and a few more features than 'ab'. But not quite the heavy-lifter that I need for this project. I need something that supports multiple load-testing nodes in a distributed fashion. Hence, my next attempt: Tsung.

Installing Tsung in CentOS 6.2

The first thing you'll need is the EPEL repository (for Erlang), so set that up before continuing. Once that's done, install the required packages on each of the nodes that you'll be using to generate load. If you don't already have passwordless SSH keys set up between the nodes, do that too.
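One way to handle both of those steps looks roughly like this (the EPEL release RPM version/URL changes over time, and loadnode2 is just a stand-in for your other load-generating nodes):

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
ssh-keygen -t rsa
ssh-copy-id root@loadnode2    # repeat for each additional load-generating node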
yum -y install erlang perl perl-RRD-Simple.noarch perl-Log-Log4perl-RRDs.noarch gnuplot perl-Template-Toolkit firefox
Download the latest Tsung from Github, or from their website.
wget http://tsung.erlang-projects.org/dist/tsung-1.4.2.tar.gz
Untar and compile.
tar zxfv  tsung-1.4.2.tar.gz
cd tsung-1.4.2
./configure && make && make install
Copy the example config into ~/.tsung. This is the location of the Tsung config and log files.
cp  /usr/share/doc/tsung/examples/http_simple.xml /root/.tsung/tsung.xml
You can edit this file to your specifications, or use the one that works for me. This is my config that, after much trial and error, now generates 5 million http requests per second, when used with 7 distributed nodes.
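The skeleton of that config looks like the following. Phase 1 and the request loop match the description below; the client hosts, maxusers value, and the numbers in the later phases are placeholders you would adjust for your own load-generating nodes.

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">
  <!-- hosts that generate the load; add one <client> per load node -->
  <clients>
    <client host="loadnode1" weight="1" cpu="4" maxusers="40000"/>
    <client host="loadnode2" weight="1" cpu="4" maxusers="40000"/>
  </clients>
  <!-- the server (or cluster IP) under test -->
  <servers>
    <server host="192.168.122.10" port="80" type="tcp"/>
  </servers>
  <!-- how quickly simulated users arrive -->
  <load>
    <arrivalphase phase="1" duration="10" unit="minute">
      <users maxnumber="15000" arrivalrate="8" unit="second"/>
    </arrivalphase>
    <arrivalphase phase="2" duration="10" unit="minute">
      <users maxnumber="15000" arrivalrate="8" unit="second"/>
    </arrivalphase>
    <arrivalphase phase="3" duration="10" unit="minute">
      <users maxnumber="15000" arrivalrate="8" unit="second"/>
    </arrivalphase>
  </load>
  <!-- what each user does once it has arrived -->
  <sessions>
    <session probability="100" name="http_test" type="ts_http">
      <for from="1" to="10000000" incr="1" var="counter">
        <request><http url="/test.txt" method="GET" version="1.1"/></request>
      </for>
    </session>
  </sessions>
</tsung>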
It's a lot to take in at first, but it's really quite simple once you understand it. 
  •  <client> is simply the host(s) to run Tsung on. You can specify IPs, and the max number of CPUs that you want Tsung to use. You can also set a limit on the number of users that the node will simulate with maxusers. Each of these users will perform an operation that we will define later.
  •  <servers> is the name(s) of the [http] server you want to test. We will be using this option to test the cluster IP, as well as individual servers.
  •  <load> defines when our simulated users will "arrive" at our website, and how quickly they will arrive.
    •  <arrivalphase> In phase 1, which lasts 10 minutes, 15,000 users will arrive, at a rate of 8 per second.

    • There are two more arrivalphases, in which users arrive in a similar fashion. 
    • Altogether, these arrivalphases make up a <load>, which controls how many requests per second we'll be generating.
  •  <session> This section defines what those users will be doing once they've arrived at your website.
  • probability allows you to define random things that users might do. Sometimes they may click this, other times they may click that. Probabilities must add up to equal 100%.
  • In the configuration above, the users only ever do one thing, so it has a probability of 100%. 
  •  <for> This is what the users do, 100% of the time. They loop through 10,000,000 times and <request> a single web page, /test.txt.
  • This looping construct allows us to use fewer user connections to achieve a very high number of requests per second.
Once you've got that in place, you can create this handy alias to quickly view your Tsung reports.
vim ~/.bashrc
alias treport="/usr/lib/tsung/bin/tsung_stats.pl; firefox report.html"
source ~/.bashrc
Then start up Tsung.
[root@loadnode1 ~] tsung start
Starting Tsung
"Log directory is: /root/.tsung/log/20120421-1004"
And view the report when finished.
cd /root/.tsung/log/20120421-1004
treport

Using Tsung to Plan Your Cluster Build

Now that we have a powerful enough load-testing tool, we can plan the rest of the cluster build:
  1. Use Tsung to test a single http server. Get a base benchmark.
  2. Tune the heck out of those web servers, testing with Tsung regularly to see improvements.
  3. Tune the TCP sockets of those systems to obtain optimal network performance. Again, test, test, test.
  4. Build the LVS cluster, which contains those fully-tuned web servers.
  5. Stress-test LVS by using Tsung on the cluster IP.
In the next two articles, I'll show you how to get your web server performing at top speed, and how to bring it all together with the LVS cluster software.

Configuring A High Availability Cluster (Heartbeat) On CentOS



Assign the hostname node01 to the primary node, with IP address 172.16.4.80 on eth0.
Assign the hostname node02 to the slave node, with IP address 172.16.4.81.



Note: on node01
uname -n
must return node01.
On node02
uname -n
must return node02.
172.16.4.82 is the virtual IP address that will be used for our Apache webserver (i.e., Apache will listen on that address).
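Both nodes should also be able to resolve each other's hostnames. One simple way to do that (using the addresses above) is to add them to /etc/hosts on both nodes:

cat >> /etc/hosts <<EOF
172.16.4.80   node01
172.16.4.81   node02
EOF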

Configuration

1. Download and install the heartbeat package. In our case we are using CentOS so we will install heartbeat with yum:
yum install heartbeat
or download these packages:
heartbeat-2.08
heartbeat-pils-2.08
heartbeat-stonith-2.08
2. Now we have to configure heartbeat on our two-node cluster. We will deal with three files:
authkeys
ha.cf
haresources
3. Before we start configuring, there is one more thing to do: copy these example files to the /etc/ha.d directory. In our case we copy them as given below:
cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/
4. Now let's start configuring heartbeat. First we will deal with the authkeys file; we will use authentication method 2 (sha1). For this, make the following changes in the authkeys file:
vi /etc/ha.d/authkeys
Then add the following lines:
auth 2
2 sha1 test-ha
Change the permission of the authkeys file:
chmod 600 /etc/ha.d/authkeys
5. Moving on to our second file (ha.cf), which is the most important. Edit the ha.cf file with vi:
vi /etc/ha.d/ha.cf
Add the following lines in the ha.cf file:
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node node01
node node02
Note: node01 and node02 are the hostnames returned by the uname -n command on each node.
6. The final piece of work in our configuration is to edit the haresources file. This file contains the information about the resources we want to make highly available. In our case we want the webserver (httpd) to be highly available:
vi /etc/ha.d/haresources
Add the following line:
node01 172.16.4.82 httpd
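For reference, the fields in that line are the preferred node (which must match uname -n), the virtual/cluster IP that heartbeat brings up, and the resource to manage (the httpd init script). A comment-annotated version would look like this:

# <preferred-node>  <virtual-IP>  <resource>
node01 172.16.4.82 httpd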
7. Copy the /etc/ha.d/ directory from node01 to node02:
scp -r /etc/ha.d/ root@node02:/etc/
8. As we want httpd to be highly available, let's start configuring httpd:
vi /etc/httpd/conf/httpd.conf
Add this line in httpd.conf:
Listen 172.16.4.82:80
9. Copy the /etc/httpd/conf/httpd.conf file to node02:
scp /etc/httpd/conf/httpd.conf root@node02:/etc/httpd/conf/
10. Create the file index.html on both nodes (node01 & node02):
On node01:
echo "node01 apache test server" > /var/www/html/index.html
On node02:
echo "node02 apache test server" > /var/www/html/index.html
11. Now start heartbeat on the primary node01 and slave node02:
/etc/init.d/heartbeat start
12. Open a web browser and enter the URL:
http://172.16.4.82
It will show node01 apache test server.
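You can also check it from the command line instead of a browser (assuming curl is installed on the client machine):

curl http://172.16.4.82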
13. Now stop the heartbeat daemon on node01:
/etc/init.d/heartbeat stop
In your browser type in the URL http://172.16.4.82 and press enter.
It will show node02 apache test server.
14. We don't need to create a virtual network interface and assign an IP address (172.16.4.82) to it. Heartbeat will do this for you, and start the service (httpd) itself. So don't worry about this.
Don't use the IP addresses 172.16.4.80 and 172.16.4.81 for services. These addresses are used by heartbeat for communication between node01 and node02. If either of them is used for services/resources, it will disturb heartbeat and the cluster will not work.