Friday, July 31, 2009

CPU usage in Linux?

Hello,

Can anybody write a program in C that runs on Linux? The program would sit on the server side, listening on a port. A client would then connect to the server through that port, and the program would return the server's CPU usage to the client.

Thank you for your cooperation.

Reply:Hello,

Yes, this is possible, and you don't even have to write much code.

All you need is to enable telnet on your server. Then, via telnet, execute "free" or "vmstat".

If you don't like the telnet solution, you would have to do it either via CGI or via sockets, creating your own service application.

But I think telnet with adequate security settings will be good enough for this problem. =)

Cheers and Merry Christmas in advance. =)

PS: check out the link I am sending.
Reply:You know, you could reinvent the wheel, but this sort of software has been around for ages.

I use Zabbix to monitor servers on remote sites:

http://www.zabbix.com/
Reply:I like that telnet suggestion, although "ssh" would be more secure.

Also try "uptime" or "top".
Reply:I *could* write such a program, but why would I reinvent the wheel? Try "man rstatd". You may have to install the "r" commands first, of course (distribution dependent). The main problem here is the definition of "CPU usage". Generally, Linux (and Unix) use the concept of "load" instead. So, if rstatd (actually rpc.rstatd) is running on the server, "rup" will report the load averages.*

And, of course, those statistics are also available via SNMP. Or, use rsh (or ssh) and run the iostat command remotely (and extract the %usr, %sys, and %idle figures from that).
If this is an exercise, I suggest W. Richard Stevens' "Unix Network Programming". You will find examples in these books that are "ready to go". (These books should be in your university library, and I suggest acquiring copies for your own use -- you will find them valuable as a professional resource.)
* A postscript on load average vs. CPU% reporting.
Generally, if you are the ONLY user of a system, your jobs will get all available CPU. Which means that CPU% is simply not interesting! You get what the hardware will deliver... If it's too slow, pick faster hardware.
If you have access to a number of machines, and each of those machines also supports a number of users, you will want to know what fraction of the CPU and other resources you would get if you put a job on a given machine. That is, "will (or may)", in a future sense.
To attempt to answer this question, the concept of "load average" is used. The load average is a set of three numbers, representing the exponentially damped average load over the last 1, 5, and 15 minutes. The idea is to give you a trend (which you can extrapolate into the future yourself). The next thing is the definition of "load". This is the number of processes waiting for, or using, the CPU(s). In Linux, it also includes processes in an uninterruptible ("I/O wait") sleep state.

When YOUR process is running (say, on a single-processor box), it gets 100% of the CPU for that duration. But, given the choice between scheduling a process on a 2 GHz processor with a load average of (5.0 5.0 5.0) or on a 1 GHz processor with a load average of (0.0 0.0 1.0), it would probably be better to pick the 1 GHz processor (your job should complete in around half the time!).

CPU% reporting is also available, but it is far less useful (%user, %system, %idle). If a new job is introduced, it does NOT tell you how much of the CPU you can expect to get...
Reply:I wonder if something like SNMP would provide you with this functionality, plus a whole lot more.

aster
