I am starting to learn the ESXi command line tools. I posted an article on it at http://www.whiteboardcoder.com/2015/04/esxi-51-command-line.html
In this post I am trying to poke around and figure out some
commands that I can use for quick diagnostics of the system.
Simple commands
I am not going into how I will use these; I am just making a quick list of commands that I think I can reuse later to query my system's health.
List of Physical Hard drives
> esxcli storage core device list | grep t10 | grep -v ":"
Total Space in TiB of all drives
> esxcli storage core device list | grep "[[:space:]]\+ Size" | sed 's/.*: //' | awk '{ SUM += $1} END { print SUM/(1024*1024) }'
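To sanity-check the awk stage of that pipeline, you can feed it some sample Size values by hand. The numbers below are made up (roughly matching my drives), so this is just a sketch of the summing logic, not real esxcli output:

```shell
# Sum sample Size values and convert with the same 1024*1024 divisor as above.
# The three values are made-up stand-ins for real esxcli Size fields.
printf '1907729\n1430799\n3815447\n' | awk '{ SUM += $1 } END { printf "%.2f\n", SUM/(1024*1024) }'
```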
I got back 6.8 TiB. (I have 2 TB + 1.5 TB + 4 TB = 7.5 TB of drives. Remember TiB != TB, so those are pretty close numbers.)
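To see why 7.5 TB comes out near 6.8 TiB, you can do the conversion directly: a TB is 10^12 bytes while a TiB is 2^40 bytes. A quick awk one-liner shows the math:

```shell
# Convert the drives' advertised decimal capacity (7.5 TB) to binary TiB.
# 1 TB = 10^12 bytes; 1 TiB = 2^40 bytes.
awk 'BEGIN { printf "%.2f\n", (2 + 1.5 + 4) * 10^12 / 2^40 }'
```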
Health data for all hard drives
There is a little script written for this
> /usr/lib/vmware/vm-support/bin/smartinfo.sh
Hard Drive Temperatures
> esxcli storage core device list | grep t10 | grep -v ":" | while read i; do esxcli storage core device smart get -d $i; done | grep "Drive Temp" | awk '{print $3}'
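If you only care about the hottest drive, the last awk stage can track a maximum instead of printing every value. Here is a sketch of that awk logic against made-up sample temperatures (not output from a real host):

```shell
# Find the highest value in a list of drive temperatures.
# The three temperatures are sample data for illustration.
printf '34\n41\n38\n' | awk '$1 > max { max = $1 } END { print max }'
```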
Total Memory on box
This returns the total memory on the box in bytes
> vim-cmd hostsvc/hosthardware | grep memorySize | awk '{print $3}' | sed 's/,//'
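Bytes are not very readable, so you can tack on one more awk stage to convert to GiB (1 GiB = 2^30 bytes). Here is the conversion against a sample byte count (16 GiB worth), not real vim-cmd output:

```shell
# Convert a byte count to GiB; 17179869184 is a sample value (exactly 16 GiB).
echo 17179869184 | awk '{ printf "%.1f GiB\n", $1 / 2^30 }'
```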
List all the IP addresses being used.
This lists each VM along with its first reported IP address.
> vim-cmd vmsvc/getallvms | tail -n+2 | sed 's/\[data.*//g' | sed 's/ */:/g' | while read i; do echo $i | cut -d ':' -f2,3; vim-cmd vmsvc/get.guest `echo $i | cut -d ':' -f2` | grep -m 1 ipAddress; done
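That loop leans on cut to pull colon-separated fields back out of each rewritten line. As a quick reminder of how cut's field selection behaves (on a sample string, not real vim-cmd output):

```shell
# cut fields are 1-indexed; -f2,3 keeps the second and third colon-separated fields.
echo 'a:b:c:d' | cut -d ':' -f2,3
```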
CPU information
OK, I honestly could not find good details on this type of information. I could not find a simple ESXi command that returns just this type of data, nor a Linux one; the Linux commands on the box are stripped down.
ESXTOP
There is some salvation though: esxtop. It's a tool like top, but with ESX-specific info in it. You can also run it in batch mode and output the data to a file. This file is a giant CSV file. Interpreting that file is a little more involved, so I will cover it in another post.
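For reference, batch mode looks something like the sketch below: on the host you capture N samples at a fixed interval into a CSV, and the first row of that CSV is one enormous quoted header. The awk line runs locally against a made-up miniature header just to show one way to count how many columns you are dealing with:

```shell
# On the ESXi host itself (esxtop's -b batch mode, -d delay seconds, -n samples):
#   esxtop -b -d 5 -n 10 > /tmp/esxtop.csv
# The first CSV row is a huge quoted header; counting its comma-separated
# columns gives a feel for how wide the file is. Sample header, not real output:
printf '%s\n' '"time","cpu0 util","cpu1 util"' | awk -F',' '{print NF}'
```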