1, Free command –> The most effective tool for finding memory usage and other details. The -m option displays all values in MB.
             total       used       free     shared    buffers     cached
Mem:         22528      14872       7655       1048          0      13497
-/+ buffers/cache:       1374      21153
Swap:         4096       1461       2634
2, /proc/meminfo –> Another way is to read the /proc/meminfo file. The important values can be found there. Please check the sample result.
MemTotal: 23068672 kB
MemFree: 7839488 kB
Cached: 13890932 kB
SwapTotal: 4194304 kB
SwapFree: 2697752 kB
Shmem: 1143948 kB
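The fields above are easy to pull out with a short awk one-liner. This is a sketch that feeds the sample values in via a here-doc so it runs anywhere; on a live system, point awk at /proc/meminfo itself instead of the here-doc.

```shell
# Compute free-plus-cached memory in MB from /proc/meminfo-style input.
# Sample values are taken from the output above; on a real box, replace
# the here-doc with:  awk '...' /proc/meminfo
awk '/^MemFree:/ {free=$2} /^Cached:/ {cached=$2}
     END {printf "%d MB free (including cache)\n", (free+cached)/1024}' <<'EOF'
MemTotal:       23068672 kB
MemFree:         7839488 kB
Cached:         13890932 kB
EOF
```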
3, vmstat –> The vmstat command with the -s option lays out memory usage statistics, much like /proc/meminfo.
23068672 total memory
15195280 used memory
7472596 active memory
7478680 inactive memory
7873392 free memory
0 buffer memory
13855964 swap cache
4194304 total swap
1496552 used swap
2697752 free swap
4, top and htop –> The top and htop commands will also give all this information. Sample output from top is given below.
Mem: 23068672k total, 15279760k used, 7788912k free, 0k buffers
Swap: 4194304k total, 1496184k used, 2698120k free, 13857660k cached
5, RAM information –> To find out hardware information about the installed RAM, use the dmidecode command. It reports lots of detail about the installed memory modules.
dmidecode -t 17
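As a rough illustration of what to do with that output, the per-module Size: fields can be summed to get the total installed RAM. The module sizes below are made-up sample values fed in via a here-doc; on a real system, pipe `dmidecode -t 17` (run as root) into the awk filter, and note that newer dmidecode versions may report sizes in GB rather than MB.

```shell
# Sum the per-module "Size: N MB" lines from dmidecode -t 17 output.
# Sample output is simulated with a here-doc (sizes are illustrative);
# on a real system:  dmidecode -t 17 | awk '...'
awk '/Size: [0-9]+ MB/ {sum += $2} END {print sum " MB total"}' <<'EOF'
Memory Device
        Size: 8192 MB
Memory Device
        Size: 8192 MB
EOF
```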
There is often confusion about the amount of free memory reported when running free -m. Consider the following example.
             total       used       free     shared    buffers     cached
Mem:         72348      71972        375          0       6242      41377
-/+ buffers/cache:      24352      47995
Swap:         4095       1309       2786
In the result above, the top line shows that out of roughly 72GB available, about 71GB is being used, yet no process on the server accounts for such high memory usage. Notice, however, the large value in the cached column: most of that "used" memory is actually disk cache. Cached memory is essentially free, since it can be quickly reclaimed if a running (or newly starting) program needs the memory.
The reason Linux uses so much memory for disk cache is because the RAM is wasted if it isn’t used. Keeping the cache means that if something needs the same data again, there’s a good chance it will still be in the cache in memory. Fetching the information from there is around 1,000 times quicker than getting it from the hard disk. If it’s not found in the cache, the hard disk needs to be read anyway.
The -/+ buffers/cache line shows how much memory is used and free from the perspective of the application.
The difference between buffers and cache –> Buffers are associated with a specific block device, and cover caching of filesystem metadata as well as tracking in-flight pages. The cache only contains parked file data.
That is, the buffers remember what’s in directories, what file permissions are, and keep track of what memory is being written from or read to for a particular block device. The cache only contains the contents of the files themselves.
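The -/+ buffers/cache line can be reproduced by hand from the Mem: line. This sketch uses the numbers from the first free -m output in this post; the one-MB differences versus free's own output are just rounding.

```shell
# Derive the application-level view from the kernel-level Mem: line.
# Values (in MB) are taken from the first free -m output above.
used=14872; free=7655; buffers=0; cached=13497
echo "app used: $((used - buffers - cached)) MB"   # used minus reclaimable cache
echo "app free: $((free + buffers + cached)) MB"   # free plus reclaimable cache
```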
Fail2ban uses iptables to block IPs. If we want to permanently whitelist a particular IP, you need to add it to the fail2ban configuration file (/etc/fail2ban/jail.conf, or a jail.local override if you use one).
Now check the ignoreip line.
Add the IPs that you want. Several addresses can be defined, separated by spaces.
ignoreip = 184.108.40.206 192.x.x.1
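If you prefer to script the change, a sed one-liner can append an address to the ignoreip line. This sketch works on a throwaway copy so it can be tried safely; the /tmp path and the 192.0.2.10 address (a reserved documentation IP) are stand-ins for your real config file and IP. Remember to restart fail2ban after editing the real file.

```shell
# Append an address to the ignoreip line, demonstrated on a temp copy;
# substitute /etc/fail2ban/jail.conf (or jail.local) on a real system.
cat > /tmp/jail.sample <<'EOF'
ignoreip = 127.0.0.1
EOF
sed -i 's/^ignoreip = .*/& 192.0.2.10/' /tmp/jail.sample
cat /tmp/jail.sample
```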
I have a server with an Adaptec RAID controller, and the RAID status was showing as Impacted.
Logical device number 1
Logical device name :
RAID level : 50
Status of logical device : Impacted
In order to get the array back to an Optimal state, a Verify with Fix must be initiated from Storage Manager or ARCCONF. With ARCCONF, the following syntax is used.
/usr/StorMan/arcconf task start <Controller#> LOGICALDRIVE <LogicalDrive#> option
Controller# is the controller number
LogicalDrive# is the number of the logical drive in which the task is to be performed
/usr/StorMan/arcconf task start 1 logicaldrive 1 verify_fix
Various Logical drive options:
– verify_fix (Verify with fix) — verifies the logical drive redundancy and repairs the drive if bad data is found.
– verify — verifies the logical drive redundancy without repairing bad data.
– clear — removes all data from the drive.
I got the following error while running the host command.
-bash: host: command not found
On checking further, I could see that the whois command was also missing. We can fix both by installing the following packages.
yum install bind-utils jwhois
CSF is very helpful software during a DDoS attack. There are different options available in CSF to prevent or mitigate attacks to a great extent.
CONNLIMIT –> This option configures iptables to offer more protection from DoS attacks against specific ports.
useful value : CONNLIMIT = "80;20" –> here we're limiting port 80 to 20 concurrent connections per IP address.
PORTFLOOD –> This option limits the number of new connections per time interval that can be made to specific ports.
useful value : PORTFLOOD = "80;tcp;20;5" –> That means you'll only allow 20 new connections to port 80 per IP address per five seconds.
SYN Flood Protection –> You should only enable SYN flood protection (SYNFLOOD = "1") if you are currently under a SYN flood attack, as it slows down all new connections.
CT_LIMIT –> This feature tracks connections and blocks the IP if the number of connections is too high. Use caution because if you enable this option and set this value too low, it will block legitimate traffic.
CT_LIMIT = 30
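Putting the options above together, a csf.conf fragment might look like the following. These values are illustrative starting points, not recommendations; CT_LIMIT in particular should be tested carefully against your normal traffic before enabling.

```shell
# Illustrative /etc/csf/csf.conf fragment (example values only;
# reload csf after editing with: csf -r).
CONNLIMIT = "80;20"        # max 20 concurrent connections to port 80 per IP
PORTFLOOD = "80;tcp;20;5"  # max 20 new connections per IP per 5s on port 80/tcp
SYNFLOOD = "1"             # enable only while under a SYN flood attack
CT_LIMIT = "30"            # block IPs holding more than 30 connections
```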
I got a 503 error when loading my domain in a browser, and the following error appeared in the log.
mod_hostinglimits:Error on LVE enter: LVE( HANDLER(fcgid-script)
This happens when the customer hits the entry processes limit. The entry processes limit restricts the number of concurrent connections to dynamic (PHP and CGI) scripts for the customer. Without it, one site could use up all the Apache slots and cause all the sites to go down.
Fix: You can increase the entry processes limit by running:
lvectl set USER_ID --maxEntryProcs NEW_LIMIT --save
If the same user is hitting the CPU limit at the same time as the entry processes limit, raising just the entry processes limit will not help. You should find out why the user is using so many resources.