Could not create log file /var/cpanel/logs/.copyacct_[packages,features

I got the following error while transferring domains from my old host to the new VPS using WHM's "Copy multiple accounts" feature:

Could not create log file /var/cpanel/logs/.copyacct_[packages,features

The issue was that the VPS had run out of disk inodes.

Fix: raise the diskinodes limit with the following command (run on the hardware node, replacing VEID with the container ID):

vzctl set VEID --diskinodes 1000000:1000000 --save
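Before and after raising the limit, it is worth confirming inode usage. A minimal check, assuming a standard Linux `df` (the container ID 152 and the `vzquota` guard are illustrative; `vzquota` exists only on OpenVZ nodes):

```shell
# Check filesystem inode usage; 100% IUse% means new files cannot be
# created even if free disk space remains.
df -i /

# On the hardware node, vzquota reports the container's inode usage
# against its limit (152 is a placeholder container ID; guarded so the
# line is a no-op on non-OpenVZ boxes).
if command -v vzquota >/dev/null 2>&1; then
  vzquota show 152
fi
```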

 

Command to check the load of all the VPSes on a node:

vzlist -o hostname,laverage,veid

HOSTNAME       LAVERAGE          VEID
test1.com      1.61/2.85/1.87    156
test2.com      1.08/1.81/1.36    157
test3.com      0.29/0.78/0.53    158
test4.com      3.79/3.57/2.46    160
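To spot the busiest container at a glance, the same output can be sorted by the 1-minute load figure. A sketch: `vzlist -H -o laverage,hostname,veid | sort -rn` (the `-H` flag drops the header so `sort` only sees data rows, and `sort -rn` keys on the leading number of the `1m/5m/15m` field). The pipeline below replays the sample rows above so the sort can be seen working:

```shell
# Demonstrate sorting containers by 1-minute load average, highest first,
# using the sample rows from the vzlist output above.
printf '%s\n' \
  '1.61/2.85/1.87 test1.com 156' \
  '1.08/1.81/1.36 test2.com 157' \
  '0.29/0.78/0.53 test3.com 158' \
  '3.79/3.57/2.46 test4.com 160' | sort -rn
```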

Error: vzquota : (error) can't lock quota file, some quota operations are performing for id

I was unable to start my VPS and got the following error:

"vzquota : (error) can't lock quota file, some quota operations are performing for id"

Fix: run the following, replacing veid with the container ID:

vzquota off veid

vzquota drop veid

vzctl restart veid
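The three steps can be wrapped in a small helper (a sketch; `reset_ct_quota` is my own name, and it should be run as root on the hardware node). Note that `vzquota drop` deletes the quota file, so quota is recalculated from scratch on the next start, which can take a while on containers with many files:

```shell
# Reset a container's quota and restart it (sketch).
reset_ct_quota() {
  ctid="$1"
  vzquota off "$ctid" &&
  vzquota drop "$ctid" &&
  vzctl restart "$ctid"
}

# Usage (156 is a placeholder container ID):
#   reset_ct_quota 156
```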

No filesystems with quota detected.

I got the following error while running /scripts/fixquotas on the VPS. WHM was also showing an unlimited quota for every account.

—————–

/scripts/fixquotas --force
Installing Default Quota Databases……Done
Linux Quota Support
Quotas are now on
Resetting quota for use to 500 M
edquota: Quota file not found or has wrong format.
No filesystems with quota detected.

—————–

Fix

touch /home/quota.user
touch /home/quota.group
chmod 600 /home/quota.user
chmod 600 /home/quota.group
Now run the command

quotacheck -acugvm

quotacheck: Scanning /dev/simfs [/]
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
quotacheck: Cannot stat old user quota file: No such file or directory
quotacheck: Cannot stat old group quota file: No such file or directory
done
quotacheck: Checked
quotacheck: Old file not found.
quotacheck: Old file not found.

 

Finally, run fixquotas again:

/scripts/fixquotas --force

Migrate a single VE from one Hardware Node to another.

The vzmigrate script is used to migrate a single VE from one Hardware Node to another.

 

Prerequisites

Make sure:
* you have at least one good backup of the virtual machine you intend to migrate
* rsync is installed on the target host
* in general, you cannot migrate from a bigger kernel version to a smaller one
* by default, after the migration process is completed, the Container private area and configuration file are '''deleted''' on the old HN. However, if you wish the Container private area on the Source Node not to be removed after a successful migration, you can override the default <code>vzmigrate</code> behavior with the <code>-r no</code> option
* both nodes must be in the same rack; if they are in different racks, you need to change the IPs of all domains on the new VPS after the migration
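Since vzmigrate copies the container's private area with rsync, it is worth confirming rsync on the destination before starting. A small pre-flight sketch (`check_remote_rsync` is my own helper name; 10.10.10.1 is the placeholder destination used below):

```shell
# Report whether rsync is installed on a destination node (sketch).
check_remote_rsync() {
  host="$1"
  if ssh "root@$host" 'command -v rsync >/dev/null'; then
    echo "rsync present on $host"
  else
    echo "rsync MISSING on $host"
  fi
}

# Usage:
#   check_remote_rsync 10.10.10.1
```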

Setting up SSH keys

 

You first have to set up SSH so that the old HN can log in to the new HN without a password prompt. Run the following on the old HN:

<pre>[root@oldserver ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
74:7a:3e:7f:27:2f:42:bb:52:4c:ad:55:31:6f:79:f2 root@oldserver
[root@oldserver ~]# cd .ssh/
[root@oldserver .ssh]# ls -al
total 20
drwx------  2 root root 4096 Aug 11 09:41 .
drwxr-x---  5 root root 4096 Aug 11 09:40 ..
-rw-------  1 root root  887 Aug 11 09:41 id_rsa
-rw-r--r--  1 root root  231 Aug 11 09:41 id_rsa.pub
[root@oldserver .ssh]# scp id_rsa.pub root@10.10.10.1:./id_rsa.pub
The authenticity of host '10.10.10.1 (10.10.10.1)' can't be established.
RSA key fingerprint is 3f:2a:26:15:e4:37:e2:06:b8:4d:20:ee:3a:dc:c1:69.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.10.10.1' (RSA) to the list of known hosts.
root@10.10.10.1's password:
id_rsa.pub               100%  231     0.2KB/s   00:00</pre>

Run the following on the new HN.

<pre>[root@newserver]# cd .ssh/
[root@new .ssh]# touch authorized_keys2
[root@new .ssh]# chmod 600 authorized_keys2
[root@new .ssh]# cat ../id_rsa.pub >> authorized_keys2
[root@new .ssh]# rm ../id_rsa.pub
rm: remove regular file `../id_rsa.pub'? y</pre>

Run the following on the old HN.

<pre>[root@oldserver .ssh]# ssh -2 -v root@10.10.10.1
OpenSSH_3.9p1, OpenSSL 0.9.7a Jun 19 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to  10.10.10.1[10.10.10.1] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type 1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
debug1: match: OpenSSH_4.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_3.9p1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-md5 none
debug1: kex: client->server aes128-cbc hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '10.10.10.1' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:1
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Next authentication method: gssapi-with-mic
debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: An invalid name was supplied
Cannot determine realm for numeric host address

debug1: Next authentication method: publickey
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-with-mic,password
debug1: Offering public key: /root/.ssh/id_rsa
debug1: Server accepts key: pkalg ssh-rsa blen 149
debug1: read PEM private key done: type RSA
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
Last login: Thu Jun  19 16:41:30 2012 from 10.10.10.1
[root@new ~]# exit</pre>

vzmigrate usage

Now that passwordless SSH is in place, the vzmigrate script will work. A little bit on vzmigrate:

This program is used for container migration to another node
Usage:
vzmigrate [-r yes|no] [--ssh=<options>] [--keep-dst] [--online] [-v]

=== Example ===
Here is an example of migrating container 101 from the current HN to the one at 10.10.10.1. (If you have internal IPs configured on both servers, it is better to transfer over them.)

[root@old .ssh]# vzmigrate 10.10.10.1 101   # the basic command, but not safe on its own
OPT:10.10.10.1
Starting migration of container 101 on 10.10.10.1
Preparing remote node
Initializing remote quota
Syncing private
Syncing 2nd level quota
Turning quota off
Cleanup

Use the following options with the vzmigrate command:

vzmigrate -r no --keep-dst --online

-r no --> do not remove the source container after the migration

--keep-dst --> keep the destination copy if anything goes wrong during the transfer

--online --> perform an online (zero-downtime) migration: during the migration the container hangs for a while, and afterwards it continues working as though nothing had happened.

== Migrate all running containers ==

Here's a simple shell script that migrates each container one after another. Just pass the destination host node as the single argument to the script. Feel free to add the -v flag to the vzmigrate flags if you'd like verbose output:

for CT in $(vzlist -H -o veid); do vzmigrate --remove-area no --keep-dst $1 $CT; done
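A slightly safer variant of the loop above, as a function (a sketch; `migrate_all` is my own name): it keeps the source container (-r no), keeps a partial destination copy for inspection (--keep-dst), and stops at the first failure instead of blindly continuing:

```shell
# Migrate every container on this node to the given destination (sketch).
migrate_all() {
  dest="$1"
  if [ -z "$dest" ]; then
    echo "usage: migrate_all <destination-node>" >&2
    return 1
  fi
  for ct in $(vzlist -H -o veid); do
    echo "Migrating container $ct to $dest"
    vzmigrate -r no --keep-dst "$dest" "$ct" || {
      echo "migration of container $ct failed; stopping" >&2
      return 1
    }
  done
}

# Usage:
#   migrate_all 10.10.10.1
```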