Thursday, April 12, 2018

OpenLDAP integration with SonicWall firewall


Install OpenLDAP Packages

On CentOS and Red Hat, use yum install as shown below to install the OpenLDAP-related packages.
yum install -y openldap openldap-clients openldap-servers
You should install the following three packages:
1.    openldap-servers – This is the main LDAP server
2.    openldap-clients – This contains all required LDAP client utilities
3.    openldap – This package contains the LDAP support libraries
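
To confirm all three packages were installed, a quick sanity check is to query the RPM database:
# rpm -qa | grep openldap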

Create olcRootDN Account as Admin

It is recommended to first create a dedicated admin account with full permissions to change information in the LDAP database.
Modify the olcDatabase={2}bdb.ldif file, and change the olcRootDN entry. The following is the default entry.

# grep olcRootDN /etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif
olcRootDN: cn=Manager,dc=my-domain,dc=com

Change the above line to an admin user. In this example, user “ramesh” will be the olcRootDN.

olcRootDN: cn=ramesh,dc=aeri,dc=com
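
You can re-run the earlier grep to confirm the change took effect:
# grep olcRootDN /etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif
olcRootDN: cn=ramesh,dc=aeri,dc=com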

Create olcRootPW Root Password


Now use the slappasswd command to create a hash for the root password you want to use. Execute the following command and enter the password twice; it prints the hash for the given password.
# slappasswd
New password: SecretLDAPRootPass2015
Re-enter new password: SecretLDAPRootPass2015
{SSHA}1pgok6qWn24lpBkVreTDboTr81rg4QC6

Take the hash output of the above command and add it to the olcRootPW parameter in the cn=config.ldif file as shown below.
# vi /etc/openldap/slapd.d/cn=config.ldif
olcRootPW: {SSHA}1pgok6qWn24lpBkVreTDboTr81rg4QC6

Create olcSuffix Domain Name

Now set the olcSuffix to the domain you want. Simply modify the line that starts with olcSuffix in the olcDatabase={2}bdb.ldif file as shown below.
# vi /etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif
olcSuffix: dc=aeri,dc=com

Verify The Configuration Files

Use the slaptest command to verify the configuration files as shown below. It should display the “config file testing succeeded” message.
# slaptest -u
config file testing succeeded

You might get the following messages during the above command, which you can ignore for now.
54a39508 ldif_read_file: checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif"
54a39508 ldif_read_file: checksum error on "/etc/openldap/slapd.d/cn=config/olcDatabase={2}bdb.ldif"

Start the LDAP Server

Start the ldap server as shown below.
# service slapd start
Checking configuration files for slapd: [WARNING]
config file testing succeeded
Starting slapd:                         [  OK  ]
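
To have slapd start automatically at boot (this guide assumes a SysV-init CentOS system, so chkconfig applies), you can also run:
# chkconfig slapd on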

Verify the LDAP Search

To verify the LDAP server is configured successfully, run the command below. Since no entries have been added to the directory yet, the search returns “No such object”; what matters at this stage is that the server answers the query.
# ldapsearch -x -b "dc=aeri,dc=com"
# extended LDIF
#
# LDAPv3
# base <dc=aeri,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 32 No such object
# numResponses: 1

Base LDAP Structure in base.ldif

The use of OU (organizational unit) objects can help you in providing additional structure to the LDAP database. If you are planning on adding in different types of entries, such as users, groups, computers, printers and more to the LDAP directory, it makes it easier to put every entry type into its own container.
To create these OUs, create an initial LDIF file as shown in the example below. This file creates the base container, dc=aeri,dc=com, and two organizational units named users and groups inside it. Note that each entry in an LDIF file must be separated from the next by a blank line.
# cat base.ldif
dn: dc=aeri,dc=com
objectClass: dcObject
objectClass: organization
o: aeri.com
dc: aeri

dn: ou=users,dc=aeri,dc=com
objectClass: organizationalUnit
objectClass: top
ou: users

dn: ou=groups,dc=aeri,dc=com
objectClass: organizationalUnit
objectClass: top
ou: groups

Import Base Structure Using ldapadd

Now we can import the base structure into the LDAP directory using the ldapadd command as shown below.
# ldapadd -x -W -D "cn=ramesh,dc=aeri,dc=com" -f base.ldif
Enter LDAP Password:
adding new entry "dc=aeri,dc=com"
adding new entry "ou=users,dc=aeri,dc=com"
adding new entry "ou=groups,dc=aeri,dc=com"

Verify the Base Structure using ldapsearch

To verify the OUs are successfully created, use the following ldapsearch command.
# ldapsearch -x -W -D "cn=ramesh,dc=aeri,dc=com" -b "dc=aeri,dc=com" "(objectclass=*)"
Enter LDAP Password:
The output of the above command will display all the objects in the LDAP directory structure.
# extended LDIF
#
# LDAPv3
# base <dc=aeri,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# aeri.com
dn: dc=aeri,dc=com
objectClass: dcObject
objectClass: organization
o: aeri.com
dc: aeri
# users, aeri.com
dn: ou=users,dc=aeri,dc=com
objectClass: organizationalUnit
objectClass: top
ou: users
# groups, aeri.com
dn: ou=groups,dc=aeri,dc=com
objectClass: organizationalUnit
objectClass: top
ou: groups
# search result
search: 2
result: 0 Success
# numResponses: 4
# numEntries: 3
In the next OpenLDAP article, we’ll explain how to add new users and groups to the LDAP Directory. Before adding such entries, load the standard schemas that define the required object classes:
1.    ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/cosine.ldif
2.    ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/nis.ldif
3.    ldapadd -Y EXTERNAL -H ldapi:/// -f /etc/openldap/schema/inetorgperson.ldif
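
As a preview, a minimal user entry for the ou=users container built above might look like the following sketch (uid=jdoe and its attribute values are purely hypothetical; inetOrgPerson requires the cn and sn attributes):

# cat user.ldif
dn: uid=jdoe,ou=users,dc=aeri,dc=com
objectClass: inetOrgPerson
cn: John Doe
sn: Doe
uid: jdoe

# ldapadd -x -W -D "cn=ramesh,dc=aeri,dc=com" -f user.ldif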


Change LDAP Password for the user:


ldappasswd -s kanna@123 -W -D "cn=Manager,dc=aeri,dc=local" -x "uid=kanna,ou=Employees,dc=aeri,dc=local"
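
To confirm the new password works, you can bind as that user; ldapwhoami simply reports the DN you bound as:

ldapwhoami -x -D "uid=kanna,ou=Employees,dc=aeri,dc=local" -W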

Login to Firewall:

Wednesday, April 11, 2018

Ceph pool rbd has many more objects per pg than average (too few pgs?)



Locating the problem

[root@lab8106 ~]# ceph -s
    cluster fa7ec1a1-662a-4ba3-b478-7cb570482b62
     health HEALTH_WARN
            pool rbd has many more objects per pg than average (too few pgs?)
     monmap e1: 1 mons at {lab8106=192.168.8.106:6789/0}
            election epoch 30, quorum 0 lab8106
     osdmap e157: 2 osds: 2 up, 2 in
            flags sortbitwise
      pgmap v1023: 417 pgs, 13 pools, 18519 MB data, 15920 objects
            18668 MB used, 538 GB / 556 GB avail
                 417 active+clean

The cluster shows this warning: pool rbd has many more objects per pg than average (too few pgs?). In the Hammer release, the equivalent alert reads: pool rbd has too few pgs.

First, view the cluster health details:

[root@lab8106 ~]# ceph health detail
HEALTH_WARN pool rbd has many more objects per pg than average (too few pgs?); mon.lab8106 low disk space
pool rbd objects per pg (1912) is more than 50.3158 times cluster average (38)
Then look at the object counts for each pool in the cluster:

[root@lab8106 ~]# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    556G      538G       18668M          3.28
POOLS:
    NAME       ID     USED       %USED     MAX AVAIL     OBJECTS
    rbd        6      16071M      2.82          536G       15296
    pool1      7        204M      0.04          536G          52
    pool2      8        184M      0.03          536G          47
    pool3      9        188M      0.03          536G          48
    pool4      10       192M      0.03          536G          49
    pool5      11       204M      0.04          536G          52
    pool6      12       148M      0.03          536G          38
    pool7      13       184M      0.03          536G          47
    pool8      14       200M      0.04          536G          51
    pool9      15       200M      0.04          536G          51
    pool10     16       248M      0.04          536G          63
    pool11     17       232M      0.04          536G          59
    pool12     18       264M      0.05          536G          67
Check the number of PGs in each storage pool:

[root@lab8106 ~]# ceph osd dump|grep pool
pool 6 'rbd' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 132 flags hashpspool stripe_width 0
pool 7 'pool1' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 134 flags hashpspool stripe_width 0
pool 8 'pool2' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 136 flags hashpspool stripe_width 0
pool 9 'pool3' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 138 flags hashpspool stripe_width 0
pool 10 'pool4' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 140 flags hashpspool stripe_width 0
pool 11 'pool5' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 142 flags hashpspool stripe_width 0
pool 12 'pool6' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 144 flags hashpspool stripe_width 0
pool 13 'pool7' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 146 flags hashpspool stripe_width 0
pool 14 'pool8' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 148 flags hashpspool stripe_width 0
pool 15 'pool9' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 1 pgp_num 1 last_change 150 flags hashpspool stripe_width 0
pool 16 'pool10' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 152 flags hashpspool stripe_width 0
pool 17 'pool11' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 154 flags hashpspool stripe_width 0
pool 18 'pool12' replicated size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 200 pgp_num 200 last_change 156 flags hashpspool stripe_width 0
Let's see how these numbers are derived:

pool rbd objects per pg (1912) is more than 50.3158 times cluster average (38)

rbd objects_per_pg = 15296 / 8 = 1912
cluster average objects_per_pg = 15920 / 417 ≈ 38
ratio = 1912 / 38 = 50.3158
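
To recompute this for any pool, take its object count from ceph df and read its pg_num directly; ceph osd pool get prints it:

# ceph osd pool get rbd pg_num
pg_num: 8

Then objects / pg_num gives the pool's objects per PG, e.g. 15296 / 8 = 1912.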

In other words, the rbd pool holds many objects spread over very few PGs compared with the rest of the cluster, which triggers the warning. Let's look at the check in the Ceph source:

https://github.com/ceph/ceph/blob/master/src/mon/PGMonitor.cc

int average_objects_per_pg = pg_map.pg_sum.stats.sum.num_objects / pg_map.pg_stat.size();
if (average_objects_per_pg > 0 &&
    pg_map.pg_sum.stats.sum.num_objects >= g_conf->mon_pg_warn_min_objects &&
    p->second.stats.sum.num_objects >= g_conf->mon_pg_warn_min_pool_objects) {
  int objects_per_pg = p->second.stats.sum.num_objects / pi->get_pg_num();
  float ratio = (float)objects_per_pg / (float)average_objects_per_pg;
  if (g_conf->mon_pg_warn_max_object_skew > 0 &&
      ratio > g_conf->mon_pg_warn_max_object_skew) {
    ostringstream ss;
    ss << "pool " << name << " has many more objects per pg than average (too few pgs?)";
    summary.push_back(make_pair(HEALTH_WARN, ss.str()));
    if (detail) {
      ostringstream ss;
      ss << "pool " << name << " objects per pg ("
         << objects_per_pg << ") is more than " << ratio << " times cluster average ("
         << average_objects_per_pg << ")";
      detail->push_back(make_pair(HEALTH_WARN, ss.str()));
    }
  }
}
The main thresholds involved are:

mon_pg_warn_min_objects = 10000    // warn only if the cluster holds at least 10,000 objects
mon_pg_warn_min_pool_objects = 1000    // warn only if the pool holds at least 1,000 objects
mon_pg_warn_max_object_skew = 10    // warn if the pool's objects-per-PG exceeds this multiple of the cluster average

Solve the problem
There are three ways to resolve this warning:

1.    Delete useless storage pools
If the cluster contains unused pools that have a relatively high PG count but few objects, deleting them raises the cluster-wide average objects per PG, which lowers the skew ratio below the threshold and the warning disappears.

2.    Increase the pg_num of the warned pool
The pool simply did not have enough PGs to begin with. Increasing its pg_num and pgp_num lowers its objects-per-PG count, and with it the ratio that is compared against mon_pg_warn_max_object_skew. (See the example commands below.)

3.    Increase the mon_pg_warn_max_object_skew value
If the cluster already has enough PGs, adding more can destabilize it through data movement. In that case you can raise this parameter above its default of 10 to silence the warning. (See the example commands below.)
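
As a sketch, fixes 2 and 3 map to commands like the following (pg_num 64 is only an illustrative target; pick a value suited to your OSD count, and note that pgp_num must be raised to match pg_num):

ceph osd pool set rbd pg_num 64
ceph osd pool set rbd pgp_num 64

ceph tell mon.* injectargs '--mon-pg-warn-max-object-skew 20'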
Summary
This warning compares each pool's objects-per-PG count with the cluster-wide average; when a pool deviates by more than mon_pg_warn_max_object_skew times that average, the warning is raised.

Check the steps:

ceph health detail
ceph df
ceph osd dump | grep pool
mon_pg_warn_max_object_skew = 10.0

((objects / pg_num) in the affected pool) / ((objects / pg_num) across the entire cluster) >= 10.0 → the warning appears

Tuesday, April 3, 2018

Install Grub from chroot




A simple way to install GRUB or GRUB 2 from any Linux live CD or other bootable medium.


Step 1: boot from a Linux live CD/DVD or USB
Step 2: mount your hard disk
Step 3: chroot into the mounted file system
Step 4: install GRUB
Step 5: reboot

#fdisk -l
#mount /dev/sda1 /mnt/
#mount -t proc none /mnt/proc
#mount -o bind /dev /mnt/dev
#mount -t sysfs sys /mnt/sys
#chroot /mnt/ /bin/bash
#update-grub
#/usr/sbin/grub-install --recheck --no-floppy /dev/sda
#sync && reboot
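
If /boot lives on a separate partition (assuming /dev/sda2 here purely as an example), mount it after mounting the root file system and before the chroot:
#mount /dev/sda2 /mnt/boot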



SUSE Syntax:

update-grub is a small Debian/Ubuntu helper; on SUSE, run the equivalent commands directly:


#grub2-install --recheck --no-floppy /dev/sda
#grub2-mkconfig -o /boot/grub2/grub.cfg
#mkinitrd


Tuesday, March 13, 2018

How To Disk Dump dd


Disk Dump is nothing less than a life saver when it comes to disk disaster recovery or even data forensics.

Here’s a quick list of cool examples with the dd tool.

Create a backup

dd if=/dev/sda of=/opt/backup_sda.img
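
If space matters, a variant worth knowing is to pipe the image through gzip, and reverse the pipe to restore:

dd if=/dev/sda | gzip > /opt/backup_sda.img.gz
gunzip -c /opt/backup_sda.img.gz | dd of=/dev/sda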

Restore a backup

dd if=/opt/backup_sda.img of=/dev/sda

Clone a hard disk

dd if=/dev/sdb of=/dev/sdc

Transfer a disk image

dd if=/dev/sdb | ssh root@target "(cat > backup.img)"

Create an iso image of a CD/DVD

dd if=/dev/cdrom of=cdimage.iso

Burn an iso image of a CD/DVD

dd if=cdimage.iso of=/dev/cdrom obs=32k seek=0

Rescue a file that contains bad blocks

dd if=movie.avi of=rescued_movie.avi conv=noerror

Create your own bootloader

dd conv=notrunc if=bootloader of=qemu.img

Create a backup of your MBR

dd if=/dev/sdb of=mbr_backup bs=512 count=1

Restore a backup of your MBR

dd if=mbr_backup of=/dev/sdb bs=512 count=1
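
In an MBR, the first 446 bytes hold the boot code and the next 64 bytes the partition table, so to restore only the boot code while leaving the current partition table untouched:

dd if=mbr_backup of=/dev/sdb bs=446 count=1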

Mount a dd image of an entire disk

You must use the start sector of the partition.
fdisk -u -l disk_image
Disk /mnt/storage/disk_image: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders, total 0 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x41172ba5

Device                      Boot    Start    End       Blocks   Id  System
/mnt/storage/disk_image1            63       64259     32098+   de  Dell Utility
/mnt/storage/disk_image2    *       64260    78108029  39021885 7   HPFS/NTFS

Partition 2 has different physical/logical endings: phys=(1023, 254, 63) logical=(4861, 254, 63)
Then take the start sector of the partition you want to mount, 64260 (disk_image2) in this case, and multiply it by 512.

Ex: 512 * 64260 = 32901120
mount -o loop,offset=32901120 -t auto /mnt/storage/disk_image /mnt/image_partition_2
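
An alternative to computing offsets by hand, assuming the kpartx tool is available, is to map each partition of the image to its own loop device (the loop device number may differ on your system):

kpartx -av /mnt/storage/disk_image
mount /dev/mapper/loop0p2 /mnt/image_partition_2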

When the hard disk has errors

Get the dd_rescue tool
dd_rescue /dev/sdb /opt/backup_sdb.img

Network Clone

  • Destination:
nc -l -p 2222 | dd of=/dev/sda bs=16M
  • Source:
dd if=/dev/sda bs=16M | nc $Destination 2222
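
On slow links you can compress the stream in flight; a sketch using gzip:

  • Destination:
nc -l -p 2222 | gunzip | dd of=/dev/sda bs=16M
  • Source:
dd if=/dev/sda bs=16M | gzip | nc $Destination 2222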

Network speed test

dd if=/dev/zero bs=1M count=100 | ssh user@machine 'cat > /dev/null'
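
A similar trick gives a rough local disk write speed; oflag=direct bypasses the page cache so the figure reflects the disk rather than RAM (testfile is an arbitrary name):

dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct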