I created a logical volume (scandata) containing a single ext3 partition. It is the only logical volume in its volume group (case4t). Said volume group comprises three physical volumes, which are three primary partitions on a single block device (/dev/sdb).
When I created it, I could mount the partition via the block device /dev/mapper/case4t-scandatap1.
Since last reboot the aforementioned block device file has disappeared.
It may be of note -- I'm not sure -- that my superior (a college professor) had prompted this reboot by running
sudo chown -R [his username] /usr/bin, which stripped the setuid bit from everything in its path, preventing both of us from
sudo-ing. That issue has since been (temporarily) rectified.
Now I'll cut the chatter and get started with the terminal dumps:
$ sudo pvs; sudo vgs; sudo lvs
  Logging initialised at Sat Jan  8 11:42:34 2011
  Set umask to 0077
  Scanning for physical volume names
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sdb1  case4t lvm2 a-   819.32G    0
  /dev/sdb2  case4t lvm2 a-   866.40G    0
  /dev/sdb3  case4t lvm2 a-    47.09G    0
  Wiping internal VG cache
  Logging initialised at Sat Jan  8 11:42:34 2011
  Set umask to 0077
  Finding all volume groups
  Finding volume group "case4t"
  VG     #PV #LV #SN Attr   VSize VFree
  case4t   3   1   0 wz--n- 1.69T    0
  Wiping internal VG cache
  Logging initialised at Sat Jan  8 11:42:34 2011
  Set umask to 0077
  Finding all logical volumes
  LV       VG     Attr   LSize Origin Snap%  Move Log Copy%  Convert
  scandata case4t -wi-a- 1.69T
  Wiping internal VG cache
$ sudo vgchange -a y
  Logging initialised at Sat Jan  8 11:43:14 2011
  Set umask to 0077
  Finding all volume groups
  Finding volume group "case4t"
  1 logical volume(s) in volume group "case4t" already active
  1 existing logical volume(s) in volume group "case4t" monitored
  Found volume group "case4t"
  Activated logical volumes in volume group "case4t"
  1 logical volume(s) in volume group "case4t" now active
  Wiping internal VG cache
$ ls /dev | grep case4t
$ ls /dev/mapper
$ sudo fdisk -l /dev/case4t/scandata
  Disk /dev/case4t/scandata: 1860.5 GB, 1860584865792 bytes
  255 heads, 63 sectors/track, 226203 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk identifier: 0x00049bf5

                 Device Boot      Start         End      Blocks   Id  System
  /dev/case4t/scandata1               1      226203  1816975566   83  Linux
$ sudo parted /dev/case4t/scandata print
  Model: Linux device-mapper (linear) (dm)
  Disk /dev/mapper/case4t-scandata: 1861GB
  Sector size (logical/physical): 512B/512B
  Partition Table: msdos

  Number  Start   End     Size    Type     File system  Flags
   1      32.3kB  1861GB  1861GB  primary  ext3
$ sudo fdisk -l /dev/sdb
  Disk /dev/sdb: 1860.5 GB, 1860593254400 bytes
  255 heads, 63 sectors/track, 226204 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk identifier: 0x00000081

     Device Boot      Start         End      Blocks   Id  System
  /dev/sdb1               1      106955   859116006   83  Linux
  /dev/sdb2          113103      226204   908491815   83  Linux
  /dev/sdb3          106956      113102    49375777+  83  Linux

  Partition table entries are not in disk order
$ sudo parted /dev/sdb print
  Model: DELL PERC 6/i (scsi)
  Disk /dev/sdb: 1861GB
  Sector size (logical/physical): 512B/512B
  Partition Table: msdos

  Number  Start   End     Size    Type     File system  Flags
   1      32.3kB  880GB   880GB   primary  reiserfs
   3      880GB   930GB   50.6GB  primary
   2      930GB   1861GB  930GB   primary
I find it a bit strange that partition 1 above is reported as reiserfs (it was previously reiserfs, if that matters) even though LVM recognizes it as a PV.
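Stale signatures like this are not unusual: filesystem probing just reads magic bytes at fixed offsets, and creating a PV does not zero the whole partition, so an old reiserfs superblock can survive and still be detected. A minimal sketch of that kind of signature scan, using a scratch file instead of a real disk (the superblock offset of 64 KiB and the magic-field offset of 52 bytes used here are assumptions for illustration):

```shell
# Plant the reiserfs 3.6 magic string ("ReIsEr2Fs") in a scratch file at
# the assumed probe offset (65536 + 52 = 65588), then read it back the
# way a signature scanner would. No real device is touched.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=70 2>/dev/null
printf 'ReIsEr2Fs' | dd of="$img" bs=1 seek=65588 conv=notrunc 2>/dev/null
magic=$(dd if="$img" bs=1 skip=65588 count=9 2>/dev/null)
echo "detected signature: $magic"
rm -f "$img"
```

The point is that detection says nothing about whether the data is still a valid filesystem; only the magic bytes are checked.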
To reiterate: neither /dev/mapper/case4t-scandatap1 (which I had used previously) nor /dev/case4t/scandata1 (as printed by fdisk) exists. And /dev/case4t/scandata (no partition number) cannot be mounted:
$ sudo mount -t ext3 /dev/case4t/scandata /mnt/new
  mount: wrong fs type, bad option, bad superblock on /dev/mapper/case4t-scandata,
         missing codepage or helper program, or other error
         In some cases useful info is found in syslog - try
         dmesg | tail  or so
All I get on syslog is:
[170059.538137] VFS: Can't find ext3 filesystem on dev dm-0.
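That dmesg line is consistent with the fdisk/parted output above: the LV itself carries an msdos partition table, so the ext3 filesystem begins roughly 32 kB into the LV rather than at its start, and mounting the bare LV finds no superblock at offset 0. A sketch of where that offset comes from, building a fake MBR in a scratch file and reading partition 1's start sector the way a partition scanner would (offsets per the standard MBR layout; no real device is touched):

```shell
# Partition entry 1 starts at MBR byte 446; its 4-byte little-endian
# start-LBA field sits 8 bytes in, at byte 454. Write start sector 63
# (0x3f, octal 077) and read it back: 63 * 512 = 32256 bytes, which
# matches parted's "32.3kB" start for the nested partition.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
printf '\077\000\000\000' | dd of="$img" bs=1 seek=454 conv=notrunc 2>/dev/null
start=$(od -An -tu4 -j454 -N4 "$img" | tr -d ' ')
echo "partition 1 starts at sector $start (byte offset $((start * 512)))"
rm -f "$img"
```

So a node mapping that nested partition (like the old case4t-scandatap1) has to exist before the filesystem inside the LV can be mounted directly.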
Thanks in advance for any help you can offer,
P.S. I am on Ubuntu GNU/Linux 2.6.28-11-server (Jaunty) (out of date, I know -- that's on the laundry list).
For me, activating the volume group did it:
vgchange -a y. After that, the
/dev/mapper/$vgname-* devices showed up immediately.
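The /dev/mapper names follow a simple rule: VG name and LV name joined by a single dash, with any literal dash inside either name doubled to keep the join unambiguous. A small sketch of that naming rule, using the names from this question (the rule itself, not any LVM tool, is what's being illustrated):

```shell
# Build the expected device-mapper node name for a VG/LV pair,
# doubling any literal '-' inside each name (POSIX sed, no bashisms).
vg=case4t
lv=scandata
node="/dev/mapper/$(printf '%s' "$vg" | sed 's/-/--/g')-$(printf '%s' "$lv" | sed 's/-/--/g')"
echo "$node"   # /dev/mapper/case4t-scandata
```

This is why a VG named "my-vg" with LV "data" appears as /dev/mapper/my--vg-data, while the /dev/$vgname/$lvname symlinks keep the original names.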
Edit: Making it mount on boot also required setting
use_lvmetad = 1 in
/etc/lvm/lvm.conf. Running
update-initramfs -u after activating might or might not have had something to do with it.
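A minimal sketch of that edit, assuming a stock lvm.conf layout where the setting lives in the global section (availability of the option depends on the LVM2 version installed):

```
# /etc/lvm/lvm.conf (fragment)
global {
    use_lvmetad = 1
}
```

followed by sudo update-initramfs -u, so the regenerated initramfs picks up the change at boot.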
User contributions licensed under CC BY-SA 3.0