The MFS (MooseFS) Distributed File System
OS: Red Hat
Machines:
192.168.1.248 (Master)
192.168.1.249 (Backup)
192.168.1.250 (Chunkserver 1)
192.168.1.238 (Chunkserver 2)
192.168.1.251 (Client)
Installing MFS on the Master:
Before configuring anything, make sure SELinux and iptables are disabled on all five machines.

# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make
# make install
# cd /usr/local/mfs/etc
# mv mfsexports.cfg.dist mfsexports.cfg
# mv mfsmaster.cfg.dist mfsmaster.cfg
# mv mfsmetalogger.cfg.dist mfsmetalogger.cfg
# cd /usr/local/mfs/var/mfs/
# mv metadata.mfs.empty metadata.mfs
# echo "192.168.1.248 mfsmaster" >> /etc/hosts

mfsmaster.cfg holds the settings for the master server. mfsexports.cfg specifies which client hosts may remotely mount the MooseFS file system and what access rights they are granted; by default, / is shared to all hosts.

Try starting the master service (it runs as the user given to configure, i.e. mfs):

# /usr/local/mfs/sbin/mfsmaster start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfsmaster modules ...
loading sessions ... ok
sessions file has been loaded
exports file has been loaded
loading metadata ...
create new empty filesystem
metadata file has been loaded
no charts data file - initializing empty charts
master <-> metaloggers module: listen on *:9419
master <-> chunkservers module: listen on *:9420
main master server module: listen on *:9421
mfsmaster daemon initialized properly

To monitor the current state of MooseFS, start the CGI monitoring service; the whole MooseFS system can then be watched from a browser:

# /usr/local/mfs/sbin/mfscgiserv
starting simple cgi server (host: any , port: 9425 , rootpath: /usr/local/mfs/share/mfscgi)

Now open http://192.168.1.248:9425 in a browser to see the master's status (at this point, no chunk server data is visible yet).

Backup server configuration (it takes over if the Master fails):

# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount
# make
# make install
# cd /usr/local/mfs/etc
# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg
# cp mfsexports.cfg.dist mfsexports.cfg
# cp mfsmaster.cfg.dist mfsmaster.cfg
# echo "192.168.1.248 mfsmaster" >> /etc/hosts
# /usr/local/mfs/sbin/mfsmetalogger start
working directory: /usr/local/mfs/var/mfs
lockfile
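As noted above, mfsexports.cfg controls which clients may mount and with what rights, one rule per line. A minimal sketch of what such a rule looks like, assuming a 192.168.1.0/24 client network (check the comments in the .dist file for the exact syntax your version supports):

```
# /usr/local/mfs/etc/mfsexports.cfg
# <client spec>    <directory>    <options>
192.168.1.0/24     /              rw,alldirs,maproot=0
*                  .              rw
```

The first line lets any host on the LAN mount any subdirectory read-write with root mapped to root; the second grants meta (trash) access. The stock file ships with a permissive `*  /  rw,alldirs,maproot=0` rule, which is why all hosts can mount / by default.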
created and locked
initializing mfsmetalogger modules ...
mfsmetalogger daemon initialized properly

Chunkserver configuration (stores the data chunks; every chunkserver is configured identically):

# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster
# make
# make install
# cd /usr/local/mfs/etc
# cp mfschunkserver.cfg.dist mfschunkserver.cfg
# cp mfshdd.cfg.dist mfshdd.cfg
# echo "192.168.1.248 mfsmaster" >> /etc/hosts

It is advisable to set aside dedicated space on each chunk server for MooseFS; this makes it easier to manage the remaining free space. Here the storage points /mfs1 and /mfs2 are used. mfshdd.cfg lists the locations that back the root of the mounted MooseFS file system:

# vi /usr/local/mfs/etc/mfshdd.cfg
(add the following two lines)
/mfs1
/mfs2
# chown -R mfs:mfs /mfs*

Start the chunk server:

# /usr/local/mfs/sbin/mfschunkserver start
working directory: /usr/local/mfs/var/mfs
lockfile created and locked
initializing mfschunkserver modules ...
hdd space manager: scanning folder /mfs2/ ...
hdd space manager: scanning folder /mfs1/ ...
hdd space manager: /mfs1/: 0 chunks found
hdd space manager: /mfs2/: 0 chunks found
hdd space manager: scanning complete
main server module: listen on *:9422
no charts data file - initializing empty charts
mfschunkserver daemon initialized properly

Visiting http://192.168.1.248:9425 again now shows the complete MooseFS system, both the master and the chunkservers.

Client configuration (mounting the MFS share on the client):

Prerequisite: every client needs fuse installed. Kernel 2.6.18-128.el5 requires fuse-2.7.6.tar.gz; kernel 2.6.18-194.11.3.el5 requires fuse-2.8.4, otherwise the build fails.

Append the following line to the end of /etc/profile, then run source /etc/profile to apply it:
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH

# tar xzvf fuse-2.7.6.tar.gz
# cd fuse-2.7.6
# ./configure --enable-kernel-module
# make; make install

If the install succeeded, the kernel module /lib/modules/2.6.18-128.el5/kernel/fs/fuse/fuse.ko will exist. Run modprobe fuse, then verify it loaded with: lsmod | grep "fuse"

# useradd mfs
# tar zvxf mfs-1.6.17.tar.gz
# cd mfs-1.6.17
# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver
# make
# make install
# echo "192.168.1.248 mfsmaster" >> \
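For reference, the chunkserver's own settings live in mfschunkserver.cfg; the defaults are usually fine here because the master is resolved through the mfsmaster entry added to /etc/hosts. A sketch of the relevant defaults as they appear (commented out) in 1.6-era files; verify against your own .dist copy:

```
# /usr/local/mfs/etc/mfschunkserver.cfg (defaults shown commented out)
# MASTER_HOST = mfsmaster
# MASTER_PORT = 9420
# HDD_CONF_FILENAME = /usr/local/mfs/etc/mfshdd.cfg
```

Uncomment and change a value only if it must differ from the default, e.g. if the master's hostname is not mfsmaster.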
/etc/hosts

Mounting:

# mkdir -p /data/mfs
# /usr/local/mfs/bin/mfsmount /data/mfs -H mfsmaster
mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root

Set up the replica counts; this only needs to be done on one client:

# cd /data/mfs/
# mkdir floder1    (will get 1 replica)
# mkdir floder2    (will get 2 replicas)
# mkdir floder3    (will get 3 replicas)

Use mfssetgoal -r to set the replica count (goal) for the files in each directory:

# /usr/local/mfs/bin/mfssetgoal -r 1 /data/mfs/floder1
/data/mfs/floder1:
 inodes with goal changed: 0
 inodes with goal not changed: 1
 inodes with permission denied: 0
# /usr/local/mfs/bin/mfssetgoal -r 2 /data/mfs/floder2
/data/mfs/floder2:
 inodes with goal changed: 1
 inodes with goal not changed: 0
 inodes with permission denied: 0
# /usr/local/mfs/bin/mfssetgoal -r 3 /data/mfs/floder3
/data/mfs/floder3:
 inodes with goal changed: 1
 inodes with goal not changed: 0
 inodes with permission denied: 0

Copy a file into each directory to test:

# cp /root/mfs-1.6.17.tar.gz /data/mfs/floder1
# cp /root/mfs-1.6.17.tar.gz /data/mfs/floder2
# cp /root/mfs-1.6.17.tar.gz /data/mfs/floder3
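One practical consequence of the goal setting: a file with goal G is stored on G different chunkservers, so it consumes roughly G times its size of raw space, and total usable capacity is raw capacity divided by the goal. A back-of-the-envelope sketch (this is plain shell arithmetic, not an MFS tool, and the numbers are made up for illustration):

```shell
#!/bin/sh
# Rough usable-capacity estimate: with replication goal G, every chunk
# is kept on G different chunkservers, so usable space ~= raw space / G.
raw_gb=300   # assumed: total raw space across all chunkservers, in GB
goal=3       # replica count, as set with: mfssetgoal -r 3 <dir>
usable_gb=$((raw_gb / goal))
echo "goal=$goal raw=${raw_gb}GB usable=${usable_gb}GB"
```

So the goal-3 directory above trades two thirds of the raw capacity for the ability to survive the loss of two chunkservers.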