
k8s: Data Management

2020/7/24 11:21:03

Contents:

    • Volume
      • emptyDir
      • hostPath volume
      • External storage providers
        • Reclaiming the PV (deleting the PVC)
        • Using PV and PVC with MySQL

Volume:

A Volume persists data, and its lifecycle is independent of the containers in a Pod: if a container crashes, the volume's data is preserved. All containers in a Pod can share a Volume, and each can mount it at its own path.

emptyDir

An emptyDir volume is an initially empty directory created on the host. It is persistent from a container's point of view (it survives container restarts) but not from the Pod's: when the Pod is deleted, the emptyDir volume is deleted with it. In other words, an emptyDir volume's lifecycle matches the Pod's.

apiVersion: v1
kind: Pod
metadata:
  name: empty-pod
spec:
  containers:
  - image: busybox
    name: test
    volumeMounts:
    - mountPath: /aaa
      name: empty
    args:
    - /bin/sh
    - -c
    - echo "it's test" > /aaa/aaa; sleep 300000000000

  - image: busybox
    name: test1
    volumeMounts:
    - mountPath: /hello
      name: empty
    args:
    - /bin/sh
    - -c
    - cat /hello/aaa; sleep 3000000

  volumes:
  - name: empty
    emptyDir: {}

This Pod defines two containers, and both mount the same volume. Even though each container mounts it at a different path, both paths point to the same underlying volume.
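
An emptyDir can also be backed by RAM instead of node disk by setting medium: Memory, which mounts a tmpfs. As a sketch (the sizeLimit value here is an arbitrary example), the volumes section of the Pod above would become:

```yaml
# Sketch: a RAM-backed emptyDir. Contents never touch node disk and are
# lost when the Pod is deleted; the memory used counts against the
# container's memory limit.
  volumes:
  - name: empty
    emptyDir:
      medium: Memory      # back the volume with tmpfs instead of node disk
      sizeLimit: 64Mi     # optional cap on the volume's size (example value)
```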

Check the Pod:

[root@k8smaster emptydir]# kubectl    get    pod  -o  wide
NAME        READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE   READINESS GATES
empty-pod   2/2     Running   0          2m8s   10.244.1.86   k8snode2   <none>           <none>

Check the container logs:

[root@k8smaster emptydir]# kubectl    logs    empty-pod test
[root@k8smaster emptydir]# kubectl    logs    empty-pod test1
it's test

The first container's log is empty, because it only writes to the file.
The second container's log shows the message written by the first container.

On the node, inspect the mounted directory:

[root@k8snode2 ~]# docker inspect 2agc5
"Mounts": [
    {
         "Type": "bind",
         "Source": "/var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumes/kubernetes.io~empty-dir/empty",
[root@k8snode2 ~]# docker inspect hgjq2
         "Source": "/var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumes/kubernetes.io~empty-dir/empty",

Both containers are mounted from the same source directory on the host.

Inspect that directory:

[root@k8snode2 ~]# cd /var/lib/kubelet/pods/43ba2f73-184a-49a1-a0ec-6b859c9f4cd8/volumes/kubernetes.io~empty-dir/empty
[root@k8snode2 empty]# ls
aaa
[root@k8snode2 empty]# cat  aaa 
it's test

Once the Pod is deleted, this directory is removed as well.

hostPath volume:

A hostPath volume shares a directory on the host into the Pod's containers. The kube-apiserver static Pod, for example, mounts the host's certificate directories this way:

[root@k8smaster emptydir]# kubectl   edit  pod  --namespace=kube-system   kube-apiserver-k8smaster

volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true

  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs

With a hostPath volume, the data survives Pod deletion; but if the host itself fails, the data is lost.
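
A minimal standalone hostPath Pod, for comparison, could look like the following sketch (the Pod name and the /data/hostpath path are hypothetical examples):

```yaml
# Sketch: mount the node directory /data/hostpath into a busybox container.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod              # hypothetical name
spec:
  containers:
  - image: busybox
    name: test
    args:
    - /bin/sh
    - -c
    - sleep 3000000
    volumeMounts:
    - mountPath: /data
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /data/hostpath        # directory on the node
      type: DirectoryOrCreate     # create it on the node if missing
```

Keep in mind the data lives on whichever node the Pod is scheduled to; if the Pod is rescheduled onto another node, it will not see the same directory contents.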

External storage provider

Storage systems such as Ceph and GlusterFS are independent of the Kubernetes cluster, so data survives even if the cluster itself fails.

PV (PersistentVolume) and PVC (PersistentVolumeClaim):
A PV is a piece of storage in an external storage system. It is persistent, and its lifecycle is independent of any Pod.
A PVC is a request for a PV. The user creates a PVC specifying the required size, access mode, and so on, and Kubernetes finds a PV that satisfies it.
Kubernetes supports many PersistentVolume types: NFS, Ceph, EBS, and others.

Before creating the PV and PVC, set up an NFS server.

NFS server: 192.168.19.163

Install on all nodes:

yum   -y install  rpcbind  nfs-utils 

On the NFS server:

[root@localhost ~]# vim   /etc/exports
/volume    *(rw,sync,no_root_squash)

Create the exported directory:

[root@localhost ~]# mkdir   /volume

Start the service on all nodes:

systemctl   start   rpcbind  && systemctl    enable  rpcbind

On the NFS server:

[root@localhost ~]# systemctl start nfs-server.service && systemctl enable nfs-server.service

Disable SELinux and the firewall:

[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop   firewalld

From any node other than the NFS server, test that the export is visible:

[root@k8smaster emptydir]# showmount   -e   192.168.19.163
Export list for 192.168.19.163:
/volume *

Create the PV:

[root@k8smaster pv]# cat    pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /volume/mypv
    server: 192.168.19.163

kind: PersistentVolume    ## the resource type is PersistentVolume
capacity:
  storage: 1Gi            ## the PV's capacity
accessModes:
  - ReadWriteOnce         ## access mode: read-write, mountable by a single node
persistentVolumeReclaimPolicy: Recycle    ## the PV's reclaim policy
storageClassName: nfs     ## the PV's class; a PVC can request a PV of this class
path: /volume/mypv        ## must be created manually on the NFS server, or mounting fails

ReadWriteMany    ## read-write, mountable by multiple nodes
ReadOnlyMany     ## read-only, mountable by multiple nodes
Recycle          ## scrub the data in the PV
Retain           ## the data must be cleaned up manually
Delete           ## delete the corresponding storage resource on the storage provider
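
Putting two of these options together, a PV that multiple nodes can mount read-write and whose data survives PVC deletion might look like the following sketch (the name and export path are hypothetical; like /volume/mypv above, the export would have to be created manually on the NFS server):

```yaml
# Sketch: an NFS PV mountable read-write by multiple nodes; its data must
# be cleaned up manually after the PVC is deleted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv                         # hypothetical name
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany                       # read-write, mountable by multiple nodes
  persistentVolumeReclaimPolicy: Retain   # keep the data; reclaim manually
  storageClassName: nfs
  nfs:
    path: /volume/shared                  # hypothetical export path
    server: 192.168.19.163
```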

View the PV:

[root@k8smaster pv-pvs]# kubectl    get   pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Recycle          Available           nfs                     14s

Create the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

The PVC specifies the access mode, the requested capacity, and the storage class.

View the PVC:

[root@k8smaster pv-pvs]# kubectl    get   pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    mypv     1Gi        RWO            nfs            14s

View the PV again:

[root@k8smaster pv-pvs]# kubectl    get   pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Recycle          Bound    default/mypvc   nfs                     4m22s

The PVC is now bound to the PV. Before the PVC was created, the PV's status was Available.

Create a Pod that uses this storage:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - image: busybox
    name: test
    args:
    - /bin/sh
    - -c
    - sleep 300000000000
    volumeMounts:
    - mountPath: /aaaa
      name: myvolu
  volumes:
    - name: myvolu
      persistentVolumeClaim:
        claimName: mypvc

Check the Pod:

[root@k8smaster pv-pvs]# kubectl   get pod   
NAME    READY   STATUS    RESTARTS   AGE
mypod   1/1     Running   0          38s

Create a test file:

[root@k8smaster pv-pvs]# kubectl exec mypod -- touch /aaaa/test

Check the directory on the NFS server:

[root@localhost ~]# cd   /volume/mypv/
[root@localhost mypv]# ls
test

The file is now stored under /volume/mypv/ on the NFS server.

Reclaiming the PV (deleting the PVC):

First, delete the Pod:

[root@k8smaster pv-pvs]# kubectl    delete pod   mypod 
pod "mypod" deleted

Delete the PVC:

[root@k8smaster pv-pvs]# kubectl   delete   pvc  mypvc 
persistentvolumeclaim "mypvc" deleted

Check the PV's status:

[root@k8smaster pv-pvs]# kubectl   get     pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Recycle          Available           nfs                     16m

The status is Available again, and the data on the NFS server is gone.

The data was scrubbed because the PV's reclaim policy is Recycle. To keep the data, change the policy to Retain:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /volume/mypv
    server: 192.168.19.163

Apply the manifest.

Check the updated PV:

[root@k8smaster pv-pvs]# kubectl   get    pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv   1Gi        RWO            Retain           Available           nfs                     20m
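
One caveat with Retain: when a bound PVC is deleted, a Retain PV moves to the Released state rather than Available, and it will not be bound by a new PVC until its stale claim reference is cleared. A common manual fix is to patch the claimRef away:

```yaml
# Patch fragment: clearing the stale claimRef returns a Released PV to Available.
spec:
  claimRef: null
```

Applied, for example, with kubectl patch pv mypv -p '{"spec":{"claimRef":null}}'. The retained data on the NFS export is untouched and may still need manual cleanup.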

Using PV and PVC with MySQL:

Create the PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /volume/mysql-pv
    server: 192.168.19.163

Create the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

View the PV and PVC:

[root@k8smaster mysql]# kubectl    get  pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mysql-pv   1Gi        RWO            Retain           Available           nfs                     45s
[root@k8smaster mysql]# kubectl    get  pvc
NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pvc   Bound    mysql-pv   1Gi        RWO            nfs            5s

Create the MySQL Deployment and Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql  
        volumeMounts:
        - name: mysql-volume
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-volume
        persistentVolumeClaim:
          claimName: mysql-pvc

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  ports:
  - protocol: TCP
    port: 3306
    targetPort: 3306
  selector:
    app: mysql

Connect to MySQL:

[root@k8smaster mysql]# kubectl    run   -it   --rm  --image=mysql:5.7  --restart=Never   mysql-client  -- mysql -h mysql -ppassword

Inside the client, create a database and a table, and insert some rows:

mysql> create  database   aaa;
Query OK, 1 row affected (0.00 sec)

mysql> use  aaa;
Database changed
mysql> create  table  test(id  int,name varchar(20));
Query OK, 0 rows affected (0.03 sec)

mysql> insert   into   test values (1,"aa");
Query OK, 1 row affected (0.03 sec)

mysql> insert   into   test values (2,"bb");
Query OK, 1 row affected (0.00 sec)

mysql> insert   into   test values (3,"cc"),(4,"dd");

mysql> select   *  from   test;
+------+------+
| id   | name |
+------+------+
|    1 | aa   |
|    2 | bb   |
|    3 | cc   |
|    4 | dd   |
+------+------+

Check the Pod:

[root@k8smaster mysql]# kubectl   get   pod   -o  wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
mysql-7774bd7c76-5sk2l   1/1     Running   0          6m33s   10.244.1.90   k8snode2   <none>           <none>

The Pod is running on k8snode2.

Shut down node2 to simulate a failure.

After a while, the MySQL Pod is rescheduled onto node1:

[root@k8smaster mysql]# kubectl    get   pod   -o wide
NAME                     READY   STATUS        RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
mysql-7774bd7c76-5sk2l   1/1     Terminating   0          14m   10.244.1.90   k8snode2   <none>           <none>
mysql-7774bd7c76-6w49h   1/1     Running       0          73s   10.244.2.83   k8snode1   <none>           <none>

Log in to MySQL and verify the data:

[root@k8smaster mysql]# kubectl     run   -it   --rm  --image=mysql:5.7   --restart=Never  mysql-client  -- mysql   -h mysql -ppassword

mysql> show  databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| aaa                |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

mysql> use  aaa;
mysql> show   tables;
+---------------+
| Tables_in_aaa |
+---------------+
| test          |
+---------------+
1 row in set (0.00 sec)

mysql> select  *  from   test;
+------+------+
| id   | name |
+------+------+
|    1 | aa   |
|    2 | bb   |
|    3 | cc   |
|    4 | dd   |
+------+------+
4 rows in set (0.00 sec)

No data was lost.

