kubectl error on the master node while creating a Kubernetes cluster using kubespray

I am trying to create a Kubernetes cluster using kubespray, with a single master and 3 worker nodes. I cloned the kubespray GitHub repository and am running the Ansible playbook from my control node to form the cluster.

I am running the following command:

ansible-playbook \
  -i inventory/sample/hosts.ini \
  cluster.yml \
  --become \
  --ask-become-pass
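
For reference, a hosts.ini for one master and three workers looks roughly like this (the hostnames and IP addresses below are placeholders rather than my real values, and the group names are the ones kubespray's sample inventory used at the time):

# inventory/sample/hosts.ini (illustrative layout only)
[all]
master01  ansible_host=10.0.0.10 ip=10.0.0.10
worker01  ansible_host=10.0.0.11 ip=10.0.0.11
worker02  ansible_host=10.0.0.12 ip=10.0.0.12
worker03  ansible_host=10.0.0.13 ip=10.0.0.13

[kube-master]
master01

[etcd]
master01

[kube-node]
worker01
worker02
worker03

[k8s-cluster:children]
kube-master
kube-node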

When I run the command, two of the worker nodes get a final status of ok, but the master node shows failed with an error like the following:

fatal: [mildevkub020]: FAILED! => {
  "changed": false, 
  "msg": "error running kubectl (/usr/local/bin/kubectl apply 
  --force --filename=/etc/kubernetes/k8s-cluster-critical-pc.yml) 
  command (rc=1), out='', err='error: unable to recognize 
  \"/etc/kubernetes/k8s-cluster-critical-pc.yml\": Get 
  http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: 
  connect: connection refused\n'"
}

Modification

I removed my older kubespray repo and cloned a fresh one from the following link:

https://github.com/kubernetes-sigs/kubespray.git

I also updated my inventory, but I am still getting the same error. When I run the journalctl command to check the logs, I see the following:

Oct 15 09:56:17 mildevdcr01 kernel: NX (Execute Disable) protection: active
Oct 15 09:56:17 mildevdcr01 kernel: SMBIOS 2.4 present.
Oct 15 09:56:17 mildevdcr01 kernel: DMI: VMware, Inc. VMware Virtual 
Platform/440BX Desktop Reference Platform, BIOS 6.00 09/22/2009
Oct 15 09:56:17 mildevdcr01 kernel: Hypervisor detected: VMware
Oct 15 09:56:17 mildevdcr01 kernel: Kernel/User page tables isolation: disabled
Oct 15 09:56:17 mildevdcr01 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 15 09:56:17 mildevdcr01 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 15 09:56:17 mildevdcr01 kernel: AGP: No AGP bridge found
Oct 15 09:56:17 mildevdcr01 kernel: e820: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 15 09:56:17 mildevdcr01 kernel: MTRR default type: uncachable
Oct 15 09:56:17 mildevdcr01 kernel: MTRR fixed ranges enabled:
Oct 15 09:56:17 mildevdcr01 kernel:   00000-9FFFF write-back
Oct 15 09:56:17 mildevdcr01 kernel:   A0000-BFFFF uncachable
Oct 15 09:56:17 mildevdcr01 kernel:   C0000-CBFFF write-protect

Error:

fatal: [mildevkub020]: FAILED! => {"attempts": 10, "changed": false, "msg": "error running kubectl (/usr/local/bin/kubectl apply --force --filename=/etc/kubernetes/node-crb.yml) command (rc=1), out='', err='W1016 06:50:31.365172   22692 loader.go:223] Config not found: etc/kubernetes/admin.conf\nerror: unable to recognize \"/etc/kubernetes/node-crb.yml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\n'"}
Tags: kubernetes, ansible, kubespray
asked on Stack Overflow Oct 15, 2019 by Jacob • edited Jul 12, 2020 by David Medinets

1 Answer

Make sure that you have followed all the requirements before cluster installation, especially copying your ssh key to all the servers that are part of your inventory.
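
A minimal way to do that from the control node is shown below (the key path, user name and host names are just examples):

# copy your public key to every host listed in the inventory
ssh-copy-id -i ~/.ssh/private_key.pub <ssh-user>@<master-node>
ssh-copy-id -i ~/.ssh/private_key.pub <ssh-user>@<worker-node>   # repeat for each worker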

Reset the environment after a previous installation:

$ sudo ansible-playbook -i inventory/mycluster/hosts.yml reset.yml -b -v \
  --private-key=~/.ssh/private_key

Remember to change the cluster configuration file and personalize it. You can also change the network plugin; the default is Calico.
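
If you want to pick a different plugin, it is set in the group_vars of your inventory copy (the path below is the one used by the kubespray sample inventory at the time; it may differ between versions):

# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
kube_network_plugin: calico   # other options include flannel, weave or cilium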

Then run the ansible playbook again using this command:

$ sudo ansible-playbook -i inventory/sample/hosts.ini cluster.yml -b -v \
  --private-key=~/.ssh/private_key

Try copying the inventory/sample folder, renaming it, and then adjusting the k8s-cluster and hosts files, as in the sketch below.
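
A minimal sequence for that (paths as in the kubespray README):

cp -rfp inventory/sample inventory/mycluster
# then edit inventory/mycluster/hosts.ini and
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml to match your nodes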

Check the hosts file: remember not to modify the children of k8s-cluster, such as putting the etcd group into k8s-cluster, unless you are certain you want to do that.

k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd

An example inventory file can be found in the kubespray repository under inventory/sample.

If the problem still exists, run journalctl and check what the logs show, for example as sketched below.
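
For the error you are seeing, the kubelet unit and the presence of the admin kubeconfig on the master are usually the most telling (generic checks, not specific to kubespray):

# on the failing master node
systemctl status kubelet
journalctl -u kubelet --no-pager -n 100
ls -l /etc/kubernetes/admin.conf   # if this file is missing, kubeadm init never completed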

EDIT:

Since you have provided more information: from your logs it looks like you have to set the VM hardware version to the highest available in your VMware setup, and install all available updates on this system.

answered on Stack Overflow Oct 15, 2019 by Malgorzata • edited Oct 16, 2019 by Malgorzata
